935 results for Irregularly spaced returns
Abstract:
Multi-factor models constitute a useful tool for explaining cross-sectional covariance in equity returns. We propose in this paper the use of irregularly spaced returns in multi-factor model estimation and provide an empirical example with the 389 most liquid equities in the Brazilian market. The market index proves significant in explaining equity returns, while the US$/Brazilian Real exchange rate and the Brazilian standard interest rate do not. This example shows the usefulness of the estimation method for subsequently using the model to fill in missing values and to provide interval forecasts.
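The estimation idea can be illustrated with a minimal sketch: when returns are observed over irregular gaps, the multi-period return is regressed on the factor returns aggregated over the same gaps. All data below are synthetic and the two factors, dates, and coefficients are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: factor returns observed daily, asset returns observed
# only on the (irregular) days the asset actually traded.
n_days = 500
factors = rng.normal(size=(n_days, 2))          # e.g. market index, FX rate
true_beta = np.array([1.2, -0.3])
obs_days = np.sort(rng.choice(n_days, size=200, replace=False))  # irregular

# The return over a gap (s, t] aggregates the daily factor exposures, so we
# regress the multi-period return on the summed factor returns over the gap.
y, X = [], []
for s, t in zip(obs_days[:-1], obs_days[1:]):
    f_sum = factors[s + 1 : t + 1].sum(axis=0)
    y.append(f_sum @ true_beta + rng.normal(scale=0.1))
    X.append(f_sum)
y, X = np.array(y), np.array(X)

# Ordinary least squares on the aggregated factor returns.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)  # close to [1.2, -0.3]
```

Aggregating the factors over each gap keeps the regression well specified even though the observation times are unequally spaced.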
Abstract:
In this paper, we present approximate distributions for the ratio of the cumulative wavelet periodograms considering stationary and non-stationary time series generated from independent Gaussian processes. We also adapt an existing procedure to use this statistic and its approximate distribution in order to test if two regularly or irregularly spaced time series are realizations of the same generating process. Simulation studies show good size and power properties for the test statistic. An application with financial microdata illustrates the test's usefulness. We conclude advocating the use of these approximate distributions instead of the ones obtained through randomizations, mainly in the case of irregular time series. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
Digital terrain models (DTM) typically contain large numbers of postings, from hundreds of thousands to billions. Many algorithms that run on DTMs require topological knowledge of the postings, such as finding nearest neighbors, finding the posting closest to a chosen location, etc. If the postings are arranged irregularly, topological information is costly to compute and to store. This paper offers a practical approach to organizing and searching irregularly-spaced data sets by presenting a collection of efficient algorithms (O(N), O(lg N)) that compute important topological relationships with only a simple supporting data structure. These relationships include finding the postings within a window, locating the posting nearest a point of interest, finding the neighborhood of postings nearest a point of interest, and ordering the neighborhood counter-clockwise. These algorithms depend only on two sorted arrays of two-element tuples, each holding a planimetric coordinate and an integer identification number indicating which posting the coordinate belongs to. There is one array for each planimetric coordinate (eastings and northings). These two arrays cost minimal overhead to create and store but permit the data to remain arranged irregularly.
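The two-array scheme can be sketched in a few lines. The postings, the function names, and the query below are illustrative, not taken from the paper: each sorted array holds (coordinate, id) tuples, and a window query intersects two binary-searched ranges.

```python
from bisect import bisect_left, bisect_right

# Hypothetical postings: (easting, northing) pairs indexed by posting id.
postings = [(2.0, 7.0), (5.0, 1.0), (3.5, 4.0), (9.0, 6.0), (1.0, 3.0)]

# One sorted array per planimetric coordinate: (coordinate, posting id).
east = sorted((x, i) for i, (x, _) in enumerate(postings))
north = sorted((y, i) for i, (_, y) in enumerate(postings))

def ids_in_range(arr, lo, hi):
    """Posting ids whose coordinate lies in [lo, hi], via binary search."""
    a = bisect_left(arr, (lo, -1))
    b = bisect_right(arr, (hi, len(arr)))
    return {pid for _, pid in arr[a:b]}

def window_query(x0, x1, y0, y1):
    """Postings inside the axis-aligned window: intersect the two ranges."""
    return sorted(ids_in_range(east, x0, x1) & ids_in_range(north, y0, y1))

print(window_query(1.5, 6.0, 2.0, 5.0))  # -> [2]
```

Each range lookup is O(lg N), and the data themselves stay irregularly arranged; only the two index arrays are sorted.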
Abstract:
This paper develops a framework to test whether discrete-valued irregularly-spaced financial transactions data follow a subordinated Markov process. For that purpose, we consider a specific optional sampling in which a continuous-time Markov process is observed only when it crosses some discrete level. This framework is convenient in that it accommodates not only the irregular spacing of transactions data but also price discreteness. Further, it turns out that, under such an observation rule, the current price duration is independent of previous price durations given the current price realization. A simple nonparametric test then follows by examining whether this conditional independence property holds. Finally, we investigate whether or not bid-ask spreads follow Markov processes using transactions data from the New York Stock Exchange. The motivation lies in the fact that asymmetric information models of market microstructure predict that the Markov property does not hold for the bid-ask spread. The results are mixed in the sense that the Markov assumption is rejected for three out of the five stocks we have analyzed.
Abstract:
High-resolution and highly precise age models for recent lake sediments (last 100–150 years) are essential for quantitative paleoclimate research. These are particularly important for sedimentological and geochemical proxies, where transfer functions cannot be established and calibration must be based upon the relation of sedimentary records to instrumental data. High-precision dating for the calibration period is most critical, as it directly determines the quality of the calibration statistics. Here, as an example, we compare radionuclide age models obtained on two high-elevation glacial lakes in the Central Chilean Andes (Laguna Negra: 33°38′S/70°08′W, 2,680 m a.s.l. and Laguna El Ocho: 34°02′S/70°19′W, 3,250 m a.s.l.). We show the different numerical models that produce accurate age-depth chronologies based on 210Pb profiles, and we explain how to obtain reduced age-error bars at the bottom part of the profiles, i.e., typically around the end of the 19th century. In order to constrain the age models, we propose a method with the following steps: (i) sampling at irregularly-spaced intervals for 226Ra, 210Pb and 137Cs depending on the stratigraphy and microfacies; (ii) a systematic comparison of numerical models for the calculation of 210Pb-based age models: constant flux constant sedimentation (CFCS), constant initial concentration (CIC), constant rate of supply (CRS) and sediment isotope tomography (SIT); (iii) numerical constraining of the CRS and SIT models with the 137Cs chronomarker of AD 1964; and (iv) step-wise cross-validation with independent diagnostic environmental stratigraphic markers of known age (e.g., volcanic ash layers, historical floods and earthquakes). In both examples, we also use airborne pollutants such as spheroidal carbonaceous particles (reflecting the history of fossil fuel emissions), excess atmospheric Cu deposition (reflecting the production history of a large local Cu mine), and turbidites related to historical earthquakes.
Our results show that the SIT model constrained with the 137Cs AD 1964 peak performs best over the entire chronological profile (last 100–150 years) and yields the smallest standard deviations for the sediment ages. Such precision is critical for the calibration statistics and, ultimately, for the quality of the quantitative paleoclimate reconstruction. The systematic comparison of CRS and SIT models also helps to validate the robustness of the chronologies in different sections of the profile. Although surprisingly poorly known and under-explored in paleolimnological research, the SIT model has great potential for paleoclimatological reconstructions based on lake sediments.
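Of the numerical models compared above, the CRS (constant rate of supply) model has a closed form: the age at depth z is t(z) = (1/λ) ln(A(0)/A(z)), where A(z) is the unsupported 210Pb inventory below depth z and λ is the 210Pb decay constant. A minimal sketch with invented inventory values:

```python
import numpy as np

LAMBDA = np.log(2) / 22.3  # 210Pb decay constant (half-life 22.3 yr)

# Hypothetical core: unsupported 210Pb inventory per slice (Bq/m^2),
# e.g. measured at irregularly-spaced sampling depths down the core.
slice_inventory = np.array([120.0, 80.0, 50.0, 30.0, 15.0, 5.0])

total = slice_inventory.sum()
# A(z): inventory remaining below the bottom of each slice.
below = total - np.cumsum(slice_inventory)

# CRS age at the bottom of each slice: t = (1/lambda) * ln(A(0) / A(z)).
# The deepest slice is excluded because A(z) -> 0 there.
ages = np.log(total / below[:-1]) / LAMBDA
print(np.round(ages, 1))  # monotonically increasing ages in years
```

Because the age depends on the logarithm of a shrinking inventory, the error bars grow toward the bottom of the profile, which is why the abstract emphasizes constraining the models with the 137Cs AD 1964 chronomarker.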
Abstract:
Fossil pollen data from stratigraphic cores are irregularly spaced in time due to non-linear age-depth relations. Moreover, their marginal distributions may vary over time. We address these features in a nonparametric regression model with errors that are monotone transformations of a latent continuous-time Gaussian process Z(T). Although Z(T) is unobserved, due to monotonicity and under suitable regularity conditions, it can be recovered, facilitating further computations such as estimation of the long-memory parameter and the Hermite coefficients. The estimation of Z(T) itself involves estimation of the marginal distribution function of the regression errors. These issues are considered in proposing a plug-in algorithm for optimal bandwidth selection and construction of confidence bands for the trend function. Some high-resolution time series of pollen records from Lago di Origlio in Switzerland, which go back ca. 20,000 years, are used to illustrate the methods.
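The recovery of the latent process rests on monotonicity: a monotone transformation preserves ranks, so a normal-score transform of the residuals, using the estimated marginal distribution, recovers the latent Gaussian values. The following toy sketch is a heavy simplification of the paper's method, with synthetic data and an arbitrary exponential transform standing in for the unknown one:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)

# Hypothetical illustration: residuals e_i are a monotone transform of a
# latent Gaussian variable (unknown in practice).
z = rng.normal(size=400)   # latent Gaussian values
e = np.exp(z)              # monotone transformation -> observed errors

# Empirical marginal CDF of the residuals via ranks, then the normal-score
# transform z_hat = Phi^{-1}(F_hat(e)) recovers the latent Gaussian values.
ranks = e.argsort().argsort() + 1
f_hat = ranks / (len(e) + 1)
z_hat = np.array([NormalDist().inv_cdf(p) for p in f_hat])

corr = np.corrcoef(z, z_hat)[0, 1]
print(round(corr, 3))  # close to 1: the latent values are recovered
```

Since only the ranks of the residuals enter the transform, the recovery does not require knowing the transformation itself, which is the key point exploited in the paper.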
Abstract:
Jakobshavns Isbrae (69°10'N, 49°5'W) drains about 6.5% of the Greenland ice sheet and is the fastest ice stream known. The Jakobshavns Isbrae basin of about 10,000 km² was mapped photogrammetrically from four sets of aerial photography, two taken in July 1985 and two in July 1986. Positions and elevations of several hundred natural features on the ice surface were determined for each epoch by photogrammetric block-aerial triangulation, and surface velocity vectors were computed from the positions. The two flights in 1985 yielded the best results, provided the most common points (716) for velocity determinations, and are therefore used in the modeling studies. The data from these irregularly spaced points were used to calculate ice elevations and velocity vectors at uniformly spaced grid points 3 km apart by interpolation. The field of surface strain rates was then calculated from these gridded data and used to compute the field of surface deviatoric stresses, using the flow law of ice, for rectilinear coordinates, X, Y pointing eastward and northward, and curvilinear coordinates, L, T pointing longitudinally and transversely to the changing ice-flow direction. Ice-surface elevations and slopes were then used to calculate ice thicknesses and the fraction of the ice velocity due to basal sliding. Our calculated ice thicknesses are in fair agreement with an ice-thickness map based on seismic sounding and supplied to us by K. Echelmeyer. Ice thicknesses were subtracted from measured ice-surface elevations to map bed topography. Our calculation shows that basal sliding is significant only in the 10-15 km before Jakobshavns Isbrae becomes afloat in Jakobshavns Isfjord.
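The strain-rate step above amounts to finite differences of the gridded velocity field. A minimal sketch on an invented linear velocity field (the grid spacing matches the 3 km interpolation grid; the field itself is hypothetical):

```python
import numpy as np

# Hypothetical gridded surface velocities u (eastward) and v (northward)
# on a uniform 3 km grid, as produced by the interpolation step.
dx = 3000.0  # grid spacing, m
x = np.arange(0, 30000, dx)
y = np.arange(0, 21000, dx)
X, Y = np.meshgrid(x, y)
u = 1e-4 * X + 2e-5 * Y   # toy linear velocity field (m/yr)
v = -3e-5 * X

# Surface strain-rate components from centred finite differences.
# np.gradient returns derivatives along axis 0 (y, rows) then axis 1 (x).
du_dy, du_dx = np.gradient(u, dx)
dv_dy, dv_dx = np.gradient(v, dx)
exx = du_dx                   # normal strain rate along X
eyy = dv_dy                   # normal strain rate along Y
exy = 0.5 * (du_dy + dv_dx)   # shear strain rate

print(exx[0, 0], eyy[0, 0], exy[0, 0])
```

For a linear field the finite differences are exact, which makes this a convenient check before applying the same stencil to real gridded velocities.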
Abstract:
Calcareous nannofossils were studied by light microscopy in Neogene sedimentary rocks recovered at four sites of Ocean Drilling Program Leg 127 in the Japan Sea. Nannofossils occur sporadically at all sites and allow recognition of seven zones and two subzones: four zones in the Holocene to the uppermost Pliocene, and three zones and two subzones in the middle to lower Miocene. Forty-eight nannofossil species are recognized in 95 of the 808 irregularly-spaced samples taken from all the sites. The nannofossil assemblages in the Miocene are more diverse than those in the Holocene to Pliocene sedimentary interval. The greater diversity and the presence of warm-water taxa, such as Sphenolithus and discoasters, in the upper lower Miocene to lower middle Miocene suggest a relatively warm and stable surface-water condition, attributed to an increased supply of warm water from the subtropical western Pacific Ocean. Site 797 in the southern part of the Yamato Basin contains the most complete and the oldest nannofossil record so far reported from the Japan Sea. The lowermost nannofossil zone at this site, the Helicosphaera ampliaperta Zone (15.7-18.4 Ma), gives a minimum age for the Yamato Basin. This age range predates the rotation of southwest Japan, an event previously believed to be caused by the opening of the Japan Sea.
Abstract:
An application of the Finite Element Method (FEM) to the solution of a geometric problem is shown. The problem is related to curve fitting, i.e., passing a curve through a set of given points even if they are irregularly spaced. Situations involving curves with cusps can be encountered in practice, and therefore smooth interpolating curves may be unsuitable. In this paper the possibilities of the FEM for dealing with this type of problem are shown. A particular example of application to road planning is discussed. In this case the functional to be minimized should express the unpleasant effects on the road traveller. Some comparative numerical examples are also given.
Abstract:
Yeast telomere DNA consists of a continuous, ≈330-bp tract of the heterogeneous repeat TG1-3 with irregularly spaced, high affinity sites for the protein Rap1p. Yeast monitor, or count, the number of telomeric Rap1p C termini in a negative feedback mechanism to modulate the length of the terminal TG1-3 repeats, and synthetic telomeres that tether Rap1p molecules adjacent to the TG1-3 tract cause wild-type cells to maintain a shorter TG1-3 tract. To identify trans-acting proteins required to count Rap1p molecules, these same synthetic telomeres were placed in two short telomere mutants: yku70Δ (which lack the yeast Ku70 protein) and tel1Δ (which lack the yeast ortholog of ATM). Although both mutants maintain telomeres with ≈100 bp of TG1-3, only yku70Δ cells maintained shorter TG1-3 repeats in response to internal Rap1p molecules. This distinct response to internal Rap1p molecules was not caused by a variation in Rap1p site density in the TG1-3 repeats as sequencing of tel1Δ and yku70Δ telomeres showed that both strains have only five to six Rap1p sites per 100-bp telomere. In addition, the tel1Δ short telomere phenotype was epistatic to the unregulated telomere length caused by deletion of the Rap1p C-terminal domain. Thus, the length of the TG1-3 repeats in tel1Δ cells was independent of the number of the Rap1p C termini at the telomere. These data indicate that tel1Δ cells use an alternative mechanism to regulate telomere length that is distinct from monitoring the number of telomere binding proteins.
Abstract:
Measurements of the sea surface obtained by satellite-borne radar altimetry are irregularly spaced and contaminated with various modelling and correction errors. The largest source of uncertainty for low Earth orbiting satellites such as ERS-1 and Geosat may be attributed to orbital modelling errors. The empirical correction of such errors is investigated by examination of single- and dual-satellite crossovers, with a view to identifying the extent of any signal aliasing: either removal of long-wavelength ocean signals or introduction of additional error signals. From these studies, it was concluded that sinusoidal approximation of the dominant one cycle per revolution orbit error over arc lengths of 11,500 km did not remove a significant mesoscale ocean signal. The use of TOPEX/Poseidon dual crossovers with ERS-1 was shown to substantially improve the radial accuracy of ERS-1, except for some absorption of small TOPEX/Poseidon errors. The extraction of marine geoid information is of great interest to the oceanographic community and was the subject of the second half of this thesis. First, through determination of regional mean sea surfaces using Geosat data, it was demonstrated that a dataset with 70 cm orbit error contamination could produce a marine geoid map which compares to better than 12 cm with an accurate regional high-resolution gravimetric geoid. This study was then developed into Optimal Fourier Transform Interpolation, a technique capable of analysing complete altimeter datasets for the determination of consistent global high-resolution geoid maps. This method exploits the regular nature of ascending and descending data subsets, thus making possible the application of fast Fourier transform algorithms. Quantitative assessment of this method was limited by the lack of global ground-truth gravity data, but qualitative results indicate good signal recovery from a single 35-day cycle.
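The sinusoidal one-cycle-per-revolution correction reduces to linear least squares in sin/cos terms, e(t) = a sin(ωt) + b cos(ωt) + c. A sketch on synthetic crossover residuals; the period, coefficients, and noise level are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical arc: height residuals dominated by a one cycle per
# revolution orbit error  e(t) = a*sin(w t) + b*cos(w t) + c.
period = 6030.0                      # assumed orbital period, s
w = 2 * np.pi / period
t = np.sort(rng.uniform(0, period, 80))     # irregularly spaced times
a_true, b_true, c_true = 0.40, -0.25, 0.10  # metres (invented)
h = a_true * np.sin(w * t) + b_true * np.cos(w * t) + c_true \
    + rng.normal(scale=0.03, size=t.size)   # short-wavelength residual

# Linear least squares for (a, b, c); subtracting the fitted sinusoid
# removes the 1 cyc/rev error while leaving shorter-wavelength signal.
A = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
coef, *_ = np.linalg.lstsq(A, h, rcond=None)
corrected = h - A @ coef
print(np.round(coef, 2))  # close to [0.4, -0.25, 0.1]
```

Because the error model is linear in its unknowns, no iterative fitting is needed, and the long-wavelength sinusoid is well separated from mesoscale ocean signal over an 11,500 km arc.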
Abstract:
Recent advances in airborne Light Detection and Ranging (LIDAR) technology allow rapid and inexpensive measurements of topography over large areas. Airborne LIDAR systems usually return a 3-dimensional cloud of point measurements from reflective objects scanned by the laser beneath the flight path. This technology is becoming a primary method for extracting information about different kinds of geometrical objects, such as high-resolution digital terrain models (DTMs), buildings, and trees. In the past decade, LIDAR has attracted increasing interest from researchers in the fields of remote sensing and GIS. Compared to traditional data sources, such as aerial photography and satellite images, LIDAR measurements are not influenced by sun shadow and relief displacement. However, voluminous data pose a new challenge for automated extraction of geometrical information, because many raster image processing techniques cannot be directly applied to irregularly spaced LIDAR points.
In this dissertation, a framework is proposed to automatically extract different kinds of geometrical objects, such as terrain and buildings, from LIDAR measurements. These are essential to numerous applications such as flood modeling, landslide prediction, and hurricane animation. The framework consists of several intuitive algorithms. First, a progressive morphological filter was developed to detect non-ground LIDAR measurements. By gradually increasing the window size and elevation difference threshold of the filter, the measurements of vehicles, vegetation, and buildings are removed, while ground data are preserved. Then, building measurements are identified from non-ground measurements using a region-growing algorithm based on the plane-fitting technique.
Raw footprints for segmented building measurements are derived by connecting boundary points and are further simplified and adjusted by several proposed operations to remove noise caused by irregularly spaced LIDAR measurements. To reconstruct 3D building models, the raw 2D topology of each building is first extracted and then further adjusted. Since the adjusting operations for simple building models do not work well on 2D topology, a 2D snake algorithm is proposed to adjust the 2D topology. The 2D snake algorithm consists of newly defined energy functions for topology adjusting and a linear algorithm to find the minimal energy value of 2D snake problems. Data sets from urbanized areas including large institutional, commercial, and small residential buildings were employed to test the proposed framework. The results demonstrate that the proposed framework achieves very good performance.
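The progressive morphological filter can be illustrated in one dimension: repeatedly apply a morphological opening (erosion then dilation) with a growing window, and flag points that rise more than a growing elevation threshold above the opened surface. This is a simplified sketch, not the dissertation's implementation; the profile, window sizes, and thresholds are invented.

```python
import numpy as np

def opening(z, w):
    """Morphological opening with window half-size w: erosion, then dilation."""
    n = len(z)
    ero = np.array([z[max(0, i - w): i + w + 1].min() for i in range(n)])
    dil = np.array([ero[max(0, i - w): i + w + 1].max() for i in range(n)])
    return dil

def progressive_filter(z, windows=(1, 2, 4, 8), thresholds=(0.5, 1.0, 2.0, 4.0)):
    """Flag non-ground points with a growing window and elevation threshold."""
    nonground = np.zeros(len(z), dtype=bool)
    surface = z.copy()
    for w, dh in zip(windows, thresholds):
        opened = opening(surface, w)
        nonground |= (surface - opened) > dh   # protrudes above opened surface
        surface = np.minimum(surface, opened + dh)
    return nonground

# Toy profile: gently sloping ground with a 6 m "building" in the middle.
x = np.arange(40, dtype=float)
z = 0.05 * x
z[15:20] += 6.0
flags = progressive_filter(z)
print(np.flatnonzero(flags))  # -> [15 16 17 18 19]
```

Small windows remove narrow objects such as vehicles without clipping slopes; only once the window outgrows the building does it get flagged, while the growing threshold keeps sloping ground from being misclassified.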
Abstract:
In acquired immunodeficiency syndrome (AIDS) studies it is quite common to observe viral load measurements collected irregularly over time. Moreover, these measurements can be subject to some upper and/or lower detection limits, depending on the quantification assays. A complication arises when these continuous repeated measures have a heavy-tailed behavior. For such data structures, we propose a robust structure for a censored linear model based on the multivariate Student's t-distribution. To compensate for the autocorrelation existing among irregularly observed measures, a damped exponential correlation structure is employed. An efficient expectation-maximization type algorithm is developed for computing the maximum likelihood estimates, obtaining as a by-product the standard errors of the fixed effects and the log-likelihood function. The proposed algorithm uses closed-form expressions at the E-step that rely on formulas for the mean and variance of a truncated multivariate Student's t-distribution. The methodology is illustrated through an application to a Human Immunodeficiency Virus-AIDS (HIV-AIDS) study and several simulation studies.
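The damped exponential correlation (DEC) structure handles irregular measurement times through a simple closed form, corr(e_i, e_j) = phi1^(|t_i - t_j|^phi2): phi2 = 1 gives continuous-time AR(1) decay and phi2 = 0 gives compound symmetry. A minimal sketch with invented visit times and parameters:

```python
import numpy as np

def dec_matrix(times, phi1, phi2):
    """Damped exponential correlation: phi1 ** (|t_i - t_j| ** phi2)."""
    lag = np.abs(np.subtract.outer(times, times))
    return phi1 ** (lag ** phi2)

# Hypothetical irregular visit times for one subject (in months).
t = np.array([0.0, 0.5, 1.5, 4.0])
R = dec_matrix(t, phi1=0.8, phi2=0.5)
print(np.round(R, 3))  # unit diagonal; correlation decays with the lag
```

The matrix depends only on pairwise time lags, so unequal spacing between viral load measurements requires no special handling.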