997 results for Hilbert Series
Abstract:
This chapter applies rigorous statistical analysis to existing datasets of medieval exchange rates quoted in merchants’ letters sent from Barcelona, Bruges and Venice between 1380 and 1410, which survive in the archive of Francesco di Marco Datini of Prato. First, it tests the exchange rates for stationarity. Second, it uses regression analysis to examine the seasonality of exchange rates at the three financial centres and compares them against contemporary descriptions by the merchant Giovanni di Antonio da Uzzano. Third, it tests for structural breaks in the exchange rate series.
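A minimal sketch of the first two steps, assuming the quotations are held in a hypothetical pandas Series `rates` indexed by date (the chapter's actual data handling and model specification are not reproduced here): an augmented Dickey–Fuller test for stationarity, followed by an OLS regression of the differenced series on monthly dummies to estimate seasonality.

```python
# Sketch only: `rates` is a hypothetical pandas Series of exchange-rate
# quotations with a DatetimeIndex; the chapter's actual model may differ.
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

def seasonality_analysis(rates: pd.Series):
    # Step 1: test the level series for a unit root (null: non-stationary).
    adf_stat, p_value, *_ = adfuller(rates.dropna(), autolag="AIC")

    # Step 2: regress the differenced series on monthly dummies; the dummy
    # coefficients give the seasonal pattern at each financial centre.
    diffed = rates.diff().dropna()
    month = pd.Series(diffed.index.month, index=diffed.index)
    dummies = pd.get_dummies(month, prefix="m", drop_first=True).astype(float)
    ols_res = sm.OLS(diffed, sm.add_constant(dummies)).fit()
    return adf_stat, p_value, ols_res

# adf, p, res = seasonality_analysis(rates)
# print(res.summary())   # monthly coefficients = deviations from the base month
```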
Abstract:
This paper examines the cyclical regularities of macroeconomic, financial and property market aggregates in relation to the property stock price cycle in the UK. The Hodrick–Prescott filter is employed to fit a long-term trend to the raw data and to derive the short-term cycles of each series. It is found that the cycles of consumer expenditure, total consumption per capita, the dividend yield and the long-term bond yield are moderately correlated, and mainly coincident, with the property price cycle. There is also evidence that the nominal and real Treasury Bill rates and the interest rate spread lead this cycle by one or two quarters, and therefore that these series can be considered leading indicators of property stock prices. The study concludes that macroeconomic and financial variables provide useful information for explaining, and potentially forecasting, movements of property-backed stock returns in the UK.
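As a sketch of the decomposition step, with toy quarterly series standing in for the property stock price index and the Treasury Bill rate (not the paper's data), the Hodrick–Prescott filter in statsmodels separates trend from cycle, after which lead/lag correlations of the cycles can be inspected.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

# Toy stand-ins for the quarterly series used in the paper (not real data).
idx = pd.period_range("1990Q1", periods=120, freq="Q")
rng = np.random.default_rng(0)
prices = pd.Series(np.cumsum(rng.normal(size=120)) + 100, index=idx)
tbill = pd.Series(np.cumsum(rng.normal(scale=0.2, size=120)) + 5, index=idx)

# Hodrick-Prescott decomposition; lambda = 1600 is conventional for quarterly data.
price_cycle, price_trend = hpfilter(prices, lamb=1600)
tbill_cycle, _ = hpfilter(tbill, lamb=1600)

# Lead/lag correlations: a peak at negative k would mean the Treasury Bill
# cycle leads the property price cycle by |k| quarters.
lead_lag = {k: price_cycle.corr(tbill_cycle.shift(k)) for k in range(-4, 5)}
```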
Abstract:
Many key economic and financial series are bounded either by construction or through policy controls. Conventional unit root tests are potentially unreliable in the presence of bounds, since they tend to over-reject the null hypothesis of a unit root, even asymptotically. So far, very little work has been undertaken to develop unit root tests which can be applied to bounded time series. In this paper we address this gap in the literature by proposing unit root tests which are valid in the presence of bounds. We present new augmented Dickey–Fuller type tests as well as new versions of the modified ‘M’ tests developed by Ng and Perron [Ng, S., Perron, P., 2001. Lag length selection and the construction of unit root tests with good size and power. Econometrica 69, 1519–1554] and demonstrate how these tests, combined with a simulation-based method to retrieve the relevant critical values, make it possible to control size asymptotically. A Monte Carlo study suggests that the proposed tests perform well in finite samples. Moreover, the tests outperform the Phillips–Perron type tests originally proposed in Cavaliere [Cavaliere, G., 2005. Limited time series with a unit root. Econometric Theory 21, 907–945]. An illustrative application to U.S. interest rate data is provided.
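The role of simulation-based critical values can be illustrated with a rough sketch, which is not the authors' proposed test: simulate bounded random walks with the relevant bounds and innovation variance, and take an empirical quantile of the resulting ADF statistics as the critical value.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def simulated_adf_critical_value(lower, upper, n, sigma,
                                 reps=500, alpha=0.05, seed=0):
    """Rough sketch: empirical alpha-level critical value of the ADF statistic
    under a bounded random walk null (truncated at the bounds). Not the
    bounded unit root tests proposed in the paper."""
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(reps):
        x = np.empty(n)
        x[0] = 0.5 * (lower + upper)
        for t in range(1, n):
            x[t] = min(max(x[t - 1] + rng.normal(scale=sigma), lower), upper)
        stats.append(adfuller(x, maxlag=0, autolag=None)[0])
    return np.quantile(stats, alpha)

# crit = simulated_adf_critical_value(lower=0.0, upper=1.0, n=200, sigma=0.02)
# Reject the unit root only if the observed ADF statistic is below `crit`.
```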
Abstract:
Climate data are used in a number of applications including climate risk management and adaptation to climate change. However, the availability of climate data, particularly throughout rural Africa, is very limited. Available weather stations are unevenly distributed and mainly located along main roads in cities and towns. This imposes severe limitations on the availability of climate information and services for the rural community where, arguably, these services are needed most. Weather station data also suffer from gaps in the time series. Satellite proxies, particularly satellite rainfall estimates, have been used as alternatives because of their availability even over remote parts of the world. However, satellite rainfall estimates also suffer from a number of critical shortcomings, including heterogeneous time series, short periods of observation, and poor accuracy, particularly at higher temporal and spatial resolutions. An attempt is made here to alleviate these problems by combining station measurements with the complete spatial coverage of satellite rainfall estimates. Rain gauge observations are merged with a locally calibrated version of the TAMSAT satellite rainfall estimates to produce over 30 years (1983 to date) of rainfall estimates over Ethiopia at a spatial resolution of 10 km and a ten-daily time scale. This involves quality control of the rain gauge data, generating a locally calibrated version of the TAMSAT rainfall estimates, and combining these with rain gauge observations from the national station network. The infrared-only satellite rainfall estimates produced using the relatively simple TAMSAT algorithm performed as well as, or better than, other satellite rainfall products that use passive microwave inputs and more sophisticated algorithms. There is no substantial difference between the gridded-gauge and combined gauge-satellite products over the test area in Ethiopia, which has a dense station network; however, the combined product exhibits better quality over parts of the country where stations are sparsely distributed.
Abstract:
African societies are dependent on rainfall for agricultural and other water-dependent activities, yet rainfall is extremely variable in both space and time, and recurring water shocks, such as drought, can have considerable social and economic impacts. To help improve our knowledge of the rainfall climate, we have constructed a 30-year (1983–2012), temporally consistent rainfall dataset for Africa known as TARCAT (TAMSAT African Rainfall Climatology And Time-series) using archived Meteosat thermal infra-red (TIR) imagery, calibrated against rain gauge records collated from numerous African agencies. TARCAT has been produced at 10-day (dekad) scale at a spatial resolution of 0.0375°. An intercomparison of TARCAT from 1983 to 2010 with six long-term precipitation datasets indicates that TARCAT replicates the spatial and seasonal rainfall patterns and interannual variability well, with correlation coefficients for the interannual variability of the Africa-wide mean monthly rainfall of 0.85 with the Climatic Research Unit (CRU) and 0.70 with the Global Precipitation Climatology Centre (GPCC) gridded-gauge analyses. The design of the algorithm for drought monitoring leads to TARCAT underestimating the Africa-wide mean annual rainfall on average by 0.37 mm day⁻¹ (21%) compared to other datasets. As the TARCAT rainfall estimates are historically calibrated across large climatically homogeneous regions, the data can provide users with robust estimates of climate-related risk, even in regions where gauge records are inconsistent in time.
Abstract:
This paper provides an overview of interpolation of Banach and Hilbert spaces, with a focus on establishing when equivalence of norms is in fact equality of norms in the key results of the theory. (In brief, our conclusion for the Hilbert space case is that, with the right normalisations, all the key results hold with equality of norms.) In the final section we apply the Hilbert space results to the Sobolev spaces H^s(Ω) and H̃^s(Ω), for s in R and an open Ω in R^n. We exhibit examples in one and two dimensions of sets Ω for which these scales of Sobolev spaces are not interpolation scales. In the cases when they are interpolation scales (in particular, if Ω is Lipschitz) we exhibit examples that show that, in general, the interpolation norm does not coincide with the intrinsic Sobolev norm and, in fact, the ratio of these two norms can be arbitrarily large.
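For orientation, the real interpolation norms at issue are those of the K-method; in standard notation (leaving aside the normalisation questions that the paper resolves) they read:

```latex
% K-functional and real interpolation norm (standard definitions).
\[
  K(t,u;X_0,X_1) = \inf_{u = u_0 + u_1}\bigl(\|u_0\|_{X_0} + t\,\|u_1\|_{X_1}\bigr),
  \qquad t > 0,
\]
\[
  \|u\|_{(X_0,X_1)_{\theta,2}} =
  \Bigl(\int_0^{\infty}\bigl(t^{-\theta}K(t,u;X_0,X_1)\bigr)^{2}\,\frac{dt}{t}\Bigr)^{1/2},
  \qquad 0 < \theta < 1 .
\]
```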
Abstract:
Petasis and Ugi reactions are used successively without intermediate purification, effectively accomplishing a six-component reaction. The examined reactions are transferred from traditional batch reactors to an automated continuous flow microreactor setup, where optimization and kinetic analyses are performed, proposed mechanisms evaluated, and rate-limiting steps determined.
Abstract:
Possible future changes of clustering and return periods (RPs) of European storm series with high potential losses are quantified. Historical storm series are identified using 40 winters of reanalysis. Time series of top events (1, 2 or 5 year return levels (RLs)) are used to assess RPs of storm series both empirically and theoretically. Additionally, 800 winters of general circulation model simulations for present (1960–2000) and future (2060–2100) climate conditions are investigated. Clustering is identified for most countries, and estimated RPs are similar for reanalysis and present day simulations. Future changes of RPs are estimated for fixed RLs and fixed loss index thresholds. For the former, shorter RPs are found for Western Europe, but changes are small and spatially heterogeneous. For the latter, which combines the effects of clustering and event ranking shifts, shorter RPs are found everywhere except for Mediterranean countries. These changes are generally not statistically significant between recent and future climate. However, the RPs for the fixed loss index approach are mostly beyond the range of pre-industrial natural climate variability. This is not true for fixed RLs. The quantification of losses associated with storm series permits a more adequate windstorm risk assessment in a changing climate.
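A sketch of the empirical return-period estimate described above, using the plotting-position style definition RP = (N + 1)/m, where m is the number of the N winters containing at least k top events (the threshold and counts below are hypothetical):

```python
import numpy as np

def empirical_return_period(events_per_winter, k):
    """Empirical return period (in winters) of storm series with >= k events
    above a chosen return level. `events_per_winter` is a hypothetical array
    of per-winter counts of events exceeding, e.g., the 1-year RL."""
    counts = np.asarray(events_per_winter)
    m = np.sum(counts >= k)
    return np.inf if m == 0 else (counts.size + 1) / m

# Toy example: counts of 1-year-RL storms in 40 reanalysis winters.
toy_counts = np.random.default_rng(1).poisson(1.0, size=40)
print(empirical_return_period(toy_counts, k=3))
```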
Abstract:
Empirical Mode Decomposition is presented as an alternative to traditional analysis methods for decomposing geomagnetic time series into spectral components, and important comments on the algorithm and its variations are given. Using this technique, planetary wave modes with mean periods of 5, 10 and 16 days can be extracted from the magnetic field components of three different stations in Germany. In a second step, the amplitude modulation functions of these wave modes are shown to contain a significant contribution from solar cycle variation through correlation with smoothed sunspot numbers. Additionally, the data indicate connections with geomagnetic jerk occurrences, supported by a second data set providing a reconstruction of the near-Earth magnetic field over 150 years. Geomagnetic jerks are usually attributed to internal dynamo processes within the Earth's outer core; the question of which phenomenon drives which is briefly discussed here.
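A minimal sketch of the decomposition step, assuming the availability of the third-party PyEMD (EMD-signal) package and using a synthetic stand-in for a geomagnetic field component:

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD   # third-party "EMD-signal" package (assumed installed)

# Synthetic stand-in for a daily geomagnetic field component containing
# 5- and 16-day oscillations plus noise (not real observatory data).
t = np.arange(3000, dtype=float)
signal = (np.sin(2 * np.pi * t / 16) + 0.5 * np.sin(2 * np.pi * t / 5)
          + 0.1 * np.random.default_rng(0).normal(size=t.size))

imfs = EMD().emd(signal)                   # intrinsic mode functions
envelopes = np.abs(hilbert(imfs, axis=1))  # amplitude modulation of each IMF

# Each (smoothed) envelope could then be correlated with the smoothed
# sunspot-number series to look for a solar-cycle contribution.
```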
Abstract:
Industrial robotic manipulators can be found in most factories today. Their tasks are accomplished by actively moving, placing and assembling parts. This movement is facilitated by actuators that apply a torque in response to a command signal. The presence of friction, and possibly backlash, has instigated the development of sophisticated compensation and control methods to achieve the desired performance, be that accurate motion tracking, fast movement or contact with the environment. This thesis presents a dual-drive actuator design that is capable of physically linearising friction and hence eliminating the need for complex compensation algorithms. A number of mathematical models are derived that allow the actuator dynamics to be simulated. The actuator may be constructed using geared DC motors, in which case the benefit of torque magnification is retained while the increased non-linear friction effects are also linearised. An additional benefit of the actuator is the high-quality, low-latency output position signal provided by differencing the two drive positions. Because of this, and the linearised nature of friction, the actuator is well suited to low-velocity, stop-start applications, micro-manipulation and even hard-contact tasks. There are, however, disadvantages to the design. When idle, the device uses power while many other, single-drive actuators do not. The complexity of the models also means that parameterisation is difficult, and management of start-up conditions still poses a challenge.
Abstract:
A new series of non-stoichiometric sulfides Ga1−xGexV4S8−δ (0≤x≤1; δ≤0.23) has been synthesized at high temperatures by heating stoichiometric mixtures of the elements in sealed quartz tubes. The samples have been characterized by powder X-ray diffraction, SQUID magnetometry and electrical transport-property measurements. Structural analysis reveals that a solid solution is formed throughout this composition range, whilst thermogravimetric data reveal a sulfur deficiency of up to 2.9% in the quaternary phases. Magnetic measurements suggest that the ferromagnetic behavior of the end-member phase GaV4S8 is retained at x≤0.7, with samples in this composition range showing a marked increase in magnetization at low temperatures. By contrast, Ga0.25Ge0.75V4S8−δ appears to undergo antiferromagnetic ordering at ca. 15 K. All materials with x≠1 are n-type semiconductors whose resistivity falls by almost six orders of magnitude with decreasing Ga content, whilst the end-member phase GeV4S8−δ is a p-type semiconductor. The results demonstrate that the physical properties are determined principally by the degree of electron filling of narrow-band states arising from intracluster V–V interactions.
Abstract:
The Arctic is an important region in the study of climate change, but monitoring surface temperatures in this region is challenging, particularly in areas covered by sea ice. Here, in situ, satellite and reanalysis data were utilised to investigate whether global warming over recent decades could be better estimated by changing the way the Arctic is treated in calculating global mean temperature. The degree of difference arising from using five different techniques, based on existing temperature anomaly dataset techniques, to estimate Arctic surface air temperature (SAT) anomalies over land and sea ice was investigated using reanalysis data as a testbed. Techniques which interpolated anomalies were found to result in smaller errors than non-interpolating techniques, with kriging techniques providing the smallest errors in anomaly estimates. Similar accuracies were found for anomalies estimated from in situ meteorological station SAT records using a kriging technique. Whether additional data sources, which are not currently utilised in temperature anomaly datasets, would improve estimates of Arctic surface air temperature anomalies was investigated within the reanalysis testbed and using in situ data. For the reanalysis study, the additional input anomalies were reanalysis data sampled at the locations of certain supplementary data sources over Arctic land and sea ice areas. For the in situ data study, the additional input anomalies over sea ice were surface temperature anomalies derived from the Advanced Very High Resolution Radiometer satellite instruments. The use of additional data sources, particularly those located in the Arctic Ocean over sea ice or on islands in sparsely observed regions, can lead to substantial improvements in the accuracy of estimated anomalies. Decreases in root mean square error can be up to 0.2 K for Arctic-average anomalies and more than 1 K for spatially resolved anomalies. Further improvements in accuracy may be accomplished through the use of other data sources.
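A minimal sketch of kriging-style anomaly interpolation of the kind compared in the study, using a Gaussian process regressor as a generic stand-in for the specific kriging variants tested; the station locations and anomalies below are synthetic, and latitude/longitude are treated as planar coordinates for simplicity.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic station data: (lat, lon) in degrees and SAT anomalies in K.
rng = np.random.default_rng(0)
stations = np.column_stack([rng.uniform(70, 85, 50), rng.uniform(-180, 180, 50)])
anomalies = rng.normal(0.5, 1.0, 50)

# Kriging-like interpolation: RBF covariance plus a nugget for observation noise.
gp = GaussianProcessRegressor(
    kernel=1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=0.1),
    normalize_y=True)
gp.fit(stations, anomalies)

# Predict anomalies (and their uncertainty) on a coarse Arctic grid.
glat, glon = np.meshgrid(np.arange(70, 90, 2.0), np.arange(-180, 180, 10.0))
grid = np.column_stack([glat.ravel(), glon.ravel()])
pred, std = gp.predict(grid, return_std=True)
```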
Abstract:
A unique series of oligomeric ellagitannins was used to study their interactions with bovine serum albumin (BSA) by isothermal titration calorimetry. The oligomeric ellagitannins, ranging from monomer to heptamer plus a mixture of octamer–undecamers, were isolated as individual pure compounds. This series made it possible to study the effects of oligomer size and other structural features. The monomeric to trimeric ellagitannins deviated most from the overall trends, whereas the interactions of the tetrameric to octa–undecameric oligomers with BSA showed strong similarities. In contrast to the equilibrium binding constant, the enthalpy showed an increasing trend from the dimer to the larger oligomers. It is likely that the macrocyclic part of the ellagitannin first binds to defined binding sites on the protein surface and that the “flexible tail” of the ellagitannin then coats the protein surface. The results highlight the importance of molecular flexibility in maximizing binding between the ellagitannin and protein surfaces.
Abstract:
Although the sunspot-number series has existed since the mid-19th century, it is still the subject of intense debate, with the largest uncertainty related to the "calibration" of the visual acuity of individual observers in the past. Daisy-chain regression methods applied to inter-calibrate the observers may lead to significant bias and error accumulation. Here we present a novel method to calibrate the visual acuity of the key observers to the reference data set of Royal Greenwich Observatory sunspot groups for the period 1900-1976, using the statistics of the active-day fraction. For each observer we independently evaluate an observational threshold [S_S], defined such that the observer is assumed to miss all groups with an area smaller than S_S and to report all groups larger than S_S. Next, using a Monte Carlo method, we construct from the reference data set a correction matrix for each observer. The correction matrices are significantly non-linear and cannot be approximated by a linear regression or proportionality; we emphasize that corrections based on a linear proportionality between annually averaged data lead to serious biases and distortions of the data. The correction matrices are applied to the original sunspot group records for each day, and finally the composite corrected series is produced for the period since 1748. The corrected series displays secular minima around 1800 (Dalton minimum) and 1900 (Gleissberg minimum), as well as the Modern grand maximum of activity in the second half of the 20th century. The uniqueness of the grand maximum is confirmed for the last 250 years. It is shown that the adoption of a linear relationship between the data of Wolf and Wolfer results in grossly inflated group numbers in the 18th and 19th centuries in some reconstructions.
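A rough sketch of how a per-observer correction matrix might be applied day by day; the matrices themselves come from the paper's Monte Carlo construction, which is not reproduced here, and the inputs below are hypothetical stand-ins.

```python
import numpy as np

def correct_daily_counts(observed_counts, correction_matrix, rng=None):
    """For each daily observed group count g, draw a corrected count from the
    row correction_matrix[g, :], read as P(true count | observed g). Both
    arguments are hypothetical stand-ins for the paper's actual quantities."""
    rng = rng or np.random.default_rng(0)
    observed = np.asarray(observed_counts)
    n_true = correction_matrix.shape[1]
    corrected = np.empty_like(observed)
    for i, g in enumerate(observed):
        corrected[i] = rng.choice(n_true, p=correction_matrix[g])
    return corrected

# Averaging many such draws per observer, then combining observers, would
# yield a composite corrected series in the spirit of the method described.
```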