977 results for SERIES MODELS
Abstract:
The Lomb periodogram is traditionally a tool for determining whether a frequency is important for explaining the behaviour of a given time series. Many linear and nonlinear iterative harmonic processes used to study the spectral content of a time series rely on this periodogram to avoid including spurious frequencies in their models due to the leakage of energy from one frequency to others. However, estimating the periodogram requires long computation times that slow the harmonic analysis for certain time series. Here we propose an algorithm that accelerates the extraction of the most remarkable frequencies from the periodogram, avoiding its full estimation at each iteration of the harmonic process. This algorithm allows the user to perform a specific analysis of a given scalar time series. As a result, we obtain a functional model made of (1) a trend component, (2) a linear combination of Fourier terms, and (3) the so-called mixed secular terms, while reducing the computation time needed to estimate the periodogram.
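To illustrate the kind of computation being accelerated, the following is a minimal pure-Python sketch of the classical Lomb normalised periodogram and dominant-frequency extraction. This is the direct, full evaluation (exactly what the abstract's algorithm avoids recomputing at every iteration), not the authors' accelerated method; all names and the frequency grid are illustrative.

```python
import math

def lomb_periodogram(t, y, freqs):
    """Classical Lomb normalised periodogram for unevenly sampled data.
    Direct O(N * len(freqs)) evaluation over a candidate frequency grid."""
    n = len(y)
    ybar = sum(y) / n
    var = sum((v - ybar) ** 2 for v in y) / (n - 1)
    powers = []
    for f in freqs:
        w = 2.0 * math.pi * f
        # phase offset tau makes the estimate invariant to time shifts
        tau = math.atan2(sum(math.sin(2 * w * ti) for ti in t),
                         sum(math.cos(2 * w * ti) for ti in t)) / (2 * w)
        c = [math.cos(w * (ti - tau)) for ti in t]
        s = [math.sin(w * (ti - tau)) for ti in t]
        sc = sum((yv - ybar) * cv for yv, cv in zip(y, c))
        ss = sum((yv - ybar) * sv for yv, sv in zip(y, s))
        powers.append((sc * sc / sum(cv * cv for cv in c) +
                       ss * ss / sum(sv * sv for sv in s)) / (2.0 * var))
    return powers

def dominant_frequency(t, y, freqs):
    """Return the candidate frequency with the largest periodogram power."""
    powers = lomb_periodogram(t, y, freqs)
    return max(zip(powers, freqs))[1]
```

An iterative harmonic fit would call `dominant_frequency`, subtract the fitted sinusoid, and repeat; the cost of the repeated full periodogram evaluation is what motivates the proposed acceleration.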
Abstract:
In this study, we use a novel approach to segment the ventricular system in a series of high-resolution T1-weighted MR images. We present a fast method for reconstructing the brain ventricles. The method is based on processing brain sections and placing a fixed number of landmarks on those sections to reconstruct the 3D surface of the ventricles. Automated landmark extraction is accomplished with a self-organising network, the growing neural gas (GNG), which topographically maps the low dimensionality of the network onto the high dimensionality of the contour manifold without requiring a priori knowledge of the input space structure. Moreover, our GNG landmark method is tolerant to noise and eliminates outliers. Our method accelerates the classical surface reconstruction and filtering processes and offers higher accuracy than methods of similar efficiency, such as Voxel Grid.
Abstract:
Two predictive models are developed in this article: the first is designed to predict people's attitudes to alcoholic drinks, while the second sets out to predict alcohol use in relation to selected individual values. University students (N = 1,500) were recruited through stratified sampling based on sex and academic discipline. The questionnaire gathered information on participants' alcohol use, attitudes and personal values. The results show that the attitudes model correctly classifies 76.3% of cases; likewise, the model for level of alcohol use correctly classifies 82% of cases. From these results we conclude that a series of individual values influence drinking and attitudes to alcohol use, which provides a potentially powerful instrument for developing preventive intervention programs.
Abstract:
Moderate-resolution remote sensing data, as provided by MODIS, can be used to detect and map active or past wildfires from daily records of suitable combinations of reflectance bands. The objective of the present work was to develop and test simple algorithms, and variations of them, for the automatic or semi-automatic detection of burnt areas from time series of MODIS biweekly vegetation indices for a Mediterranean region. MODIS-derived 250 m NDVI time series for the Valencia region, East Spain, were subjected to a two-step process for the detection of candidate burnt areas, and the results were compared with available fire event records from the Valencia Regional Government. For each pixel and date in the data series, a model was fitted to both the previous and the posterior time series data. Combining drops between two consecutive points with 1-year average drops, we used discrepancies or jumps between the pre and post models to identify seed pixels, and then delimited fire scars for each potential wildfire using an extension algorithm from the seed pixels. The resulting maps of detected burnt areas showed very good agreement with the perimeters registered in the database of fire records used as reference. Overall accuracies and indices of agreement were very high, and omission and commission errors were similar to or lower than in previous studies that used automatic or semi-automatic fire scar detection based on remote sensing. This supports the effectiveness of the method for detecting and mapping burnt areas in the Mediterranean region.
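The two-step logic (seed detection from strong NDVI drops, then scar growth outward from the seeds under a relaxed threshold) can be sketched on a toy grid. The thresholds, the drop statistic and the grid representation below are illustrative placeholders, not the paper's calibrated pre/post-model comparison.

```python
from collections import deque

def detect_burnt(ndvi, seed_drop=0.3, grow_drop=0.15):
    """ndvi: dict mapping (row, col) -> list of NDVI composite values.
    Step 1: flag seed pixels whose largest drop between consecutive
    composites exceeds seed_drop. Step 2: grow scars to 4-connected
    neighbours whose largest drop exceeds the weaker grow_drop."""
    def max_drop(series):
        return max(a - b for a, b in zip(series, series[1:]))

    seeds = {p for p, s in ndvi.items() if max_drop(s) >= seed_drop}
    burnt, queue = set(seeds), deque(seeds)
    while queue:
        r, c = queue.popleft()
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if nb in ndvi and nb not in burnt and max_drop(ndvi[nb]) >= grow_drop:
                burnt.add(nb)
                queue.append(nb)
    return burnt
```

Note how a pixel with a moderate drop is kept only when it is connected to a seed, which is what keeps isolated phenological dips out of the mapped scars.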
Abstract:
Frequently, population ecology of marine organisms uses a descriptive approach in which sizes and densities are plotted over time. This approach has limited usefulness for designing management strategies or modelling different scenarios. Population projection matrix models are among the most widely used tools in ecology. Unfortunately, for the majority of pelagic marine organisms, it is difficult to mark individuals and follow them over time to determine their vital rates and build a population projection matrix model. Nevertheless, it is possible to obtain time-series data on size structure and the density of each size class, in order to determine the matrix parameters. This approach is known as a "demographic inverse problem" and is based on quadratic programming methods, but it has rarely been used on aquatic organisms. We used unpublished field data on a population of the cubomedusa Carybdea marsupialis to construct a population projection matrix model and compare two different management strategies for reducing the population to pre-2008 values, when there was no significant interaction with bathers. These strategies were the direct removal of medusae and the reduction of prey. Our results showed that removing jellyfish from all size classes was more effective than removing only juveniles or adults. When reducing prey, the C. marsupialis population declined most when prey depletion affected the prey of all medusa sizes. Our model fits the field data well and may serve to design an efficient management strategy or to build hypothetical scenarios such as removing individuals or reducing prey. This method is applicable to other marine or terrestrial species for which density and population structure over time are available.
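Once the matrix parameters are estimated, scenario comparison reduces to repeated matrix-vector multiplication. The sketch below uses an invented three-stage matrix, not the C. marsupialis parameters fitted in the study, purely to show the projection mechanics.

```python
def project(matrix, n0, steps):
    """Project a stage-structured population vector n0 forward `steps`
    time steps under projection matrix `matrix` (row i, column j gives
    the contribution of stage j at time t to stage i at time t+1)."""
    n = list(n0)
    for _ in range(steps):
        n = [sum(matrix[i][j] * n[j] for j in range(len(n)))
             for i in range(len(matrix))]
    return n

# hypothetical 3-stage matrix: juveniles, subadults, adults
A = [[0.0, 0.0, 5.0],   # fecundity of adults
     [0.3, 0.0, 0.0],   # juvenile -> subadult survival
     [0.0, 0.5, 0.4]]   # subadult -> adult transition, adult survival
```

A removal strategy targeting a given size class can then be modelled by scaling that class's column by (1 - h) for a harvest fraction h, and strategies compared by their long-run totals.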
Abstract:
The Tertiary detritic aquifer of Madrid (TDAM), with an average thickness of 1500 m and a heterogeneous, anisotropic structure, supplies water to Madrid, the most populated city of Spain (3.2 million inhabitants in the metropolitan area). Despite its complex structure, a previous work focused on the north-northwest of Madrid city showed that the aquifer behaves quasi-elastically through extraction/recovery cycles, with ground uplift during recovery periods compensating most of the ground subsidence measured during previous extraction periods (Ezquerro et al., 2014). Therefore, the relationship between ground deformation and groundwater level through time can be simulated using simple elastic models. In this work, we model the temporal evolution of the piezometric level in 19 wells of the TDAM in the period 1997–2010. Using InSAR and piezometric time series spanning the studied period, we first estimate the elastic storage coefficient (Ske) for every well. Both the Ske of each well and the average Ske of all wells are used to predict hydraulic heads at the different well locations during the study period and are compared against the measured hydraulic heads, yielding very similar errors: 14% and 16% on average, respectively. This result suggests that an average Ske can be used to estimate piezometric level variations at all points where ground deformation has been measured by InSAR, thus allowing the production of piezometric level maps for the different extraction/recovery cycles in the TDAM.
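Under the elastic assumption, surface deformation is proportional to head change (Δd ≈ Ske·Δh), so Ske can be fitted by least squares from paired InSAR and piezometric series and then inverted to predict heads from deformation. The sketch below shows only this fit-and-predict step with made-up numbers; it is not the authors' InSAR processing chain.

```python
def fit_ske(deform, head_change):
    """Least-squares slope of the elastic model deform = Ske * head_change,
    fitted through the origin from paired time series."""
    num = sum(d * h for d, h in zip(deform, head_change))
    den = sum(h * h for h in head_change)
    return num / den

def predict_head_change(deform, ske):
    """Invert the elastic model: estimate head change from InSAR deformation."""
    return [d / ske for d in deform]
```

Using a single regional average Ske in `predict_head_change` instead of a per-well value is exactly the simplification whose error the abstract quantifies (16% vs 14% on average).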
Abstract:
The most straightforward European single energy market design would entail a European system operator regulated by a single European regulator. This would ensure the predictable development of rules for the entire EU, significantly reducing regulatory uncertainty for electricity-sector investments. But such a first-best market design is unlikely to be politically realistic in the European context, for three reasons. First, the necessary changes compared to the current situation are substantial and would produce significant redistributive effects. Second, a European solution would deprive member states of the ability to manage their energy systems nationally. And third, a single European solution might fall short of being well tailored to consumers' preferences, which differ substantially across the EU. To nevertheless reap significant benefits from an integrated European electricity market, we propose the following blueprint. First, we suggest adding a European system-management layer to complement national operation centres and help them better exchange information about the status of the system, expected changes and planned modifications. The ultimate aim should be to transfer day-to-day responsibility for the safe and economic operation of the system to the European control centre. To further increase efficiency, electricity prices should be allowed to differ between all network points, between and within countries. This would enable the throughput of electricity through national and international lines to be safely increased without major investments in infrastructure. Second, to ensure that national network plans are consistent and contribute to providing the infrastructure for a functioning single market, the role of the European ten-year network development plan (TYNDP) needs to be upgraded by obliging national regulators to approve only projects planned at the European level, unless they can prove that deviations are beneficial.
This boosted role of the TYNDP would need to be underpinned by resolving the issues of conflicting interests and information asymmetry. Therefore, the network planning process should be opened to all affected stakeholders (generators, network owners and operators, consumers, residents and others), with the European Agency for the Cooperation of Energy Regulators (ACER) enabled to act as a welfare-maximising referee. An ultimate political decision by the European Parliament on the entire plan would open a negotiation process around selecting alternatives and agreeing compensation. This ensures that all stakeholders have an interest in guaranteeing a certain degree of balance of interests in the earlier stages. In fact, transparent planning, early stakeholder involvement and democratic legitimisation are well suited to minimising local opposition to new lines. Third, sharing the cost of network investments in Europe is a critical issue. One reason is that, so far, even the most sophisticated models have been unable to identify the individual long-term net benefit in an uncertain environment. A workable compromise to finance new network investments would consist of three components: (i) all easily attributable costs should be levied on the responsible party; (ii) all network users at nodes that are expected to receive more imports through a line extension should be obliged to pay a share of the extension cost through their network charges; (iii) the rest of the cost is socialised to all consumers. Such a cost-distribution scheme will involve some intra-European redistribution from the well-developed countries (infrastructure-wise) to those that are catching up. However, it would perform this redistribution much more efficiently than the Connecting Europe Facility's ad-hoc disbursements to politically chosen projects, because it would provide the infrastructure that is really needed.
Regulation of European Banks and Business Models: Towards a new paradigm? CEPS Paperbacks. June 2012
Abstract:
Amidst talks of establishing an EU-wide banking union, the recent changes in the regulatory framework and the rethinking of the future of European banking structure, the future of EU bank regulation is inextricably linked to banks’ business models. Using a sample of over 70 banks, which overlaps with those subjected to the EBA’s 2011 stress tests, this report emphasizes the key regulatory gaps that emerge from a comprehensive analysis of the soundness and performance of bank business models and provides policy-makers with guidance to reinforce the evolving regulatory framework in European banking.
Abstract:
CEPS and the International Observatory on Financial Services Cooperatives (IOFSC) at HEC Montreal have initiated an annual monitoring exercise on banking business models in the EU. Based on their balance sheet structures, 147 European banks that account for more than 80% of the industry's assets were categorised into four business models. The Monitor examines ownership structures and assesses financial and economic performance, resilience and robustness, before, during and after the financial and economic crises, across retail-diversified, retail-focused, investment and wholesale-oriented banks. Inter alia, this edition of the Monitor finds that banks engaging more in traditional retail banking activities, with a mix of funding sources, fared well compared with other bank models during the different phases of the crisis.
Abstract:
We report quantitative results from three brittle thrust wedge experiments, comparing numerical results directly with each other and with corresponding analogue results. We first test whether the participating codes reproduce predictions from analytical critical taper theory. Eleven codes pass the stable wedge test, showing negligible internal deformation and maintaining the initial surface slope upon horizontal translation over a frictional interface. Eight codes participated in the unstable wedge test, which examines the evolution of a wedge by thrust formation from a subcritical state to the critical taper geometry. The critical taper is recovered, but the models show two deformation modes, characterised by either mainly forward-dipping thrusts or a series of thrust pop-ups. We speculate that the two modes are caused by differences in effective basal boundary friction related to the different algorithms used for modelling boundary friction. The third experiment examines the stacking of forward thrusts that are translated upward along a backward thrust. The results of the seven codes that ran this experiment show variability in deformation style, number of thrusts, thrust dip angles and surface slope. Overall, our experiments show that numerical models run with different numerical techniques can successfully simulate laboratory brittle thrust wedge models at the cm scale. In more detail, however, we find that it is challenging to reproduce sandbox-type setups numerically, because of frictional boundary conditions and velocity discontinuities. We recommend that future numerical-analogue comparisons use simple boundary conditions and that the numerical Earth Science community define a plasticity test to resolve the variability in model shear zones.
Abstract:
Substantial retreat or disintegration of numerous ice shelves has been observed on the Antarctic Peninsula. The ice shelf in the Prince Gustav Channel retreated gradually from the late 1980s and broke up in 1995. Tributary glaciers reacted with speed-up, surface lowering and increased ice discharge, consequently contributing to sea level rise. We present a detailed long-term study (1993-2014) of the dynamic response of Sjögren Inlet glaciers to the disintegration of the Prince Gustav Ice Shelf. We analyzed various remote sensing datasets to observe the reactions of the glaciers to the loss of the buttressing ice shelf. A strong increase in ice surface velocities was observed, with maximum flow speeds reaching 2.82±0.48 m/d in 2007 at Sjögren Glacier and 1.50±0.32 m/d in 2004 at Boydell Glacier. Subsequently, the flow velocities decelerated; however, in late 2014 we still measured about twice the values of our first measurements in 1996. The tributary glaciers retreated 61.7±3.1 km² behind the former grounding line of the ice shelf. In regions below 1000 m a.s.l., a mean surface lowering of -68±10 m (-3.1 m/a) was observed in the period 1993-2014. The lowering rate decreased to -2.2 m/a in recent years. Based on the surface lowering rates, geodetic mass balances of the glaciers were derived for different time steps. A high mass loss rate of -1.21±0.36 Gt/a was found in the earliest period (1993-2001). Due to the dynamic adjustment of the glaciers to the new boundary conditions, the ice mass loss decreased to -0.59±0.11 Gt/a in the period 2012-2014, resulting in an average mass loss rate of -0.89±0.16 Gt/a (1993-2014). Including the retreat of the ice front and grounding line, a total mass change of -38.5±7.7 Gt and a contribution to sea level rise of 0.061±0.013 mm were computed. Analysis of the ice flux revealed that available bedrock elevation estimates at Sjögren Inlet are too shallow and are the major source of uncertainty in ice flux computations.
This temporally dense time series analysis of Sjögren Inlet glaciers shows that the adjustment of tributary glaciers to ice shelf disintegration is still ongoing, and it provides detailed information on the changes in glacier dynamics.
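At its core, a geodetic mass balance of this kind is lowering rate × area × ice density. The toy version below assumes a constant column-average density and omits the ice-front and grounding-line retreat terms the authors also include, so it will not reproduce their published totals exactly; all numbers are illustrative.

```python
ICE_DENSITY_GT_PER_KM3 = 0.917  # assumed column-average ice density

def geodetic_mass_balance(mean_elev_change_m, area_km2, years):
    """Mass balance rate in Gt/a from a mean surface-elevation change
    (negative for lowering) over a given area (km^2) and time span (a)."""
    volume_change_km3 = (mean_elev_change_m / 1000.0) * area_km2
    return volume_change_km3 * ICE_DENSITY_GT_PER_KM3 / years
```

Summing such rates over sub-periods and glacier basins, and adding the retreat terms, yields total mass change figures of the kind quoted in the abstract.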
Abstract:
Late Pleistocene signals of calcium carbonate, organic carbon, and opaline silica concentration and accumulation are documented in a series of cores from a zonal/meridional/depth transect in the equatorial Atlantic Ocean to reconstruct the regional sedimentary history. Spectral analysis reveals that maxima and minima in biogenous sedimentation occur with glacial-interglacial cyclicity as a function of both (1) primary production at the sea surface modulated by orbitally forced variation in trade wind zonality and (2) destruction at the seafloor by variation in the chemical character of advected intermediate and deep water from high latitudes modulated by high-latitude ice volume. From these results a pattern emerges in which the relative proportion of signal variance from the productivity signal centered on the precessional (23 kyr) band decreases while that of the destruction signal centered on the obliquity (41 kyr) and eccentricity (100 kyr) periods increases below ~3600-m ocean depth.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
In this paper we propose a new identification method, based on the residual white noise autoregressive criterion (Pukkila et al., 1990), to select the order of VARMA structures. Results from extensive simulation experiments based on different model structures, with varying numbers of observations and numbers of component series, are used to demonstrate the performance of this new procedure. We also use economic and business data to compare the model structures selected by this order selection method with those identified in other published studies.
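The core of any residual-whiteness criterion is checking that the residuals of a candidate model have autocorrelations consistent with white noise. The univariate sketch below is a simplified stand-in for that idea (approximate Bartlett bands), not Pukkila et al.'s multivariate RWNAR procedure; the bound constant and lag count are illustrative.

```python
def acf(x, lag):
    """Sample autocorrelation of series x at the given lag."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x)
    return sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag)) / c0

def looks_white(resid, max_lag=10, z=2.58):
    """True if all residual autocorrelations up to max_lag fall inside
    approximate +/- z/sqrt(n) bands expected under white noise."""
    bound = z / len(resid) ** 0.5
    return all(abs(acf(resid, k)) < bound for k in range(1, max_lag + 1))
```

In an order-selection loop, one would fit models of increasing order and keep the smallest order whose residuals pass such a check.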
Abstract:
After ingestion of a standardized dose of ethanol, alcohol concentrations were assessed over 3.5 hours from blood (six readings) and breath (10 readings) in a sample of 412 MZ and DZ twins who took part in an Alcohol Challenge Twin Study (ACTS). Nearly all participants were subsequently genotyped at two polymorphic SNPs in the ADH1B and ADH1C loci known to affect in vitro ADH activity. In the DZ pairs, 14 microsatellite markers covering a 20.5 cM region on chromosome 4 that includes the ADH gene family were assessed. Variation in the timed series of autocorrelated blood and breath alcohol readings was studied using a bivariate simplex design. The contribution of a quantitative trait locus (QTL), or QTLs, linked to the ADH region was estimated via a mixture of likelihoods weighted by identity-by-descent probabilities. The effects of allelic substitution at the ADH1B and ADH1C loci were estimated in the means part of the model, simultaneously with the effects of sex and age. There was a major contribution to variance in alcohol metabolism from a QTL that accounted for about 64% of the additive genetic covariation common to both blood and breath alcohol readings at the first time point. No effects of the ADH1B*47His or ADH1C*349Ile alleles on in vivo metabolism were observed, although these have been shown to have major effects in vitro. This implies that there is a major determinant of variation in in vivo alcohol metabolism in the ADH region that is not accounted for by these polymorphisms. Earlier analyses of these data suggested that alcohol metabolism is related to drinking behavior, implying that this QTL may be protective against alcohol dependence.