989 results for Box-Jenkins Approach


Relevance:

30.00%

Publisher:

Abstract:

In this paper we address the complexity of the analysis of water use in relation to the issue of sustainability. The flows of water on our planet represent a complex reality which can be studied using many different perceptions and narratives referring to different scales and dimensions of analysis. For this reason, a quantitative analysis of water use has to be based on analytical methods that are semantically open: they must be able to define what we mean by the term "water" when crossing different scales of analysis. We propose here a definition of water as a resource that deals with the many services it provides to humans and ecosystems. We argue that water can fulfil so many of them because the element has many characteristics that allow the resource to be labelled with different attributes, depending on the end use, such as being drinkable. Since the services for humans and the functions for ecosystems associated with water flows are defined on different scales but are still interconnected, it is necessary to organize our assessment of water use across different hierarchical levels. In order to do so we define how to approach the study of water use in the Societal Metabolism, by proposing the Water Metabolism, organized in three levels: the societal level, the ecosystem level and the global level. The possible end uses we distinguish for society are: personal/physiological use, household use, and economic use. Organizing the study of "water use" across all these levels increases the usefulness of the quantitative analysis and the possibility of finding relevant and comparable results. To achieve this result, we adapted a method developed to deal with multi-level, multi-scale analysis - the Multi-Scale Integrated Analysis of Societal and Ecosystem Metabolism (MuSIASEM) approach - to the analysis of water metabolism. In this paper, we discuss the peculiar analytical identity that "water" shows within multi-scale metabolic studies: water represents a flow-element when considering the metabolism of social systems (at a small scale, when describing the water metabolism inside the society) and a fund-element when considering the metabolism of ecosystems (at a larger scale, when describing the water metabolism outside the society). The theoretical analysis is illustrated using two cases, which characterize the metabolic patterns regarding water use of a productive system in Catalonia and a water management policy in the Andarax River Basin in Andalusia.

Relevance:

30.00%

Publisher:

Abstract:

Today's approach to anti-doping is mostly centered on the judicial process, despite pursuing the further goals of detecting, reducing, solving and/or preventing doping. Just as decision-making in law enforcement draws on Forensic Intelligence, anti-doping might significantly benefit from a more extensive gathering of knowledge. Forensic Intelligence might bring a broader logical dimension to the interpretation of data on doping activities, enabling a more future-oriented and comprehensive approach than the traditional case-based and reactive process. Information coming from a variety of sources related to doping, whether directly or potentially, would feed an organized memory providing real-time intelligence on the size, seriousness and evolution of the phenomenon. Given the complexity of doping, integrating analytical chemistry results and longitudinal monitoring of biomarkers with physiological, epidemiological, sociological or circumstantial information might provide a logical framework enabling fit-for-purpose decision-making. Anti-Doping Intelligence might therefore prove effective at providing a more proactive response to any potential or emerging doping phenomenon, and at addressing existing problems with innovative actions and/or policies. This approach might prove useful for detecting, neutralizing, disrupting and/or preventing organized doping or the trafficking of doping agents, as well as for refining the targeting of athletes or teams. In addition, such an intelligence-led methodology would serve to address doping offenses in the absence of adverse analytical chemical evidence.

Relevance:

30.00%

Publisher:

Abstract:

Finding out whether Plasmodium spp. are coevolving with their vertebrate hosts is of both theoretical and applied interest and can influence our understanding of the effects and dynamics of malaria infection. In this study, we tested for local adaptation as a signature of coevolution between malaria blood parasites (Plasmodium spp.) and their host, the great tit (Parus major). We conducted a reciprocal transplant experiment in the field, exposing birds from two populations to Plasmodium parasites. This experimental set-up also provided a unique opportunity to study the natural history of malaria infection in the wild and to assess the effects of primary malaria infection on juvenile birds. We present three main findings: (i) there was no support for local adaptation; (ii) infection rates were male-biased; (iii) infection occurred towards the end of the summer and differed between sites. There were also site-specific effects of malaria infection on the hosts. Taken together, we present one of the few experimental studies of parasite-host local adaptation in a natural malaria system, and our results shed light on the effects of avian malaria infection in the wild.

Relevance:

30.00%

Publisher:

Abstract:

This thesis covers various aspects of the modeling and analysis of finite-mean time series with symmetric stable distributed innovations. Time series analysis based on the Box-Jenkins methods is the most popular approach, in which the models are linear and the errors are Gaussian. We highlight the limitations of classical time series analysis tools, explore some generalized tools, and organize the approach in parallel to the classical set-up. The main focus of the thesis is the estimation and prediction of a signal-plus-noise model, where both the signal and the noise are assumed to follow models with symmetric stable innovations.

The thesis starts with some motivating examples and application areas of alpha-stable time series models. Classical time series analysis and the corresponding theories based on finite-variance models are discussed extensively in the second chapter, which also surveys the existing theories and methods for infinite-variance models.

The third chapter presents a linear filtering method for computing the filter weights assigned to the observations when estimating an unobserved signal in a general noisy environment. Here both the signal and the noise are considered stationary processes with infinite-variance innovations. Semi-infinite, doubly infinite and asymmetric signal extraction filters are derived based on a minimum dispersion criterion. Finite-length filters based on Kalman-Levy filters are developed, and the pattern of the filter weights is identified. Simulation studies show that the proposed methods are competent in signal extraction for processes with infinite variance.

Parameter estimation of autoregressive signals observed in a symmetric stable noise environment is discussed in the fourth chapter, using higher-order Yule-Walker type estimation based on the auto-covariation function; the methods are exemplified by simulation and by an application to sea surface temperature data. The number of Yule-Walker equations is increased and an ordinary least squares estimate of the autoregressive parameters is proposed. The singularity problem of the auto-covariation matrix is addressed, and a modified version of the generalized Yule-Walker method using singular value decomposition is derived.

The fifth chapter introduces the partial covariation function as a tool for stable time series analysis where the covariance or partial covariance is ill defined. Asymptotic results for the partial auto-covariation are studied, and its application to model identification of stable autoregressive models is discussed. The Durbin-Levinson algorithm is generalized to infinite-variance models in terms of the partial auto-covariation function, and a new information criterion for consistent order estimation of stable autoregressive models is introduced.

Chapter six explores the application of these techniques in signal processing. Frequency estimation of a sinusoidal signal observed in a symmetric stable noisy environment is discussed in this context, introducing a parametric spectrum analysis and a frequency estimate based on the power transfer function. The estimate of the power transfer function is obtained using the modified generalized Yule-Walker approach. Another important problem in statistical signal processing is identifying the number of sinusoidal components in an observed signal; a modified version of the proposed information criterion is used for this purpose.
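The higher-order Yule-Walker idea translates readily into code. Below is a minimal, hedged sketch (not the thesis's implementation): it uses a normalized sample auto-covariation as a covariance surrogate for symmetric alpha-stable data, stacks more Yule-Walker-type equations than parameters, and solves the overdetermined system by SVD-based least squares, echoing the SVD-modified generalized Yule-Walker method. The estimator form, the AR(2) demo and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import levy_stable

def autocovariation(x, lag):
    # Normalized sample auto-covariation: a covariance surrogate that
    # stays finite for symmetric alpha-stable data (an assumed estimator).
    n, k = len(x), abs(lag)
    lead, base = (x[k:], x[:n - k]) if lag >= 0 else (x[:n - k], x[k:])
    return np.sum(lead * np.sign(base)) / np.sum(np.abs(base))

def higher_order_yule_walker(x, p, m):
    # m >= p Yule-Walker-type equations lam(k) = sum_j phi_j * lam(k - j);
    # lstsq solves the overdetermined system via SVD, sidestepping a
    # possibly singular moment matrix (cf. the generalized YW method).
    b = np.array([autocovariation(x, k) for k in range(1, m + 1)])
    R = np.array([[autocovariation(x, k - j) for j in range(1, p + 1)]
                  for k in range(1, m + 1)])
    phi, *_ = np.linalg.lstsq(R, b, rcond=None)
    return phi

# Demo: AR(2) driven by symmetric alpha-stable (alpha = 1.5) innovations.
n, burn = 6000, 500
eps = levy_stable.rvs(1.5, 0.0, size=n + burn, random_state=42)
x = np.zeros(n + burn)
for t in range(2, n + burn):
    x[t] = 1.2 * x[t - 1] - 0.5 * x[t - 2] + eps[t]
print(higher_order_yule_walker(x[burn:], p=2, m=10))  # roughly [1.2, -0.5]
```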

Relevance:

30.00%

Publisher:

Abstract:

In this paper a support vector machine (SVM) approach for characterizing the feasible parameter set (FPS) in non-linear set-membership estimation problems is presented. It iteratively solves a regression problem from which an approximation of the boundary of the FPS can be determined. To guarantee convergence to the boundary, the procedure includes a derivative-free line search, and for appropriate coverage of points on the FPS boundary it is suggested to start with a sequential box-pavement procedure. The SVM approach is illustrated on a simple sine and exponential model with two parameters and on an agro-forestry simulation model.
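As a concrete, simplified illustration of the idea (a classification-based sketch, not the paper's iterative regression formulation), the code below samples a parameter box for a toy sine/exponential model, labels each parameter vector as feasible or not under a set-membership error bound, and fits an RBF support vector machine whose zero decision level approximates the FPS boundary. The model, bounds, tolerance and SVM settings are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Toy model y = a*sin(x) + b*exp(-x), mirroring the paper's sine/exponential
# example; data points and the error bound eps are illustrative assumptions.
x_data = np.linspace(0.0, 3.0, 20)
theta_true = np.array([1.0, 0.5])
y_data = theta_true[0] * np.sin(x_data) + theta_true[1] * np.exp(-x_data)
eps = 0.05  # set-membership error bound defining feasibility

def feasible(theta):
    y_model = theta[0] * np.sin(x_data) + theta[1] * np.exp(-x_data)
    return bool(np.all(np.abs(y_model - y_data) <= eps))

# Sample the parameter box and label feasibility (set membership).
thetas = rng.uniform([0.0, 0.0], [2.0, 1.0], size=(4000, 2))
labels = np.array([feasible(t) for t in thetas])

# RBF-SVM: its zero decision level approximates the FPS boundary.
svm = SVC(kernel="rbf", C=100.0, gamma=50.0).fit(thetas, labels)
scores = svm.decision_function(thetas)
boundary_pts = thetas[np.abs(scores) < 0.05]  # near-boundary samples
print(f"{labels.mean():.1%} feasible; {len(boundary_pts)} near-boundary samples")
```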

Relevance:

30.00%

Publisher:

Abstract:

The context of this report and the IRIDIA laboratory are described in the preface. Evolutionary Robotics and the box-pushing task are presented in the introduction. The building of a test system supporting Evolutionary Robotics experiments is then detailed. This system consists of a robot simulator and a Genetic Algorithm, and is used to explore the possibility of evolving box-pushing behaviours. The bootstrapping problem is explained, a novel approach for dealing with it is proposed, and results are presented. Finally, ideas for extending this approach are presented in the conclusion.
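For orientation, here is a minimal sketch of the kind of evolutionary loop such a test system runs, with a fitness-shaping term as one common remedy for the bootstrapping problem (a flat, all-zero fitness landscape when no initial individual ever touches the box). The simulator below is a dummy stand-in, and the encoding, operators and parameters are illustrative assumptions, not the report's method.

```python
import numpy as np

rng = np.random.default_rng(2)

N_WEIGHTS, POP, GENS = 20, 50, 30  # controller size, population, generations

def simulate(weights):
    # Placeholder for a real robot-simulator rollout: returns box displacement
    # plus a small shaping bonus so early, non-pushing controllers still get
    # a gradient to climb (one way to tackle bootstrapping).
    box_displacement = max(0.0, np.tanh(weights.sum()))     # dummy signal
    approach_bonus = 1.0 / (1.0 + np.linalg.norm(weights))  # dummy shaping
    return box_displacement + 0.1 * approach_bonus

pop = rng.normal(size=(POP, N_WEIGHTS))
for gen in range(GENS):
    fitness = np.array([simulate(ind) for ind in pop])
    # Truncation selection plus Gaussian mutation.
    parents = pop[np.argsort(fitness)[-POP // 2:]]
    children = parents + rng.normal(scale=0.1, size=parents.shape)
    pop = np.vstack([parents, children])
print("best fitness:", max(simulate(ind) for ind in pop))
```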

Relevance:

30.00%

Publisher:

Abstract:

Although the need to make health services more accessible to persons who have migrated has been identified, knowledge about health-promotion programs (HPPs) from the perspective of older persons born abroad is lacking. This study explores experiences of the design and content implemented in an adapted version of a group-based HPP developed in a researcher-community partnership. Fourteen persons aged 70 years or older (70-83 years) who had migrated to Sweden from Finland or the Balkan Peninsula were included. A grounded theory approach guided the data collection and analysis. The findings showed how participants and personnel jointly helped raise awareness. The participants experienced three key processes that could open doors to awareness: enabling community, providing opportunities to understand and be understood, and confirming human values and abilities. Depending on how the HPP content and design are shaped by the group, these key processes could either inhibit or encourage opening doors to awareness. This study therefore provides key insights into enabling health by deepening the understanding of how the exchange of health-promoting messages is facilitated or hindered. It adds to the scientific knowledge base on how the design and content of HPPs may support and recognize the capabilities of persons aging in the context of migration.

Relevance:

30.00%

Publisher:

Abstract:

In this work we develop an approach to obtain analytical expressions for potentials in an impenetrable box. For this kind of system the expressions have the advantage of being valid for arbitrary values of the box length and of respecting the correct quantum limits. The similarity of this kind of problem to quasi-exactly solvable potentials is explored in order to accomplish our goals. Problems related to the breaking of symmetries and to simultaneous eigenfunctions of commuting operators are discussed.
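For reference, the quantum limit such expressions must respect is presumably that of the free particle in an impenetrable box; the standard textbook result is sketched below (m is the particle mass, n the quantum number), not the paper's own expressions.

```latex
% Reference limit: free particle in an impenetrable box of length L.
% A potential V(x) added inside the box should recover these eigenstates
% as its strength vanishes, and the confinement energy ~ 1/L^2
% dominates V as L -> 0.
\psi_n(x) = \sqrt{\frac{2}{L}}\,\sin\!\left(\frac{n\pi x}{L}\right),
\qquad
E_n = \frac{n^2 \pi^2 \hbar^2}{2 m L^2}, \qquad n = 1, 2, \dots
```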

Relevance:

30.00%

Publisher:

Abstract:

Studies have been carried out on the heat transfer in a packed bed of glass beads percolated by air at moderate flow rates. A rigorous statistical analysis of the experimental data was carried out, and the traditional two-parameter model was used to represent them. The parameters estimated by the least-squares method were the effective radial thermal conductivity, k, and the wall coefficient, h. The results were evaluated with respect to the bed inlet boundary temperature, T0, the number of terms of the solution series, and the number of experimental points used in the estimation. Results indicated that a small difference in T0 was sufficient to promote great modifications in the estimated parameters and in the statistical properties of the model. The use of replicates at points of high parametric information of the model improved the results, although analysis of the residuals led to the rejection of this alternative. In order to evaluate the non-linearity of the model, the Bates and Watts (1988) curvature measures and the Box (1971) biases of the coefficients were calculated. The intrinsic curvatures of the model (IN) tend to be concentrated at low bed heights, while those due to parameter effects (PE) are spread over the whole bed. The Box biases indicated both parameters as responsible for the PE curvatures, h being somewhat more problematic.
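The estimation step can be sketched as a generic two-parameter nonlinear least-squares fit. The model function below is a deliberately simple stand-in (the study's actual two-parameter model is a series solution of the radial heat-transfer equation); the functional form, synthetic data and starting values are assumptions, and only the (k, h) estimation workflow mirrors the study.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical two-parameter temperature model: k controls axial decay,
# h controls the radial profile (a stand-in for the series solution).
def model(params, r, z):
    k, h = params
    return 30.0 + 50.0 * np.exp(-z / k) * np.exp(-h * r**2 / 10.0)

rng = np.random.default_rng(3)
r = np.tile(np.linspace(0.0, 1.0, 10), 5)          # radial positions
z = np.repeat(np.linspace(0.1, 0.5, 5), 10)        # bed heights
true = (0.8, 2.0)                                  # assumed "true" (k, h)
T_obs = model(true, r, z) + rng.normal(scale=0.5, size=r.size)

# Least-squares estimation of (k, h), as in the study.
fit = least_squares(lambda p: model(p, r, z) - T_obs, x0=[0.5, 1.0])
k_hat, h_hat = fit.x
print(f"k = {k_hat:.3f}, h = {h_hat:.2f}")
```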

Relevance:

30.00%

Publisher:

Abstract:

The first measurement of the charged component of the underlying event using the novel jet-area/median approach is presented for proton-proton collisions at centre-of-mass energies of 0.9 and 7 TeV. The data were recorded in 2010 with the CMS experiment at the LHC. A new observable, sensitive to soft particle production, is introduced and investigated inclusively and as a function of the event scale defined by the transverse momentum of the leading jet. Various phenomenological models are compared to data, with and without corrections for detector effects. None of the examined models describe the data satisfactorily.
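For orientation, the jet-area/median approach (in the Cacciari-Salam sense) characterizes the event-by-event soft activity as the median transverse-momentum density over the jets of an event; the standard definition is sketched below and may differ in detail from the exact observable used in this measurement.

```latex
% Median transverse-momentum density over the jets of an event,
% with A_j the (active) catchment area of jet j:
\rho = \operatorname*{median}_{j \,\in\, \text{jets}}
       \left\{ \frac{p_{T,j}}{A_j} \right\}
```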

Relevance:

30.00%

Publisher:

Abstract:

The behavior of composed Web services depends on the results of the invoked services; unexpected behavior of one invoked service can threaten the correct execution of an entire composition. This paper proposes an event-based approach to black-box testing of Web service compositions based on event sequence graphs, which are extended by facilities to deal not only with service behavior under regular circumstances (i.e., where cooperating services are working as expected) but also with behavior in undesirable situations (i.e., where cooperating services are not working as expected). Furthermore, the approach can be used independently of the artifacts (e.g., Business Process Execution Language) or the type of composition (orchestration/choreography). A large case study, based on a commercial Web application, demonstrates the feasibility of the approach and analyzes its characteristics. Test generation and execution are supported by dedicated tools. In particular, the use of an enterprise service bus for test execution is noteworthy and differs from other approaches. The results of the case study suggest that the new approach has the power to detect faults systematically, performing properly even with complex and large compositions.
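A toy sketch of test generation from an event sequence graph follows: positive tests cover every allowed event succession from entry to exit, while negative tests exercise "faulty" event pairs absent from the graph, mirroring the regular/undesirable split described above. The composition, event names and greedy cover are illustrative assumptions, not the paper's tooling.

```python
# Hypothetical event sequence graph (ESG) for a toy composition: nodes are
# events (service invocations), edges are allowed successions; '[' and ']'
# mark the entry and exit pseudo-events, following common ESG notation.
ESG = {
    "[": ["login"],
    "login": ["search", "]"],
    "search": ["book", "search"],
    "book": ["pay"],
    "pay": ["]"],
}

def edge_covering_sequences(esg):
    # Greedy generation of complete event sequences (entry to exit) covering
    # every edge at least once; a sketch, not an optimal cover.
    uncovered = {(u, v) for u, vs in esg.items() for v in vs}
    tests = []
    while uncovered:
        node, seq, progressed = "[", ["["], False
        while node != "]":
            nxt = next((v for v in esg[node] if (node, v) in uncovered),
                       esg[node][0])
            if (node, nxt) in uncovered:
                uncovered.discard((node, nxt))
                progressed = True
            seq.append(nxt)
            node = nxt
        tests.append(seq)
        if not progressed:  # safety stop: walk covered nothing new
            break
    return tests

# Negative tests: "faulty event pairs" are successions absent from the ESG.
events = [e for e in ESG if e != "["]
faulty = [(u, v) for u in events for v in events if v not in ESG.get(u, [])]

for t in edge_covering_sequences(ESG):
    print(" -> ".join(t))
print("faulty pairs to exercise:", faulty[:5])
```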

Relevance:

30.00%

Publisher:

Abstract:

The main problem connected with cone-beam computed tomography (CT) systems for industrial applications employing 450 kV X-ray tubes is the high amount of scattered radiation which is added to the primary radiation (the signal). This stray radiation leads to a significant degradation of image quality, so a better understanding of the scattering and methods to reduce its effects are necessary. Several studies have been carried out in the medical field at lower energies, whereas studies in industrial CT, especially for energies up to 450 kV, are lacking. Moreover, the studies reported in the literature do not consider the scattered radiation generated by the CT system structure and the walls of the X-ray room (environmental scatter). In order to investigate the scattering in CT projections, a GEANT4-based Monte Carlo (MC) model was developed. The model, which has been validated against experimental data, has enabled the calculation of the scattering including the environmental scatter, the optimization of an anti-scatter grid suitable for the CT system, and the optimization of the hardware components of the CT system. The investigation of multiple scattering in the CT projections showed that its contribution is 2.3 times that of the primary radiation for certain objects. The results on environmental scatter showed that it is the major component of the scattering for aluminum box objects with a front size of 70 × 70 mm², and that it strongly depends on the thickness of the object and therefore on the projection; its correction is thus one of the key factors for achieving high-quality images. The anti-scatter grid optimized by means of the developed MC model was found to reduce the scatter-to-primary ratio in the reconstructed images by 20%. The object and environmental scatter calculated by means of the simulation were used to improve the scatter correction algorithm, which Empa was able to patent. The results showed that the cupping effect in the corrected image is strongly reduced. The developed CT simulation is a powerful tool to optimize the design of the CT system and to evaluate the contribution of the scattered radiation to the image. Moreover, it has offered a basis for a new scatter correction approach with which it has been possible to achieve images with the same spatial resolution as state-of-the-art well-collimated fan-beam CT, with a factor-of-10 gain in reconstruction time. This result has a high economic impact in non-destructive testing and evaluation, and in reverse engineering.
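The effect of scatter on cupping, and of projection-domain scatter correction, can be sketched in one dimension: a smooth scatter field added to the primary signal flattens the measured line integrals toward the object center, and subtracting a (here synthetic, deliberately imperfect) Monte Carlo scatter estimate restores most of the contrast. All profiles and numbers below are illustrative assumptions, not the thesis's data.

```python
import numpy as np

u = np.linspace(-1.0, 1.0, 256)                 # detector coordinate
primary = np.exp(-2.0 / (1.0 + 4.0 * u**2))     # hypothetical attenuation
scatter = 0.3 * (1.0 - 0.4 * u**2)              # slowly varying scatter field
measured = primary + scatter                    # detector reading

scatter_mc = scatter * (1.0 + 0.05 * np.cos(3 * u))    # imperfect MC estimate
corrected = np.clip(measured - scatter_mc, 1e-6, None)  # keep log() valid

# Line integrals (unattenuated intensity normalized to 1 here).
p_raw, p_corr = -np.log(measured), -np.log(corrected)
c, e = 128, 0  # center and edge pixels
print(f"scatter-to-primary ratio at center: {scatter[c] / primary[c]:.2f}")
print(f"center-edge contrast  raw: {p_raw[c] - p_raw[e]:.2f}  "
      f"corrected: {p_corr[c] - p_corr[e]:.2f}  "
      f"true: {-np.log(primary[c]) + np.log(primary[e]):.2f}")
```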

Relevance:

30.00%

Publisher:

Abstract:

In this work we aim to propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected in many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies that avoid the bias carried by aggregated analyses. Starting from collected disease counts and expected disease counts calculated by means of reference-population disease rates, in each area an SMR is derived as the MLE under a Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the low population underlying the area or because of the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classic and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method which focuses on multiple testing control, without however leaving the preliminary-study perspective that an analysis of SMR indicators is required to keep. We implement control of the FDR, a quantity widely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value provide weak power in small areas, where the expected number of disease cases is small. Moreover, tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods.

Another peculiarity of the present work is to propose a hierarchical fully Bayesian model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts from Bayesian disease mapping models, referring in particular to the Besag, York and Mollié (1991) model, often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood, typical of a hierarchical Bayesian model, offers the advantage of evaluating a single test (i.e. a test in a single area) by means of all observations in the map under study, rather than just the single observation. This improves the power of the tests in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) in each area. An estimate of the expected FDR conditional on the data (the estimated FDR) can be calculated for any set of b_i's relative to areas declared at high risk (where the null hypothesis is rejected) by averaging the b_i's themselves. The estimated FDR can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the estimated FDR is not larger than a prefixed value; we call these estimated-FDR-based decision (or selection) rules.

The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation produces a loss of specificity. Moreover, our model has the interesting feature of still being able to provide estimates of the relative risk values, as in the Besag, York and Mollié (1991) model. A simulation study was set up to evaluate the model's performance in terms of accuracy of FDR estimation, sensitivity and specificity of the decision rule, and goodness of estimation of the relative risks. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the size of the areas, the number of areas where the null hypothesis is true, and the risk level in the latter areas. In summarizing the simulation results we always consider the FDR estimate in sets constituted by all areas whose b_i is lower than a threshold t. We show graphs of the estimated FDR and the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (from the closeness between the estimated and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against the estimated FDR we can check the sensitivity and specificity of the corresponding estimated-FDR-based decision rules. To investigate the over-smoothing of the relative risk estimates, we compare box-plots of such estimates in high-risk areas (known by simulation), obtained by both our model and the classic Besag, York and Mollié model. All these summary tools are worked out for all simulated scenarios (54 in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence conservative FDR control) in scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aim. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of estimated-FDR-based decision rules is generally low, but their specificity is high; selection rules based on an estimated FDR of 0.05 or 0.10 can therefore be suggested. In cases where the number of true alternative hypotheses (the number of true high-risk areas) is small, FDR values up to 0.15 are also well estimated, and a rule based on an estimated FDR of 0.15 gains power while maintaining high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels, the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of a rule based on an estimated FDR of 0.05. In such scenarios, rules based on an estimated FDR of 0.05 or, even worse, 0.10 cannot be suggested because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For these reasons, our model is interesting for its ability to perform both the estimation of relative risk values and FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
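The estimated-FDR selection rule described above has a direct computational form: sort the areas by their posterior null probabilities b_i and grow the rejected set while the running mean of the b_i's stays at or below the prefixed level. A minimal sketch, assuming the b_i are already available from MCMC on a Besag-York-Mollié-type model; the toy values are purely illustrative.

```python
import numpy as np

def fdr_select(b, alpha=0.10):
    # Sort areas by posterior null probability (strongest signals first);
    # the estimated FDR of a rejected set is the mean of its b_i's, so the
    # largest admissible set is the longest prefix whose running mean <= alpha.
    b = np.asarray(b)
    order = np.argsort(b)
    running_fdr = np.cumsum(b[order]) / np.arange(1, b.size + 1)
    k = int(np.sum(running_fdr <= alpha))
    selected = order[:k]
    return selected, (running_fdr[k - 1] if k else 0.0)

# Toy posterior null probabilities for 10 areas (illustrative only).
b = np.array([0.01, 0.02, 0.04, 0.30, 0.08, 0.55, 0.90, 0.03, 0.70, 0.15])
areas, est_fdr = fdr_select(b, alpha=0.10)
print("high-risk areas:", sorted(areas.tolist()),
      "estimated FDR:", round(est_fdr, 3))
```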

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Tethered bilayer lipid membranes (tBLMs) are a promising model system for the natural cell membrane. They consist of a lipid bilayer that is covalently coupled to a solid support via a spacer group. In this study, we developed a suitable approach to increase the submembrane space in tBLMs. The challenge is to create a membrane with a lower lipid density in order to increase membrane fluidity, while avoiding the defects that might appear as the lateral space within the tethered monolayers increases. Therefore, various synthetic strategies and different monolayer preparation techniques were examined. Synthetic attempts to achieve a large ion reservoir were made in two directions: increasing the spacer length of the tether lipids and increasing the lateral distribution of the lipids in the monolayer. The first resulted in the synthesis of a small library of tether lipids (DPTT, DPHT and DPOT) characterized by 1H and 13C NMR, FD-MS, ATR, DSC and TGA. The synthetic strategy for their preparation includes the synthesis of a precursor with a double-bond anchor that can easily be modified for different substrates (e.g. metal and metal oxide); here, the double bond was converted into a thiol group suitable for gold surfaces. Another approach towards the preparation of homogeneous monolayers with decreased two-dimensional packing density was the synthesis of two novel anchor lipids, DPHDL and DDPTT. DPHDL is a “self-diluted” tether lipid containing two lipoic anchor moieties. DDPTT has an extended lipophilic part that should lead to the preparation of diluted, leakage-free proximal layers facilitating the completion of the bilayer. Our tool-box of tether lipids was completed with two fluorescently labelled lipid precursors carrying respectively one and two phytanyl chains in the hydrophobic region and a dansyl group as a fluorophore. The use of such fluorescently marked lipids is expected to give additional information about the lipid distribution at the air-water interface. The Langmuir film balance was used to investigate the monolayer properties of four of the synthesized thiolated anchor lipids; the packing density and mixing behaviour were examined. The results have shown that mixing anchor lipids with free lipids can homogeneously dilute the anchor lipid monolayers. Moreover, an increase in the hydrophilicity (PEG chain length) of the anchor lipids leads to a higher packing density, and a decrease in temperature results in a similar trend, whereas increasing the number of phytanyl chains per lipid molecule decreases the packing density. LB-monolayers based on pure and mixed lipids at different ratios and transfer pressures were tested to form tBLMs with diluted inner layers. A combination of LB-monolayer transfer with the solvent-exchange method successfully accomplished the formation of tBLMs based on pure DPOT. Some preliminary investigations of the electrical sealing properties and protein incorporation of self-assembled DPOT- and DDPTT-based tBLMs were conducted. The bilayer formation performed by solvent exchange resulted in membranes with high resistances and low capacitances. The appearance of space beneath the membrane is clearly visible in the impedance spectra, expressed as a second RC element. This leads to the conclusion that the longer spacer in DPOT and the larger lateral space between the DDPTT molecules in the investigated systems essentially influence the electrical parameters of the membrane.
Finally, we could show the functional incorporation of the small ion carrier valinomycin in both types of membranes.