906 results for Autoregressive-Moving Average model


Relevance:

30.00%

Publisher:

Abstract:

Operational forecasting centres are currently developing data assimilation systems for coupled atmosphere-ocean models. Strongly coupled assimilation, in which a single assimilation system is applied to a coupled model, presents significant technical and scientific challenges. Hence, weakly coupled assimilation systems are being developed as a first step, in which the coupled model is used to compare the current state estimate with observations, but corrections to the atmosphere and ocean initial conditions are then calculated independently. In this paper we provide a comprehensive description of the different coupled assimilation methodologies in the context of four-dimensional variational assimilation (4D-Var) and use an idealised framework to assess the expected benefits of moving towards coupled data assimilation. We implement an incremental 4D-Var system within an idealised single-column atmosphere-ocean model. The system has the capability to run both strongly and weakly coupled assimilations as well as uncoupled atmosphere-only or ocean-only assimilations, thus allowing a systematic comparison of the different strategies for treating the coupled data assimilation problem. We present results from a series of identical twin experiments devised to investigate the behaviour and sensitivities of the different approaches. Overall, our study demonstrates the potential benefits that may be expected from coupled data assimilation. When compared to uncoupled initialisation, coupled assimilation is able to produce more balanced initial analysis fields, thus reducing initialisation shock and its impact on the subsequent forecast. Single-observation experiments demonstrate how coupled assimilation systems are able to pass information between the atmosphere and ocean and therefore use near-surface data to greater effect.
We show that much of this benefit may also be gained from a weakly coupled assimilation system, but that this can be sensitive to the parameters used in the assimilation.
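In the linear, single-observation case, the incremental 4D-Var analysis step reduces to minimising a quadratic cost function. The following is a minimal sketch with an invented two-variable (atmosphere, ocean) state; none of the matrices or values come from the study. It illustrates how a coupled background covariance lets a single atmosphere observation increment the ocean:

```python
import numpy as np

# Illustrative incremental 4D-Var analysis step: minimise the quadratic cost
#   J(dx) = 0.5 * dx^T B^-1 dx + 0.5 * (H dx - d)^T R^-1 (H dx - d)
# for a two-variable (atmosphere, ocean) state. All numbers are invented.
B = np.array([[1.0, 0.8],
              [0.8, 4.0]])       # background covariance with atmos-ocean coupling
R = np.array([[0.25]])           # observation-error covariance
H = np.array([[1.0, 0.0]])       # a single near-surface atmosphere observation
d = np.array([2.0])              # innovation (observation minus background)

Binv = np.linalg.inv(B)
Rinv = np.linalg.inv(R)
# Analytic minimiser of the quadratic cost function
dx = np.linalg.solve(Binv + H.T @ Rinv @ H, H.T @ Rinv @ d)
# Because B couples the two variables, the unobserved ocean
# variable also receives a non-zero increment (dx[1] != 0).
```

With the off-diagonal term of B set to zero, dx[1] would be zero: the single observation could no longer correct the ocean, which is the uncoupled behaviour the paper contrasts against.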


The topography of many floodplains in the developed world has now been surveyed with high-resolution sensors such as airborne LiDAR (Light Detection and Ranging), giving accurate Digital Elevation Models (DEMs) that facilitate accurate flood inundation modelling. This is not always the case for remote rivers in developing countries. However, the accuracy of DEMs produced for modelling studies on such rivers should be enhanced in the near future by the high-resolution TanDEM-X WorldDEM. In a parallel development, increasing use is now being made of flood extents derived from high-resolution Synthetic Aperture Radar (SAR) images for calibrating and validating flood inundation models, and for assimilating observations into them. This paper discusses an additional use of SAR flood extents, namely to improve the accuracy of the TanDEM-X DEM in the floodplain covered by the flood extents, thereby permanently improving this DEM for future flood modelling and other studies. The method is based on the fact that for larger rivers the water elevation generally changes only slowly along a reach, so that the boundary of the flood extent (the waterline) can be regarded locally as a quasi-contour. As a result, heights of adjacent pixels along a small section of waterline can be regarded as samples with a common population mean. The height of the central pixel in the section can be replaced with the average of these heights, leading to a more accurate estimate. While this will result in a reduction in the height errors along a waterline, the waterline is a linear feature in a two-dimensional space.
However, improvements to the DEM heights between adjacent pairs of waterlines can also be made, because DEM heights enclosed by the higher waterline of a pair must be no higher than the corrected heights along the higher waterline, whereas DEM heights not enclosed by the lower waterline must in general be no lower than the corrected heights along the lower waterline. In addition, DEM heights between the higher and lower waterlines can be assigned smaller errors because of the reduced errors on the corrected waterline heights. The method was tested on a section of the TanDEM-X Intermediate DEM (IDEM) covering an 11 km reach of the Warwickshire Avon, England. Flood extents from four COSMO-SkyMed images were available at various stages of a flood in November 2012, and a LiDAR DEM was available for validation. In the area covered by the flood extents, the original IDEM heights had a mean difference from the corresponding LiDAR heights of 0.5 m with a standard deviation of 2.0 m, while the corrected heights had a mean difference of 0.3 m with a standard deviation of 1.2 m. These figures show that significant reductions in IDEM height bias and error can be made using the method, with the corrected error being only 60% of the original. Even if only a single SAR image obtained near the peak of the flood was used, the corrected error was only 66% of the original. The method should also be capable of improving the final TanDEM-X DEM and other DEMs, and may also be of use with data from the SWOT (Surface Water and Ocean Topography) satellite.
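The core of the waterline correction, replacing each height by the mean over a short section of the quasi-contour, can be sketched as follows. The window size and heights are invented, and the real method additionally applies the inter-waterline constraints described above:

```python
# Sketch of the waterline averaging idea: along a short section of waterline
# the water surface is treated as a quasi-contour, so each pixel height is
# replaced by the mean of its neighbours within a window.
def correct_waterline(heights, half_window=2):
    n = len(heights)
    corrected = []
    for i in range(n):
        lo = max(0, i - half_window)
        hi = min(n, i + half_window + 1)
        section = heights[lo:hi]            # samples sharing a population mean
        corrected.append(sum(section) / len(section))
    return corrected

noisy = [10.2, 9.6, 10.5, 9.9, 10.3, 9.7, 10.1]  # invented DEM heights (m)
corrected = correct_waterline(noisy)
# The spread of the corrected heights is much smaller than the original spread.
```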


The concentrations of sulfate, black carbon (BC) and other aerosols in the Arctic are characterized by high values in late winter and spring (so-called Arctic Haze) and low values in summer. Models have long been struggling to capture this seasonality and especially the high concentrations associated with Arctic Haze. In this study, we evaluate sulfate and BC concentrations from eleven different models driven with the same emission inventory against a comprehensive pan-Arctic measurement data set over a time period of 2 years (2008–2009). The set of models consisted of one Lagrangian particle dispersion model, four chemistry transport models (CTMs), one atmospheric chemistry-weather forecast model and five chemistry climate models (CCMs), of which two were nudged to meteorological analyses and three were running freely. The measurement data set consisted of surface measurements of equivalent BC (eBC) from five stations (Alert, Barrow, Pallas, Tiksi and Zeppelin), elemental carbon (EC) from Station Nord and Alert and aircraft measurements of refractory BC (rBC) from six different campaigns. We find that the models generally captured the measured eBC or rBC and sulfate concentrations quite well relative to previous model intercomparisons. However, the aerosol seasonality at the surface is still too weak in most models. Concentrations of eBC and sulfate averaged over three surface sites are underestimated in winter/spring in all but one model (model means for January–March underestimated by 59 and 37 % for BC and sulfate, respectively), whereas concentrations in summer are overestimated in the model mean (by 88 and 44 % for July–September), but with overestimates as well as underestimates present in individual models. The most pronounced eBC underestimates, not included in the above multi-site average, are found for the station Tiksi in Siberia, where the measured annual mean eBC concentration is 3 times higher than the average annual mean for all other stations.
This suggests an underestimate of BC sources in Russia in the emission inventory used. Based on the campaign data, biomass burning was identified as another cause of the modeling problems. For sulfate, very large differences were found in the model ensemble, with an apparent anti-correlation between modeled surface concentrations and total atmospheric columns. There is a strong correlation between observed sulfate and eBC concentrations with consistent sulfate/eBC slopes found for all Arctic stations, indicating that the sources contributing to sulfate and BC are similar throughout the Arctic and that the aerosols are internally mixed and undergo similar removal. However, only three models reproduced this finding, whereas sulfate and BC are weakly correlated in the other models. Overall, no class of models (e.g., CTMs, CCMs) performed better than the others and differences are independent of model resolution.


Initializing the ocean for decadal predictability studies is a challenge, as it requires reconstructing the little-observed subsurface trajectory of ocean variability. In this study we explore to what extent surface nudging using well-observed sea surface temperature (SST) can reconstruct the deeper ocean variations for the 1949–2005 period. An ensemble is made with a nudged version of the IPSLCM5A model and compared to ocean reanalyses and reconstructed datasets. The SST is restored to observations using a physically based relaxation coefficient, in contrast to earlier studies, which use a much larger value. The assessment is restricted to the regions where the ocean reanalyses agree, i.e. in the upper 500 m of the ocean, although this can be latitude and basin dependent. Significant reconstruction of the subsurface is achieved in specific regions, namely regions of subduction in the subtropical Atlantic, below the thermocline in the equatorial Pacific and, in some cases, in the North Atlantic deep convection regions. Beyond the mean correlations, ocean integrals are used to explore the time evolution of the correlation over 20-year windows. Classical fixed-depth heat content diagnostics do not exhibit any significant reconstruction between the different existing observation-based references and can therefore not be used to assess global average time-varying correlations in the nudged simulations. Using the physically based average temperature above an isotherm (14 °C) alleviates this issue in the tropics and subtropics and shows significant reconstruction of these quantities in the nudged simulations for several decades. This skill is attributed to the wind stress reconstruction in the tropics, as already demonstrated in a perfect model study using the same model. Thus, we also show here the robustness of this result in a historical and observational context.
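Surface nudging of this kind is a Newtonian relaxation of model SST toward observations. A minimal sketch follows, assuming an illustrative heat-flux feedback of 40 W m^-2 K^-1 over a 50 m mixed layer; these are plausible "physically based" magnitudes but not necessarily the coefficient used with IPSLCM5A:

```python
# Newtonian SST relaxation ("nudging"):  dT/dt = -(T - T_obs) / tau
# tau derived from an assumed heat-flux feedback lambda over a mixed layer.
RHO, CP = 1025.0, 4000.0        # seawater density (kg/m3) and heat capacity (J/kg/K)
H_ML, LAMBDA = 50.0, 40.0       # mixed-layer depth (m), feedback (W/m2/K), illustrative
TAU = RHO * CP * H_ML / LAMBDA  # relaxation timescale in seconds (~59 days here)

def nudge(T, T_obs, dt, n_steps):
    # Forward-Euler integration of the relaxation equation
    for _ in range(n_steps):
        T += -(T - T_obs) / TAU * dt
    return T

# One model day of nudging with a 1-hour step: T creeps slowly toward T_obs
T_final = nudge(T=20.0, T_obs=18.0, dt=3600.0, n_steps=24)
```

The long timescale is the point of a physically based coefficient: the nudging corrects the slow drift of the model without overwriting its internal variability, unlike the much stronger relaxation of earlier studies.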


The climates of the mid-Holocene (MH), 6,000 years ago, and of the Last Glacial Maximum (LGM), 21,000 years ago, have been extensively simulated, in particular in the framework of the Palaeoclimate Modelling Intercomparison Project. These periods are well documented by paleo-records, which can be used for evaluating model results for climates different from the present one. Here, we present new simulations of the MH and the LGM climates obtained with the IPSL_CM5A model and compare them to our previous results obtained with the IPSL_CM4 model. Compared to IPSL_CM4, IPSL_CM5A includes two new features: the interactive representation of plant phenology and marine biogeochemistry. However, one of the most important differences between these models is the latitudinal resolution and vertical domain of their atmospheric component, which have been improved in IPSL_CM5A and result in a better representation of the mid-latitude jet streams. The representation of the Asian monsoon is also substantially improved. The global average mean annual temperature simulated for the pre-industrial (PI) period is colder in IPSL_CM5A than in IPSL_CM4, but their climate sensitivity to a CO2 doubling is similar. Here we show that these differences in the simulated PI climate have an impact on the simulated MH and LGM climatic anomalies. The larger cooling response to LGM boundary conditions in IPSL_CM5A appears to be mainly due to differences between the PMIP3 and PMIP2 boundary conditions, as shown by a shortwave radiative forcing/feedback analysis based on a simplified perturbation method. It is found that the sensitivity computed from the LGM climate is lower than that computed from 2 × CO2 simulations, confirming previous studies based on different models. For the MH, the Asian monsoon, stronger in the IPSL_CM5A PI simulation, is also more sensitive to the insolation changes. The African monsoon is also further amplified in IPSL_CM5A due to the impact of the interactive phenology.
Finally, the changes in variability for both models and for the MH and LGM are presented, taking the example of the El Niño-Southern Oscillation (ENSO), which is very different in the PI simulations. ENSO variability is damped in both model versions at the MH, whereas inconsistent responses are found between the two versions for the LGM. Part 2 of this paper examines whether these differences between IPSL_CM4 and IPSL_CM5A can be distinguished when comparing those results to palaeo-climatic reconstructions and investigates new approaches for model-data comparisons made possible by the inclusion of new components in IPSL_CM5A.


This study explores the decadal potential predictability of the Atlantic Meridional Overturning Circulation (AMOC) as represented in the IPSL-CM5A-LR model, along with the predictability of associated oceanic and atmospheric fields. Using a 1000-year control run, we analyze the prognostic potential predictability (PPP) of the AMOC through ensembles of simulations with perturbed initial conditions. Based on a measure of the ensemble spread, the modelled AMOC has an average predictive skill of 8 years, with some degree of dependence on the AMOC initial state. Diagnostic potential predictability of surface temperature and precipitation is also identified in the control run and compared to the PPP. Both approaches clearly bring out the same regions exhibiting the highest predictive skill. Generally, surface temperature has the highest skill, up to 2 decades, in the far North Atlantic Ocean. There are also weak signals over a few oceanic areas in the tropics and subtropics. Predictability over land is restricted to the coastal areas bordering oceanic predictable regions. Potential predictability at interannual and longer timescales is largely absent for precipitation, except for weak signals identified mainly in the Nordic Seas. Regions of weak signals show some dependence on the AMOC initial state. All the identified regions are closely linked to decadal AMOC fluctuations, suggesting that the potential predictability of climate arises from the mechanisms controlling these fluctuations. Evidence for dependence on the AMOC initial state also suggests that studying skill in case studies may prove more useful for understanding predictability mechanisms than computing average skill from numerous start dates.
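A common way to quantify prognostic potential predictability is to compare the ensemble spread with the control run's climatological spread; skill is considered lost once the two become comparable. A toy sketch of that spread measure, with invented variances and threshold:

```python
# PPP-style spread measure: PPP = 1 - (ensemble variance / climatological
# variance); predictive skill is deemed lost once PPP falls below a small
# threshold. All numbers here are illustrative.
def ppp(ensemble_var, clim_var):
    return 1.0 - ensemble_var / clim_var

clim_var = 1.0                               # control-run (climatological) variance
ens_var = [0.2, 0.4, 0.6, 0.8, 0.95, 1.0]    # ensemble spread growing with lead time
skill = [ppp(v, clim_var) for v in ens_var]

# Predictive horizon: first lead index at which PPP drops below the threshold
horizon = next(i for i, s in enumerate(skill) if s < 0.1)
```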


Introducing a parameterization of the interactions between wind-driven snow depth changes and melt pond evolution allows us to improve large-scale models. In this paper we have implemented an explicit melt pond scheme and, for the first time, a wind-dependent snow redistribution model and new snow thermophysics into a coupled ocean–sea ice model. The comparison of long-term mean statistics of melt pond fractions against observations demonstrates realistic melt pond cover on average over Arctic sea ice, but a clear underestimation of the pond coverage on the multi-year ice (MYI) of the western Arctic Ocean. The latter shortcoming originates from the concealing effect of persistent snow on forming ponds, impeding their growth. Analyzing a second simulation with intensified snow drift enables the identification of two distinct modes of sensitivity in the melt pond formation process. First, the larger proportion of wind-transported snow that is lost in leads directly curtails the late-spring snow volume on sea ice and facilitates the early development of melt ponds on MYI. In contrast, a combination of higher air temperatures and thinner snow prior to the onset of melting sometimes makes the snow cover switch to a regime where it melts entirely and rapidly. In the latter situation, seemingly more frequent on first-year ice (FYI), a smaller snow volume directly relates to a reduced melt pond cover. Notwithstanding, changes in snow and water accumulation on seasonal sea ice are naturally limited, which lessens the impacts of wind-blown snow redistribution on FYI compared to those on MYI. At the basin scale, the overall increased melt pond cover results in decreased ice volume via the ice-albedo feedback in summer, which is experienced almost exclusively by MYI.


The General Ocean Turbulence Model (GOTM) is applied to the diagnostic turbulence field of the mixing layer (ML) over the equatorial region of the Atlantic Ocean. Two situations were investigated: the rainy and dry seasons, defined, respectively, by the presence of the intertropical convergence zone and by its northward displacement. Simulations were carried out using data from a PIRATA buoy located on the equator at 23 degrees W to compute surface turbulent fluxes and from the NASA/GEWEX Surface Radiation Budget Project to close the surface radiation balance. A data assimilation scheme was used as a surrogate for the physical effects not present in the one-dimensional model. In the rainy season, results show that the ML is shallower due to the weaker surface stress and stronger stable stratification; the maximum ML depth reached during this season is around 15 m, with an average diurnal variation of 7 m. In the dry season, the stronger surface stress and the enhanced surface heat balance components enable higher mechanical production of turbulent kinetic energy and, at night, buoyancy also acts to enhance turbulence in the first few meters of depth, characterizing a deeper ML that reaches around 60 m and presents an average diurnal variation of 30 m.


Knowing the best 1D model of the crustal and upper mantle structure is useful not only for routine hypocenter determination, but also for linearized joint inversions of hypocenters and 3D crustal structure, where a good choice of the initial model can be very important. Here, we tested the combination of a simple genetic algorithm (GA) inversion with the widely used HYPO71 program to find the best three-layer model (upper crust, lower crust, and upper mantle) by minimizing the overall P- and S-arrival residuals, using local and regional earthquakes in two areas of the Brazilian shield. Results from the Tocantins Province (Central Brazil) and the southern border of the Sao Francisco craton (SE Brazil) indicated average crustal thicknesses of 38 and 43 km, respectively, consistent with previous estimates from receiver functions and seismic refraction lines. The GA + HYPO71 inversion produced correct Vp/Vs ratios (1.73 and 1.71, respectively), as expected from Wadati diagrams. Tests with synthetic data showed that the method is robust for the crustal thickness, Pn velocity, and Vp/Vs ratio when using events with distances up to about 400 km, despite the small number of events available (7 and 22, respectively). The velocities of the upper and lower crusts, however, are less well constrained. Interestingly, in the Tocantins Province, the GA + HYPO71 inversion showed a secondary solution (local minimum) for the average crustal thickness, besides the global minimum solution, which was caused by the existence of two distinct domains in Central Brazil with very different crustal thicknesses.
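The inversion idea can be illustrated with synthetic travel times for a single layer over a half-space. Here an exhaustive grid search stands in for the genetic algorithm, the geometry is simpler than the paper's three-layer model, and all values are invented:

```python
import math
from itertools import product

# Grid search (in place of a GA) for crustal thickness h and velocities
# v1 (crust) / v2 (mantle) that minimise P travel-time residuals for a
# flat layer over a half-space (standard refraction formulas).
def travel_times(x, h, v1, v2):
    direct = x / v1                                                   # Pg
    head = x / v2 + 2.0 * h * math.sqrt(1.0 / v1**2 - 1.0 / v2**2)    # Pn
    return direct, head

TRUE_H, TRUE_V1, TRUE_V2 = 40.0, 6.2, 8.1          # synthetic "true" model
dists = [250.0, 300.0, 350.0, 400.0]               # epicentral distances (km)
obs = [travel_times(x, TRUE_H, TRUE_V1, TRUE_V2) for x in dists]

def misfit(h, v1, v2):
    err = 0.0
    for x, (obs_d, obs_p) in zip(dists, obs):
        d, p = travel_times(x, h, v1, v2)
        err += (d - obs_d) ** 2 + (p - obs_p) ** 2
    return err

grid_h = [30.0 + i for i in range(21)]             # 30..50 km
grid_v1 = [5.8 + 0.1 * i for i in range(9)]        # 5.8..6.6 km/s
grid_v2 = [7.8 + 0.1 * i for i in range(7)]        # 7.8..8.4 km/s
best = min(product(grid_h, grid_v1, grid_v2), key=lambda m: misfit(*m))
```

Using both direct and head-wave times is what resolves all three parameters; with Pn alone, thickness and crustal velocity trade off, which is one way secondary minima of the kind reported above can arise.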


Reactive oxygen species (ROS) appear to be involved in several neurodegenerative disorders. We tested the hypothesis that oxidative stress could have a role in the hippocampal neurodegeneration observed in temporal lobe epilepsy induced by pilocarpine. We first determined the spatio-temporal pattern of ROS generation, by means of detection with dihydroethidium oxidation, in the CA1 and CA3 areas and the dentate gyrus of the dorsal hippocampus during status epilepticus induced by pilocarpine. Fluoro-Jade B assays were also performed to detect degenerating neurons. ROS generation was increased in CA1, CA3 and the dentate gyrus after pilocarpine-induced seizures, which was accompanied by marked cell death. Treatment of rats with a NADPH oxidase inhibitor (apocynin) for 7 days prior to induction of status epilepticus was effective in decreasing both ROS production (by an average of 20%) and neurodegeneration (by an average of 61%). These results suggest an involvement of ROS generated by NADPH oxidase in neuronal death in the pilocarpine model of epilepsy.


Purpose - The purpose of this paper is to develop a novel unstructured simulation approach for injection molding processes described by the Hele-Shaw model. Design/methodology/approach - The scheme involves dual dynamic meshes with active and inactive cells determined from an initial background pointset. The quasi-static pressure solution in each timestep for this evolving unstructured mesh system is approximated using a control volume finite element method formulation coupled to a corresponding modified volume of fluid method. The flow is considered to be isothermal and non-Newtonian. Findings - Supporting numerical tests and performance studies for polystyrene described by Carreau, Cross, Ellis and Power-law fluid models are conducted. Results for the present method are shown to be comparable to those from other methods for both Newtonian fluid and polystyrene fluid injected in different mold geometries. Research limitations/implications - With respect to the methodology, the background pointset infers a mesh that is dynamically reconstructed here, and there are a number of efficiency issues and improvements that would be relevant to industrial applications. For instance, one can use the pointset to construct special bases and invoke a so-called "meshless" scheme using the basis. This would require some interesting strategies to deal with the dynamic point enrichment of the moving front that could benefit from the present front treatment strategy. There are also issues related to mass conservation and fill-time errors that might be addressed by introducing suitable projections. The general question of "rate of convergence" of these schemes requires analysis. Numerical results here suggest first-order accuracy and are consistent with the approximations made, but theoretical results are not available yet for these methods.
Originality/value - This novel unstructured simulation approach involves dual meshes with active and inactive cells determined from an initial background pointset: local active dual patches are constructed "on-the-fly" for each "active point" to form a dynamic virtual mesh of active elements that evolves with the moving interface.
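The generalized-Newtonian fluid models named in the findings have standard closed forms. A sketch of two of them follows, with illustrative parameters rather than the polystyrene fits used in the study:

```python
# Carreau model: eta = eta_inf + (eta0 - eta_inf) * [1 + (lam*g)^2]^((n-1)/2)
def carreau(shear_rate, eta0, eta_inf, lam, n):
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

# Cross model: eta = eta_inf + (eta0 - eta_inf) / [1 + (lam*g)^m]
def cross(shear_rate, eta0, eta_inf, lam, m):
    return eta_inf + (eta0 - eta_inf) / (1.0 + (lam * shear_rate) ** m)

# Shear-thinning behaviour (n < 1): viscosity falls as shear rate grows.
# Parameter values are invented for illustration.
eta_low = carreau(0.0, eta0=1000.0, eta_inf=1.0, lam=0.5, n=0.4)
eta_high = carreau(1000.0, eta0=1000.0, eta_inf=1.0, lam=0.5, n=0.4)
```

In a Hele-Shaw pressure solver, a viscosity function like this enters the gap-averaged fluidity, which makes the pressure equation nonlinear and is why the choice among Carreau, Cross, Ellis and Power-law fits matters for the fill pattern.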


Deviations from the average can provide valuable insights about the organization of natural systems. The present article extends this important principle to the systematic identification and analysis of singular motifs in complex networks. Six measurements quantifying different and complementary features of the connectivity around each node of a network were calculated, and multivariate statistical methods were applied to identify singular nodes. The potential of the presented concepts and methodology was illustrated with respect to different types of complex real-world networks, namely the US air transportation network, the protein-protein interaction network of the yeast Saccharomyces cerevisiae and the Roget thesaurus network. The obtained singular motifs possessed unique functional roles in the networks. Three classic theoretical network models were also investigated, with the Barabasi-Albert model resulting in singular motifs corresponding to hubs, confirming the potential of the approach. Interestingly, the number of different types of singular node motifs, as well as the number of their instances, was found to be considerably higher in the real-world networks than in any of the benchmark networks.
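The identification step can be caricatured with two per-node measurements instead of six: standardise each measurement and flag the node farthest from the multivariate average. The paper's actual measurements and statistical machinery are richer than this sketch, and the data below are invented:

```python
import statistics

# Toy "singularity" detection: z-score each measurement column, then rank
# nodes by Euclidean distance from the origin in z-score space.
nodes = {
    "a": (2.0, 0.10), "b": (2.2, 0.10), "c": (1.9, 0.12),
    "d": (2.1, 0.11), "hub": (9.0, 0.50),   # (degree, clustering), invented
}

def zscores(values):
    mu = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [(v - mu) / sd for v in values]

names = list(nodes)
columns = list(zip(*nodes.values()))                  # one tuple per measurement
z_rows = list(zip(*(zscores(col) for col in columns)))
distance = {n: sum(z * z for z in zs) ** 0.5 for n, zs in zip(names, z_rows)}
singular = max(distance, key=distance.get)            # the most deviant node
```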


Steatosis is diagnosed on the basis of the macroscopic aspect of the liver evaluated by the surgeon at the time of organ extraction or by means of a frozen biopsy. In the present study, the applicability of laser-induced fluorescence (LIF) spectroscopy was investigated as a method for the diagnosis of different degrees of steatosis experimentally induced in rats. Rats received a high-lipid diet for different periods of time. The animals were divided into groups according to the degree of induced steatosis diagnosed by histology. The concentration of fat in the liver was correlated with LIF by means of the steatosis fluorescence factor (SFF). The histological classification according to liver fat concentration was: Severe Steatosis, Moderate Steatosis, Mild Steatosis and Control (no liver steatosis). Fluorescence intensity could be directly correlated with fat content. It was possible to estimate the average fluorescence intensity by means of distinct confidence intervals (P=95%) for each steatosis group. SFF was significantly higher in the Severe Steatosis group (P < 0.001) compared with the Moderate Steatosis, Mild Steatosis and Control groups. The various degrees of steatosis could be directly correlated with SFF. LIF spectroscopy proved to be a method capable of identifying the degree of hepatic steatosis in this animal model, and has the potential for clinical application in the non-invasive evaluation of the degree of steatosis.


Parkinson's disease (PD) is a degenerative illness whose cardinal symptoms include rigidity, tremor, and slowness of movement. In addition to its widely recognized effects, PD can have a profound effect on speech and voice. The speech symptoms most commonly demonstrated by patients with PD are reduced vocal loudness, monopitch, disruptions of voice quality, and an abnormally fast rate of speech. This cluster of speech symptoms is often termed hypokinetic dysarthria. The disease can be difficult to diagnose accurately, especially in its early stages; for this reason, automatic techniques based on artificial intelligence should increase diagnostic accuracy and help doctors make better decisions. The aim of this thesis work is to predict PD based on audio files collected from various patients. The audio files are preprocessed in order to obtain the features; the preprocessed data contain 23 attributes and 195 instances. On average there are six voice recordings per person, so the number of instances can be reduced by using a data compression technique such as the Discrete Cosine Transform (DCT). After data compression, attribute selection is performed using several built-in WEKA methods such as ChiSquared, GainRatio and InfoGain; after identifying the important attributes, we evaluate them one by one using stepwise regression. Based on the selected attributes, we process the data in WEKA using a cost-sensitive classifier with various algorithms such as MultiPass LVQ, Logistic Model Tree (LMT) and K-Star. The classification results average about 80%; using these features, approximately 95% classification of PD is achieved. This shows that, using the audio dataset, PD can be predicted with a higher level of accuracy.
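The DCT-based instance reduction can be sketched for a single feature: the six per-person recordings are transformed with a DCT-II and only the leading coefficients are kept. The values and the number of retained coefficients below are invented:

```python
import math

# DCT-II of a short sequence: X_k = sum_i x_i * cos(pi * k * (2i + 1) / (2n)).
# Keeping only the low-frequency coefficients compresses the six per-person
# recordings into a smaller, smoother representation.
def dct2(xs):
    n = len(xs)
    return [sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i, x in enumerate(xs))
            for k in range(n)]

recordings = [119.0, 122.0, 121.0, 118.0, 120.0, 120.0]  # e.g. mean F0 per recording
coeffs = dct2(recordings)
compressed = coeffs[:2]   # keep only the lowest-frequency content
```

The zeroth coefficient is just the sum of the samples, so the compressed representation retains the per-person average while discarding recording-to-recording jitter.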


This paper studies a special class of vector smooth-transition autoregressive (VSTAR) models that contains common nonlinear features (CNFs), for which we propose a triangular representation and develop a procedure for testing CNFs in a VSTAR model. We first test a unit root against a stable STAR process for each individual time series and then, if the unit root is rejected in the first step, examine whether CNFs exist in the system using a Lagrange multiplier (LM) test. The LM test has a standard chi-squared asymptotic distribution. The critical values of our unit root tests and the small-sample properties of the F form of our LM test are studied by Monte Carlo simulations. We illustrate how to test and model CNFs using monthly growth of consumption and income data for the United States (1985:1 to 2011:11).
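The smooth-transition mechanism underlying (V)STAR models is the logistic transition function G(s; gamma, c) = 1 / (1 + exp(-gamma * (s - c))), which blends two autoregressive regimes. A minimal univariate sketch with invented coefficients:

```python
import math

# Logistic transition function of a smooth transition autoregression
def transition(s, gamma, c):
    return 1.0 / (1.0 + math.exp(-gamma * (s - c)))

# One step of a univariate STAR(1): a weighted blend of two AR(1) regimes,
# with the weight given by the transition function. Coefficients invented.
def star1_step(y_prev, gamma=5.0, c=0.0, phi_low=0.9, phi_high=0.2):
    g = transition(y_prev, gamma, c)
    return (1.0 - g) * phi_low * y_prev + g * phi_high * y_prev

mid = transition(0.0, gamma=5.0, c=0.0)   # exactly between the two regimes
```

As gamma grows the transition sharpens toward a threshold autoregression; as it shrinks the model approaches a linear AR, which is why the transition parameters matter for the unit-root and LM tests described above.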