926 results for Model-Data Integration and Data Assimilation
Abstract:
As part of a large European coastal operational oceanography project (ECOOP), we have developed a web portal for the display and comparison of model and in situ marine data. The distributed model and in situ datasets are accessed via an Open Geospatial Consortium Web Map Service (WMS) and Web Feature Service (WFS), respectively. These services were developed independently and readily integrated for the purposes of the ECOOP project, illustrating the ease of interoperability that results from adherence to international standards. The key feature of the portal is the ability to display co-plotted time series of the in situ and model data and to quantify the misfits between the two. By using standards-based web technology we allow the user to quickly and easily explore over twenty model data feeds and compare these with dozens of in situ data feeds, without being concerned with the low-level details of differing file formats or the physical location of the data. Scientific and operational benefits of this work include model validation, quality control of observations, data assimilation and decision support in near real time. In these areas it is essential to be able to bring together different data streams from often disparate locations.
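As a rough illustration of the misfit quantification the portal performs, the sketch below compares a model time series against a co-located in situ series once both have been fetched (e.g. via the WMS and WFS services) and interpolated onto common timestamps. All names and numbers are illustrative, not the project's actual code.

```python
# Minimal sketch of misfit quantification between co-located series.
import numpy as np

def misfit_stats(model: np.ndarray, insitu: np.ndarray) -> dict:
    """Basic misfit statistics between co-located time series."""
    diff = model - insitu
    return {
        "bias": float(np.mean(diff)),
        "rmse": float(np.sqrt(np.mean(diff ** 2))),
        "correlation": float(np.corrcoef(model, insitu)[0, 1]),
    }

# Example: compare a model temperature series against a mooring record.
t = np.linspace(0, 10, 200)
insitu = 15 + np.sin(t) + 0.3 * np.random.randn(t.size)  # observed SST
model = 15.2 + np.sin(t)                                  # model SST
print(misfit_stats(model, insitu))
```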
Abstract:
The Short-term Water Information and Forecasting Tools (SWIFT) is a suite of tools for flood and short-term streamflow forecasting, consisting of a collection of hydrologic model components and utilities. Catchments are modelled using conceptual subareas and a node-link structure for channel routing. The tools comprise modules for calibration, model state updating, output error correction, ensemble runs and data assimilation. Given the combinatorial nature of the modelling experiments and the sub-daily time steps typically used for simulations, the volume of model configurations and time series data is substantial and its management is not trivial. SWIFT is currently used mostly for research purposes but has also been used operationally, with intersecting but significantly different requirements. Early versions of SWIFT used mostly ad hoc text files handled via Fortran code, with limited use of netCDF for time series data. The configuration and data handling modules have since been redesigned. The model configuration now follows a design where the data model is decoupled from the on-disk persistence mechanism. For research purposes the preferred on-disk format is JSON, to leverage the numerous software libraries available in a variety of languages, while retaining the legacy option of custom tab-separated text formats when a researcher prefers that access arrangement. By decoupling the data model from its persistence, it becomes much easier to swap in, for instance, relational databases that provide stricter provenance and audit-trail capabilities in an operational flood forecasting context. For the time series data, given the volume and required throughput, text-based formats are usually inadequate; a schema derived from the CF conventions has been designed to handle time series efficiently for SWIFT.
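A minimal sketch of the decoupling described above, assuming a simple subarea configuration record: the in-memory data model is a plain object that knows nothing about persistence, and JSON or legacy tab-separated writers are swapped in behind the same interface (a relational-database store could be added the same way). Class and field names are illustrative, not SWIFT's actual schema.

```python
# Data model decoupled from on-disk persistence: one dataclass, two stores.
import json
from dataclasses import dataclass, asdict

@dataclass
class SubareaConfig:
    name: str
    area_km2: float
    routing_node: str

class JsonStore:
    """Research-friendly JSON persistence."""
    def save(self, cfg: SubareaConfig, path: str) -> None:
        with open(path, "w") as f:
            json.dump(asdict(cfg), f, indent=2)

class TsvStore:
    """Legacy-style tab-separated persistence."""
    def save(self, cfg: SubareaConfig, path: str) -> None:
        d = asdict(cfg)
        with open(path, "w") as f:
            f.write("\t".join(d.keys()) + "\n")
            f.write("\t".join(str(v) for v in d.values()) + "\n")

cfg = SubareaConfig(name="upper_catchment", area_km2=42.5, routing_node="node_3")
JsonStore().save(cfg, "subarea.json")  # preferred research format
TsvStore().save(cfg, "subarea.tsv")    # legacy option
```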
Abstract:
In different regions of Brazil, population growth and economic development can degrade water quality, compromising watershed health and human water supply. Because of its ability to combine spatial and temporal data in the same environment and to create water resources management (WRM) models, a Geographical Information System (GIS) is a powerful tool for managing water resources, preventing floods and estimating water supply. This paper discusses the integration between GIS and hydrological models and presents a case study of the upper section of the Paraíba do Sul Basin (São Paulo State portion), situated in the Southeast of Brazil. The case study has a database suitable for the basin's dimensions, including digitized topographic maps at a 1:50,000 scale. From an ArcGIS®/ArcHydro Framework Data Model, a geometric network was created to produce different raster products. The first grid derived from the digital elevation model (DEM) is the flow direction map, followed by the flow accumulation, stream and catchment maps. The next steps in this research are to include the different multipurpose reservoirs situated along the Paraíba do Sul River and to incorporate rainfall time series data in ArcHydro, in order to build a hydrologic data model within a GIS environment and produce a comprehensive spatio-temporal model.
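For readers unfamiliar with this raster processing chain, the sketch below shows a simplified version of the first step, the D8 flow-direction grid, in which each DEM cell drains towards its steepest-descent neighbour. It omits the pit filling, flat resolution and diagonal distance weighting that ArcHydro handles; it is illustrative only.

```python
# Simplified D8 flow direction: index (0..7) of the steepest-descent neighbour.
import numpy as np

def d8_flow_direction(dem: np.ndarray) -> np.ndarray:
    """Per interior cell, the neighbour index 0..7 of steepest descent
    (-1 where no downslope neighbour exists). Drops are not distance-weighted."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    rows, cols = dem.shape
    direction = np.full((rows, cols), -1, dtype=int)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            drops = [dem[r, c] - dem[r + dr, c + dc] for dr, dc in offsets]
            if max(drops) > 0:
                direction[r, c] = int(np.argmax(drops))
    return direction

dem = np.array([[5, 5, 5, 5],
                [5, 4, 3, 5],
                [5, 3, 2, 5],
                [5, 5, 1, 5]], dtype=float)
print(d8_flow_direction(dem))
```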
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-04
Abstract:
Probabilistic inversion methods based on Markov chain Monte Carlo (MCMC) simulation are well suited to quantify parameter and model uncertainty of nonlinear inverse problems. Yet, application of such methods to CPU-intensive forward models can be a daunting task, particularly if the parameter space is high dimensional. Here, we present a 2-D pixel-based MCMC inversion of plane-wave electromagnetic (EM) data. Using synthetic data, we investigate how model parameter uncertainty depends on model structure constraints using different norms of the likelihood function and the model constraints, and study the added benefits of joint inversion of EM and electrical resistivity tomography (ERT) data. Our results demonstrate that model structure constraints are necessary to stabilize the MCMC inversion results of a highly discretized model. These constraints decrease model parameter uncertainty and facilitate model interpretation. A drawback is that these constraints may lead to posterior distributions that do not fully include the true underlying model, because some of its features exhibit a low sensitivity to the EM data, and hence are difficult to resolve. This problem can be partly mitigated if the plane-wave EM data is augmented with ERT observations. The hierarchical Bayesian inverse formulation introduced and used herein is able to successfully recover the probabilistic properties of the measurement data errors and a model regularization weight. Application of the proposed inversion methodology to field data from an aquifer demonstrates that the posterior mean model realization is very similar to that derived from a deterministic inversion with similar model constraints.
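To make the approach concrete, here is a toy random-walk Metropolis sampler in which the log-posterior combines an L2 data misfit with an L1 first-difference model constraint, whose weight plays the role of the regularization weight inferred hierarchically in the study. A small 1-D linear problem stands in for the 2-D pixel-based EM inversion; all settings are illustrative.

```python
# Toy Metropolis sampler with an explicit model-structure constraint.
import numpy as np

rng = np.random.default_rng(0)
m_true = np.concatenate([np.full(10, 1.0), np.full(10, 3.0)])  # blocky "resistivity" model
G = rng.normal(size=(15, 20))                                  # linearized forward operator
d_obs = G @ m_true + 0.1 * rng.normal(size=15)                 # noisy synthetic data

def log_posterior(m, sigma=0.1, mu=50.0):
    misfit = -0.5 * np.sum(((G @ m - d_obs) / sigma) ** 2)  # L2 data norm
    smooth = -mu * np.sum(np.abs(np.diff(m)))               # L1 structure constraint
    return misfit + smooth

m = np.ones(20)
logp = log_posterior(m)
samples = []
for _ in range(20000):
    prop = m + 0.05 * rng.normal(size=20)   # random-walk proposal
    logp_prop = log_posterior(prop)
    if np.log(rng.random()) < logp_prop - logp:
        m, logp = prop, logp_prop
    samples.append(m.copy())
post_mean = np.mean(samples[10000:], axis=0)  # discard burn-in
print(np.round(post_mean, 2))
```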
Abstract:
This thesis describes the creation of a pipework data structure for design system integration. The work was carried out at a pulp and paper plant delivery company, with global engineering network operations in mind. A use case running from process design to 3D pipework design is introduced, including the influence of subcontracting engineering offices. A company data element list was gathered through key-person interviews, and the results were processed into a pipework data element list. Inter-company cooperation took place within a standardization association, where a common standard for pipework data elements was found. As a result, an inter-company pipework data element list is introduced. Further usage and development of the list, and its relations to design software vendors, are evaluated.
Abstract:
The present study was designed to compare the homeostasis model assessment (HOMA) and the quantitative insulin sensitivity check index (QUICKI) with data from forearm metabolic studies of healthy individuals and of subjects in various pathological states. Fifty-five healthy individuals and 112 patients in various pathological states, including type 2 diabetes mellitus, essential hypertension and others, were studied after an overnight fast and for 3 h after ingestion of 75 g of glucose, by HOMA, QUICKI and the forearm technique to estimate muscle uptake of glucose combined with indirect calorimetry (oxidative and non-oxidative glucose metabolism). The patients showed increased HOMA (1.88 ± 0.14 vs 1.13 ± 0.10 pmol/l x mmol/l) and insulin/glucose (I/G) index (1,058.9 ± 340.9 vs 518.6 ± 70.7 pmol/l x (mg/100 ml forearm)−1), and decreased QUICKI (0.36 ± 0.004 vs 0.39 ± 0.006 (µU/ml + mg/dl)−1) compared with the healthy individuals. Analysis of the data for the group as a whole (patients and healthy individuals) showed that the estimate of insulin resistance by HOMA was correlated with data obtained in the forearm metabolic studies (glucose uptake: r = -0.16, P = 0.04; non-oxidative glucose metabolism: r = -0.20, P = 0.01; and I/G index: r = 0.17, P = 0.03). The comparison of QUICKI with data from the forearm metabolic studies showed significant correlations between QUICKI and non-oxidative glucose metabolism (r = 0.17, P = 0.03) and the I/G index (r = -0.37, P < 0.0001). HOMA and QUICKI are thus good estimates of insulin sensitivity, consistent with data derived from forearm metabolic studies involving direct measurements of insulin action on muscle glucose metabolism.
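For reference, the standard published definitions of the two indices (not spelled out in the abstract) are easy to state: HOMA multiplies fasting insulin and glucose, while QUICKI inverts the sum of their base-10 logarithms.

```python
# Standard definitions of HOMA-IR and QUICKI from fasting values.
import math

def homa_ir(insulin_uU_ml: float, glucose_mmol_l: float) -> float:
    """HOMA insulin resistance index: insulin (uU/ml) x glucose (mmol/l) / 22.5."""
    return insulin_uU_ml * glucose_mmol_l / 22.5

def quicki(insulin_uU_ml: float, glucose_mg_dl: float) -> float:
    """QUICKI: 1 / (log10 insulin (uU/ml) + log10 glucose (mg/dl))."""
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

# Example with typical fasting values:
print(homa_ir(8.0, 5.0))   # ~1.78
print(quicki(8.0, 90.0))   # ~0.35, near the group means reported above
```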
Abstract:
Corporations practice company acquisitions in order to create shareholder value. During the last few decades, companies in emerging markets have become active in the acquisition business, and during the last decade large and significant acquisitions have occurred especially in the automotive industry. As domestic markets have become too competitive and companies lack required capabilities, they seek to expand into Western markets by attaining valuable assets through acquisitions of developed-country corporations. This study discusses the issues and characteristics of these acquisitions through case studies. The purpose of this study was to identify the acquisition motives and the strategies for post-transaction brand and product integration, as well as to analyze the effect of the motives on the integration strategy. The cases chosen for the research were Chinese Geely acquiring Swedish Volvo in 2010 and Indian Tata Motors buying British Jaguar Land Rover in 2008. The main topics were chosen for their significance to companies in the automotive industry and because they are among the most visible aspects for consumers. The study is based on qualitative case study methods, analyzing secondary data from academic papers and news articles as well as the companies' own announcements, e.g. stock exchange and press releases. The study finds that the companies in the cases mainly possessed asset-seeking and market-seeking motives. In addition, the findings point to rather minimal post-acquisition brand and product integration strategies: the parent companies mainly left the target company autonomous to make its own business strategies and decisions. The most noticeable integrations were in the product development and production processes. By restructuring the product architectures, the companies were able to share components and technology between product families and brands, which cut costs and increased profitability and efficiency. In the Geely-Volvo case, the strategy focused more on component sharing and product development know-how, whereas in the Tata Motors-Jaguar Land Rover case, the main actions were to cut costs through component sharing and to combine production and distribution networks, especially in Asian markets. However, it was evident that in both cases the integration and technology sharing were executed cautiously to avoid harming the valuable image of the luxury brands. This study concludes that asset-seeking motives have a significant influence on the post-transaction brand and model line-up integration strategies. By taking a cautious approach to acquiring assets such as a luxury brand, the companies in the cases implemented a successful post-acquisition strategy and managed to create value for shareholders, at least in the short term.
Abstract:
We describe a model-data fusion (MDF) inter-comparison project (REFLEX), which compared various algorithms for estimating carbon (C) model parameters consistent with both measured carbon fluxes and states and a simple C model. Participants were provided with the model and with both synthetic net ecosystem exchange (NEE) of CO2 and leaf area index (LAI) data, generated from the model with added noise, and observed NEE and LAI data from two eddy covariance sites. Participants endeavoured to estimate model parameters and states consistent with the model for all cases over the two years for which data were provided, and to generate predictions for one additional year without observations. Nine participants contributed results using Metropolis algorithms, Kalman filters and a genetic algorithm. For the synthetic data case, parameter estimates compared well with the true values. The results of the analyses indicated that parameters linked directly to gross primary production (GPP) and ecosystem respiration, such as those related to foliage allocation and turnover or the temperature sensitivity of heterotrophic respiration, were best constrained and characterised. Poorly estimated parameters were those related to the allocation to, and turnover of, the fine root/wood pools. Estimates of confidence intervals varied among algorithms, but several algorithms successfully located the true values of annual fluxes from the synthetic experiments within relatively narrow 90% confidence intervals, achieving a success rate above 80% and mean NEE confidence intervals below 110 gC m−2 year−1 for the synthetic case. Annual C flux estimates generated by participants generally agreed with gap-filling approaches using half-hourly data, and the estimation of ecosystem respiration and GPP through MDF agreed well with outputs from partitioning studies using half-hourly data. Confidence limits on annual NEE increased by an average of 88% in the prediction year compared to the previous year, when data were available. Confidence intervals on annual NEE increased by 30% when observed data were used instead of synthetic data, reflecting and quantifying the addition of model error. Finally, our analyses indicated that incorporating additional constraints using data on C pools (wood, soil and fine roots) would help to reduce uncertainties for model parameters poorly served by eddy covariance data.
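As an illustration of one of the algorithm families used (Kalman filtering), the sketch below implements a generic stochastic ensemble Kalman filter analysis step of the kind used to pull model states or parameters towards NEE observations. It is a textbook form, not any participant's actual code.

```python
# Generic stochastic ensemble Kalman filter (EnKF) analysis step.
import numpy as np

def enkf_analysis(ensemble: np.ndarray, obs: np.ndarray,
                  H: np.ndarray, obs_err: float,
                  rng: np.random.Generator) -> np.ndarray:
    """ensemble: (n_state, n_members); obs: (n_obs,); H: (n_obs, n_state)."""
    n_obs, n_members = obs.size, ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
    HX = H @ ensemble
    HXp = HX - HX.mean(axis=1, keepdims=True)
    P_HT = X @ HXp.T / (n_members - 1)                    # cross-covariance
    S = HXp @ HXp.T / (n_members - 1) + obs_err**2 * np.eye(n_obs)
    K = P_HT @ np.linalg.inv(S)                           # Kalman gain
    perturbed = obs[:, None] + obs_err * rng.normal(size=(n_obs, n_members))
    return ensemble + K @ (perturbed - HX)                # updated ensemble

rng = np.random.default_rng(1)
ens = rng.normal(5.0, 1.0, size=(3, 50))   # 3 states, 50 members
H = np.array([[1.0, 0.0, 0.0]])            # observe the first state only
print(enkf_analysis(ens, np.array([6.0]), H, 0.2, rng).mean(axis=1))
```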
Abstract:
Progress in functional neuroimaging of the brain increasingly relies on the integration of data from complementary imaging modalities in order to improve spatiotemporal resolution and interpretability. However, the usefulness of merely statistical combinations is limited, since neural signal sources differ between modalities and are related non-trivially. We demonstrate here that a mean field model of brain activity can simultaneously predict EEG and fMRI BOLD with proper signal generation and expression. Simulations are shown using a realistic head model based on structural MRI, which includes both dense short-range background connectivity and long-range specific connectivity between brain regions. The distribution of modeled neural masses is comparable to the spatial resolution of fMRI BOLD, and the temporal resolution of the modeled dynamics, importantly including activity conduction, matches the fastest known EEG phenomena. The creation of a cortical mean field model with anatomically sound geometry, extensive connectivity, and proper signal expression is an important first step towards the model-based integration of multimodal neuroimages.
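A minimal sketch of the core idea, under strongly simplifying assumptions: one set of simulated source time courses is expressed through two observation models, an instantaneous lead-field projection for EEG and a slow hemodynamic convolution for fMRI BOLD. The gamma-shaped HRF and all numbers are illustrative, not the paper's mean field equations.

```python
# One source activity, two forward models: EEG (fast, linear mixing)
# and BOLD (slow, hemodynamic filtering).
import numpy as np

dt = 0.01                                   # 10 ms steps
t = np.arange(0, 30, dt)
sources = np.vstack([np.sin(2 * np.pi * 10 * t),        # 10 Hz alpha-like source
                     np.exp(-((t - 15) ** 2) / 2.0)])   # slow transient source

# EEG: instantaneous linear mixing of sources to scalp channels.
leadfield = np.array([[0.8, 0.2],
                      [0.3, 0.9],
                      [0.5, 0.5]])          # 3 channels x 2 sources
eeg = leadfield @ sources

# BOLD: slow hemodynamic filtering of the same source activity.
hrf_t = np.arange(0, 20, dt)
hrf = (hrf_t ** 5) * np.exp(-hrf_t)         # crude gamma-shaped HRF
hrf /= hrf.sum()
bold = np.array([np.convolve(s, hrf)[: t.size] for s in sources])

print(eeg.shape, bold.shape)                # same sources, two modalities
```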
Abstract:
A system for continuous data assimilation is presented and discussed. To simulate the dynamical development, a channel version of a balanced barotropic model is used, and geopotential (height) data are assimilated into the model's computations as they become available. In the first experiment the updating is performed every 24, 12 and 6 hours with a given network. The stations are distributed at random in 4 groups in order to simulate 4 areas with different station densities. Optimum interpolation is performed on the difference between the forecast and the valid observations. The RMS error of the analyses decreases over time, and the more frequently the updating is performed, the smaller the error. Updating every 6 hours yields an analysis error smaller than the RMS error of the observations. In a second experiment the updating is performed with data from a moving satellite with a side-scan capability of about 15°. If the satellite data are analysed at every time step before they are introduced into the system, the analysis error falls below the RMS error of the observations already after 24 hours, yielding on the whole a better result than updating from a fixed network. If the satellite data are introduced without any modification, the analysis error decreases much more slowly, and it takes about 4 days to reach a result comparable to the one where the data have been analysed.
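A sketch of the optimum interpolation step described above, for a 1-D grid and assumed Gaussian error covariances: the forecast is corrected by the weighted innovation (observation minus forecast), with weights built from the forecast error covariance B and the observation error covariance R. All numbers are illustrative.

```python
# Optimum interpolation update: x_a = x_f + K (y - H x_f),
# with K = B H^T (H B H^T + R)^-1 under assumed Gaussian covariances.
import numpy as np

grid = np.linspace(0.0, 1000.0, 21)          # 1-D grid of points (km)
obs_loc = np.array([150.0, 480.0, 900.0])    # station locations (km)
x_f = np.full(grid.size, 5400.0)             # forecast height (m)
y = np.array([5430.0, 5385.0, 5410.0])       # observed heights (m)

L, sig_f, sig_o = 200.0, 15.0, 5.0           # corr. length, error std devs

def gauss_cov(a, b, sigma):
    """Gaussian covariance between two sets of locations."""
    return sigma**2 * np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * L**2))

B_HT = gauss_cov(grid, obs_loc, sig_f)                  # grid-to-obs covariances
HBHT = gauss_cov(obs_loc, obs_loc, sig_f)
K = B_HT @ np.linalg.inv(HBHT + sig_o**2 * np.eye(3))   # OI weights
x_a = x_f + K @ (y - np.interp(obs_loc, grid, x_f))     # analysis
print(np.round(x_a, 1))
```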
Abstract:
The currently available model-based global data sets of atmospheric circulation are a by-product of the daily requirement of producing initial conditions for numerical weather prediction (NWP) models. These data sets have been quite useful for studying fundamental dynamical and physical processes, and for describing the nature of the general circulation of the atmosphere. However, due to limitations in the early data assimilation systems and inconsistencies caused by numerous model changes, the available model-based global data sets may not be suitable for studying global climate change. A comprehensive analysis of global observations based on a four-dimensional data assimilation system with a realistic physical model should be undertaken to integrate space and in situ observations to produce internally consistent, homogeneous, multivariate data sets for the earth's climate system. The concept is equally applicable for producing data sets for the atmosphere, the oceans, and the biosphere, and such data sets will be quite useful for studying global climate change.
Abstract:
A global aerosol transport model (Oslo CTM2) with the main aerosol components included is compared to five satellite retrievals of aerosol optical depth (AOD) and one data set of the satellite-derived radiative effect of aerosols. The model is driven with meteorological data for the period November 1996 to June 1997, the time period investigated in this study. The modelled AOD is within the range of the AOD from the various satellite retrievals over oceanic regions. Both the direct radiative effect of the aerosols and the atmospheric absorption by aerosols are found to be of the order of 20 W m−2 in certain regions, in both the satellite-derived and the modelled estimates, as a mean over the period studied. Satellite and model data exhibit similar patterns of aerosol optical depth, radiative effect of aerosols, and atmospheric absorption by the aerosols. Recently published results show that global aerosol models tend to underestimate the magnitude of the clear-sky direct radiative effect of aerosols over the ocean compared to satellite-derived estimates; however, this is only to a small extent the case with the Oslo CTM2. The global mean direct radiative effect of aerosols over the ocean is modelled with the Oslo CTM2 to be −5.5 W m−2 and the atmospheric aerosol absorption to be 1.5 W m−2.
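For concreteness, a global ocean-mean value such as the −5.5 W m−2 quoted above is typically an area-weighted (cosine-latitude) average of a gridded field restricted to ocean cells; the sketch below shows that reduction with placeholder data.

```python
# Cosine-latitude weighted ocean mean of a gridded field (placeholder data).
import numpy as np

rng = np.random.default_rng(2)
lats = np.linspace(-89.0, 89.0, 90)               # cell-centre latitudes
field = rng.normal(-5.0, 3.0, size=(90, 180))     # e.g. radiative effect (W m^-2)
ocean = rng.random((90, 180)) > 0.3               # placeholder ocean mask

weights = np.cos(np.deg2rad(lats))[:, None] * np.ones((1, 180))
weights = np.where(ocean, weights, 0.0)           # restrict average to ocean cells
global_ocean_mean = np.sum(field * weights) / np.sum(weights)
print(round(float(global_ocean_mean), 2))
```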
Abstract:
The MATLAB model is contained within the compressed folders (versions are available as .zip and .tgz). This model uses MERRA reanalysis data (>34 years available) to estimate the hourly aggregated wind power generation for a predefined (fixed) distribution of wind farms. A ready-made example is included for the wind farm distribution of Great Britain, April 2014 ("CF.dat"). This consists of an hourly time series of GB-total capacity factor spanning the period 1980-2013 inclusive. Given the global nature of reanalysis data, the model can be applied to any specified distribution of wind farms in any region of the world. Users are, however, strongly advised to bear in mind the limitations of reanalysis data when using this model/data. This is discussed in our paper: Cannon, Brayshaw, Methven, Coker, Lenaghan. "Using reanalysis data to quantify extreme wind power generation statistics: a 33 year case study in Great Britain". Submitted to Renewable Energy in March 2014. Additional information about the model is contained in the model code itself, in the accompanying ReadMe file, and on our website: http://www.met.reading.ac.uk/~energymet/data/Cannon2014/
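As a hedged example of how the "CF.dat" output might be used, the sketch below loads an hourly capacity-factor series and summarizes some extreme-generation statistics. The single-column plain-text layout is an assumption on our part; the actual format is documented in the accompanying ReadMe.

```python
# Summarize extremes of an hourly capacity-factor series.
# Assumes CF.dat is a plain single-column text file (format may differ).
import numpy as np

cf = np.loadtxt("CF.dat")                    # hourly capacity factors, 1980-2013

print("mean CF:          %.3f" % cf.mean())
print("hours below 5%%:   %d" % np.sum(cf < 0.05))
print("hours above 80%%:  %d" % np.sum(cf > 0.80))

# Longest continuous low-output spell, in hours:
low = cf < 0.05
runs, current = [], 0
for flag in low:
    current = current + 1 if flag else 0
    runs.append(current)
print("longest lull (h): %d" % max(runs))
```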