44 results for Multivariate measurement model
Abstract:
Estimating the magnitude of Agulhas leakage, the volume flux of water from the Indian to the Atlantic Ocean, is difficult because of the presence of other circulation systems in the Agulhas region. Indian Ocean water in the Atlantic Ocean is vigorously mixed and diluted in the Cape Basin. Eulerian integration methods, where the velocity field perpendicular to a section is integrated to yield a flux, have to be calibrated so that only the flux by Agulhas leakage is sampled. Two Eulerian methods for estimating the magnitude of Agulhas leakage are tested within a high-resolution two-way nested model with the goal of devising a mooring-based measurement strategy. At the GoodHope line, a section halfway through the Cape Basin, the integrated velocity perpendicular to that line is compared to the magnitude of Agulhas leakage as determined from the transport carried by numerical Lagrangian floats. In the first method, integration is limited to the flux of water warmer and more saline than specific threshold values. These threshold values are determined by maximizing the correlation with the float-determined time series. By using the threshold values, approximately half of the leakage can be measured directly. The total amount of Agulhas leakage can then be estimated using a linear regression, within a 90% confidence band of 12 Sv. In the second method, a subregion of the GoodHope line is sought so that integration over that subregion yields an Eulerian flux as close to the float-determined leakage as possible. The root-mean-square difference between the time series is smallest when integration is limited within the model to the upper 300 m of the water column within 900 km of the African coast. This method yields a root-mean-square error of only 5.2 Sv, but the 90% confidence band of the estimate is 20 Sv.
It is concluded that the optimum thermohaline threshold method leads to more accurate estimates even though the directly measured transport is a factor of two lower than the actual magnitude of Agulhas leakage in this model.
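The regression-calibration step of the first method can be illustrated with a minimal sketch. The time series below are purely synthetic (not model output): a "float-determined" leakage signal and a thresholded Eulerian flux that, as in the abstract, captures roughly half of it. Ordinary least squares then scales the measurable flux back up to a leakage estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration: monthly float-derived leakage (Sv) and a
# thresholded Eulerian flux that captures roughly half of it plus noise.
leakage = 15.0 + 5.0 * rng.standard_normal(120)            # "true" leakage, Sv
eulerian = 0.5 * leakage + 1.5 * rng.standard_normal(120)  # thresholded flux, Sv

# Calibrate with ordinary least squares: leakage ~ a * eulerian + b.
# (In the method itself, the T/S thresholds would first be chosen by
# scanning candidate values and maximizing this correlation.)
a, b = np.polyfit(eulerian, leakage, 1)
estimate = a * eulerian + b
rmse = np.sqrt(np.mean((estimate - leakage) ** 2))
```

The residual scatter of `estimate` around `leakage` is what a confidence band such as the quoted 12 Sv would summarize.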
Abstract:
Northern hemisphere snow water equivalent (SWE) distributions from remote sensing (SSM/I), the ERA40 reanalysis product and the HadCM3 general circulation model are compared. Large differences are seen in the February climatologies, particularly over Siberia. The SSM/I retrieval algorithm may be overestimating SWE in this region, while comparison with independent runoff estimates suggests that HadCM3 is underestimating SWE. Treatment of snow grain size and vegetation parameterizations are concerns with the remotely sensed data. For this reason, ERA40 is used as 'truth' for the following experiments. Despite the climatology differences, HadCM3 is able to reproduce the distribution of ERA40 SWE anomalies when assimilating ERA40 anomaly fields of temperature, sea level pressure, atmospheric winds, and ocean temperature and salinity. However, when forecasts are released from these assimilated initial states, the SWE anomaly distribution diverges rapidly from that of ERA40. No predictability is seen from one season to another. Strong links between European SWE distribution and the North Atlantic Oscillation (NAO) are seen, but forecasts of this index by the assimilation scheme are poor. Longer-term relationships between SWE and the NAO, and between SWE and the El Niño-Southern Oscillation (ENSO), are also investigated in a multi-century run of HadCM3. SWE is impacted by ENSO in the Himalayas and North America, while the NAO affects SWE in North America and Europe. While significant connections with the NAO index were only present in DJF (and to an extent SON), the link between ENSO and February SWE distribution was seen to exist from the previous JJA ENSO index onwards. This represents a long lead time for SWE prediction for hydrological applications such as flood and wildfire forecasting. Further work is required to develop reliable large-scale observation-based SWE datasets with which to test these model-derived connections.
A hierarchical Bayesian model for predicting the functional consequences of amino-acid polymorphisms
Abstract:
Genetic polymorphisms in deoxyribonucleic acid coding regions may have a phenotypic effect on the carrier, e.g. by influencing susceptibility to disease. Detection of deleterious mutations via association studies is hampered by the large number of candidate sites; therefore methods are needed to narrow down the search to the most promising sites. For this, a possible approach is to use structural and sequence-based information of the encoded protein to predict whether a mutation at a particular site is likely to disrupt the functionality of the protein itself. We propose a hierarchical Bayesian multivariate adaptive regression spline (BMARS) model for supervised learning in this context and assess its predictive performance by using data from mutagenesis experiments on lac repressor and lysozyme proteins. In these experiments, about 12 amino-acid substitutions were performed at each native amino-acid position and the effect on protein functionality was assessed. The training data thus consist of repeated observations at each position, which the hierarchical framework is needed to account for. The model is trained on the lac repressor data and tested on the lysozyme mutations and vice versa. In particular, we show that the hierarchical BMARS model, by allowing for the clustered nature of the data, yields lower out-of-sample misclassification rates compared with both a BMARS and a frequentist MARS model, a support vector machine classifier and an optimally pruned classification tree.
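The building block shared by MARS and BMARS is the hinge (truncated linear spline) basis function. The sketch below is deliberately non-Bayesian and non-hierarchical: with synthetic data generated from a single known hinge, it fits a mirrored pair of hinges at a candidate knot by ordinary least squares, just to show the basis the abstract's models expand on.

```python
import numpy as np

def hinge(x, knot, sign):
    """MARS-style hinge basis function: max(0, sign * (x - knot))."""
    return np.maximum(0.0, sign * (x - knot))

rng = np.random.default_rng(5)

# Synthetic data from a single hinge with coefficient 2.0 at knot 4.0
x = rng.uniform(0.0, 10.0, 200)
y = 2.0 * hinge(x, 4.0, 1.0) + 0.1 * rng.standard_normal(200)

# Least-squares fit on an intercept plus a mirrored hinge pair at the knot
B = np.column_stack([np.ones_like(x), hinge(x, 4.0, 1.0), hinge(x, 4.0, -1.0)])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
```

A full (B)MARS model would search over knots and interactions, and the hierarchical version would additionally share strength across the repeated observations at each amino-acid position.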
Abstract:
A quasi-optical technique for characterizing micromachined waveguides is demonstrated with wideband time-resolved terahertz spectroscopy. A transfer-function representation is adopted for the description of the relation between the signals in the input and output port of the waveguides. The time-domain responses were discretized, and the waveguide transfer function was obtained through a parametric approach in the z domain after describing the system with an autoregressive with exogenous input model. The a priori assumption of the number of modes propagating in the structure was inferred from comparisons of the theoretical with the measured characteristic impedance as well as with parsimony arguments. Measurements for a precision WR-8 waveguide-adjustable short as well as for G-band reduced-height micromachined waveguides are presented. (C) 2003 Optical Society of America.
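The parametric z-domain step can be illustrated with a minimal sketch: fitting a first-order ARX (autoregressive with exogenous input) model to synthetic input-output records by least squares. The system, noise level and model order below are all illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic first-order ARX system:  y[k] = 0.7*y[k-1] + 0.5*u[k-1] + e[k]
N = 500
u = rng.standard_normal(N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = 0.7 * y[k - 1] + 0.5 * u[k - 1] + 0.01 * rng.standard_normal()

# Stack lagged outputs and inputs as regressors and solve by least squares
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta

# The identified transfer function in the z domain is
#   H(z) = b_hat * z**-1 / (1 - a_hat * z**-1)
```

Higher model orders play the role of additional propagating modes; the abstract's point is that the number of such terms was constrained a priori by physical and parsimony arguments rather than fitted freely.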
Abstract:
A low-temperature model is described for infrared multilayer filters containing PbTe (or other semiconductors) and ZnSe (or other II/VI materials). The model is based on dielectric dispersion with semiconductor carrier dispersion added. It predicts an improved performance on cooling, such as would be useful to avoid erroneous signals from optics in spaceflight radiometers. Agreement with measurement is obtained over the temperature range 70–400 K and the wavelength range 2.5–20 µm.
Abstract:
Results from both experimental measurements and 3D numerical simulations of Ground Source Heat Pump (GSHP) systems in a UK climate are presented. Experimental measurements of a horizontal-coupled slinky GSHP were undertaken at Talbot Cottage at the Drayton St Leonard site, Oxfordshire, UK. The measured thermophysical properties of the in situ soil were used in the CFD model. The thermal performance of slinky heat exchangers for the horizontal-coupled GSHP system at different coil diameters and slinky interval distances was investigated using a validated 3D model. Results from a two-month period of monitoring the performance of the GSHP system showed that the COP decreased with running time. The average COP of the horizontal-coupled GSHP was 2.5. The numerical prediction showed that there was no significant difference in the specific heat extraction of the slinky heat exchanger at different coil diameters. However, the larger the coil diameter, the higher the heat extraction per meter length of soil. The specific heat extraction also increased, but the heat extraction per meter length of soil decreased, with increasing coil central interval distance.
Abstract:
In this chapter we described how the inclusion of a model of a human arm, combined with measurement of its neural input and a predictor, can provide robustness under time delay to a previously proposed teleoperator design. Our trials gave clear indications of the superiority of the NPT scheme over traditional as well as modified Yokokohji and Yoshikawa architectures. Its fundamental advantages are the time-lead of the slave, the more efficient and more natural-feeling manipulation it provides, and the fact that incorporating an operator arm model leads to more credible stability results. Finally, its simplicity allows local control techniques that are less likely to fail to be employed. However, a significant advantage of the enhanced Yokokohji and Yoshikawa architecture stems from the very fact that it is a conservative modification of current designs. Under large prediction errors, it can provide robustness by directing the master and slave states to their means and, since it relies on the passivity of the mechanical part of the system, it would not confuse the operator. An experimental implementation of the techniques will provide further evidence of the performance of the proposed architectures. The employment of neural networks and fuzzy logic, which will provide an adaptive model of the human arm and robustifying control terms, is planned for the near future.
Abstract:
The use of data reconciliation techniques can considerably reduce the inaccuracy of process data due to measurement errors. This in turn results in improved control system performance and process knowledge. Dynamic data reconciliation techniques are applied to a model-based predictive control scheme. It is shown through simulations on a chemical reactor system that the overall performance of the model-based predictive controller is enhanced considerably when data reconciliation is applied. The dynamic data reconciliation techniques used include a combined strategy for the simultaneous identification of outliers and systematic bias.
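The core idea of data reconciliation can be shown in its simplest, steady-state linear form (the abstract's dynamic version extends this over time and adds outlier and bias identification). The sketch below uses a hypothetical flow splitter where mass balance requires F1 = F2 + F3, and adjusts noisy measurements by the classical weighted least-squares closed form.

```python
import numpy as np

# Hypothetical measurements of three flows around a splitter; the mass
# balance F1 = F2 + F3 is violated by measurement error.
y = np.array([10.1, 6.2, 3.4])
A = np.array([[1.0, -1.0, -1.0]])     # balance constraint: A @ x = 0
V = np.diag([0.1, 0.1, 0.1]) ** 2     # measurement error covariance

# Weighted least-squares reconciliation (closed form):
#   x_hat = y - V A' (A V A')^-1 A y
correction = V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ y)
x_hat = y - correction.ravel()
```

With equal variances the 0.5 imbalance is spread equally over the three measurements; unequal variances would push more of the correction onto the less trusted instruments, which is exactly what improves the data fed to the predictive controller.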
Abstract:
The Geostationary Earth Radiation Budget Intercomparison of Longwave and Shortwave radiation (GERBILS) was an observational field experiment over North Africa during June 2007. The campaign involved 10 flights by the FAAM BAe-146 research aircraft over southwestern parts of the Sahara Desert and coastal stretches of the Atlantic Ocean. Objectives of the GERBILS campaign included characterisation of mineral dust geographic distribution and physical and optical properties, assessment of the impact upon radiation, validation of satellite remote sensing retrievals, and validation of numerical weather prediction model forecasts of aerosol optical depths (AODs) and size distributions. We provide the motivation behind GERBILS and the experimental design and report the progress made in each of the objectives. We show that mineral dust in the region is relatively non-absorbing (mean single scattering albedo at 550 nm of 0.97) owing to the relatively small fraction of iron oxides present (1–3%), and that detailed spectral radiances are most accurately modelled using irregularly shaped particles. Satellite retrievals over bright desert surfaces are challenging owing to the lack of spectral contrast between the dust and the underlying surface. However, new techniques have been developed which are shown to be in relatively good agreement with AERONET estimates of AOD and with each other. This encouraging result enables relatively robust validation of numerical models which treat the production, transport, and deposition of mineral dust. The dust models themselves are able to represent large-scale synoptically driven dust events to a reasonable degree, but some deficiencies remain both in the Sahara and over the Sahelian region, where cold pool outflow from convective cells associated with the intertropical convergence zone can lead to significant dust production.
Abstract:
The ozone-ethene reaction has been investigated at low pressure in a flow tube interfaced to a UV photoelectron spectrometer. Photoelectron spectra recorded as a function of reaction time have been used to estimate partial pressures of the reagents and products, using photoionization cross-sections for selected photoelectron bands of the reagents and products, which have been measured separately. Product yields compare favourably with the results of other studies, and the production of oxygen and acetaldehyde has been measured as a function of time for the first time. A reaction scheme developed for the ozone-ethene reaction has been used to simulate the reagents and products as a function of time. The results obtained are in good agreement with the experimental measurements. For each of the observed products, the simulations allow the main reaction (or reactions) responsible for its production to be established. The product yields have been used in a global model to estimate their global annual emissions in the atmosphere. Of particular interest are the calculated global annual emissions of formaldehyde (0.96 ± 0.10 Tg) and formic acid (0.05 ± 0.01 Tg), which are estimated as 0.04% and 0.7% of the total annual emission respectively.
An isotope dilution model for partitioning phenylalanine uptake by the liver of lactating dairy cows
Abstract:
An isotope dilution model for partitioning phenylalanine uptake by the liver of the lactating dairy cow was constructed and solved in the steady state. Under certain assumptions, the model solution permits calculation of the rate of phenylalanine uptake from the portal vein and hepatic arterial blood supply, phenylalanine release into the hepatic vein, phenylalanine oxidation and synthesis, and degradation of hepatic constitutive and export proteins. The model requires the measurement of plasma flow rate through the liver in combination with phenylalanine concentrations and plateau isotopic enrichments in arterial, portal and hepatic plasma during a constant infusion of [1-13C]phenylalanine tracer. The model can be applied to other amino acids with similar metabolic fates and will provide a means for assessing the impact of hepatic metabolism on amino acid availability to peripheral tissues. This is of particular importance for the dairy cow when considering the requirements for milk protein synthesis and the negative environmental impact of excessive nitrogen excretion.
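The steady-state bookkeeping behind such a model reduces to flow-times-concentration balances across the organ. The sketch below uses entirely hypothetical plasma flows and concentrations (not values from the study) to show how net hepatic uptake falls out of the portal, arterial and hepatic-vein measurements; the tracer enrichments would then be needed to partition that net uptake among oxidation, synthesis and protein turnover.

```python
# Hypothetical steady-state balance for phenylalanine across the liver.
plasma_flow_portal = 1200.0    # L/h through the portal vein (assumed)
plasma_flow_arterial = 300.0   # L/h through the hepatic artery (assumed)
conc_portal = 52.0e-6          # mol/L phenylalanine in portal plasma
conc_arterial = 50.0e-6        # mol/L in arterial plasma
conc_hepatic_vein = 49.0e-6    # mol/L in hepatic-vein plasma

flow_out = plasma_flow_portal + plasma_flow_arterial   # total hepatic outflow
supply = (plasma_flow_portal * conc_portal
          + plasma_flow_arterial * conc_arterial)      # mol/h entering liver
release = flow_out * conc_hepatic_vein                 # mol/h leaving liver
net_uptake = supply - release                          # mol/h retained by liver
```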
Abstract:
Nitrogen adsorption on carbon nanotubes is widely studied because nitrogen adsorption isotherm measurement is a standard method for porosity characterization. A further reason is that carbon nanotubes are potential adsorbents for the separation of nitrogen from oxygen in air. The study presented here describes the results of GCMC simulations of nitrogen (three-site model) adsorption on single- and multi-walled closed nanotubes. The results obtained are described by a new adsorption isotherm model proposed in this study. The model can be treated as the tube analogue of the GAB isotherm, taking into account the lateral adsorbate-adsorbate interactions. We show that the model describes the simulated data satisfactorily. Next, this new approach is applied to the description of experimental data measured on different commercially available (and HRTEM-characterized) carbon nanotubes. We show that a generally quite good fit is observed, and it is therefore suggested that the observed mechanism of adsorption in the studied materials is mainly determined by adsorption on tubes separated at large distances, so that the tubes behave almost independently.
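The tube-analogue model itself is not given in the abstract, but the fitting procedure can be illustrated with the classical GAB isotherm it generalizes. The sketch below fits the three GAB parameters to synthetic "data" generated from known values; the data, parameters and starting guesses are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gab(p, vm, C, K):
    """Classical GAB isotherm: amount adsorbed vs relative pressure p."""
    return vm * C * K * p / ((1 - K * p) * (1 - K * p + C * K * p))

# Synthetic isotherm from known parameters, with small measurement noise
p = np.linspace(0.05, 0.9, 30)
rng = np.random.default_rng(2)
v_obs = gab(p, 2.0, 20.0, 0.8) + 0.01 * rng.standard_normal(p.size)

# Nonlinear least-squares fit of (vm, C, K) from an initial guess
popt, _ = curve_fit(gab, p, v_obs, p0=[1.5, 15.0, 0.7])
vm_hat, C_hat, K_hat = popt
```

Fitting the study's model to GCMC or experimental isotherms would follow the same pattern, with the tube geometry and lateral-interaction terms replacing the flat-surface GAB form.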
Abstract:
The requirement to forecast volcanic ash concentrations was amplified as a response to the 2010 Eyjafjallajökull eruption, when ash safety limits for aviation were introduced in the European area. The ability to provide accurate quantitative forecasts relies to a large extent on the source term, that is, the emission of ash as a function of time and height. This study presents source term estimations of the ash emissions from the Eyjafjallajökull eruption derived with an inversion algorithm which constrains modeled ash emissions with satellite observations of volcanic ash. The algorithm is tested with input from two different dispersion models, run on three different meteorological input data sets. The results are robust to which dispersion model and meteorological data are used. Modeled ash concentrations are compared quantitatively to independent measurements from three different research aircraft and one surface measurement station. These comparisons show that the models perform reasonably well in simulating the ash concentrations, and simulations using the source term obtained from the inversion are in overall better agreement with the observations (rank correlation = 0.55, Figure of Merit in Time (FMT) = 25–46%) than simulations using simplified source terms (rank correlation = 0.21, FMT = 20–35%). The vertical structures of the modeled ash clouds mostly agree with lidar observations, and the modeled ash particle size distributions agree reasonably well with observed size distributions. There are occasionally large differences between simulations, but the model mean usually outperforms any individual model. The results emphasize the benefits of using an ensemble-based forecast for improved quantification of uncertainties in future ash crises.
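The structure of such an inversion can be sketched as a toy regularized least-squares problem: a known linear source-receptor operator maps emissions to observations, and the inversion pulls a first-guess (a priori) source term toward the observations. The matrix, sizes, noise level and regularization strength below are illustrative assumptions, not the study's setup; the FMT helper uses the common overlap definition (100 x sum of pointwise minima over sum of pointwise maxima).

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setup: observations y are a known linear transport operator M
# applied to unknown emissions x (per time/height bin), plus noise.
n_emis, n_obs = 8, 40
M = rng.random((n_obs, n_emis))            # toy source-receptor matrix
x_true = rng.random(n_emis) * 10.0         # "true" emissions
y = M @ x_true + 0.1 * rng.standard_normal(n_obs)

# A priori (first-guess) source term and Tikhonov regularization weight
x_a = np.full(n_emis, 5.0)
lam = 0.1

# Minimize ||M x - y||^2 + lam * ||x - x_a||^2 via a stacked least squares
A = np.vstack([M, np.sqrt(lam) * np.eye(n_emis)])
b = np.concatenate([y, np.sqrt(lam) * x_a])
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

def fmt(obs, mod):
    """Figure of Merit in Time: percentage overlap of two time series."""
    return 100.0 * np.sum(np.minimum(obs, mod)) / np.sum(np.maximum(obs, mod))

fmt_score = fmt(y, M @ x_hat)
```

Real inversions add positivity constraints, observation and a priori error covariances, and operators from full dispersion-model runs, but the pull between first guess and satellite data is the same.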
Abstract:
Cross-bred cow adoption is an important and potent policy variable precipitating subsistence household entry into emerging milk markets. This paper focuses on the problem of designing policies that encourage and sustain milk-market expansion among a sample of subsistence households in the Ethiopian highlands. In this context it is desirable to measure households' 'proximity' to market in terms of their level of deficiency in essential inputs. This problem is compounded by four factors. One is the existence of cross-bred cow numbers (count data) as an important, endogenous decision by the household; the second is the lack of a multivariate generalization of the Poisson regression model; the third is the censored nature of the milk sales data (sales from non-participating households are, essentially, censored at zero); and the fourth is an important simultaneity between the decision to adopt a cross-bred cow, the decision about how much milk to produce, the decision about how much milk to consume, and the decision to market the milk that is produced but not consumed internally by the household. Routine application of Gibbs sampling and data augmentation overcomes these problems in a relatively straightforward manner. We model the count data from two sites close to Addis Ababa in a latent, categorical-variable setting with known bin boundaries. The single-equation model is then extended to a multivariate system that accommodates the covariance between the cross-bred-cow adoption, milk-output and milk-sales equations. The latent-variable procedure proves tractable in extension to the multivariate setting and provides important information for policy formation in emerging-market settings.
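The censoring-by-data-augmentation idea can be shown in a much-simplified univariate form: a Tobit-style model where latent sales are censored at zero for non-participating households. In each Gibbs pass, the censored latent values are imputed from a truncated normal and the regression coefficients are then drawn conditionally. Everything below (data, coefficients, fixed error variance, flat prior) is an illustrative assumption, far simpler than the paper's multivariate count-and-sales system.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(4)

# Toy Tobit data: latent sales y* = b0 + b1*x + e, observed y = max(y*, 0)
n = 400
x = rng.standard_normal(n)
b_true = np.array([0.5, 1.0])
y_star = b_true[0] + b_true[1] * x + rng.standard_normal(n)
y = np.maximum(y_star, 0.0)
censored = y == 0.0

X = np.column_stack([np.ones(n), x])
sigma = 1.0                          # error s.d. held fixed for simplicity
XtX_inv = np.linalg.inv(X.T @ X)
beta = np.zeros(2)
draws = []
for it in range(600):
    # Data augmentation: impute latent sales for censored households from
    # a normal truncated above at zero
    mu = X @ beta
    z = y.copy()
    z[censored] = truncnorm.rvs(
        -np.inf, (0.0 - mu[censored]) / sigma,
        loc=mu[censored], scale=sigma,
        size=int(censored.sum()), random_state=rng)
    # Conditional draw of beta under a flat prior: N(beta_hat, sigma^2 (X'X)^-1)
    beta_hat = XtX_inv @ X.T @ z
    beta = rng.multivariate_normal(beta_hat, sigma**2 * XtX_inv)
    if it >= 100:                    # discard burn-in
        draws.append(beta)

beta_post = np.mean(draws, axis=0)
```

The paper's system replaces this single equation with linked adoption, output and sales equations, with the same augmentation trick handling both the censored sales and the binned count data.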
Abstract:
Red tape is undesirable because it impedes business growth. Relief from the administrative burdens that businesses face due to legislation can benefit the whole economy, especially in times of recession. However, recent governmental initiatives aimed at reducing administrative burdens have met with some success, but also with failures. This article compares three national initiatives - in the Netherlands, the UK and Italy - aimed at cutting red tape by using the Standard Cost Model. The findings highlight the factors affecting the outcomes of measurement and reduction plans, and ways to improve the Standard Cost Model methodology.
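The Standard Cost Model's basic arithmetic is a price-times-quantity calculation: the cost of complying once with an information obligation (hourly tariff times time required) multiplied by how often it happens economy-wide (number of businesses times yearly frequency). The figures below are hypothetical, purely to show the bookkeeping.

```python
# Standard Cost Model: administrative burden = price x quantity
# (all figures below are hypothetical illustrations).
tariff = 35.0          # EUR per hour of staff time
hours = 2.0            # hours to comply with one information obligation
businesses = 10_000    # number of affected businesses
frequency = 4          # compliance events per business per year

price = tariff * hours             # EUR per compliance event
quantity = businesses * frequency  # compliance events per year
burden = price * quantity          # EUR per year for this obligation
```

A national baseline measurement sums such figures over all information obligations, which is where the cross-country methodological differences the article discusses come into play.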