88 results for "combined stage sintering model"
Abstract:
The structure of the chiral kinked Pt{531} surface has been determined by low-energy electron diffraction intensity-versus-energy (LEED-IV) analysis and density functional theory (DFT). Large contractions and expansions of the vertical interlayer distances with respect to the bulk-terminated surface geometry were found for the first six layers (LEED: d(12) = 0.44 Å, d(23) = 0.69 Å, d(34) = 0.49 Å, d(45) = 0.95 Å, d(56) = 0.56 Å; DFT: d(12) = 0.51 Å, d(23) = 0.55 Å, d(34) = 0.74 Å, d(45) = 0.78 Å, d(56) = 0.63 Å; d(bulk) = 0.66 Å). Energy-dependent cancellations of LEED spots over unusually large energy ranges, up to 100 eV, can be explained by surface roughness and were reproduced by including 0.25 ML of vacancies and adatoms in the scattering calculations. The agreement between the LEED and DFT results is not as good as in comparable studies, which could be due to this roughness of the real surface.
Abstract:
A generic model of exergy assessment of the environmental impact of the building lifecycle is proposed, with a special focus on the natural environment. Three environmental impacts (energy consumption, resource consumption and pollutant discharge) are analyzed with reference to energy-embodied exergy, resource chemical exergy and abatement exergy, respectively. The generic model thus formulated contains two sub-models, one addressing building energy utilization and the other building materials use. Combined with theories by ecologists such as Odum, the paper evaluates a building's environmental sustainability through its exergy footprint and environmental impacts. A case study from Chongqing, China illustrates the application of this method. It was found that energy consumption constitutes 70–80% of the total environmental impact over a 50-year building lifecycle, within which the operation phase accounts for 80% of the total environmental impact, the building material production phase for 15%, and the remaining phases for 5%.
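As a minimal illustration of how such a two-sub-model footprint could be tallied, the sketch below sums three exergy components per lifecycle phase. All phase names and numbers are hypothetical placeholders, not figures from the study:

```python
# Illustrative only: tally hypothetical lifecycle exergy impacts (MJ) by phase.
# The three components mirror the paper's categories: energy-embodied exergy,
# resource chemical exergy, and abatement exergy for pollutant discharge.
phases = {
    # phase: (energy_embodied, resource_chemical, abatement)
    "material_production": (2.0e6, 1.5e6, 0.4e6),
    "construction":        (0.3e6, 0.1e6, 0.1e6),
    "operation_50yr":      (1.6e7, 0.2e6, 1.0e6),
    "demolition":          (0.2e6, 0.0,   0.1e6),
}

total = sum(sum(v) for v in phases.values())
for name, v in phases.items():
    print(f"{name:20s} {sum(v) / total:6.1%} of lifecycle exergy impact")
```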
Abstract:
A spectral performance model, designed to simulate the system spectral throughput for each of the 21 channels in the HIRDLS radiometer, is described. The model uses the measured spectral characteristics of each component in the optical train, appropriately corrected for its optical environment, to determine the end-to-end spectral throughput profile for each channel. This profile is then combined with the predicted thermal emission from the atmosphere, arising from the height of interest, to establish an in-band (wanted) to out-of-band (unwanted) radiance ratio. Results from the model demonstrate that the instrument-level radiometric requirements will be achieved. The optical arrangement and spectral design requirements for filtering in the HIRDLS instrument are described, together with a presentation of the performance achieved for the complete set of manufactured filters. Compliance of the predicted passband throughput model with the spectral positioning requirements of the instrument is also demonstrated.
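As a rough sketch of the end-to-end calculation described above, the channel throughput can be formed as the product of component spectral profiles and split into in-band and out-of-band contributions. The transmission shapes, band edges and radiance below are invented for the example, not HIRDLS data:

```python
import numpy as np

# Invented wavelength grid (micrometres) and component transmission profiles.
wl = np.linspace(6.0, 18.0, 2400)
filter_t = np.exp(-0.5 * ((wl - 11.0) / 0.15) ** 4)  # bandpass filter shape
mirror_t = np.full_like(wl, 0.98)                    # per-mirror reflectance
window_t = np.full_like(wl, 0.95)                    # window transmission

# End-to-end throughput: product of the profiles along the optical train.
throughput = filter_t * mirror_t**3 * window_t

radiance = np.ones_like(wl)     # placeholder atmospheric emission spectrum
signal = throughput * radiance
dwl = wl[1] - wl[0]

in_band = (wl > 10.7) & (wl < 11.3)     # nominal channel passband
wanted = signal[in_band].sum() * dwl
unwanted = signal[~in_band].sum() * dwl
print(f"in-band to out-of-band radiance ratio: {wanted / unwanted:.1f}")
```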
Abstract:
A common problem in many data-based modelling algorithms, such as associative memory networks, is the curse of dimensionality. In this paper, a new two-stage neurofuzzy system design and construction algorithm (NeuDeC) for nonlinear dynamical processes is introduced to tackle this problem effectively. A new, simple preprocessing method is first derived and applied to reduce the rule base, followed by a fine model detection process on the reduced rule set using forward orthogonal least squares model structure detection. In both stages, new A-optimality experimental-design-based criteria are used. In the preprocessing stage, a lower bound of the A-optimality design criterion is derived and applied as a subset selection metric; in the later stage, the A-optimality design criterion is incorporated into a new composite cost function that minimises model prediction error as well as penalising model parameter variance. The use of NeuDeC leads to unbiased model parameters with low parameter variance and the additional benefit of a parsimonious model structure. Numerical examples demonstrate the effectiveness of this new modelling approach for high-dimensional inputs.
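For orientation, here is a minimal sketch of an A-optimality-style composite cost of the kind described, trading model prediction error against parameter variance. The weighting and data are placeholders; the paper's exact criterion and its lower bound are not reproduced here:

```python
import numpy as np

def composite_cost(X, y, beta=0.1):
    """Trade off model fit against parameter variance (A-optimality).

    X : (N, p) regressor matrix for a candidate rule subset.
    y : (N,) target vector.
    The A-optimality term trace((X^T X)^{-1}) is proportional to the
    summed variances of the least-squares parameter estimates.
    """
    G = X.T @ X
    w = np.linalg.solve(G, X.T @ y)       # least-squares parameters
    mse = np.mean((y - X @ w) ** 2)       # model prediction error
    a_opt = np.trace(np.linalg.inv(G))    # parameter-variance penalty
    return mse + beta * a_opt

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -0.5, 0.0, 2.0, 0.3]) + 0.1 * rng.normal(size=200)
print(composite_cost(X, y))
```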
Abstract:
This paper describes a computational and statistical study of the influence of morphological changes on the electrophysiological response of neurons in an animal model of Alzheimer's disease (AD). We combined experimental morphological data from rat hippocampal CA1 pyramidal cells with a well-established model of active membrane properties. Dendritic morphology and the somatic response to simulated current clamp conditions were then compared for cells from the control and AD groups. The computational approach allowed us to single out the influence of neuromorphology on neuronal response by eliminating the effects of active channel variability. The results did not reveal a simple relationship between the morphological changes associated with AD and changes in neural response. However, they did suggest that the relationships between dendritic morphology and single-cell electrophysiology are more complex than anticipated.
Abstract:
In this chapter we described how the inclusion of a model of a human arm, combined with the measurement of its neural input and a predictor, can provide a previously proposed teleoperator design with robustness under time delay. Our trials gave clear indications of the superiority of the NPT scheme over traditional as well as the modified Yokokohji and Yoshikawa architectures. Its fundamental advantages are the time-lead of the slave, the more efficient and more natural-feeling manipulation it provides, and the fact that incorporating an operator arm model leads to more credible stability results. Finally, its simplicity allows local control techniques that are less likely to fail to be employed. However, a significant advantage of the enhanced Yokokohji and Yoshikawa architecture stems from the very fact that it is a conservative modification of current designs. Under large prediction errors, it can provide robustness by directing the master and slave states to their means and, since it relies on the passivity of the mechanical part of the system, it would not confuse the operator. An experimental implementation of the techniques will provide further evidence for the performance of the proposed architectures. The employment of neural networks and fuzzy logic, which will provide an adaptive model of the human arm and robustifying control terms, is scheduled for the near future.
Abstract:
Measured process data normally contain inaccuracies because the measurements are obtained with imperfect instruments. As well as random errors, one can expect systematic bias caused by miscalibrated instruments, or outliers caused by process peaks such as sudden power fluctuations. Data reconciliation is the adjustment of a set of process data, based on a model of the process, so that the derived estimates conform to natural laws. In this paper, techniques for the detection and identification of both systematic bias and outliers in dynamic process data are presented. A novel technique for the detection and identification of systematic bias is formulated, and the problem of detection, identification and elimination of outliers is treated using a modified version of a previously available clustering technique. These techniques are then combined into a global dynamic data reconciliation (DDR) strategy. The algorithms are tested in isolation and in combination using dynamic simulations of two continuous stirred tank reactors (CSTRs).
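A minimal sketch of the core reconciliation step in the steady-state, linear-constraint case (a generic textbook formulation, not the paper's dynamic algorithm): the measurements are adjusted to the closest estimates, in the weighted least-squares sense, that satisfy the balance constraints A x = 0.

```python
import numpy as np

def reconcile(x_meas, A, W):
    """Weighted least-squares data reconciliation.

    Minimise (x - x_meas)^T W (x - x_meas) subject to A x = 0,
    e.g. a mass balance such as flow_in - flow_out_1 - flow_out_2 = 0.
    Closed form: x = x_meas - W^{-1} A^T (A W^{-1} A^T)^{-1} A x_meas.
    """
    Winv = np.linalg.inv(W)
    K = Winv @ A.T @ np.linalg.inv(A @ Winv @ A.T)
    return x_meas - K @ (A @ x_meas)

# Example: one stream splits into two; raw readings violate the balance.
A = np.array([[1.0, -1.0, -1.0]])      # x0 = x1 + x2
W = np.diag([1.0, 4.0, 4.0])           # inverse-variance weights
x_meas = np.array([10.3, 6.1, 3.8])    # raw instrument readings
print(reconcile(x_meas, A, W))         # estimates now satisfy the balance
```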
Abstract:
The use of data reconciliation techniques can considerably reduce the inaccuracy of process data due to measurement errors. This in turn results in improved control system performance and process knowledge. Dynamic data reconciliation techniques are applied to a model-based predictive control scheme. It is shown through simulations on a chemical reactor system that the overall performance of the model-based predictive controller is enhanced considerably when data reconciliation is applied. The dynamic data reconciliation techniques used include a combined strategy for the simultaneous identification of outliers and systematic bias.
Abstract:
Subantarctic mode water (SAMW) has been shown to be a good indicator of anthropogenic climate change in coupled climate models. SAMW in a coupled climate model and the response of modeled SAMW to increasing CO2 are examined in detail. How SAMW adjusts from climatological values toward a new equilibrium in the coupled model, with different climatological temperature and salinity properties, is shown. The combined formation rate of SAMW and Antarctic intermediate water is calculated as approximately 18 Sv (Sv ≡ 10⁶ m³ s⁻¹) in the Indian sector of the Southern Ocean, slightly lower than climatological values would suggest. When forced with increasing CO2, SAMW is produced at a similar rate but at lower densities. This result suggests that the rate of heat uptake in this part of the ocean will be unchanged by anthropogenic forcing. The important signal in the response of SAMW is the shift to colder and fresher values on isopycnals that is believed to be related to changes in thermodynamic surface forcing. It is shown that, given uniform forcing, SAMW is expected to enhance the signal relative to other water masses. Independent increases in surface heating or freshwater forcing can produce changes similar to those observed, but the two different types of forcing are distinguishable using separate forcing experiments, hodographs, and passive anomaly tracers. The changes in SAMW forced by increasing CO2 are dominated by surface heating, but changes to freshwater fluxes are also important.
Abstract:
The IntFOLD-TS method was developed according to the guiding principle that model quality assessment would be the most critical stage of our template-based modelling pipeline. The IntFOLD-TS method therefore first generates numerous alternative models, using in-house versions of several different sequence-structure alignment methods, which are then ranked in terms of global quality using our top-performing quality assessment method, ModFOLDclust2. In addition to the predicted global quality scores, predictions of local errors are provided in the resulting coordinate files, using scores that represent the predicted deviation of each residue in the model from the equivalent residue in the native structure. The IntFOLD-TS method was found to generate high-quality 3D models for many of the CASP9 targets, whilst also providing highly accurate predictions of their per-residue errors. This information may help to make the 3D models produced by the IntFOLD-TS method more useful for guiding future experimental work.
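Assuming the per-residue error scores are written to the B-factor column of PDB-format model files, a common convention for model quality predictors though stated here as an assumption rather than a documented detail of IntFOLD-TS, they could be extracted as follows (the file name is hypothetical):

```python
def per_residue_errors(pdb_path):
    """Read predicted per-residue errors from the B-factor column
    (columns 61-66) of CA atom records in a PDB-format model file."""
    errors = {}
    with open(pdb_path) as fh:
        for line in fh:
            if line.startswith("ATOM") and line[12:16].strip() == "CA":
                errors[int(line[22:26])] = float(line[60:66])
    return errors

# errs = per_residue_errors("model_1.pdb")   # hypothetical file name
# worst = max(errs, key=errs.get)            # residue with largest error
```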
Abstract:
We investigate a simplified form of variational data assimilation in a fully nonlinear framework with the aim of extracting dynamical development information from a sequence of observations over time. Information on the vertical wind profile, w(z), and profiles of temperature, T(z, t), and total water content, qt(z, t), as functions of height, z, and time, t, is converted to brightness temperatures at a single horizontal location by defining a two-dimensional (vertical and time) variational assimilation testbed. The profiles of T and qt are updated using a vertical advection scheme. A basic cloud scheme is used to obtain the fractional cloud amount and, when combined with the temperature field, this information is converted into a brightness temperature using a simple radiative transfer scheme. It is shown that our model exhibits realistic behaviour with regard to the prediction of cloud, but the effects of nonlinearity become non-negligible in the variational data assimilation algorithm. A careful analysis of the application of the data assimilation scheme to this nonlinear problem is presented, the salient difficulties are highlighted, and suggestions for further developments are discussed.
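A minimal sketch of the kind of vertical advection update used to evolve the profiles; the first-order upwind scheme below is generic, not necessarily the one used in the paper:

```python
import numpy as np

def advect_upwind(q, w, dz, dt):
    """One first-order upwind step of dq/dt = -w dq/dz on a uniform
    vertical grid; w may vary with height. Boundary values are held."""
    qn = q.copy()
    for k in range(1, len(q) - 1):
        if w[k] >= 0.0:   # wind from below: difference against level k-1
            qn[k] = q[k] - w[k] * dt / dz * (q[k] - q[k - 1])
        else:             # wind from above: difference against level k+1
            qn[k] = q[k] - w[k] * dt / dz * (q[k + 1] - q[k])
    return qn

z = np.linspace(0.0, 10e3, 101)    # height grid (m)
T = 288.0 - 6.5e-3 * z             # initial temperature profile (K)
w = np.full_like(z, 0.1)           # uniform 0.1 m/s updraught
T = advect_upwind(T, w, dz=z[1] - z[0], dt=50.0)   # CFL number 0.05
```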
Abstract:
This paper builds upon previous research on currency bands and provides a model for the Colombian peso. Stochastic differential equations are combined with information on the Colombian currency band to estimate competing models of the behaviour of the Colombian peso within the limits of the band. The moments of the density function of the simulated returns adequately describe most of the characteristics of the sample returns data. The factor included to account for intra-marginal intervention, performed to drive the rate towards the central parity, accounts for only 6.5% of the daily change, which supports the argument that intervention, if performed by the Central Bank, is not directed at pushing the currency towards the limits. Moreover, the credibility of the central bank's (Banco de la República) ability to defend the band appears to be low.
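For illustration, here is a sketch of the kind of target-zone process such competing models discretise: an Euler-Maruyama simulation of mean reversion towards the central parity inside the band. The parameter values and the hard clipping at the band limits are assumptions made for the example, not estimates from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

parity, half_width = 0.0, 0.14   # log-rate central parity and band half-width
kappa, sigma = 0.065, 0.01       # intra-marginal pull, daily volatility
n = 2000                         # number of daily steps

x = np.empty(n)
x[0] = 0.02
for t in range(1, n):
    pull = -kappa * (x[t - 1] - parity)            # drift towards parity
    x[t] = x[t - 1] + pull + sigma * rng.normal()  # Euler-Maruyama, dt = 1 day
    x[t] = np.clip(x[t], parity - half_width, parity + half_width)

returns = np.diff(x)
print(returns.std(), np.abs(x).max())
```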
Abstract:
Cloud imagery is not currently used in numerical weather prediction (NWP) to extract the type of dynamical information that experienced forecasters have extracted subjectively for many years. For example, rapidly developing mid-latitude cyclones have characteristic signatures in cloud imagery that are most fully appreciated from a sequence of images rather than from a single image. The Met Office is currently developing a technique to extract dynamical development information from satellite imagery using its full incremental 4D-Var (four-dimensional variational data assimilation) system. We investigate a simplified form of this technique in a fully nonlinear framework. We convert information on the vertical wind field, w(z), and profiles of temperature, T(z, t), and total water content, qt(z, t), as functions of height, z, and time, t, to a single brightness temperature by defining a 2D (vertical and time) variational assimilation testbed. The profiles of w, T and qt are updated using a simple vertical advection scheme. We define a basic cloud scheme to obtain the fractional cloud amount and, when combined with the temperature field, we convert this information into a brightness temperature, having developed a simple radiative transfer scheme. With the exception of some matrix inversion routines, all our code is developed from scratch. Throughout the development process we test all aspects of our 2D assimilation system, and then run identical twin experiments to try to recover information on the vertical velocity from a sequence of observations of brightness temperature. This thesis contains a comprehensive description of our nonlinear models and assimilation system, and the first experimental results.
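A sketch of the simplest radiative-transfer closure consistent with the description above, treating the observed brightness temperature as a cloud-fraction-weighted blend of a clear-sky value and the cloud-top temperature; this particular form is an assumption for illustration, not the thesis's scheme:

```python
def brightness_temperature(cloud_frac, t_cloud_top, t_clear=288.0):
    """Toy observation operator: blend clear-sky and cloudy contributions.

    cloud_frac  : fractional cloud amount in [0, 1] from the cloud scheme
    t_cloud_top : temperature (K) at the diagnosed cloud top
    t_clear     : effective clear-sky brightness temperature (K)
    """
    return (1.0 - cloud_frac) * t_clear + cloud_frac * t_cloud_top

print(brightness_temperature(0.6, 235.0))   # mostly cloudy scene, cold tops
```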
Abstract:
This paper describes the implementation of a 3D variational (3D-Var) data assimilation scheme for a morphodynamic model applied to Morecambe Bay, UK. A simple decoupled hydrodynamic and sediment transport model is combined with a data assimilation scheme to investigate the ability of such methods to improve the accuracy of the predicted bathymetry. The inverse forecast error covariance matrix is modelled using a Laplacian approximation, which is calibrated for the required length-scale parameter. Calibration is also performed for the Soulsby-van Rijn sediment transport equations. The data used for assimilation comprise waterlines derived from SAR imagery covering the entire period of the model run, and swath bathymetry data collected by a ship-borne survey for one date towards the end of the model run. A LiDAR survey of the entire bay, carried out in November 2005, is used for validation. Comparing the predictive ability of the model alone with that of the model-forecast-assimilation system demonstrates that data assimilation significantly improves forecast skill. An investigation of assimilating the swath bathymetry as well as the waterlines shows that the overall improvement is initially large but decreases over time as the bathymetry evolves away from that observed by the survey. Combining the calibration runs into a pseudo-ensemble provides a higher skill score than a single optimized model run. A brief comparison of the Optimal Interpolation assimilation method with the 3D-Var method shows that the two schemes give similar results.
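For reference, the standard 3D-Var cost function that such a scheme minimises, in conventional notation (x^b is the background bathymetry, B the forecast error covariance whose inverse is modelled by the Laplacian approximation, y the observations, H the observation operator, R the observation error covariance):

```latex
J(\mathbf{x}) = \tfrac{1}{2}(\mathbf{x}-\mathbf{x}^{b})^{\mathrm{T}}
                \mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}^{b})
              + \tfrac{1}{2}\bigl(\mathbf{y}-H(\mathbf{x})\bigr)^{\mathrm{T}}
                \mathbf{R}^{-1}\bigl(\mathbf{y}-H(\mathbf{x})\bigr)
```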
Abstract:
Evidence from in vivo and in vitro studies suggests that the consumption of pro- and prebiotics may inhibit colon carcinogenesis; however, the mechanisms involved have, thus far, proved elusive. There are some indications from animal studies that the effects are being exerted during the promotion stage of carcinogenesis. One feature of the promotion stage of colorectal cancer is the disruption of tight junctions, leading to a loss of integrity across the intestinal barrier. We have used the Caco-2 human adenocarcinoma cell line as a model for the intestinal epithelia. Trans-epithelial electrical resistance measurements indicate Caco-2 monolayer integrity, and we recorded changes to this integrity following exposure to the fermentation products of selected probiotics and prebiotics, in the form of nondigestible oligosaccharides (NDOs). Our results indicate that NDOs themselves exert varying, but generally minor, effects upon the strength of the tight junctions, whereas the fermentation products of probiotics and NDOs tend to raise tight junction integrity above that of the controls. This effect was bacterial species and oligosaccharide specific. Bifidobacterium Bb 12 was particularly effective, as were the fermentation products of Raftiline and Raftilose. We further investigated the ability of Raftilose fermentations to protect against the negative effects of deoxycholic acid (DCA) upon tight junction integrity. We found protection to be species dependent and dependent upon the presence of the fermentation products in the media at the same time as or after exposure to the DCA. Results suggest that the Raftilose fermentation products may prevent disruption of the intestinal epithelial barrier function during damage by tumor promoters.