878 results for Mixed model
Abstract:
Model catalysts of Pd nanoparticles and films on TiO2(110) were fabricated by metal vapour deposition (MVD). Molecular beam measurements show that the particles are active for CO adsorption, with a global sticking probability of 0.25, but that they are deactivated by annealing above 600 K, an effect indicative of SMSI. The Pd nanoparticles are single crystals oriented with their (111) plane parallel to the surface plane of the titania. Analysis of the surface by atomic resolution STM shows that new structures have formed at the surface of the Pd nanoparticles and films after annealing above 800 K. There are only two structures: a zigzag arrangement and a much more complex "pinwheel" structure. The former has a unit cell containing 7 atoms, and the latter a larger unit cell containing 25 atoms. These new structures are due to an overlayer of titania that has appeared on the surface of the Pd nanoparticles after annealing, and it is proposed that the surface layer that causes the SMSI effect is a mixed alloy of Pd and Ti, with only two discrete ratios of atoms: Pd/Ti of 1:1 (pinwheel) and 1:2 (zigzag). We propose that it is these structures that cause the SMSI effect.
Abstract:
In this study, the processes affecting sea surface temperature variability over the 1992–98 period, encompassing the very strong 1997–98 El Niño event, are analyzed. A tropical Pacific Ocean general circulation model, forced by a combination of weekly ERS1–2 and TAO wind stresses, and climatological heat and freshwater fluxes, is first validated against observations. The model reproduces the main features of the tropical Pacific mean state, despite a weaker than observed thermal stratification, a 0.1 m s−1 too strong (weak) South Equatorial Current (North Equatorial Countercurrent), and a slight underestimate of the Equatorial Undercurrent. Good agreement is found between the model dynamic height and TOPEX/Poseidon sea level variability, with correlation/rms differences of 0.80/4.7 cm on average in the 10°N–10°S band. The model sea surface temperature variability is a bit weak, but reproduces the main features of interannual variability during the 1992–98 period. The model compares well with the TAO current variability at the equator, with correlation/rms differences of 0.81/0.23 m s−1 for surface currents. The model therefore reproduces well the observed interannual variability, with wind stress as the only interannually varying forcing. This good agreement with observations provides confidence in the comprehensive three-dimensional circulation and thermal structure of the model. A close examination of mixed layer heat balance is thus undertaken, contrasting the mean seasonal cycle of the 1993–96 period and the 1997–98 El Niño. In the eastern Pacific, cooling by exchanges with the subsurface (vertical advection, mixing, and entrainment), the atmospheric forcing, and the eddies (mainly the tropical instability waves) are the three main contributors to the heat budget. In the central–western Pacific, the zonal advection by low-frequency currents becomes the main contributor. Westerly wind bursts (in December 1996 and March and June 1997) were found to play a decisive role in the onset of the 1997–98 El Niño. They contributed to the early warming in the eastern Pacific because the downwelling Kelvin waves that they excited diminished subsurface cooling there. But it is mainly through eastward advection of the warm pool that they generated temperature anomalies in the central Pacific. The end of El Niño can be linked to the large-scale easterly anomalies that developed in the western Pacific and spread eastward, from the end of 1997 onward. In the far-western Pacific, because of the shallower than normal thermocline, these easterlies cooled the SST by vertical processes. In the central Pacific, easterlies pushed the warm pool back to the west. In the east, they led to a shallower thermocline, which ultimately allowed subsurface cooling to resume and to quickly cool the surface layer.
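For orientation, the mixed layer heat budget whose terms are contrasted here (atmospheric forcing, exchange with the subsurface, horizontal advection, and eddies) is usually written in a form like the following. This is a generic sketch of the standard budget, not the exact decomposition used in the study:

```latex
% Generic mixed layer temperature budget (illustrative grouping of terms)
\frac{\partial T}{\partial t}
  = \underbrace{-\,\mathbf{u}\cdot\nabla_h T}_{\text{horizontal advection}}
  \;+\; \underbrace{\frac{Q_{\mathrm{net}}}{\rho_0\, c_p\, h}}_{\text{atmospheric forcing}}
  \;-\; \underbrace{\left[\frac{w_e\,\Delta T}{h} + \text{vertical advection and mixing}\right]}_{\text{exchange with the subsurface}}
  \;+\; \text{eddy (e.g. TIW) terms}.
```

Here T is the mixed layer temperature, h its depth, Q_net the heat flux absorbed in the layer, w_e the entrainment velocity, and ΔT the temperature jump at the base of the mixed layer.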
Abstract:
Canopy interception of incident precipitation is a critical component of the forest water balance during each of the four seasons. Models have been developed to predict precipitation interception from standard meteorological variables because of acknowledged difficulty in extrapolating direct measurements of interception loss from forest to forest. No known study has compared and validated canopy interception models for a leafless deciduous forest stand in the eastern United States. Interception measurements from an experimental plot in a leafless deciduous forest in northeastern Maryland (39°42'N, 75°5'W) for 11 rainstorms in winter and early spring 2004/05 were compared to predictions from three models. The Mulder model maintains a moist canopy between storms. The Gash model requires few input variables and is formulated for a sparse canopy. The WiMo model optimizes the canopy storage capacity for the maximum wind speed during each storm. All models showed marked underestimates and overestimates for individual storms when the measured ratio of interception to gross precipitation was far more or less, respectively, than the specified fraction of canopy cover. The models predicted the percentage of total gross precipitation (PG) intercepted to within the probable standard error (8.1%) of the measured value: the Mulder model overestimated the measured value by 0.1% of PG; the WiMo model underestimated by 0.6% of PG; and the Gash model underestimated by 1.1% of PG. The WiMo model’s advantage over the Gash model indicates that the canopy storage capacity increases logarithmically with the maximum wind speed. This study has demonstrated that dormant-season precipitation interception in a leafless deciduous forest may be satisfactorily predicted by existing canopy interception models.
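The concluding point, that canopy storage capacity increases logarithmically with the maximum wind speed, can be illustrated with a small least-squares fit. The wind speeds, storage values, and functional form below are illustrative assumptions, not data or code from the WiMo model:

```python
import numpy as np

# Hypothetical per-storm maximum wind speeds (m/s) and inferred canopy storage capacities (mm)
u_max = np.array([2.0, 3.5, 5.0, 7.0, 10.0, 14.0])
storage = np.array([0.30, 0.38, 0.44, 0.50, 0.56, 0.61])

# Fit S = a + b * ln(u_max), the logarithmic dependence the study supports
b, a = np.polyfit(np.log(u_max), storage, 1)
print(f"S(u_max) ~ {a:.3f} + {b:.3f} * ln(u_max) mm")
```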
Abstract:
Estimating the magnitude of Agulhas leakage, the volume flux of water from the Indian to the Atlantic Ocean, is difficult because of the presence of other circulation systems in the Agulhas region. Indian Ocean water in the Atlantic Ocean is vigorously mixed and diluted in the Cape Basin. Eulerian integration methods, where the velocity field perpendicular to a section is integrated to yield a flux, have to be calibrated so that only the flux by Agulhas leakage is sampled. Two Eulerian methods for estimating the magnitude of Agulhas leakage are tested within a high-resolution two-way nested model, with the goal of devising a mooring-based measurement strategy. At the GoodHope line, a section halfway through the Cape Basin, the integrated velocity perpendicular to that line is compared to the magnitude of Agulhas leakage as determined from the transport carried by numerical Lagrangian floats. In the first method, integration is limited to the flux of water warmer and more saline than specific threshold values. These threshold values are determined by maximizing the correlation with the float-determined time series. By using the threshold values, approximately half of the leakage can be measured directly. The total amount of Agulhas leakage can be estimated using a linear regression, within a 90% confidence band of 12 Sv. In the second method, a subregion of the GoodHope line is sought so that integration over that subregion yields an Eulerian flux as close to the float-determined leakage as possible. It appears that when integration is limited within the model to the upper 300 m of the water column within 900 km of the African coast, the time series have the smallest root-mean-square difference. This method yields a root-mean-square error of only 5.2 Sv, but the 90% confidence band of the estimate is 20 Sv. It is concluded that the optimum thermohaline threshold method leads to more accurate estimates even though the directly measured transport is a factor of two lower than the actual magnitude of Agulhas leakage in this model.
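A minimal sketch of the first (thermohaline threshold) method as described here: integrate only the transport of water warmer and more saline than the thresholds, then regress the float-derived leakage on that partial flux. All array shapes, threshold values, and variable names below are illustrative, not taken from the model setup:

```python
import numpy as np

def threshold_transport(v, temp, salt, dx, dz, t_min, s_min):
    """Eulerian transport (Sv) across a section, counting only cells
    warmer than t_min and more saline than s_min."""
    mask = (temp > t_min) & (salt > s_min)
    return np.sum(v * dx * dz * mask, axis=(1, 2)) / 1e6  # m^3/s -> Sv

# Stand-in section data: v, temp, salt with shape (time, depth, along-section)
rng = np.random.default_rng(0)
nt, nz, nx = 120, 20, 50
v = rng.normal(0.05, 0.1, (nt, nz, nx))
temp = rng.normal(12.0, 4.0, (nt, nz, nx))
salt = rng.normal(35.0, 0.3, (nt, nz, nx))
dx, dz = 10e3, 25.0                         # cell width (m) and thickness (m)
float_leakage = rng.normal(15.0, 5.0, nt)   # stand-in Lagrangian-float leakage series (Sv)

partial = threshold_transport(v, temp, salt, dx, dz, t_min=14.0, s_min=35.2)
slope, intercept = np.polyfit(partial, float_leakage, 1)
print(f"Estimated leakage ~ {slope:.2f} * partial_flux + {intercept:.2f} Sv")
```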
Abstract:
A Bayesian method of classifying observations that are assumed to come from a number of distinct subpopulations is outlined. The method is illustrated with simulated data and applied to the classification of farms according to their level and variability of income. The resultant classification shows a greater diversity of technical characteristics within farm types than is conventionally the case. The range of mean farm income between groups in the new classification is wider than that of the conventional method, and the variability of income within groups is narrower. Results show that the highest income group in 2000 included large specialist dairy farmers and pig and poultry producers, whilst in 2001 it included large and small specialist dairy farms and large mixed dairy and arable farms. In both years the lowest income group is dominated by non-milk producing livestock farms.
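The abstract does not give the algorithm, but the general approach, assigning observations to latent subpopulations and classifying them by membership probability, can be sketched with a Gaussian mixture model. The data, number of groups, and use of scikit-learn here are purely illustrative and are not the paper's Bayesian procedure:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical farm records: two features, mean income and income variability
low  = rng.normal([15, 5],  [4, 1],  size=(100, 2))
mid  = rng.normal([40, 10], [6, 2],  size=(100, 2))
high = rng.normal([90, 25], [12, 5], size=(100, 2))
farms = np.vstack([low, mid, high])

# Fit a three-component mixture and classify farms by posterior responsibility
gmm = GaussianMixture(n_components=3, random_state=0).fit(farms)
groups = gmm.predict(farms)             # hard classification into subpopulations
posteriors = gmm.predict_proba(farms)   # membership probabilities per farm
print(np.bincount(groups), posteriors[0].round(2))
```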
Abstract:
Presented herein is an experimental design that allows the effects of several radiative forcing factors on climate to be estimated as precisely as possible from a limited suite of atmosphere-only general circulation model (GCM) integrations. The forcings include the combined effect of observed changes in sea surface temperatures, sea ice extent, stratospheric (volcanic) aerosols, and solar output, plus the individual effects of several anthropogenic forcings. A single linear statistical model is used to estimate the forcing effects, each of which is represented by its global mean radiative forcing. The strong colinearity in time between the various anthropogenic forcings poses a technical problem that is overcome through the design of the experiment. This design uses every combination of anthropogenic forcing rather than a few highly replicated ensembles, as is more commonly used in climate studies. Not only is this design highly efficient for a given number of integrations, but it also allows the estimation of (nonadditive) interactions between pairs of anthropogenic forcings. The simulated land surface air temperature changes since 1871 have been analyzed. The changes in natural and oceanic forcing, which itself contains some forcing from anthropogenic and natural influences, have the most influence. For the global mean, increasing greenhouse gases and the indirect aerosol effect had the largest anthropogenic effects. It was also found that an interaction between these two anthropogenic effects exists in the atmosphere-only GCM. This interaction is similar in magnitude to the individual effects of changing tropospheric and stratospheric ozone concentrations or to the direct (sulfate) aerosol effect. Various diagnostics are used to evaluate the fit of the statistical model. For the global mean, this shows that the land temperature response is proportional to the global mean radiative forcing, reinforcing the use of radiative forcing as a measure of climate change. The diagnostic tests also show that the linear model was suitable for analyses of land surface air temperature at each GCM grid point. Therefore, the linear model provides precise estimates of the space-time signals for all forcing factors under consideration. For simulated 50-hPa temperatures, results show that tropospheric ozone increases have contributed to stratospheric cooling over the twentieth century almost as much as changes in well-mixed greenhouse gases.
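The estimation step described here, a linear model fitted to every combination of forcings with pairwise interaction terms, amounts to ordinary least squares on a full-factorial design matrix. The two-factor example below uses invented factor names and response values and only illustrates that step, not the GCM experiment itself:

```python
import itertools
import numpy as np

# Full 2^2 factorial over two hypothetical anthropogenic forcings (off/on = 0/1)
levels = np.array(list(itertools.product([0, 1], repeat=2)), dtype=float)
ghg, aer = levels[:, 0], levels[:, 1]

# Design matrix: intercept, main effects, and the pairwise interaction
X = np.column_stack([np.ones(len(levels)), ghg, aer, ghg * aer])

# Hypothetical ensemble-mean temperature responses (K) for the four combinations
y = np.array([0.0, 0.9, -0.3, 0.8])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["intercept", "ghg", "aerosol", "ghg_x_aerosol"], beta.round(3))))
```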
Abstract:
The successful implementation of just-in-time (JIT) purchasing policy in many industries has prompted many companies that still use the economic order quantity (EOQ) purchasing policy to ponder whether they should switch to the JIT purchasing policy. Despite existing studies that directly compare the costs between the EOQ and JIT purchasing systems, this decision is still difficult to make, especially when price discounts have to be considered. JIT purchasing may not always be successful, even for plants that have experienced, or can take advantage of, physical space reduction through JIT operations. Hence, the objective of this study is to expand on a classical EOQ with a price discount model to derive the EOQ–JIT cost indifference point. The objective was tested and achieved through a survey and case study conducted in the ready-mixed concrete industry in Singapore.
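A minimal sketch of the kind of comparison involved: annual cost under an EOQ policy with a discounted unit price versus a JIT purchase price, solved for the annual demand at which the two are equal. The cost structure and numbers are illustrative textbook assumptions, not the paper's extended model (which also reflects factors such as physical space):

```python
import math

def eoq_annual_cost(demand, order_cost, holding_cost, unit_price):
    """Annual purchase + ordering + holding cost at the optimal EOQ lot size."""
    return unit_price * demand + math.sqrt(2 * demand * order_cost * holding_cost)

def jit_annual_cost(demand, jit_unit_price):
    """Annual cost under JIT purchasing: purchase cost only (no lots held)."""
    return jit_unit_price * demand

def indifference_demand(order_cost, holding_cost, eoq_price, jit_price):
    """Annual demand at which the EOQ (discounted-price) and JIT costs are equal."""
    return 2 * order_cost * holding_cost / (jit_price - eoq_price) ** 2

# Illustrative numbers only
S, H = 120.0, 4.0            # order cost ($/order), holding cost ($/unit/yr)
p_eoq, p_jit = 48.0, 50.0    # discounted EOQ unit price vs JIT unit price ($)
D_star = indifference_demand(S, H, p_eoq, p_jit)
print(f"Cost indifference at ~{D_star:.0f} units/yr")
print(eoq_annual_cost(D_star, S, H, p_eoq), jit_annual_cost(D_star, p_jit))
```

Below the indifference demand the EOQ policy with discount is cheaper in this sketch; above it, JIT is cheaper.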
Abstract:
Stirred, pH-controlled batch cultures were carried out with faecal inocula and various chitosans to investigate the fermentation of chitosan derivatives by the human gut flora. Changes in bacterial levels and short chain fatty acids were measured over time. Low, medium and high molecular weight chitosan caused a decrease in bacteroides, bifidobacteria, clostridia and lactobacilli. A similar pattern was seen with chitosan oligosaccharide (COS). Butyrate levels also decreased. A three-stage fermentation model of the human colon was used for investigation of the metabolism of COS. In a region representing the proximal colon, clostridia decreased while lactobacilli increased. In the region representing the transverse colon, bacteroides and clostridia increased. Distally, a small increase in bacteroides occurred. Butyrate levels increased. Under the highly competitive conditions of the human colon, many members of the microflora are unable to compete for chitosans of low, medium or high molecular weight. COS were more easily utilised and, when added to an in vitro colonic model, led to increased production of butyrate, but some populations of potentially detrimental bacteria also increased.
Abstract:
Aims: Certain milk factors may promote the growth of a gastrointestinal microflora predominated by bifidobacteria and may aid in overcoming enteric infections. This may explain why breast-fed infants experience fewer intestinal infections than their formula-fed counterparts. The effect of formula supplementation with two such factors was investigated in this study. Methods and Results: Infant faecal specimens were used to ferment formulae supplemented with glycomacropeptide (GMP) and alpha-lactalbumin (alpha-la) in a two-stage compound continuous culture model. At steady state, all fermenter vessels were inoculated with 5 ml of 0.1 M phosphate-buffered saline (pH 7.2) containing 10(8) CFU ml(-1) of either enteropathogenic Escherichia coli 2348/69 (O127:H6) or Salmonella serotype Typhimurium (DSMZ 5569). Bacteriology was determined by independent fluorescence in situ hybridization. Vessels that contained breast milk (BM), as well as alpha-la and GMP supplemented formula had stable total counts of bifidobacteria while lactobacilli increased significantly only in vessels with breast milk. Bacteroides, clostridia and E. coli decreased significantly in all three groups prior to pathogen addition. Escherichia coli counts decreased in vessels containing BM and alpha-la while Salmonella decreased significantly in all vessels containing BM, alpha-la and GMP. Acetate was the predominant acid. Significance and Impact of the Study: Supplementation of infant formulae with appropriate milk proteins may be useful in mimicking the beneficial bacteriological effects of breast milk.
Abstract:
Crystal structure determinations of the adducts of (-)-sparteine and PhLi, (-)-sparteine and PhOLi, and (-)-sparteine and PhLi/PhOLi reveal a four-membered ring with two lithium centers, each capped by a (-)-sparteine ligand, as the central motif of all structures. Quantum-chemical calculations show that the mixed aggregate [PhLi·PhOLi·2(-)-sparteine] is energetically more favorable than the model system {1/2[PhLi·(-)-sparteine]2 + 1/2[PhOLi·(-)-sparteine]2}.
A refined LEED analysis of water on Ru{0001}: an experimental test of the partial dissociation model
Abstract:
Despite a number of earlier studies which seemed to confirm molecular adsorption of water on close-packed surfaces of late transition metals, new controversy has arisen over a recent theoretical work by Feibelman, according to which partial dissociation occurs on the Ru{0001} surface leading to a mixed (H2O + OH + H) superstructure. Here, we present a refined LEED-IV analysis of the (√3 × √3)R30°-D2O-Ru{0001} structure, testing explicitly this new model by Feibelman. Our results favour the model proposed earlier by Held and Menzel assuming intact water molecules with almost coplanar oxygen atoms and out-of-plane hydrogen atoms atop the slightly higher oxygen atoms. The partially dissociated model with an almost identical arrangement of oxygen atoms can, however, not unambiguously be excluded, especially when the single hydrogen atoms are not present in the surface unit cell. In contrast to the earlier LEED-IV analysis, we can, however, clearly exclude a buckled geometry of oxygen atoms.
Abstract:
Two mixed-bridged one-dimensional (1D) polynuclear complexes, [Cu3L2(μ1,1-N3)2(μ-Cl)Cl]n (1) and {[Cu3L2(μ-Cl)3Cl]·0.46CH3OH}n (2), have been synthesized using the tridentate reduced Schiff-base ligand HL (2-[(2-dimethylamino-ethylamino)-methyl]-phenol). The complexes have been characterized by X-ray structural analyses and variable-temperature magnetic susceptibility measurements. In both complexes the basic trinuclear angular units are joined together by weak chloro bridges to form a 1D chain. The trinuclear structure of 1 is composed of two terminal square planar [Cu(L)(μ1,1-N3)] units connected by a central Cu(II) atom through bridging nitrogen atoms of end-on azido ligands and the phenoxo oxygen atom of the tridentate ligand. These four coordinating atoms along with a chloride ion form a distorted trigonal bipyramidal geometry around the central Cu(II). The structure of 2 is similar, the only difference being a Cl bridge replacing the μ1,1-N3 bridge in the trinuclear unit. The magnetic properties of both trinuclear complexes can be very well reproduced with a simple linear symmetrical trimer model (H = J S_i·S_{i+1}) with only one intracluster exchange coupling (J), including a weak intertrimer interaction (zj) treated within the molecular field approximation. This model provides very satisfactory fits for both complexes over the whole temperature range with the following parameters: g = 2.136(3), J = 93.9(3) cm^-1 and zj = -0.90(3) cm^-1 (z = 2) for 1, and g = 2.073(7), J = -44.9(4) cm^-1 and zj = -1.26(6) cm^-1 (z = 2) for 2.
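For reference, the energy spectrum behind a symmetric linear trimer fit of this kind follows from the standard Kambe vector-coupling argument; the sketch below assumes S1 = S2 = S3 = 1/2 (Cu(II)) and the H = J S_i·S_{i+1} sign convention quoted above, and is a generic derivation rather than the authors' own working:

```latex
% Kambe coupling for H = J (\vec S_1\cdot\vec S_2 + \vec S_2\cdot\vec S_3), S_i = 1/2.
% With S_{13} = S_1 + S_3 and S_T = S_{13} + S_2:
H = J\,\vec S_2\cdot\vec S_{13}
  = \frac{J}{2}\left[S_T(S_T+1) - S_{13}(S_{13}+1) - S_2(S_2+1)\right],
% giving the three spin levels
E\!\left(S_T=\tfrac{1}{2},\,S_{13}=0\right) = 0,\qquad
E\!\left(S_T=\tfrac{1}{2},\,S_{13}=1\right) = -J,\qquad
E\!\left(S_T=\tfrac{3}{2},\,S_{13}=1\right) = \tfrac{J}{2}.
```

The susceptibility then follows from the Van Vleck equation over these levels, with the weak intertrimer coupling zj added as a molecular-field correction.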
Abstract:
Road transport and shipping are copious sources of aerosols, which exert a significant radiative forcing, compared to, for example, the CO2 emitted by these sectors. An advanced atmospheric general circulation model, coupled to a mixed-layer ocean, is used to calculate the climate response to the direct radiative forcing from such aerosols. The cases considered include imposed distributions of black carbon and sulphate aerosols from road transport, and sulphate aerosols from shipping; these are compared to the climate response due to CO2 increases. The difficulties in calculating the climate response due to small forcings are discussed, as the actual forcings have to be scaled by large amounts to enable a climate response to be easily detected. Despite the much greater geographical inhomogeneity in the sulphate forcing, the patterns of zonal and annual-mean surface temperature response (although opposite in sign) closely resemble that resulting from homogeneous changes in CO2. The surface temperature response to black carbon aerosols from road transport is shown to be notably non-linear in the scaling applied, probably due to the semi-direct response of clouds to these aerosols. For the aerosol forcings considered here, the most widespread method of calculating radiative forcing significantly overestimates their effect, relative to CO2, compared to surface temperature changes calculated using the climate model.
Abstract:
This study describes the turbulent processes in the upper ocean boundary layer forced by a constant surface stress in the absence of the Coriolis force using large-eddy simulation. The boundary layer that develops has a two-layer structure, a well-mixed layer above a stratified shear layer. The depth of the mixed layer is approximately constant, whereas the depth of the shear layer increases with time. The turbulent momentum flux varies approximately linearly from the surface to the base of the shear layer. There is a maximum in the production of turbulence through shear at the base of the mixed layer. The magnitude of the shear production increases with time. The increase is mainly a result of the increase in the turbulent momentum flux at the base of the mixed layer due to the increase in the depth of the boundary layer. The length scale for the shear turbulence is the boundary layer depth. A simple scaling is proposed for the magnitude of the shear production that depends on the surface forcing and the average mixed layer current. The scaling can be interpreted in terms of the divergence of a mean kinetic energy flux. A simple bulk model of the boundary layer is developed to obtain equations describing the variation of the mixed layer and boundary layer depths with time. The model shows that the rate at which the boundary layer deepens does not depend on the stratification of the thermocline. The bulk model shows that the variation in the mixed layer depth is small as long as the surface buoyancy flux is small.
Abstract:
We show that the four-dimensional variational data assimilation method (4DVar) can be interpreted as a form of Tikhonov regularization, a very familiar method for solving ill-posed inverse problems. It is known from image restoration problems that L1-norm penalty regularization recovers sharp edges in the image more accurately than Tikhonov, or L2-norm, penalty regularization. We apply this idea from stationary inverse problems to 4DVar, a dynamical inverse problem, and give examples for an L1-norm penalty approach and a mixed total variation (TV) L1–L2-norm penalty approach. For problems with model error where sharp fronts are present and the background and observation error covariances are known, the mixed TV L1–L2-norm penalty performs better than either the L1-norm method or the strong constraint 4DVar (L2-norm) method. A strength of the mixed TV L1–L2-norm regularization is that, in the case where a simplified form of the background error covariance matrix is used, it produces a much more accurate analysis than 4DVar. The method thus has the potential in numerical weather prediction to overcome operational problems with poorly tuned background error covariance matrices.
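The L2 versus L1 contrast described here can be sketched on a toy linear inverse problem with a sparse true state: Tikhonov has a closed-form solution, while the L1 penalty is minimised below by plain iterative soft thresholding (ISTA). In the paper the L1/TV penalty acts on gradients of the state within 4DVar (which is what preserves fronts), so this is only a generic illustration of the regularisation behaviour, with an invented operator and data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy ill-posed problem: y = H x_true + noise, with a sparse true state
n = 60
x_true = np.zeros(n)
x_true[[10, 30, 45]] = [1.5, -2.0, 1.0]
H = rng.normal(size=(40, n)) / np.sqrt(n)      # underdetermined observation operator
y = H @ x_true + 0.01 * rng.normal(size=40)

lam = 1e-2

# Tikhonov / L2 penalty: closed-form solution of the normal equations
x_l2 = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)

# L1 penalty via ISTA (gradient step on the misfit, then soft thresholding)
step = 1.0 / np.linalg.norm(H, 2) ** 2
x_l1 = np.zeros(n)
for _ in range(500):
    grad = H.T @ (H @ x_l1 - y)
    z = x_l1 - step * grad
    x_l1 = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)

print("L2 error:", np.linalg.norm(x_l2 - x_true).round(3),
      "L1 error:", np.linalg.norm(x_l1 - x_true).round(3))
```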