929 results for Adsorption. Zeolite 13X. Langmuir model. Dynamic modeling. Pyrolysis of sewage sludge
Abstract:
The foundation construction process is a key determinant of success in construction engineering. Among the many deep-excavation methods, the diaphragm wall method is used more frequently in Taiwan than anywhere else in the world. Traditionally, the sequencing of diaphragm wall unit construction activities is established phase by phase using heuristics. However, this approach creates conflicts between the final phase of the project and unit construction, and it affects the planned construction time. To avoid this situation, we apply management science to diaphragm wall unit construction and formulate the sequencing task as a multi-objective combinatorial optimization problem. Because the mathematical model is multi-objective and combinatorially explosive (the problem is NP-complete), we propose the 2-type Self-Learning Neural Network (SLNN) to solve the sequencing problem for N = 12, 24, and 36 diaphragm wall units. To assess the reliability of the results, this study compares the SLNN against a random search method. The tests show that the SLNN is superior to random search in both solution quality and solving efficiency.
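Neither the SLNN architecture nor the objective function is given in the abstract. Purely to illustrate the random-search baseline the study compares against, here is a minimal Python sketch that samples random construction sequences under a hypothetical adjacency-penalty cost; every name and value in it is illustrative, not taken from the paper:

    import random

    N = 12                                  # number of diaphragm wall units

    def cost(seq):
        # hypothetical objective: penalise building adjacent units back to back
        return sum(1 for a, b in zip(seq, seq[1:]) if abs(a - b) == 1)

    best_seq, best_cost = None, float("inf")
    for _ in range(10000):
        seq = random.sample(range(N), N)    # one random construction sequence
        c = cost(seq)
        if c < best_cost:
            best_seq, best_cost = seq, c
    print(best_cost, best_seq)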
Abstract:
The burning of tobacco creates various types of free radicals that have been reported to be biologically active. Some radicals are transient but can initiate catalytic cycles that generate other free radicals. Other radicals are environmentally persistent and can exist in total particulate matter (TPM) for extended periods. In spite of their importance, little is known about the precursors of these radicals or the pyrolysis/combustion conditions under which they are formed. We performed studies of the formation of radicals from the gas-phase pyrolysis and oxidative pyrolysis of hydroquinone (HQ) and catechol (CT) between 750 and 1000 °C, and of phenol from 500 to 1000 °C. The initial electron paramagnetic resonance (EPR) spectra were complex, indicating the presence of multiple radicals. Using matrix annealing and microwave power saturation techniques, phenoxyl, cyclopentadienyl, and peroxyl radicals were identified, but only cyclopentadienyl radicals were stable above 750 °C.
Abstract:
Response to dietary fat manipulation is highly heterogeneous, yet generic population-based recommendations aimed at reducing the burden of CVD are given. The APOE epsilon genotype has been proposed to be an important determinant of this response. The present study reports on the dietary strategy employed in the SATgenɛ (SATurated fat and gene APOE) study to assess the impact of altered fat content and composition on the blood lipid profile according to the APOE genotype. A flexible dietary exchange model was developed to implement three isoenergetic diets: a low-fat (LF) diet (target composition: 24 % of energy (%E) as fat, 8 %E SFA and 59 %E carbohydrate), a high-saturated-fat (HSF) diet (38 %E fat, 18 %E SFA and 45 %E carbohydrate) and a HSF-DHA diet (the HSF diet with 3 g DHA/d). Free-living participants (n 88; n 44 E3/E3 and n 44 E3/E4) followed the diets in a sequential design, for 8 weeks each, using commercially available spreads, oils and snacks with specific fatty acid profiles. Dietary compositional targets were broadly met, with significantly higher total fat (42·8 %E and 41·0 %E v. 25·1 %E, P ≤ 0·0011) and SFA (19·3 %E and 18·6 %E v. 8·33 %E, P ≤ 0·0011) intakes during the HSF and HSF-DHA diets compared with the LF diet, in addition to significantly higher DHA intake during the HSF-DHA diet (P ≤ 0·0011). Plasma phospholipid fatty acid analysis revealed a 2-fold increase in the proportion of DHA after consumption of the HSF-DHA diet for 8 weeks, which was independent of the APOE genotype. In summary, the dietary strategy was successfully implemented in a free-living population, resulting in well-tolerated diets which broadly met the dietary targets set.
Abstract:
Acrylamide is formed from reducing sugars and asparagine during the preparation of French fries. The commercial preparation of French fries is a multistage process involving the preparation of frozen, par-fried potato strips for distribution to catering outlets, where they are finish-fried. The initial blanching, treatment in glucose solution, and par-frying steps are crucial because they determine the levels of precursors present at the beginning of the finish-frying process. To minimize the quantities of acrylamide in cooked fries, it is important to understand the impact of each stage on the formation of acrylamide. Acrylamide, amino acids, sugars, moisture, fat, and color were monitored at time intervals during the frying of potato strips that had been dipped in various concentrations of glucose and fructose during a typical pretreatment. A mathematical model based on the fundamental chemical reaction pathways of the finish-frying was developed, incorporating moisture and temperature gradients in the fries. This showed the contribution of both glucose and fructose to the generation of acrylamide and accurately predicted the acrylamide content of the final fries.
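The abstract does not state the model equations. As a rough illustration of the kind of lumped reaction-pathway model it describes, the Python sketch below integrates a hypothetical two-step scheme (sugar + asparagine -> Maillard intermediate -> acrylamide, with acrylamide loss); the species, rate constants, and frying time are illustrative assumptions, not the paper's fitted parameters, and the real model also resolves moisture and temperature gradients:

    from scipy.integrate import solve_ivp

    k1, k2, k3 = 0.02, 0.05, 0.01          # rate constants (illustrative only)

    def rhs(t, y):
        sugar, asn, inter, acr = y
        r1 = k1 * sugar * asn              # sugar + asparagine -> intermediate
        r2 = k2 * inter                    # intermediate -> acrylamide
        r3 = k3 * acr                      # acrylamide degradation/loss
        return [-r1, -r1, r1 - r2, r2 - r3]

    y0 = [1.0, 1.0, 0.0, 0.0]              # normalised initial concentrations
    sol = solve_ivp(rhs, (0.0, 180.0), y0) # 180 s of finish-frying
    print("normalised acrylamide at end of fry:", sol.y[3, -1])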
Abstract:
Three Salmonella enterica serovar Orion var. 15+ isolates of distinct provenance were tested for survival in various stress assays. All were less able to survive desiccation than a virulent S. Enteritidis strain, with levels of survival similar to those of an rpoS mutant of the S. Enteritidis strain, whereas one isolate (F3720) was significantly more acid tolerant. The S. Orion var. 15+ isolates were motile by flagella and elaborated type-1 and curli-like fimbriae, surface organelles that are considered virulence determinants in Salmonella pathogenesis. Each adhered to and invaded HEp-2 tissue culture cells with proficiency similar to the S. Enteritidis control, but all were significantly less virulent than S. Enteritidis in the one-day-old and seven-day-old chick models. Given an oral dose of 1 x 10^3 cfu at one day old, the S. Orion var. 15+ isolates colonised 25% of the livers and spleens examined at 24 h, whereas S. Enteritidis colonised 100% of organs at the same time point with the same dose. Given an oral dose of 1 x 10^7 cfu at seven days old, S. Orion var. 15+ failed to colonise the liver or spleen of any bird examined at 24 h, whereas S. Enteritidis colonised 50% of organs at the same time point with the same dose. Based on the number of internal organs colonised, one of the three S. Orion var. 15+ isolates tested (strain F3720) was significantly more invasive than the other two (B1 and B7). Strain F3720 was also shed less than either B1 or B7, supporting the concept that there may be an inverse relationship between the ability to colonise deep tissues and the ability to persist in the gut. These data are discussed in light of the fact that S. Orion var. 15+ is associated with sporadic outbreaks of human infection rather than epidemics.
Abstract:
We present a model of market participation in which the presence of non-negligible fixed costs leads to random censoring of the traditional double-hurdle model. Fixed costs arise when household resources must be devoted a priori to the decision to participate in the market. These costs, usually of time, manifest themselves in non-negligible minimum efficient supplies and in a supply correspondence that requires modification of the traditional Tobit regression. The costs also complicate econometric estimation of household behavior. These complications are overcome by application of the Gibbs sampler. The algorithm thus derived provides robust estimates of the fixed-costs double-hurdle model. The model and procedures are demonstrated in an application to milk-market participation in the Ethiopian highlands.
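For readers unfamiliar with the base model being extended, here is a minimal Python sketch of the standard (Cragg) double-hurdle data-generating process: a probit participation hurdle plus a truncated level equation. It is not the authors' fixed-cost, random-censoring specification, and all coefficients are made up:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    x = rng.normal(size=n)                        # a household covariate
    d_star = 0.5 + 1.0 * x + rng.normal(size=n)   # hurdle 1: participation index
    y_star = 1.0 + 0.8 * x + rng.normal(size=n)   # hurdle 2: desired supply
    y = np.where((d_star > 0) & (y_star > 0), y_star, 0.0)  # observed supply
    print("share of households participating:", (y > 0).mean())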
Abstract:
A simple four-dimensional assimilation technique, called Newtonian relaxation, has been applied to the Hamburg climate model (ECHAM) to enable comparison of model output with observations for short periods of time. The prognostic model variables vorticity, divergence, temperature, and surface pressure have been relaxed toward European Centre for Medium-Range Weather Forecasts (ECMWF) global meteorological analyses. Several experiments have been carried out in which the values of the relaxation coefficients were varied to find out which values are most suitable for our purpose. To be able to use the method for validation of model physics or chemistry, good agreement between the model-simulated mass and wind fields and the analyses is required. In addition, the model physics should not be disturbed too strongly by the relaxation forcing itself. Both aspects have been investigated. Good agreement with basic observed quantities, like wind, temperature, and pressure, is obtained for most simulations in the extratropics. Derived variables, like precipitation and evaporation, have been compared with ECMWF forecasts and observations. Agreement for these variables is weaker than for the basic observed quantities. Nevertheless, considerable improvement is obtained relative to a control run without assimilation. Differences between the tropics and extratropics are smaller than for the basic observed quantities. Results also show that precipitation and evaporation are affected by a sort of continuous spin-up introduced by the relaxation: the bias (ECMWF minus ECHAM) increases with increasing relaxation forcing. Consistent with this result, we found that with increasing relaxation forcing the vertical exchange of tracers by turbulent boundary-layer mixing and, to a lesser extent, by convection is reduced.
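In generic form, Newtonian relaxation adds a tendency term G(x_obs - x) to each relaxed prognostic variable, so the model state decays toward the analysis with an e-folding time of 1/G. A minimal Python sketch, with a stand-in model tendency and an illustrative coefficient (not ECHAM's actual values):

    def nudged_step(x, x_obs, f, G, dt):
        """One forward-Euler step of dx/dt = f(x) + G * (x_obs - x)."""
        return x + dt * (f(x) + G * (x_obs - x))

    f = lambda x: -0.1 * x / 86400.0      # stand-in model tendency
    x, x_obs = 1.0, 0.3                   # model state and analysis value
    for _ in range(144):                  # one day of 10-minute steps
        x = nudged_step(x, x_obs, f, G=1.0 / (6 * 3600.0), dt=600.0)
    print(x)                              # x has relaxed most of the way to x_obs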
Abstract:
In this study a gridded hourly 1-km precipitation dataset for a meso-scale catchment (4,062 km²) of the Upper Severn River, UK, was constructed by using rainfall radar data to disaggregate a daily precipitation (rain gauge) dataset. The dataset was compared to an hourly precipitation dataset created entirely from rainfall radar data. When assessed against gauge readings and as input to the Lisflood-RR hydrological model, the gauge/radar disaggregated dataset performed best, suggesting that this simple method of combining rainfall radar data with rain gauge readings can provide temporally detailed precipitation datasets for calibrating hydrological models.
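The abstract does not spell out the disaggregation rule, but the usual approach (assumed here) is to let the radar supply the sub-daily temporal pattern and the gauge supply the daily volume. A minimal Python sketch for one gauge-day:

    import numpy as np

    radar_hourly = np.array([0, 0, 1.2, 3.4, 0.6] + [0] * 19)  # 24 h of radar rain, mm
    gauge_daily = 6.5                                          # gauge total, mm

    weights = radar_hourly / radar_hourly.sum()   # assumes the radar saw some rain
    hourly_precip = gauge_daily * weights         # hourly series preserving the daily total
    assert abs(hourly_precip.sum() - gauge_daily) < 1e-9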
Abstract:
Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate–carbon feedbacks are overestimated, resulting in too much land carbon loss, or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several one-thousand-year-long, idealized, 2 × and 4 × CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities and climate–carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The surface air temperature response is approximately the linear sum of the responses to the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the pre-industrial portions of the last millennium simulations are used to assess the models' historical carbon–climate feedbacks. Given the specified forcing, there is a tendency for the EMICs to underestimate the drop in surface air temperature and CO2 between the Medieval Climate Anomaly and the Little Ice Age estimated from palaeoclimate reconstructions. This in turn could be a result of unforced variability within the climate system, uncertainty in the reconstructions of temperature and CO2, errors in the reconstructions of forcing used to drive the models, or the incomplete representation of certain processes within the models. Given the forcing datasets used in this study, the models calculate significant land-use emissions over the pre-industrial period. This implies that land-use emissions might need to be taken into account when making estimates of climate–carbon feedbacks from palaeoclimate reconstructions.
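The additivity finding can be written compactly. As a sketch in LaTeX, with \Delta T_i and \Delta C_i the temperature and carbon-cycle responses to forcing i applied alone:

    \Delta T_{\mathrm{all}} \approx \sum_i \Delta T_i,
    \qquad
    \Delta C_{\mathrm{all}} \neq \sum_i \Delta C_i
    \quad \text{(land-use and CO$_2$ forcings interact non-linearly in some models)}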
Abstract:
Rafting is one of the important deformation mechanisms of sea ice. The process is widespread in the north Caspian Sea, where multiple rafting produces thick sea ice features that are a hazard to offshore operations. Here we present a one-dimensional thermal consolidation model for rafted sea ice. We consider the consolidation between the layers of both a two-layer and a three-layer section of rafted sea ice. The rafted ice is assumed to be composed of layers of sea ice of equal thickness, separated by thin layers of ocean water. Results show that the thickness of the liquid layer decreased asymptotically with time, such that a thin saline liquid layer always remained. We propose that when the liquid-layer thickness equals the surface roughness, the adjacent layers can be considered consolidated. Using parameters representative of the north Caspian, the Arctic, and the Antarctic, our results show that for a choice of standard parameters it took under 15 h for two layers of rafted sea ice to consolidate. Sensitivity studies showed that the consolidation model is highly sensitive to the initial thickness of the liquid layer, the fraction of salt released during freezing, and the height of the surface asperities. We believe that further investigation of these parameters is needed before any firm conclusions can be drawn about the rate of consolidation of rafted sea ice features.
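The paper's governing equations are not reproduced in the abstract. The Python sketch below shows one Stefan-type reading of the mechanism it describes: conductive heat loss freezes the liquid layer, while rejected salt concentrates in the shrinking layer, depresses its freezing point, and slows consolidation asymptotically. All parameter values are illustrative assumptions, not the paper's:

    h = 0.05                  # liquid-layer thickness, m
    S = 35.0                  # layer salinity, ppt
    T_ice = -10.0             # temperature of the adjacent ice, deg C
    C = 20.0                  # bulk heat-transfer coefficient, W m^-2 K^-1
    rho_L = 917.0 * 3.34e5    # volumetric latent heat of ice, J m^-3
    k_rej = 0.7               # fraction of salt rejected back into the layer
    dt = 60.0                 # time step, s

    for _ in range(int(15 * 3600 / dt)):      # 15 h, cf. the abstract
        T_f = -0.054 * S                      # liquidus freezing point, deg C
        q = max(C * (T_f - T_ice), 0.0)       # heat conducted out of the layer
        dh = min(q / rho_L * dt, h)           # thickness frozen this step
        S_ice = (1.0 - k_rej) * S             # salinity trapped in the new ice
        S = (S * h - S_ice * dh) / max(h - dh, 1e-9)
        h -= dh
    print("liquid layer after 15 h: %.4f m at %.1f ppt" % (h, S))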
Abstract:
A stand-alone sea ice model is tuned and validated using satellite-derived, basin-wide observations of sea ice thickness, extent, and velocity from the years 1993 to 2001. This is the first time that basin-scale measurements of sea ice thickness have been used for this purpose. The model is based on the CICE sea ice model code developed at the Los Alamos National Laboratory, with some minor modifications, and forcing consists of 40-yr ECMWF Re-Analysis (ERA-40) and Polar Exchange at the Sea Surface (POLES) data. Three parameters are varied in the tuning process: Ca, the air–ice drag coefficient; P*, the ice strength parameter; and α, the broadband albedo of cold bare ice, with the aim of determining the subset of this three-dimensional parameter space that gives the best simultaneous agreement with observations for this forcing set. It is found that observations of sea ice extent and velocity alone are not sufficient to tune the model unambiguously, and that sea ice thickness measurements are necessary to locate a unique subset of parameter space in which simultaneous agreement is achieved with all three observational datasets.
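The tuning procedure amounts to a search over the three-dimensional (Ca, P*, α) space scored against the three observation types. A minimal Python sketch of that workflow, with a stand-in for the model and made-up observational targets (CICE itself is obviously not reproduced here):

    import itertools

    obs = {"thickness": 2.5, "extent": 12.0, "velocity": 0.08}  # stand-in targets

    def run_model(ca, p_star, albedo):
        # stand-in for a full CICE run: returns fake basin-mean diagnostics
        return {"thickness": 3.0 * albedo + 0.5 * p_star - ca,
                "extent": 10.0 + 4.0 * albedo - p_star,
                "velocity": 0.1 - 0.02 * p_star + 0.01 * ca}

    def cost(sim):
        # normalised squared misfit summed over the three observation types
        return sum((sim[k] - obs[k]) ** 2 / obs[k] ** 2 for k in obs)

    grid = itertools.product([0.5, 1.0, 1.5],   # Ca (scaled)
                             [0.5, 1.0, 1.5],   # P* (scaled)
                             [0.6, 0.7, 0.8])   # cold bare-ice albedo
    best = min(grid, key=lambda p: cost(run_model(*p)))
    print("best (Ca, P*, albedo):", best)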