872 results for Dynamic Emission Models
Abstract:
This paper focuses on finite element (FE) response sensitivity and reliability analyses considering smooth constitutive material models. A reinforced concrete frame is modeled for FE sensitivity analysis using the direct differentiation method under both static and dynamic load cases, and a reliability analysis is then performed to predict the seismic behavior of the frame. Displacement sensitivity discontinuities are observed along the pseudo-time axis when non-smooth concrete and reinforcing steel models are used under quasi-static loading, whereas the smooth materials show continuous response sensitivities at the elastic-to-plastic transition points. The normalized sensitivity results are also used to measure the relative importance of the material parameters for the structural responses. In the FE reliability analysis, the influence of the smoothness of the reinforcing steel model is examined closely: a more efficient and reasonable reliability estimate is achieved with the smooth material model than with the bilinear material constitutive model.
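As a minimal sketch of the normalization step (not the paper's direct differentiation implementation, which differentiates the FE equations exactly), the snippet below checks a displacement sensitivity by central finite differences on a toy smoothed-bilinear bar and normalizes it for parameter ranking; the solver, load level, and material constants are all assumed for illustration:

```python
import numpy as np

def normalized_sensitivity(u, theta, du_dtheta):
    """Normalized response sensitivity (du/dtheta) * (theta / u), used to
    rank the relative importance of a material parameter theta on a
    structural response u (e.g., a displacement)."""
    return du_dtheta * theta / u

def fd_sensitivity(solve_fe, theta, h=1e-4):
    """Central finite-difference check of du/dtheta."""
    return (solve_fe(theta + h) - solve_fe(theta - h)) / (2.0 * h)

def solve_fe(fy, P=1.2, k1=10.0, k2=1.0, n=20.0):
    """Toy 'solver': displacement at load P of a bar with a smoothed
    (Menegotto-Pinto-like) bilinear response; the smooth transition keeps
    du/dfy continuous at the yield point, unlike a sharp bilinear law."""
    u = np.linspace(0.0, 1.0, 2001)
    f = k2 * u + (k1 - k2) * u / (1.0 + (k1 * u / fy) ** n) ** (1.0 / n)
    return np.interp(P, f, u)

fy = 0.5
s = fd_sensitivity(solve_fe, fy)
print("du/dfy =", s, " normalized =", normalized_sensitivity(solve_fe(fy), fy, s))
```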
Abstract:
Greenhouse gas (GHG) emissions are simultaneously exhausting the world's supply of fossil fuels and threatening the global climate. In many developing countries, significant improvement in living standards in recent years, due to the accelerating development of their economies, has resulted in a disproportionate increase in household energy consumption. A major reduction in household carbon emissions (HCEs) is therefore essential if global carbon reduction targets are to be met. To this end, major Organisation for Economic Co-operation and Development (OECD) states have already implemented policies to alleviate the negative environmental effects of household behaviors, and less carbon-intensive technologies have been proposed to promote energy efficiency and reduce carbon emissions. Before any further remedial actions can be contemplated, however, it is important to fully understand the actual causes of such large HCEs, to help researchers both gain deep insights into the development of the research domain and identify valuable research topics for future study. This paper reviews the existing literature on HCEs. The critical review provides a systematic understanding of current work in the field, describing the factors influencing HCEs under the themes of household income, household size, age, education level, location, gender and rebound effects. The main quantification methodologies of input–output models, life cycle assessment and emission coefficient methods are also presented, as are the proposed measures to mitigate HCEs at the policy, technology and consumer levels. Finally, the limitations of work done to date and further research directions are identified for the benefit of future studies.
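For the simplest of the three quantification approaches, the emission coefficient method, a minimal sketch is given below; the fuel categories and factor values are illustrative placeholders, not official coefficients:

```python
# Emission coefficient method for household carbon emissions (HCEs):
# multiply each energy use by a fuel-specific emission factor and sum.
EMISSION_FACTORS = {        # kg CO2 per unit of consumption (assumed values)
    "electricity_kwh": 0.5,
    "natural_gas_m3": 1.9,
    "gasoline_litre": 2.3,
}

def household_carbon_emissions(consumption: dict[str, float]) -> float:
    """Direct HCEs (kg CO2) as the sum of consumption * emission factor."""
    return sum(qty * EMISSION_FACTORS[fuel] for fuel, qty in consumption.items())

print(household_carbon_emissions(
    {"electricity_kwh": 3500, "natural_gas_m3": 600, "gasoline_litre": 900}
))
```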
Abstract:
Human factors such as distraction, fatigue, and alcohol and drug use are generally ignored in car-following (CF) models. Ignoring them overestimates driver capability and leaves most CF models unable to realistically explain human driving behaviors. This paper proposes a novel car-following modeling framework that introduces the difficulty of the driving task, measured as the dynamic interaction between driving task demand and driver capability. Task difficulty is formulated based on the well-known Task Capability Interface (TCI) model, which explains the motivations behind a driver's decision making. The proposed method is applied to enhance two popular CF models, Gipps' model and the IDM, yielding TDGipps and TDIDM, respectively. The behavioral soundness of TDGipps and TDIDM is discussed and their stability is analyzed. The enhanced models are calibrated with vehicle trajectory data and validated to explain both regular CF behavior and CF behavior influenced by human factors (here, distraction caused by a hand-held mobile phone conversation). Both models perform better than their predecessors, especially in the presence of human factors.
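A hedged sketch of one plausible reading of the TCI idea applied to the IDM is given below; the task-demand proxy, the capability parameter, and the time-gap inflation are assumptions for illustration, not the paper's calibrated TDIDM:

```python
import math

def idm_accel(v, dv, s, v0=33.3, T=1.5, a_max=1.0, b=1.5, s0=2.0, delta=4):
    """Standard Intelligent Driver Model (IDM) acceleration.
    v: speed, dv: approach rate to leader, s: gap (SI units)."""
    s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a_max * b))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / max(s, 0.1)) ** 2)

def td_idm_accel(v, dv, s, capability=1.0, **kw):
    """Task-difficulty variant: when task demand exceeds driver capability
    (e.g., capability < 1 under a phone conversation), the effective
    desired time gap is inflated, producing more conservative following."""
    demand = min(v / 25.0 + 1.0 / max(s, 0.1), 2.0)  # crude task-demand proxy
    difficulty = demand / capability                  # TCI-style ratio
    T_eff = kw.pop("T", 1.5) * max(difficulty, 1.0)
    return idm_accel(v, dv, s, T=T_eff, **kw)

# Attentive vs. distracted driver at the same traffic state:
print(td_idm_accel(v=20.0, dv=1.0, s=25.0, capability=1.0))
print(td_idm_accel(v=20.0, dv=1.0, s=25.0, capability=0.7))
```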
Abstract:
This thesis presents a novel approach to building large-scale agent-based models of networked physical systems, using composition to provide extensibility and flexibility in building the models and simulations. A software framework (MODAM - MODular Agent-based Model) was implemented for this purpose and validated through simulations. These simulations allow assessment of the impact of technological change on the electricity distribution network by tracing trajectories of electricity consumption at key locations over many years.
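A minimal sketch of the compositional idea (illustrative only, not the actual MODAM API) is shown below: behaviour modules are composed onto household agents, and the population is stepped over years to trace consumption trajectories:

```python
import random

class Household:
    def __init__(self, base_kwh, behaviours):
        self.base_kwh = base_kwh
        self.behaviours = behaviours      # composed, swappable modules

    def yearly_consumption(self, year):
        kwh = self.base_kwh
        for b in self.behaviours:
            kwh = b(kwh, year)            # each behaviour transforms demand
        return kwh

def demand_growth(kwh, year):
    return kwh * (1.02 ** year)           # assumed 2 %/yr underlying growth

def solar_pv(kwh, year):
    return kwh - min(2000 + 100 * year, kwh)   # assumed growing PV self-supply

random.seed(1)
feeder = [Household(random.uniform(4000, 8000), [demand_growth, solar_pv])
          for _ in range(100)]
for year in range(0, 11, 5):              # feeder-level trajectory, kWh/yr
    print(year, round(sum(h.yearly_consumption(year) for h in feeder)))
```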
Abstract:
This paper presents the modeling and analysis of a voltage source converter (VSC) based back-to-back (BTB) HVDC link. The case study considers the response to changes in active and reactive power and to a disturbance caused by a single line-to-ground (SLG) fault. The controllers at each terminal are designed to inject a variable (in magnitude and phase angle), sinusoidal, balanced set of voltages to regulate the active and reactive power. It is also possible to regulate the converter bus (AC) voltage by controlling the injected reactive power. The analysis is carried out using both a d-q model (neglecting harmonics in the VSC output voltages) and a three-phase detailed model of the VSC. While the eigenvalue analysis and controller design are based on the d-q model, the transient simulation considers both models.
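The d-q machinery that the controller design rests on can be illustrated briefly; the sketch below uses an amplitude-invariant Park transform and one common sign convention for reactive power, both of which are assumptions since conventions vary:

```python
import numpy as np

def abc_to_dq(xa, xb, xc, theta):
    """Amplitude-invariant Park transform (one common convention)."""
    w = 2.0 * np.pi / 3.0
    d = (2.0 / 3.0) * (xa * np.cos(theta) + xb * np.cos(theta - w)
                       + xc * np.cos(theta + w))
    q = -(2.0 / 3.0) * (xa * np.sin(theta) + xb * np.sin(theta - w)
                        + xc * np.sin(theta + w))
    return d, q

def pq_from_dq(vd, vq, id_, iq):
    """Active/reactive power in the d-q frame; the 3/2 factor comes from
    the amplitude-invariant transform, and the sign of Q is conventional."""
    return 1.5 * (vd * id_ + vq * iq), 1.5 * (vq * id_ - vd * iq)

# Balanced 325 V (peak) set with the frame aligned to phase a at theta = 0:
theta = 0.0
va, vb, vc = (325.0 * np.cos(theta + s * 2 * np.pi / 3) for s in (0, -1, 1))
vd, vq = abc_to_dq(va, vb, vc, theta)      # -> vd = 325, vq = 0
# With vq = 0, i_d sets P and i_q sets Q -- the basis of decoupled control:
print(pq_from_dq(vd, vq, id_=100.0, iq=-30.0))
```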
Abstract:
Purpose – Business models to date have remained the creation of management; the authors believe, however, that designers should be critically approaching, challenging and creating new business models as part of their practice. This belief signals a new era in which business model constructs become the design brief of the future and fuel design and innovation to work together at the strategic level of an organisation. Design/methodology/approach – The purpose of this paper is to explore and investigate business model design. The research followed a deductive, structured qualitative content analysis approach using a predetermined categorization matrix. The analysis of forty business cases uncovered commonalities among the key strategic drivers behind these innovative business models. Findings – Five business model typologies were derived from the content analysis, from which quick prototypes of new business models can be created. Research limitations/implications – The findings suggest there is no one "right" model; rather, through experimentation, the generation of many unique and diverse concepts can yield greater possibilities for future innovation and sustained competitive advantage. Originality/value – This paper builds upon emerging research into the importance and relevance of dynamic, design-driven approaches to the creation of innovative business models. The typologies synthesize knowledge gained from real-world examples into a tangible, accessible and provocative framework that provides new prototyping templates to aid the process of business model experimentation.
Abstract:
Timoshenko's shear deformation theory is widely used for the dynamic analysis of shear-flexible beams. This paper presents a comparative study of the shear deformation theory against a higher-order model of which Timoshenko's model is a special case. The results indicate that while Timoshenko's theory gives reasonably accurate information about the bending natural frequencies, there are considerable discrepancies in the mode shapes and dynamic response it predicts, so higher-order models need to be considered for the dynamic flexural analysis of beams.
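To make the comparison concrete, the sketch below contrasts Euler-Bernoulli bending frequencies with those from the standard Timoshenko characteristic equation for a simply supported beam; the steel section, length, and shear correction factor k are assumed example values:

```python
import math

E, G, rho = 210e9, 80e9, 7850.0       # steel (Pa, Pa, kg/m^3)
b, h, L, k = 0.05, 0.2, 2.0, 5.0 / 6.0  # rectangular section, length, shear factor
A, I = b * h, b * h**3 / 12.0

for n in (1, 2, 3):
    alpha = n * math.pi / L
    w_eb = alpha**2 * math.sqrt(E * I / (rho * A))   # Euler-Bernoulli
    # Timoshenko (simply supported): a*w^4 - bq*w^2 + c = 0,
    # lower root = bending branch, which shear deformation pushes down.
    a = rho**2 * I / (k * G)
    bq = rho * A + (rho * I + E * I * rho / (k * G)) * alpha**2
    c = E * I * alpha**4
    w_t = math.sqrt((bq - math.sqrt(bq**2 - 4 * a * c)) / (2 * a))
    print(f"n={n}: f_EB={w_eb / 2 / math.pi:8.1f} Hz  f_T={w_t / 2 / math.pi:8.1f} Hz")
```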
Abstract:
A fuzzy dynamic flood routing model (FDFRM) for natural channels is presented, wherein the flood wave can be approximated by a monoclinal wave. The study modifies earlier published work by the same authors, in which the wave was of gravity type. The momentum equation of the dynamic wave model is replaced by a fuzzy rule-based model, while the continuity equation is retained in its complete form. The FDFRM thus dispenses with the assumptions associated with the momentum equation and removes the need to calculate the friction slope (S_f) in flood routing, eliminating the associated uncertainties. The fuzzy rule-based model is developed on an equation for wave velocity, obtained in terms of discontinuities in the gradient of the flow parameters. The channel reach is divided into a number of approximately uniform sub-reaches, and the training set required to develop the fuzzy rule-based model for each sub-reach is obtained from the discharge-area relationship at its mean section. For highly heterogeneous sub-reaches, optimized fuzzy rule-based models are obtained by means of a neuro-fuzzy algorithm. For demonstration, the FDFRM is applied to flood routing problems in a fictitious channel with a single uniform reach, in a fictitious channel with two uniform sub-reaches, and in a natural channel with a number of approximately uniform sub-reaches. For the fictitious channels, the FDFRM outputs match well with those of an implicit numerical model (INM), which solves the dynamic wave equations using an implicit numerical scheme. For the natural channel, the FDFRM outputs are comparable to those of the HEC-RAS model.
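The training-set construction can be illustrated with the Kleitz-Seddon relation c = dQ/dA evaluated at a mean section; the wide rectangular Manning rating below is an assumed stand-in for a surveyed cross-section:

```python
import numpy as np

B, n_man, S0 = 30.0, 0.030, 5e-4        # width (m), Manning n, bed slope (assumed)

depths = np.linspace(0.2, 5.0, 200)      # flow depths at the mean section (m)
A = B * depths                           # flow area, wide-channel approximation
Q = (1.0 / n_man) * A * depths ** (2.0 / 3.0) * S0 ** 0.5   # Manning rating

c = np.gradient(Q, A)                    # monoclinal wave celerity, c = dQ/dA
training_set = np.column_stack([A, c])   # (input, target) pairs for the
print(training_set[::50])                # fuzzy rule-based velocity model
```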
Abstract:
The Davis Growth Model (a dynamic steer growth model encompassing 4 fat deposition models) is currently used by the phenotypic prediction program of the Cooperative Research Centre (CRC) for Beef Genetic Technologies to predict P8 fat (mm) in beef cattle and so help beef producers meet market specifications. The concepts of cellular hyperplasia and hypertrophy are integral components of the Davis Growth Model. The net synthesis of total body fat (kg) is calculated from the net energy available after accounting for the energy needs of maintenance and protein synthesis; total body fat is then partitioned into 4 depots (intermuscular, intramuscular, subcutaneous, and visceral). This paper reports on the parameter estimation and sensitivity analysis of the DNA (deoxyribonucleic acid) logistic growth equations and the fat deposition first-order differential equations in the Davis Growth Model using acslXtreme (Xcellon, Huntsville, AL, USA). The DNA and fat deposition parameter coefficients were found to be important determinants of model function: the DNA parameter coefficients for days on feed >100, and the fat deposition parameter coefficients for all days on feed. The generalized NL2SOL optimization algorithm had the fastest processing time and the minimum number of objective function evaluations when estimating the 4 fat deposition parameter coefficients from 2 observed values (initial and final fat). The subcutaneous fat parameter coefficient indicated a metabolic difference between frame sizes. The results look promising, and the prototype Davis Growth Model has the potential to help the beef industry meet market specifications.
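A sketch of the model's two building blocks, logistic DNA growth (hyperplasia) and first-order fat deposition (hypertrophy), is given below; the rate constants and depot ceilings are assumed illustration values, not the CRC-calibrated coefficients:

```python
import numpy as np
from scipy.integrate import solve_ivp

def davis_like(t, y, r=0.03, dna_max=1.0, k=(0.010, 0.004, 0.012, 0.008),
               fat_max=(40.0, 12.0, 60.0, 25.0)):
    """State: [DNA, intermuscular, intramuscular, subcutaneous, visceral fat]."""
    dna, *fats = y
    d_dna = r * dna * (1.0 - dna / dna_max)          # logistic hyperplasia
    d_fats = [ki * (fmax - f)                         # first-order hypertrophy
              for ki, fmax, f in zip(k, fat_max, fats)]
    return [d_dna] + d_fats

# Initial DNA pool plus depot fat masses (kg), integrated over days on feed:
sol = solve_ivp(davis_like, (0, 200), [0.05, 5.0, 1.0, 8.0, 3.0],
                t_eval=[0, 100, 200])
print(sol.y[:, -1])   # state after 200 days on feed
```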
Abstract:
With the rapid development of various technologies and applications in smart grid implementation, demand response has attracted growing research interest because of its potential to enhance power grid reliability while reducing system operation costs. This paper presents a new demand response model with elastic economic dispatch in a locational marginal pricing market. It models system economic dispatch as a feedback control process and introduces a flexible, adjustable load cost as a control signal to adjust demand response. Compared with the conventional "one-time use" static load dispatch model, this dynamic feedback demand response model can adjust the load to a desired level in a finite number of time steps, and a proof of convergence is provided. In addition, Monte Carlo simulation and boundary calculation using interval mathematics are applied to describe the uncertainty of end-users' responses to an independent system operator's expected dispatch. A numerical analysis based on the modified Pennsylvania-Jersey-Maryland power pool five-bus system is used for simulation, and the results verify the effectiveness of the proposed model. System operators may use the proposed model to gain insights into demand response processes for their decision-making regarding system load levels and operating conditions.
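A minimal sketch of the feedback idea follows; the linear end-user price response, the gain, and the convergence condition |1 - gain * elasticity| < 1 are illustrative assumptions, not the paper's market model:

```python
def dispatch_feedback(load, target, elasticity=0.8, gain=0.5, steps=20):
    """Iterate a load-cost signal until aggregate demand settles at the
    desired level; converges geometrically when |1 - gain*elasticity| < 1."""
    for t in range(steps):
        error = load - target              # deviation observed by the ISO
        signal = gain * error              # adjustable load-cost signal
        load = load - elasticity * signal  # elastic end-user response
        print(f"step {t}: load = {load:.2f} MW")
        if abs(load - target) < 0.01:
            break
    return load

dispatch_feedback(load=520.0, target=480.0)
```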
Abstract:
We compared daily net radiation (Rn) estimates from 19 methods with the ASCE-EWRI Rn estimates in two climates, Clay Center, Nebraska (sub-humid) and Davis, California (semi-arid), over the calendar year. The performance of all 20 methods, including the ASCE-EWRI Rn method, was then evaluated against Rn data measured over a non-stressed maize canopy during two growing seasons (2005 and 2006) at Clay Center. The methods differ in their inputs, structure, and equation intricacy; most differ in how they estimate the cloudiness factor and emissivity (e) and calculate net longwave radiation (Rnl). All methods use an albedo (a) of 0.23 for a reference grass/alfalfa surface. When comparing the performance of all 20 Rn methods with measured Rn, we hypothesized that the a values for grass/alfalfa and a non-stressed maize canopy were similar enough to cause only minor differences in Rn and in grass- and alfalfa-reference evapotranspiration (ETo and ETr) estimates. The measured seasonal average a for the maize canopy was 0.19 in both years. Using a = 0.19 instead of a = 0.23 resulted in 6% overestimation of Rn, but this 6% difference in Rn translated to only 4% and 3% differences in ETo and ETr, respectively, supporting the validity of our hypothesis. Most methods correlated well with the ASCE-EWRI Rn (r² > 0.95). The root mean square difference (RMSD) was less than 2 MJ m⁻² d⁻¹ between 12 methods and the ASCE-EWRI Rn at Clay Center and between 14 methods and the ASCE-EWRI Rn at Davis. The performance of some methods varied between the two climates; in general, r² values were higher for the semi-arid climate than for the sub-humid climate. Methods that use a dynamic e as a function of mean air temperature performed better in both climates than those that calculate e from actual vapor pressure. The ASCE-EWRI-estimated Rn values showed one of the best agreements with measured Rn (r² = 0.93, RMSD = 1.44 MJ m⁻² d⁻¹), with estimates within 7% of the measured Rn. The Rn estimates from six methods, including the ASCE-EWRI, were not significantly different from measured Rn; most methods underestimated measured Rn by 6% to 23%. Some of the differences between measured and estimated Rn were attributed to poor estimation of Rnl. We conducted sensitivity analyses to evaluate the effect of Rnl on Rn, ETo, and ETr: the Rnl effect on Rn was linear and strong, but its effect on ETo and ETr was secondary. The results suggest that Rn data measured over green vegetation (e.g., an irrigated maize canopy) can be an alternative Rn data source for ET estimation when measured Rn data over the reference surface are not available. In the absence of measured Rn, another alternative is to use one of the Rn models analyzed here when not all input variables are available to solve the ASCE-EWRI Rn equation. Our results provide practical information on which method to select, based on data availability, for reliable estimates of daily Rn in climates similar to Clay Center and Davis.
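For reference, the ASCE-EWRI daily Rn calculation (shared with FAO-56) can be written compactly as below; the example inputs are arbitrary:

```python
import math

SIGMA = 4.901e-9   # Stefan-Boltzmann constant, MJ K^-4 m^-2 d^-1

def net_radiation(Rs, Rso, tmax_c, tmin_c, ea, albedo=0.23):
    """Daily Rn = (1 - albedo)*Rs - Rnl per ASCE-EWRI/FAO-56.
    Rs, Rso: measured and clear-sky solar radiation (MJ m^-2 d^-1);
    tmax_c, tmin_c: daily air temperature extremes (deg C);
    ea: actual vapor pressure (kPa). Returns Rn in MJ m^-2 d^-1."""
    rns = (1.0 - albedo) * Rs
    tk4 = ((tmax_c + 273.16) ** 4 + (tmin_c + 273.16) ** 4) / 2.0
    f_cloud = 1.35 * min(Rs / Rso, 1.0) - 0.35      # relative cloudiness
    emissivity = 0.34 - 0.14 * math.sqrt(ea)        # net emissivity
    rnl = SIGMA * tk4 * emissivity * f_cloud        # net longwave radiation
    return rns - rnl

# Example: a nearly clear mid-summer day
print(net_radiation(Rs=28.0, Rso=30.0, tmax_c=32.0, tmin_c=18.0, ea=1.8))
```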
Abstract:
An important safety aspect to be considered when foods are enriched with phytosterols and phytostanols is the oxidative stability of these lipid compounds, i.e. their resistance to oxidation and thus to the formation of oxidation products. This study concentrated on producing scientific data to support this safety evaluation process. In the absence of an official method for analyzing phytosterol/stanol oxidation products, we first developed a new gas chromatographic-mass spectrometric (GC-MS) method. We then investigated factors affecting these compounds' oxidative stability in lipid-based food models in order to identify critical conditions under which significant oxidation reactions may occur. Finally, the oxidative stability of phytosterols and stanols in enriched foods during processing and storage was evaluated. The enriched foods covered a range of commercially available phytosterol/stanol ingredients, different heat treatments during food processing, and different multiphase food structures. The GC-MS method was a powerful tool for measuring oxidative stability. Data obtained in the food model studies revealed that the critical factors for the formation and distribution of the main secondary oxidation products were sterol structure, reaction temperature, reaction time, and lipid matrix composition. Under all conditions studied, phytostanols, as saturated compounds, were more stable than unsaturated phytosterols. In addition, esterification made phytosterols more reactive than free sterols at low temperatures, while at high temperatures the situation was reversed. In general, oxidation reactions were more significant at temperatures above 100°C; at lower temperatures, the significance of these reactions increased with increasing reaction time. The effect of lipid matrix composition depended on temperature: above 140°C, phytosterols were more stable in an unsaturated lipid matrix, whereas below 140°C they were more stable in a saturated lipid matrix, and at 140°C phytosterols oxidized at the same rate in both matrices. Regardless of temperature, phytostanols oxidized more in an unsaturated lipid matrix. The distribution of oxidation products seemed to be associated with the phase of overall oxidation: 7-ketophytosterols accumulated while oxidation had not yet reached the dynamic state, and once this state was attained, the major products were 5,6-epoxyphytosterols and 7-hydroxyphytosterols. The changes observed in phytostanol oxidation products were less informative, since all the stanol oxides quantified were hydroxyl compounds. The formation of these secondary oxidation products did not account for all of the phytosterol/stanol losses observed during the heating experiments, indicating the presence of dimeric, oligomeric or other oxidation products, especially when free phytosterols and stanols were heated at high temperatures. Commercially available phytosterol/stanol ingredients were stable during food processes such as spray-drying and ultra-high-temperature (UHT)-type heating and subsequent long-term storage. Pan-frying, however, induced phytosterol oxidation and was classified as a rather deteriorative process. Overall, the findings indicate that although phytosterols and stanols are stable under normal food processing conditions, attention should be paid to their use in frying. The complex interactions with other food constituents also suggest that when new phytosterol-enriched foods are developed, their oxidative stability must first be established. The results presented here will assist in choosing safe conditions for phytosterol/stanol enrichment.
Abstract:
Climate projections for the next two to four decades indicate that most of Australia's wheat-belt is likely to become warmer and drier. Here we used a shire-scale dynamic stress-index model, which accounts for the impacts of rainfall and temperature on wheat yield, together with a range of climate change projections from global circulation models, to spatially estimate yield changes assuming no adaptation and no CO2 fertilisation effects. We modelled five scenarios: a baseline climate (climatology, 1901–2007) and two emission scenarios ("low" and "high" CO2) for two time horizons, 2020 and 2050. The potential benefits of CO2 fertilisation were analysed separately using a point-level functional simulation model. Irrespective of the emissions scenario, the 2020 projection showed negligible changes in modelled yield relative to the baseline climate, using either the shire-scale or the functional point-scale model. For the 2050 high-emissions scenario, changes in modelled yield relative to the baseline ranged from −5% to +6% across most of Western Australia, parts of Victoria and southern New South Wales, and from −5% to −30% in northern NSW, Queensland and the drier environments of Victoria, South Australia and inland Western Australia. Taking into account CO2 fertilisation effects across a north-south transect through eastern Australia cancelled most of the yield reductions associated with increased temperatures and reduced rainfall by 2020, and attenuated the expected yield reductions by 2050.
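A heavily hedged sketch of the stress-index idea follows; the 0-1 stress scores, thresholds, and the warmer-drier shift are illustrative assumptions, not the shire-scale model's calibrated parameters, and CO2 fertilisation is ignored as in the spatial analysis:

```python
def stress_index(rain_mm, tmean_c, rain_opt=60.0, t_opt=18.0, t_span=12.0):
    """Monthly 0-1 stress score combining water and thermal limitation."""
    water = min(rain_mm / rain_opt, 1.0)                   # water limitation
    heat = max(0.0, 1.0 - abs(tmean_c - t_opt) / t_span)   # thermal limitation
    return water * heat

def season_yield(potential_t_ha, monthly_weather):
    """Potential yield scaled by the season-average stress index."""
    s = sum(stress_index(r, t) for r, t in monthly_weather) / len(monthly_weather)
    return potential_t_ha * s

# Baseline growing season vs. an assumed warmer-drier (2050-like) shift:
baseline = [(55, 16), (48, 15), (62, 14), (40, 17), (35, 20)]
warmer_drier = [(r * 0.9, t + 1.5) for r, t in baseline]
print(season_yield(4.0, baseline), season_yield(4.0, warmer_drier))
```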
Abstract:
Positron emission tomography (PET) is an imaging technique in which radioactive positron-emitting tracers are used to study biochemical and physiological functions in humans and in animal experiments. The use of PET imaging has increased rapidly in recent years, as have the special requirements of neurology and oncology for the development of syntheses of new, more specific and selective radiotracers. Synthesis development and automation are necessary when high amounts of radioactivity are needed for multiple PET studies. In addition, preclinical studies using experimental animal models are necessary to evaluate the suitability of new PET tracers for humans. For purifying and analysing the labelled end-product, an effective radioanalytical method combined with an optimal radioactivity detection technique is of great importance. In this study, a fluorine-18 labelling synthesis method for two tracers was developed and optimized, and the usefulness of these tracers for prospective human studies was evaluated. N-(3-[18F]fluoropropyl)-2β-carbomethoxy-3β-(4-fluorophenyl)nortropane ([18F]β-CFT-FP) is a candidate PET tracer for the dopamine transporter (DAT), and 1H-1-(3-[18F]fluoro-2-hydroxypropyl)-2-nitroimidazole ([18F]FMISO) is a well-known marker of hypoxic but viable cells in tumours. The methodological aim of this thesis was to evaluate the status of thin-layer chromatography (TLC) combined with appropriate radioactivity detection systems as a radioanalytical method. Three radioactivity detection methods were compared: radioactivity scanning, film autoradiography, and digital photostimulated luminescence (PSL) autoradiography. The fluorine-18 labelling synthesis for [18F]β-CFT-FP was developed, and carbon-11 labelled [11C]β-CFT-FP was used to study the specificity of β-CFT-FP for DAT sites in human post-mortem brain slices. These in vitro studies showed that β-CFT-FP binds to the caudate-putamen, an area rich in DAT. The synthesis of fluorine-18 labelled [18F]FMISO was optimized, and the tracer was prepared using an automated system with good and reproducible yields. In preclinical studies, the effect of the radiation sensitizer estramustine phosphate on radiation treatment and on the uptake of [18F]FMISO was evaluated, with results of great importance for later human studies. The methodological part of this thesis showed that radioTLC is the method of choice when combined with an appropriate radioactivity detection technique. Digital PSL autoradiography proved the most appropriate of the three detection methods compared; its very high sensitivity, good resolution, and wide dynamic range are clear advantages for the detection of β-emitting radiolabelled substances.
Abstract:
Objectives: Decision support tools (DSTs) for invasive species management have had limited success in producing convincing results and meeting users' expectations. The problems may be linked to the functional form of the model representing the dynamic relationship between the invasive species and crop yield loss in the DSTs. The objectives of this study were: a) to compile and review the models tested in field experiments and applied in DSTs; and b) to empirically evaluate some popular models and alternatives. Design and methods: This study surveyed the literature and documented strengths and weaknesses of the functional forms of yield loss models. Some widely used models (linear, relative-yield and hyperbolic models) and two potentially useful models (the double-scaled and density-scaled models) were evaluated for a wide range of weed densities, maximum potential yield loss, and maximum yield loss per weed. Results: Popular functional forms include hyperbolic, sigmoid, linear, quadratic and inverse models. Many basic models have been modified to account for important factors (weather, tillage, and crop growth stage at weed emergence) influencing the weed-crop interaction and to improve prediction accuracy. This limits their applicability in DSTs, as they become less general and often apply to a much narrower range of conditions than would be encountered in the use of DSTs; such factors' effects could be better accounted for using other techniques. Among the models empirically assessed, the linear model is very simple and appears to work well at sparse weed densities, but it produces unrealistic behaviour at high densities. The relative-yield model exhibits the expected behaviour at high densities and high levels of maximum yield loss per weed, but probably underestimates yield loss at low to intermediate densities. The hyperbolic model behaves reasonably at lower weed densities, but produces biologically unreasonable behaviour at low rates of loss per weed combined with high yield loss at the maximum weed density. The density-scaled model is insensitive to the yield loss at maximum weed density in terms of the number of weeds that produce a given proportion of that maximum yield loss. The double-scaled model appeared to produce more robust estimates of the impact of weeds under a wide range of conditions. Conclusions: Previously tested functional forms exhibit problems for crop yield loss modelling in DSTs. Of the models evaluated, the double-scaled model exhibits desirable qualitative behaviour under most circumstances.
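Two of the evaluated functional forms are easy to state compactly: the linear model and the hyperbolic (Cousens) model, YL = iD / (1 + iD/A). The sketch below contrasts their behaviour across densities; the parameter values are illustrative only:

```python
def yield_loss_linear(density, i=2.0):
    """Linear model: i percent loss per weed; unbounded, so it becomes
    unrealistic at high weed densities."""
    return i * density

def yield_loss_hyperbolic(density, i=2.0, A=60.0):
    """Hyperbolic (Cousens) model: i is the loss per weed as density -> 0,
    A the asymptotic maximum percent loss; saturates at A."""
    return i * density / (1.0 + i * density / A)

for d in (1, 5, 25, 100):   # weeds per unit area
    print(f"D={d:3d}  linear={yield_loss_linear(d):6.1f}%"
          f"  hyperbolic={yield_loss_hyperbolic(d):5.1f}%")
```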