350 results for Input timedelays


Relevance:

10.00%

Publisher:

Abstract:

Regional safety program managers face a daunting challenge in the attempt to reduce deaths, injuries, and economic losses that result from motor vehicle crashes. This difficult mission is complicated by the combination of a large perceived need, small budget, and uncertainty about how effective each proposed countermeasure would be if implemented. A manager can turn to the research record for insight, but the measured effect of a single countermeasure often varies widely from study to study and across jurisdictions. The challenge of converting widespread and conflicting research results into a regionally meaningful conclusion can be addressed by incorporating "subjective" information into a Bayesian analysis framework. Engineering evaluations of crashes provide the subjective input on countermeasure effectiveness in the proposed Bayesian analysis framework. Empirical Bayes approaches are widely used in before-and-after studies and "hot-spot" identification; however, in these cases, the prior information was typically obtained from the data (empirically), not subjective sources. The power and advantages of Bayesian methods for assessing countermeasure effectiveness are presented. Also, an engineering evaluation approach developed at the Georgia Institute of Technology is described. Results are presented from an experiment conducted to assess the repeatability and objectivity of subjective engineering evaluations. In particular, the focus is on the importance, methodology, and feasibility of the subjective engineering evaluation for assessing countermeasures.
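The Bayesian framing above can be illustrated with a toy calculation. This is not the paper's method: the sketch assumes a simple conjugate normal model and invented numbers for a countermeasure's crash modification factor (CMF), combining a subjective engineering prior with a noisy local study estimate.

```python
# Hypothetical illustration: blending a subjective engineering prior on a
# countermeasure's crash modification factor (CMF) with local study data
# via a conjugate normal Bayesian update. All numbers are invented.

def posterior_normal(prior_mean, prior_var, data_mean, data_var):
    """Posterior of a normal mean with known variances (conjugate update)."""
    w = prior_var / (prior_var + data_var)   # weight given to the data
    post_mean = prior_mean + w * (data_mean - prior_mean)
    post_var = 1.0 / (1.0 / prior_var + 1.0 / data_var)
    return post_mean, post_var

# Subjective engineering evaluation: CMF ~ 0.80 (20% crash reduction), uncertain.
# Local before-and-after study: CMF ~ 0.70, also noisy.
mean, var = posterior_normal(0.80, 0.01, 0.70, 0.02)
print(round(mean, 3), round(var, 4))
```

The posterior sits between the subjective prior and the local data, weighted by their precisions, which is the sense in which the framework converts conflicting evidence into a regionally meaningful estimate.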

Relevance:

10.00%

Publisher:

Abstract:

Nitrous oxide (N2O) is primarily produced by the microbially mediated nitrification and denitrification processes in soils. Its production is influenced by a suite of climate (e.g. temperature and rainfall) and soil (physical and chemical) variables, by interacting soil and plant nitrogen (N) transformations (either competing for or supplying substrates), and by land management practices. It is not surprising that N2O emissions are highly variable both spatially and temporally. Computer simulation models, which can integrate all of these variables, are required for the complex task of providing quantitative determinations of N2O emissions. Numerous simulation models have been developed to predict N2O production. Each model has its own philosophy in constructing simulation components, as well as its own performance strengths. The models range from those that attempt to comprehensively simulate all soil processes to more empirical approaches requiring minimal input data. These N2O simulation models can be classified into three categories: laboratory, field and regional/global levels. Process-based field-scale N2O simulation models, which simulate whole agroecosystems and can be used to develop N2O mitigation measures, are the most widely used. The current challenge is how to scale up the relatively robust field-scale models to catchment, regional and national scales. This paper reviews the development history, main construction components, strengths, limitations and applications of the N2O emission models published in the literature. All three scale levels are considered, and the current knowledge gaps and challenges in modelling N2O emissions from soils are discussed.

Relevance:

10.00%

Publisher:

Abstract:

Nitrous oxide (N2O) is a major greenhouse gas (GHG) product of intensive agriculture. Fertilizer nitrogen (N) rate is the best single predictor of N2O emissions in row-crop agriculture in the US Midwest. We use this relationship to propose a transparent, scientifically robust protocol that can be utilized by developers of agricultural offset projects for generating fungible GHG emission reduction credits for the emerging US carbon cap and trade market. By coupling predicted N2O flux with the recently developed maximum return to N (MRTN) approach for determining economically profitable N input rates for optimized crop yield, we provide the basis for incentivizing N2O reductions without affecting yields. The protocol, if widely adopted, could reduce N2O from fertilized row-crop agriculture by more than 50%. Although other management and environmental factors can influence N2O emissions, fertilizer N rate can be viewed as a single unambiguous proxy—a transparent, tangible, and readily manageable commodity. Our protocol addresses baseline establishment, additionality, permanence, variability, and leakage, and provides for producers and other stakeholders the economic and environmental incentives necessary for adoption of agricultural N2O reduction offset projects.
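As a toy illustration of how lowering the N rate to an MRTN-style level generates a creditable emission reduction, the sketch below assumes an exponential flux response with invented coefficients and hypothetical baseline and MRTN rates; it is not the protocol's calibrated relationship.

```python
import math

def n2o_flux(n_rate, a=0.5, b=0.008):
    """Illustrative exponential N2O response to fertilizer N rate
    (kg N2O-N/ha/yr). Coefficients a and b are invented for
    demonstration, not fitted values from the protocol."""
    return a * math.exp(b * n_rate)

baseline_rate, mrtn_rate = 200.0, 160.0   # kg N/ha, hypothetical rates
reduction = n2o_flux(baseline_rate) - n2o_flux(mrtn_rate)
print(round(reduction, 3))
```

Because the assumed response is convex, trimming N rate from an over-applied baseline to the economically optimal rate yields a disproportionately large flux reduction, which is the intuition behind using N rate as the single proxy.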

Relevance:

10.00%

Publisher:

Abstract:

Value Management (VM) has been proven to provide a structured framework, together with supporting tools and techniques, that facilitates effective decision-making in many types of projects, thus achieving ‘best value’ for clients. One of the major ways VM achieves better project outcomes for clients is through the beneficial input of multi-disciplinary team members involved in critical decision-making discussions during the early stages of construction projects. This paper describes a doctoral research proposal on the application of VM in design and build construction projects, focusing especially on the design stage. The research aims to study the effects of implementing VM in design and build construction projects, in particular how well the methodology addresses cost overruns resulting from poor coordination and the overlooking of critical constructability issues amongst team members in construction projects in Malaysia. It is proposed that through contractors’ early involvement during the design stage, combined with the use of the VM methodology, particularly as a decision-making tool, better optimization of construction cost can be achieved, thus promoting more efficient and effective constructability. The main methods used in this research are a thorough literature study, semi-structured interviews and a survey of major stakeholders, a detailed case study, and a VM workshop with focus group discussions involving construction professionals, in order to explore and develop a framework and a specific methodology for facilitating the successful application of VM within design and build construction projects.

Relevance:

10.00%

Publisher:

Abstract:

The efficiency of agricultural management practices to store soil organic carbon (SOC) depends on the C input level and how far a soil is from its saturation level (i.e. its saturation deficit). The C saturation hypothesis suggests an ultimate soil C stabilization capacity defined by four soil organic matter (SOM) pools capable of C saturation: (1) non-protected, (2) physically protected, (3) chemically protected and (4) biochemically protected. We tested whether C saturation deficit and the amount of added C influenced SOC storage in measurable soil fractions corresponding to the conceptual chemical, physical, biochemical, and non-protected C pools. We added two levels of C-13-labeled residue to soil samples from seven agricultural sites that were either closer to (i.e., A-horizon) or further from (i.e., C-horizon) their C saturation level and incubated them for 2.5 years. Residue-derived C stabilization was, at most sites, directly related to C saturation deficit, but the mechanisms of C stabilization differed between the chemically and biochemically protected pools. The physically protected C pool showed a varied effect of C saturation deficit on C-13 stabilization, due to the opposite behavior of the particulate organic matter (POM) and mineral fractions. We found distinct behavior between unaggregated and aggregated mineral-associated fractions, emphasizing the mechanistic difference between the chemically and physically protected C pools. To accurately predict SOC dynamics and stabilization, the C saturation of soil C pools, particularly the chemically and biochemically protected pools, should be considered.

Relevance:

10.00%

Publisher:

Abstract:

Although current assessments of agricultural management practices on soil organic C (SOC) dynamics are usually conducted without any explicit consideration of limits to soil C storage, it has been hypothesized that the SOC pool has an upper, or saturation, limit with respect to C input levels at steady state. Agricultural management practices that increase C input levels over time produce a new equilibrium soil C content. However, multiple C input level treatments that produce no increase in SOC stocks at equilibrium show that soils can become saturated with respect to C inputs. SOC storage of added C input is a function of how far a soil is from its saturation level (saturation deficit) as well as of the C input level. We tested experimentally whether C saturation deficit and varying C input levels influenced soil C stabilization of added C-13 in soils varying in SOC content and physicochemical characteristics. We incubated for 2.5 years soil samples from seven agricultural sites that were closer to (i.e., A-horizon) or further from (i.e., C-horizon) their C saturation limit. At the initiation of the incubations, samples received low or high C input levels of C-13-labeled wheat straw. We also tested the effect of Ca addition and residue quality on a subset of these soils. We hypothesized that the proportion of C stabilized would be greater in samples with larger C saturation deficits (i.e., the C- versus A-horizon samples) and that the relative stabilization efficiency (i.e., Delta SOC/Delta C input) would decrease as C input level increased. We found that C saturation deficit influenced the stabilization of added residue at six of the seven sites, and that C addition level affected it at four sites, corroborating both hypotheses. Increasing Ca availability or decreasing residue quality had no effect on the stabilization of added residue.
The amount of new C stabilized was significantly related to C saturation deficit, supporting the hypothesis that C saturation influenced C stabilization at all our sites. Our results suggest that soils with low C contents and degraded lands may have the greatest potential and efficiency to store added C because they are further from their saturation level.

Relevance:

10.00%

Publisher:

Abstract:

Current estimates of soil C storage potential are based on models or factors that assume linearity between C input levels and C stocks at steady state, implying that SOC stocks could increase without limit as C input levels increase. However, some soils show little or no increase in steady-state SOC stock with increasing C input levels, suggesting that SOC can become saturated with respect to C input. We used long-term field experiment data to assess alternative hypotheses of soil carbon storage with three simple models: a linear model (no saturation), a one-pool whole-soil C saturation model, and a two-pool mixed model with C saturation of a single C pool but not the whole soil. The one-pool C saturation model best fit the combined data from 14 sites, four individual sites were best fit by the linear model, and no site was best fit by the mixed model. These results indicate that existing agricultural field experiments generally have too small a range in C input levels to show saturation behavior, and so appear to verify the accepted linear relationship between soil C and C input used to model SOM dynamics. However, all sites combined, and the site with the widest range in C input levels, were best fit by the C saturation model. Nevertheless, the same site produced distinct effective stabilization capacity curves rather than an absolute C saturation level. We conclude that saturation of soil C does occur, and therefore that the greatest efficiency in soil C sequestration will be in soils further from C saturation.
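The model comparison described above can be sketched in miniature. The snippet below fits a linear form and a one-pool saturation form to synthetic SOC data, by ordinary least squares and a coarse grid search respectively; the data, the parameter grids, and the particular saturation form (an exponential rise to an asymptote) are all assumptions for illustration, not the paper's fitted models.

```python
# Compare a linear model C = a*I + b with a one-pool saturation model
# C = Cmax * (1 - exp(-k*I)) on synthetic, clearly saturating data.
import math

inputs = [1, 2, 4, 6, 8, 10]        # C input levels (arbitrary units)
stocks = [18, 30, 44, 51, 54, 55]   # steady-state SOC stocks (synthetic)

def sse(pred):
    """Sum of squared errors against the observed stocks."""
    return sum((p - y) ** 2 for p, y in zip(pred, stocks))

# Linear fit via closed-form ordinary least squares.
n = len(inputs)
mx, my = sum(inputs) / n, sum(stocks) / n
a = sum((x - mx) * (y - my) for x, y in zip(inputs, stocks)) / \
    sum((x - mx) ** 2 for x in inputs)
b = my - a * mx
sse_linear = sse([a * x + b for x in inputs])

# Saturation fit via coarse grid search over (Cmax, k).
grid = ((cmax, k) for cmax in range(40, 81)
        for k in [i / 100 for i in range(5, 60)])
best = min(grid, key=lambda p: sse([p[0] * (1 - math.exp(-p[1] * x))
                                    for x in inputs]))
sse_sat = sse([best[0] * (1 - math.exp(-best[1] * x)) for x in inputs])
print(sse_sat < sse_linear)
```

On data with a clear asymptote the saturation model wins; on data spanning only a narrow range of C inputs the two fits become hard to distinguish, which mirrors the paper's point about the limited input range of most field experiments.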

Relevance:

10.00%

Publisher:

Abstract:

With increasing pressure to deliver environmentally friendly and socially responsible highway infrastructure projects, stakeholders are also putting significant focus on the early identification of financial viability and outcomes for these projects. Infrastructure development typically requires major capital input, which may cause serious financial constraints for investors. The push for sustainability has added new dimensions to the evaluation of highway projects, particularly on the cost front. Comprehensive analysis of the cost implications of implementing sustainable measures in highway infrastructure throughout its lifespan is highly desirable and will become an essential part of the highway development process and a primary concern for decision makers. This paper discusses ongoing research that seeks to identify cost elements and issues related to sustainable measures for highway infrastructure projects. Through life-cycle costing analysis (LCCA), the financial implications of pursuing sustainability, which are of great concern to construction stakeholders, have been assessed to aid decision making when contemplating the design, development and operation of highway infrastructure. An extensive literature review and an evaluation of project reports from previous Australian highway projects were first conducted to reveal all potential cost elements. This provided the foundation for a questionnaire survey, which helped identify the specific issues and related costs that project stakeholders consider most critical in the Australian industry context. Through the survey, three key stakeholder groups in highway infrastructure development, namely consultants, contractors and government agencies, provided their views on the selection and priority ranking of the various categories. Findings of the survey are being integrated into proven LCCA models for further enhancement.
A new LCCA model will be developed to assist stakeholders in evaluating costs and investment decisions and in reaching an optimum balance between financial viability and sustainability deliverables.
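The core computation inside any LCCA model is discounting each cost stream to present value. The sketch below is a minimal, generic version of that step, with an invented 30-year cost profile and discount rate; it is not the model under development in the research.

```python
# Minimal life-cycle costing sketch: net present value of an initial
# capital cost plus a yearly cost stream. All figures are hypothetical.

def life_cycle_cost(initial, annual_costs, rate):
    """NPV of the initial cost plus costs paid at the end of each year."""
    return initial + sum(c / (1 + rate) ** (t + 1)
                         for t, c in enumerate(annual_costs))

# 30-year horizon: routine maintenance, plus a resurfacing in year 15.
annual = [2.0] * 30            # $M per year, assumed
annual[14] += 25.0             # extra resurfacing cost in year 15, assumed
lcc = life_cycle_cost(120.0, annual, 0.05)
print(round(lcc, 2))
```

Sustainability measures typically shift this profile, raising the initial cost while lowering the recurring terms, and the discounted comparison is what reveals whether the trade pays off over the asset's lifespan.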

Relevance:

10.00%

Publisher:

Abstract:

An algorithm that combines Kalman filter and Least Error Square (LES) techniques is proposed in this paper. The algorithm is intended to estimate signal attributes such as amplitude, frequency and phase angle online. The technique can be used in protection relays, digital AVRs, DGs, DSTATCOMs, FACTS devices and other power electronics applications. The Kalman filter is modified to operate on a fictitious input signal and provides precise estimates that are insensitive to noise and other disturbances. At the same time, the LES system is arranged to operate in critical transient cases to compensate for the delay and inaccuracy introduced by the response of the standard Kalman filter. Practical issues such as the effect of noise, higher-order harmonics, and the computational burden of the algorithm are addressed and tested in the paper. Several computer simulations and a laboratory test are presented to highlight the usefulness of the proposed method. Simulation results show that the proposed technique can simultaneously estimate the signal attributes even when the signal is highly distorted by non-linear loads and noise.
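To make the LES half of the estimator concrete, the sketch below recovers the amplitude and phase of a known-frequency component by linear least squares on a sin/cos basis. It covers only the LES component, not the modified Kalman filter, and the signal parameters (50 Hz, 1 kHz sampling, noise-free samples) are assumptions for illustration.

```python
# Least-error-squares estimation of amplitude and phase at a known
# frequency: x[k] = A*sin(w*t + phi) = a*sin(w*t) + b*cos(w*t), solved
# for (a, b) via the 2x2 normal equations. Parameters are illustrative.
import math

f, fs = 50.0, 1000.0                 # signal and sampling frequency (assumed)
amp_true, phase_true = 1.5, 0.6      # attributes to recover (assumed)
n_samples = 40                       # two full 50 Hz periods

t = [k / fs for k in range(n_samples)]
x = [amp_true * math.sin(2 * math.pi * f * tk + phase_true) for tk in t]

s = [math.sin(2 * math.pi * f * tk) for tk in t]
c = [math.cos(2 * math.pi * f * tk) for tk in t]
Sss = sum(v * v for v in s)
Scc = sum(v * v for v in c)
Ssc = sum(u * v for u, v in zip(s, c))
Sxs = sum(u * v for u, v in zip(x, s))
Sxc = sum(u * v for u, v in zip(x, c))

det = Sss * Scc - Ssc * Ssc          # normal-equation determinant
a = (Scc * Sxs - Ssc * Sxc) / det    # a = A*cos(phi)
b = (Sss * Sxc - Ssc * Sxs) / det    # b = A*sin(phi)

amp_est = math.hypot(a, b)
phase_est = math.atan2(b, a)
```

On noisy or distorted inputs the same normal equations yield the least-squares fit rather than an exact recovery, which is where the paper's Kalman stage complements the LES stage.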

Relevance:

10.00%

Publisher:

Abstract:

The configuration proposed in this paper aims to generate high voltage for pulsed power applications. The main idea is to charge two groups of capacitors in parallel through an inductor, taking advantage of the resonant phenomenon to charge each capacitor up to twice the input voltage level. In each resonant half-cycle, one of the capacitor groups is charged; finally, the charged capacitors are connected in series so that the sum of the capacitor voltages appears at the output of the topology. This topology can be considered a modified Marx generator that works on the resonant concept. Simulation models of this converter have been investigated in the Matlab/SIMULINK platform, and the results confirm the proper operation of the converter.
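The voltage-doubling behavior the topology exploits can be checked numerically: an ideal series L-C fed from a DC source rings up to twice the input voltage in one resonant half-cycle, following Vc(t) = Vin·(1 − cos(ω₀t)). The component values below are illustrative, not taken from the paper.

```python
# Numerical check of ideal L-C resonant charging from a DC source:
# the capacitor voltage peaks at 2*Vin after one resonant half-cycle.
import math

Vin, L, C = 100.0, 1e-3, 1e-6          # volts, henries, farads (assumed)
w0 = 1.0 / math.sqrt(L * C)            # resonant angular frequency
steps = 100000
dt = (math.pi / w0) / steps            # integrate over one half-cycle

vc, il = 0.0, 0.0                      # capacitor voltage, inductor current
peak = 0.0
for _ in range(steps):                 # semi-implicit Euler integration
    il += (Vin - vc) / L * dt          # L * di/dt = Vin - vc
    vc += il / C * dt                  # C * dvc/dt = il
    peak = max(peak, vc)
print(round(peak, 2))
```

With each of the two capacitor groups resonantly charged to roughly 2·Vin and then stacked in series, the output approaches the sum of the group voltages, which is the Marx-like multiplication the abstract describes.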

Relevance:

10.00%

Publisher:

Abstract:

There is widespread recognition of the need to better manage municipal property in most cities in the world. Structural problems across regional, state, and territorial governments that have legal powers to own and maintain real property are similar, regardless of each country's level of development, starting from the very basic level of property inventory records. The need to better manage local-government-owned property is the result of widespread decentralisation initiatives that have often devolved huge property portfolios from central to local governments almost “overnight”. At the same time, municipal and regional governments were, and continue to be, unprepared to deal with the multiple issues that come with the role of property owner and manager. The lack of discussion of public asset management, especially of the elements that should be incorporated in a framework, makes it important to study the discipline of public asset management further. The aim of this paper is to study the practice of public asset management in developed countries, especially the elements of the public asset management framework, and its transferability to developing countries. A case study, involving interviews and a focus group, was conducted to achieve this aim. The study found that proper asset identification, public asset needs analysis, asset life cycle and performance measurement are important elements that should be incorporated in a public asset management framework, and that these elements are transferable and applicable to local governments in developing countries. Finally, findings from this study provide useful input for local government policy makers, scholars and asset management practitioners seeking to establish a public asset management framework that makes local governments more efficient and effective in managing their assets and improves the quality of public services.

Relevance:

10.00%

Publisher:

Abstract:

My research investigates why nouns are learned disproportionately more frequently than other kinds of words during early language acquisition (Gentner, 1982; Gleitman et al., 2004). This question must be considered in the context of cognitive development in general. Infants have two major streams of environmental information to make meaningful: perceptual and linguistic. Perceptual information flows in from the senses and is processed into symbolic representations by the primitive language of thought (Fodor, 1975). These symbolic representations are then linked to linguistic input to enable language comprehension and, ultimately, production. Yet how exactly does perceptual information become conceptualized? Although this question is difficult, there has been progress. One way that children might have an easier job is if they have structures that simplify the data. Thus, if particular sorts of perceptual information could be separated from the mass of input, it would be easier for children to refer to those specific things when learning words (Spelke, 1990; Pylyshyn, 2003). It would be easier still if linguistic input were segmented in predictable ways (Gentner, 1982; Gleitman et al., 2004). Unfortunately, the frequency of patterns in lexical or grammatical input cannot explain the cross-cultural and cross-linguistic tendency to favor nouns over verbs and predicates. There are three examples of this failure: 1) a wide variety of nouns are uttered less frequently than a smaller number of verbs and yet are learned far more easily (Gentner, 1982); 2) word order and morphological transparency offer no insight when one contrasts the sentence structures and word inflections of different languages (Slobin, 1973); and 3) particular language-teaching behaviors (e.g. pointing at objects and repeating names for them) have little impact on children's tendency to prefer concrete nouns in their first fifty words (Newport, et al., 1977).
Although the linguistic solution appears problematic, there has been increasing evidence that the early visual system does indeed segment perceptual information in specific ways before the conscious mind begins to intervene (Pylyshyn, 2003). I argue that nouns are easier to learn because their referents directly connect with innate features of the perceptual faculty. This hypothesis stems from work on visual indexes by Zenon Pylyshyn (2001, 2003). Pylyshyn argues that the early visual system (the architecture of the "vision module") segments perceptual data into pre-conceptual proto-objects called FINSTs. FINSTs typically correspond to physical things such as Spelke objects (Spelke, 1990). Hence, before conceptualization, visual objects are picked out by the perceptual system demonstratively, like a pointing finger indicating ‘this’ or ‘that’. I suggest that this primitive system of demonstration elaborates on Gareth Evans's (1982) theory of nonconceptual content. Nouns are learned first because their referents attract demonstrative visual indexes. This theory also explains why infants less often name stationary objects such as ‘plate’ or ‘table’, but do name things that attract the focal attention of the early visual system, i.e., small objects that move, such as ‘dog’ or ‘ball’. This view leaves open the questions of how blind children learn words for visible objects and why children learn category nouns (e.g. 'dog') rather than proper nouns (e.g. 'Fido') or higher taxonomic distinctions (e.g. 'animal').

Relevance:

10.00%

Publisher:

Abstract:

This paper investigates what happened in one Australian primary school during the establishment, use and development of a computer laboratory over a period of two years. As part of a school renewal project, the computer lab was introduced as an ‘innovative’ way to improve the skills of teachers and children in information and communication technologies (ICT) and to lead to curriculum change. However, the way in which the lab was conceptualised and used worked against achieving these goals. The micropolitics of educational change and an input-output understanding of computers meant that change remained structural rather than pedagogical or philosophical.

Relevance:

10.00%

Publisher:

Abstract:

Traffic control at road junctions is one of the major concerns in most metropolitan cities. Controllers of various approaches are available, and the required control action is the effective green time assigned to each traffic stream within a traffic-light cycle. The application of fuzzy logic gives the controller the capability to handle the uncertain aspects of the system, such as drivers’ behaviour and the random arrival of vehicles. When turning traffic is allowed at the junction, the number of phases in the traffic-light cycle increases. The additional input variables inevitably complicate the controller and hence slow down the decision-making process, which is critical in this real-time control problem. In this paper, a hierarchical fuzzy logic controller is proposed to tackle this traffic control problem at a two-way road junction with turning traffic. The two levels of fuzzy logic controllers respectively devise the minimum effective green time and fine-tune it at each phase of the traffic-light cycle. The complexity of the controller at each level is reduced by its smaller rule set. The performance of this hierarchical controller is examined by comparison with a fixed-time controller under various traffic conditions. Substantial delay reduction has been achieved as a result, and the performance and limitations of the controller are discussed.
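A single level of such a controller reduces to a small fuzzy rule base mapping traffic state to green time. The toy sketch below maps queue length to an effective green time with triangular memberships and weighted-average defuzzification; the memberships and rule outputs are invented, and the paper's design stacks two such levels (minimum green, then fine-tuning) precisely so that each level keeps a small rule set like this one.

```python
# Toy one-level fuzzy inference: queue length -> effective green time,
# using triangular memberships and weighted-average defuzzification.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def green_time(queue):
    # Invented rules: short queue -> 15 s, medium -> 30 s, long -> 45 s.
    rules = [
        (tri(queue, -1, 0, 10), 15.0),
        (tri(queue, 5, 15, 25), 30.0),
        (tri(queue, 20, 30, 60), 45.0),
    ]
    num = sum(w * g for w, g in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 30.0   # fallback when no rule fires

print(green_time(0), green_time(15), green_time(30))
```

Adding turning-traffic phases as extra inputs to a flat controller multiplies the rule combinations; splitting the decision across two levels keeps each rule base near this size, which is the speed argument the abstract makes.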

Relevance:

10.00%

Publisher:

Abstract:

Recently discovered intrinsically photosensitive melanopsin retinal ganglion cells contribute to the maintenance of pupil diameter, recovery and post-illumination components of the pupillary light reflex and provide the primary environmental light input to the suprachiasmatic nucleus for photoentrainment of the circadian rhythm. This review summarises recent progress in understanding intrinsically photosensitive ganglion cell histology and physiological properties in the context of their contribution to the pupillary and circadian functions and introduces a clinical framework for using the pupillary light reflex to evaluate inner retinal (intrinsically photosensitive melanopsin ganglion cell) and outer retinal (rod and cone photoreceptor) function in the detection of retinal eye disease.