17 results for Fonction cumulative

in CentAUR: Central Archive University of Reading - UK


Relevance:

20.00%

Publisher:

Abstract:

The purpose of this document is to provide a single source of reference for every paper published in the journals directly related to research in Construction Management. It is indexed by author and keyword and contains the titles, authors, abstracts and keywords of every article from the following journals:
• Building Research and Information (BRI)
• Construction Management and Economics (CME)
• Engineering, Construction and Architectural Management (ECAM)
• Journal of Construction Procurement (JCP)
• Journal of Construction Research (JCR)
• Journal of Financial Management in Property and Construction (JFM)
• RICS Research Papers (RICS)
The index entries give short forms of the bibliographical citations, rather than page numbers, to enable annual updates to the abstracts. Each annual update will carry cumulative indexes, so that only one index needs to be consulted.

Relevance:

20.00%

Publisher:

Abstract:

The purpose of this document is to provide a single source of reference for every paper published in the journals directly related to research in Construction Management. This volume brings together articles published during 1999. It is indexed by author and keyword and contains the titles, authors, abstracts and keywords of every article from the following journals:
• Building Research and Information (BRI)
• Construction Management and Economics (CME)
• Engineering, Construction and Architectural Management (ECAM)
• Journal of Construction Procurement (JCP)
• RICS Research Papers (RICS)
The index entries give short forms of the bibliographical citations, rather than page numbers, to enable rapid reference to articles. A cumulative volume is available from the editor. Included in this volume is an appendix listing a wide range of journals associated with construction management research, giving details of frequency, editorial addresses and web sites, as well as whether each journal is international and/or refereed.

Relevance:

20.00%

Publisher:

Abstract:

Methodology used to measure in vitro gas production is reviewed to determine impacts of sources of variation on resultant gas production profiles (GPP). Current methods include measurement of gas production at constant pressure (e.g., use of gas tight syringes), a system that is inexpensive, but may be less sensitive than others thereby affecting its suitability in some situations. Automated systems that measure gas production at constant volume allow pressure to accumulate in the bottle, which is recorded at different times to produce a GPP, and may result in sufficiently high pressure that solubility of evolved gases in the medium is affected, thereby resulting in a recorded volume of gas that is lower than that predicted from stoichiometric calculations. Several other methods measure gas production at constant pressure and volume with either pressure transducers or sensors, and these may be manual, semi-automated or fully automated in operation. In these systems, gas is released as pressure increases, and vented gas is recorded. Agitating the medium does not consistently produce more gas with automated systems, and little or no effect of agitation was observed with manual systems. The apparatus affects GPP, but mathematical manipulation may enable effects of apparatus to be removed. The amount of substrate affects the volume of gas produced, but not rate of gas production, provided there is sufficient buffering capacity in the medium. Systems that use a very small amount of substrate are prone to experimental error in sample weighing. Effect of sample preparation on GPP has been found to be important, but further research is required to determine the optimum preparation that mimics animal chewing. 
Inoculum is the single largest source of variation in measuring GPP, as rumen fluid is variable and sampling schedules, diets fed to donor animals and ratios of rumen fluid/medium must be selected such that microbial activity is sufficiently high that it does not affect rate and extent of fermentation. Species of donor animal may also cause differences in GPP. End point measures can be mathematically manipulated to account for species differences, but rates of fermentation are not related. Other sources of inocula that have been used include caecal fluid (primarily for investigating hindgut fermentation in monogastrics), effluent from simulated rumen fermentation (e.g., 'Rusitec', which was as variable as rumen fluid), faeces, and frozen or freeze-dried rumen fluid (which were both less active than fresh rumen fluid). Use of mixtures of cell-free enzymes, or pure cultures of bacteria, may be a way of increasing GPP reproducibility, while reducing reliance on surgically modified animals. However, more research is required to develop these inocula. A number of media have been developed which buffer the incubation and provide relevant micro-nutrients to the microorganisms. To date, little research has been completed on relationships between the composition of the medium and measured GPP. However, comparing GPP from media that are either rich in N or N-free allows assessment of the contributions of N-containing compounds in the sample. (c) 2005 Published by Elsevier B.V.

Relevance:

20.00%

Publisher:

Abstract:

A study was conducted to estimate variation among laboratories and between manual and automated techniques of measuring pressure on the resulting gas production profiles (GPP). Eight feeds (molassed sugarbeet feed, grass silage, maize silage, soyabean hulls, maize gluten feed, whole crop wheat silage, wheat, glucose) were milled to pass a 1 mm screen and sent to three laboratories (ADAS Nutritional Sciences Research Unit, UK; Institute of Grassland and Environmental Research (IGER), UK; Wageningen University, The Netherlands). Each laboratory measured GPP over 144 h using standardised procedures with manual pressure transducers (MPT) and automated pressure systems (APS). The APS at ADAS used a pressure transducer and bottles in a shaking water bath, while the APS at Wageningen and IGER used a pressure sensor and bottles held in a stationary rack. Apparent dry matter degradability (ADDM) was estimated at the end of the incubation. GPP were fitted to a modified Michaelis-Menten model assuming a single phase of gas production, and GPP were described in terms of the asymptotic volume of gas produced (A), the time to half A (B), the time of maximum gas production rate (t_RM gas) and maximum gas production rate (R_M gas). There were effects (P<0.001) of substrate on all parameters. However, MPT produced more (P<0.001) gas, but with longer (P<0.001) B and t_RM gas (P<0.05) and lower (P<0.001) R_M gas compared to APS. There was no difference between apparatus in ADDM estimates. Interactions occurred between substrate and apparatus, substrate and laboratory, and laboratory and apparatus. However, when mean values for MPT were regressed from the individual laboratories, relationships were good (i.e., adjusted R² = 0.827 or higher). Good relationships were also observed with APS, although they were weaker than for MPT (i.e., adjusted R² = 0.723 or higher). The relationships between mean MPT and mean APS data were also good (i.e., adjusted R² = 0.844 or higher).
Data suggest that, although laboratory and method of measuring pressure are sources of variation in GPP estimation, it should be possible using appropriate mathematical models to standardise data among laboratories so that data from one laboratory could be extrapolated to others. This would allow development of a database of GPP data from many diverse feeds. (c) 2005 Published by Elsevier B.V.
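
The single-phase model the study fits is described only as a "modified Michaelis-Menten" curve with asymptotic volume A and half-time B. As a minimal sketch of how such a profile behaves, assuming the plain (unmodified) Michaelis-Menten-type form V(t) = A·t/(B + t) and purely hypothetical parameter values, since the paper's exact modification is not given here:

```python
def gas_volume(t, A, B):
    """Cumulative gas volume at time t (h) for a Michaelis-Menten-type
    gas production profile.

    A: asymptotic gas volume; B: time at which the volume reaches A/2.
    """
    return A * t / (B + t) if t > 0 else 0.0

# Hypothetical parameters for one substrate (not values from the paper).
A, B = 250.0, 12.0

# In this form B is, by construction, exactly the time to half A,
# matching the parameter definition used in the abstract.
assert abs(gas_volume(B, A, B) - A / 2) < 1e-9

# Sampling the profile over the 144 h incubation window.
profile = [gas_volume(t, A, B) for t in (0, 6, 12, 24, 48, 144)]
```

The curve rises monotonically toward A without ever reaching it, which is why A is reported as an asymptote rather than a measured end point.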

Relevance:

20.00%

Publisher:

Abstract:

This investigation deals with the question of when a particular population can be considered to be disease-free. The motivation is the case of BSE, where specific birth cohorts may present distinct disease-free subpopulations. The specific objective is to develop a statistical approach suitable for documenting freedom from disease, in particular, freedom from BSE in birth cohorts. The approach is based upon a geometric waiting time distribution for the occurrence of positive surveillance results and formalizes the relationship between design prevalence, cumulative sample size and statistical power. The simple geometric waiting time model is further modified to account for the diagnostic sensitivity and specificity associated with the detection of disease. This is exemplified for BSE using two different models for the diagnostic sensitivity. The model is furthermore modified in such a way that a set of different values for the design prevalence in the surveillance streams can be accommodated (prevalence heterogeneity) and a general expression for the power function is developed. For illustration, numerical results for BSE suggest that currently (data status September 2004) a birth cohort of Danish cattle born after March 1999 is free from BSE with probability (power) of 0.8746 or 0.8509, depending on the choice of a model for the diagnostic sensitivity.
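
The core relationship between design prevalence, cumulative sample size and power can be illustrated with the standard freedom-from-disease calculation. This is a sketch under simplifying assumptions (independent tests, perfect specificity, a single constant diagnostic sensitivity); the paper's full model additionally handles age-dependent sensitivity and prevalence heterogeneity, and the numbers below are illustrative, not the Danish cohort data:

```python
def power_freedom(n, design_prev, sensitivity=1.0):
    """Probability of at least one positive result among n tested animals,
    assuming true prevalence equals the design prevalence and each test is
    an independent Bernoulli trial (i.e., a geometric waiting time for the
    first positive)."""
    p_detect = design_prev * sensitivity
    return 1.0 - (1.0 - p_detect) ** n

# Illustrative only: design prevalence 1 in 10,000, sensitivity 0.9,
# 20,000 animals tested so far.
power = power_freedom(20_000, 1e-4, sensitivity=0.9)
```

Because no positives accumulate under continued surveillance, the power (the evidence for freedom) grows with cumulative sample size, which is exactly why the approach supports updating as data accrue.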

Relevance:

20.00%

Publisher:

Abstract:

Substantial resources are used for surveillance of bovine spongiform encephalopathy (BSE) despite an extremely low detection rate, especially in healthy slaughtered cattle. We have developed a method based on the geometric waiting time distribution to establish and update the statistical evidence for BSE-freedom for defined birth cohorts using continued surveillance data. The results suggest that currently (data included till September 2004) a birth cohort of Danish cattle born after March 1999 is free from BSE with probability (power) of 0.8746 or 0.8509, depending on the choice of a model for the diagnostic sensitivity. These results apply to an assumed design prevalence of 1 in 10,000 and account for prevalence heterogeneity. The age-dependent diagnostic sensitivity for the detection of BSE has been identified as a major determinant of the power. The incorporation of heterogeneity was deemed adequate on scientific grounds and led to improved power values. We propose our model as a decision tool for possible future modification of the BSE surveillance and discuss public health and international trade implications.

Relevance:

20.00%

Publisher:

Abstract:

Two experiments implement and evaluate a training scheme for learning to apply frequency formats to probability judgements couched in terms of percentages. Results indicate that both conditional and cumulative probability judgements can be improved in this manner; however, the scheme is insufficient to promote any deeper understanding of the problem structure. In both experiments, training on one problem type only (either conditional or cumulative risk judgements) resulted in an inappropriate transfer of a learned method at test. The obstacles facing a frequency-based training programme for teaching appropriate use of probability data are discussed. Copyright (c) 2006 John Wiley & Sons, Ltd.

Relevance:

20.00%

Publisher:

Abstract:

The tendency to neglect base-rates in judgment under uncertainty may be "notorious," as Barbey & Sloman (B&S) suggest, but it is neither inevitable (as they document; see also Koehler 1996) nor unique. Here we would like to point out another line of evidence connecting ecological rationality to dual processes, the failure of individuals to appropriately judge cumulative probability.

Relevance:

20.00%

Publisher:

Abstract:

This paper summarizes the theory of simple cumulative risks—for example, the risk of food poisoning from the consumption of a series of portions of tainted food. Problems concerning such risks are extraordinarily difficult for naïve individuals, and the paper explains the reasons for this difficulty. It describes how naïve individuals usually attempt to estimate cumulative risks, and it outlines a computer program that models these methods. This account predicts that estimates can be improved if problems of cumulative risk are framed so that individuals can focus on the appropriate subset of cases. The paper reports two experiments that corroborated this prediction. They also showed that whether problems are stated in terms of frequencies (80 out of 100 people got food poisoning) or in terms of percentages (80% of people got food poisoning) did not reliably affect accuracy.
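
Under the usual independence assumption, the simple cumulative risk over a series of exposures is just the complement of no bad outcome occurring on any of them; a minimal sketch:

```python
def cumulative_risk(p, n):
    """Probability of at least one bad outcome over n independent
    exposures, each with per-exposure probability p."""
    return 1.0 - (1.0 - p) ** n

# Made-up numbers: a 10% per-portion risk over 5 portions gives a
# cumulative risk of about 41%, not the 50% that the common additive
# intuition (5 x 10%) suggests.
risk = cumulative_risk(0.10, 5)
```

The gap between the multiplicative answer and the additive intuition is one way to see why naïve individuals find these problems so difficult.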

Relevance:

20.00%

Publisher:

Abstract:

A common procedure for studying the effects on cognition of repetitive transcranial magnetic stimulation (rTMS) is to deliver rTMS concurrent with task performance, and to compare task performance on these trials versus on trials without rTMS. Recent evidence that TMS can have effects on neural activity that persist longer than the experimental session itself, however, raises questions about the assumption of the transient nature of rTMS that underlies many concurrent (or "online") rTMS designs. To our knowledge, there have been no studies in the cognitive domain examining whether the application of brief trains of rTMS during specific epochs of a complex task may have effects that spill over into subsequent task epochs, and perhaps into subsequent trials. We looked for possible immediate spill-over and longer-term cumulative effects of rTMS in data from two studies of visual short-term delayed recognition. In 54 subjects, 10-Hz rTMS trains were applied to five different brain regions during the 3-s delay period of a spatial task, and in a second group of 15 subjects, electroencephalography (EEG) was recorded while 10-Hz rTMS was applied to two brain areas during the 3-s delay period of both spatial and object tasks. No evidence for immediate effects was found in the comparison of the memory probe-evoked response on trials that were vs. were not preceded by delay-period rTMS. No evidence for cumulative effects was found in analyses of behavioral performance, and of EEG signal, as a function of task block. The implications of these findings, and their relation to the broader literature on acute vs. long-lasting effects of rTMS, are considered.

Relevance:

20.00%

Publisher:

Abstract:

The global temperature response to increasing atmospheric CO2 is often quantified by metrics such as equilibrium climate sensitivity and transient climate response [1]. These approaches, however, do not account for carbon cycle feedbacks and therefore do not fully represent the net response of the Earth system to anthropogenic CO2 emissions. Climate–carbon modelling experiments have shown that: (1) the warming per unit CO2 emitted does not depend on the background CO2 concentration [2]; (2) the total allowable emissions for climate stabilization do not depend on the timing of those emissions [3, 4, 5]; and (3) the temperature response to a pulse of CO2 is approximately constant on timescales of decades to centuries [3, 6, 7, 8]. Here we generalize these results and show that the carbon–climate response (CCR), defined as the ratio of temperature change to cumulative carbon emissions, is approximately independent of both the atmospheric CO2 concentration and its rate of change on these timescales. From observational constraints, we estimate CCR to be in the range 1.0–2.1 °C per trillion tonnes of carbon (Tt C) emitted (5th to 95th percentiles), consistent with twenty-first-century CCR values simulated by climate–carbon models. Uncertainty in land-use CO2 emissions and aerosol forcing, however, means that higher observationally constrained values cannot be excluded. The CCR, when evaluated from climate–carbon models under idealized conditions, represents a simple yet robust metric for comparing models, which aggregates both climate feedbacks and carbon cycle feedbacks. CCR is also likely to be a useful concept for climate change mitigation and policy; by combining the uncertainties associated with climate sensitivity, carbon sinks and climate–carbon feedbacks into a single quantity, the CCR allows CO2-induced global mean temperature change to be inferred directly from cumulative carbon emissions.
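
Because CCR is approximately constant, CO2-induced warming scales linearly with cumulative emissions; a sketch using the 1.0–2.1 °C per Tt C range quoted in the abstract:

```python
def warming_from_emissions(cum_emissions_ttc, ccr_low=1.0, ccr_high=2.1):
    """Range of CO2-induced global mean warming (deg C) implied by
    cumulative carbon emissions (Tt C), using the observationally
    constrained 5th-95th percentile CCR range from the abstract
    (deg C per trillion tonne of carbon)."""
    return (ccr_low * cum_emissions_ttc, ccr_high * cum_emissions_ttc)

# One trillion tonnes of carbon implies roughly 1.0-2.1 deg C of warming
# under this linear relationship.
low, high = warming_from_emissions(1.0)
```

The linearity is the whole point of the metric: no knowledge of the emission pathway or the background CO2 concentration is needed to map cumulative emissions onto a temperature change.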

Relevance:

20.00%

Publisher:

Abstract:

Global efforts to mitigate climate change are guided by projections of future temperatures [1]. But the eventual equilibrium global mean temperature associated with a given stabilization level of atmospheric greenhouse gas concentrations remains uncertain [1, 2, 3], complicating the setting of stabilization targets to avoid potentially dangerous levels of global warming [4, 5, 6, 7, 8]. Similar problems apply to the carbon cycle: observations currently provide only a weak constraint on the response to future emissions [9, 10, 11]. Here we use ensemble simulations of simple climate-carbon-cycle models constrained by observations and projections from more comprehensive models to simulate the temperature response to a broad range of carbon dioxide emission pathways. We find that the peak warming caused by a given cumulative carbon dioxide emission is better constrained than the warming response to a stabilization scenario. Furthermore, the relationship between cumulative emissions and peak warming is remarkably insensitive to the emission pathway (timing of emissions or peak emission rate). Hence policy targets based on limiting cumulative emissions of carbon dioxide are likely to be more robust to scientific uncertainty than emission-rate or concentration targets. Total anthropogenic emissions of one trillion tonnes of carbon (3.67 trillion tonnes of CO2), about half of which has already been emitted since industrialization began, result in a most likely peak carbon-dioxide-induced warming of 2 °C above pre-industrial temperatures, with a 5–95% confidence interval of 1.3–3.9 °C.
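
The abstract's 3.67 conversion between carbon and CO2 masses is just the CO2/C molar-mass ratio 44/12; a small arithmetic check, useful when comparing budgets quoted in Tt C with those quoted in Tt CO2:

```python
# Molar mass of CO2 over molar mass of C: 44/12, approximately 3.67.
CO2_PER_C = 44.0 / 12.0

def co2_from_carbon(tt_c):
    """Convert cumulative emissions in trillion tonnes of carbon (Tt C)
    to the equivalent mass of CO2 (Tt CO2)."""
    return tt_c * CO2_PER_C

# One trillion tonnes of carbon corresponds to ~3.67 Tt CO2, matching
# the figure given in the abstract.
trillion_tonne_c_as_co2 = co2_from_carbon(1.0)
```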

Relevance:

20.00%

Publisher:

Abstract:

This paper evaluates environmental externality when the structure of the externality is cumulative. The evaluation exercise is based on the assumption that the agents in question form conjectural variations. A number of environments are encompassed within this classification and have received due attention in the literature. Each of these heterogeneous environments, however, possesses considerable analytical homogeneity and permits treatment within a general model. These environments include environmental externality, oligopoly and the analysis of the private provision of public goods. We highlight the general analytical approach by focusing on this latter context, in which debate centers around four issues: the existence of free-riding, the extent to which contributions are matched equally across individuals, the nature of conjectures consistent with equilibrium, and the allocative inefficiency of alternative regimes. This paper resolves each of these issues, with the following conclusions: A consistent-conjectures equilibrium exists in the private provision of public goods. It is the monopolistic-conjectures equilibrium. Agents act identically, contributing positive amounts of the public good in an efficient allocation of resources. There is complete matching of contributions among agents, no free-riding, and the allocation is independent of the number of members within the community. Thus the Olson conjecture—that inefficiency is exacerbated by community size—has no foundation in a consistent-conjectures, cumulative-externality context.