917 results for two-factor models
Abstract:
Although the construction pollution index has been put forward and proved to be an efficient approach to reducing or mitigating pollution levels during the construction planning stage, how to select the best construction plan by distinguishing the degree of its potential adverse environmental impacts remains an open research question. This paper first reviews environmental issues and their characteristics in construction, which are critical factors in evaluating the potential adverse impacts of a construction plan. These environmental characteristics are then used to structure two decision models for environmentally conscious construction planning using an analytic network process (ANP): a complicated model and a simplified model. The two ANP models are combined into a system called EnvironalPlanning, which is applied to evaluate the potential adverse environmental impacts of alternative construction plans.
Abstract:
Objective: To assess the effectiveness of absolute risk, relative risk, and number needed to harm (NNH) formats for communicating medicine side effects, with and without the provision of baseline risk information. Methods: A two-factor, risk increase format (relative, absolute, and NNH) × baseline (present/absent) between-participants design was used. A sample of 268 women was given a scenario about an increase in side effect risk with third generation oral contraceptives and answered written questions assessing their understanding, satisfaction, and likelihood of continuing to take the drug. Results: Provision of baseline information significantly improved risk estimates and increased satisfaction, although the estimates were still considerably higher than the actual risk. No differences between presentation formats were observed when baseline information was presented. Without baseline information, absolute risk led to the most accurate performance. Conclusion: The findings support the importance of informing people about the baseline level of risk when describing risk increases. In contrast, they offer no support for using number needed to harm. Practice implications: Health professionals should provide baseline risk information when presenting information about risk increases or decreases. More research is needed before numbers needed to harm (or treat) should be given to members of the general population.
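The three formats compared in this study all derive from the same two probabilities. A minimal sketch of the arithmetic, using illustrative risk values rather than the study's actual contraceptive figures:

```python
def risk_formats(baseline_risk, new_risk):
    """Express a risk increase in the three formats compared in the study.

    baseline_risk and new_risk are the probabilities of the side effect
    without and with the drug in question (illustrative values only).
    """
    absolute_increase = new_risk - baseline_risk      # absolute risk increase
    relative_risk = new_risk / baseline_risk          # relative risk (ratio)
    nnh = 1.0 / absolute_increase                     # number needed to harm
    return absolute_increase, relative_risk, nnh

# Hypothetical numbers: risk rising from 1 in 1000 to 2 in 1000.
# The same change reads as "+0.1 percentage points", "doubled risk",
# or "1 extra case per 1000 treated", which is why format matters.
ari, rr, nnh = risk_formats(1 / 1000, 2 / 1000)
```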
Abstract:
A generic model of Exergy Assessment is proposed for the Environmental Impact of the Building Lifecycle, with a special focus on the natural environment. Three environmental impacts, energy consumption, resource consumption, and pollutant discharge, have been analyzed with reference to energy-embodied exergy, resource chemical exergy, and abatement exergy, respectively. The generic model of Exergy Assessment of the Environmental Impact of the Building Lifecycle thus formulated contains two sub-models, one from the aspect of building energy utilization and the other from building materials use. Combined with theories by ecologists such as Odum, the paper evaluates a building's environmental sustainability through its exergy footprint and environmental impacts. A case study from Chongqing, China illustrates the application of this method. From the case study, it was found that energy consumption constitutes 70–80% of the total environmental impact during a 50-year building lifecycle, of which the operation phase accounts for 80%, the building material production phase for 15%, and the other phases for 5%.
Abstract:
The budgets of seven halogenated gases (CFC-11, CFC-12, CFC-113, CFC-114, CFC-115, CCl4 and SF6) are studied by comparing measurements in polar firn air from two Arctic and three Antarctic sites with simulation results from two numerical models: a 2-D atmospheric chemistry model and a 1-D firn diffusion model. The first is used to calculate atmospheric concentrations from emission trends based on industrial inventories; the calculated concentration trends are used by the second to produce depth concentration profiles in the firn. The 2-D atmospheric model is validated in the boundary layer by comparison with atmospheric station measurements, and vertically for CFC-12 by comparison with balloon and FTIR measurements. Firn air measurements provide constraints on historical atmospheric concentrations over the last century. Age distributions in the firn are discussed using a Green function approach. Finally, our results are used as input to a radiative model in order to evaluate the radiative forcing of our target gases. Multi-species and multi-site firn air studies make it possible to better constrain atmospheric trends. The low concentrations of all studied gases at the bottom of the firn, and their consistency with our model results, confirm that their natural sources are small. Our results indicate that the emissions, sinks and trends of CFC-11, CFC-12, CFC-113, CFC-115 and SF6 are well constrained, whereas this is not the case for CFC-114 and CCl4. Significant emission-dependent changes in the lifetimes of halocarbons destroyed in the stratosphere were obtained. These result from the time needed for their transport from the surface, where they are emitted, to the stratosphere, where they are destroyed. Efforts should be made to update and reduce the large uncertainties in CFC lifetimes.
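The Green function approach mentioned above treats a firn air sample as a weighted average of past atmospheric concentrations. A minimal sketch with a synthetic linear trend and an assumed Gaussian age distribution (illustrative only, not the calibrated models used in the study):

```python
import numpy as np

def firn_concentration(atm_history, age_distribution):
    """Concentration at one firn depth as a weighted atmospheric history.

    atm_history[i]: atmospheric mixing ratio i years before sampling.
    age_distribution[i]: relative weight (Green function) for age i years.
    """
    g = np.asarray(age_distribution, dtype=float)
    g = g / g.sum()                       # normalise the age distribution
    return float(np.dot(np.asarray(atm_history, dtype=float), g))

# Hypothetical declining trend going back 60 years, sampled through a
# roughly Gaussian age spread centred on 20 years (both made up here).
history = np.linspace(100.0, 50.0, 60)
ages = np.exp(-0.5 * ((np.arange(60) - 20) / 5.0) ** 2)
c = firn_concentration(history, ages)   # close to the value ~20 years ago
```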
Abstract:
Terahertz (THz) frequency radiation, 0.1 THz to 20 THz, is being investigated for biomedical imaging applications following the introduction of pulsed THz sources that produce picosecond pulses and function at room temperature. Owing to the broadband nature of the radiation, spectral and temporal information is available from radiation that has interacted with a sample; this information is exploited in the development of biomedical imaging tools and sensors. In this work, models to aid interpretation of broadband THz spectra were developed and evaluated. THz radiation lies on the boundary between regions best considered using a deterministic electromagnetic approach and those better analysed using a stochastic approach incorporating quantum mechanical effects, so two computational models to simulate the propagation of THz radiation in an absorbing medium were compared. The first was a thin film analysis and the second a stochastic Monte Carlo model. The Cole–Cole model was used to predict the variation with frequency of the physical properties of the sample, and scattering was neglected. The two models were compared with measurements from a highly absorbing water-based phantom. The Monte Carlo model gave a prediction closer to experiment over 0.1 to 3 THz. Knowledge of the frequency-dependent physical properties of the absorbing media, including their scattering characteristics, is necessary. The thin film model is computationally simple to implement but is restricted by the geometry of the sample it can describe. The Monte Carlo framework, despite being initially more complex, provides greater flexibility to investigate more complicated sample geometries.
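To give a flavour of the stochastic approach, here is a minimal Monte Carlo sketch of photon transport through a purely absorbing slab (scattering neglected, as in the abstract). The absorption coefficient and thickness are illustrative values, not the phantom's measured properties; the survival fraction should approach the deterministic Beer-Lambert prediction:

```python
import math
import random

def transmitted_fraction(mu_a, thickness, n_photons=100_000, seed=1):
    """Fraction of photons crossing an absorbing slab (no scattering).

    mu_a: absorption coefficient (1/cm); thickness in cm.
    """
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_photons):
        # Free path to absorption, drawn from an exponential distribution;
        # 1 - random() lies in (0, 1], so the log is always defined.
        free_path = -math.log(1.0 - rng.random()) / mu_a
        if free_path > thickness:
            survived += 1
    return survived / n_photons

mu_a, d = 200.0, 0.005             # illustrative: optical depth mu_a * d = 1
mc = transmitted_fraction(mu_a, d)
analytic = math.exp(-mu_a * d)     # Beer-Lambert result for the same slab
```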
Abstract:
This study examines the relation between corporate social performance and stock returns in the UK. We closely evaluate the interactions between social and financial performance with a set of disaggregated social performance indicators for environment, employment, and community activities instead of using an aggregate measure. While scores on a composite social performance indicator are negatively related to stock returns, we find the poor financial reward offered by such firms is attributable to their good social performance on the environment and, to a lesser extent, the community aspects. Considerable abnormal returns are available from holding a portfolio of the socially least desirable stocks. These relationships between social and financial performance can be rationalized by multi-factor models for explaining the cross-sectional variation in returns, but not by industry effects.
A wind-tunnel study of flow distortion at a meteorological sensor on top of the BT Tower, London, UK
Abstract:
High quality wind measurements in cities are needed for numerous applications including wind engineering. Such data-sets are rare and measurement platforms may not be optimal for meteorological observations. Two years' wind data were collected on the BT Tower, London, UK, showing an upward deflection on average for all wind directions. Wind tunnel simulations were performed to investigate flow distortion around two scale models of the Tower. Using a 1:160 scale model, it was shown that the Tower causes a small deflection (ca. 0.5°) compared to the lattice on top, on which the instruments were placed (ca. 0–4°). These deflections may have been underestimated due to wind tunnel blockage. Using a 1:40 model, the observed flow pattern was consistent with streamwise vortex pairs shed from the upstream lattice edge. Correction factors were derived for different wind directions and reduced deflection in the full-scale data-set by <3°. Instrumental tilt caused a sinusoidal variation in deflection of ca. 2°. The residual deflection (ca. 3°) was attributed to the Tower itself. Correction of the wind-speeds was small (average 1%); it was therefore deduced that flow distortion does not significantly affect the measured wind-speeds and that the wind climate statistics are reliable.
Abstract:
In this paper the authors exploit two equivalent formulations of the average rate of material entropy production in the climate system to propose an approximate splitting between contributions due to vertical and eminently horizontal processes. This approach is based only on 2D radiative fields at the surface and at the top of the atmosphere. Using 2D fields at the top of the atmosphere alone, lower bounds to the rate of material entropy production and to the intensity of the Lorenz energy cycle are derived. By introducing a measure of the efficiency of the planetary system with respect to horizontal thermodynamic processes, it is possible to gain insight into the earlier intuition that a baroclinic heat engine extracting work from the meridional heat flux can be defined. The approximate formula of the material entropy production is verified and used for studying the global thermodynamic properties of climate models (CMs) included in the Program for Climate Model Diagnosis and Intercomparison (PCMDI)/phase 3 of the Coupled Model Intercomparison Project (CMIP3) dataset in preindustrial climate conditions. It is found that about 90% of the material entropy production is due to vertical processes such as convection, whereas the large-scale meridional heat transport contributes only about 10% of the total. This suggests that the traditional two-box models used for providing a minimal representation of entropy production in planetary systems are not appropriate, whereas a basic, but conceptually correct, description can be framed in terms of a four-box model. The total material entropy production is typically 55 mW m−2 K−1, with discrepancies on the order of 5%, and CMs' baroclinic efficiencies are clustered around 0.055. The lower bounds on the intensity of the Lorenz energy cycle featured by CMs are found to be around 1.0–1.5 W m−2, which implies that the derived inequality is rather stringent.
When looking at the variability and covariability of the considered thermodynamic quantities, the agreement among CMs is worse, suggesting that the description of feedbacks is more uncertain. The contributions to material entropy production from vertical and horizontal processes are positively correlated, so that no compensation mechanism seems in place. Quite consistently among CMs, the variability of the efficiency of the system is a better proxy for variability of the entropy production due to horizontal processes than that of the large-scale heat flux. The possibility of providing constraints on the 3D dynamics of the fluid envelope based only on 2D observations of radiative fluxes seems promising for the observational study of planets and for testing numerical models.
Abstract:
Common approaches to the simulation of borehole heat exchangers (BHEs) assume heat transfer in the circulating fluid and grout to be in a quasi-steady state and ignore fluctuations in fluid temperature due to transport of the fluid around the loop. However, in domestic ground source heat pump (GSHP) systems, the heat pump and circulating pumps switch on and off during a given hour; therefore, the thermal mass of the circulating fluid and the dynamics of fluid transport through the loop have important implications for system design. This may also be important in commercial systems that are used intermittently. This article presents a transient simulation of a domestic GSHP system with a single BHE using a dynamic three-dimensional (3D) numerical BHE model. The results show that the delayed response associated with the transit of fluid along the pipe loop is of some significance in moderating swings in temperature during heat pump operation. In addition, when 3D effects are considered, a lower heat transfer rate is predicted during steady operations. These effects could be important when considering heat exchanger design and system control. The results will be used to develop refined two-dimensional models.
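The transit delay discussed above can be estimated from the loop geometry alone. A back-of-envelope sketch under a plug-flow assumption, with assumed pipe dimensions and flow rate (not values from the article):

```python
import math

def loop_transit_time(pipe_inner_diameter, loop_length, flow_rate):
    """Seconds for fluid to traverse the loop, assuming plug flow.

    pipe_inner_diameter and loop_length in m, flow_rate in m^3/s.
    """
    area = math.pi * (pipe_inner_diameter / 2.0) ** 2   # cross-section, m^2
    return loop_length * area / flow_rate               # volume / flow rate

# Hypothetical single U-tube BHE: 100 m deep (so ~200 m of pipe),
# 26 mm inner diameter, 0.2 L/s circulating flow.
t = loop_transit_time(0.026, 200.0, 0.0002)   # on the order of minutes
```

A delay of this magnitude is comparable to the on/off cycling periods of a domestic heat pump, which is why ignoring fluid transport misrepresents short-timescale temperature swings.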
Abstract:
The formation of complexes appearing in solutions containing oppositely charged polyelectrolytes has been investigated by Monte Carlo simulations using two different models. The polyions are described as flexible chains of 20 connected charged hard spheres immersed in a homogeneous dielectric background representing water. The small ions are either explicitly included or their effect is described by a screened Coulomb potential. The simulated solutions contained 10 positively charged polyions with 0, 2, or 5 negatively charged polyions and the respective counterions. Two different linear charge densities were considered, and structure factors, radial distribution functions, and polyion extensions were determined. A redistribution of positively charged polyions involving strong complexes formed between the oppositely charged polyions appeared as the number of negatively charged polyions was increased. The nature of the complexes was found to depend on the linear charge density of the chains. The simplified model involving the screened Coulomb potential gave qualitatively similar results to the model with explicit small ions. Finally, owing to the complex formation, the sampling in configurational space is nontrivial, and the efficiency of different trial moves was examined.
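The simplified model's screened Coulomb interaction can be written down directly. A sketch in reduced units (charges in units of the elementary charge, lengths in units of the Bjerrum length, energies in kT), with illustrative parameters rather than the paper's:

```python
import math

def screened_coulomb(z1, z2, r, kappa):
    """Pair energy (in kT) between point charges z1 and z2.

    r: separation in Bjerrum lengths; kappa: inverse screening length.
    kappa = 0 recovers the unscreened Coulomb interaction.
    """
    return z1 * z2 * math.exp(-kappa * r) / r

# Oppositely charged monomers two Bjerrum lengths apart: screening by the
# (implicit) small ions damps the attraction by a factor exp(-kappa * r).
u_bare = screened_coulomb(+1, -1, 2.0, kappa=0.0)
u_screened = screened_coulomb(+1, -1, 2.0, kappa=1.0)
```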
Abstract:
For the first time, vertical column measurements of nitric acid (HNO3) above the Arctic Stratospheric Ozone Observatory (AStrO) at Eureka (80N, 86W), Canada, have been made during polar night using lunar spectra recorded with a Fourier Transform Infrared (FTIR) spectrometer, from October 2001 to March 2002. AStrO is part of the primary Arctic station of the Network for the Detection of Stratospheric Change (NDSC). These measurements were compared with FTIR measurements at two other NDSC Arctic sites: Thule, Greenland (76.5N, 68.8W) and Kiruna, Sweden (67.8N, 20.4E). The measurements were also compared with two atmospheric models: the Canadian Middle Atmosphere Model (CMAM) and SLIMCAT. This is the first time that CMAM HNO3 columns have been compared with observations in the Arctic. Eureka lunar measurements are in good agreement with solar ones made with the same instrument. Eureka and Thule HNO3 columns are consistent within measurement error. Differences among HNO3 columns measured at Kiruna and those measured at Eureka and Thule can be explained on the basis of the available sunlight hours and the polar vortex location. The comparison of CMAM HNO3 columns with Eureka and Kiruna data shows good agreement, considering CMAM's small inter-annual variability. The warm 2001/02 winter, with almost no Polar Stratospheric Clouds (PSCs), makes the comparison of the warm climate version of CMAM with these observations a good test for CMAM under no-PSC conditions. SLIMCAT captures the magnitude and day-to-day variability of HNO3 columns at Eureka, but generally reports higher HNO3 columns than the CMAM climatological mean columns.
Abstract:
The temporal relationship between changes in cerebral blood flow (CBF) and cerebral blood volume (CBV) is important in the biophysical modeling and interpretation of the hemodynamic response to activation, particularly in the context of magnetic resonance imaging and the blood oxygen level-dependent signal. Grubb et al. (1974) measured the steady state relationship between changes in CBV and CBF after hypercapnic challenge. The relationship CBV ∝ CBF^Φ has been used extensively in the literature. Two similar models, the Balloon (Buxton et al., 1998) and the Windkessel (Mandeville et al., 1999), have been proposed to describe the temporal dynamics of changes in CBV with respect to changes in CBF. In this study, a dynamic model extending the Windkessel model by incorporating delayed compliance is presented. The extended model is better able to capture the dynamics of CBV changes after changes in CBF, particularly in the return-to-baseline stages of the response.
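The steady-state Grubb relation quoted above is a one-line power law. A minimal sketch using the commonly cited exponent Φ ≈ 0.38; the flow change below is illustrative:

```python
def cbv_ratio(cbf_ratio, phi=0.38):
    """Steady-state Grubb relation: CBV/CBV0 = (CBF/CBF0) ** phi.

    phi = 0.38 is the commonly cited Grubb exponent; cbf_ratio is the
    fractional change in flow (1.0 means no change).
    """
    return cbf_ratio ** phi

# A 50% increase in flow implies roughly a 17% increase in volume,
# i.e. volume responds much more weakly than flow.
v = cbv_ratio(1.5)
```

The dynamic models discussed in the abstract (Balloon, Windkessel, and the delayed-compliance extension) describe how CBV approaches this steady-state value over time rather than replacing the power law itself.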
Abstract:
We consider forecasting with factors, variables and both, modeling in-sample using Autometrics so all principal components and variables can be included jointly, while tackling multiple breaks by impulse-indicator saturation. A forecast-error taxonomy for factor models highlights the impacts of location shifts on forecast-error biases. Forecasting US GDP over 1-, 4- and 8-step horizons using the dataset from Stock and Watson (2009) updated to 2011:2 shows factor models are more useful for nowcasting or short-term forecasting, but their relative performance declines as the forecast horizon increases. Forecasts for GDP levels highlight the need for robust strategies, such as intercept corrections or differencing, when location shifts occur as in the recent financial crisis.
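The diffusion-index idea behind factor forecasting can be sketched in a few lines: extract principal components from a panel of predictors, then regress the target on the estimated factors. The data below are synthetic, not the Stock and Watson dataset, and plain least squares stands in for Autometrics:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, k = 120, 30, 2                        # periods, predictors, factors kept
F = rng.standard_normal((T, k))             # latent factors (unobserved)
load = rng.standard_normal((k, N))          # factor loadings
X = F @ load + 0.1 * rng.standard_normal((T, N))   # observed predictor panel
y = F[:, 0] + 0.1 * rng.standard_normal(T)         # target loads on factor 1

# Principal components of the centred panel estimate the factor space.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
factors = Xc @ Vt[:k].T

# Regress the target on the estimated factors (a nowcasting regression).
Z = np.column_stack([np.ones(T), factors])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
residual_sd = float(np.std(y - Z @ beta))   # small if factors are recovered
```

Because the target here loads on a factor the panel measures well, the residual spread is close to the idiosyncratic noise level; the abstract's point is that this advantage fades at longer horizons and breaks down under location shifts.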
Abstract:
The warm conveyor belt (WCB) of an extratropical cyclone generally splits into two branches. One branch (WCB1) turns anticyclonically into the downstream upper-level tropospheric ridge, while the second branch (WCB2) wraps cyclonically around the cyclone centre. Here, the WCB split in a typical North Atlantic cold-season cyclone is analysed using two numerical models: the Met Office Unified Model and the COSMO model. The WCB flow is defined using off-line trajectory analysis. The two models represent the WCB split consistently. The split occurs early in the evolution of the WCB, with WCB1 experiencing maximum ascent at lower latitudes and with higher moisture content than WCB2. WCB1 ascends abruptly along the cold front, where the resolved ascent rates are greatest and there is also line convection. In contrast, WCB2 remains at lower levels for longer before undergoing saturated large-scale ascent over the system's warm front. The greater moisture in the WCB1 inflow results in a greater net potential temperature change from latent heat release, which determines the final isentropic level of each branch. WCB1 also exhibits lower outflow potential vorticity values than WCB2. Complementary diagnostics in the two models are utilised to study the influence of individual diabatic processes on the WCB. Total diabatic heating rates along the WCB branches are comparable in the two models, with microphysical processes in the large-scale cloud schemes being the major contributor to this heating. However, the different convective parameterisation schemes used by the models make significantly different contributions to the total heating. These results have implications for studies on the influence of the WCB outflow on Rossby wave evolution and breaking. Key aspects are the net potential temperature change and the isentropic level of the outflow, which together will influence the relative mass going into each WCB branch and the associated negative PV anomalies in the tropopause-level flow.
Abstract:
The Wetland and Wetland CH4 Intercomparison of Models Project (WETCHIMP) was created to evaluate our present ability to simulate large-scale wetland characteristics and corresponding methane (CH4) emissions. A multi-model comparison is essential to evaluate the key uncertainties in the mechanisms and parameters leading to methane emissions. Ten modelling groups joined WETCHIMP to run eight global and two regional models with a common experimental protocol using the same climate and atmospheric carbon dioxide (CO2) forcing datasets. We reported the main conclusions from the intercomparison effort in a companion paper (Melton et al., 2013). Here we provide technical details for the six experiments, which included an equilibrium, a transient, and an optimized run plus three sensitivity experiments (temperature, precipitation, and atmospheric CO2 concentration). The diversity of approaches used by the models is summarized through a series of conceptual figures, and is used to evaluate the wide range of wetland extent and CH4 fluxes predicted by the models in the equilibrium run. We discuss relationships among the various approaches and patterns in consistencies of these model predictions. Within this group of models, there are three broad classes of methods used to estimate wetland extent: prescribed based on wetland distribution maps, prognostic relationships between hydrological states based on satellite observations, and explicit hydrological mass balances. A larger variety of approaches was used to estimate the net CH4 fluxes from wetland systems. Even though modelling of wetland extent and CH4 emissions has progressed significantly over recent decades, large uncertainties still exist when estimating CH4 emissions: there is little consensus on model structure or complexity due to knowledge gaps, different aims of the models, and the range of temporal and spatial resolutions of the models.