926 results for ASSESSMENT MODELS
Abstract:
The 'direct costs' attributable to 30 different endemic diseases of farm animals in Great Britain are estimated using a standardised method to construct a simple model for each disease that includes consideration of disease prevention and treatment costs. The models so far developed provide a basis for further analyses including cost-benefit analyses for the economic assessment of disease control options. The approach used reflects the inherent livestock disease information constraints, which limit the application of other economic analytical methods. It is a practical and transparent approach that is relatively easily communicated to veterinary scientists and policy makers. The next step is to develop the approach by incorporating wider economic considerations into the analyses in a way that will demonstrate to policy makers and others the importance of an economic perspective to livestock disease issues.
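A minimal sketch of the kind of standardised per-disease direct-cost model the abstract describes, assuming (as a labelled simplification) that the direct cost of a disease is the sum of output losses plus prevention and treatment expenditure; the disease names and figures below are hypothetical illustrations, not the study's estimates:

```python
from dataclasses import dataclass

@dataclass
class DiseaseCostModel:
    """Illustrative per-disease direct-cost model (all parameters hypothetical)."""
    output_loss: float      # value of lost production, GBP/year
    prevention_cost: float  # expenditure on disease prevention, GBP/year
    treatment_cost: float   # expenditure on disease treatment, GBP/year

    def direct_cost(self) -> float:
        # Direct cost = production losses plus control (prevention + treatment) expenditure
        return self.output_loss + self.prevention_cost + self.treatment_cost

# Example: aggregate the same simple model across several endemic diseases
diseases = {
    "mastitis": DiseaseCostModel(90e6, 25e6, 40e6),
    "lameness": DiseaseCostModel(50e6, 10e6, 15e6),
}
total = sum(m.direct_cost() for m in diseases.values())
print(f"Total direct cost: GBP {total:,.0f}/year")
```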
Abstract:
It is well established that crop production is inherently vulnerable to variations in the weather and climate. More recently the influence of vegetation on the state of the atmosphere has been recognized. The seasonal growth of crops can influence the atmosphere and have local impacts on the weather, which in turn affects the rate of seasonal crop growth and development. Considering the coupled nature of the crop-climate system, and the fact that a significant proportion of land is devoted to the cultivation of crops, important interactions may be missed when studying crops and the climate system in isolation, particularly in the context of land use and climate change. To represent the two-way interactions between seasonal crop growth and atmospheric variability, we integrate a crop model developed specifically to operate at large spatial scales (General Large Area Model for annual crops) into the land surface component of a global climate model (GCM; HadAM3). In the new coupled crop-climate model, the simulated environment (atmosphere and soil states) influences growth and development of the crop, while simultaneously the temporal variations in crop leaf area and height across its growing season alter the characteristics of the land surface that are important determinants of surface fluxes of heat and moisture, as well as other aspects of the land-surface hydrological cycle. The coupled model realistically simulates the seasonal growth of a summer annual crop in response to the GCM's simulated weather and climate. The model also reproduces the observed relationship between seasonal rainfall and crop yield. The integration of a large-scale single crop model into a GCM, as described here, represents a first step towards the development of fully coupled crop and climate models. Future development priorities and challenges related to coupling crop and climate models are discussed.
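A toy illustration of the two-way coupling idea, assuming a single feedback loop in which canopy leaf area raises evapotranspiration, evapotranspiration depletes soil moisture, and soil moisture in turn limits leaf growth; every parameter here is invented, and the real GLAM-in-HadAM3 coupling exchanges far more fields (heat and moisture fluxes, canopy height, and so on):

```python
import math

# Toy two-way crop-atmosphere coupling: leaf area index (LAI) modulates
# evapotranspiration, which depletes soil moisture, which limits leaf growth.
soil_moisture = 1.0   # fraction of available water capacity
lai = 0.1             # leaf area index

for day in range(120):                      # one growing season
    et = 0.01 * (1 - math.exp(-0.5 * lai))  # ET rises with canopy cover
    rain = 0.008 if day % 5 else 0.05       # crude rainfall forcing
    soil_moisture = min(1.0, soil_moisture + rain - et)
    stress = max(0.0, min(1.0, soil_moisture / 0.5))   # water-stress factor
    lai += 0.08 * stress * (1 - lai / 5.0)  # logistic leaf growth under stress

print(f"End-of-season LAI: {lai:.2f}, soil moisture: {soil_moisture:.2f}")
```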
Abstract:
This chapter introduces ABMs, their construction, and the pros and cons of their use. Although relatively new, agent-based models (ABMs) have great potential for use in ecotoxicological research – their primary advantage being the realistic simulations that can be constructed, particularly their explicit handling of space and time. Examples are provided of their use in ecotoxicology, primarily exemplified by different implementations of the ALMaSS system. The examples presented demonstrate how multiple stressors, landscape structure, details regarding toxicology, animal behavior, and socioeconomic effects can and should be taken into account when constructing simulations for risk assessment. Like ecological systems, in ABMs the behavior at the system level is not simply the mean of the component responses, but the sum of the often nonlinear interactions between components in the system; hence this modeling approach opens the door to implementing and testing much more realistic and holistic ecotoxicological models than are currently used.
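A minimal agent-based sketch of the kind of spatially explicit exposure simulation described, assuming random-walking animals that accumulate a dose in pesticide-treated landscape cells and suffer dose-dependent mortality; this is purely illustrative and vastly simpler than ALMaSS:

```python
import random

random.seed(1)
SIZE, DAYS, N = 20, 30, 200
# Sprayed field occupies the left part of the landscape (hypothetical layout).
treated = {(x, y) for x in range(SIZE) for y in range(SIZE) if x < 8}

agents = [{"pos": (random.randrange(SIZE), random.randrange(SIZE)), "dose": 0.0}
          for _ in range(N)]

for _ in range(DAYS):
    survivors = []
    for a in agents:
        x, y = a["pos"]
        a["pos"] = ((x + random.choice([-1, 0, 1])) % SIZE,
                    (y + random.choice([-1, 0, 1])) % SIZE)
        if a["pos"] in treated:
            a["dose"] += 0.1                    # local, spatially explicit exposure
        if random.random() > 0.02 * a["dose"]:  # dose-dependent daily mortality
            survivors.append(a)
    agents = survivors

# The population-level outcome emerges from individual movement and exposure.
print(f"Survivors after {DAYS} days: {len(agents)} of {N}")
```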
Abstract:
Background: Selecting the highest quality 3D model of a protein structure from a number of alternatives remains an important challenge in the field of structural bioinformatics. Many Model Quality Assessment Programs (MQAPs) have been developed which adopt various strategies in order to tackle this problem, ranging from the so-called "true" MQAPs capable of producing a single energy score based on a single model, to methods which rely on structural comparisons of multiple models or additional information from meta-servers. However, it is clear that no current method can consistently separate the highest accuracy models from the lowest. In this paper, a number of the top performing MQAP methods are benchmarked in the context of the potential value that they add to protein fold recognition. Two novel methods are also described: ModSSEA, which is based on the alignment of predicted secondary structure elements, and ModFOLD, which combines several true MQAP methods using an artificial neural network. Results: The ModSSEA method is found to be an effective model quality assessment program for ranking multiple models from many servers; however, further accuracy can be gained by using the consensus approach of ModFOLD. The ModFOLD method is shown to significantly outperform the true MQAPs tested and is competitive with methods which make use of clustering or additional information from multiple servers. Several of the true MQAPs are also shown to add value to most individual fold recognition servers by improving model selection, when applied as a post filter in order to re-rank models. Conclusion: MQAPs should be benchmarked appropriately for the practical context in which they are intended to be used. Clustering-based methods are the top performing MQAPs where many models are available from many servers; however, they often do not add value to individual fold recognition servers when limited models are available. Conversely, the true MQAP methods tested can often be used as effective post filters for re-ranking few models from individual fold recognition servers, and further improvements can be achieved using a consensus of these methods.
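A sketch of the consensus idea behind ModFOLD, assuming a small neural network trained to map several component MQAP scores onto an observed quality measure and then used to rank candidate models; the data, network size, and target variable are stand-ins, not the published method's details:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 500
# Three component "true" MQAP scores per model (synthetic training data).
mqap_scores = rng.random((n, 3))
# Synthetic "observed" model quality, e.g. similarity to the native structure.
true_quality = mqap_scores @ [0.5, 0.3, 0.2] + rng.normal(0, 0.05, n)

# Small neural network combines the individual scores into one consensus score.
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
net.fit(mqap_scores, true_quality)

candidate_models = rng.random((5, 3))                # scores for 5 models of one target
ranked = np.argsort(-net.predict(candidate_models))  # best predicted quality first
print("Model ranking (best first):", ranked)
```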
Abstract:
It is generally acknowledged that population-level assessments provide a better measure of response to toxicants than assessments of individual-level effects. Population-level assessments generally require the use of models to integrate potentially complex data about the effects of toxicants on life-history traits, and to provide a relevant measure of ecological impact. Building on excellent earlier reviews, we here briefly outline the modelling options in population-level risk assessment. Modelling is used to calculate population endpoints from available data, which is often about individual life histories, the ways that individuals interact with each other, the environment and other species, and the ways individuals are affected by pesticides. As population endpoints, we recommend the use of population abundance, population growth rate, and the chance of population persistence. We recommend two types of model: simple life-history models distinguishing two life-history stages, juveniles and adults; and spatially explicit individual-based landscape models. Life-history models are very quick to set up and run, and they provide a great deal of insight. At the other extreme, individual-based landscape models provide the greatest verisimilitude, albeit at the cost of greatly increased complexity. We conclude with a discussion of the implications of the severe problems of parameterising models.
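A sketch of the recommended two-stage life-history model, assuming a standard juvenile/adult projection matrix whose dominant eigenvalue gives the population growth rate; the parameter values and the toxicant effect (a proportional fecundity reduction) are hypothetical:

```python
import numpy as np

def growth_rate(fecundity, juv_survival, adult_survival):
    # Two-stage projection matrix: juveniles produced by adults (top row),
    # maturation and adult survival (bottom row).
    A = np.array([[0.0,          fecundity],
                  [juv_survival, adult_survival]])
    # Population growth rate lambda = dominant eigenvalue of A.
    return max(abs(np.linalg.eigvals(A)))

lam_control = growth_rate(fecundity=4.0, juv_survival=0.3, adult_survival=0.5)
lam_exposed = growth_rate(fecundity=4.0 * 0.6,  # 40% fecundity loss under exposure
                          juv_survival=0.3, adult_survival=0.5)
print(f"lambda control: {lam_control:.3f}, exposed: {lam_exposed:.3f}")
# lambda < 1 would indicate projected decline, a population-level risk endpoint.
```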
Abstract:
The purpose of this paper is to present two multi-criteria decision-making models, an Analytic Hierarchy Process (AHP) model and an Analytic Network Process (ANP) model, for the assessment of deconstruction plans, and to compare the two models with an experimental case study. Deconstruction planning is under pressure to reduce operation costs, adverse environmental impacts and duration, while improving productivity and safety in accordance with structure characteristics, site conditions and past experience. To achieve these targets in deconstruction projects, there is a pressing need to develop a formal procedure for contractors to select the most appropriate deconstruction plan. Because a number of factors influence the selection of deconstruction techniques, engineers need effective tools to conduct the selection process. In this regard, multi-criteria decision-making methods such as AHP have been adopted to support deconstruction technique selection in previous research, in which it has been shown that the AHP method can help decision-makers make informed decisions on deconstruction technique selection based on a sound technical framework. In this paper, the authors present the application and comparison of two decision-making models, the AHP model and the ANP model, for deconstruction plan assessment. The paper concludes that both AHP and ANP are viable and capable tools for deconstruction plan assessment under the same set of evaluation criteria. However, although the ANP can measure relationships among selection criteria and their sub-criteria, which are normally ignored in the AHP, the authors also indicate that whether the ANP model can provide a more accurate result should be examined in further research.
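A sketch of the core AHP step for such a selection problem, assuming criteria weights are derived from the principal eigenvector of a pairwise comparison matrix and checked with the usual consistency ratio (CR < 0.1 is the common acceptance threshold); the criteria and judgments below are invented:

```python
import numpy as np

# Pairwise comparisons on Saaty's 1-9 scale for four hypothetical criteria:
# cost, environmental impact, duration, safety.
A = np.array([[1,   3,   5,   1],
              [1/3, 1,   3,   1/3],
              [1/5, 1/3, 1,   1/5],
              [1,   3,   5,   1]], dtype=float)

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)               # principal eigenvalue lambda_max
weights = np.abs(vecs[:, k].real)
weights /= weights.sum()               # normalised criteria weights

n = A.shape[0]
ci = (vals[k].real - n) / (n - 1)      # consistency index
ri = 0.90                              # Saaty's random index for n = 4
print("weights:", np.round(weights, 3), " CR:", round(ci / ri, 3))
```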
Abstract:
While buildings provide shelter for human beings, previous models for assessing the intelligence of a building seldom consider the responses of occupants. In addition, the assessment is usually conducted by an authoritative organization on a yearly basis, and thus can seldom provide timely assistance to facility managers seeking to improve their daily facility maintenance performance. By extending the law of entropy into the area of intelligent buildings, this paper demonstrates that both energy consumption and the responses of occupants are important when partially assessing the intelligence of a building. The study then develops a sensor-based real-time building intelligence (BI) assessment model. An experimental case study demonstrates how the model can be implemented. The developed model addresses these two shortcomings of previous BI assessment models.
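A speculative sketch of a sensor-based real-time indicator in this spirit, assuming the score combines normalised energy consumption with the Shannon entropy of occupant feedback counts; the paper's actual entropy-based formulation is not reproduced here, and every quantity below is a stand-in:

```python
import math

def shannon_entropy(counts):
    """Shannon entropy (bits) of a count distribution."""
    total = sum(counts)
    ps = [c / total for c in counts if c]
    return -sum(p * math.log(p, 2) for p in ps)

energy_use = 0.72            # fraction of a benchmark consumption (hypothetical)
feedback = [40, 30, 20, 10]  # occupant responses per comfort category (sensor counts)

# Normalised "disorder" of occupant responses, 0 (uniform comfort) to 1.
disorder = shannon_entropy(feedback) / math.log(len(feedback), 2)
# Illustrative combined score: lower energy use and lower disorder -> higher BI.
bi_score = 1 - 0.5 * (energy_use + disorder)
print(f"real-time BI score: {bi_score:.2f}")
```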
Abstract:
Reports that heat processing of foods induces the formation of acrylamide heightened interest in the chemistry, biochemistry, and safety of this compound. Acrylamide-induced neurotoxicity, reproductive toxicity, genotoxicity, and carcinogenicity are potential human health risks based on animal studies. Because exposure of humans to acrylamide can come from both external sources and the diet, there exists a need to develop a better understanding of its formation and distribution in food and its role in human health. To contribute to this effort, experts from eight countries have presented data on the chemistry, analysis, metabolism, pharmacology, and toxicology of acrylamide. Specifically covered are the following aspects: exposure from the environment and the diet; biomarkers of exposure; risk assessment; epidemiology; mechanism of formation in food; biological alkylation of amino acids, peptides, proteins, and DNA by acrylamide and its epoxide metabolite glycidamide; neurotoxicity, reproductive toxicity, and carcinogenicity; protection against adverse effects; and possible approaches to reducing levels in food. Cross-fertilization of ideas among several disciplines in which an interest in acrylamide has developed, including food science, pharmacology, toxicology, and medicine, will provide a better understanding of the chemistry and biology of acrylamide in food, and can lead to the development of food processes to decrease the acrylamide content of the diet.
Abstract:
The ability of four operational weather forecast models [ECMWF, Action de Recherche Petite Echelle Grande Echelle model (ARPEGE), Regional Atmospheric Climate Model (RACMO), and Met Office] to generate a cloud at the right location and time (the cloud frequency of occurrence) is assessed in the present paper using a two-year time series of observations collected by profiling ground-based active remote sensors (cloud radar and lidar) located at three different sites in western Europe (Cabauw, Netherlands; Chilbolton, United Kingdom; and Palaiseau, France). Particular attention is given to potential biases that may arise from instrumentation differences (especially sensitivity) from one site to another and from intermittent sampling. In a second step, the statistical properties of the cloud variables involved in most advanced cloud schemes of numerical weather forecast models (ice water content and cloud fraction) are characterized and compared with their counterparts in the models. The two years of observations are first considered as a whole in order to evaluate the accuracy of the statistical representation of the cloud variables in each model. It is shown that all models tend to produce too many high-level clouds, with too-high cloud fraction and ice water content. The midlevel and low-level cloud occurrence is also generally overestimated, with too-low cloud fraction but a correct ice water content. The dataset is then divided into seasons to evaluate the potential of the models to generate different cloud situations in response to different large-scale forcings. Strong variations in cloud occurrence are found in the observations from one season to the same season the following year, as well as in the seasonal cycle. Overall, the model biases observed using the whole dataset are still found at the seasonal scale, but the models generally manage to reproduce the observed seasonal variations in cloud occurrence well. Overall, models do not generate the same cloud fraction distributions, and these distributions do not agree with the observations. Another general conclusion is that the use of continuous ground-based radar and lidar observations is a powerful tool for evaluating model cloud schemes and for a responsive assessment of the benefit achieved by changing or tuning a model cloud scheme.
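A sketch of the basic evaluation statistic involved, assuming cloud frequency of occurrence is computed per height level from observed and modelled cloud masks sampled on the same time-height grid; the arrays below are random stand-ins for radar/lidar cloud masks and model output:

```python
import numpy as np

rng = np.random.default_rng(0)
# Boolean cloud masks (time x height), observed vs. modelled on the same grid.
obs_cloud = rng.random((1000, 40)) < 0.2
mod_cloud = rng.random((1000, 40)) < 0.3

# Cloud frequency of occurrence per height level.
f_obs = obs_cloud.mean(axis=0)
f_mod = mod_cloud.mean(axis=0)

bias = f_mod - f_obs   # > 0: the model produces cloud too often at that level
print("largest overestimate of occurrence:", round(bias.max(), 3))
```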
Abstract:
A generic model of exergy assessment is proposed for the environmental impact of the building lifecycle, with a special focus on the natural environment. Three environmental impacts (energy consumption, resource consumption and pollutant discharge) are analyzed with reference to energy-embodied exergy, resource chemical exergy and abatement exergy, respectively. The generic model thus formulated contains two sub-models, one addressing building energy utilization and the other building materials use. Combined with theories by ecologists such as Odum, the paper evaluates a building's environmental sustainability through its exergy footprint and environmental impacts. A case study from Chongqing, China illustrates the application of this method. From the case study, it was found that energy consumption constitutes 70–80% of the total environmental impact during a 50-year building lifecycle, within which the operation phase accounts for 80% of the total environmental impact, the building material production phase for 15%, and the other phases for 5%.
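A sketch of the aggregation implied by the two sub-models, assuming each lifecycle phase contributes energy-embodied exergy, resource chemical exergy, and abatement exergy that are summed and expressed as phase shares; the magnitudes below are invented, not the Chongqing case data:

```python
# Per-phase (energy-embodied, resource chemical, abatement) exergy in GJ,
# all values hypothetical.
phases = {
    "material production": (1200, 900, 150),
    "construction":        (200,  100, 30),
    "operation (50 yr)":   (9000, 300, 700),
    "demolition/disposal": (150,  50,  40),
}

total = sum(sum(v) for v in phases.values())
for phase, v in phases.items():
    print(f"{phase:22s} {sum(v) / total:6.1%} of lifecycle exergy impact")
```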
Abstract:
Three simple climate models (SCMs) are calibrated using simulations from atmosphere-ocean general circulation models (AOGCMs). In addition to two conventional SCMs, results from a third, simpler model developed specifically for this study are obtained. An easy-to-implement and comprehensive iterative procedure is applied that optimises the SCM emulation of global-mean surface temperature and total ocean heat content and, if available in the SCM, of surface temperature over land, over the ocean and in both hemispheres, and of the global-mean ocean temperature profile. The method gives best-fit estimates as well as uncertainty intervals for the different SCM parameters. For the calibration, AOGCM simulations with two different types of forcing scenarios are used: pulse forcing simulations performed with two AOGCMs, and gradually changing forcing simulations from 15 AOGCMs obtained within the framework of the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. The method is found to work well. For all possible combinations of SCMs and AOGCMs, the emulation of AOGCM results could be improved. The obtained SCM parameters depend both on the AOGCM data and on the type of forcing scenario. SCMs with a poor representation of the atmosphere's thermal inertia are better able to emulate AOGCM results from gradually changing forcing than from pulse forcing simulations. Correct simultaneous emulation of both atmospheric temperatures and the ocean temperature profile by the SCMs strongly depends on the representation of the temperature gradient between the atmosphere and the mixed layer. Introducing climate sensitivities that depend on the forcing mechanism allows the SCMs to emulate AOGCM responses to carbon dioxide and solar insolation forcings equally well. Some SCM parameters are found to be very insensitive to the fitting, and the reduction of their uncertainty through the fitting procedure is only marginal, while others change considerably. The very simple SCM is found to reproduce the AOGCM results as well as the other two, more sophisticated, SCMs.
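A sketch of the calibration idea, assuming a one-box energy-balance SCM, C dT/dt = F(t) - lambda T, fitted to AOGCM global-mean surface temperature by least squares; the "AOGCM" series here is synthetic, and real calibrations fit several fields and both forcing-scenario types:

```python
import numpy as np
from scipy.optimize import least_squares

dt = 1.0                                   # years
years = np.arange(240)
forcing = 3.7 * np.log2(1 + years / 140)   # gradually increasing, CO2-like (illustrative)

def simulate(params):
    lam, C = params   # feedback (W m-2 K-1), effective heat capacity (W yr m-2 K-1)
    T = np.zeros_like(forcing)
    for i in range(1, len(T)):
        T[i] = T[i-1] + dt * (forcing[i-1] - lam * T[i-1]) / C
    return T

# Pseudo-AOGCM data: the same model plus noise stands in for GCM output.
rng = np.random.default_rng(0)
aogcm_T = simulate([1.3, 8.0]) + rng.normal(0, 0.1, len(years))

# Iteratively optimise the SCM parameters to emulate the "AOGCM" temperatures.
fit = least_squares(lambda p: simulate(p) - aogcm_T, x0=[1.0, 5.0],
                    bounds=([0.1, 1.0], [3.0, 50.0]))
print("best-fit lambda, C:", np.round(fit.x, 2))
```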
Abstract:
SCIENTIFIC SUMMARY

Globally averaged total column ozone has declined over recent decades due to the release of ozone-depleting substances (ODSs) into the atmosphere. Now, as a result of the Montreal Protocol, ozone is expected to recover from the effects of ODSs as ODS abundances decline in the coming decades. However, a number of factors in addition to ODSs have led to and will continue to lead to changes in ozone. Discriminating between the causes of past and projected ozone changes is necessary, not only to identify the progress in ozone recovery from ODSs, but also to evaluate the effectiveness of climate and ozone protection policy options.

Factors Affecting Future Ozone and Surface Ultraviolet Radiation
• At least for the next few decades, the decline of ODSs is expected to be the major factor affecting the anticipated increase in global total column ozone. However, several factors other than ODSs will affect the future evolution of ozone in the stratosphere. These include changes in (i) stratospheric circulation and temperature due to changes in long-lived greenhouse gas (GHG) abundances, (ii) stratospheric aerosol loading, and (iii) source gases of highly reactive stratospheric hydrogen and nitrogen compounds. Factors that amplify the effects of ODSs on ozone (e.g., stratospheric aerosols) will likely decline in importance as ODSs are gradually eliminated from the atmosphere.
• Increases in GHG emissions can both positively and negatively affect ozone. Carbon dioxide (CO2)-induced stratospheric cooling elevates middle and upper stratospheric ozone and decreases the time taken for ozone to return to 1980 levels, while projected GHG-induced increases in tropical upwelling decrease ozone in the tropical lower stratosphere and increase ozone in the extratropics. Increases in nitrous oxide (N2O) and methane (CH4) concentrations also directly impact ozone chemistry, but the effects are different in different regions.
• The Brewer-Dobson circulation (BDC) is projected to strengthen over the 21st century and thereby affect ozone amounts. Climate models consistently predict an acceleration of the BDC or, more specifically, of the upwelling mass flux in the tropical lower stratosphere of around 2% per decade as a consequence of GHG abundance increases. A stronger BDC would decrease the abundance of tropical lower stratospheric ozone, increase poleward transport of ozone, and could reduce the atmospheric lifetimes of long-lived ODSs and other trace gases. While simulations showing faster ascent in the tropical lower stratosphere to date are a robust feature of chemistry-climate models (CCMs), this has not been confirmed by observations and the responsible mechanisms remain unclear.
• Substantial ozone losses could occur if stratospheric aerosol loading were to increase in the next few decades, while halogen levels are high. Stratospheric aerosol increases may be caused by sulfur contained in volcanic plumes entering the stratosphere or from human activities. The latter might include attempts to geoengineer the climate system by enhancing the stratospheric aerosol layer. The ozone losses mostly result from enhanced heterogeneous chemistry on stratospheric aerosols. Enhanced aerosol heating within the stratosphere also leads to changes in temperature and circulation that affect ozone.
• Surface ultraviolet (UV) levels will not be affected solely by ozone changes but also by the effects of climate change and by air quality change in the troposphere. These tropospheric effects include changes in clouds, tropospheric aerosols, surface reflectivity, and tropospheric sulfur dioxide (SO2) and nitrogen dioxide (NO2). The uncertainties in projections of these factors are large. Projected increases in tropospheric ozone are more certain and may lead to reductions in surface erythemal (“sunburning”) irradiance of up to 10% by 2100. Changes in clouds may lead to decreases or increases in surface erythemal irradiance of up to 15% depending on latitude.

Expected Future Changes in Ozone
Full ozone recovery from the effects of ODSs and return of ozone to historical levels are not synonymous. In this chapter a key target date is chosen to be 1980, in part to retain the connection to previous Ozone Assessments. Noting, however, that decreases in ozone may have occurred in some regions of the atmosphere prior to 1980, 1960 return dates are also reported. The projections reported on in this chapter are taken from a recent compilation of CCM simulations. The ozone projections, which also form the basis for the UV projections, are limited in their representativeness of possible futures since they mostly come from CCM simulations based on a single GHG emissions scenario (scenario A1B of Emissions Scenarios: A Special Report of Working Group III of the Intergovernmental Panel on Climate Change, Cambridge University Press, 2000) and a single ODS emissions scenario (adjusted A1 of the previous (2006) Ozone Assessment). Throughout this century, the vertical, latitudinal, and seasonal structure of the ozone distribution will be different from what it was in 1980. For this reason, ozone changes in different regions of the atmosphere are considered separately.
• The projections of changes in ozone and surface clear-sky UV are broadly consistent with those reported on in the 2006 Assessment.
• The capability of making projections and attribution of future ozone changes has been improved since the 2006 Assessment. Use of CCM simulations from an increased number of models extending through the entire period of ozone depletion and recovery from ODSs (1960–2100), as well as sensitivity simulations, has allowed more robust projections of long-term changes in the stratosphere and of the relative contributions of ODSs and GHGs to those changes.
• Global annually averaged total column ozone is projected to return to 1980 levels before the middle of the century and earlier than when stratospheric halogen loading returns to 1980 levels. CCM projections suggest that this early return is primarily a result of GHG-induced cooling of the upper stratosphere because the effects of circulation changes on tropical and extratropical ozone largely cancel. Global (90°S–90°N) annually averaged total column ozone will likely return to 1980 levels between 2025 and 2040, well before the return of stratospheric halogens to 1980 levels between 2045 and 2060.
• Simulated changes in tropical total column ozone from 1960 to 2100 are generally small. The evolution of tropical total column ozone in models depends on the balance between upper stratospheric increases and lower stratospheric decreases. The upper stratospheric increases result from declining ODSs and a slowing of ozone destruction resulting from GHG-induced cooling. Ozone decreases in the lower stratosphere mainly result from an increase in tropical upwelling. From 1960 until around 2000, a general decline is simulated, followed by a gradual increase to values typical of 1980 by midcentury. Thereafter, although total column ozone amounts decline slightly again toward the end of the century, by 2080 they are no longer expected to be affected by ODSs. Confidence in tropical ozone projections is compromised by the fact that simulated decreases in column ozone to date are not supported by observations, suggesting that significant uncertainties remain.
• Midlatitude total column ozone is simulated to evolve differently in the two hemispheres. Over northern midlatitudes, annually averaged total column ozone is projected to return to 1980 values between 2015 and 2030, while for southern midlatitudes the return to 1980 values is projected to occur between 2030 and 2040. The more rapid return to 1980 values in northern midlatitudes is linked to a more pronounced strengthening of the poleward transport of ozone due to the effects of increased GHG levels, and to effects of Antarctic ozone depletion on southern midlatitudes. By 2100, midlatitude total column ozone is projected to be above 1980 values in both hemispheres.
• October-mean Antarctic total column ozone is projected to return to 1980 levels after midcentury, later than in any other region, and yet earlier than when stratospheric halogen loading is projected to return to 1980 levels. The slightly earlier return of ozone to 1980 levels (2045–2060) results primarily from upper stratospheric cooling and resultant increases in ozone. The return of polar halogen loading to 1980 levels (2050–2070) in CCMs is earlier than in empirical models that exclude the effects of GHG-induced changes in circulation. Our confidence in the drivers of changes in Antarctic ozone is higher than for other regions because (i) ODSs exert a strong influence on Antarctic ozone, (ii) the effects of changes in GHG abundances are comparatively small, and (iii) projections of ODS emissions are more certain than those for GHGs. Small Antarctic ozone holes (areas of ozone <220 Dobson units, DU) could persist to the end of the 21st century.
• March-mean Arctic total column ozone is projected to return to 1980 levels two to three decades before polar halogen loading returns to 1980 levels, and to exceed 1980 levels thereafter. While CCM simulations project a return to 1980 levels between 2020 and 2035, most models tend not to capture observed low temperatures and thus underestimate present-day Arctic ozone loss, such that it is possible that this return date is biased early. Since the strengthening of the Brewer-Dobson circulation through the 21st century leads to increases in springtime Arctic column ozone, by 2100 Arctic ozone is projected to lie well above 1960 levels.

Uncertainties in Projections
• Conclusions dependent on future GHG levels are less certain than those dependent on future ODS levels, since ODS emissions are controlled by the Montreal Protocol. For the six GHG scenarios considered by a few CCMs, the simulated differences in stratospheric column ozone over the second half of the 21st century are largest in the northern midlatitudes and the Arctic, with maximum differences of 20–40 DU between the six scenarios in 2100.
• There remain sources of uncertainty in the CCM simulations. These include the use of prescribed ODS mixing ratios instead of emission fluxes as lower boundary conditions, the range of sea surface temperatures and sea ice concentrations, missing tropospheric chemistry, model parameterizations, and model climate sensitivity.
• Geoengineering schemes for mitigating climate change by continuous injections of sulfur-containing compounds into the stratosphere, if implemented, would substantially affect stratospheric ozone, particularly in polar regions. Ozone losses observed following large volcanic eruptions support this prediction. However, sporadic volcanic eruptions provide limited analogs to the effects of continuous sulfur emissions. Preliminary model simulations reveal large uncertainties in assessing the effects of continuous sulfur injections.

Expected Future Changes in Surface UV
While a number of factors, in addition to ozone, affect surface UV irradiance, the focus in this chapter is on the effects of changes in stratospheric ozone on surface UV. For this reason, clear-sky surface UV irradiance is calculated from ozone projections from CCMs.
• Projected increases in midlatitude ozone abundances during the 21st century, in the absence of changes in other factors, in particular clouds, tropospheric aerosols, and air pollutants, will result in decreases in surface UV irradiance. Clear-sky erythemal irradiance is projected to return to 1980 levels on average in 2025 for the northern midlatitudes, and in 2035 for the southern midlatitudes, and to fall well below 1980 values by the second half of the century. However, actual changes in surface UV will be affected by a number of factors other than ozone.
• In the absence of changes in other factors, changes in tropical surface UV will be small because changes in tropical total column ozone are projected to be small. By the middle of the 21st century, the model projections suggest surface UV to be slightly higher than in the 1960s, very close to values in 1980, and slightly lower than in 2000. The projected decrease in tropical total column ozone through the latter half of the century will likely result in clear-sky surface UV remaining above 1960 levels. Average UV irradiance is already high in the tropics due to naturally occurring low total ozone columns and high solar elevations.
• The magnitude of UV changes in the polar regions is larger than elsewhere because ozone changes in polar regions are larger. For the next decades, surface clear-sky UV irradiance, particularly in the Antarctic, will continue to be higher than in 1980. Future increases in ozone and decreases in clear-sky UV will occur at slower rates than those associated with the ozone decreases and UV increases that occurred before 2000. In Antarctica, surface clear-sky UV is projected to return to 1980 levels between 2040 and 2060, while in the Arctic this is projected to occur between 2020 and 2030. By 2100, October surface clear-sky erythemal irradiance in Antarctica is likely to be between 5% below to 25% above 1960 levels, with considerable uncertainty. This is consistent with multi-model-mean October Antarctic total column ozone not returning to 1960 levels by 2100. In contrast, by 2100, surface clear-sky UV in the Arctic is projected to be 0–10% below 1960 levels.
Abstract:
This paper presents recent developments to a vision-based traffic surveillance system which relies extensively on the use of geometrical and scene context. Firstly, a highly parametrised 3-D model is reported, able to adopt the shape of a wide variety of different classes of vehicle (e.g., cars, vans, buses), and its subsequent specialisation to a generic car class which accounts for commonly encountered types of car (including saloon, hatchback and estate cars). Sample data collected from video images, by means of an interactive tool, have been subjected to principal component analysis (PCA) to define a deformable model having 6 degrees of freedom. Secondly, a new pose refinement technique using “active” models is described, able to recover both the pose of a rigid object and the structure of a deformable model; an assessment of its performance is examined in comparison with previously reported “passive” model-based techniques in the context of traffic surveillance. The new method is more stable and requires fewer iterations, especially when the number of free parameters increases, but shows somewhat poorer convergence. Typical applications for this work include robot surveillance and navigation tasks.
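A sketch of the deformable-model construction, assuming PCA of shape-parameter vectors collected for many sample vehicles, with the top six principal components retained as the model's 6 degrees of freedom; the training data below are random stand-ins for the interactively gathered samples:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(size=(50, 30))   # 50 vehicles x 30 shape parameters (stand-in data)

# PCA via SVD of the mean-centred sample matrix.
mean = samples.mean(axis=0)
U, s, Vt = np.linalg.svd(samples - mean, full_matrices=False)
basis = Vt[:6]                        # 6 dominant deformation modes

def shape(b):
    """Instantiate a vehicle shape from 6 deformation parameters b."""
    return mean + b @ basis

print("variance captured by 6 modes:", round((s[:6]**2).sum() / (s**2).sum(), 3))
```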
Abstract:
The IntFOLD-TS method was developed according to the guiding principle that model quality assessment would be the most critical stage of our template-based modelling pipeline. Thus, the IntFOLD-TS method firstly generates numerous alternative models, using in-house versions of several different sequence-structure alignment methods, which are then ranked in terms of global quality using our top performing quality assessment method, ModFOLDclust2. In addition to the predicted global quality scores, predictions of local errors are also provided in the resulting coordinate files, using scores that represent the predicted deviation of each residue in the model from the equivalent residue in the native structure. The IntFOLD-TS method was found to generate high quality 3D models for many of the CASP9 targets, whilst also providing highly accurate predictions of their per-residue errors. This important information may help to make the 3D models produced by the IntFOLD-TS method more useful for guiding future experimental work.
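A sketch of one concrete detail of such output, assuming per-residue predicted deviations (in Angstroms) are written into the B-factor column of the model's PDB coordinate file; the column positions follow the standard PDB format, while the input line and error values are hypothetical:

```python
def annotate_pdb(lines, per_residue_error):
    """Write predicted per-residue errors into the B-factor column (cols 61-66)."""
    out = []
    for line in lines:
        if line.startswith(("ATOM", "HETATM")):
            resnum = int(line[22:26])                    # residue sequence number
            err = per_residue_error.get(resnum, 99.9)    # 99.9 = no prediction
            line = line[:60] + f"{err:6.2f}" + line[66:]
        out.append(line)
    return out

pdb = ["ATOM      1  CA  ALA A   1      11.104   6.134  -6.504  1.00  0.00           C"]
print(annotate_pdb(pdb, {1: 2.35})[0])
```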
Abstract:
Current measures used to estimate the risks of toxic chemicals are not relevant to the goals of the environmental protection process, and thus ecological risk assessment (ERA) is not used as extensively as it should be as a basis for cost-effective management of environmental resources. Appropriate population models can provide a powerful basis for expressing ecological risks that better inform the environmental management process and thus that are more likely to be used by managers. Here we provide at least five reasons why population modeling should play an important role in bridging the gap between what we measure and what we want to protect. We then describe six actions needed for its implementation into management-relevant ERA.