56 results for growth-survival trade-off
Abstract:
Drosophila melanogaster larvae defend themselves against parasitoid attack via the process of encapsulation. However, flies that successfully defend themselves have reduced fitness as adults. Adults which carry an encapsulated parasitoid egg are smaller, and females produce significantly fewer eggs than controls. Capsule-bearing males allowed repeated copulations with females showed no reduction in their number of offspring, but those allowed to copulate only once did. No differences were found in time to first oviposition in females, or in time to first copulation in males. We interpret the results as arising from a trade-off between investing resources in factors promoting fecundity and mating success, and in defence against parasitism. The outcome of this investment decision influences the strength of selection for defence against parasitism.
Abstract:
The interplay between coevolutionary and population or community dynamics is currently the focus of much empirical and theoretical consideration. Here, we develop a simulation model to study the coevolutionary and population dynamics of a hypothetical host-parasitoid interaction. In the model, host resistance and parasitoid virulence are allowed to coevolve. We investigate how trade-offs associated with these traits modify the system's coevolutionary and population dynamics. The most important influence on these dynamics comes from the incorporation of density-dependent costs of resistance ability. We find three main outcomes. First, if the costs of resistance are high, then one or both of the players go extinct. Second, when the costs of resistance are intermediate to low, cycling population and coevolutionary dynamics are found, with slower evolutionary changes observed when the costs of virulence are also low. Third, when the costs associated with resistance and virulence are both high, the hosts trade-off resistance against fecundity and invest little in resistance. However, the parasitoids continue to invest in virulence, leading to stable host and parasitoid population sizes. These results support the hypothesis that costs associated with resistance and virulence will maintain the heritable variation in these traits found in natural populations and that the nature of these trade-offs will greatly influence the population dynamics of the interacting species.
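A minimal toy sketch of the kind of dynamics described (not the published model; the functional forms, parameter values and the gradient-climbing stand-in for selection are all illustrative assumptions):

```python
import numpy as np

# Toy sketch only: hosts evolve a resistance trait r, parasitoids a virulence
# trait v, and resistance carries a density-dependent fecundity cost.
def p_encap(r, v):
    """Probability that an attacked host encapsulates the parasitoid egg."""
    return 1.0 / (1.0 + np.exp(-8.0 * (r - v)))

def host_fitness(r, v, H, P, a=0.02, K=1000.0, R0=3.0, cost_r=0.4):
    attacked = 1.0 - np.exp(-a * P)                   # chance of being attacked
    survive = 1.0 - attacked * (1.0 - p_encap(r, v))  # escape attack or encapsulate
    fec = max(R0 * (1.0 - cost_r * r * H / K), 0.0)   # density-dependent cost of resistance
    return survive * fec * np.exp(-H / K)             # crowding term keeps host numbers bounded

def parasitoid_fitness(r, v, H, P, a=0.02, cost_v=0.1):
    attacked = 1.0 - np.exp(-a * P)
    hosts_per_parasitoid = H / max(P, 1e-9)
    return hosts_per_parasitoid * attacked * (1.0 - p_encap(r, v)) * (1.0 - cost_v * v)

H, P, r, v = 400.0, 50.0, 0.2, 0.2
lr, eps = 0.02, 1e-3
for gen in range(300):
    H, P = max(H * host_fitness(r, v, H, P), 1e-6), max(P * parasitoid_fitness(r, v, H, P), 1e-6)
    # let each trait climb its own fitness gradient, a crude stand-in for selection
    dr = (host_fitness(r + eps, v, H, P) - host_fitness(r - eps, v, H, P)) / (2 * eps)
    dv = (parasitoid_fitness(r, v + eps, H, P) - parasitoid_fitness(r, v - eps, H, P)) / (2 * eps)
    r = float(np.clip(r + lr * dr, 0.0, 1.0))
    v = float(np.clip(v + lr * dv, 0.0, 1.0))

print(f"H = {H:.1f}, P = {P:.1f}, resistance r = {r:.2f}, virulence v = {v:.2f}")
```

Raising cost_r in such a sketch mimics the high-cost regime of the abstract, in which hosts invest little in resistance while parasitoid and host numbers stabilise.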
Abstract:
Typically, the relationship between insect development and temperature is described by two characteristics: the minimum temperature needed for development to occur (T-min) and the number of day degrees required (DDR) for the completion of development. We investigated these characteristics in three English populations of Thrips major and T. tabaci [Cawood, Yorkshire (N53°49', W1°7'); Boxworth, Cambridgeshire (N52°15', W0°1'); Silwood Park, Berkshire (N51°24', W0°38')], and two populations of Frankliniella occidentalis (Cawood; Silwood Park). While there were no significant differences among populations in either T-min (mean for T. major = 7.0°C; T. tabaci = 5.9°C; F. occidentalis = 6.7°C) or DDR (mean for T. major = 229.9; T. tabaci = 260.8; F. occidentalis = 233.4), there were significant differences in the relationship between temperature and body size, suggesting the presence of geographic variation in this trait. Using published data, in addition to those newly collected, we found a negative relationship between T-min and DDR for F. occidentalis and T. tabaci, supporting the hypothesis that a trade-off between T-min and DDR may constrain adaptation to local climatic conditions.
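The two characteristics combine in the standard linear day-degree model: each day at temperature T above T-min contributes (T - T-min) day-degrees, and development finishes once the accumulated total reaches DDR, so at constant temperature the development time is DDR / (T - T-min). A small sketch, using the population means quoted above purely as illustrative inputs:

```python
def development_days(temp_c, t_min, ddr):
    """Days to complete development under the linear day-degree model."""
    if temp_c <= t_min:
        return float("inf")  # no development below the threshold temperature
    return ddr / (temp_c - t_min)

# Population means from the abstract, used here only as example inputs
print(development_days(20.0, 7.0, 229.9))   # Thrips major at 20 °C: ~17.7 days
print(development_days(20.0, 5.9, 260.8))   # Thrips tabaci at 20 °C: ~18.5 days
```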
Abstract:
There is growing concern all over the world about reducing greenhouse gas emissions. In the Post Copenhagen Report on Climate Change, the U.K. recently set targets of a 34% reduction in emissions by 2020 and an 80% reduction by 2050, compared with 1990 levels. In practice, Life Cycle Cost (LCC) and Life Cycle Assessment (LCA) tools have been introduced to the construction industry in order to help achieve this. However, there is a clear disconnection between costs and environmental impacts over the life cycle of a built asset when using these two tools. Besides, changes in Information and Communication Technologies (ICTs) have changed the way information is represented; in particular, information is fed more easily and distributed more quickly to different stakeholders through tools such as Building Information Modelling (BIM), with little consideration given to incorporating LCC and LCA and maximising their usage within the BIM environment. The aim of this paper is to propose the development of a model-based LCC and LCA tool that supports sustainable building design decisions for clients, architects and quantity surveyors, so that an optimal investment decision can be made by studying the trade-off between costs and environmental impacts. An application framework is also proposed as future work, showing how the proposed model can be incorporated into the BIM environment in practice.
Abstract:
It is well known that when assets are randomly selected and combined in equal proportions in a portfolio, the risk of the portfolio declines as the number of different assets increases, without affecting returns. In other words, increasing portfolio size should improve the risk/return trade-off compared with a portfolio of asset size one. Therefore, diversifying among several property funds may be a better alternative for investors than holding only one property fund. Nonetheless, it is also well known that with naïve diversification, although risk always decreases with portfolio size, it does so at a decreasing rate, so that at some point the reduction in portfolio risk from adding another fund becomes negligible. Based on this fact, a reasonable question to ask is how much diversification is enough, or in other words, how many property funds should be included in a portfolio to minimise return volatility.
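The diminishing benefit of naïve diversification follows from the textbook decomposition of the variance of an equally weighted portfolio of n assets, Var_p(n) = avg_var / n + (1 - 1/n) * avg_cov, which tends to the average pairwise covariance as n grows. A short sketch with purely illustrative numbers:

```python
# Variance of an equally weighted portfolio of n assets, given the average
# asset variance and average pairwise covariance (illustrative values only).
avg_var, avg_cov = 0.04, 0.01

def portfolio_variance(n, avg_var=avg_var, avg_cov=avg_cov):
    return avg_var / n + (1.0 - 1.0 / n) * avg_cov

for n in (1, 2, 5, 10, 20, 50):
    print(n, round(portfolio_variance(n), 5))
# Output falls from 0.04 towards the floor of avg_cov = 0.01, with most of the
# reduction achieved by the first handful of funds: the marginal benefit of
# adding yet another fund quickly becomes negligible.
```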
Abstract:
Traditionally, the measure of risk used in portfolio optimisation models is the variance. However, alternative measures of risk have many theoretical and practical advantages, and it is therefore peculiar that they are not used more frequently. This may be because of the difficulty in deciding which measure of risk is best, and any attempt to compare different risk measures may be a futile exercise until a common risk measure can be identified. To overcome this, another approach is considered: comparing the portfolio holdings produced by different risk measures, rather than the risk-return trade-off. In this way we can see whether the risk measures used produce asset allocations that are essentially the same or very different. The results indicate that the portfolio compositions produced by different risk measures vary quite markedly from measure to measure. These findings have a practical consequence for the investor or fund manager because they suggest that the choice of model depends very much on the individual's attitude to risk rather than any theoretical and/or practical advantages of one model over another.
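A hypothetical sketch of the comparison idea (the simulated returns, long-only constraint and the two risk measures are illustrative assumptions, not the study's data or models): optimise the same asset universe under variance and under mean absolute deviation, then compare the resulting holdings directly.

```python
import numpy as np
from scipy.optimize import minimize

# Simulated daily returns for four hypothetical assets (illustrative data only)
rng = np.random.default_rng(42)
returns = rng.normal(0.0005, 0.01, size=(500, 4))
cov = np.cov(returns, rowvar=False)

def min_risk_weights(risk_fn, n_assets):
    """Long-only, fully invested portfolio that minimises the given risk measure."""
    constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
    bounds = [(0.0, 1.0)] * n_assets
    w0 = np.full(n_assets, 1.0 / n_assets)
    return minimize(risk_fn, w0, method="SLSQP", bounds=bounds, constraints=constraints).x

def variance(w):
    return float(w @ cov @ w)

def mean_absolute_deviation(w):
    port = returns @ w
    return float(np.mean(np.abs(port - port.mean())))

w_var = min_risk_weights(variance, returns.shape[1])
w_mad = min_risk_weights(mean_absolute_deviation, returns.shape[1])
print("min-variance weights:", np.round(w_var, 3))
print("min-MAD weights:     ", np.round(w_mad, 3))
# If the two allocations differ materially, the choice of risk measure matters
# to the investor before any return forecasts enter the problem.
```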
Abstract:
Several methods for assessing the sustainability of agricultural systems have been developed. These methods do not fully: (i) take into account the multi-functionality of agriculture; (ii) include multidimensionality; (iii) utilize and implement the assessment knowledge; and (iv) identify conflicting goals and trade-offs. This paper reviews seven recently developed multidisciplinary indicator-based assessment methods with respect to their contribution to these shortcomings. All approaches include (1) normative aspects such as goal setting, (2) systemic aspects such as a specification of the scale of analysis, and (3) a reproducible structure of the approach. The approaches can be categorized into three typologies. The top-down farm assessments focus on field or farm assessment. They have a clear procedure for measuring the indicators and assessing the sustainability of the system, which allows for benchmarking across farms. The degree of participation is low, potentially affecting the implementation of the results negatively. The top-down regional assessments assess both on-farm and regional effects. They include some participation to increase acceptance of the results; however, they miss the analysis of potential trade-offs. The bottom-up, integrated participatory or transdisciplinary approaches focus on a regional scale. Stakeholders are included throughout the whole process, assuring the acceptance of the results and increasing the probability of implementation of the developed measures. As they include the interaction between the indicators in their system representation, they allow for performing a trade-off analysis. The bottom-up, integrated participatory or transdisciplinary approaches therefore seem to better overcome the four shortcomings mentioned above.
Abstract:
This study focuses on the wealth-protective effects of socially responsible firm behavior by examining the association between corporate social performance (CSP) and financial risk for an extensive panel data sample of S&P 500 companies between the years 1992 and 2009. In addition, the link between CSP and investor utility is investigated. The main findings are that corporate social responsibility is negatively but weakly related to systematic firm risk and that corporate social irresponsibility is positively and strongly related to financial risk. The fact that both conventional and downside risk measures lead to the same conclusions adds convergent validity to the analysis. However, the risk-return trade-off appears to be such that no clear utility gain or loss can be realized by investing in firms characterized by different levels of social and environmental performance. Overall volatility conditions of the financial markets are shown to play a moderating role in the nature and strength of the CSP-risk relationship.
Abstract:
Controllers for feedback substitution schemes demonstrate a trade-off between noise power gain and normalized response time. Using as an example the design of a controller for a radiometric transduction process subjected to arbitrary noise power gain and robustness constraints, a Pareto-front of optimal controller solutions fulfilling a range of time-domain design objectives can be derived. In this work, we consider designs using a loop shaping design procedure (LSDP). The approach uses linear matrix inequalities to specify a range of objectives and a genetic algorithm (GA) to perform a multi-objective optimization for the controller weights (MOGA). A clonal selection algorithm is used to further provide a directed search of the GA towards the Pareto front. We demonstrate that with the proposed methodology, it is possible to design higher order controllers with superior performance in terms of response time, noise power gain and robustness.
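A generic illustration of the multi-objective selection step (not the paper's LSDP/LMI/MOGA implementation): given candidate controllers scored on the two objectives to be minimised, keep only the non-dominated set, i.e. the Pareto front along which noise power gain is traded against response time.

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated points of a set of objective vectors (minimisation)."""
    points = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(points):
        dominated = any(
            np.all(q <= p) and np.any(q < p)   # q is at least as good everywhere, better somewhere
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            keep.append(i)
    return points[keep]

rng = np.random.default_rng(1)
candidates = rng.uniform(0.0, 1.0, size=(50, 2))  # hypothetical (noise gain, response time) scores
print(pareto_front(candidates))
```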
Abstract:
Visual telepresence systems which utilize virtual reality style helmet mounted displays have a number of limitations. The geometry of the camera positions and of the display is fixed and is most suitable only for viewing elements of a scene at a particular distance. In such a system, the operator's ability to gaze around without use of head movement is severely limited. A trade-off must be made between a poor viewing resolution or a narrow width of viewing field. To address these limitations, a prototype system in which the geometry of the displays and cameras is dynamically controlled by the eye movement of the operator has been developed. This paper explores the reasons why it is necessary to actively adjust both the display system and the cameras, and furthermore justifies the use of mechanical adjustment of the displays as an alternative to adjustment by electronic or image processing methods. The electronic and mechanical design is described, including optical arrangements and control algorithms. An assessment of the performance of the system against a fixed camera/display system is presented for operators assigned basic tasks involving depth and distance/size perception. The sensitivity to variations in transient performance of the display and camera vergence is also assessed.
Abstract:
Purpose – This paper examines the role of location-specific (L) advantages in the spatial distribution of multinational enterprise (MNE) R&D activity. The meaning of L advantages is revisited. In addition to L advantages that are industry-specific, the paper emphasises that there is an important category of L advantages, referred to as collocation advantages. Design/methodology/approach – Using the OLI framework, this paper highlights that the innovation activities of MNEs are about the interaction of these variables, and the essential process of internalising L advantages to enhance and create firm-specific advantages. Findings – Collocation advantages derive from spatial proximity to specific unaffiliated firms, which may be suppliers, competitors, or customers. It is also argued that L advantages are not always public goods, because they may not be available to all firms at a similar or marginal cost. These costs are associated with access to and internalisation of L advantages, and – especially in the case of R&D – are attendant with the complexities of embeddedness. Originality/value – The centralisation/decentralisation and spatial separation/collocation debates in R&D location have been mistakenly viewed as a paradox facing firms, instead of as a trade-off that firms must make.
Abstract:
CO, O3, and H2O data in the upper troposphere/lower stratosphere (UTLS) measured by the Atmospheric Chemistry Experiment Fourier Transform Spectrometer (ACE-FTS) on Canada’s SCISAT-1 satellite are validated using aircraft and ozonesonde measurements. In the UTLS, validation of chemical trace gas measurements is a challenging task due to small-scale variability in the tracer fields, strong gradients of the tracers across the tropopause, and scarcity of measurements suitable for validation purposes. Validation based on coincidences therefore suffers from geophysical noise. Two alternative methods for the validation of satellite data are introduced, which avoid the usual need for coincident measurements: tracer-tracer correlations, and vertical tracer profiles relative to tropopause height. Both are increasingly being used for model validation as they strongly suppress geophysical variability and thereby provide an “instantaneous climatology”. This allows comparison of measurements between non-coincident data sets, which yields information about the precision and a statistically meaningful error assessment of the ACE-FTS satellite data in the UTLS. By defining a trade-off factor, we show that the measurement errors can be reduced by including more measurements obtained over a wider longitude range into the comparison, despite the increased geophysical variability. Applying the methods then yields the following upper bounds to the relative differences in the mean found between the ACE-FTS and SPURT aircraft measurements in the upper troposphere (UT) and lower stratosphere (LS), respectively: for CO ±9% and ±12%, for H2O ±30% and ±18%, and for O3 ±25% and ±19%. The relative differences for O3 can be narrowed down by using a larger dataset obtained from ozonesondes, yielding a high bias in the ACE-FTS measurements of 18% in the UT and relative differences of ±8% for measurements in the LS. When taking into account the smearing effect of the vertically limited spacing between measurements of the ACE-FTS instrument, the relative differences decrease by 5–15% around the tropopause, suggesting a vertical resolution of the ACE-FTS in the UTLS of around 1 km. The ACE-FTS hence offers unprecedented precision and vertical resolution for a satellite instrument, which will allow a new global perspective on UTLS tracer distributions.
Abstract:
Variational data assimilation in continuous time is revisited. The central techniques applied in this paper are in part adopted from the theory of optimal nonlinear control. Alternatively, the investigated approach can be considered as a continuous time generalization of what is known as weakly constrained four-dimensional variational assimilation (4D-Var) in the geosciences. The technique makes it possible to assimilate trajectories in the case of partial observations and in the presence of model error. Several mathematical aspects of the approach are studied. Computationally, it amounts to solving a two-point boundary value problem. For imperfect models, the trade-off between small dynamical error (i.e. the trajectory obeys the model dynamics) and small observational error (i.e. the trajectory closely follows the observations) is investigated. This trade-off turns out to be trivial if the model is perfect. However, even in this situation, allowing for minute deviations from the perfect model is shown to have positive effects, namely to regularize the problem. The presented formalism is dynamical in character. No statistical assumptions on dynamical or observational noise are imposed.
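In generic notation (illustrative, not necessarily the paper's own), the weak-constraint trade-off can be written as a single cost functional penalising both the dynamical and the observational misfit of a candidate trajectory x(t):

```latex
% x(t): trajectory, f: model dynamics, y_k: observations at times t_k,
% h: observation operator; Q and R weight dynamical and observational error.
J[x] = \frac{1}{2}\int_{t_0}^{t_1} \bigl\| \dot{x}(t) - f\bigl(x(t)\bigr) \bigr\|_{Q^{-1}}^{2}\,dt
     + \frac{1}{2}\sum_{k} \bigl\| y_k - h\bigl(x(t_k)\bigr) \bigr\|_{R^{-1}}^{2}
```

Letting the dynamical weight grow without bound forces the trajectory onto the model dynamics exactly (the strong-constraint, perfect-model case), which is why the trade-off becomes trivial for a perfect model.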
Abstract:
Data assimilation refers to the problem of finding trajectories of a prescribed dynamical model in such a way that the output of the model (usually some function of the model states) follows a given time series of observations. Typically, though, these two requirements cannot both be met at the same time: tracking the observations is not possible without the trajectory deviating from the proposed model equations, while adherence to the model requires deviations from the observations. Thus, data assimilation faces a trade-off. In this contribution, the sensitivity of the data assimilation with respect to perturbations in the observations is identified as the parameter which controls the trade-off. A relation between the sensitivity and the out-of-sample error is established, which allows the latter to be calculated under operational conditions. A minimum out-of-sample error is proposed as a criterion to set an appropriate sensitivity and to settle the discussed trade-off. Two approaches to data assimilation are considered, namely variational data assimilation and Newtonian nudging, also known as synchronization. Numerical examples demonstrate the feasibility of the approach.
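Newtonian nudging makes the role of the sensitivity explicit: a coupling term pulls the model state towards the observations, and the strength of that coupling sets the position on the trade-off (generic form, notation illustrative):

```latex
% f: model dynamics, y(t): observations, h: observation operator,
% C: coupling (nudging) matrix controlling the sensitivity to the observations.
\dot{x}(t) = f\bigl(x(t)\bigr) + C\,\bigl(y(t) - h\bigl(x(t)\bigr)\bigr)
```

With C = 0 the trajectory obeys the model exactly; increasing C tracks the observations more closely at the price of larger deviations from the model dynamics.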
Abstract:
Emissions of exhaust gases and particles from oceangoing ships are a significant and growing contributor to the total emissions from the transportation sector. We present an assessment of the contribution of gaseous and particulate emissions from oceangoing shipping to anthropogenic emissions and air quality. We also assess the degradation in human health and climate change created by these emissions. Regulating ship emissions requires comprehensive knowledge of current fuel consumption and emissions, understanding of their impact on atmospheric composition and climate, and projections of potential future evolutions and mitigation options. Nearly 70% of ship emissions occur within 400 km of coastlines, causing air quality problems through the formation of ground-level ozone, sulphur emissions and particulate matter in coastal areas and harbours with heavy traffic. Furthermore, ozone and aerosol precursor emissions as well as their derivative species from ships may be transported in the atmosphere over several hundreds of kilometres, and thus contribute to air quality problems further inland, even though they are emitted at sea. In addition, ship emissions impact climate. Recent studies indicate that the cooling due to altered clouds far outweighs the warming effects from greenhouse gases such as carbon dioxide (CO2) or ozone from shipping, overall causing a negative present-day radiative forcing (RF). Current efforts to reduce sulphur and other pollutants from shipping may modify this. However, given the short residence time of sulphate compared to CO2, the climate response from sulphate is of the order of decades, while that of CO2 is of the order of centuries. The climatic trade-off between positive and negative radiative forcing is still a topic of scientific research, but from what is currently known, a simple cancellation of global mean forcing components is potentially inappropriate and a more comprehensive assessment metric is required. The CO2-equivalent emissions using the global temperature change potential (GTP) metric indicate that after 50 years the net global mean effect of current emissions is close to zero through cancellation of warming by CO2 and cooling by sulphate and nitrogen oxides.
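The GTP-based comparison rests on the standard metric definition (generic form, not specific to this study): the CO2-equivalent emission of species i over time horizon H is the actual emission scaled by the ratio of temperature responses per unit emission,

```latex
% AGTP_i(H): absolute global temperature change potential of species i,
% i.e. the global mean temperature change at time H per unit emission.
E_{\mathrm{CO_2\text{-}eq},\,i}(H) = E_i \cdot \mathrm{GTP}_i(H)
  = E_i \cdot \frac{\mathrm{AGTP}_i(H)}{\mathrm{AGTP}_{\mathrm{CO_2}}(H)}
```

Under this metric the warming terms (CO2, ozone) and the cooling terms (sulphate, nitrogen-oxide-driven effects) of current shipping emissions approximately cancel at the 50-year horizon, as stated above.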