885 results for Evolutionary constraints
Abstract:
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were roughly half as long (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous–Paleogene (K–Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes. Keywords: haldanes, biological time, scaling, pedomorphosis
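The clade-maximum-rate idea can be sketched numerically. What follows is a minimal illustration, not the authors' actual metric (which operates over fossil time series): assuming the rate is measured as the absolute change in log10 body mass per generation, the function below scans a set of ancestor–descendant comparisons and returns the largest such rate. The input numbers echo the 5,000-fold / 10-million-generation figure from the abstract.

```python
import math

def clade_maximum_rate(comparisons):
    """Return the largest per-generation rate of body-mass change
    across a set of ancestor-descendant comparisons.

    Each comparison is (mass_start_g, mass_end_g, elapsed_generations);
    the rate is the absolute change in log10 mass per generation.
    """
    return max(abs(math.log10(m1 / m0)) / gens
               for m0, m1, gens in comparisons)

# Hypothetical example: a mouse-sized ancestor (~20 g) reaching a
# 5,000-fold larger descendant over 10 million generations, per the
# minimum figure quoted in the abstract.
rate = clade_maximum_rate([(20.0, 20.0 * 5000, 10e6)])
print(f"{rate:.3e} log10-mass units per generation")
```

Against such a baseline, the abstract's asymmetry claim would mean dwarfing lineages can show per-generation rates more than an order of magnitude larger.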
Abstract:
Evolutionary meta-algorithms for pulse shaping of broadband femtosecond-duration laser pulses are proposed. The genetic algorithm searching the evolutionary landscape for desired pulse shapes consists of a population of waveforms (genes), each made from two concatenated vectors specifying phases and magnitudes, respectively, over a range of frequencies. Frequency-domain operators such as mutation, two-point crossover, average crossover, polynomial phase mutation, creep, and three-point smoothing, as well as a time-domain crossover, are combined to produce fitter offspring at each iteration step. The algorithm applies roulette wheel selection, elitism, and linear fitness scaling to the gene population. A differential evolution (DE) operator that provides a source of directed mutation and new wavelet operators are proposed. Using properly tuned parameters for DE, the meta-algorithm is used to solve a waveform-matching problem. Tuning allows either a greedy directed search near the best known solution or a robust search across the entire parameter space.
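The operators named above can be illustrated with a stripped-down genetic algorithm. This is a hedged sketch, not the proposed meta-algorithm: it implements only roulette wheel selection, elitism, two-point crossover, and a Gaussian creep-style mutation, applied to a toy waveform-matching problem on a short real-valued vector rather than concatenated phase/magnitude spectra.

```python
import random

random.seed(0)  # reproducible toy run

def roulette_select(population, fitnesses):
    """Pick one gene with probability proportional to its (positive) fitness."""
    r = random.uniform(0.0, sum(fitnesses))
    acc = 0.0
    for gene, fit in zip(population, fitnesses):
        acc += fit
        if acc >= r:
            return gene
    return population[-1]

def two_point_crossover(a, b):
    """Swap the segment between two random cut points."""
    i, j = sorted(random.sample(range(len(a)), 2))
    return a[:i] + b[i:j] + a[j:], b[:i] + a[i:j] + b[j:]

def mutate(gene, rate=0.05, scale=0.1):
    """Creep-style mutation: small Gaussian perturbations of a few entries."""
    return [g + random.gauss(0.0, scale) if random.random() < rate else g
            for g in gene]

# Toy waveform-matching problem: evolve a vector toward a target shape.
target = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]

def fitness(gene):
    return 1.0 / (1.0 + sum((g - t) ** 2 for g, t in zip(gene, target)))

pop = [[random.uniform(-1, 1) for _ in target] for _ in range(40)]
for _ in range(200):
    fits = [fitness(g) for g in pop]
    children = [max(pop, key=fitness)]      # elitism: carry best gene unchanged
    while len(children) < len(pop):
        c1, c2 = two_point_crossover(roulette_select(pop, fits),
                                     roulette_select(pop, fits))
        children += [mutate(c1), mutate(c2)]
    pop = children[:len(pop)]

best = max(pop, key=fitness)
```

The DE operator and the frequency/time-domain distinction are omitted here; in the paper's setting each gene would hold a phase vector and a magnitude vector, and "waveform matching" would be scored against a desired pulse shape.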
Abstract:
Healthcare information systems have the potential to enhance productivity, lower costs, and reduce medication errors by automating business processes. However, various issues such as system complexity and system capabilities in relation to user requirements, as well as rapid changes in business needs, have an impact on the use of these systems. In many cases, the failure of a system to meet business process needs has pushed users to develop alternative work processes (workarounds) to fill this gap. Some research has been undertaken on why users are motivated to perform and create workarounds. However, very little research has assessed the consequences for patient safety. Moreover, the impact of performing these workarounds on the organisation, and how to quantify their risks and benefits, is not well analysed. Generally, there is a lack of rigorous understanding and of qualitative and quantitative studies on healthcare IS workarounds and their outcomes. This project applies A Normative Approach for Modelling Workarounds to develop A Model of Motivation, Constraints, and Consequences. It aims to understand the phenomenon in depth and provide guidelines to organisations on how to deal with workarounds. Finally, the method is demonstrated on a case study example and its relative merits discussed.
Abstract:
Parameterization schemes for the drag due to atmospheric gravity waves are discussed and compared in the context of a simple one-dimensional model of the quasi-biennial oscillation (QBO). A number of fundamental issues are examined in detail, with the goal of providing a better understanding of the mechanism by which gravity wave drag can produce an equatorial zonal wind oscillation. The gravity wave–driven QBOs are compared with those obtained from a parameterization of equatorial planetary waves. In all gravity wave cases, it is seen that the inclusion of vertical diffusion is crucial for the descent of the shear zones and the development of the QBO. An important difference between the schemes for the two types of waves is that in the case of equatorial planetary waves, vertical diffusion is needed only at the lowest levels, while for the gravity wave drag schemes it must be included at all levels. The question of whether there is downward propagation of influence in the simulated QBOs is addressed. In the gravity wave drag schemes, the evolution of the wind at a given level depends on the wind above, as well as on the wind below. This is in contrast to the parameterization for the equatorial planetary waves in which there is downward propagation of phase only. The stability of a zero-wind initial state is examined, and it is determined that a small perturbation to such a state will amplify with time to the extent that a zonal wind oscillation is permitted.
Abstract:
This study examines the effect of combining equatorial planetary wave drag and gravity wave drag in a one-dimensional zonal mean model of the quasi-biennial oscillation (QBO). Several different combinations of planetary wave and gravity wave drag schemes are considered in the investigations, with the aim of assessing which aspects of the different schemes affect the nature of the modeled QBO. Results show that it is possible to generate a realistic-looking QBO with various combinations of drag from the two types of waves, but there are some constraints on the wave input spectra and amplitudes. For example, if the phase speeds of the gravity waves in the input spectrum are large relative to those of the equatorial planetary waves, critical level absorption of the equatorial planetary waves may occur. The resulting mean-wind oscillation, in that case, is driven almost exclusively by the gravity wave drag, with only a small contribution from the planetary waves at low levels. With an appropriate choice of wave input parameters, it is possible to obtain a QBO with a realistic period and to which both types of waves contribute. This is the regime in which the terrestrial QBO appears to reside. There may also be constraints on the initial strength of the wind shear, and these are similar to the constraints that apply when gravity wave drag is used without any planetary wave drag. In recent years, it has been observed that, in order to simulate the QBO accurately, general circulation models require parameterized gravity wave drag in addition to the drag from resolved planetary-scale waves, and that even if the planetary wave amplitudes are incorrect, the gravity wave drag can be adjusted to compensate. This study provides a basis for understanding why such compensation is possible.
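The critical-level absorption constraint mentioned above is easy to illustrate. In this hypothetical sketch (the wind profile and phase speeds are invented, not taken from the paper), a wave is treated as absorbed if the background wind matches its phase speed anywhere in the column, i.e. wherever u − c changes sign; slow planetary-wave-like phase speeds are filtered out while faster gravity-wave-like speeds pass through.

```python
import numpy as np

# Idealized zonal-mean wind profile (m/s) on a vertical grid (km).
z = np.linspace(17, 35, 100)                  # stratospheric levels, km
u = 15.0 * np.sin(2 * np.pi * (z - 17) / 18)  # winds between about -15 and +15 m/s

def transmitted(phase_speeds, u):
    """Return the phase speeds that reach the top of the profile.

    A wave is absorbed at a critical level, i.e. the first height where
    the background wind equals its phase speed (u - c changes sign).
    """
    out = []
    for c in phase_speeds:
        diff = u - c
        if np.all(diff < 0) or np.all(diff > 0):
            out.append(c)   # no critical level: wave propagates through
    return out

# Slow (planetary-wave-like) speeds |c| <= 15 m/s hit critical levels;
# faster (gravity-wave-like) speeds pass through and can drive the flow aloft.
surviving = transmitted([-25, -10, 10, 25], u)
```

With this profile only c = −25 and c = +25 m/s survive, which is the mechanism behind the regime in which the oscillation is driven almost exclusively by gravity wave drag.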
Abstract:
A theory of available potential energy (APE) for symmetric circulations, which includes momentum constraints, is presented. The theory is a generalization of the classical theory of APE, which includes only thermal constraints on the circulation. Physically, centrifugal potential energy is included along with gravitational potential energy. The generalization relies on the Hamiltonian structure of the conservative dynamics, although (as with classical APE) it still defines the energetics in a nonconservative framework. It follows that the theory is exact at finite amplitude, has a local form, and can be applied to a variety of fluid models. It is applied here to the f-plane Boussinesq equations. It is shown that, by including momentum constraints, the APE of a symmetrically stable flow is zero, while the energetics of a mechanically driven symmetric circulation properly reflect its causality.
Abstract:
We study two-dimensional (2D) turbulence in a doubly periodic domain driven by a monoscale-like forcing and damped by various dissipation mechanisms of the form ν_μ(−Δ)^μ. By “monoscale-like” we mean that the forcing is applied over a finite range of wavenumbers k_min ≤ k ≤ k_max, and that the ratio of enstrophy injection η ≥ 0 to energy injection ε ≥ 0 is bounded by k_min² ε ≤ η ≤ k_max² ε. Such a forcing is frequently considered in theoretical and numerical studies of 2D turbulence. It is shown that for μ ≥ 0 the asymptotic behaviour satisfies ‖u‖₁² ≤ k_max² ‖u‖², where ‖u‖² and ‖u‖₁² are the energy and enstrophy, respectively. If the condition of monoscale-like forcing holds only in a time-mean sense, then the inequality holds in the time mean. It is also shown that for Navier–Stokes turbulence (μ = 1), the time-mean enstrophy dissipation rate is bounded from above by 2ν₁k_max². These results place strong constraints on the spectral distribution of energy and enstrophy and of their dissipation, and thereby on the existence of energy and enstrophy cascades, in such systems. In particular, the classical dual cascade picture is shown to be invalid for forced 2D Navier–Stokes turbulence (μ = 1) when it is forced in this manner. Inclusion of Ekman drag (μ = 0) along with molecular viscosity permits a dual cascade, but is incompatible with the log-modified −3 power law for the energy spectrum in the enstrophy-cascading inertial range. In order to achieve the latter, it is necessary to invoke an inverse viscosity (μ < 0). These constraints on permissible power laws apply for any spectrally localized forcing, not just for monoscale-like forcing.
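For any field whose spectral support is confined to the forcing band, the squeeze k_min² ‖u‖² ≤ ‖u‖₁² ≤ k_max² ‖u‖² holds trivially, since enstrophy weights each mode by an extra factor of k² that lies between k_min² and k_max². The paper's substantive claim is that the forced asymptotic state obeys the upper bound even though the solution itself is not band-limited. The sketch below only verifies the trivial band-limited version numerically, as a sanity check on the notation; the grid size and band edges are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random 2D streamfunction with Fourier support confined to
# kmin <= |k| <= kmax, mimicking a monoscale-like forced field.
n, kmin, kmax = 64, 4, 8
k = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers
kxx, kyy = np.meshgrid(k, k, indexing="ij")
k2 = kxx**2 + kyy**2
band = (k2 >= kmin**2) & (k2 <= kmax**2)

psi_hat = np.where(band,
                   rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)),
                   0.0)

# Up to a common normalization: energy ~ sum k^2 |psi_k|^2,
# enstrophy ~ sum k^4 |psi_k|^2.
energy = np.sum(k2 * np.abs(psi_hat) ** 2)
enstrophy = np.sum(k2**2 * np.abs(psi_hat) ** 2)

assert kmin**2 * energy <= enstrophy <= kmax**2 * energy
```

The same modal bookkeeping is what makes the cascade constraints bite: any steady spectrum must distribute energy and enstrophy so that their dissipation-weighted sums respect these band-anchored bounds.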
Abstract:
A series of coupled atmosphere–ocean–ice aquaplanet experiments is described in which topological constraints on ocean circulation are introduced to study the role of ocean circulation on the mean climate of the coupled system. It is imagined that the earth is completely covered by an ocean of uniform depth except for the presence or absence of narrow barriers that extend from the bottom of the ocean to the sea surface. The following four configurations are described: Aqua (no land), Ridge (one barrier extends from pole to pole), Drake (one barrier extends from the North Pole to 35°S), and DDrake (two such barriers are set 90° apart and join at the North Pole, separating the ocean into a large basin and a small basin, connected to the south). On moving from Aqua to Ridge to Drake to DDrake, the energy transports in the equilibrium solutions become increasingly “realistic,” culminating in DDrake, which has an uncanny resemblance to the present climate. Remarkably, the zonal-average climates of Drake and DDrake are strikingly similar, exhibiting almost identical heat and freshwater transports, and meridional overturning circulations. However, Drake and DDrake differ dramatically in their regional climates. The small and large basins of DDrake exhibit distinctive Atlantic-like and Pacific-like characteristics, respectively: the small basin is warmer, saltier, and denser at the surface than the large basin, and is the main site of deep water formation with a deep overturning circulation and strong northward ocean heat transport. A sensitivity experiment with DDrake demonstrates that the salinity contrast between the two basins, and hence the localization of deep convection, results from a deficit of precipitation, rather than an excess of evaporation, over the small basin. It is argued that the width of the small basin relative to the zonal fetch of atmospheric precipitation is the key to understanding this salinity contrast. 
Finally, it is argued that many gross features of the present climate are consequences of two topological asymmetries that have profound effects on ocean circulation: a meridional asymmetry (circumpolar flow in the Southern Hemisphere; blocked flow in the Northern Hemisphere) and a zonal asymmetry (a small basin and a large basin).
Abstract:
Studying the pathogenesis of an infectious disease like colibacillosis requires an understanding of the responses of target hosts to the organism both as a pathogen and as a commensal. The mucosal immune system constitutes the primary line of defence against luminal micro-organisms. The immunoglobulin-superfamily-based adaptive immune system evolved in the earliest jawed vertebrates, and the adaptive and innate immune systems of humans, mice, pigs and ruminants co-evolved in common ancestors for approximately 300 million years. The divergence occurred only about 100 million years ago and, as a consequence, most of the fundamental immunological mechanisms are very similar. However, since pressure on the immune system comes from rapidly evolving pathogens, immune systems must also evolve rapidly to maintain the ability of the host to survive and reproduce. There are therefore a number of areas of detail where mammalian immune systems have diverged markedly from each other, such that results obtained in one species are not always immediately transferable to another. Thus, animal models of specific diseases need to be selected carefully, and the results interpreted with caution. Selection is made simpler where specific host species like cattle and pigs can be both target species and reservoirs for human disease, as in infections with Escherichia coli.
Abstract:
Cross-layer techniques represent efficient means to enhance throughput and increase the transmission reliability of wireless communication systems. In this paper, a cross-layer design of aggressive adaptive modulation and coding (A-AMC), truncated automatic repeat request (T-ARQ), and user scheduling is proposed for multiuser multiple-input multiple-output (MIMO) maximal ratio combining (MRC) systems, where the impacts of feedback delay (FD) and limited feedback (LF) on channel state information (CSI) are also considered. The A-AMC and T-ARQ mechanism selects the appropriate modulation and coding schemes (MCSs) to achieve higher spectral efficiency while satisfying the service requirement on the packet loss rate (PLR), profiting from the possibility of using different MCSs to retransmit a packet. Each packet is destined to a scheduled user, selected so as to exploit multiuser diversity and enhance the system's performance in terms of both transmission efficiency and fairness. The system's performance is evaluated in terms of the average PLR, average spectral efficiency (ASE), outage probability, and average packet delay, which are derived in closed form, considering transmissions over Rayleigh-fading channels. Numerical results and comparisons are provided and show that A-AMC combined with T-ARQ yields higher spectral efficiency than the conventional scheme based on adaptive modulation and coding (AMC), while keeping the achieved PLR closer to the system's requirement and reducing delay. Furthermore, the effects of the number of ARQ retransmissions, the numbers of transmit and receive antennas, the normalized FD, and the cardinality of the beamforming weight vector codebook are studied and discussed.
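The A-AMC idea, choosing a more aggressive MCS than the channel strictly supports and letting T-ARQ retransmissions absorb the extra packet losses, can be caricatured in a few lines. The thresholds and efficiencies below are invented for illustration and do not come from the paper; real systems derive them from the PLR target and link-level curves.

```python
# Hypothetical SNR thresholds (dB) and spectral efficiencies (b/s/Hz)
# for each modulation-and-coding scheme, ordered from slowest to fastest.
MCS_TABLE = [
    ("BPSK 1/2",   2.0, 0.5),
    ("QPSK 1/2",   5.0, 1.0),
    ("QPSK 3/4",   8.0, 1.5),
    ("16QAM 1/2", 11.0, 2.0),
    ("16QAM 3/4", 14.0, 3.0),
    ("64QAM 3/4", 18.0, 4.5),
]

def select_mcs(snr_db, aggressiveness=0):
    """Pick the fastest MCS whose SNR threshold is met, then move
    `aggressiveness` steps further up the table -- the A-AMC idea of
    overshooting the conventional AMC choice, relying on truncated ARQ
    (possibly with a different MCS per retransmission) to recover losses.
    """
    idx = 0
    for i, (_, threshold, _) in enumerate(MCS_TABLE):
        if snr_db >= threshold:
            idx = i
    return MCS_TABLE[min(idx + aggressiveness, len(MCS_TABLE) - 1)]

conventional = select_mcs(12.0)                    # classic AMC choice
aggressive = select_mcs(12.0, aggressiveness=1)    # one notch faster
```

In the scheduled multiuser setting, this selection would run on the (possibly delayed, quantized) CSI of the user chosen by the scheduler, which is where the FD and LF impairments studied in the paper enter.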
Abstract:
The Code for Sustainable Homes (the Code) will require new homes in the United Kingdom to be ‘zero carbon’ from 2016. Drawing upon an evolutionary innovation perspective, this paper contributes to a gap in the literature by investigating which low and zero carbon technologies are actually being used by house builders, rather than the prevailing emphasis on the potentiality of these technologies. Using the results from a questionnaire, three empirical contributions are made. First, house builders are selecting a narrow range of technologies. Second, these choices are made to minimise the disruption to their standard design and production templates (SDPTs). Finally, the coalescence around a small group of technologies is expected to intensify, with solar-based technologies predicted to become more important. This paper challenges the dominant technical rationality in the literature that technical efficiency and cost benefits are the primary drivers for technology selection. These drivers play an important role, but one which is mediated by the logic of maintaining the SDPTs of the house builders. This emphasises the need for construction diffusion of innovation theory to be problematized and developed within the context of business and market regimes constrained and reproduced by resilient technological trajectories.
Abstract:
Lexical compounds in English are constrained in that the non-head noun can be an irregular but not a regular plural (e.g. mice eater vs. *rats eater), a contrast that has been argued to derive from a morphological constraint on modifiers inside compounds. In addition, bare nouns are preferred over plural forms inside compounds (e.g. mouse eater vs. mice eater), a contrast that has been ascribed to the semantics of compounds. Measuring eye movements during reading, this study examined how morphological and semantic information become available over time during the processing of a compound. We found that the morphological constraint affected both early and late eye-movement measures, whereas the semantic constraint for singular non-heads only affected late measures of processing. These results indicate that morphological information becomes available earlier than semantic information during the processing of compounds.
Abstract:
The avoidance of regular but not irregular plurals inside compounds (e.g. *rats eater vs. mice eater) has been one of the most widely studied morphological phenomena in the psycholinguistics literature. To examine whether the constraints that are responsible for this contrast have any general significance beyond compounding, we investigated derived word forms containing regular and irregular plurals in two experiments. Experiment 1 was an offline acceptability judgment task, and Experiment 2 measured eye movements during the reading of derived words containing regular and irregular plurals and uninflected base nouns. The results from both experiments show that the constraint against regular plurals inside compounds generalizes to derived words. We argue that this constraint cannot be reduced to phonological properties, but is instead morphological in nature. The eye-movement data provide detailed information on the time course of processing derived word forms, indicating that early stages of processing are affected by a general constraint that disallows inflected words from feeding derivational processes, and that the more specific constraint against regular plurals comes in at a later stage of processing. We argue that these results are consistent with stage-based models of language processing.