218 results for switching models
Abstract:
Division of labor in social insects is a key determinant of their ecological success. Recent models emphasize that division of labor is an emergent property of the interactions among nestmates obeying simple behavioral rules. However, the role of evolution in shaping these rules has been largely neglected. Here, we investigate a model that integrates the perspectives of self-organization and evolution. Our point of departure is the response threshold model, where we allow thresholds to evolve. We ask whether the thresholds will evolve to a state where division of labor emerges in a form that fits the needs of the colony. We find that division of labor can indeed evolve through the evolutionary branching of thresholds, leading to workers that differ in their tendency to take on a given task. However, the conditions under which division of labor evolves depend on the strength of selection on the two fitness components considered: the amount of work performed and the distribution of workers over tasks. When selection is strongest on the amount of work performed, division of labor evolves if switching tasks is costly. When selection is strongest on worker distribution, division of labor is less likely to evolve. Furthermore, we show that a biased distribution (like 3:1) of workers over tasks is not easily achievable by a threshold mechanism, even under strong selection. Contrary to expectation, multiple matings of colony foundresses impede the evolution of specialization. Overall, our model sheds light on the importance of considering the interaction between specific mechanisms and ecological requirements to better understand the evolutionary scenarios that lead to division of labor in complex systems. Electronic supplementary material: the online version of this article (doi:10.1007/s00265-012-1343-2) contains supplementary material, which is available to authorized users.
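For context, the fixed response threshold model that serves as the point of departure here is conventionally written as follows (this is the standard Bonabeau-style formulation, not a detail spelled out in the abstract): a worker with threshold θ engages in a task emitting stimulus s with probability

```latex
P(s) = \frac{s^n}{s^n + \theta^n}
```

where the steepness n is often set to 2. Evolutionary branching of θ then yields coexisting low-threshold workers (quick to respond, i.e. specialists for that task) and high-threshold workers (slow to respond), which is the form of division of labor discussed above.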
Abstract:
Recent advances in remote sensing technologies have facilitated the generation of very high resolution (VHR) environmental data. Exploratory studies suggested that, if used in species distribution models (SDMs), these data should enable modelling species' micro-habitats and improve predictions for fine-scale biodiversity management. In the present study, we tested the influence, in SDMs, of predictors derived from a VHR digital elevation model (DEM) by comparing the predictive power of models for 239 plant species and their assemblages fitted at six different resolutions in the Swiss Alps. We also tested whether the change in model quality for a species is related to its functional and ecological characteristics. Refining the resolution contributed only a slight improvement of the models for more than half of the examined species, with the best results obtained at 5 m, but no significant improvement was observed, on average, across all species. Contrary to our expectations, we could not consistently correlate the changes in model performance with species characteristics such as vegetation height. Temperature, the most important variable in the SDMs across the different resolutions, did not contribute any substantial improvement. Our results suggest that improving the resolution of topographic data alone is not sufficient to improve SDM predictions - and therefore local management - compared to previously used resolutions (here 25 and 100 m). More effort should now be dedicated to finer-scale in-situ environmental measurements (e.g. of temperature, moisture, snow) to obtain improved predictors for fine-scale species mapping and management.
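A minimal sketch of the resolution comparison described above, under assumed inputs (the abstract does not name the SDM algorithm; logistic regression stands in for it here, and the data are synthetic placeholders):

```python
# Illustrative only: generic presence/absence SDM evaluated at several
# DEM resolutions; 'predictors' maps resolution (m) to a feature matrix.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_sites = 500
y = rng.integers(0, 2, n_sites)                    # presence/absence per site
predictors = {res: rng.normal(size=(n_sites, 4))   # stand-in for DEM-derived
              for res in (5, 10, 25, 50, 100)}     # variables at each resolution

for res, X in predictors.items():
    auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                          cv=5, scoring="roc_auc").mean()
    print(f"{res:>4} m resolution: cross-validated AUC = {auc:.2f}")
```

Comparing a cross-validated skill score across resolutions, per species, is the kind of test that would reveal whether refinement beyond 25 m actually helps.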
Abstract:
Several methods and algorithms have recently been proposed that allow for the systematic evaluation of simple neuron models from intracellular or extracellular recordings. Models built in this way generate good quantitative predictions of the future activity of neurons under temporally structured current injection. It is, however, difficult to compare the advantages of various models and algorithms, since each model is designed for a different set of data. Here, we report on one of the first attempts to establish a benchmark test that permits a systematic comparison of methods and performances in predicting the activity of rat cortical pyramidal neurons. We present early submissions to the benchmark test and discuss implications for the design of future tests and simple neuron models.
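One metric commonly used in such spike-prediction benchmarks is the coincidence factor Γ (Kistler et al. 1997); the abstract does not state which measure this benchmark adopted, so the sketch below is illustrative:

```python
# Hedged sketch of the coincidence factor between recorded and predicted
# spike trains; the +/- 2 ms window is a conventional, assumed choice.
import numpy as np

def gamma_factor(t_data, t_model, T, delta=2e-3):
    """Coincidence factor between recorded (t_data) and predicted (t_model)
    spike times (seconds) over a recording of duration T (seconds)."""
    t_model = np.asarray(t_model)
    n_data, n_model = len(t_data), len(t_model)
    # data spikes with at least one model spike within +/- delta
    n_coinc = sum(np.any(np.abs(t_model - t) <= delta) for t in t_data)
    rate = n_model / T                    # model firing rate
    expected = 2 * rate * delta * n_data  # chance coincidences (Poisson model)
    norm = 1 - 2 * rate * delta           # caps Gamma at 1 for a perfect match
    return (n_coinc - expected) / (0.5 * (n_data + n_model) * norm)
```

A score near 1 means the model predicts spike times about as well as the neuron's own trial-to-trial reliability allows; near 0 means chance-level prediction.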
Abstract:
Hsp70s are conserved molecular chaperones that can prevent protein aggregation, actively unfold and solubilize aggregates, pull translocating proteins across membranes and remodel native protein complexes. Disparate mechanisms have been proposed for the various modes of Hsp70 action: passive prevention of aggregation by kinetic partitioning, peptide-bond isomerization, Brownian ratcheting or active power-stroke pulling. Recently, we put forward a unifying mechanism named 'entropic pulling', which proposed that Hsp70 uses the energy of ATP hydrolysis to recruit a force of entropic origin to locally unfold aggregates or pull proteins across membranes. The entropic pulling mechanism reproduces the expected phenomenology that inspired the other disparate mechanisms and is, moreover, simple.
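The 'force of entropic origin' can be made concrete with the general thermodynamic definition of an entropic force (a standard relation, not a formula quoted from the abstract):

```latex
F = T \, \frac{\partial S}{\partial x}
```

On this view, ATP-driven binding of the bulky Hsp70 next to a membrane pore or an aggregate surface restricts the configurational freedom of the chaperone-substrate complex; motion of the substrate away from the obstacle increases the entropy S, so a net pulling force arises without any conformational power stroke.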
Abstract:
Research projects aimed at proposing fingerprint statistical models based on the likelihood ratio framework have shown that low quality finger impressions left at crime scenes may have significant evidential value. These impressions are currently either not recovered, considered to be of no value when first analyzed by fingerprint examiners, or lead to inconclusive results when compared to control prints. There are growing concerns within the fingerprint community that recovering and examining these low quality impressions will result in a significant increase in the workload of fingerprint units and, ultimately, in the number of backlogged cases. This study was designed to measure the number of impressions currently not recovered or not considered for examination, and to assess the usefulness of these impressions in terms of the number of additional detections that would result from their examination.
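The likelihood ratio framework referred to above weighs the fingerprint evidence E under two competing propositions; in its standard forensic form:

```latex
LR = \frac{\Pr(E \mid H_p)}{\Pr(E \mid H_d)}
```

where H_p holds that the crime-scene mark and the control print come from the same source, and H_d that they come from different sources. Values well above 1 support H_p, which is how even a low quality impression can still carry significant evidential value.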
Abstract:
PURPOSE OF REVIEW: HIV targets primary CD4+ T cells. The virus depends on the physiological state of its target cells for efficient replication, and, in turn, viral infection perturbs the cellular state significantly. Identifying the virus-host interactions that drive these dynamic changes is important for a better understanding of viral pathogenesis and persistence. The present review focuses on experimental and computational approaches to study the dynamics of viral replication and latency. RECENT FINDINGS: It was recently shown that only a fraction of the inducible latently infected reservoir is successfully induced upon stimulation in ex-vivo models, while additional rounds of stimulation allow for the reactivation of more latently infected cells. This highlights the potential role of treatment duration and timing as important factors for successful reactivation of latently infected cells. The dynamics of HIV productive infection and latency have been investigated using transcriptome and proteome data. The cellular activation state has been shown to be a major determinant of viral reactivation success. Mathematical models of latency have been used to explore the dynamics of latent viral reservoir decay. SUMMARY: Timing is an important component of biological interactions. Temporal analyses covering aspects of the viral life cycle are essential for gathering a comprehensive picture of HIV's interaction with the host cell and untangling the complexity of latency. Understanding the dynamic changes that tip the balance between success and failure of HIV particle production might be key to eradicating the viral reservoir.
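As a minimal example of the kind of mathematical model used for latent reservoir decay (the abstract does not specify the model; first-order exponential decay is a common starting point):

```latex
N(t) = N(0)\, e^{-\lambda t}, \qquad t_{1/2} = \frac{\ln 2}{\lambda}
```

where N(t) is the number of latently infected cells at time t and λ the decay rate; fitting λ to longitudinal reservoir measurements is how decay half-lives are estimated.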
Abstract:
Regulatory gene networks contain generic modules, like those involving feedback loops, which are essential for the regulation of many biological functions (Guido et al. in Nature 439:856-860, 2006). We consider a class of self-regulated genes which are the building blocks of many regulatory gene networks, and study the steady-state distribution of the associated Gillespie algorithm by providing efficient numerical algorithms. We also study a regulatory gene network of interest in gene therapy, using mean-field models with time delays. Convergence of the related time-nonhomogeneous Markov chain is established for a class of linear catalytic networks with feedback loops.
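A minimal, self-contained sketch of the Gillespie algorithm for a self-regulated gene, assuming negative autoregulation and illustrative rate constants (the paper's efficient steady-state algorithms are not reproduced here):

```python
# Gillespie simulation of a gene repressing its own product n.
# Rate constants and the Hill-type repression are illustrative assumptions.
import numpy as np

def gillespie_selfregulated(k=10.0, K=20.0, h=2, gamma=0.1,
                            n0=0, t_end=500.0, seed=0):
    rng = np.random.default_rng(seed)
    t, n = 0.0, n0
    times, counts = [t], [n]
    while t < t_end:
        a_prod = k / (1 + (n / K) ** h)   # production, repressed by own product
        a_deg = gamma * n                 # first-order degradation
        a_tot = a_prod + a_deg
        t += rng.exponential(1 / a_tot)   # waiting time to next reaction
        n += 1 if rng.random() < a_prod / a_tot else -1
        times.append(t); counts.append(n)
    return np.array(times), np.array(counts)

# The steady-state distribution can be estimated from a long trajectory:
times, counts = gillespie_selfregulated()
```

A histogram of `counts` (time-weighted) approximates the steady-state distribution that the paper's numerical algorithms compute more efficiently.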
Abstract:
This study investigated behavioural and electro-cortical reorganizations accompanying intentional switching between two distinct bimanual coordination tapping modes (In-phase and Anti-phase) that differ in stability when produced at the same movement rate. We expected that switching to a less stable tapping mode (In-to-Anti switching) would lead to larger behavioural perturbations and require more neural resources than switching to a more stable tapping mode (Anti-to-In switching). Behavioural results confirmed that the In-to-Anti switching lasted longer than the Anti-to-In switching. A general increase in attention-related neural activity was found at the moment of switching in both conditions. Additionally, two condition-dependent EEG reorganizations were observed. First, a specific increase in cortico-cortical coherence appeared exclusively during the In-to-Anti switching. This result may reflect a strengthening of inter-regional communication in order to engage in the subsequent, less stable, tapping mode. Second, a decrease in motor-related neural activity (indexed by increased beta spectral power) was found for the Anti-to-In switching only. The latter effect may reflect the interruption of the previous, less stable, tapping mode. Given that previous results on spontaneous Anti-to-In switching revealed an inverse pattern of EEG reorganization (decreased beta spectral power), the present findings give new insight into the stability-dependent neural correlates of intentional motor switching.
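For illustration, cortico-cortical coherence of the kind reported above is often estimated as Welch-based magnitude-squared coherence; the sketch below uses synthetic signals and an assumed sampling rate, since the abstract gives no recording details:

```python
# Illustrative coherence between two EEG channels; all data are synthetic
# stand-ins and the estimator choice (Welch) is an assumption.
import numpy as np
from scipy.signal import coherence

fs = 256                                   # sampling rate (Hz), assumed
rng = np.random.default_rng(0)
eeg_a = rng.normal(size=10 * fs)           # stand-in for channel A
eeg_b = 0.5 * eeg_a + rng.normal(size=10 * fs)  # partially coupled channel B

f, Cxy = coherence(eeg_a, eeg_b, fs=fs, nperseg=fs)
beta = (f >= 13) & (f <= 30)               # beta band, as discussed above
print("mean beta-band coherence:", Cxy[beta].mean())
```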
Abstract:
It has been repeatedly debated which strategies people rely on in inference. These debates have been difficult to resolve, partially because hypotheses about the decision processes assumed by these strategies have typically been formulated only qualitatively, making it hard to test precise quantitative predictions about response times and other behavioral data. One way to increase the precision of these strategies is to implement them in cognitive architectures such as ACT-R. Often, however, a given strategy can be implemented in several ways, with each implementation yielding different behavioral predictions. We report a study with an experimental paradigm that can help to identify the correct implementations of classic compensatory and non-compensatory strategies, such as the take-the-best and tallying heuristics and the weighted-linear model.
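In their textbook forms (ACT-R implementations would differ in detail, which is precisely the point of the study), the two heuristics named above can be sketched as:

```python
# Textbook forms of the two heuristics; cue values are binary (1 = positive).
def take_the_best(cues_a, cues_b, validity_order):
    """Compare options on cues in descending validity; decide on the first
    cue that discriminates, ignoring all remaining cues."""
    for i in validity_order:
        if cues_a[i] != cues_b[i]:
            return "A" if cues_a[i] > cues_b[i] else "B"
    return "guess"

def tallying(cues_a, cues_b):
    """Count positive cues for each option, ignoring cue validities."""
    diff = sum(cues_a) - sum(cues_b)
    return "A" if diff > 0 else "B" if diff < 0 else "guess"

# Example: three binary cues, validity order [0, 1, 2]
print(take_the_best([1, 0, 1], [1, 1, 0], [0, 1, 2]))  # -> "B" (cue 1 decides)
print(tallying([1, 0, 1], [1, 1, 0]))                  # -> "guess" (2 vs 2)
```

Because take-the-best stops at the first discriminating cue while tallying inspects all cues, the two predict different response-time profiles, which is what makes implementation-level tests feasible.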
Abstract:
Depth-averaged velocities and unit discharges within a 30 km reach of one of the world's largest rivers, the Rio Parana, Argentina, were simulated using three hydrodynamic models with different process representations: a reduced complexity (RC) model that neglects most of the physics governing fluid flow, a two-dimensional model based on the shallow water equations, and a three-dimensional model based on the Reynolds-averaged Navier-Stokes equations. Flow characteristics simulated using all three models were compared with data obtained by acoustic Doppler current profiler surveys at four cross sections within the study reach. This analysis demonstrates that, surprisingly, the performance of the RC model is generally equal to, and in some instances better than, that of the physics-based models in terms of the statistical agreement between simulated and measured flow properties. In addition, in contrast to previous applications of RC models, the present study demonstrates that the RC model can successfully predict measured flow velocities. The strong performance of the RC model reflects, in part, the simplicity of the depth-averaged mean flow patterns within the study reach and the dominant role of channel-scale topographic features in controlling the flow dynamics. Moreover, the very low water surface slopes that typify large sand-bed rivers enable flow depths to be estimated reliably in the RC model using a simple fixed-lid planar water surface approximation. This approach overcomes a major problem encountered in the application of RC models in environments characterised by shallow flows and steep bed gradients. The RC model is four orders of magnitude faster than the physics-based models when performing steady-state hydrodynamic calculations. However, the iterative nature of the RC model calculations implies a reduction in computational efficiency relative to some other RC models. A further implication is that, if used to simulate channel morphodynamics, the present RC model may offer only a marginal advantage in computational efficiency over approaches based on the shallow water equations. These observations illustrate the trade-off between model realism and efficiency that is a key consideration in RC modelling. Moreover, this outcome highlights a need to rethink the use of RC morphodynamic models in fluvial geomorphology and to move away from existing grid-based approaches, such as the popular cellular automata (CA) models, that remain essentially reductionist in nature. In the case of the world's largest sand-bed rivers, this might be achieved by implementing the RC model outlined here as one element within a hierarchical modelling framework that would enable computationally efficient simulation of the morphodynamics of large rivers over millennial time scales.
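A hedged sketch of the fixed-lid planar water surface idea described above (the RC model's actual formulation is not given in the abstract; the conveyance weighting and all numbers below are assumptions common to reduced-complexity schemes):

```python
# Fixed-lid planar water surface approximation: depths come from a planar
# water surface minus bed elevation, and total discharge Q is split across
# a cross-section by Manning-type conveyance. Illustrative values only.
import numpy as np

def unit_discharges(bed_z, wse, Q, n_manning=0.03, dx=50.0):
    depth = np.maximum(wse - bed_z, 0.0)           # planar water surface
    conveyance = depth ** (5.0 / 3.0) / n_manning  # per-cell conveyance weight
    q = Q * conveyance / (conveyance.sum() * dx)   # unit discharge (m^2/s)
    return depth, q

bed_z = np.array([2.0, 1.0, 0.5, 1.2, 2.5])        # bed elevations (m) across section
depth, q = unit_discharges(bed_z, wse=3.0, Q=15000.0)
```

Because no momentum equations are solved, each cross-section costs only an array operation, which is where the four-orders-of-magnitude speed advantage over the physics-based models comes from.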