984 results for Long-term-memory
Abstract:
Background: The effects of landscape modifications on the long-term persistence of wild animal populations are of crucial importance to wildlife managers and conservation biologists, but obtaining experimental evidence using real landscapes is usually impossible. To circumvent this problem we used individual-based models (IBMs) of interacting animals in experimental modifications of a real Danish landscape. The models incorporate as much as possible of the behaviour and ecology of four species with contrasting life-history characteristics: skylark (Alauda arvensis), vole (Microtus agrestis), a ground beetle (Bembidion lampros) and a linyphiid spider (Erigone atra). This allows us to quantify the population implications of experimental modifications of landscape configuration and composition. Methodology/Principal Findings: Starting with a real agricultural landscape, we progressively reduced landscape complexity by (i) homogenizing habitat patch shapes, (ii) randomizing the locations of the patches, and (iii) randomizing the size of the patches. The first two steps increased landscape fragmentation. We assessed the effects of these manipulations on the long-term persistence of animal populations by measuring equilibrium population sizes and time to recovery after disturbance. Patch rearrangement and the presence of corridors had a large effect on the population dynamics of species whose local success depends on the surrounding terrain. Landscape modifications that reduced population sizes increased recovery times in the short-dispersing species, making small populations vulnerable to increasing disturbance. The species that were most strongly affected by large disturbances fluctuated little in population size in years when no perturbations took place.
Significance: Traditional approaches to the management and conservation of populations use either classical methods of population analysis, which fail to adequately account for the spatial configurations of landscapes, or landscape ecology, which accounts for landscape structure but has difficulty predicting the dynamics of populations living in them. Here we show how realistic and replicable individual-based models can bridge the gap between non-spatial population theory and non-dynamic landscape ecology. A major strength of the approach is its ability to identify population vulnerabilities not detected by standard population viability analyses.
Abstract:
This paper examines two hydrochemical time-series derived from stream samples taken in the Upper Hafren catchment, Plynlimon, Wales. One time-series comprises data collected at 7-hour intervals over 22 months (Neal et al., submitted, this issue), while the other is based on weekly sampling over 20 years. A subset of determinands (aluminium, calcium, chloride, conductivity, dissolved organic carbon, iron, nitrate, pH, silicon and sulphate) is examined within a framework of non-stationary time-series analysis to identify determinand trends, seasonality and short-term dynamics. The results demonstrate that both long-term and high-frequency monitoring provide valuable and unique insights into the hydrochemistry of a catchment. The long-term data allowed analysis of long-term trends, demonstrating continued increases in DOC concentrations accompanied by declining SO4 concentrations within the stream, and provided new insights into the changing amplitude and phase of the seasonality of determinands such as DOC and Al. Additionally, these data proved invaluable for placing the short-term variability demonstrated within the high-frequency data in context. The 7-hour data highlighted complex diurnal cycles for NO3, Ca and Fe, with cycles displaying changes in phase and amplitude on a seasonal basis. The high-frequency data also demonstrated the need to consider the impact that the time of sample collection can have on the summary statistics of the data, and that sampling during the hours of darkness provides additional hydrochemical information for determinands which exhibit pronounced diurnal variability. Moving forward, this research demonstrates the need for both long-term and high-frequency monitoring to facilitate a full and accurate understanding of catchment hydrochemical dynamics.
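The sampling-time effect described above can be illustrated with a synthetic series: a determinand with a NO3-like diurnal cycle is "sampled" either at every hour or only at a fixed morning time. The baseline (10), amplitude (3) and peak hour (06:00) are assumed values for illustration only, not figures from the study.

```python
import math

# One synthetic year of hourly concentrations with a pure diurnal cycle:
# baseline 10, amplitude 3, peaking at 06:00 (all assumed values).
hours = range(24 * 365)
conc = [10.0 + 3.0 * math.sin(2 * math.pi * (h % 24) / 24) for h in hours]

full_mean = sum(conc) / len(conc)                 # mean over all hours of the day
at_6am = [conc[h] for h in hours if h % 24 == 6]  # fixed 06:00 sampling (cycle peak)
biased_mean = sum(at_6am) / len(at_6am)

print(round(full_mean, 2), round(biased_mean, 2))  # fixed-time sampling overestimates the mean
```

Sampling always at the same point in the diurnal cycle shifts the apparent mean by up to the full cycle amplitude, which is why the abstract stresses the importance of sample-collection time for summary statistics.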
Abstract:
Cognitive control mechanisms—such as inhibition—decrease the likelihood that goal-directed activity is ceded to irrelevant events. Here, we use the action of auditory distraction to show how retrieval from episodic long-term memory is affected by competitor inhibition. Typically, a sequence of to-be-ignored spoken distracters drawn from the same semantic category as a list of visually-presented to-be-recalled items impairs free recall performance. In line with competitor inhibition theory (Anderson, 2003), free recall was worse for items on a probe trial if they were a repeat of distracter items presented during the previous, prime, trial (Experiment 1). This effect was only produced when the distracters were dominant members of the same category as the to-be-recalled items on the prime. For prime trials in which distracters were low-dominant members of the to-be-remembered item category or were unrelated to that category—and hence not strong competitors for retrieval—positive priming was found (Experiments 2 & 3). These results are discussed in terms of inhibitory approaches to negative priming and memory retrieval.
Abstract:
The construction sector is often described as lagging behind other major industries. At first this appears fair when considering the concept of corporate social responsibility (CSR). It is argued that CSR is ill-defined, with firms struggling to make sense of and engage with it. Literature suggests that the short-termism of construction firms renders the long-term, triple-bottom-line principle of CSR untenable. This seems to be borne out by literature indicating that construction firms typically adopt a compliance-based approach to CSR instead of discretionary CSR, which is regarded as adding most value to firms and benefiting the broadest group of stakeholders. However, this research, conducted in the UK using a regional construction firm, offers a counter-argument: discretionary CSR approaches are well embedded and enacted within the firm's business operations even though they are not formally articulated as CSR strategies and thus remain 'hidden'. This raises two questions in the current CSR debate: first, is 'hidden' CSR relevant to the long-term success of construction firms? And second, to what extent do these firms need to reinvent themselves to formally take advantage of the CSR agenda?
Abstract:
The estimation of the long-term wind resource at a prospective site based on a relatively short on-site measurement campaign is an indispensable task in the development of a commercial wind farm. The typical industry approach is based on the measure-correlate-predict (MCP) method, where a relational model between the site wind velocity data and the data obtained from a suitable reference site is built from concurrent records. In a subsequent step, a long-term prediction for the prospective site is obtained from a combination of the relational model and the historic reference data. In the present paper, a systematic study is presented where three new MCP models, together with two published reference models (a simple linear regression and the variance ratio method), have been evaluated based on concurrent synthetic wind speed time series for two sites, simulating the prospective and the reference site. The synthetic method has the advantage of generating time series with the desired statistical properties, including Weibull scale and shape factors, required to evaluate the five methods under all plausible conditions. In this work, first a systematic discussion of the statistical fundamentals behind MCP methods is provided, and three new models, one based on a nonlinear regression and two (termed kernel methods) derived from the use of conditional probability density functions, are proposed. All models are evaluated by using five metrics under a wide range of values of the correlation coefficient, the Weibull scale, and the Weibull shape factor. Only one of the five models, a kernel method based on bivariate Weibull probability functions, is capable of accurately predicting all performance metrics studied.
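The simpler of the two reference models mentioned above, a linear regression between concurrent records, can be sketched in a few lines. This is a minimal illustration of the measure-correlate-predict idea under assumed Weibull parameters and noise, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Concurrent measurement campaign: reference-site and target-site wind speeds
# (m/s). Weibull shape 2.0 and a linear relation with noise are assumptions.
ref_concurrent = rng.weibull(2.0, 1000) * 8.0
site_concurrent = 0.9 * ref_concurrent + 0.5 + rng.normal(0.0, 0.5, 1000)

# "Measure-correlate": fit the relational model site = a * ref + b.
a, b = np.polyfit(ref_concurrent, site_concurrent, 1)

# "Predict": apply the model to the long-term historic reference record.
ref_longterm = rng.weibull(2.0, 20000) * 8.0
site_longterm_pred = a * ref_longterm + b

print(round(float(site_longterm_pred.mean()), 2))
```

The kernel methods the paper proposes replace this single linear map with conditional probability density functions, so that the full distribution (and hence Weibull shape) at the target site is predicted rather than just a mean relation.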
Abstract:
Working memory (WM) is not a unitary construct. There are distinct processes involved in encoding information, maintaining it on-line, and using it to guide responses. The anatomical configurations of these processes are more accurately analyzed as functionally connected networks than collections of individual regions. In the current study we analyzed event-related functional magnetic resonance imaging (fMRI) data from a Sternberg Item Recognition Paradigm WM task using a multivariate analysis method that allowed the linking of functional networks to temporally-separated WM epochs. The length of the delay epochs was varied to optimize isolation of the hemodynamic response (HDR) for each task epoch. All extracted functional networks displayed statistically significant sensitivity to delay length. Novel information extracted from these networks that was not apparent in the univariate analysis of these data included involvement of the hippocampus in encoding/probe, and decreases in BOLD signal in the superior temporal gyrus (STG), along with default-mode regions, during encoding/delay. The bilateral hippocampal activity during encoding/delay fits with theoretical models of WM in which memoranda held across the short term are activated long-term memory representations. The BOLD signal decreases in the STG were unexpected, and may reflect repetition suppression effects invoked by internal repetition of letter stimuli. Thus, analysis methods focusing on how network dynamics relate to experimental conditions allowed extraction of novel information not apparent in univariate analyses, and are particularly recommended for WM experiments for which task epochs cannot be randomized.
Abstract:
The UK has a target for an 80% reduction in CO2 emissions by 2050 from a 1990 base. Domestic energy use accounts for around 30% of total emissions. This paper presents a comprehensive review of existing models and modelling techniques and indicates how they might be improved by considering individual buying behaviour. Macro (top-down) and micro (bottom-up) models have been reviewed and analysed. It is found that bottom-up models can project technology diffusion due to their higher resolution. The weakness of existing bottom-up models at capturing individual green technology buying behaviour has been identified. Consequently, Markov chains, neural networks and agent-based modelling are proposed as possible methods to incorporate buying behaviour within a domestic energy forecast model. Among the three methods, agent-based models are found to be the most promising, although a successful agent approach requires large amounts of input data. A prototype agent-based model has been developed and tested, which demonstrates the feasibility of an agent approach. This model shows that an agent-based approach is promising as a means to predict the effectiveness of various policy measures.
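The agent-based approach favoured above can be caricatured as a threshold-adoption model: each household adopts a green technology once observed uptake exceeds its personal willingness threshold. Everything below (agent count, threshold distribution, seeding) is an illustrative assumption, not the authors' prototype:

```python
import random

random.seed(1)

# Threshold-adoption sketch: N household agents, each with a personal
# adoption threshold; a handful of seeded early adopters start the process.
N = 500
thresholds = [random.uniform(0.0, 0.5) for _ in range(N)]  # assumed distribution
adopted = [i < 25 for i in range(N)]                       # 25 seeded early adopters

for year in range(25):
    frac = sum(adopted) / N            # current market penetration
    for i in range(N):
        if not adopted[i] and thresholds[i] <= frac:
            adopted[i] = True          # peer-observed uptake triggers adoption

print(sum(adopted) / N)                # final adopted fraction after 25 years
```

Even this toy version reproduces the S-curve diffusion that the abstract notes bottom-up models can capture; a realistic model would condition thresholds on the "large amounts of input data" (demographics, prices, policy) the abstract mentions.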
Abstract:
Nanoscience and technology (NST) are widely cited to be the defining technology for the 21st century. In recent years, the debate surrounding NST has become increasingly public, with much of this interest stemming from two radically opposing long-term visions of a NST-enabled future: ‘nano-optimism’ and ‘nano-pessimism’. This paper demonstrates that NST is a complex and wide-ranging discipline, the future of which is characterised by uncertainty. It argues that consideration of the present-day issues surrounding NST is essential if the public debate is to move forwards. In particular, the social constitution of an emerging technology is crucial if any meaningful discussion surrounding costs and benefits is to be realised. An exploration of the social constitution of NST raises a number of issues, of which unintended consequences and the interests of those who own and control new technologies are highlighted.
Abstract:
We model the thermal evolution of a subsurface ocean of aqueous ammonium sulfate inside Titan using a parameterized convection scheme. The cooling and crystallization of such an ocean depends on its heat flux balance, and is governed by the pressure-dependent melting temperatures at the top and bottom of the ocean. Using recent observations and previous experimental data, we present a nominal model which predicts the thickness of the ocean throughout the evolution of Titan; after 4.5 Ga we expect an aqueous ammonium sulfate ocean 56 km thick, overlain by a thick (176 km) heterogeneous crust of methane clathrate, ice I and ammonium sulfate. Underplating of the crust by ice I will give rise to compositional diapirs that are capable of rising through the crust and providing a mechanism for cryovolcanism at the surface. We have conducted a parameter space survey to account for possible variations in the nominal model, and find that for a wide range of plausible conditions, an ocean of aqueous ammonium sulfate can survive to the present day, which is consistent with the recent observations of Titan's spin state from Cassini radar data [Lorenz, R.D., Stiles, B.W., Kirk, R.L., Allison, M.D., del Marmo, P.P., Iess, L., Lunine, J.I., Ostro, S.J., Hensley, S., 2008. Science 319, 1649–1651].
Abstract:
In the European Union, first-tier assessment of the long-term risk to birds and mammals from pesticides is based on calculation of a deterministic long-term toxicity/exposure ratio (TERlt). The ratio is developed from generic herbivores and insectivores and applied to all species. This paper describes two case studies that implement proposed improvements to the way long-term risk is assessed. These refined methods require calculation of a TER for each of five identified phases of reproduction (phase-specific TERs) and use of adjusted No Observed Effect Levels (NOELs) to incorporate variation in species sensitivity to pesticides. They also involve progressive refinement of the exposure estimate so that it applies to particular species, rather than generic indicators, and relates spraying date to onset of reproduction. The effect of using these new methods on the assessment of risk is described. Each refinement did not necessarily alter the calculated TER value in a way that was either predictable or consistent across both case studies. However, use of adjusted NOELs always reduced TERs, and relating spraying date to onset of reproduction increased most phase-specific TERs. The case studies suggested that the current first-tier TERlt assessment may underestimate risk in some circumstances and that phase-specific assessments can help identify appropriate risk-reduction measures. The way in which deterministic phase-specific assessments can currently be implemented to enhance first-tier assessment is outlined.
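The phase-specific TER idea can be sketched numerically. All numbers below (the NOEL, the extrapolation factor, the exposure estimates, and the trigger value) are illustrative assumptions, not values from the case studies:

```python
# Phase-specific long-term toxicity/exposure ratio (TER) sketch.
noel = 12.0                       # No Observed Effect Level, mg/kg bw/day (assumed)
adjusted_noel = noel / 3.0        # assumed factor for species-sensitivity variation

# Assumed daily exposure (mg/kg bw/day) in each of five reproductive phases,
# e.g. after relating spraying date to the onset of reproduction.
exposure_by_phase = {
    "pair formation": 0.5,
    "egg laying": 2.5,
    "incubation": 1.6,
    "chick rearing": 3.2,
    "fledging": 0.9,
}

phase_ters = {phase: adjusted_noel / exp for phase, exp in exposure_by_phase.items()}
trigger = 5.0                     # assumed first-tier trigger: TER below this flags risk
at_risk = [p for p, ter in phase_ters.items() if ter < trigger]
print(at_risk)
```

The point of the refinement is visible even in this toy example: a single generic TERlt would give one pass/fail answer, whereas phase-specific TERs show which parts of the breeding cycle drive the risk and so where risk-reduction measures (e.g. shifting the spraying date) would help.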
Abstract:
Long-term exposure of skylarks to a fictitious insecticide and of wood mice to a fictitious fungicide were modelled probabilistically in a Monte Carlo simulation. Within the same simulation the consequences of exposure to pesticides on reproductive success were modelled using the toxicity-exposure-linking rules developed by R.S. Bennet et al. (2005) and the interspecies extrapolation factors suggested by R. Luttik et al. (2005). We built models to reflect a range of scenarios and as a result were able to show how exposure to pesticide might alter the number of individuals engaged in any given phase of the breeding cycle at any given time and predict the numbers of new adults at the season's end.
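A stripped-down version of such a probabilistic exposure simulation might look like the following. The dose equation (residue × intake × proportion of diet from the treated area) is a standard exposure formulation, but the distributions and parameter values here are invented for illustration:

```python
import random

random.seed(42)

# Monte Carlo sketch of a probabilistic daily pesticide dose for one species.
def simulate_daily_dose():
    residue = random.lognormvariate(1.0, 0.6)  # residue on food, mg/kg (assumed)
    intake = random.uniform(0.2, 0.4)          # food intake, kg food / kg bw / day (assumed)
    pt = random.betavariate(4, 2)              # proportion of diet from treated area (assumed)
    return residue * intake * pt               # dose, mg/kg bw/day

doses = sorted(simulate_daily_dose() for _ in range(10000))
median = doses[len(doses) // 2]
p95 = doses[int(0.95 * len(doses))]
print(round(median, 2), round(p95, 2))
```

In the full approach described above, each simulated dose would then be passed through toxicity-exposure-linking rules to decide whether the affected individual drops out of its current breeding phase, which is what lets the model track phase occupancy and end-of-season recruitment.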
Abstract:
Dispersal is a key process in population and evolutionary ecology. Individual decisions are affected by fitness consequences of dispersal, but these are difficult to measure in wild populations. A long-term dataset on a geographically closed bird population, the Mauritius kestrel, offers a rare opportunity to explore fitness consequences. Females dispersed further when the availability of local breeding sites was limited, whereas male dispersal correlated with phenotypic traits. Female but not male fitness was lower when they dispersed longer distances compared to settling close to home. These results suggest a cost of dispersal in females. We found evidence of both short- and long-term fitness consequences of natal dispersal in females, including reduced fecundity in early life and more rapid aging in later life. Taken together, our results indicate that dispersal in early life might shape life history strategies in wild populations.
Abstract:
The incidence of breast cancer has risen worldwide to unprecedented levels in recent decades, making it now the major cancer of women in many parts of the world.1 Although diet, alcohol, radiation and inherited loss of BRCA1/2 genes have all been associated with increased incidence, the main identified risk factors are life exposure to hormones including physiological variations associated with puberty/pregnancy/menopause,1 personal choice of use of hormonal contraceptives2 and/or hormone replacement therapy.3–6 On this basis, exposure of the human breast to the many environmental pollutant chemicals capable of mimicking or interfering with oestrogen action7 should also be of concern.8 Hundreds of such environmental chemicals have now been measured in human breast tissue from a range of dietary and domestic exposure sources7,9 including persistent organochlorine pollutants (POPs),10 polybrominated diphenylethers and polybromobiphenyls,11 polychlorinated biphenyls,12 dioxins,13 alkyl phenols,14 bisphenol-A and chlorinated derivatives,15 as well as other less lipophilic compounds such as parabens (alkyl esters of p-hydroxybenzoic acid),16 but studies investigating any association between raised levels of such compounds and the development of breast cancer remain inconclusive.7–16 However, the functionality of these chemicals has continued to be assessed on the basis of individual chemicals rather than the environmental reality of long-term low-dose exposure to complex mixtures. This misses the potential for individuals to have high concentrations of different compounds but with a common mechanism of action. It also misses the complex interactions between chemicals and physiological hormones which together may act to alter the internal homeostasis of the oestrogenic environment of mammary tissue.
Abstract:
Purpose of review: Vascular function is recognized as an early and integrative marker of cardiovascular disease. While there is consistent evidence that the quantity of dietary fat has significant effects on vascular function, the differential effects of individual fatty acids are less clear. This review summarizes recent evidence from randomized controlled dietary studies on the impact of dietary fatty acids on vascular function, as determined by flow-mediated dilatation (FMD). Recent findings: Critical appraisal is given to five intervention studies (one acute, four chronic) which examined the impact of long-chain n-3 polyunsaturated fatty acids [eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA)] on FMD. In the acute setting, a high dose of long-chain n-3 polyunsaturated fatty acids (4.9 g per 70 kg man) improved postprandial FMD significantly, compared with a saturated fatty acid-rich meal, in healthy individuals. In longer-term studies, there was limited evidence for a significant effect of EPA/DHA on FMD in diseased groups. Summary: The strongest evidence for the benefits of EPA/DHA on vascular function is in the postprandial state. More evidence from randomized controlled intervention trials with foods will be required to substantiate the long-term effects of EPA/DHA, to inform public health and clinical recommendations.