939 results for within-host modelling
Abstract:
This article reviews the experiences of a practising business consultancy division. It discusses the reasons for the failure of the traditional, expert consultancy approach and sets out the requirements for a more suitable consultancy methodology. An approach called ‘Modelling as Learning’ is introduced; its three defining aspects are client ownership of all analytical work performed, the consultant acting as facilitator, and sensitivity to soft issues within and surrounding a problem. The goal of such an approach is the acceleration of the client's learning about the business. The tools used within this methodological framework are discussed and some case studies of the methodology are presented. It is argued that a learning experience was necessary before arriving at the new methodology, but that it is now a valuable and significant component of the division's work.
Abstract:
This paper outlines the results of a programme of radiocarbon dating and Bayesian modelling relating to an Early Bronze Age barrow cemetery at Over, Cambridgeshire. In total, 43 dates were obtained, enabling the first high-resolution independent chronology (relating to both burial and architectural events) to be constructed for a site of this kind. The results suggest that the three main turf-mound barrows were probably constructed and used successively rather than simultaneously, that the shift from inhumation to cremation seen on the site was not a straightforward progression, and that the four main ‘types’ of cremation burial in evidence were used throughout the life of the site. Overall, variability in burial practice appears to have been a key feature of the site. The paper also considers the light that this fine-grained chronology can shed on recent, much wider discussions of memory and time within Early Bronze Age barrows.
Abstract:
The urban boundary layer (UBL) is the part of the atmosphere in which most of the planet’s population now lives, and is one of the most complex and least understood microclimates. Given potential climate change impacts and the requirement to develop cities sustainably, the need for sound modelling and observational tools becomes pressing. This review paper considers progress made in studies of the UBL in terms of a conceptual framework spanning microscale to mesoscale determinants of UBL structure and evolution. Considerable progress in observing and modelling the urban surface energy balance has been made. The urban roughness sub-layer is an important region requiring attention as assumptions about atmospheric turbulence break down in this layer and it may dominate coupling of the surface to the UBL due to its considerable depth. The upper 90% of the UBL (mixed and residual layers) remains under-researched but new remote sensing methods and high resolution modelling tools now permit rapid progress. Surface heterogeneity dominates from neighbourhood to regional scales and should be more strongly considered in future studies. Specific research priorities include humidity within the UBL, high-rise urban canopies and the development of long-term, spatially extensive measurement networks coupled strongly to model development.
A benchmark-driven modelling approach for evaluating deployment choices on a multi-core architecture
Abstract:
The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating point units (as in the AMD Bulldozer), which means that access times depend on the mapping of application tasks and on a core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and using non-standard task-to-core mappings can dramatically alter performance. Establishing this, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between results as necessary.
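To illustrate the kind of benchmark-driven model described above, the following minimal sketch (not the authors' code; all timings, sizes and function names are hypothetical) interpolates measured per-phase benchmark timings to predict the runtime of a loop-plus-halo-exchange kernel for a given domain decomposition.

```python
# Hypothetical sketch of a benchmark-driven performance model:
# measured timings for a few problem sizes are interpolated to
# predict compute and halo-exchange costs for unseen sizes.
import numpy as np

# Benchmark results (illustrative numbers only):
# local grid points per task -> seconds per timestep
compute_sizes = np.array([64**2, 128**2, 256**2, 512**2])
compute_times = np.array([0.8e-3, 3.1e-3, 12.5e-3, 51.0e-3])

# halo-exchange message size (bytes) -> seconds per exchange
halo_bytes = np.array([2048, 4096, 8192, 16384])
halo_times = np.array([9.0e-6, 14.0e-6, 24.0e-6, 45.0e-6])


def predict_runtime(nx, ny, px, py, timesteps):
    """Predict runtime for an nx*ny grid decomposed onto a px*py task grid,
    interpolating between the per-phase benchmark results."""
    local_points = (nx // px) * (ny // py)
    # 8-byte doubles along the longer local edge form the halo message
    msg_bytes = 8 * max(nx // px, ny // py)

    t_compute = np.interp(local_points, compute_sizes, compute_times)
    t_halo = np.interp(msg_bytes, halo_bytes, halo_times)

    # one array update and four neighbour exchanges per timestep
    return timesteps * (t_compute + 4 * t_halo)


if __name__ == "__main__":
    print(f"Predicted runtime: {predict_runtime(1024, 1024, 4, 4, 1000):.2f} s")
```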
Abstract:
Climate has been changing over the last fifty years in China and will continue to change regardless of any mitigation efforts. Agriculture is a climate-dependent activity that is highly sensitive to climate change and climate variability. Understanding the interactions between climate change and agricultural production is essential for the stable development of Chinese society. The first task is to understand how to predict future climate and link it to the agricultural production system. In this paper, recent domestic and international studies are reviewed in order to provide an overall picture of progress in climate change research. Methods for constructing climate change scenarios are introduced. The pivotal techniques linking crop models and climate models are systematically assessed, and modelled climate change impacts on Chinese crop yields are summarized. The review finds that simulated grain crop production inherits uncertainty from the use of different climate models, emission scenarios and crop simulation models. Moreover, studies differ in spatial resolution and in methods for downscaling general circulation models (GCMs), which increases the uncertainty of regional impact assessments. However, the magnitude of the change in crop production due to climate change (at 700 ppm CO2 eq) appears to be within ±10% for China in these assessments. In most studies, the yields of the three cereal crops declined under climate change scenarios, and only wheat in some regions showed an increase. Finally, the paper points out several gaps in current research where further study is needed before the impacts of climate change on crops can be assessed objectively. The uncertainty in crop yield projections is associated with climate change scenarios, CO2 fertilization effects and adaptation options. Therefore, more work is needed in areas such as free-air CO2 enrichment experiments and the practical implementation of adaptations.
Abstract:
Aeolian dust modelling has improved significantly over the last ten years and many institutions now consistently model dust uplift, transport and deposition in general circulation models (GCMs). However, the representation of dust in GCMs is highly variable between modelling communities due to differences in the uplift schemes employed and the representation of the global circulation that subsequently leads to dust deflation. In this study two different uplift schemes are incorporated in the same GCM. This approach enables a clearer comparison of the dust uplift schemes themselves, without the added complexity of several different transport and deposition models. The global annual mean dust aerosol optical depths (at 550 nm) using two different dust uplift schemes were found to be 0.014 and 0.023—both lying within the estimates from the AeroCom project. However, the models also have appreciably different representations of the dust size distribution adjacent to the West African coast and very different deposition at various sites throughout the globe. The different dust uplift schemes were also capable of influencing the modelled circulation, surface air temperature, and precipitation despite the use of prescribed sea surface temperatures. This has important implications for the use of dust models in AMIP-style (Atmospheric Modelling Intercomparison Project) simulations and Earth-system modelling.
Abstract:
Nutrient enrichment and drought conditions are major threats to lowland rivers, causing ecosystem degradation and compositional changes in plant communities. The controls on primary producer composition in chalk rivers are investigated using a new model and existing data from the River Frome (UK) to explore abiotic and biotic interactions. The growth and interaction of four primary producer functional groups (suspended algae, macrophytes, epiphytes, sediment biofilm) were successfully linked with flow, nutrients (N, P), light and water temperature, such that the modelled biomass dynamics of the four groups matched those observed. Simulated growth of suspended algae was limited mainly by the residence time of the river rather than by in-stream phosphorus concentrations. The simulated growth of the fixed vegetation (macrophytes, epiphytes, sediment biofilm) was overwhelmingly controlled by incoming solar radiation and light attenuation in the water column. Nutrients and grazing exerted little control compared with the other physical controls in the simulations. A number of environmental threshold values were identified in the model simulations for the different producer types. The simulation results highlighted the importance of pelagic–benthic interactions within the River Frome and indicated that the behaviour of the primary producers was defined by the interaction of processes rather than by a single, dominant driver. The model simulations pose interesting questions to be considered in the next iteration of field- and laboratory-based studies.
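The sketch below is a minimal illustration of this kind of coupled producer model, not the published River Frome model: four functional groups grow logistically, with growth rates scaled by multiplicative light, phosphorus and temperature limitation factors. All rates, constants and forcings are hypothetical.

```python
# Minimal illustrative sketch (not the published River Frome model):
# four primary-producer groups grow logistically, with growth scaled by
# multiplicative light, nutrient (P) and temperature limitation factors.
import numpy as np
from scipy.integrate import solve_ivp

GROUPS = ["suspended algae", "macrophytes", "epiphytes", "sediment biofilm"]
MU_MAX = np.array([1.2, 0.15, 0.6, 0.4])    # hypothetical max growth rates (1/day)
K_CAP = np.array([5.0, 300.0, 20.0, 50.0])  # hypothetical carrying capacities
LOSS = np.array([0.8, 0.02, 0.1, 0.05])     # hypothetical loss rates (1/day)


def limitation(light, phosphorus, temperature):
    """Multiplicative limitation factors in [0, 1]."""
    f_light = light / (light + 50.0)             # Monod-type light response
    f_p = phosphorus / (phosphorus + 0.03)       # Monod-type P response (mg/L)
    f_temp = np.exp(-((temperature - 18.0) / 10.0) ** 2)  # temperature optimum
    return f_light * f_p * f_temp


def rhs(t, biomass):
    # Simple seasonal drivers (purely illustrative forcing)
    light = 200.0 * max(np.sin(2 * np.pi * t / 365.0), 0.05)
    phosphorus = 0.06
    temperature = 12.0 + 8.0 * np.sin(2 * np.pi * (t - 80) / 365.0)

    growth = MU_MAX * limitation(light, phosphorus, temperature)
    return growth * biomass * (1.0 - biomass / K_CAP) - LOSS * biomass


sol = solve_ivp(rhs, (0.0, 365.0), y0=[0.5, 10.0, 1.0, 2.0], max_step=1.0)
for name, final in zip(GROUPS, sol.y[:, -1]):
    print(f"{name:16s} biomass after one year: {final:.2f}")
```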
Abstract:
Climatic and land use changes have significant consequences for the distribution of tree species, both through natural dispersal processes and following management prescriptions. Responses to these changes will be expressed most strongly in seedlings near current species range boundaries. In northern temperate forest ecosystems, where changes are already being observed, ectomycorrhizal fungi contribute significantly to successful tree establishment. We hypothesised that communities of fungal symbionts might therefore play a role in facilitating, or limiting, host seedling range expansion. To test this hypothesis, ectomycorrhizal communities of interior Douglas-fir and interior lodgepole pine seedlings were analysed in a common greenhouse environment following growth in five soils collected along an ecosystem gradient. Currently, Douglas-fir’s natural distribution encompasses three of the five soils, whereas lodgepole pine’s extends much further north. Host filtering was evident amongst the 29 fungal species encountered: 7 were shared, 9 exclusive to Douglas-fir and 13 exclusive to lodgepole pine. Seedlings of both host species formed symbioses with each soil fungal community, thus Douglas-fir did so even where those soils came from outside its current distribution. However, these latter communities displayed significant taxonomic and functional differences to those found within the host distribution, indicative of habitat filtering. In contrast, lodgepole pine fungal communities displayed high functional similarity across the soil gradient. Taxonomic and/or functional shifts in Douglas-fir fungal communities may prove ecologically significant during the predicted northward migration of this species; especially in combination with changes in climate and management operations, such as seed transfer across geographical regions for forestry purposes.
Abstract:
Personalised conditioning systems (PCS) are widely studied; they can potentially reduce energy consumption while still meeting occupants' thermal comfort requirements. It has been suggested that automatic optimised operation schemes for PCS should be introduced to avoid the energy wastage and discomfort caused by inappropriate operation. In certain automatic operation schemes, personalised thermal sensation models are applied as key components that help to set targets for PCS operation. In this research, a novel personal thermal sensation modelling method based on the C-Support Vector Classification (C-SVC) algorithm has been developed for PCS control. Personal thermal sensation modelling is treated as a classification problem. During the modelling process, the method ‘learns’ an occupant's thermal preferences from his/her feedback, environmental parameters, and personal physiological and behavioural factors. The modelling method has been verified by comparing the actual thermal sensation vote (TSV) with the modelled one for 20 individual cases. Furthermore, the accuracy of each individual thermal sensation model has been compared with the outcomes of the PMV model. The results indicate that the modelling method presented in this paper is an effective tool for modelling personal thermal sensations and could be integrated within the PCS for optimised system operation and control.
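As a hedged illustration of the C-SVC approach, the sketch below uses scikit-learn's SVC (an implementation of C-SVC) to map environmental and personal features to thermal sensation votes. The features, data and parameters are illustrative, not those of the study.

```python
# Hedged sketch of C-SVC-based personal thermal sensation modelling,
# using scikit-learn's SVC on synthetic data (not the study's dataset).
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300

# Environmental and personal features: air temperature (C), relative
# humidity (%), air speed (m/s), clothing insulation (clo), metabolic rate (met)
X = np.column_stack([
    rng.uniform(18, 30, n),
    rng.uniform(30, 70, n),
    rng.uniform(0.0, 0.5, n),
    rng.uniform(0.4, 1.2, n),
    rng.uniform(1.0, 1.6, n),
])

# Synthetic thermal sensation votes: -1 (cool), 0 (neutral), +1 (warm),
# loosely driven by temperature so the classifier has something to learn
y = np.digitize(X[:, 0] + rng.normal(0, 1.0, n), bins=[22, 26]) - 1

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(), SVC(C=10.0, kernel="rbf", gamma="scale"))
model.fit(X_train, y_train)

print("Held-out accuracy:", round(model.score(X_test, y_test), 2))
print("Predicted TSV at 28 C, 50% RH:", model.predict([[28, 50, 0.1, 0.5, 1.2]]))
```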
Abstract:
Molecular hydrogen emission is commonly observed in planetary nebulae. Images taken in infrared H(2) emission lines show that at least part of the molecular emission is produced inside the ionized region. In the best-studied case, the Helix nebula, the H(2) emission is produced inside cometary knots (CKs), comet-shaped structures believed to be clumps of dense neutral gas embedded within the ionized gas. Most of the H(2) emission of the CKs seems to be produced in a thin layer between the ionized diffuse gas and the neutral material of the knot, in a mini-photodissociation region (mini-PDR). However, PDR models published so far cannot fully explain all the characteristics of the H(2) emission of the CKs. In this work, we use the photoionization code AANGABA to study the H(2) emission of the CKs, particularly that produced in the H(+)/H(0) interface of the knot, where a significant fraction of the H(2) 1-0 S(1) emission seems to be produced. Our results show that the production of molecular hydrogen in such a region may explain several characteristics of the observed emission, particularly the high excitation temperature of the H(2) infrared lines. We find that the temperature derived from H(2) observations, even of a single knot, will depend very strongly on the observed transitions, with much higher temperatures derived from excited levels. We also propose that the separation between the H alpha and [N II] peak emission observed in images of CKs may be an effect of the distance of the knot from the star, since for knots farther from the central star the [N II] line is produced closer to the border of the CK than H alpha.
Abstract:
IP(3)-dependent Ca(2+) signaling controls a myriad of cellular processes in higher eukaryotes, and similar signaling pathways are evolutionarily conserved in Plasmodium, the intracellular parasite that causes malaria. We have reported that isolated, permeabilized Plasmodium chabaudi releases Ca(2+) upon addition of exogenous IP(3). In the present study, we investigated whether the IP(3) signaling pathway operates in intact Plasmodium falciparum, the major disease-causing human malaria parasite. P. falciparum-infected red blood cells (RBCs) in the trophozoite stage were simultaneously loaded with the Ca(2+) indicator Fluo-4/AM and caged-IP(3). Photolytic release of IP(3) elicited a transient Ca(2+) increase in the cytosol of the intact parasite within the RBC. The intracellular Ca(2+) pools of the parasite were selectively discharged, using thapsigargin to deplete endoplasmic reticulum (ER) Ca(2+) and the antimalarial chloroquine to deplete Ca(2+) from acidocalcisomes. These data show that the ER is the major IP(3)-sensitive Ca(2+) store. Previous work has shown that the human host hormone melatonin regulates the P. falciparum cell cycle via a Ca(2+)-dependent pathway. In the present study, we demonstrate that melatonin increases inositol-polyphosphate production in the intact intraerythrocytic parasite. Moreover, the Ca(2+) responses to melatonin and uncaging of IP(3) were mutually exclusive in infected RBCs. Taken together, these data provide evidence that melatonin activates PLC to generate IP(3) and open ER-localized IP(3)-sensitive Ca(2+) channels in P. falciparum. This receptor signaling pathway is likely to be involved in the regulation and synchronization of parasite cell cycle progression.
Abstract:
Leiopelma hochstetteri is an endangered New Zealand frog now confined to isolated populations scattered across the North Island. A better understanding of its past, current and predicted future environmental suitability will contribute to its conservation, which is in jeopardy due to human activities, feral predators, disease and climate change. Here we use ecological niche modelling with all known occurrence data (N = 1708) and six determinant environmental variables to elucidate the current, pre-human and future environmental suitability of this species. Comparison among independent runs, subfossil records and a clamping method allows validation of the models. Many areas identified as currently suitable do not host any known populations. This apparent discrepancy could be explained by several non-exclusive hypotheses: the areas have not been adequately surveyed and undiscovered populations remain; the model is over-simplistic; the species' sensitivity to fragmentation and small population size; biotic interactions; and historical events. An additional outcome is that apparently suitable but frog-less areas could be targeted for future translocations. Surprisingly, pre-human conditions do not differ markedly, highlighting the possibility that the range of the species was broadly fragmented before human arrival. Nevertheless, some populations, particularly in the west of the North Island, may have disappeared as a result of human-mediated habitat modification. Future conditions are marked by higher temperatures, which are predicted to be favourable to the species. However, such a virtual gain in suitable range will probably not benefit the species given the highly fragmented nature of existing habitat and the species' low dispersal ability.
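The following toy sketch illustrates the general shape of a niche-modelling workflow (occurrence records plus environmental predictors scored for suitability) using a simple presence/background logistic classifier rather than the study's actual method; the variables and data are synthetic.

```python
# Toy ecological-niche-modelling sketch (not the study's model or data):
# fit a presence/background classifier on environmental predictors and
# score environmental suitability across a grid of cells.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)

# Hypothetical environmental layers for 2000 grid cells:
# mean annual temperature (C), annual precipitation (mm), slope (deg)
env = np.column_stack([
    rng.uniform(5, 20, 2000),
    rng.uniform(800, 3000, 2000),
    rng.uniform(0, 40, 2000),
])

# Synthetic "occurrences": cells with mild temperature and high rainfall
presence_prob = np.exp(-((env[:, 0] - 13) / 3) ** 2) * (env[:, 1] / 3000)
presence = rng.random(2000) < presence_prob

# Presence points vs randomly sampled background points
X = np.vstack([env[presence], env[rng.choice(2000, presence.sum())]])
y = np.concatenate([np.ones(presence.sum()), np.zeros(presence.sum())])

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Relative suitability for every cell; highly suitable cells with no known
# occurrence are the kind that might be surveyed or considered for translocation
suitability = model.predict_proba(env)[:, 1]
print("Top-5 most suitable cells:", np.argsort(suitability)[-5:])
```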
Abstract:
Ghana has faced the macroeconomic problem of inflation for a long period of time, and the problem has slowed economic growth in the country. Inflation is one of the major economic challenges facing most countries in the world, especially those in Africa, including Ghana. Forecasting inflation rates in Ghana is therefore important for the government in designing economic strategies and effective monetary policies to combat unexpectedly high inflation. This paper applies a seasonal autoregressive integrated moving average model to forecast inflation rates in Ghana. Using monthly inflation data from July 1991 to December 2009, we find that an ARIMA(1,1,1)(0,0,1)12 model represents the behaviour of the Ghanaian inflation rate well. Based on the selected model, we forecast seven months of inflation rates beyond the sample period (January 2010 to July 2010). The observed inflation rates from January to April, published by the Ghana Statistical Service, fall within the 95% confidence interval obtained from the fitted model. The forecasts show a decreasing pattern, with a turning point in Ghanaian inflation in July.
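A sketch of fitting the ARIMA(1,1,1)(0,0,1)12 specification named above using statsmodels' SARIMAX is shown below; the monthly series is synthetic, as the Ghanaian inflation data are not reproduced here.

```python
# Sketch of fitting an ARIMA(1,1,1)(0,0,1)_12 model with statsmodels,
# mirroring the specification in the abstract. The series is synthetic.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
dates = pd.date_range("1991-07-01", "2009-12-01", freq="MS")

# Synthetic monthly "inflation rate" with trend, seasonality and noise
t = np.arange(len(dates))
inflation = pd.Series(
    15 + 0.02 * t + 3 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1.5, len(t)),
    index=dates,
)

model = SARIMAX(inflation, order=(1, 1, 1), seasonal_order=(0, 0, 1, 12))
fit = model.fit(disp=False)

# Forecast the next seven months with 95% confidence intervals
forecast = fit.get_forecast(steps=7)
summary = pd.concat(
    [forecast.predicted_mean, forecast.conf_int(alpha=0.05)], axis=1
)
print(summary)
```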
Abstract:
The aim of this paper is to develop a flexible model for the analysis of quantitative trait loci (QTL) in outbred line crosses, which includes both additive and dominance effects. Our flexible intercross analysis (FIA) model accounts for QTL that are not fixed within founder lines and is based on the variance component framework. Genome scans with FIA are performed using a score statistic, which does not require variance component estimation. RESULTS: Simulations of a pedigree with 800 F2 individuals showed that the power of FIA including both additive and dominance effects was almost 50% for a QTL with equal allele frequencies in both lines, complete dominance and a moderate effect, whereas the power of a traditional regression model was equal to the chosen significance level of 5%. The power of FIA without dominance effects included in the model was close to that obtained for FIA with dominance for all simulated cases except QTL with overdominant effects. A genome-wide linkage analysis of experimental data from an F2 intercross between Red Jungle Fowl and White Leghorn was performed with both additive and dominance effects included in FIA. The score values for chicken body weight at 200 days of age were similar to those obtained in the FIA analysis without dominance. CONCLUSION: We have extended FIA to include QTL dominance effects. The power of FIA was superior or similar to that of standard regression methods for QTL effects with dominance. The difference in power for FIA with or without dominance is expected to be small as long as the QTL effects are not overdominant. We suggest that FIA with only additive effects should be the standard model, especially since it is more computationally efficient.
Abstract:
Many solutions to AI problems require the task to be represented in one of a multitude of rigorous mathematical formalisms. The construction of such mathematical models forms a difficult problem which is often left to the user of the problem solver. This void between problem solvers and the problems is studied by the eclectic field of automated modelling. Within this field, compositional modelling, a knowledge-based methodology for system modelling, has established itself as a leading approach. In general, a compositional modeller organises knowledge in a structure of composable fragments that relate to particular system components or processes. Its embedded inference mechanism chooses the appropriate fragments with respect to a given problem, instantiates and assembles them into a consistent system model. Many different types of compositional modeller exist, however, with significant differences in their knowledge representation and approach to inference. This paper examines compositional modelling. It presents a general framework for building and analysing compositional modellers. Based on this framework, a number of influential compositional modellers are examined and compared. The paper also identifies the strengths and weaknesses of compositional modelling and discusses some typical applications.
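The sketch below is a minimal, hypothetical illustration of the compositional-modelling idea described above (it is not any of the modellers surveyed in the paper): model fragments carry applicability conditions and equations, and a simple inference step selects consistent fragments for a scenario's components and assembles them into one system model.

```python
# Minimal illustration of the compositional-modelling idea: knowledge is
# stored as composable model fragments with applicability conditions; an
# inference step selects fragments matching the scenario's components and
# assembles their equations into one system model. All names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class ModelFragment:
    name: str
    applies_to: str                # component/process type the fragment covers
    assumptions: set = field(default_factory=set)
    equations: list = field(default_factory=list)


# A small, hypothetical fragment library
LIBRARY = [
    ModelFragment("linear resistor", "resistor", {"linear"}, ["v = i * R"]),
    ModelFragment("ideal capacitor", "capacitor", {"ideal"}, ["i = C * dv/dt"]),
    ModelFragment("leaky capacitor", "capacitor", {"with-leakage"},
                  ["i = C * dv/dt + v / R_leak"]),
]


def compose(scenario_components, active_assumptions):
    """Pick one applicable fragment per component and merge their equations."""
    model = []
    for component in scenario_components:
        candidates = [f for f in LIBRARY
                      if f.applies_to == component
                      and f.assumptions <= active_assumptions]
        if not candidates:
            raise ValueError(f"no consistent fragment for {component!r}")
        model.extend(candidates[0].equations)
    return model


print(compose(["resistor", "capacitor"], {"linear", "ideal"}))
# -> ['v = i * R', 'i = C * dv/dt']
```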