50 results for Multinomial logit models with random coefficients (RCL)
Abstract:
Objective: Converging evidence speaks in favor of an abnormal susceptibility to oxidative stress in schizophrenia. A decreased level of glutathione (GSH), the principal non-protein antioxidant and redox regulator, was observed in both the cerebrospinal fluid and the prefrontal cortex of schizophrenia patients (Do et al., 2000). Results: Schizophrenia patients have abnormal GSH synthesis, most likely of genetic origin: two independent case-control studies showed a significant association between schizophrenia and a GAG trinucleotide repeat (TNR) polymorphism in the gene for the catalytic subunit (GCLC) of glutamate-cysteine ligase (GCL), the key GSH-synthesizing enzyme. The most common TNR genotype, 7/7, was more frequent in controls, whereas the rarest TNR genotype, 8/8, was three times more frequent in patients. The disease-associated genotypes correlated with decreases in GCLC protein expression, GCL activity and GSH content. Such a redox dysregulation during development could underlie the structural and functional anomalies in connectivity: in experimental models, GSH deficit induced anomalies similar to those observed in patients. (a) Morphology: in animal models with GSH deficit during development, we observed a decreased dendritic spine density in pyramidal cells of the prefrontal cortex and an abnormal development of parvalbumin (but not calretinin) immunoreactive GABA interneurons in the anterior cingulate cortex. (b) Physiology: GSH depletion in hippocampal slices induces NMDA receptor hypofunction and an impairment of long-term potentiation. In addition, GSH deficit affected the modulation by dopamine of NMDA-induced Ca2+ responses in cultured cortical neurons: while dopamine enhanced NMDA responses in control neurons, it depressed them in GSH-depleted neurons. An antagonist of D2, but not of D1, receptors prevented this depression, a mechanism contributing to the efficacy of antipsychotics.
Redox-sensitive ryanodine receptors and L-type calcium channels underlie these observations. (c) Cognition: developing rats with low [GSH] and high dopamine show deficits in olfactory integration and in object recognition, which appear earlier in males than in females, in analogy to the delay in psychosis onset between men and women. Conclusion: this clinical and experimental evidence, combined with the favorable outcome of a clinical trial of N-acetylcysteine, a GSH precursor, on both the negative symptoms (Berk et al., submitted) and the mismatch negativity in an auditory oddball paradigm, supports the proposal that a GSH synthesis impairment of genetic origin represents, among other factors, one major risk factor in schizophrenia.
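The genotype association described above rests on standard case-control odds ratio arithmetic. A minimal sketch follows; the 2x2 counts are hypothetical, chosen only to mirror the reported threefold enrichment of the rare 8/8 genotype in patients:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 8/8 genotype vs other genotypes,
# patients vs controls.
or_, lo, hi = odds_ratio_ci(30, 270, 10, 290)
```

An interval excluding 1 would indicate a significant association at the 5% level, as in the case-control studies cited.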
Abstract:
Abstract: The object of game theory lies in the analysis of situations where different social actors have conflicting requirements and where their individual decisions will all influence the global outcome. In this framework, several games have been invented to capture the essence of various dilemmas encountered in many common and important socio-economic situations. Even though these games often succeed in helping us understand human or animal behavior in interactive settings, some experiments have shown that people tend to cooperate with each other in situations for which classical game theory strongly recommends that they do the exact opposite. Several mechanisms have been invoked to try to explain the emergence of this unexpected cooperative attitude. Among them, repeated interaction, reputation, and belonging to a recognizable group have often been mentioned. However, the work of Nowak and May (1992) showed that the simple fact of arranging the players according to a spatial structure and only allowing them to interact with their immediate neighbors is sufficient to sustain a certain amount of cooperation even when the game is played anonymously and without repetition. Nowak and May's study, and much of the work that followed, was based on regular structures such as two-dimensional grids. Axelrod et al. (2002) showed that by randomizing the choice of neighbors, i.e. by actually giving up a strictly local geographical structure, cooperation can still emerge, provided that the interaction patterns remain stable in time. This is a first step towards a social network structure. However, following pioneering work by sociologists in the sixties, such as that of Milgram (1967), in the last few years it has become apparent that many social and biological interaction networks, and even some technological networks, have particular, and partly unexpected, properties that set them apart from regular or random graphs.
Among other things, they usually display broad degree distributions and show a small-world topological structure. Roughly speaking, a small-world graph is a network where any individual is relatively close, in terms of social ties, to any other individual, a property also found in random graphs but not in regular lattices. However, in contrast with random graphs, small-world networks also have a certain amount of local structure, as measured, for instance, by a quantity called the clustering coefficient. In the same vein, many real conflicting situations in economics and sociology are well described neither by a fixed geographical position of the individuals in a regular lattice nor by a random graph. Furthermore, it is a known fact that network structure can strongly influence dynamical phenomena such as the way diseases spread across a population and the way ideas or information get transmitted. Therefore, in the last decade, research attention has naturally shifted from random and regular graphs towards better models of social interaction structures. The primary goal of this work is to discover whether or not the underlying graph structure of real social networks could explain why one finds higher levels of cooperation in populations of human beings or animals than what is prescribed by classical game theory. To meet this objective, I start by thoroughly studying a real scientific coauthorship network and showing how it differs from biological or technological networks using diverse statistical measurements. Furthermore, I extract and describe its community structure, taking into account the intensity of a collaboration. Finally, I investigate the temporal evolution of the network, from its inception to its state at the time of the study in 2006, suggesting also an effective view of it as opposed to a historical one.
Thereafter, I combine evolutionary game theory with several network models, along with the studied coauthorship network, in order to highlight which specific network properties foster cooperation and to shed some light on the various mechanisms responsible for maintaining this cooperation. I point out that, to resist defection, cooperators take advantage, whenever possible, of the degree heterogeneity of social networks and of their underlying community structure. Finally, I show that the level and stability of cooperation depend not only on the game played, but also on the evolutionary dynamic rules used and on how individual payoffs are calculated.
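The clustering coefficient invoked above to distinguish small-world networks from random graphs can be computed directly from an adjacency structure. A minimal sketch on a toy graph:

```python
def clustering_coefficient(adj, v):
    """Local clustering of node v: the fraction of v's neighbour
    pairs that are themselves connected."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in nbrs for j in nbrs
                if i < j and j in adj[i])
    return 2.0 * links / (k * (k - 1))

# Toy graph: a triangle (0, 1, 2) plus a pendant node 3 attached to 0.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
c0 = clustering_coefficient(adj, 0)   # 1 of 3 neighbour pairs linked
```

Averaging this quantity over all nodes gives the network-level clustering coefficient; small-world graphs combine a high value of it with short average path lengths.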
Abstract:
OBJECTIVE: To examine predictors of stroke recurrence in patients with a high vs a low likelihood of having an incidental patent foramen ovale (PFO) as defined by the Risk of Paradoxical Embolism (RoPE) score. METHODS: Patients in the RoPE database with cryptogenic stroke (CS) and PFO were classified as having a probable PFO-related stroke (RoPE score of >6, n = 647) and others (RoPE score of ≤6 points, n = 677). We tested 15 clinical, 5 radiologic, and 3 echocardiographic variables for associations with stroke recurrence using Cox survival models with component database as a stratification factor. An interaction with RoPE score was checked for the variables that were significant. RESULTS: Follow-up was available for 92%, 79%, and 57% at 1, 2, and 3 years. Overall, a higher recurrence risk was associated with an index TIA. For all other predictors, effects were significantly different in the 2 RoPE score categories. For the low RoPE score group, but not the high RoPE score group, older age and antiplatelet (vs warfarin) treatment predicted recurrence. Conversely, echocardiographic features (septal hypermobility and a small shunt) and a prior (clinical) stroke/TIA were significant predictors in the high but not low RoPE score group. CONCLUSION: Predictors of recurrence differ when PFO relatedness is classified by the RoPE score, suggesting that patients with CS and PFO form a heterogeneous group with different stroke mechanisms. Echocardiographic features were only associated with recurrence in the high RoPE score group.
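The Cox survival models with the component database as a stratification factor can be sketched as a stratified partial likelihood. The sketch below is simplified: it handles a single binary covariate, uses hypothetical toy data, and maximizes the likelihood by a crude grid search rather than Newton's method:

```python
import math

def stratified_cox_loglik(data, beta):
    """Breslow partial log-likelihood for a single covariate,
    summed over strata (each stratum keeps its own risk sets)."""
    ll = 0.0
    for stratum in data:
        # walk subjects from latest to earliest time, growing the risk set
        risk = 0.0
        for t, event, x in sorted(stratum, key=lambda s: -s[0]):
            risk += math.exp(beta * x)
            if event:
                ll += beta * x - math.log(risk)
    return ll

def fit_beta(data):
    """Maximize the partial likelihood over a grid of beta values."""
    grid = [i / 100.0 for i in range(-300, 301)]
    return max(grid, key=lambda b: stratified_cox_loglik(data, b))

# Hypothetical (time, event, covariate) triples in two strata,
# mimicking the role of the component database in the study.
data = [
    [(5, 1, 1), (6, 0, 0), (8, 1, 0), (9, 0, 1)],
    [(2, 1, 1), (4, 1, 0), (7, 0, 1), (10, 0, 0)],
]
beta_hat = fit_beta(data)
```

Stratification lets each component database carry its own baseline hazard while the covariate effect is pooled across strata.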
Abstract:
BACKGROUND: Replicative phenotypic HIV resistance testing (rPRT) uses recombinant infectious virus to measure viral replication in the presence of antiretroviral drugs. Owing to its high sensitivity in detecting viral minorities and its power to dissect complex resistance patterns and mixed virus populations, rPRT might help to improve HIV resistance diagnostics, particularly for patients with multiple drug failures. The aim was to investigate whether the addition of rPRT to genotypic resistance testing (GRT), compared to GRT alone, is beneficial for obtaining a virological response in heavily pre-treated HIV-infected patients. METHODS: Patients with resistance tests between 2002 and 2006 were followed within the Swiss HIV Cohort Study (SHCS). We assessed patients' virological success after their antiretroviral therapy was switched following resistance testing. Multilevel logistic regression models with SHCS centre as a random effect were used to investigate the association between the type of resistance test and virological response (HIV-1 RNA <50 copies/mL or ≥1.5 log reduction). RESULTS: Of 1158 individuals with resistance tests, 221 with GRT+rPRT and 937 with GRT were eligible for analysis. Overall virological response rates were 85.1% for GRT+rPRT and 81.4% for GRT. In the subgroup of patients with >2 previous failures, the odds ratio (OR) for virological response of GRT+rPRT compared to GRT was 1.45 (95% CI 1.00-2.09). Multivariate analyses indicate a significant improvement with GRT+rPRT compared to GRT alone (OR 1.68, 95% CI 1.31-2.15). CONCLUSIONS: In heavily pre-treated patients, rPRT-based resistance information adds benefit, contributing to a higher rate of treatment success.
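A multilevel logistic model with SHCS centre as a random effect is beyond a short sketch, but its core, logistic regression of virological success on the type of resistance test, can be illustrated with a two-parameter Newton-Raphson fit. The counts below are hypothetical, chosen only to roughly mirror the reported response rates, and the centre random effect is omitted:

```python
import math

def fit_logistic(xs, ys, iters=25):
    """Logistic regression with intercept b0 and slope b1,
    fitted by Newton-Raphson on the log-likelihood."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            w = p * (1.0 - p)
            g0 += y - p              # gradient, intercept
            g1 += (y - p) * x        # gradient, slope
            h00 += w                 # observed-information entries
            h01 += w * x
            h11 += w * x * x
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Hypothetical outcomes: x = 0 for GRT alone (8/10 responders),
# x = 1 for GRT+rPRT (9/10 responders).
xs = [0] * 10 + [1] * 10
ys = [1] * 8 + [0] * 2 + [1] * 9 + [0] * 1
b0, b1 = fit_logistic(xs, ys)
odds_ratio = math.exp(b1)   # with a single indicator, reproduces the group odds
```

With a single binary covariate, the maximum-likelihood fit exactly reproduces the empirical group odds, so exp(b1) here equals (9/1)/(8/2) = 2.25.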
Abstract:
OBJECTIVES: To examine trends in the prevalence of congenital heart defects (CHDs) in Europe and to compare these trends with the recent decrease in the prevalence of CHDs in Canada (Quebec) that was attributed to the policy of mandatory folic acid fortification. STUDY DESIGN: We used data for the period 1990-2007 for 47 508 cases of CHD not associated with a chromosomal anomaly from 29 population-based European Surveillance of Congenital Anomalies registries in 16 countries covering 7.3 million births. We estimated trends for all CHDs combined and separately for 3 severity groups using random-effects Poisson regression models with splines. RESULTS: We found that the total prevalence of CHDs increased during the 1990s and the early 2000s until 2004 and decreased thereafter. We found essentially no trend in total prevalence of the most severe group (group I), whereas the prevalence of severity group II increased until about 2000 and decreased thereafter. Trends for severity group III (the most prevalent group) paralleled those for all CHDs combined. CONCLUSIONS: The prevalence of CHDs decreased in recent years in Europe in the absence of a policy for mandatory folic acid fortification. One possible explanation for this decrease may be an as-yet-undocumented increase in folic acid intake of women in Europe following recommendations for folic acid supplementation and/or voluntary fortification. However, alternative hypotheses, including reductions in risk factors of CHDs (eg, maternal smoking) and improved management of maternal chronic health conditions (eg, diabetes), must also be considered for explaining the observed decrease in the prevalence of CHDs in Europe or elsewhere.
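The trend estimation above can be illustrated with a simplified version of the study's approach: a plain log-linear Poisson regression of yearly case counts. The random effects and splines of the actual models are omitted, and the counts below are hypothetical:

```python
import math

def fit_poisson_trend(years, counts, iters=50):
    """Log-linear Poisson regression, log(mu) = b0 + b1 * year,
    fitted by Newton-Raphson."""
    b0, b1 = math.log(sum(counts) / len(counts)), 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for t, y in zip(years, counts):
            mu = math.exp(b0 + b1 * t)
            g0 += y - mu
            g1 += (y - mu) * t
            h00 += mu
            h01 += mu * t
            h11 += mu * t * t
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Hypothetical yearly CHD counts, with years centred on the midpoint.
years = [-3, -2, -1, 0, 1, 2, 3]
counts = [95, 100, 103, 108, 112, 118, 121]
b0, b1 = fit_poisson_trend(years, counts)   # b1 > 0: rising prevalence
```

In the actual analysis the linear term is replaced by splines, so that a rise followed by a decline (as reported for total prevalence) can be captured, and registry-level random effects absorb between-registry heterogeneity.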
Abstract:
The integrity of central and peripheral nervous system myelin is affected in numerous lipid metabolism disorders. This vulnerability has so far been attributed mostly to the extraordinarily high level of lipid synthesis required for the formation of myelin, and to the relative autonomy of myelinating glial cells in lipid synthesis, owing to the blood barriers that shield the nervous system from circulating lipids. Recent insights from the analysis of inherited lipid disorders, especially those with prevailing lipid depletion, and from mouse models with glia-specific disruption of lipid metabolism shed new light on this issue. The particular lipid composition of myelin, the transport of lipid-associated myelin proteins, and the necessity for timely assembly of the myelin sheath all contribute to the observed vulnerability of myelin to perturbed lipid metabolism. Furthermore, the uptake of external lipids may also play a role in the formation of myelin membranes. In addition to an improved understanding of basic myelin biology, these data provide a foundation for future therapeutic interventions aiming at preserving glial cell integrity in metabolic disorders.
Abstract:
Understanding how communities of living organisms assemble has been a central question in ecology since the early days of the discipline. Disentangling the different processes involved in community assembly is not only interesting in itself but also crucial for an understanding of how communities will behave under future environmental scenarios. The traditional concept of assembly rules reflects the notion that species do not co-occur randomly but are restricted in their co-occurrence by interspecific competition. This concept can be redefined in a more general framework where the co-occurrence of species is a product of chance, historical patterns of speciation and migration, dispersal, abiotic environmental factors, and biotic interactions, with none of these processes being mutually exclusive. Here we present a survey and meta-analyses of 59 papers that compare observed patterns in plant communities with null models simulating random patterns of species assembly. According to the type of data under study and the different methods that are applied to detect community assembly, we distinguish four main types of approach in the published literature: species co-occurrence, niche limitation, guild proportionality and limiting similarity. Results from our meta-analyses suggest that non-random co-occurrence of plant species is not a widespread phenomenon. However, whether this finding reflects the individualistic nature of plant communities or is caused by methodological shortcomings associated with the studies considered cannot be discerned from the available metadata. We advocate that more thorough surveys be conducted using a set of standardized methods to test for the existence of assembly rules in data sets spanning larger biological and geographical scales than have been considered until now. We underpin this general advice with guidelines that should be considered in future assembly rules research. 
This will enable us to draw more accurate and general conclusions about the non-random aspect of assembly in plant communities.
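Null-model testing of species co-occurrence, as surveyed above, typically compares an observed statistic such as the C-score against randomized community matrices. A minimal sketch follows; the simple row-shuffling null used here is cruder than the fixed-row, fixed-column swap algorithms used in practice, and the matrix is a toy example:

```python
import random

def c_score(m):
    """Stone & Roberts C-score: mean number of checkerboard units
    (r_i - s)(r_j - s) over all species pairs, where s is the
    number of sites the two species share."""
    n = len(m)
    total = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = sum(a and b for a, b in zip(m[i], m[j]))
            total += (sum(m[i]) - s) * (sum(m[j]) - s)
    return total / (n * (n - 1) / 2)

def null_p_value(m, trials=2000, seed=1):
    """Share of null matrices (each species' occurrences shuffled
    across sites, row totals fixed) whose C-score >= observed."""
    rng = random.Random(seed)
    obs = c_score(m)
    hits = 0
    for _ in range(trials):
        shuffled = [row[:] for row in m]
        for row in shuffled:
            rng.shuffle(row)
        if c_score(shuffled) >= obs:
            hits += 1
    return hits / trials

# Toy presence-absence matrix: four species on six sites,
# perfectly segregated into two guilds.
m = [[1, 1, 1, 0, 0, 0],
     [0, 0, 0, 1, 1, 1],
     [1, 1, 1, 0, 0, 0],
     [0, 0, 0, 1, 1, 1]]
p = null_p_value(m)   # segregation far beyond the null expectation
```

A small p-value indicates more segregated co-occurrence than expected by chance, the classical signature of competitive assembly rules; the meta-analysis above suggests such signals are in fact not widespread.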
Abstract:
1. Identifying those areas suitable for recolonization by threatened species is essential to support efficient conservation policies. Habitat suitability models (HSM) predict species' potential distributions, but the quality of their predictions should be carefully assessed when the species-environment equilibrium assumption is violated. 2. We studied the Eurasian otter Lutra lutra, whose numbers are recovering in southern Italy. To produce widely applicable results, we chose standard HSM procedures and assessed the models' capacity to predict the suitability of a recolonization area. We used two fieldwork datasets: presence-only data, used in Ecological Niche Factor Analyses (ENFA), and presence-absence data, used in a Generalized Linear Model (GLM). In addition to cross-validation, we independently evaluated the models with data from a recolonization event, providing presences on a previously unoccupied river. 3. Three of the models successfully predicted the suitability of the recolonization area, but the GLM built with data collected before the recolonization disagreed with these predictions, missing the recolonized river's suitability and describing the otter's niche poorly. Our results highlighted three points of relevance to modelling practices: (1) absences may prevent the models from correctly identifying areas suitable for a species' spread; (2) the selection of variables may lead to randomness in the predictions; and (3) the Area Under the Curve (AUC), a commonly used validation index, was not well suited to the evaluation of model quality, whereas the Boyce Index (CBI), based on presence data only, better highlighted the models' fit to the recolonization observations. 4. For species with unstable spatial distributions, presence-only models may work better than presence-absence methods in making reliable predictions of suitable areas for expansion.
An iterative modelling process, using new occurrences from each step of the species' spread, may also help in progressively reducing errors. 5. Synthesis and applications. Conservation plans depend on reliable models of the species' suitable habitats. In non-equilibrium situations, such as those involving threatened or invasive species, models could be affected negatively by the inclusion of absence data when predicting areas of potential expansion. Presence-only methods will here provide a better basis for productive conservation management practices.
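The AUC criticized above is simply the probability that a randomly chosen presence is scored higher by the model than a randomly chosen absence, which is why it requires absence data, unlike the presence-only Boyce Index. A minimal rank-based sketch with illustrative scores:

```python
def auc(pos_scores, neg_scores):
    """Rank-based AUC: probability that a random presence scores
    higher than a random absence (ties count as 1/2)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Illustrative habitat suitability scores at presence and absence sites.
a = auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2])
```

When the recorded absences are unreliable, as in a non-equilibrium recolonization, this pairwise comparison penalizes models for correctly rating not-yet-occupied but suitable sites, which is the failure mode the abstract describes.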
Abstract:
Low socioeconomic status has been reported to be associated with head and neck cancer risk. However, previous studies have been too small to examine the associations by cancer subsite, age, sex, global region, and calendar time, and to explain the association in terms of behavioural risk factors. Individual participant data of 23,964 cases with head and neck cancer and 31,954 controls from 31 studies in 27 countries were pooled with random-effects models. Overall, low education was associated with an increased risk of head and neck cancer (OR = 2.50; 95% CI 2.02-3.09). Overall, one-third of the increased risk was not explained by differences in the distribution of cigarette smoking and alcohol behaviours, and the risk remained elevated among never users of tobacco and non-drinkers (OR = 1.61; 95% CI 1.13-2.31). A larger part of the estimated education effect remained unexplained by cigarette smoking and alcohol behaviours in women than in men, in older than in younger groups, in the oropharynx than in other sites, and in South/Central America than in Europe/North America, and the effect was strongest in countries with greater income inequality. Similar findings were observed for the estimated effect of low vs high household income. The lowest levels of income and educational attainment were associated with a more than 2-fold increased risk of head and neck cancer, which is not entirely explained by differences in the distributions of behavioural risk factors for these cancers, and which varies across cancer sites, sexes, countries, and country income inequality levels.
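Random-effects pooling of study-level odds ratios is commonly done with the DerSimonian-Laird estimator of between-study variance. A minimal sketch on hypothetical study estimates (note the actual analysis above pooled individual participant data rather than study summaries):

```python
import math

def dersimonian_laird(log_ors, variances):
    """Random-effects pooling of study log odds ratios using the
    DerSimonian-Laird estimate of between-study variance tau^2."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sw
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_ors))
    df = len(log_ors) - 1
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)          # truncated at zero
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w_re, log_ors)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se))

# Hypothetical per-study log odds ratios and their variances.
or_, lo, hi = dersimonian_laird([0.92, 0.80, 1.10, 0.70],
                                [0.04, 0.05, 0.06, 0.03])
```

Relative to fixed-effect pooling, the tau^2 term widens the confidence interval and evens out the study weights when between-study heterogeneity is present.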
Abstract:
INTRODUCTION: Lipoprotein-associated phospholipase A2 (Lp-PLA2) is a circulating enzyme with pro-inflammatory and oxidative activities associated with cardiovascular disease and ischemic stroke. While high plasma Lp-PLA2 activity was reported as a risk factor for dementia in the Rotterdam study, no association between Lp-PLA2 mass and dementia or Alzheimer's disease (AD) was detected in the Framingham study. The objectives of the current study were to explore the relationship of plasma Lp-PLA2 activity with cognitive diagnoses (AD, amnestic mild cognitive impairment (aMCI), and cognitively healthy subjects), cardiovascular markers, cerebrospinal fluid (CSF) markers of AD, and apolipoprotein E (APOE) genotype. METHODS: Subjects with mild AD (n = 78) and aMCI (n = 59) were recruited from the Memory Clinic, University Hospital, Basel, Switzerland; cognitively healthy subjects (n = 66) were recruited from the community. Subjects underwent standardised medical, neurological, neuropsychological, imaging, genetic, blood and CSF evaluation. Differences in Lp-PLA2 activity between the cognitive diagnosis groups were tested with ANOVA and in multiple linear regression models with adjustment for covariates. Associations between Lp-PLA2 and markers of cardiovascular disease and AD were explored with Spearman's correlation coefficients. RESULTS: There was no significant difference in plasma Lp-PLA2 activity between AD (197.1 (standard deviation, SD 38.4) nmol/min/ml) and controls (195.4 (SD 41.9)). Gender, statin use and low-density lipoprotein cholesterol (LDL) were independently associated with Lp-PLA2 activity in multiple regression models. Lp-PLA2 activity was correlated with LDL and inversely correlated with high-density lipoprotein (HDL). AD subjects with APOE-ε4 had higher Lp-PLA2 activity (207.9 (SD 41.2)) than AD subjects lacking APOE-ε4 (181.6 (SD 26.0), P = 0.003), although this was attenuated by adjustment for LDL (P = 0.09).
No strong correlations were detected for Lp-PLA2 activity and CSF markers of AD. CONCLUSION: Plasma Lp-PLA2 was not associated with a diagnosis of AD or aMCI in this cross-sectional study. The main clinical correlates of Lp-PLA2 activity in AD, aMCI and cognitively healthy subjects were variables associated with lipid metabolism.
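The Spearman correlations used above to explore associations between Lp-PLA2 activity and lipid markers are just Pearson correlations computed on mid-ranks. A minimal sketch with tie handling:

```python
def spearman(x, y):
    """Spearman rank correlation: Pearson correlation on mid-ranks,
    with tied values assigned the average of their ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1                      # extend the run of ties
            mid = (i + j) / 2.0 + 1.0       # average rank of the run
            for k in range(i, j + 1):
                r[order[k]] = mid
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

Because it depends only on ranks, the coefficient is robust to skewed biomarker distributions and captures any monotone association, not just linear ones.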
Abstract:
There is a debate on whether an influence of biotic interactions on species distributions can be detected at macro-scale levels. Whereas the influence of biotic interactions on spatial arrangements is beginning to be studied at local scales, similar studies at macro-scale levels are scarce. There is no example disentangling the influence of predator-prey interactions on species distributions at macro-scale levels from other similarities with related species. In this study we aimed to disentangle predator-prey interactions from species distribution data following an experimental approach with a factorial design. As a case study we selected the short-toed eagle because of its known specialization on certain prey reptiles. We used presence-absence data at a 100 km2 spatial resolution to extract the explanatory capacity of different environmental predictors (five abiotic and two biotic predictors) on the short-toed eagle's distribution in Peninsular Spain. Abiotic predictors were relevant climatic and topographic variables, and the biotic predictors were prey richness and forest density. In addition to the short-toed eagle, we also obtained the predictors' explanatory capacities for (i) species of the same family Accipitridae (as references), (ii) other birds of different families (as controls), and (iii) species with randomly selected presences (as null models). We ran 650 models to test for similarities of the short-toed eagle, controls and null models with reference species, assessed by regressions of explanatory capacities. We found higher similarities between the short-toed eagle and other species of the family Accipitridae than for the other two groups. Once corrected for the family effect, our analyses revealed a signal of predator-prey interaction embedded in species distribution data.
This result was corroborated with additional analyses testing for differences in the concordance between the distributions of different bird categories and the distributions of either prey or non-prey species of the short-toed eagle. Our analyses were useful to disentangle a signal of predator-prey interactions from species distribution data at a macro-scale. This study highlights the importance of disentangling specific features from the variation shared with a given taxonomic level.
Abstract:
Objectives To prospectively assess respiratory health in wastewater workers and garbage collectors over 5 years. Methods Exposure, respiratory symptoms and conditions, spirometry and lung-specific proteins were assessed yearly in a cohort of 304 controls, 247 wastewater workers and 52 garbage collectors. Results were analysed with random coefficient models and linear regression taking into account several potential confounders. Results Symptoms, spirometry and lung-specific proteins were not affected by occupational exposure. Conclusions In this population no effects of occupational exposure to bioaerosols were found, probably because of good working conditions.
Abstract:
Regulatory gene networks contain generic modules, like those involving feedback loops, which are essential for the regulation of many biological functions (Guido et al. in Nature 439:856-860, 2006). We consider a class of self-regulated genes which are the building blocks of many regulatory gene networks, and study the steady-state distribution of the associated Gillespie algorithm by providing efficient numerical algorithms. We also study a regulatory gene network of interest in gene therapy, using mean-field models with time delays. Convergence of the related time-nonhomogeneous Markov chain is established for a class of linear catalytic networks with feedback loops.
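The steady-state behaviour studied above can be sketched with a Gillespie simulation of a simple self-repressed gene, the basic negative-feedback building block the abstract refers to. The rate constants below are illustrative, not taken from the cited work:

```python
import random

def gillespie_self_regulated(t_max, seed=0):
    """Exact stochastic simulation (Gillespie) of a self-repressed
    gene: the protein count X is produced at rate k / (1 + X)
    (negative feedback) and degraded at rate g * X."""
    k, g = 20.0, 1.0                       # illustrative rate constants
    rng = random.Random(seed)
    t, x = 0.0, 0
    while t < t_max:
        birth = k / (1.0 + x)
        death = g * x
        total = birth + death
        t += rng.expovariate(total)        # exponential waiting time
        if rng.random() < birth / total:   # pick which reaction fires
            x += 1
        else:
            x -= 1
    return x

# Sample the (near-)steady-state distribution over many runs.
samples = [gillespie_self_regulated(50.0, seed=s) for s in range(200)]
mean_x = sum(samples) / len(samples)
```

The sampled mean settles near the deterministic balance point k/(1+x) = g*x (about 4 for these constants); the full steady-state distribution around it is what the numerical algorithms mentioned in the abstract compute directly.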
Abstract:
Depth-averaged velocities and unit discharges within a 30 km reach of one of the world's largest rivers, the Rio Parana, Argentina, were simulated using three hydrodynamic models with different process representations: a reduced-complexity (RC) model that neglects most of the physics governing fluid flow, a two-dimensional model based on the shallow water equations, and a three-dimensional model based on the Reynolds-averaged Navier-Stokes equations. Flow characteristics simulated using all three models were compared with data obtained by acoustic Doppler current profiler surveys at four cross sections within the study reach. This analysis demonstrates that, surprisingly, the performance of the RC model is generally equal to, and in some instances better than, that of the physics-based models in terms of the statistical agreement between simulated and measured flow properties. In addition, in contrast to previous applications of RC models, the present study demonstrates that the RC model can successfully predict measured flow velocities. The strong performance of the RC model reflects, in part, the simplicity of the depth-averaged mean flow patterns within the study reach and the dominant role of channel-scale topographic features in controlling the flow dynamics. Moreover, the very low water surface slopes that typify large sand-bed rivers enable flow depths to be estimated reliably in the RC model using a simple fixed-lid planar water surface approximation. This approach overcomes a major problem encountered in the application of RC models in environments characterised by shallow flows and steep bed gradients. The RC model is four orders of magnitude faster than the physics-based models when performing steady-state hydrodynamic calculations. However, the iterative nature of the RC model calculations implies a reduction in computational efficiency relative to some other RC models.
A further implication of this is that, if used to simulate channel morphodynamics, the present RC model may offer only a marginal advantage in terms of computational efficiency over approaches based on the shallow water equations. These observations illustrate the trade-off between model realism and efficiency that is a key consideration in RC modelling. Moreover, this outcome highlights a need to rethink the use of RC morphodynamic models in fluvial geomorphology and to move away from existing grid-based approaches, such as the popular cellular automata (CA) models, that remain essentially reductionist in nature. In the case of the world's largest sand-bed rivers, this might be achieved by implementing the RC model outlined here as one element within a hierarchical modelling framework that would enable computationally efficient simulation of the morphodynamics of large rivers over millennial time scales.
Abstract:
Dendritic cells (DCs) are the most potent antigen-presenting cells in the human lung and are now recognized as crucial initiators of immune responses in general. They are arranged as sentinels in a dense surveillance network inside and below the epithelium of the airways and alveoli, where they are ideally situated to sample inhaled antigen. DCs are known to play a pivotal role in maintaining the balance between tolerance and active immune response in the respiratory system. It is no surprise that the lungs became a main focus of DC-related investigations, as this organ provides a large interface for interactions of inhaled antigens with the human body. During recent years there has been a constantly growing body of lung DC-related publications that draw their data from in vitro models, animal models and human studies. This review focuses on the biology and functions of different DC populations in the lung and highlights the advantages and drawbacks of different models with which to study the role of lung DCs. Furthermore, we present a number of up-to-date visualization techniques to characterize DC-related cell interactions in vitro and/or in vivo.