32 results for Unstructured Grids
Abstract:
An important question in biological evolution is whether certain changes along protein-coding genes have contributed to the adaptation of species. This problem is biologically complex and computationally very expensive, and it therefore requires efficient Grid or cluster solutions to overcome the computational challenge. We have developed a Grid-enabled tool (gcodeml) that relies on the PAML (codeml) package to help analyse large phylogenetic datasets on both Grids and computational clusters. Although we report on results for gcodeml, our approach is applicable and customisable to related problems in biology or other scientific domains.
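The computational pattern gcodeml exploits is embarrassingly parallel: each gene alignment can be analysed by an independent codeml run. The sketch below illustrates that fan-out on a single multi-core cluster node using only the Python standard library; the directory layout (alignments/<gene>/codeml.ctl), the worker count and the assumption that codeml is on the PATH are hypothetical, and a real Grid deployment would submit each job through the Grid middleware instead.

```python
# Minimal sketch of fanning out independent codeml analyses, in the spirit of
# gcodeml's per-alignment decomposition. Paths, worker count and directory layout
# are hypothetical; on a real Grid each job would go through the job manager.
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def run_codeml(ctl_file: Path) -> int:
    """Run one codeml analysis; codeml reads all its settings from a control file."""
    result = subprocess.run(["codeml", ctl_file.name],
                            cwd=ctl_file.parent,   # codeml resolves paths relative to the CWD
                            capture_output=True, text=True)
    return result.returncode

if __name__ == "__main__":
    ctl_files = sorted(Path("alignments").glob("*/codeml.ctl"))  # one sub-directory per gene
    with ProcessPoolExecutor(max_workers=8) as pool:
        for ctl, rc in zip(ctl_files, pool.map(run_codeml, ctl_files)):
            print(f"{ctl.parent.name}: {'ok' if rc == 0 else f'failed ({rc})'}")
```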
Abstract:
The object of game theory lies in the analysis of situations where different social actors have conflicting requirements and where their individual decisions will all influence the global outcome. In this framework, several games have been invented to capture the essence of various dilemmas encountered in many common and important socio-economic situations. Even though these games often succeed in helping us understand human or animal behavior in interactive settings, some experiments have shown that people tend to cooperate with each other in situations for which classical game theory strongly recommends the exact opposite. Several mechanisms have been invoked to try to explain the emergence of this unexpected cooperative attitude. Among them, repeated interaction, reputation, and belonging to a recognizable group have often been mentioned. However, the work of Nowak and May (1992) showed that the simple fact of arranging the players according to a spatial structure and only allowing them to interact with their immediate neighbors is sufficient to sustain a certain amount of cooperation even when the game is played anonymously and without repetition. Nowak and May's study and much of the following work were based on regular structures such as two-dimensional grids. Axelrod et al. (2002) showed that by randomizing the choice of neighbors, i.e., by actually giving up a strictly local geographical structure, cooperation can still emerge, provided that the interaction patterns remain stable in time. This is a first step towards a social network structure. However, following pioneering work by sociologists in the sixties, such as that of Milgram (1967), it has become apparent in the last few years that many social and biological interaction networks, and even some technological networks, have particular, and partly unexpected, properties that set them apart from regular or random graphs. Among other things, they usually display broad degree distributions and show a small-world topological structure. Roughly speaking, a small-world graph is a network where any individual is relatively close, in terms of social ties, to any other individual, a property also found in random graphs but not in regular lattices. However, in contrast with random graphs, small-world networks also have a certain amount of local structure, as measured, for instance, by a quantity called the clustering coefficient. In the same vein, many real conflict situations in economics and sociology are well described neither by a fixed geographical position of the individuals in a regular lattice nor by a random graph. Furthermore, it is well known that network structure can strongly influence dynamical phenomena such as the way diseases spread across a population and the way ideas or information are transmitted. Therefore, in the last decade, research attention has naturally shifted from random and regular graphs towards better models of social interaction structures. The primary goal of this work is to discover whether or not the underlying graph structure of real social networks can help explain why one finds higher levels of cooperation in populations of human beings or animals than classical game theory prescribes. To meet this objective, I start by thoroughly studying a real scientific coauthorship network and showing how it differs from biological or technological networks, using diverse statistical measurements.
Furthermore, I extract and describe its community structure, taking into account the intensity of collaborations. Finally, I investigate the temporal evolution of the network, from its inception to its state at the time of the study in 2006, also suggesting an effective view of it as opposed to a historical one. Thereafter, I combine evolutionary game theory with several network models, as well as with the studied coauthorship network, in order to highlight which specific network properties foster cooperation and to shed some light on the various mechanisms responsible for maintaining it. I point out that, to resist defection, cooperators take advantage, whenever possible, of the degree heterogeneity of social networks and of their underlying community structure. Finally, I show that the level and stability of cooperation depend not only on the game played, but also on the evolutionary dynamics rules used and on how individual payoffs are calculated.
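Since the abstract above turns on simulating evolutionary games over network models, a compact illustration may help. The sketch below plays the Prisoner's Dilemma on a Watts-Strogatz small-world graph with a synchronous imitate-the-best update, in the spirit of the Nowak-May style experiments described above; it assumes the networkx package, and the payoff values, graph size and number of rounds are illustrative choices, not those used in the thesis.

```python
# Agents on a small-world graph either cooperate (C) or defect (D), accumulate
# payoffs against their neighbours, then copy the strategy of the best-scoring
# neighbour (or keep their own if it scored best). Parameters are illustrative.
import random
import networkx as nx

R, S, T, P = 1.0, 0.0, 1.4, 0.1      # Prisoner's Dilemma: T > R > P > S
PAYOFF = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}

def play_round(G, strategy):
    payoff = {n: sum(PAYOFF[(strategy[n], strategy[m])] for m in G[n]) for n in G}
    return {n: strategy[max(list(G[n]) + [n], key=lambda m: payoff[m])] for n in G}

random.seed(0)
G = nx.watts_strogatz_graph(n=500, k=6, p=0.05)   # small-world interaction substrate
strategy = {n: random.choice("CD") for n in G}
for _ in range(50):
    strategy = play_round(G, strategy)
print("final fraction of cooperators:",
      sum(s == "C" for s in strategy.values()) / G.number_of_nodes())
```

Swapping the graph generator for a scale-free or community-structured model is how effects of degree heterogeneity and community structure of the kind discussed above can be probed.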
Abstract:
An equation is applied to calculate the expected persistence time of an unstructured population of the white-toothed shrew Crocidura russula from Preverenges, a suburban area in western Switzerland. Population abundance data from March and November between 1977 and 1988 were fitted to the logistic density-dependence model to estimate the mean population growth rate as a function of population density. The variance in the mean growth rate was approximated with two different models. The largest estimated persistence time was less than a few decades; the smallest, less than 10 years. The results are sensitive to the magnitude of the variance in population growth rate. Deviations from the logistic density-dependence model in November are quite well explained by weather variables, but those in March are uncorrelated with weather variables. Variability in population growth rates measured in winter months may be better explained by behavioural mechanisms. Environmental variability, dispersal of juveniles and refugia within the range of the population may contribute to its long-term survival.
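As a concrete illustration of the logistic density-dependence fit mentioned above, the sketch below regresses the realised per-capita growth rate r_t = ln(N_{t+1}/N_t) on abundance N_t: the intercept estimates the maximum growth rate and the zero crossing the carrying capacity, while the residual variance is the kind of growth-rate variability the persistence-time estimate is sensitive to. The abundance series is invented for illustration and is not the Crocidura russula data.

```python
# Fit r(N) = r_max * (1 - N/K) by linear regression of realised growth rates on abundance.
import numpy as np

N = np.array([42, 55, 61, 48, 70, 66, 52, 59, 73, 64, 50, 57], dtype=float)  # hypothetical counts
r = np.log(N[1:] / N[:-1])                  # realised per-capita growth rates
slope, intercept = np.polyfit(N[:-1], r, 1) # r(N) = intercept + slope * N
r_max = intercept
K = -intercept / slope                      # density at which growth vanishes
print(f"r_max = {r_max:.3f}, K = {K:.1f}")
print("residual variance (growth-rate variability):",
      round(float(np.var(r - (intercept + slope * N[:-1]))), 4))
```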
Abstract:
Background: Fine particulate matter originating from traffic correlates with increased morbidity and mortality. An important source of traffic particles is brake wear from cars, which contributes up to 20% of total traffic emissions. The aim of this study was to evaluate the potential toxicological effects of freshly generated brake wear particles on human epithelial lung cells. Results: An exposure box was mounted around a car's braking system. Lung cells cultured at the air-liquid interface were then exposed to particles emitted during two typical braking behaviours ("full stop" and "normal deceleration"). The particle size distribution and the brake emission components, such as metals and carbons, were measured on-line, and the particles deposited on grids for transmission electron microscopy were counted. The tight junction arrangement was observed by laser scanning microscopy. Cellular responses were assessed by measuring lactate dehydrogenase release (cytotoxicity), the production of reactive oxidative species and the release of the pro-inflammatory mediator interleukin-8. The density of the tight junction protein occludin decreased significantly (p < 0.05) with increasing concentrations of metals on the particles (iron, copper and manganese, which were all strongly correlated with each other). Occludin was also negatively correlated with the intensity of reactive oxidative species. The concentrations of interleukin-8 were significantly correlated with increasing organic carbon concentrations. No correlation was observed between occludin and interleukin-8, or between reactive oxidative species and interleukin-8. Conclusion: These findings suggest that the metals on brake wear particles damage tight junctions through a mechanism involving oxidative stress. Brake wear particles also increase pro-inflammatory responses; however, this may occur through a mechanism other than oxidative stress.
Abstract:
There is no doubt about the necessity of protecting digital communication: citizens are entrusting their most confidential and sensitive data to digital processing and communication, and so do governments, corporations, and armed forces. Digital communication networks are also an integral component of many critical infrastructures on which we depend in our daily lives. Transportation services, financial services, energy grids, and food production and distribution networks are only a few examples of such infrastructures. Protecting digital communication means protecting confidentiality and integrity by encrypting and authenticating its contents. But most digital communication is not secure today. Nevertheless, some of the most pressing problems could be solved with a more stringent use of current cryptographic technologies. Quite surprisingly, a new cryptographic primitive emerges from the application of quantum mechanics to information and communication theory: Quantum Key Distribution. QKD is difficult to understand; it is complex, technically challenging, and costly. Yet it enables two parties to share a secret key for use in any subsequent cryptographic task, with unprecedented long-term security. It is disputed whether technically and economically feasible applications can be found. Our vision is that, despite technical difficulty and inherent limitations, Quantum Key Distribution has great potential and fits well with other cryptographic primitives, enabling the development of highly secure new applications and services. In this thesis we take a structured approach to analyzing the practical applicability of QKD and present several use cases of different complexity for which it can be a technology of choice, either because of its unique forward-security features or because of its practicability.
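As a small illustration of the claim that a QKD-delivered key can feed "any subsequent cryptographic task", the sketch below uses such a key with a conventional authenticated cipher (AES-GCM from the cryptography package). The QKD stage itself is stood in for by os.urandom, and the variable names and message are hypothetical; a real deployment would also authenticate the classical channel and rotate keys.

```python
# A QKD link delivers an identical secret key to both parties; here that key simply
# drives a standard authenticated cipher. The key material is simulated.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

qkd_key = os.urandom(32)     # stand-in for a 256-bit key delivered by the QKD layer
nonce = os.urandom(12)       # never reuse a nonce with the same key

aead = AESGCM(qkd_key)
ciphertext = aead.encrypt(nonce, b"grid control command #42", b"header")
plaintext = aead.decrypt(nonce, ciphertext, b"header")
assert plaintext == b"grid control command #42"
print("round trip ok,", len(ciphertext), "bytes of ciphertext")
```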
Abstract:
Asbestos is an industrial term describing certain fibrous silicate minerals belonging to the amphibole or serpentine groups. Six minerals are defined as asbestos, but only in their fibrous form: chrysotile (white asbestos), amosite (grunerite, brown asbestos), crocidolite (riebeckite, blue asbestos), anthophyllite, tremolite and actinolite. In 1973, the IARC (International Agency for Research on Cancer) classified the asbestos minerals as carcinogenic substances (IARC, 1973). The Swiss threshold limit (VME) is 0.01 fibre/ml (SUVA, 2007). Asbestos has been prohibited in Switzerland since 1990, but this does not mean the asbestos problem is behind us. Up to 20'000 tonnes/year of asbestos were imported between the end of WWII and 1990, and all this asbestos is still present in buildings renovated or built during that period. During renovations, asbestos fibres can be emitted into the air, and this emission has to be quantified accurately. Defining the exact risk to workers or to the population is difficult, as many factors must be considered. The methods for detecting asbestos in the air or in materials are still being discussed today. Even though the EPA 600 method (EPA, 1993) has proved itself for the analysis of bulk materials, air analysis is more problematic. In Switzerland, the recommended method is VDI 3492, which uses scanning electron microscopy (SEM), but we have encountered many identification problems with this method; for instance, overloaded filters or long-term exposed filters cannot be analysed. This is why the Institute for Work and Health (IST) has adapted the ISO 10312 method: ambient air - determination of asbestos fibres - direct-transfer transmission electron microscopy (TEM) method (ISO, 1995). Quality controls have already been carried out at a French institute (INRS), which validates our practical experience. The direct transfer from MEC filters onto TEM supports (grids) is a delicate part of the preparation for analysis and requires many trials in the laboratory; IST managed to produce proper grid preparations after about two years of development. In addition to sample preparation, micro-analysis (EDX), micro-diffraction and morphological analysis (figure 1.a-c) must also be mastered. These are the three elements that establish the different features of asbestos identification, and SEM cannot combine these three analyses. TEM can also distinguish artificial from natural fibres with very similar chemical compositions, as well as differentiate the types of asbestos. Finally, the experiments conducted by IST show that TEM is the best method to quantify and identify asbestos in the air.
Abstract:
Density-driven instabilities in porous media are of interest for a wide range of applications, for instance geological sequestration of CO2, during which CO2 is injected at high pressure into deep saline aquifers. Due to the density difference between the CO2-saturated brine and the surrounding brine, a downward migration of CO2 into deeper regions, where the risk of leakage is reduced, takes place. Similarly, undesired spontaneous mobilization of potentially hazardous substances that might endanger groundwater quality can be triggered by density differences. Over the last years, these effects have been investigated with the help of numerical groundwater models. Major challenges in simulating density-driven instabilities arise from the different scales of interest involved, i.e., the scale at which instabilities are triggered and the aquifer scale over which long-term processes take place. An accurate numerical reproduction is possible only if the finest scale is captured. For large aquifers, this leads to problems with a large number of unknowns. Advanced numerical methods are required to efficiently solve these problems with today's available computational resources. Besides efficient iterative solvers, multiscale methods are available to solve large numerical systems. Originally, multiscale methods were developed as upscaling-downscaling techniques to resolve strong permeability contrasts. In this case, two static grids are used: one is chosen with respect to the resolution of the permeability field (fine grid); the other (coarse grid) is used to approximate the fine-scale problem at low computational cost. The quality of the multiscale solution can be iteratively improved to avoid large errors in the case of complex permeability structures. Adaptive formulations, which restrict the iterative update to domains with large gradients, limit the additional computational cost of the iterations. In the case of density-driven instabilities, additional spatial scales appear which change with time. Flexible adaptive methods are required to account for these emerging dynamic scales. The objective of this work is to develop an adaptive multiscale formulation for the efficient and accurate simulation of density-driven instabilities. We consider the Multiscale Finite-Volume (MsFV) method, which is well suited for simulations involving the solution of transport problems, as it guarantees a conservative velocity field. In the first part of this thesis, we investigate the applicability of the standard MsFV method to density-driven flow problems. We demonstrate that approximations in MsFV may trigger unphysical fingers, and iterative corrections are necessary. Adaptive formulations (e.g., limiting a refined solution to domains with large concentration gradients where fingers form) can be used to balance the extra costs.
We also propose to use the MsFV method as a downscaling technique: the coarse discretization is used in areas without significant change in the flow field, whereas the problem is refined in the zones of interest. This enables accounting for the dynamic change in the scales of density-driven instabilities. In the second part of the thesis, the MsFV algorithm, which originally employs one coarse level, is extended to an arbitrary number of coarse levels. We prove that this keeps the MsFV method efficient for problems with a large number of unknowns. In the last part of this thesis, we focus on the scales that control the evolution of density fingers. The identification of local and global flow patterns allows a coarse description at late times while conserving fine-scale details during the onset stage. The results presented in this work advance the understanding of the Multiscale Finite-Volume method and offer efficient dynamic multiscale formulations to simulate density-driven instabilities. - Aquifers characterized by porous structures and highly permeable fractures are of particular interest to hydrogeologists and environmental engineers. In these media, a wide variety of flows can be observed; the most common are the transport of contaminants by groundwater, reactive transport, and the simultaneous flow of several immiscible phases, such as oil and water. The scale that characterizes these flows is defined by the interaction of geological heterogeneity with physical processes. A fluid at rest in the pore space of a porous medium can be destabilized by density gradients, which may be induced by local changes in temperature or by the dissolution of a chemical compound. Density-driven instabilities are of particular interest because they can potentially compromise water quality; a striking example is the salinization of fresh groundwater by the penetration of denser salt water into deep regions. For flows governed by density gradients, the characteristic scales range from the pore scale, at which instabilities grow, to the aquifer scale, over which long-term phenomena take place. Since in-situ investigations are practically impossible, numerical models are used to predict and assess the risks associated with density-driven instabilities. A correct description of these phenomena requires resolving all scales of the flow, which can span eight to ten orders of magnitude for large aquifers. This results in very large numerical problems that are costly to solve. Sophisticated numerical schemes are therefore necessary to perform accurate large-scale simulations of hydrodynamic instabilities. In this work, we present several numerical methods that allow density-driven instabilities to be simulated efficiently and accurately. These new methods are based on multiscale finite volumes: the idea is to project the original problem onto a coarser scale, where it is cheaper to solve, and then to lift the coarse solution back to the original scale.
This technique is particularly well suited to problems in which a wide range of scales is involved and evolves in space and time. It reduces computational costs by limiting the detailed description of the problem to regions that contain a moving concentration front. The outcomes are illustrated by simulations of phenomena such as salt-water intrusion and carbon dioxide sequestration.
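The core multiscale idea described above, projecting the problem onto a coarse grid, solving cheaply there, and mapping the result back to the fine grid with iterative improvement, can be illustrated with a generic two-grid iteration. The sketch below is not the MsFV method itself (which builds local basis functions and a flux-reconstruction step); it only shows the interplay of coarse correction and fine-scale smoothing on a 1D model problem with invented sizes.

```python
# Generic two-level iteration on a 1D "pressure" problem: coarse aggregation grid,
# piecewise-constant prolongation, damped-Jacobi smoothing plus coarse correction.
import numpy as np

n_fine, agg = 64, 8                               # 64 fine cells, aggregated 8-to-1
A = (np.diag(2.0 * np.ones(n_fine))
     - np.diag(np.ones(n_fine - 1), 1)
     - np.diag(np.ones(n_fine - 1), -1))          # 1D Laplacian (Dirichlet ends)
b = np.ones(n_fine)

P = np.zeros((n_fine, n_fine // agg))             # prolongation: piecewise-constant basis
for j in range(n_fine // agg):
    P[j * agg:(j + 1) * agg, j] = 1.0
R = P.T                                           # restriction by aggregation
A_c = R @ A @ P                                   # coarse-scale operator

x = P @ np.linalg.solve(A_c, R @ b)               # initial coarse approximation, lifted back
D_inv = 1.0 / np.diag(A)
for _ in range(30):
    x += 0.7 * D_inv * (b - A @ x)                # cheap fine-scale smoothing (damped Jacobi)
    x += P @ np.linalg.solve(A_c, R @ (b - A @ x))  # coarse-grid correction of the residual
print("relative residual after 30 two-level cycles:",
      np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```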
Abstract:
α-Synuclein aggregation and accumulation in Lewy bodies are implicated in the progressive loss of dopaminergic neurons in Parkinson disease and related disorders. In neurons, the Hsp70s and their Hsp40-like J-domain co-chaperones are the only known components of the chaperone network that can use ATP to convert cytotoxic protein aggregates into harmless, natively refolded polypeptides. Here we developed a protocol for preparing a homogeneous population of highly stable, β-sheet-enriched, toroid-shaped α-Syn oligomers with a diameter typical of toxic pore-forming oligomers. These oligomers were partially resistant to in vitro unfolding by the bacterial Hsp70 chaperone system (DnaK, DnaJ, GrpE). Moreover, both the bacterial and the human Hsp70/Hsp40 unfolding/refolding activities on model chaperone substrates were strongly inhibited by the oligomers but, remarkably, not by unstructured α-Syn monomers, even in large excess. The oligomers acted as specific competitive inhibitors of the J-domain co-chaperones, indicating that J-domain co-chaperones may preferentially bind to exposed bulky misfolded structures in misfolded proteins and thus complement Hsp70s, which bind to extended segments. Together, our findings suggest that inhibition of the Hsp70/Hsp40 chaperone system by α-Syn oligomers may contribute to the disruption of protein homeostasis in dopaminergic neurons, leading to apoptosis and tissue loss in Parkinson disease and related neurodegenerative diseases.
Abstract:
Skeletal muscle mitochondrial (Mito) and lipid droplet (Lipid) content are often measured in human translational studies. Stereological point counting allows Mito and Lipid volume density (Vd) to be computed from micrographs taken with transmission electron microscopes. Former studies are not specific as to the size of the individual squares that make up the grids, making reproducibility difficult, particularly when different magnifications are used. Our objective was to determine which grid size would best predict fractional volume efficiently without sacrificing reliability, and to test a novel method to reduce sampling bias. Methods: Ten subjects underwent vastus lateralis biopsies. Samples were fixed, embedded, and cut longitudinally in ultrathin sections of 60 nm. Twenty micrographs from the intramyofibrillar region were taken per subject at ×33,000 magnification. Different grid sizes were superimposed on each micrograph: 1,000 × 1,000 nm, 500 × 500 nm, and 250 × 250 nm. Results: Mean Mito and Lipid Vd were not statistically different across grids. Variability was greater when going from the 1,000 × 1,000 nm grid to the 500 × 500 nm grid than from the 500 × 500 nm grid to the 250 × 250 nm grid. Discussion: This study is the first to attempt to standardize grid size while keeping with conventional stereology principles, in the hope of producing replicable assessments that can be obtained universally across different studies of human skeletal muscle mitochondrial and lipid droplet content.
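For readers unfamiliar with stereological point counting, the sketch below shows the underlying arithmetic: a square lattice of test points with a chosen spacing is overlaid on a binary segmentation, and the volume density Vd is estimated as the fraction of points hitting the structure. The mask is randomly generated and the pixel size is assumed, purely for illustration; the three grid spacings mirror those compared in the study, and randomly offsetting the lattice between micrographs is one simple way to reduce sampling bias.

```python
# Point-counting estimate of volume density (Vd) for several test-grid spacings.
import numpy as np

rng = np.random.default_rng(1)
mask = rng.random((2048, 2048)) < 0.08            # hypothetical binary mask: True = mitochondrion
nm_per_px = 2.0                                    # assumed pixel size at high magnification

def point_count_vd(mask, spacing_nm, nm_per_px, offset=(0, 0)):
    """Fraction of lattice points (given spacing in nm) falling on the structure."""
    step = max(1, int(round(spacing_nm / nm_per_px)))
    pts = mask[offset[0]::step, offset[1]::step]   # randomize offset per micrograph to reduce bias
    return pts.mean(), pts.size

for spacing in (1000, 500, 250):                   # grid sizes compared in the study
    vd, n_pts = point_count_vd(mask, spacing, nm_per_px)
    print(f"{spacing} x {spacing} nm grid: Vd = {vd:.3f} from {n_pts} test points")
```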
Abstract:
The SPP1-encoded replicative DNA helicase, the gene 40 product (G40P), is essential for phage replication. Hexameric G40P, in the presence of AMP-PNP, preferentially binds unstructured single-stranded (ss)DNA in a sequence-independent manner. The efficiency of ssDNA binding, nucleotide hydrolysis and the unwinding activity of G40P are affected in different ways by different nucleotide cofactors. Nuclease protection studies suggest that G40P protects the 5' tail of a forked molecule, and the duplex region at the junction, against exonuclease attack, but does not protect the 3' tail of a forked molecule. Using electron microscopy, we confirm that the ssDNA traverses the centre of the hexameric ring. Our results show that hexameric G40P DNA helicase encircles the 5' tail, interacts with the duplex DNA at the single-stranded-double-stranded DNA junction and excludes the 3' tail of the forked DNA.
Abstract:
In mammography, the image contrast and the dose delivered to the patient are determined by the x-ray spectrum and by the scatter-to-primary ratio S/P. The quality of the mammographic procedure is therefore highly dependent on the choice of anode and filter material and on the method used to reduce the amount of scattered radiation reaching the detector. Synchrotron radiation is a useful tool for studying the effect of beam energy on the optimization of the mammographic process because it delivers a high flux of monochromatic photons. Moreover, because the beam is naturally flat-collimated in one direction, a slot can be used instead of a grid for scatter reduction. We have measured the ratio S/P and the transmission factors of grids and slots for monoenergetic synchrotron radiation. In this way the effects of beam energy and of the scatter rejection method were separated, and their respective importance for image quality and dose was analyzed. Our results show that conventional mammographic spectra are not far from optimum and that the use of a slot instead of a grid has an important effect on the optimization of the mammographic process. We propose a simple numerical model to quantify this effect.
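The standard textbook relation behind scatter degradation, contrast C = C0 / (1 + S/P), with the effective S/P rescaled by the scatter and primary transmission factors of a grid or slot, is sketched below. The numerical values are placeholders rather than the measured synchrotron data, and this simple relation is not necessarily the numerical model the authors propose.

```python
# Contrast degradation by scatter and its modification by a scatter-rejection device.
def contrast_with_scatter(c0, sp_ratio, t_primary=1.0, t_scatter=1.0):
    """Contrast after scatter rejection, given primary/scatter transmission factors."""
    sp_eff = sp_ratio * t_scatter / t_primary
    return c0 / (1.0 + sp_eff), sp_eff

C0, SP = 0.12, 0.9                                   # hypothetical primary contrast and S/P
for label, tp, ts in (("no rejection", 1.00, 1.00),
                      ("anti-scatter grid", 0.70, 0.15),
                      ("slot (slit scan)", 0.95, 0.05)):
    c, sp_eff = contrast_with_scatter(C0, SP, tp, ts)
    # dose penalty shown is only the part due to primary photons absorbed by the device
    print(f"{label:18s} effective S/P = {sp_eff:.2f}  contrast = {c:.3f}"
          f"  relative dose from primary loss = {1.0 / tp:.2f}")
```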
Abstract:
BACKGROUND: The evaluation of syncope often remains unstructured. The aim of this study was to assess the effectiveness of a standardized protocol designed to improve the diagnosis of syncope. METHODS: Consecutive patients with syncope presenting to the emergency departments of two primary and tertiary care hospitals over a period of 18 months underwent a two-phase evaluation including: 1) noninvasive assessment (phase I); and 2) specialized tests (phase II) if syncope remained unexplained after phase I. During phase II, the evaluation strategy was alternately left to the physicians in charge of the patients (control) or guided by a standardized protocol relying on cardiac status and the frequency of events (intervention). The primary outcomes were the diagnostic yield of each phase and the impact of the intervention (phase II) measured by multivariable analysis. RESULTS: Among 1725 patients with syncope, 1579 (92%) entered phase I, which established a diagnosis in 1061 (67%) of them, mainly reflex causes and orthostatic hypotension. Five hundred eighteen patients (33%) were considered to have unexplained syncope, and 363 (70%) of them entered phase II. A cause for syncope was found in 67 (38%) of 174 patients during intervention periods, compared with 18 (9%) of 189 during control periods (p<0.001). Compared with control periods, the intervention permitted diagnosing more cardiac (8% vs 3%, p=0.04) and reflex syncope (25% vs 6%, p<0.001), and increased the odds of identifying a cause for syncope by a factor of 4.5 (95% CI: 2.6-8.7, p<0.001). Overall, adding the diagnostic yields obtained during phase I and phase II (intervention periods) established the cause of syncope in 76% of patients. CONCLUSION: Application of a standardized diagnostic protocol in patients with syncope improved the likelihood of identifying a cause for this symptom. Future trials should assess the efficacy of diagnosis-specific therapy.
Abstract:
The multiscale finite-volume (MSFV) method has been derived to efficiently solve large problems with spatially varying coefficients. The fine-scale problem is subdivided into local problems that can be solved separately and are coupled by a global problem. The algorithm therefore shares some characteristics with two-level domain decomposition (DD) methods. However, the MSFV algorithm differs in that it incorporates a flux reconstruction step, which delivers a fine-scale mass-conservative flux field without the need for iteration. This is achieved by the use of two overlapping coarse grids. The recently introduced correction function allows for a consistent handling of source terms, which makes the MSFV method a flexible algorithm applicable to a wide spectrum of problems. It is demonstrated that the MSFV operator, used to compute an approximate pressure solution, can be constructed equivalently by writing the Schur complement with a tangential approximation of a single-cell overlapping grid and incorporating appropriate coarse-scale mass-balance equations.
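For readers less familiar with the Schur-complement view invoked above, the sketch below shows the generic algebraic construction: with the unknowns split into fine (f) and coarse (c) blocks, the reduced coarse operator is S = A_cc - A_cf A_ff^{-1} A_fc, and eliminating the fine block gives a small system for the coarse unknowns. This is only the exact elimination on a random SPD test matrix; the MSFV operator discussed above replaces the exact inverse of A_ff by a localized (tangential) approximation and adds coarse-scale mass-balance equations.

```python
# Exact Schur-complement reduction of a block system, verified against a direct solve.
import numpy as np

rng = np.random.default_rng(0)
n_f, n_c = 12, 4
A = rng.random((n_f + n_c, n_f + n_c))
A = A + A.T + (n_f + n_c) * np.eye(n_f + n_c)      # symmetric positive-definite test matrix
b = rng.random(n_f + n_c)

A_ff, A_fc = A[:n_f, :n_f], A[:n_f, n_f:]
A_cf, A_cc = A[n_f:, :n_f], A[n_f:, n_f:]
b_f, b_c = b[:n_f], b[n_f:]

S = A_cc - A_cf @ np.linalg.solve(A_ff, A_fc)      # Schur complement (coarse operator)
rhs_c = b_c - A_cf @ np.linalg.solve(A_ff, b_f)
x_c = np.linalg.solve(S, rhs_c)                    # coarse unknowns
x_f = np.linalg.solve(A_ff, b_f - A_fc @ x_c)      # back-substitution for fine unknowns

x_full = np.linalg.solve(A, b)                     # verify against the monolithic solve
print("max error vs. direct solve:", np.abs(np.concatenate([x_f, x_c]) - x_full).max())
```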
Abstract:
Surface-based ground penetrating radar (GPR) and electrical resistance tomography (ERT) are common tools for aquifer characterization because both methods provide data that are sensitive to hydrogeologically relevant quantities. To retrieve bulk subsurface properties at high resolution, we suggest incorporating structural information derived from GPR reflection data when inverting surface ERT data. This reduces resolution limitations, which might otherwise hinder quantitative interpretation. Surface-based GPR reflection and ERT data were recorded on an exposed gravel bar within a restored section of a previously channelized river in northeastern Switzerland to characterize the underlying gravel aquifer. The GPR reflection data, acquired over an area of 240×40 m, map the aquifer's thickness and two internal sub-horizontal regions with different depositional patterns. The interface between these two regions and the boundary of the aquifer with the underlying clay are incorporated in an unstructured ERT mesh, and subsequent inversions are performed without applying smoothness constraints across these boundaries. Inversion models obtained using these structural constraints contain subtle resistivity variations within the aquifer that are hardly visible in standard inversion models as a result of strong vertical smearing in the latter. In the upper aquifer region, with high GPR coherency and horizontal layering, the resistivity is moderately high (>300 Ωm). We suggest that this region consists of sediments that were rearranged during more than a century of channelized flow. In the lower, low-coherency region, the GPR image reveals fluvial features (e.g., foresets) and generally more heterogeneous deposits. In this region, the resistivity is lower (~200 Ωm), which we attribute to increased amounts of fines in some of the well-sorted fluvial deposits. We also find elongated conductive anomalies that correspond to the location of river embankments that were removed in 2002.
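The structural-constraint idea, assembling the usual smoothness regularization but removing it across interfaces known from GPR, can be reduced to a 1D toy Tikhonov inversion, sketched below. The forward operator, model size, interface position and regularization weight are synthetic placeholders and bear no relation to the actual ERT inversion code used in the study.

```python
# Toy 1D Tikhonov inversion with and without a smoothness break at a known interface.
import numpy as np

rng = np.random.default_rng(2)
n_cells, iface = 40, 25                             # GPR interface between cells 24 and 25
m_true = np.where(np.arange(n_cells) < iface, 2.5, 1.0)   # sharp contrast in the true model

G = rng.random((60, n_cells)) / n_cells             # synthetic linear forward operator
d = G @ m_true + 0.005 * rng.standard_normal(60)    # noisy synthetic data

D = np.eye(n_cells - 1, n_cells, 1) - np.eye(n_cells - 1, n_cells)   # first differences
w = np.ones(n_cells - 1)
w[iface - 1] = 0.0                                  # break smoothness across the interface
W = np.diag(w)

def invert(smooth_op, alpha=1.0):
    lhs = G.T @ G + alpha * smooth_op.T @ smooth_op
    return np.linalg.solve(lhs, G.T @ d)

m_standard = invert(D)                              # conventional smooth inversion
m_struct = invert(W @ D)                            # structurally constrained inversion
print("contrast at interface (standard):   ", round(float(m_standard[iface - 1] - m_standard[iface]), 2))
print("contrast at interface (constrained):", round(float(m_struct[iface - 1] - m_struct[iface]), 2))
```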
Abstract:
We present a novel numerical algorithm for the simulation of seismic wave propagation in porous media, which is particularly suitable for the accurate modelling of surface wave-type phenomena. The differential equations of motion are based on Biot's theory of poro-elasticity and are solved with a pseudospectral approach using Fourier and Chebyshev methods to compute the spatial derivatives along the horizontal and vertical directions, respectively. The time solver is a splitting algorithm that accounts for the stiffness of the differential equations. Owing to the Chebyshev operator, the grid spacing in the vertical direction is non-uniform and characterized by denser spatial sampling in the vicinity of interfaces, which allows for a numerically stable and accurate evaluation of higher-order surface wave modes. We stretch the grid in the vertical direction to increase the minimum grid spacing and reduce the computational cost. The free-surface boundary conditions are implemented with a characteristics approach in which the characteristic variables are evaluated at zero viscosity. The same procedure is used to model seismic wave propagation at the interface between a fluid and a porous medium. In this case, each medium is represented by a different grid and the two grids are combined through a domain-decomposition method. This wavefield-decomposition method accounts for the discontinuity of variables and is crucial for an accurate interface treatment. We simulate seismic wave propagation with open-pore and sealed-pore boundary conditions and verify the validity and accuracy of the algorithm by comparing the numerical simulations to analytical solutions, based on zero viscosity, obtained with the Cagniard-de Hoop method. Finally, we illustrate the suitability of our algorithm for more complex models of porous media involving viscous pore fluids and strongly heterogeneous distributions of the elastic and hydraulic material properties.
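The Fourier part of the pseudospectral machinery described above amounts to computing horizontal derivatives by multiplying the transformed field by ik, while the vertical Chebyshev grid clusters points near boundaries and interfaces. The short sketch below illustrates both ingredients on arbitrary test data; it does not touch the poro-elastic equations, the stiff time-splitting or the characteristics-based boundary treatment.

```python
# Fourier derivative along a periodic (horizontal) direction and Chebyshev collocation
# points for the non-periodic (vertical) direction. Grid sizes and test field are arbitrary.
import numpy as np

n = 128
L = 2.0 * np.pi
x = np.arange(n) * L / n                           # uniform periodic grid
f = np.sin(3 * x) + 0.5 * np.cos(5 * x)            # test "wavefield"

k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)       # angular wavenumbers
df_dx = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))
df_exact = 3 * np.cos(3 * x) - 2.5 * np.sin(5 * x)
print("max Fourier-derivative error:", np.abs(df_dx - df_exact).max())  # spectral accuracy

N = 32
z = np.cos(np.pi * np.arange(N + 1) / N)           # Chebyshev points on [-1, 1]
print("vertical spacing near boundary vs. centre:",
      abs(z[1] - z[0]), "vs.", abs(z[N // 2] - z[N // 2 - 1]))
```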