951 results for In-loop-simulations
Abstract:
We show here that nerve growth factor (NGF), the canonical neurotrophic factor, is synthesized and released by breast cancer cells. High levels of NGF transcript and protein were detected in breast cancer cells by reverse transcription-PCR, Western blotting, ELISA and immunohistochemistry. Conversely, NGF production could not be detected in normal breast epithelial cells at either the transcriptional or protein level. Confocal analysis indicated the presence of NGF within classical secretion vesicles. Breast cancer cell-produced NGF was biologically active, as demonstrated by its ability to induce the neuronal differentiation of embryonic neural precursor cells. Importantly, the constitutive growth of breast cancer cells was strongly inhibited by either NGF-neutralizing antibodies or K-252a, a pharmacological inhibitor of the NGF receptor TrkA, indicating the existence of an NGF autocrine loop. Together, our data demonstrate the physiological relevance of NGF in breast cancer and its potential interest as a marker and therapeutic target.
Abstract:
The rapid growth of genetics and molecular biology, combined with the development of techniques for genetically engineering small animals, has led to increased interest in in vivo small animal imaging. Such imaging has been applied frequently to mice and rats, which are ubiquitous in modeling human diseases and testing treatments. The use of PET in small animals allows subjects to serve as their own controls, reducing interanimal variability; this makes longitudinal studies on the same animal possible and improves the accuracy of biological models. However, small animal PET still suffers from several limitations: the amounts of radiotracer needed, limited scanner sensitivity, image resolution and image quantification issues could all clearly benefit from additional research. Because nuclear medicine imaging deals with radioactive decay, the emission of radiation energy through photons and particles, together with the detection of these quanta and particles in different materials, makes the Monte Carlo method an important simulation tool in both nuclear medicine research and clinical practice. To optimize the quantitative use of PET in clinical practice, data- and image-processing methods are also a field of intense interest and development. The evaluation of such methods often relies on simulated data and images, since these offer control of the ground truth. Monte Carlo simulations are widely used for PET simulation because they take into account all the random processes involved in PET imaging, from the emission of the positron to the detection of the photons by the detectors. Simulation techniques have become an important and indispensable complement for a wide range of problems that cannot be addressed by experimental or analytical approaches.
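The core idea of such simulations — sampling random photon interaction processes one history at a time — can be illustrated with a minimal Monte Carlo sketch. This is a toy transmission estimate through a uniform slab, not a full PET simulation; the linear attenuation coefficient for 511 keV photons in water (about 0.096 cm⁻¹) and all function names are illustrative assumptions:

```python
import math
import random

def simulate_photon_transmission(mu, thickness_cm, n_photons=100_000, seed=42):
    """Monte Carlo estimate of the fraction of photons traversing a slab.

    Free path lengths are sampled from the exponential distribution
    p(x) = mu * exp(-mu * x); a photon escapes if its sampled path
    exceeds the slab thickness. The analytic answer is exp(-mu * t).
    """
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_photons):
        # inverse-transform sampling of the free path (1 - u is in (0, 1])
        path = -math.log(1.0 - rng.random()) / mu
        if path > thickness_cm:
            escaped += 1
    return escaped / n_photons

# Assumed value: mu ~ 0.096 cm^-1 for 511 keV photons in water, 10 cm slab.
est = simulate_photon_transmission(0.096, 10.0)
```

With 100,000 histories the estimate agrees with the analytic value exp(-0.96) to well within 1%, which is exactly the convergence behaviour that makes the method attractive and, at full-detector scale, computationally demanding.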
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa to obtain the degree of Master in Electrical and Computer Engineering
Abstract:
Beam-like structures are among the most common components in real engineering structures, and single-side damage is often encountered. In this study, single-side damage in a free-free beam is analysed numerically with three different finite element models, namely solid, shell and beam models, to demonstrate their performance in simulating real structures. As in the experiment, damage is introduced into one side of the beam, and natural frequencies are extracted from the simulations and compared with experimental and analytical results. Mode shapes are also analysed with the modal assurance criterion. The simulations reveal a good performance of all three models in extracting natural frequencies: in the intact state, the solid model performs better than the shell model, which in turn performs better than the beam model. For damaged states, the natural frequencies captured from the solid model are more sensitive to damage severity than those from the shell model, while the shell model performs similarly to the beam model in distinguishing damage. The main contribution of this paper is a comparison between three finite element models, experimental data and analytical solutions. The finite element results show relatively good performance.
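For context, the analytical natural frequencies such simulations are compared against follow from Euler-Bernoulli beam theory for a free-free beam. The sketch below uses assumed steel properties and beam dimensions, not the specimen from the study:

```python
import math

def free_free_beam_frequencies(E, rho, L, b, h, n_modes=3):
    """First bending natural frequencies (Hz) of a free-free
    Euler-Bernoulli beam with a rectangular cross-section.

    f_n = (beta_n L)^2 / (2 pi L^2) * sqrt(E I / (rho A)),
    where (beta_n L) are the roots of cos(bL) cosh(bL) = 1.
    """
    beta_L = [4.7300, 7.8532, 10.9956][:n_modes]  # free-free boundary roots
    I = b * h ** 3 / 12.0   # second moment of area (bending about width axis)
    A = b * h               # cross-sectional area
    c = math.sqrt(E * I / (rho * A))
    return [(bl ** 2) / (2.0 * math.pi * L ** 2) * c for bl in beta_L]

# Illustrative steel beam (assumed, not the paper's test specimen):
# E = 210 GPa, rho = 7850 kg/m^3, 1 m long, 30 mm x 20 mm section.
freqs = free_free_beam_frequencies(E=210e9, rho=7850.0, L=1.0, b=0.03, h=0.02)
```

Comparing frequencies like these against the solid, shell and beam FE models is the kind of benchmark the paper performs; damage on one side reduces the local stiffness EI and shifts these frequencies downward.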
Abstract:
Dissertation to obtain the degree of Doctor in Biomedical Engineering
Abstract:
The loop-mediated isothermal amplification (LAMP) method is a recently developed molecular technique that amplifies nucleic acid under isothermal conditions. For malaria diagnosis, 150 blood samples from consecutive febrile malaria patients and healthy subjects were screened in Thailand. Each sample was diagnosed by LAMP, microscopy and nested polymerase chain reaction (nPCR), using nPCR as the gold standard. Malaria LAMP was performed using Plasmodium genus- and Plasmodium falciparum-specific assays in parallel. For the genus Plasmodium, microscopy showed a sensitivity and specificity of 100%, while LAMP presented a sensitivity of 99% and a specificity of 93%. For P. falciparum, microscopy had a sensitivity of 95% and LAMP of 90%; regarding specificity, microscopy presented 93% and LAMP 97%. The results of the genus-specific LAMP technique were highly consistent with those of nPCR, and the sensitivity of P. falciparum detection was only marginally lower.
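Sensitivity and specificity against a gold standard, as reported above, reduce to simple counts of true and false positives and negatives. A minimal sketch with illustrative data, not the study's 150 samples:

```python
def diagnostic_performance(test_positive, gold_positive):
    """Sensitivity and specificity of a diagnostic test against a gold
    standard. Inputs are parallel lists of booleans (True = positive);
    in the study above the gold standard is nested PCR."""
    pairs = list(zip(test_positive, gold_positive))
    tp = sum(1 for t, g in pairs if t and g)          # true positives
    fn = sum(1 for t, g in pairs if not t and g)      # missed cases
    tn = sum(1 for t, g in pairs if not t and not g)  # true negatives
    fp = sum(1 for t, g in pairs if t and not g)      # false alarms
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative toy data: 5 samples, one false positive by the test.
sens, spec = diagnostic_performance(
    [True, True, False, False, True],
    [True, False, False, False, True],
)
```

On this toy data the test catches every gold-standard positive (sensitivity 1.0) but flags one negative sample, giving a specificity of 2/3.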
Abstract:
Magdeburg, University, Faculty of Mechanical Engineering, Dissertation, 2011
Abstract:
Background: The recording of arrhythmic events (AE) in renal transplant candidates (RTCs) undergoing dialysis is limited by conventional electrocardiography. Continuous cardiac rhythm monitoring seems more appropriate because of its automatic detection of arrhythmia, but this method has not been used in this population. Objective: We aimed to investigate the incidence and predictors of AE in RTCs using an implantable loop recorder (ILR). Methods: A prospective observational study conducted from June 2009 to January 2011 included 100 consecutive ambulatory RTCs who underwent ILR implantation and were followed up for at least 1 year. Multivariate logistic regression was applied to define predictors of AE. Results: During a mean follow-up of 424 ± 127 days, AE were detected in 98% of patients, and 92% had more than one type of arrhythmia, most considered potentially not serious. Sustained atrial tachycardia and atrial fibrillation occurred in 7% and 13% of patients, respectively; bradyarrhythmia and non-sustained or sustained ventricular tachycardia (VT) occurred in 25% and 57%, respectively. There were 18 deaths, of which 7 were sudden cardiac events: 3 bradyarrhythmias, 1 ventricular fibrillation, 1 myocardial infarction, and 2 undetermined. The presence of a long QTc (odds ratio [OR] = 7.28; 95% confidence interval [CI], 2.01–26.35; p = 0.002) and the duration of the PR interval (OR = 1.05; 95% CI, 1.02–1.08; p < 0.001) were independently associated with bradyarrhythmias. Left ventricular dilatation (LVD) was independently associated with non-sustained VT (OR = 2.83; 95% CI, 1.01–7.96; p = 0.041). Conclusions: In medium-term follow-up of RTCs, the ILR helped detect a high incidence of AE, most without clinical relevance. The PR interval and the presence of a long QTc were predictive of bradyarrhythmias, whereas LVD was predictive of non-sustained VT.
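The reported odds ratios come from multivariate logistic regression; an OR of 1.05 per 1-ms increase in PR interval compounds multiplicatively over larger increases. A small sketch of this interpretation — the 10% baseline risk is a hypothetical number, not a figure from the study:

```python
def odds_from_probability(p):
    """Convert a probability to odds, o = p / (1 - p)."""
    return p / (1.0 - p)

def probability_from_odds(o):
    """Convert odds back to a probability, p = o / (1 + o)."""
    return o / (1.0 + o)

# OR = 1.05 per 1-ms increase in PR interval (from the abstract above):
# a 40-ms longer PR interval multiplies the odds of bradyarrhythmia
# by 1.05 ** 40, roughly a sevenfold increase in odds.
multiplier = 1.05 ** 40

# Hypothetical baseline: if a patient's risk were 10%, the longer PR
# interval would shift it to roughly 44%.
new_risk = probability_from_odds(odds_from_probability(0.10) * multiplier)
```

Note that odds ratios multiply on the odds scale, not the probability scale, which is why the risk does not simply become 70%.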
Abstract:
This paper extends previous research and discussion on the use of multivariate continuous data, which are about to become more prevalent in forensic science. As an illustrative example, attention is drawn to comparative handwriting examinations. Multivariate continuous data can be obtained in this field by analysing the contour shape of loop characters through Fourier analysis. This methodology, based on existing research in the area, allows one to describe in detail the morphology of character contours through a set of variables. This paper uses data collected from female and male writers to conduct a comparative analysis of likelihood ratio based evidence assessment procedures in both evaluative and investigative proceedings. While the use of likelihood ratios in the former situation is now rather well established (typically to discriminate between propositions of authorship by a given individual versus another, unknown individual), the investigative setting has received far less consideration in practice. This paper seeks to highlight that investigative settings, too, can represent an area of application in which the likelihood ratio can offer logical support. As an example, the inference of the gender of the writer of an incriminated handwritten text is forwarded, analysed and discussed. The more general viewpoint that likelihood ratio analyses can be helpful in investigative proceedings is supported through various simulations, which offer a characterisation of the robustness of the proposed likelihood ratio methodology.
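At its core, the likelihood ratio compares the probability of the observed feature values under the two competing propositions (here, female versus male writer). A single-feature Gaussian toy model — all population parameters are hypothetical, not the paper's Fourier-descriptor distributions:

```python
import math

def gaussian_pdf(x, mean, sd):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def likelihood_ratio(x, mean_f, sd_f, mean_m, sd_m):
    """LR for one contour-shape feature x: probability of the
    observation given a female writer divided by its probability
    given a male writer (single-feature Gaussian toy model)."""
    return gaussian_pdf(x, mean_f, sd_f) / gaussian_pdf(x, mean_m, sd_m)

# Hypothetical population parameters for one Fourier shape descriptor:
lr = likelihood_ratio(0.58, mean_f=0.60, sd_f=0.10, mean_m=0.40, sd_m=0.10)
# lr > 1 supports the proposition "written by a female writer"
```

In the multivariate setting of the paper the same logic applies with multivariate densities; the LR then weighs the full vector of contour-shape variables rather than a single feature.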
Abstract:
Metabolic problems lead to numerous failures during clinical trials, and much effort is now devoted to developing in silico models predicting metabolic stability and metabolites. Such models are well known for cytochromes P450 and some transferases, whereas little has been done to predict the hydrolytic activity of human hydrolases. The present study was undertaken to develop a computational approach able to predict the hydrolysis of novel esters by human carboxylesterase hCES1. The study involves both docking analyses of known substrates to develop predictive models, and molecular dynamics (MD) simulations to reveal the in situ behavior of substrates and products, with particular attention being paid to the influence of their ionization state. The results emphasize some crucial properties of the hCES1 catalytic cavity, confirming that, as a trend with several exceptions, hCES1 prefers substrates with relatively smaller and somewhat polar alkyl/aryl groups and larger hydrophobic acyl moieties. The docking results underline the usefulness of the hydrophobic interaction score proposed here, which allows a robust prediction of hCES1 catalysis, while the MD simulations show the different behavior of substrates and products in the enzyme cavity, suggesting in particular that basic substrates interact with the enzyme in their unprotonated form.
Abstract:
Summary (in English) Computer simulations provide a practical way to address scientific questions that would otherwise be intractable. In evolutionary biology, and in population genetics in particular, the investigation of evolutionary processes frequently involves the implementation of complex models, making simulations a particularly valuable tool in the area. In this thesis work, I explored three questions involving the geographical range expansion of populations, taking advantage of spatially explicit simulations coupled with approximate Bayesian computation. First, the neutral evolutionary history of the human spread around the world was investigated, leading to a surprisingly simple model: a straightforward diffusion process of migrations from East Africa across a world map with homogeneous landmasses replicated, to a very large extent, the complex patterns observed in real human populations, suggesting a more continuous (as opposed to structured) view of the distribution of modern human genetic diversity, which may serve better as a base model for further studies. Second, the postglacial evolution of the European barn owl, with the formation of a remarkable coat-color cline, was inspected with two rounds of simulations: (i) to determine the background demographic history and (ii) to test the probability of a phenotypic cline like the one observed in the natural populations appearing without natural selection. We verified that the modern barn owl population originated from a single Iberian refugium and that it formed its color cline not through neutral evolution but with the necessary participation of selection. The third and last part of this thesis refers to a simulation-only study inspired by the barn owl case above. In this chapter, we showed that selection is indeed effective during range expansions and that it leaves a distinctive signature, which can then be used to detect and measure natural selection in range-expanding populations.
Résumé (in French) Simulations provide a practical way to address scientific questions that would otherwise be intractable. In population genetics, the study of evolutionary processes often involves implementing complex models, and simulations are a particularly valuable tool in this area. In this thesis, I explored three questions using spatially explicit simulations within an approximate Bayesian computation (ABC) framework. First, the history of worldwide human colonization and of the evolution of neutral parts of the genome was studied with a surprisingly simple model. A diffusion process of migrants from East Africa across a world with homogeneous landmasses reproduced, to a very large extent, the complex genetic signatures observed in real human populations. Such a continuous model (as opposed to a model structured in populations) could be very useful as a base model for future human genetics studies. Second, the postglacial evolution of a colour cline in the European barn owl (Tyto alba) was examined with two rounds of simulations to: (i) determine the underlying demographic history and (ii) test the probability that a phenotypic cline such as the one observed in natural populations could appear without natural selection. We showed that the present owl population emerged from a single Iberian refugium and that the colour cline cannot have formed neutrally (without the action of natural selection). The third part of this thesis refers to a simulation-only study inspired by the barn owl case.
In this last chapter, we showed that selection is indeed effective during range expansions and that it leaves a distinctive signature, which can be used to detect it and estimate its strength.
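The approximate Bayesian computation framework used throughout the thesis can be sketched in its simplest rejection form: draw parameters from the prior, simulate, and keep draws whose summary statistic falls close to the observed one. The toy model below (inferring a normal mean) stands in for the spatially explicit simulators; all names, priors and tolerances are illustrative:

```python
import random
import statistics

def abc_rejection(observed, simulate, prior_sampler, distance, eps,
                  n_draws=10_000, seed=1):
    """Minimal ABC rejection sampler: parameters whose simulated summary
    statistic lies within eps of the observed one approximate the
    posterior distribution."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler(rng)
        sim = simulate(theta, rng)
        if distance(sim, observed) <= eps:
            accepted.append(theta)
    return accepted

def simulate_sample_mean(theta, rng):
    # 50 observations from N(theta, 1), summarised by their mean
    return sum(rng.gauss(theta, 1.0) for _ in range(50)) / 50.0

posterior = abc_rejection(
    observed=0.3,                                   # "data" summary statistic
    simulate=simulate_sample_mean,
    prior_sampler=lambda rng: rng.uniform(-2.0, 2.0),
    distance=lambda a, b: abs(a - b),
    eps=0.05,
)
posterior_mean = statistics.mean(posterior)
```

The accepted draws concentrate around the true parameter; in the thesis the same rejection logic is applied with far richer simulators and summary statistics describing genetic diversity across space.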
Abstract:
Recent technological advances in remote sensing have enabled investigation of the morphodynamics and hydrodynamics of large rivers. However, measuring topography and flow in these very large rivers is time consuming and thus often constrains the spatial resolution and reach-length scales that can be monitored. Similar constraints exist for computational fluid dynamics (CFD) studies of large rivers, requiring maximization of mesh- or grid-cell dimensions and implying a reduction in the representation of bedform-roughness elements that are of the order of a model grid cell or less, even if they are represented in available topographic data. These "subgrid" elements must be parameterized, and this paper applies and considers the impact of roughness-length treatments that include the effect of bed roughness due to "unmeasured" topography. CFD predictions were found to be sensitive to the roughness-length specification. Model optimization was based on acoustic Doppler current profiler measurements and estimates of the water surface slope for a variety of roughness lengths. This proved difficult as the metrics used to assess optimal model performance diverged due to the effects of large bedforms that are not well parameterized in roughness-length treatments. However, the general spatial flow patterns are effectively predicted by the model. Changes in roughness length were shown to have a major impact upon flow routing at the channel scale. The results also indicate an absence of secondary flow circulation cells in the reach studied, and suggest simpler two-dimensional models may have great utility in the investigation of flow within large rivers. Citation: Sandbach, S. D. et al. (2012), Application of a roughness-length representation to parameterize energy loss in 3-D numerical simulations of large rivers, Water Resour. Res., 48, W12501, doi: 10.1029/2011WR011284.
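Roughness-length treatments of this kind usually enter through the logarithmic law of the wall, in which the roughness length z0 is the height at which the modelled velocity extrapolates to zero. A minimal sketch with illustrative values, not the paper's calibrated roughness lengths:

```python
import math

KAPPA = 0.41  # von Kármán constant

def log_law_velocity(z, u_star, z0):
    """Streamwise velocity from the law of the wall,
    u(z) = (u* / kappa) * ln(z / z0), where u* is the shear velocity
    and z0 the roughness length."""
    return (u_star / KAPPA) * math.log(z / z0)

# Illustrative values: shear velocity 0.1 m/s, roughness length 1 cm,
# velocity evaluated 1 m above the bed.
u = log_law_velocity(1.0, 0.1, 0.01)
```

Increasing z0 to account for unmeasured bedform roughness lowers the modelled velocity at a given height, which is the sensitivity to the roughness-length specification that the study reports.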
Abstract:
One hypothesis for the origin of alkaline lavas erupted on oceanic islands and in intracontinental settings is that they represent the melts of amphibole-rich veins in the lithosphere (or melts of their dehydrated equivalents if metasomatized lithosphere is recycled into the convecting mantle). Amphibole-rich veins are interpreted as cumulates produced by crystallization of low-degree melts of the underlying asthenosphere as they ascend through the lithosphere. We present the results of trace-element modelling of the formation and melting of veins formed in this way, with the goal of testing this hypothesis and of predicting how variability in the formation and subsequent melting of such cumulates (and adjacent cryptically and modally metasomatized lithospheric peridotite) would be manifested in magmas generated by such a process. Because the high-pressure phase equilibria of hydrous near-solidus melts of garnet lherzolite are poorly constrained, and given the likely high variability of the hypothesized accumulation and remelting processes, we used Monte Carlo techniques to estimate how uncertainties in the model parameters (e.g. the compositions of the asthenospheric sources, their trace-element contents, and their degree of melting; the modal proportions of crystallizing phases, including accessory phases, as the asthenospheric partial melts ascend and crystallize in the lithosphere; the amount of metasomatism of the peridotitic country rock; the degree of melting of the cumulates and the amount of melt derived from the metasomatized country rock) propagate through the process and manifest themselves as variability in the trace-element contents and radiogenic isotopic ratios of model vein compositions and erupted alkaline magma compositions. We then compare the results of the models with amphibole observed in lithospheric veins and with oceanic and continental alkaline magmas.
While the trace-element patterns of the near-solidus peridotite melts, the initial anhydrous cumulate assemblage (clinopyroxene +/- garnet +/- olivine +/- orthopyroxene), and the modelled coexisting liquids do not match the patterns observed in alkaline lavas, our calculations show that with further crystallization and the appearance of amphibole (and accessory minerals such as rutile, ilmenite, apatite, etc.) the calculated cumulate assemblages have trace-element patterns that closely match those observed in the veins and lavas. These calculated hydrous cumulate assemblages are highly enriched in incompatible trace elements and share many similarities with the trace-element patterns of alkaline basalts observed in oceanic or continental settings, such as positive Nb/La, negative Ce/Pb, and similar slopes of the rare earth elements. By varying the proportions of trapped liquid and thus simulating the cryptic and modal metasomatism observed in peridotite that surrounds these veins, we can model the variations in Ba/Nb, Ce/Pb, and Nb/U ratios that are observed in alkaline basalts. If the isotopic compositions of the initial low-degree peridotite melts are similar to the range observed in mid-ocean ridge basalt, our model calculations produce cumulates that would have isotopic compositions similar to those observed in most alkaline ocean island basalt (OIB) and continental magmas after ~0.15 Gyr. However, to produce alkaline basalts with HIMU isotopic compositions requires much longer residence times (i.e. 1-2 Gyr), consistent with subduction and recycling of metasomatized lithosphere through the mantle, or with an alternative enriched source such as a heterogeneous asthenosphere.
These modelling results support the interpretation proposed by various researchers that amphibole-bearing veins represent cumulates formed during the differentiation of a volatile-bearing low-degree peridotite melt and that these cumulates are significant components of the sources of alkaline OIB and continental magmas. The results of the forward models provide the potential for detailed tests of this class of hypotheses for the origin of alkaline magmas worldwide and for interpreting major and minor aspects of the geochemical variability of these magmas.
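The Monte Carlo propagation described above can be illustrated with the standard batch-melting equation, drawing the source concentration, bulk partition coefficient and melt fraction from assumed uniform ranges. The ranges are illustrative placeholders, not the paper's calibrated inputs:

```python
import random
import statistics

def batch_melt(c0, D, F):
    """Equilibrium (batch) melting: liquid concentration
    C_liq = C0 / (D + F * (1 - D)) for source concentration C0,
    bulk partition coefficient D and melt fraction F."""
    return c0 / (D + F * (1.0 - D))

# Monte Carlo propagation of parameter uncertainty for a highly
# incompatible trace element in a low-degree melt (assumed ranges):
rng = random.Random(0)
samples = [
    batch_melt(
        c0=rng.uniform(0.8, 1.2),    # normalized source concentration
        D=rng.uniform(0.005, 0.02),  # bulk partition coefficient
        F=rng.uniform(0.005, 0.02),  # degree of melting (low-degree melt)
    )
    for _ in range(20_000)
]
mean_enrichment = statistics.mean(samples)
```

Even this toy version shows the key behaviour the paper exploits: for highly incompatible elements at low melt fractions, modest parameter uncertainty produces large spread in the predicted enrichment, which is why the full model propagates all parameters jointly.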
Abstract:
Na,K-ATPase, the main active transport system for monovalent cations in animal cells, is responsible for maintaining Na(+) and K(+) gradients across the plasma membrane. During its transport cycle it binds three cytoplasmic Na(+) ions and releases them on the extracellular side of the membrane, and then binds two extracellular K(+) ions and releases them into the cytoplasm. The fourth, fifth, and sixth transmembrane helices of the alpha subunit of Na,K-ATPase are known to be involved in Na(+) and K(+) binding sites, but the gating mechanisms that control the access of these ions to their binding sites are not yet fully understood. We have focused on the second extracellular loop linking transmembrane segments 3 and 4 and attempted to determine its role in gating. We replaced 13 residues of this loop in the rat alpha1 subunit, from E314 to G326, by cysteine, and then studied the function of these mutants using electrophysiological techniques. We analyzed the results using a structural model obtained by homology with SERCA, and ab initio calculations for the second extracellular loop. Four mutants were markedly modified by the sulfhydryl reagent MTSET, and we investigated them in detail. The substituted cysteines were more readily accessible to MTSET in the E1 conformation for the Y315C, W317C, and I322C mutants. Mutations or derivatization of the substituted cysteines in the second extracellular loop resulted in major increases in the apparent affinity for extracellular K(+), and this was associated with a reduction in the maximum activity. The changes produced by the E314C mutation were reversed by MTSET treatment. In the W317C and I322C mutants, MTSET also induced a moderate shift of the E1/E2 equilibrium towards the E1(Na) conformation under Na/Na exchange conditions. These findings indicate that the second extracellular loop must be functionally linked to the gating mechanism that controls the access of K(+) to its binding site.