890 results for human patient simulation
Abstract:
Human transthyretin (hTTR) is a multifunctional protein that is involved in several neurodegenerative diseases. Besides transporting thyroxine and vitamin A, it is also involved in the proteolysis of apolipoprotein A1 and the amyloid-β (Aβ) peptide. Extensive analyses of 32 high-resolution X-ray and neutron diffraction structures of hTTR, followed by molecular-dynamics simulation studies using a set of 15 selected structures, confirmed the presence of 44 conserved water molecules in its dimeric structure. They play several important roles in the structure and function of the protein. Eight water molecules stabilize the dimeric structure through an extensive hydrogen-bonding network. The absence of some of these water molecules under highly acidic conditions (pH ≤ 4.0) severely affects the interfacial hydrogen-bond network, which may destabilize the native tetrameric structure and lead to its dissociation. Three pairs of conserved water molecules help maintain the geometry of the ligand-binding cavities. Other water molecules control the orientation and dynamics of different structural elements of hTTR. This systematic study of the location, absence, networking and interactions of the conserved water molecules may shed light on various structural and functional aspects of the protein, and may also provide rational clues about the conserved water-mediated architecture and stability of hTTR.
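As an aside, the kind of conserved-water survey described above can be prototyped in a few lines. The sketch below is not the authors' pipeline; it assumes a hypothetical folder of hTTR structures already superposed onto a common frame and flags water sites that recur across most of them, using Biopython.

```python
# Identify putative conserved water sites across superposed structures.
# Assumes the PDB files are already structurally aligned to a common frame
# (folder name, cutoff and conservation threshold are illustrative).
from pathlib import Path
import numpy as np
from Bio.PDB import PDBParser  # pip install biopython

PDB_FILES = sorted(Path("aligned_structures").glob("*.pdb"))  # hypothetical folder
CUTOFF = 1.5          # Angstrom: two waters closer than this are the same site
MIN_FRACTION = 0.8    # a site is "conserved" if present in >= 80% of structures

parser = PDBParser(QUIET=True)
water_sets = []
for pdb in PDB_FILES:
    structure = parser.get_structure(pdb.stem, pdb)
    coords = [atom.coord for atom in structure.get_atoms()
              if atom.get_parent().get_resname() in ("HOH", "WAT") and atom.name == "O"]
    water_sets.append(np.array(coords))

# Greedy clustering: seed sites from the first structure, then count how many
# structures place a water oxygen within CUTOFF of each seed.
seeds = water_sets[0]
counts = np.zeros(len(seeds), dtype=int)
for waters in water_sets:
    if len(waters) == 0:
        continue
    d = np.linalg.norm(seeds[:, None, :] - waters[None, :, :], axis=-1)
    counts += (d.min(axis=1) < CUTOFF)

conserved = seeds[counts >= MIN_FRACTION * len(water_sets)]
print(f"{len(conserved)} putative conserved water sites")
```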
Abstract:
Human guanine monophosphate synthetase (hGMPS) converts XMP to GMP and acts as a bifunctional enzyme with an N-terminal ``glutaminase'' (GAT) domain and a C-terminal ``synthetase'' domain. The enzyme has been identified as a potential target for anticancer and immunosuppressive therapies. The GAT domain plays a central role in metabolism and contains the conserved catalytic residues Cys104, His190, and Glu192. MD simulation studies on the GAT domain suggest that the position of the oxyanion in the unliganded conformation is occupied by a conserved water molecule (W1), which also stabilizes the pocket; in ligand-bound crystal structures this position is occupied by a negatively charged atom of the substrate or ligand. An MD simulation of the Ser75-to-Val mutant indicates that the conserved water molecule W1 is stabilized by Ser75, while Thr152 and His190 also act as anchor residues that maintain the architecture of the oxyanion pocket through water-mediated hydrogen-bond interactions. Four conserved water molecules appear to stabilize the oxyanion hole in the unliganded state, vacating these positions when the enzyme-substrate complex is formed. This study therefore not only reveals the functionally important role of conserved water molecules in the GAT domain, but also highlights the essential role of non-catalytic residues such as Ser75 and Thr152 in this enzymatic domain. These computational results should be of interest to the experimental community and provide a testable hypothesis for experimental validation. The conserved water sites at and near the oxyanion hole underline the structural importance of water molecules and suggest rethinking the conventional definition of the chemical geometry of the inhibitor-binding site.
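A rough illustration of how the occupancy of a putative conserved water site such as W1 might be monitored along an MD trajectory is given below. It is a sketch only, with hypothetical file names, force-field-dependent selections, and an illustrative cutoff, using MDAnalysis and the Ser75/Thr152/His190 residues named in the abstract.

```python
# Fraction of MD frames in which at least one water oxygen sits near the
# oxyanion-hole residues (Ser75, Thr152, His190), a rough proxy for a
# conserved water such as W1. File names and selections are hypothetical.
import MDAnalysis as mda

u = mda.Universe("gat_domain.psf", "gat_domain.dcd")  # topology + trajectory
pocket = u.select_atoms("protein and resid 75 152 190 and not name H*")

CUTOFF = 3.5  # Angstrom, a typical donor-acceptor hydrogen-bond distance
occupied = 0
for ts in u.trajectory:
    # re-evaluate the distance-based selection at every frame
    waters = u.select_atoms(
        f"(resname TIP3 SOL WAT HOH) and (name OH2 OW O) and (around {CUTOFF} group pocket)",
        pocket=pocket,
    )
    if len(waters) > 0:
        occupied += 1

print(f"Water occupancy of the oxyanion pocket: {occupied / len(u.trajectory):.2f}")
```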
Abstract:
Optical coherence tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there are not many imaged objects to study. (2) Interpreting an OCT image is also hard. This challenge is more profound than it appears. For instance, it takes a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, and even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (a millimeter or less in human tissue). While OCT uses infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This alone makes OCT images very hard to interpret.
This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform that is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images, because we dictate that structure at the start of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal-formation process, a careful implementation of the importance-sampling/photon-splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh intersections, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical techniques, which are explained in detail later in the thesis.
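For readers unfamiliar with this class of simulator, the sketch below shows a bare-bones Monte Carlo photon-packet walk in a homogeneous scattering slab, recording the depth of last scattering for packets that exit back through the surface as a crude stand-in for an A-scan. It is purely illustrative and is not the thesis platform; all optical parameters are invented.

```python
# Minimal Monte Carlo photon-packet walk in a homogeneous scattering slab.
# Packets that exit back through the surface contribute their last scattering
# depth to a histogram, a rough proxy for an OCT A-scan.
import numpy as np

rng = np.random.default_rng(0)
MU_S, MU_A, G = 10.0, 0.1, 0.9       # scattering/absorption (1/mm), anisotropy
N_PHOTONS, SLAB_DEPTH = 20000, 2.0   # number of packets, slab thickness (mm)

def henyey_greenstein(g):
    """Sample the cosine of the polar scattering angle from the HG phase function."""
    s = (1 - g * g) / (1 - g + 2 * g * rng.random())
    return (1 + g * g - s * s) / (2 * g)

backscatter_depths = []
for _ in range(N_PHOTONS):
    z, mu_z, weight, last_z = 0.0, 1.0, 1.0, 0.0
    while 0.0 <= z <= SLAB_DEPTH and weight > 1e-4:
        step = -np.log(rng.random()) / (MU_S + MU_A)   # free path length
        z += mu_z * step
        if not (0.0 <= z <= SLAB_DEPTH):
            break
        weight *= MU_S / (MU_S + MU_A)                 # absorption via weight decay
        last_z = z
        cos_t = henyey_greenstein(G)                   # polar scattering angle
        cos_phi = np.cos(2 * np.pi * rng.random())     # azimuthal angle
        mu_z = float(np.clip(
            mu_z * cos_t
            - cos_phi * np.sqrt(max(0.0, 1 - mu_z**2)) * np.sqrt(max(0.0, 1 - cos_t**2)),
            -1.0, 1.0))
    if z < 0.0:                                        # exited back through the surface
        backscatter_depths.append(last_z)

hist, edges = np.histogram(backscatter_depths, bins=40, range=(0, SLAB_DEPTH))
print("pseudo A-scan (counts per depth bin):", hist)
```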
Next we turn to the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. Solving this problem would let us interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do this remarkably well. For simple structures we reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we reach 93%. We achieve this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a strong position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. At prediction time, an unseen OCT image first goes through a classification model that determines its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that structure, which predicts the thickness of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from deep learning can further improve the performance.
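The classify-then-regress idea can be illustrated with a minimal scikit-learn sketch. The data below are random placeholders rather than simulated OCT images, and the model is not the thesis's committee-of-experts architecture, only its general shape.

```python
# Two-stage sketch: a classifier first predicts the structure type of a
# (simulated) OCT image, then a per-type regressor predicts layer thicknesses.
# All data here are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
N, D = 2000, 256                                  # images (flattened), feature size
X = rng.normal(size=(N, D))                       # placeholder for OCT images
structure = rng.integers(0, 3, size=N)            # ground-truth structure class (0..2)
thickness = rng.uniform(0.1, 1.0, size=(N, 4))    # ground-truth layer thicknesses

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, structure)

# one regressor per structure class, trained only on images of that class
regressors = {}
for c in np.unique(structure):
    mask = structure == c
    regressors[c] = RandomForestRegressor(n_estimators=100, random_state=0).fit(
        X[mask], thickness[mask]
    )

def reconstruct(image):
    """Predict the structure class, then the layer thicknesses, for one image."""
    c = int(clf.predict(image[None, :])[0])
    return c, regressors[c].predict(image[None, :])[0]

print(reconstruct(X[0]))
```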
It is worth pointing out that solving the inverse problem automatically improves the effective imaging depth: the lower half of an OCT image (i.e., greater depth), which previously could hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals that make up the lower half of the image are weak, messy, and uninterpretable to the human eye, they still carry enough information that a well-trained machine learning model can recover precisely the true structure of the object being imaged. This is another case in which artificial intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a successful attempt but also the first attempt to reconstruct an OCT image at the pixel level. Even attempting such a task would require a large number of fully annotated OCT images (hundreds or even thousands), which is clearly impractical without a powerful simulation tool like the one developed in this thesis.
Abstract:
In sensorimotor integration, sensory input and motor output signals are combined to provide an internal estimate of the state of both the world and one's own body. Although a single perceptual and motor snapshot can provide information about the current state, computational models show that the state can be optimally estimated by a recursive process in which an internal estimate is maintained and updated by the current sensory and motor signals. These models predict that an internal state estimate is maintained or stored in the brain. Here we report a patient with a lesion of the superior parietal lobe who shows both sensory and motor deficits consistent with an inability to maintain such an internal representation between updates. Our findings suggest that the superior parietal lobe is critical for sensorimotor integration, by maintaining an internal representation of the body's state.
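The recursive estimate-and-update process these models describe has the same structure as a Kalman filter. The minimal one-dimensional sketch below, with invented noise parameters not taken from the paper, shows an internal estimate propagated by a motor command and corrected by noisy sensory feedback.

```python
# Minimal 1-D Kalman filter: an internal state estimate is propagated by the
# motor command (prediction) and corrected by noisy sensory feedback (update).
# All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
Q, R = 0.01, 0.25        # process (motor) and measurement (sensory) noise variances
x_true, x_hat, P = 0.0, 0.0, 1.0

for t in range(50):
    u = 0.1                                    # motor command (efference copy)
    x_true = x_true + u + rng.normal(0, np.sqrt(Q))
    z = x_true + rng.normal(0, np.sqrt(R))     # noisy sensory observation

    # predict: propagate the stored internal estimate with the motor signal
    x_hat, P = x_hat + u, P + Q
    # update: correct the estimate with the sensory signal
    K = P / (P + R)
    x_hat, P = x_hat + K * (z - x_hat), (1 - K) * P

print(f"true {x_true:.3f}  estimate {x_hat:.3f}  variance {P:.4f}")
```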
Abstract:
Over the past decade, a variety of user models have been proposed for user-simulation-based reinforcement learning of dialogue strategies. However, the strategies learned with these models are rarely evaluated in actual user trials, and it remains unclear how the choice of user model affects the quality of the learned strategy. In particular, the degree to which strategies learned with a user model generalise to real user populations has not been investigated. This paper presents a series of experiments that qualitatively and quantitatively examine the effect of the user model on the learned strategy. Our results show that the performance and characteristics of the strategy are in fact highly dependent on the user model. Furthermore, a policy trained with a poor user model may appear to perform well when tested with the same model, but fail when tested with a more sophisticated user model. This raises significant doubts about the current practice of learning and evaluating strategies with the same user model. The paper further investigates a new technique for testing and comparing strategies directly on real human-machine dialogues, thereby avoiding any evaluation bias introduced by the user model. © 2005 IEEE.
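The evaluation issue described here can be illustrated with a toy experiment: a tabular Q-learning dialogue policy trained against one user simulator and then scored against a different one. Everything in the sketch below (states, user models, rewards) is invented for illustration and is unrelated to the paper's actual setup.

```python
# Toy user-simulation-based policy learning: a tabular Q-learning dialogue
# policy is trained against one simulated user and evaluated against another,
# showing how the measured return depends on the user model.
import numpy as np

rng = np.random.default_rng(0)
N_SLOTS, ACTIONS = 2, ("ask", "confirm")

def user(provides_slot_prob):
    """User simulator: answers an 'ask' with the requested slot value
    with the given probability."""
    return lambda: rng.random() < provides_slot_prob

def run_episode(policy, simulated_user, q=None, eps=0.0, alpha=0.2, gamma=0.95):
    slots, total = 0, 0.0
    for _ in range(10):                        # cap the dialogue length
        a = rng.integers(2) if rng.random() < eps else policy(slots)
        if ACTIONS[a] == "ask":
            reward = -1.0
            nxt = slots + (1 if slots < N_SLOTS and simulated_user() else 0)
            done = False
        else:                                  # confirm ends the dialogue
            reward, nxt, done = (10.0 if slots == N_SLOTS else -5.0), slots, True
        if q is not None:                      # Q-learning update during training
            target = reward + (0.0 if done else gamma * q[nxt].max())
            q[slots, a] += alpha * (target - q[slots, a])
        total, slots = total + reward, nxt
        if done:
            break
    return total

q = np.zeros((N_SLOTS + 1, 2))
greedy = lambda s: int(q[s].argmax())
cooperative, hesitant = user(1.0), user(0.5)

for _ in range(5000):                          # train against the cooperative user only
    run_episode(greedy, cooperative, q=q, eps=0.2)

for name, u in [("cooperative", cooperative), ("hesitant", hesitant)]:
    returns = [run_episode(greedy, u) for _ in range(1000)]
    print(f"tested on {name} user: mean return {np.mean(returns):.2f}")
```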
Abstract:
This paper discusses the application of Discrete Event Simulation (DES) to modelling the complex relationship between patient types, case-mix and operating theatre allocation in a large National Health Service (NHS) Trust in London. The simulation model described the main features of nine theatres, focusing on operational processes and patient throughput times. The model was used to test three case-mix scenarios and to demonstrate the potential of simulation modelling as a cost-effective method for understanding the issues of healthcare operations management and the role of simulation techniques in problem solving. The results indicated that removing all day cases would reduce patient throughput by 23.3% and the utilization of the orthopaedic theatre in particular by 6.5%. This is a case example of how DES can be used by healthcare managers to inform decision making. © 2008 IEEE.
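A minimal SimPy sketch of this kind of operating-theatre DES is shown below. The arrival rates, case durations and theatre count are invented for illustration, and the scenario comparison (with and without day cases) mirrors only the shape of the experiment, not its data.

```python
# Minimal discrete-event simulation of operating theatres with SimPy.
# Arrival rates, case durations and the theatre count are illustrative; the
# model in the paper covers nine theatres and a detailed case-mix.
import random
import simpy

def run(remove_day_cases, n_theatres=3, sim_hours=8 * 250, seed=1):
    random.seed(seed)
    env = simpy.Environment()
    theatres = simpy.Resource(env, capacity=n_theatres)
    completed = {"inpatient": 0, "day_case": 0}

    def patient(kind, duration):
        with theatres.request() as req:        # wait for a free theatre
            yield req
            yield env.timeout(duration)        # occupy it for the case duration
            completed[kind] += 1

    def arrivals():
        while True:
            yield env.timeout(random.expovariate(1 / 0.8))   # mean 0.8 h between arrivals
            if random.random() < 0.4:                        # would be a day case
                if not remove_day_cases:
                    env.process(patient("day_case", random.uniform(0.5, 1.5)))
            else:
                env.process(patient("inpatient", random.uniform(1.5, 4.0)))

    env.process(arrivals())
    env.run(until=sim_hours)
    return completed

print("baseline     :", run(remove_day_cases=False))
print("no day cases :", run(remove_day_cases=True))
```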
Abstract:
Inflatable aerodynamic decelerators have potential advantages for planetary re-entry in robotic and human exploration missions. Their volume-to-mass characteristics are theorized to be superior to those of conventional supersonic/subsonic parachutes, and after deployment they may suffer no instabilities at high Mach numbers. A high-fidelity computational fluid-structure interaction model is employed to investigate the behavior of tension-cone inflatable aeroshells at supersonic speeds up to Mach 2.0. The computational framework targets the large-displacement regime encountered during inflation of the decelerator, using fast level-set techniques to incorporate the boundary conditions of the moving structure. Preliminary results indicate large but steady aeroshell displacements with rich dynamics, including buckling of the inflatable torus that holds the decelerator open under normal operating conditions, owing to interactions with the turbulent wake. Copyright © 2009 by the American Institute of Aeronautics and Astronautics, Inc.
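As a side note, the level-set idea mentioned above can be illustrated in one dimension: the zero crossing of a signed-distance function is advected with a prescribed speed using a first-order upwind scheme. The sketch below is illustrative only and is unrelated to the production FSI solver described in the abstract.

```python
# Bare-bones 1-D level-set update: the zero crossing of phi tracks a moving
# interface advected with a prescribed speed, using first-order upwinding.
import numpy as np

nx, dx, dt = 200, 0.01, 0.002
x = np.arange(nx) * dx
phi = x - 0.5                 # signed distance; interface initially at x = 0.5
speed = 0.3                   # interface velocity (positive: moves right)

for _ in range(500):
    grad = (phi - np.roll(phi, 1)) / dx   # backward difference (upwind for v > 0)
    phi -= dt * speed * grad
    phi[0] = phi[1]                       # crude inflow boundary

interface = x[np.argmin(np.abs(phi))]
expected = 0.5 + speed * 500 * dt
print(f"interface position after 500 steps: {interface:.3f} (expected ~{expected:.3f})")
```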
Abstract:
Inflatable aerodynamic decelerators have potential advantages for planetary re-entry in robotic and human exploration missions. In this paper, we focus on an inflatable tension-cone design that has potential advantages over other geometries. A computational fluid-structure interaction model of a tension cone is employed to investigate the behavior of the inflatable aeroshell at supersonic speeds for conditions matching recent experimental results. A parametric study is carried out to investigate the deflections of the tension cone as a function of the inflation pressure of the torus at a Mach number of 2.5. Comparisons of the behavior of the structure, the amplitude of deformations, and the determined loads are reported. © 2010 by the American Institute of Aeronautics and Astronautics, Inc.
Abstract:
The entry of human immunodeficiency virus (HIV) into cells depends on a sequential interaction of the gp120 envelope glycoprotein with the cellular receptor CD4 and members of the chemokine receptor family. The CC chemokine receptor CCR5 is such a receptor for several chemokines and a major coreceptor for the entry of R5 HIV type 1 (HIV-1) into cells. Although many studies focus on the interaction of CCR5 with HIV-1, the corresponding interaction sites in CCR5 and gp120 have not been matched. Here we used an approach combining protein structure modeling, docking and molecular dynamics simulation to build a series of structural models of CCR5 in complex with gp120 and CD4. Interactions such as hydrogen bonds, salt bridges and van der Waals contacts between CCR5 and gp120 were investigated. Three snapshots of the CCR5-gp120-CD4 models revealed that the initial interactions of CCR5 with gp120 involve the negatively charged N-terminal (Nt) region of CCR5 and the positively charged bridging-sheet region of gp120. Further interactions occur between extracellular loop 2 (ECL2) of CCR5 and the base of the V3 loop of gp120. These interactions may induce conformational changes in gp120 and lead to the final entry of HIV into the cell. These results not only strongly support the two-step gp120-CCR5 binding mechanism, but also rationalize extensive biological data about the role of CCR5 in HIV-1 gp120 binding and entry, and may guide efforts to design novel inhibitors.
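A simple way to enumerate candidate salt bridges between two chains of a modeled complex is sketched below using Biopython. The PDB file name and chain assignments are hypothetical, and the 4 Å criterion is a common rule of thumb rather than the authors' protocol.

```python
# Sketch: list putative inter-chain salt bridges in a modeled complex.
# File name and chain IDs are hypothetical (chain A = receptor, chain B = gp120).
from Bio.PDB import PDBParser, NeighborSearch

ACIDIC = {"ASP": ("OD1", "OD2"), "GLU": ("OE1", "OE2")}
BASIC = {"LYS": ("NZ",), "ARG": ("NH1", "NH2", "NE"), "HIS": ("ND1", "NE2")}
CUTOFF = 4.0  # Angstrom, a common salt-bridge distance criterion

structure = PDBParser(QUIET=True).get_structure("complex", "ccr5_gp120_model.pdb")
ccr5, gp120 = structure[0]["A"], structure[0]["B"]

def charged_atoms(chain, table):
    """Side-chain atoms of charged residues listed in the lookup table."""
    return [a for res in chain for a in res
            if res.resname in table and a.name in table[res.resname]]

search = NeighborSearch(charged_atoms(gp120, ACIDIC) + charged_atoms(gp120, BASIC))
bridges = set()
for atom in charged_atoms(ccr5, ACIDIC) + charged_atoms(ccr5, BASIC):
    for partner in search.search(atom.coord, CUTOFF):
        r1, r2 = atom.get_parent(), partner.get_parent()
        if (r1.resname in ACIDIC) != (r2.resname in ACIDIC):   # opposite charges only
            bridges.add((r1.resname, r1.id[1], r2.resname, r2.id[1]))

for b in sorted(bridges, key=lambda t: t[1]):
    print("CCR5 %s%d -- gp120 %s%d" % b)
```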
Abstract:
Elderly and disabled people can benefit greatly from modern electronic devices, which can help them engage more fully with the world. However, existing design practices often isolate elderly or disabled users by treating them as users with special needs. This article presents a simulator that can reflect the problems faced by elderly and disabled users while they use computers, televisions, and similar electronic devices. The simulator embodies both the internal state of an application and the perceptual, cognitive, and motor processes of its user. It can help interface designers to understand, visualize, and measure the effect of impairment on interaction with an interface. A brief survey of different user modeling techniques is presented first, and the existing models are then classified into different categories. In the context of existing modeling approaches, the work on user modeling for people with a wide range of abilities is presented. A few applications of the simulator are also discussed, showing that its predictions are accurate enough to inform design choices, and the implications and limitations of the work are pointed out. © 2012 Copyright Taylor and Francis Group, LLC.
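One plausible way such a simulator might model the motor component is a Fitts'-law pointing-time prediction whose slope is scaled to mimic impairment. The coefficients in the sketch below are invented and are not taken from the article.

```python
# Illustrative motor-performance component of a user simulator: predicted
# pointing time from a Fitts'-law model, with the slope scaled to mimic
# motor impairment. Coefficients are invented for illustration.
import math

def pointing_time(distance_mm, target_width_mm, a=0.1, b=0.15, impairment=1.0):
    """Fitts' law: MT = a + b * log2(D / W + 1), with a multiplicative
    impairment factor applied to the information-processing slope b."""
    index_of_difficulty = math.log2(distance_mm / target_width_mm + 1)
    return a + b * impairment * index_of_difficulty

for label, factor in [("able-bodied", 1.0), ("mild impairment", 1.8), ("severe", 3.0)]:
    t = pointing_time(distance_mm=300, target_width_mm=20, impairment=factor)
    print(f"{label:>16}: predicted pointing time {t:.2f} s")
```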
Abstract:
Atlases and statistical models play important roles in the personalization and simulation of cardiac physiology. For the study of the heart, however, the construction of comprehensive atlases and spatio-temporal models is faced with a number of challenges, in particular the need to handle large and highly variable image datasets, the multi-region nature of the heart, and the presence of complex as well as small cardiovascular structures. In this paper, we present a detailed atlas and spatio-temporal statistical model of the human heart based on a large population of 3D+time multi-slice computed tomography sequences, and the framework for its construction. It uses spatial normalization based on nonrigid image registration to synthesize a population mean image and establish the spatial relationships between the mean and the subjects in the population. Temporal image registration is then applied to resolve each subject-specific cardiac motion and the resulting transformations are used to warp a surface mesh representation of the atlas to fit the images of the remaining cardiac phases in each subject. Subsequently, we demonstrate the construction of a spatio-temporal statistical model of shape such that the inter-subject and dynamic sources of variation are suitably separated. The framework is applied to a 3D+time data set of 138 subjects. The data is drawn from a variety of pathologies, which benefits its generalization to new subjects and physiological studies. The obtained level of detail and the extendability of the atlas present an advantage over most cardiac models published previously. © 1982-2012 IEEE.
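The statistical-shape-model part of such a framework can be sketched with a simple PCA over meshes in point-to-point correspondence. The example below uses random placeholder data in place of the registered population and ignores the separation of inter-subject and dynamic variation described in the paper.

```python
# Sketch of a statistical shape model: PCA over aligned surface meshes whose
# vertices are already in point-to-point correspondence. The mesh data here
# are random placeholders standing in for the registered population.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_vertices = 138, 5000
# each row: one subject's mesh flattened to (x1, y1, z1, x2, ...)
shapes = rng.normal(size=(n_subjects, n_vertices * 3))

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
U, s, Vt = np.linalg.svd(centered, full_matrices=False)   # PCA via SVD
variances = s**2 / (n_subjects - 1)
explained = np.cumsum(variances) / variances.sum()
n_modes = int(np.searchsorted(explained, 0.95) + 1)        # keep 95% of variance

def synthesize(coefficients):
    """Generate a new shape from mode coefficients (in standard deviations)."""
    coefficients = np.asarray(coefficients)
    k = len(coefficients)
    return mean_shape + (coefficients * np.sqrt(variances[:k])) @ Vt[:k]

new_mesh = synthesize([2.0, -1.0]).reshape(n_vertices, 3)
print(f"{n_modes} modes explain 95% of the variance; synthesized mesh shape {new_mesh.shape}")
```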