934 results for dynamic methods
Abstract:
Proper functioning of ion channels is a prerequisite for normal cell physiology, and disorders involving ion channels, or channelopathies, underlie many human diseases. Long QT syndromes (LQTS), for example, may arise from malfunctioning of the hERG channel, caused either by drug binding or by mutations in the HERG gene. In the first part of this thesis I present a framework to investigate the mechanism of ion conduction through the hERG channel. The free energy profile governing the elementary steps of ion translocation in the pore was computed by means of umbrella sampling simulations. Compared to previous studies, we detected a different dynamic behavior: according to our data, hERG is more likely to mediate a conduction mechanism that has been referred to as “single-vacancy-like” by Roux and coworkers (2001), rather than a “knock-on” mechanism. The same protocol was applied to a model of hERG carrying the Gly628Ser mutation, found to be a cause of congenital LQTS. The results provided interesting insights into the reasons for the malfunctioning of the mutant channel. Since they have critical functions in the viral life cycle, viral ion channels, such as the M2 proton channel, are considered attractive targets for antiviral therapy. Deep knowledge of the mechanisms that the virus employs to survive in the host cell is of primary importance for the identification of new antiviral strategies. In the second part of this thesis I shed light on the role that M2 plays in the control of the electrical potential inside the virus, since charge equilibration is a condition required to allow proton influx. Ion conduction through M2 was simulated using the metadynamics technique. Based on our results, we suggest that both an anion-mediated cation-proton exchange and a direct anion-proton exchange could contribute to explaining the activity of the M2 channel.
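Since the free energy profiles here rest on umbrella sampling, a brief sketch of that workflow may help: windows biased along the ion's position are combined by the weighted histogram analysis method (WHAM) into a single potential of mean force. Everything below (the toy double-well potential, window spacing, force constant, energy units) is an illustrative assumption, not the actual hERG setup.

```python
# Minimal 1-D umbrella sampling + WHAM sketch on a toy potential.
import numpy as np

kT = 1.0                                     # thermal energy unit
pmf_true = lambda z: 2.5 * (z**2 - 1.0)**2   # toy double-well potential

# --- synthetic biased sampling (Metropolis) in each umbrella window -----
rng = np.random.default_rng(0)
centers = np.linspace(-1.6, 1.6, 17)         # umbrella window centers
k_spring = 40.0                              # harmonic bias force constant

def sample_window(z0, n=20000, step=0.1):
    """Metropolis sampling of exp(-(U(z) + 0.5*k*(z - z0)^2)/kT)."""
    z = z0
    e = pmf_true(z) + 0.5 * k_spring * (z - z0)**2
    out = np.empty(n)
    for i in range(n):
        zt = z + rng.uniform(-step, step)
        et = pmf_true(zt) + 0.5 * k_spring * (zt - z0)**2
        if rng.random() < np.exp(-(et - e) / kT):
            z, e = zt, et
        out[i] = z
    return out

samples = [sample_window(z0) for z0 in centers]

# --- WHAM: combine the biased histograms into one unbiased P(z) ---------
edges = np.linspace(-1.7, 1.7, 86)
mids = 0.5 * (edges[:-1] + edges[1:])
hist = np.array([np.histogram(s, edges)[0] for s in samples])   # n_i(z)
N = hist.sum(axis=1)                                            # N_i
bias = 0.5 * k_spring * (mids[None, :] - centers[:, None])**2   # w_i(z)

f = np.zeros(len(centers))          # per-window free energy shifts f_i
for _ in range(1000):
    denom = (N[:, None] * np.exp(-(bias - f[:, None]) / kT)).sum(axis=0)
    p = hist.sum(axis=0) / denom                 # unbiased P(z), unnorm.
    f_new = -kT * np.log((p[None, :] * np.exp(-bias / kT)).sum(axis=1))
    f_new -= f_new[0]                            # fix additive constant
    if np.max(np.abs(f_new - f)) < 1e-8:
        break
    f = f_new

pmf = -kT * np.log(np.maximum(p, 1e-300))
pmf -= pmf.min()
barrier = pmf[np.argmin(np.abs(mids))]           # value at z = 0
print(f"reconstructed barrier: {barrier:.2f} kT (true value: 2.50 kT)")
```

In production studies the biased trajectories come from the MD engine rather than a toy Metropolis loop, but the WHAM post-processing step is the same.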
Abstract:
The objective of this thesis is the refined estimation of earthquake source parameters. For this purpose we used two different approaches, one in the frequency domain and the other in the time domain. In the frequency domain, we analyzed P- and S-wave displacement spectra to estimate the spectral parameters, that is, corner frequencies and low-frequency spectral amplitudes. We used a parametric modeling approach combined with a multi-step, non-linear inversion strategy that includes corrections for attenuation and site effects. The iterative multi-step procedure was applied to about 700 microearthquakes in the moment range 10^11 to 10^14 N·m, recorded at the dense, wide-dynamic-range seismic networks operating in the Southern Apennines (Italy). The analysis of source parameters is often complicated when we are not able to model the propagation accurately. In this case the empirical Green function approach is a very useful tool to study seismic source properties. In fact, Empirical Green Functions (EGFs) make it possible to represent the contribution of propagation and site effects to the signal without using approximate velocity models. An EGF is a recorded three-component set of time histories of a small earthquake whose source mechanism and propagation path are similar to those of the master event. Thus, in the time domain, the deconvolution method of Vallée (2004) was applied to compute relative source time functions (RSTFs) and to accurately estimate source size and rupture velocity. This technique was applied to 1) a large event, the Mw 6.3 2009 L'Aquila mainshock (Central Italy); 2) moderate events, a cluster of earthquakes of the 2009 L'Aquila sequence with moment magnitudes ranging between 3 and 5.6; and 3) a small event, the Mw 2.9 Laviano mainshock (Southern Italy).
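To make the frequency-domain step concrete, the sketch below fits an omega-square (Brune-type) source spectrum with whole-path attenuation to a displacement spectrum, recovering the low-frequency plateau and the corner frequency. The synthetic spectrum, t* value, and fit bounds are illustrative assumptions, not the thesis's actual multi-step inversion.

```python
# Fit a Brune-type displacement spectrum: Omega0 / (1 + (f/fc)^2) * e^(-pi f t*).
import numpy as np
from scipy.optimize import curve_fit

def displacement_spectrum(f, omega0, fc, t_star):
    """Omega-square source model times t* = t/Q whole-path attenuation."""
    return omega0 / (1.0 + (f / fc) ** 2) * np.exp(-np.pi * f * t_star)

# synthetic "observed" spectrum (log-spaced, multiplicative noise)
rng = np.random.default_rng(1)
f = np.logspace(-1, 1.5, 80)                       # 0.1 to ~31.6 Hz
true = displacement_spectrum(f, omega0=3e-6, fc=4.0, t_star=0.02)
obs = true * rng.lognormal(0.0, 0.15, f.size)

# fit in log amplitude, as is common for spectra spanning decades
popt, _ = curve_fit(
    lambda f, o0, fc, ts: np.log(displacement_spectrum(f, o0, fc, ts)),
    f, np.log(obs), p0=(1e-6, 1.0, 0.01),
    bounds=([1e-9, 0.1, 0.0], [1e-3, 30.0, 0.2]))

omega0, fc, t_star = popt
print(f"Omega0 = {omega0:.2e}, fc = {fc:.2f} Hz, t* = {t_star:.3f} s")
# Omega0 scales with seismic moment; fc constrains the source radius
# (e.g. Brune: r = 0.37 * beta / fc for S waves).
```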
Abstract:
During the last few years, a great deal of interest has arisen concerning the application of stochastic methods to several biochemical and biological phenomena. Phenomena like gene expression, cellular memory, and the bet-hedging strategy in bacterial growth, among many others, cannot be described by continuous stochastic models because of their intrinsic discreteness and randomness. In this thesis I have used the Chemical Master Equation (CME) technique to model some feedback cycles and analyze their properties, also in comparison with experimental data. In the first part of this work, stochastic stability is discussed for a toy model of the genetic switch that triggers cell division, whose malfunctioning is known to be one of the hallmarks of cancer. The second system I have worked on is the so-called futile cycle, a closed cycle of two enzymatic reactions that add and remove a chemical group, the phosphate group, on a specific substrate. I have investigated how noise in the enzyme copy number (usually on the order of a few hundred molecules) modifies the probability of observing a specific number of phosphorylated substrate molecules, and confirmed the theoretical predictions with numerical simulations. In the third part, the results of the study of a chain of multiple phosphorylation-dephosphorylation cycles are presented. We discuss an approximation method for the exact solution in the two-dimensional case and the relationship this method has with the thermodynamic properties of the system, which is an open system far from equilibrium. In the last section, the agreement between the theoretical prediction of the total protein quantity in a mouse cell population and the quantity measured via fluorescence microscopy is shown.
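The numerical simulations behind a CME model of the futile cycle are typically stochastic simulation (Gillespie) runs; the sketch below shows a minimal version for a single phosphorylation-dephosphorylation cycle. The copy numbers, rate constants, and mass-action propensities are illustrative assumptions, not the thesis's actual model.

```python
# Gillespie simulation of a futile cycle: S <-> Sp via kinase/phosphatase.
import numpy as np

rng = np.random.default_rng(2)
N_tot = 200                 # total substrate molecules (S + Sp)
E_kin, E_pho = 5, 5         # kinase / phosphatase copy numbers (fixed)
k_kin, k_pho = 0.01, 0.01   # per-pair mass-action rate constants

def gillespie(t_end=500.0, Sp0=0):
    """Exact stochastic simulation of the two-reaction futile cycle."""
    t, Sp = 0.0, Sp0
    times, traj = [t], [Sp]
    while t < t_end:
        a_kin = k_kin * E_kin * (N_tot - Sp)   # phosphorylation
        a_pho = k_pho * E_pho * Sp             # dephosphorylation
        a_tot = a_kin + a_pho
        t += rng.exponential(1.0 / a_tot)      # exponential waiting time
        Sp += 1 if rng.random() < a_kin / a_tot else -1
        times.append(t)
        traj.append(Sp)
    return np.array(times), np.array(traj)

t, Sp = gillespie()

# stationary statistics, weighting each state by its dwell time
dt = np.diff(t)
keep = t[:-1] > 100.0                          # discard burn-in
mean = np.average(Sp[:-1][keep], weights=dt[keep])
var = np.average((Sp[:-1][keep] - mean) ** 2, weights=dt[keep])
print(f"mean phosphorylated substrate: {mean:.1f} of {N_tot}")
print(f"Fano factor: {var / mean:.2f}")        # < 1: sub-Poissonian
```

The histogram of such trajectories is exactly what the stationary CME solution predicts, which is how theory and simulation are compared.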
Abstract:
Proper hazard identification has become progressively more difficult to achieve, as witnessed by several major accidents that took place in Europe, such as the ammonium nitrate explosion at Toulouse (2001) and the vapour cloud explosion at Buncefield (2005), whose accident scenarios had not been considered in the site safety cases. Furthermore, the rapid renewal of industrial technology has brought about the need to upgrade hazard identification methodologies. Accident scenarios of emerging technologies, which are not yet properly identified, may remain unidentified until they take place for the first time. The consideration of atypical scenarios, deviating from normal expectations of unwanted events or from worst-case reference scenarios, is thus extremely challenging. A specific method named Dynamic Procedure for Atypical Scenarios Identification (DyPASI) was developed as a complementary tool to bow-tie identification techniques. The main aim of the methodology is to provide easier but comprehensive hazard identification of the industrial process analysed, by systematizing information from early signals of risk related to past events, near misses and inherent studies. DyPASI was validated on two examples of new and emerging technologies: Liquefied Natural Gas regasification and Carbon Capture and Storage. The study broadened the knowledge of the related emerging risks and, at the same time, demonstrated that DyPASI is a valuable tool for obtaining a complete and updated overview of potential hazards. Moreover, in order to tackle the underlying causes of atypical accidents, three methods for the development of early warning indicators were assessed: the Resilience-based Early Warning Indicator (REWI) method, the Dual Assurance method and the Emerging Risk Key Performance Indicator method. REWI was found to be the most complementary and effective of the three, suggesting that its synergy with DyPASI would be an adequate strategy for improving hazard identification methodologies towards the capture of atypical accident scenarios.
Abstract:
The technology of partial virtualization is a revolutionary approach to the world of virtualization. It lies directly in between full system virtual machines (like QEMU or Xen) and application-level virtual machines (like the JVM or the CLR). The ViewOS project is the flagship of this technique, developed by the Virtual Square laboratory to provide an abstract view of the underlying system resources on a per-process basis and to overcome the Global View Assumption. Virtual Square provides several different methods to achieve partial virtualization within the ViewOS system, both at user and kernel levels. Each of these approaches has its own advantages and shortcomings. This paper provides an analysis of the different virtualization methods and of the problems related to both the generic and partial virtualization worlds. It is the result of an in-depth study and of the search for a new technology for providing partial virtualization based on ELF dynamic binaries. It starts with a brief analysis of currently available virtualization alternatives and then goes on to describe the ViewOS system, highlighting its current shortcomings. The vloader project is then proposed as a possible solution to some of these inconveniences, with a working proof of concept and examples to outline the potential of this new virtualization technique. By injecting specific code and libraries in the middle of the binary loading mechanism provided by the ELF standard, the vloader project enables a streamlined and simplified approach to tracing system calls. With the advantages outlined in the paper, this method offers better performance and portability than the currently available ViewOS implementations. Furthermore, some of its disadvantages are also discussed, along with their possible solutions.
Abstract:
This doctoral dissertation presents a new method to assess the influence of clearance in the kinematic pairs on the configuration of planar and spatial mechanisms. The subject has been widely investigated in both past and present scientific literature, and is approached in different ways: a static/kinetostatic way, which looks for the clearance take-up due to the external loads on the mechanism; a probabilistic way, which expresses clearance-due displacements using probability density functions; a dynamic way, which evaluates dynamic effects like the actual forces in the pairs caused by impacts, or the consequent vibrations. This dissertation approaches the problem of clearance from a purely kinematic perspective. With reference to a given mechanism configuration, the pose (position and orientation) error of the mechanism link of interest is expressed as a vector function of the degrees of freedom introduced in each pair by clearance: the presence of clearance in a kinematic pair, in fact, causes the actual pair to have more degrees of freedom than the theoretical clearance-free one. The clearance-due degrees of freedom are bounded by the pair geometry, and a proper modelling of clearance-affected pairs allows such bounds to be expressed through analytical functions. It is then possible to formulate a maximization problem, in which a continuous function (the pose error of the link of interest) subject to some constraints (the analytical functions bounding the clearance-due degrees of freedom) has to be maximized. Revolute, prismatic, cylindrical, and spherical clearance-affected pairs have been analytically modelled; with reference to mechanisms involving such pairs, the solution to the maximization problem has been obtained in closed form.
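The formulation lends itself to a small numerical illustration: treat the clearance-due displacements as bounded extra degrees of freedom and maximize a pose-error component under those bounds. The sketch below does this for a single planar link whose two revolute joints have radial clearances; the geometry and clearance values are illustrative assumptions, and the small-clearance closed-form bound asin((cA + cB)/L) is printed as a check.

```python
# Maximize the orientation error of a planar link under joint clearances.
import numpy as np
from scipy.optimize import minimize

L = 100.0            # nominal distance between joint centres (mm)
cA, cB = 0.05, 0.10  # radial clearances of the two revolute pairs (mm)

def orientation_error(x):
    """Link angle error for journal-centre offsets x = (eAx,eAy,eBx,eBy)."""
    eAx, eAy, eBx, eBy = x
    return np.arctan2(eBy - eAy, L + eBx - eAx)

# each journal centre is confined to a disk of radius equal to the clearance
cons = (
    {"type": "ineq", "fun": lambda x: cA**2 - x[0]**2 - x[1]**2},
    {"type": "ineq", "fun": lambda x: cB**2 - x[2]**2 - x[3]**2},
)

# maximize by minimizing the negative; start from a feasible interior point
res = minimize(lambda x: -orientation_error(x),
               x0=np.zeros(4), constraints=cons, method="SLSQP")

print(f"max orientation error: {np.degrees(-res.fun):.4f} deg")
print(f"closed-form estimate:  {np.degrees(np.arcsin((cA + cB) / L)):.4f} deg")
```

The dissertation obtains such maxima in closed form; the numerical optimizer here only shows that the constrained-maximization formulation is directly computable.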
Abstract:
Most problems in modern structural design can be described with a set of equations; the solutions of these mathematical models give the engineer and designer information during the design stage. The same holds true for physical chemistry, the branch of chemistry that uses mathematics and physics to explain real chemical phenomena. In this work two extremely different chemical processes are studied: the dynamics of an artificial molecular motor, and the generation and propagation of nerve signals in excitable cells and tissues such as neurons and axons. These two processes, in spite of their chemical and physical differences, can both be described successfully by partial differential equations: respectively, the Fokker-Planck equation and the Hodgkin-Huxley model. With the aid of advanced engineering software, these two processes have been modeled and simulated in order to extract physical information about them and to predict properties that may, in the future, prove extremely useful in the design of both molecular motors and devices whose action relies on nerve communication between active fibres.
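As a concrete illustration of the second model, the sketch below integrates the classic Hodgkin-Huxley equations for the squid giant axon as a system of ODEs, with standard textbook parameters. The thesis itself solves this model (and the Fokker-Planck equation) with engineering simulation software, so this is only a minimal stand-in for the underlying mathematics.

```python
# Hodgkin-Huxley point-neuron model under a constant current injection.
import numpy as np
from scipy.integrate import solve_ivp

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3      # uF/cm^2, mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.387          # reversal potentials (mV)

def rates(V):
    """Voltage-dependent opening/closing rates of the gating variables."""
    an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    bn = 0.125 * np.exp(-(V + 65) / 80)
    am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    bm = 4.0 * np.exp(-(V + 65) / 18)
    ah = 0.07 * np.exp(-(V + 65) / 20)
    bh = 1.0 / (1 + np.exp(-(V + 35) / 10))
    return an, bn, am, bm, ah, bh

def hh(t, y, I_ext):
    V, n, m, h = y
    an, bn, am, bm, ah, bh = rates(V)
    INa = gNa * m**3 * h * (V - ENa)        # sodium current
    IK = gK * n**4 * (V - EK)               # potassium current
    IL = gL * (V - EL)                      # leak current
    dV = (I_ext - INa - IK - IL) / C
    return [dV, an*(1-n) - bn*n, am*(1-m) - bm*m, ah*(1-h) - bh*h]

y0 = [-65.0, 0.317, 0.053, 0.596]           # resting state
sol = solve_ivp(hh, (0, 50), y0, args=(10.0,), max_step=0.05)  # 10 uA/cm^2
print(f"peak membrane potential: {sol.y[0].max():.1f} mV")  # spikes if > 0
```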
Abstract:
Transport processes of anisotropic metallic nanoparticles, such as gold nanorods, in complex liquids and/or confined geometries play an important role in a variety of biomedical and industrial applications. One route to a deep, fundamental understanding of transport mechanisms is the use of two powerful methods: dynamic light scattering (DLS) and resonance-enhanced dynamic light scattering (REDLS) near an interface. In this work, nanomolar suspensions of gold nanorods stabilized with cetyltrimethylammonium bromide (CTAB) were investigated with DLS, as well as near an interface with REDLS. With DLS, a wavelength-dependent enhancement of the anisotropic scattering was observed, which results from the excitation of the longitudinal surface plasmon resonance. The high scattering intensity near the longitudinal surface plasmon resonance frequency for rods oriented parallel to the exciting optical field made it possible to resolve the translational anisotropy in an isotropic medium. This wavelength-dependent anisotropic light scattering enables new applications, such as the investigation of single-particle dynamics in complex environments by means of depolarized dynamic light scattering. Near an interface, a strong slowdown of the translational diffusion was observed, whereas the rotation showed a pronounced but weaker slowdown. To investigate the possible influence of charge at the solid interface, the metal was coated with electrically neutral poly(methyl methacrylate) (PMMA). In a further approach, the CTAB in the gold nanorod solution was replaced by the covalently bound 16-mercaptohexadecyltrimethylammonium bromide (MTAB). This resulted in a significantly weaker slowdown.
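For readers unfamiliar with the DLS analysis chain, the sketch below shows the standard step of extracting a translational diffusion coefficient from an intensity autocorrelation function via the Siegert relation. The synthetic correlogram, solvent parameters, and scattering angle are illustrative assumptions; real nanorod data would also carry a rotational contribution, which is ignored here.

```python
# Fit g2(t) - 1 = beta * exp(-2 D q^2 t) and check via Stokes-Einstein.
import numpy as np
from scipy.optimize import curve_fit

# scattering vector q = (4 pi n / lambda) sin(theta / 2)
n_med, wavelength, theta = 1.33, 633e-9, np.radians(90)
q = 4 * np.pi * n_med / wavelength * np.sin(theta / 2)    # 1/m

def g2_minus_1(t, beta, D):
    return beta * np.exp(-2.0 * D * q**2 * t)

# synthetic "measured" correlogram for D ~ 5e-12 m^2/s
rng = np.random.default_rng(3)
t = np.logspace(-6, -1, 120)                              # lag times (s)
data = g2_minus_1(t, 0.9, 5e-12) + rng.normal(0, 5e-3, t.size)

(beta, D), _ = curve_fit(g2_minus_1, t, data, p0=(1.0, 1e-12))
print(f"D = {D:.2e} m^2/s")

# Stokes-Einstein hydrodynamic radius as a consistency check (water, 20 C)
kB, T, eta = 1.380649e-23, 293.15, 1.0e-3
print(f"R_h = {kB * T / (6 * np.pi * eta * D) * 1e9:.1f} nm")
```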
Abstract:
In many industries, for example the automotive industry, digital mock-ups are used to verify the design and function of a product on a virtual prototype. One application is the verification of safety distances between individual components, the so-called distance analysis. For selected components, engineers determine whether they maintain a given safety distance to the surrounding components, both at rest and during a motion. If components fall below the safety distance, their shape or position must be changed. For this, it is important to know exactly which regions of the components violate the safety distance.

In this work we present a solution for the real-time computation of all regions of two geometric objects that fall below the safety distance. Each object is given as a set of primitives (e.g. triangles). For every instant at which a transformation is applied to one of the objects, we compute the set of all primitives falling below the safety distance and call it the set of all tolerance-violating primitives. We present a complete solution, which can be divided into the following three major topics.

In the first part of this work we investigate algorithms that check, for two triangles, whether they are tolerance-violating. We present several approaches for triangle-triangle tolerance tests and show that dedicated tolerance tests are considerably faster than the distance computations used so far. The focus of our work is the development of a novel tolerance test that operates in dual space. In all our benchmarks for the computation of all tolerance-violating primitives, our dual-space approach proves to be the fastest.

The second part of this work deals with data structures and algorithms for the real-time computation of all tolerance-violating primitives between two geometric objects. We develop a combined data structure composed of a flat hierarchical data structure and several uniform grids. To guarantee efficient run times, it is essential to account for the required safety distance in the design of the data structures and the query algorithms. We present solutions that quickly determine the set of primitive pairs to be tested. Furthermore, we develop strategies for recognizing primitives as tolerance-violating without computing an expensive primitive-primitive tolerance test. In our benchmarks we show that our solutions are able to compute, in real time, all tolerance-violating primitives between two complex geometric objects, each consisting of many hundreds of thousands of primitives.

In the third part we present a novel, memory-optimized data structure, which we call Shrubs, for managing the cell contents of the uniform grids used above. Previous approaches to the memory optimization of uniform grids rely mainly on hashing methods, which, however, do not reduce the memory consumption of the cell contents. In our application, neighboring cells often have similar contents. Our approach is able to losslessly compress the memory footprint of the cell contents of a uniform grid, exploiting the redundant cell contents, to one fifth of its previous size, and to decompress it at run time.

Finally, we show how our solution for computing all tolerance-violating primitives can be applied in practice. Besides pure distance analysis, we show applications to various path-planning problems.
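To illustrate the broad-phase idea from the second part, the sketch below implements a uniform grid whose cell edge equals the safety distance, so that all candidate pairs closer than the safety distance are found by scanning only the 3x3x3 block of neighboring cells. Points stand in for triangle primitives, and all names and parameters are illustrative assumptions, not the dissertation's actual combined data structure.

```python
# Uniform-grid broad phase for "closer than the safety distance" queries.
from collections import defaultdict
from itertools import product
import math

def build_grid(points, cell):
    """Hash each point into its integer grid cell."""
    grid = defaultdict(list)
    for i, (x, y, z) in enumerate(points):
        grid[(int(x // cell), int(y // cell), int(z // cell))].append(i)
    return grid

def violating_pairs(pts_a, pts_b, safety):
    """Index pairs (i, j) with distance(pts_a[i], pts_b[j]) < safety."""
    cell = safety                      # cell edge = safety distance
    grid_b = build_grid(pts_b, cell)
    pairs = []
    for i, (x, y, z) in enumerate(pts_a):
        cx, cy, cz = int(x // cell), int(y // cell), int(z // cell)
        # any point within `safety` of (x, y, z) lies in a 3x3x3 cell block
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            for j in grid_b.get((cx + dx, cy + dy, cz + dz), ()):
                if math.dist((x, y, z), pts_b[j]) < safety:
                    pairs.append((i, j))
    return pairs

a = [(0.0, 0.0, 0.0), (5.0, 5.0, 5.0)]
b = [(0.3, 0.4, 0.0), (9.0, 9.0, 9.0)]
print(violating_pairs(a, b, safety=1.0))   # -> [(0, 0)]
```

For triangles rather than points, the narrow-phase distance check would be replaced by a triangle-triangle tolerance test such as the dual-space test developed in the first part.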
Abstract:
Auditory imagery is more than just mental “replaying” of tunes in one’s head. I will review several studies that capture characteristics of complex and active imagery tasks, using both behavioral and neuroscience approaches. I use behavioral methods to capture people’s ability to make emotion judgments about both heard and imagined music in real time. My neuroimaging studies look at the neural correlates of encoding an imagined melody, anticipating an upcoming tune, and imagining tunes backwards. Several studies show voxel-by-voxel correlations of neural activity with self-reported imagery vividness. These studies speak to the ways in which musical imagery allows us not just to remember music, but also to use those memories to judge temporally changing aspects of the musical experience.
Abstract:
For nonsurgical treatment of fractures of the proximal phalanges of the triphalangeal fingers, different dynamic casts have been described. The main principle behind these casts is advancement and tightening of the extensor hood, caused by a combination of blocking the metacarpophalangeal joints in flexion and actively flexing the proximal interphalangeal joints. In contrast to established treatment protocols using functional forearm casts, the Lucerne cast allows for free mobilization of the wrist joint. The purpose of the current multicenter study was to compare the results of conservative, functional treatment using 2 different methods, either a forearm cast or a Lucerne cast.
Abstract:
BACKGROUND AND PURPOSE: Perfusion CT (P-CT) is used in acute stroke management but not, so far, for evaluating epilepsy. To test the hypothesis that P-CT may identify patients with increased regional cerebral blood flow during subtle status epilepticus (SSE), we compared P-CT findings in SSE with those in different postictal conditions. METHODS: Fifteen patients (mean age 47 years, range 21-74) underwent P-CT immediately after evaluation in our emergency room. Asymmetry indices between the affected and unaffected hemispheres were calculated for regional cerebral blood volume (rCBV), regional cerebral blood flow (rCBF), and mean transit time (MTT). Regional perfusion changes were compared with EEG findings. RESULTS: Three patients in subtle status epilepticus (group 1) had increased regional perfusion with an electro-clinical correlate. Six patients showed postictal slowing on EEG corresponding to an area of regional hypoperfusion (group 2). CT and EEG were normal in six patients with a first epileptic seizure (group 3). Cluster analysis of the asymmetry indices separated SSE from the other two groups in all three parameters, while rCBF helped to distinguish between chronic focal epilepsies and single events. CONCLUSION: These preliminary results indicate that P-CT may help to identify patients with SSE during the emergency workup. The technique provides important information to neurologists and emergency physicians in the difficult clinical differential diagnosis of altered mental status due to subtle status epilepticus.
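The abstract does not give the exact definition of its asymmetry indices, so the small sketch below assumes one common normalized form, AI = (affected - unaffected) / ((affected + unaffected) / 2), applied to the three perfusion parameters; the region-of-interest values are made up for illustration.

```python
# Interhemispheric asymmetry index for perfusion parameters (assumed form).
def asymmetry_index(affected, unaffected):
    """Signed asymmetry, in percent of the interhemispheric mean."""
    return 100.0 * (affected - unaffected) / ((affected + unaffected) / 2)

# illustrative region-of-interest means for the three P-CT parameters
params = {"rCBV": (4.8, 4.1), "rCBF": (62.0, 48.0), "MTT": (4.2, 5.1)}
for name, (aff, unaff) in params.items():
    print(f"{name}: AI = {asymmetry_index(aff, unaff):+.1f} %")
# a positive rCBF asymmetry would fit ictal hyperperfusion (group 1),
# a negative one postictal hypoperfusion (group 2)
```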
Abstract:
OBJECTIVES: To test whether dynamic contour tonometry yields ocular pulse amplitude (OPA) measurements that are independent of corneal thickness and curvature, and to assess variables of observer agreement. METHODS: In a multivariate cluster analysis of 223 eyes, the relationship between central corneal thickness, corneal curvature, axial length, anterior chamber depth, intraocular pressure, sex, age, and OPA measurements was assessed. Intraobserver and interobserver variabilities were calculated from repeated measurements obtained from 8 volunteers by 4 observers. RESULTS: The OPA readings were not affected by central corneal thickness (P = .08), corneal curvature (P = .47), anterior chamber depth (P = .80), age (P = .60), or sex (P = .73). There was a positive correlation between OPA and intraocular pressure (0.12 mm Hg/1 mm Hg of intraocular pressure; P<.001) and a negative correlation between OPA and axial length (0.27 mm Hg/1 mm of length; P<.001). Intraobserver and interobserver variabilities were 0.08 and 0.02 mm Hg, respectively, and the intraclass correlation coefficient was 0.89. CONCLUSIONS: The OPA readings obtained with dynamic contour tonometry in healthy subjects are not influenced by the structure of the anterior segment of the eye but are affected by intraocular pressure and axial length. We found high agreement within and between observers.
Abstract:
PURPOSE OF REVIEW: Predicting asthma episodes is notoriously difficult, yet reliable prediction would have significant consequences for the individual as well as for healthcare services. The purpose of this review is to describe recent insights into the prediction of acute asthma episodes based on classical clinical, functional or inflammatory variables, and to present a new concept that evaluates asthma as a dynamically regulated homeokinetic system. RECENT FINDINGS: Risk prediction for asthma episodes or relapse has been attempted using clinical scoring systems, considerations of environmental factors and lung function, as well as inflammatory and immunological markers in induced sputum or exhaled air; these attempts are summarized here. We have recently proposed that newer mathematical methods derived from statistical physics may be used to understand the complexity of asthma as a homeokinetic, dynamic system consisting of a network of multiple components, and to assess the risk of future asthma episodes based on fluctuation analysis of long time series of lung function. SUMMARY: Apart from the classical analysis of risk factors and functional parameters, this new approach may be used to assess asthma control and treatment effects in the individual as well as in future research trials.
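The abstract does not name the specific fluctuation-analysis algorithm, so the sketch below assumes detrended fluctuation analysis (DFA), a standard statistical-physics method for detecting long-range correlations in physiological time series; the synthetic peak-flow record is illustrative.

```python
# DFA-1 of a long lung-function time series (synthetic daily peak flow).
import numpy as np

def dfa(x, scales):
    """Return the fluctuation function F(s) for each window size s."""
    y = np.cumsum(x - np.mean(x))           # integrated profile
    F = []
    for s in scales:
        n_win = len(y) // s
        segs = y[:n_win * s].reshape(n_win, s)
        t = np.arange(s)
        # detrend each window with a least-squares line, collect RMS
        ms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)
            ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(ms)))
    return np.array(F)

# synthetic record: ~600 days of uncorrelated noise around 400 l/min
rng = np.random.default_rng(4)
x = 400.0 + 30.0 * rng.normal(size=600)

scales = np.array([8, 16, 32, 64, 128])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(f"DFA scaling exponent alpha = {alpha:.2f}")
# alpha near 0.5: uncorrelated fluctuations; values approaching 1 signal
# long-range correlations, the kind of behavior such analyses quantify
```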
Abstract:
OBJECTIVE: The aim of this study was to compare the results of tendency-oriented perimetry (TOP) and a dynamic strategy in Octopus perimetry as screening methods in clinical practice. DESIGN: A prospective single-centre observational case series was performed. PARTICIPANTS AND METHODS: In a newly opened general ophthalmologic practice, 89 consecutive patients (171 eyes) with a clinical indication for Octopus static perimetry (ocular hypertension or suspicious optic nerve cupping) were examined prospectively with both TOP and a dynamic strategy. The visual fields were graded by 3 masked observers as normal, borderline or abnormal, without any further clinical information. RESULTS: 83% of eyes showed the same result with both strategies. In 14% there was a small difference (one visual field abnormal or normal, the other borderline). In only 2.9% of the eyes (5 cases) was there a contradictory result. In 4 of these 5 cases the dynamic visual field was abnormal and the TOP field was normal; 4 of these cases returned for a second examination, and in all 4 the follow-up examination showed a normal second dynamic visual field. CONCLUSIONS: Octopus static perimetry using the TOP strategy is a fast, patient-friendly and very reliable screening tool for general ophthalmological practice. We found no false-negative results in our series.