940 results for Essential-state models


Relevance: 30.00%

Abstract:

Atmospheric aerosol particles serving as cloud condensation nuclei (CCN) are key elements of the hydrological cycle and climate. Knowledge of the spatial and temporal distribution of CCN in the atmosphere is essential to understand and describe the effects of aerosols in meteorological models. In this study, CCN properties were measured in polluted and pristine air of different continental regions, and the results were parameterized for efficient prediction of CCN concentrations.

The continuous-flow CCN counter used for size-resolved measurements of CCN efficiency spectra (activation curves) was calibrated with ammonium sulfate and sodium chloride aerosols for a wide range of water vapor supersaturations (S = 0.068% to 1.27%). A comprehensive uncertainty analysis showed that the instrument calibration depends strongly on the applied particle generation techniques, Köhler model calculations, and water activity parameterizations (relative deviations in S up to 25%). Laboratory experiments and a comparison with other CCN instruments confirmed the high accuracy and precision of the calibration and measurement procedures developed and applied in this study.

The mean CCN number concentrations (N_CCN,S) observed in polluted mega-city air and biomass burning smoke (Beijing and Pearl River Delta, China) ranged from 1000 cm⁻³ at S = 0.068% to 16,000 cm⁻³ at S = 1.27%, which is about two orders of magnitude higher than in pristine air at remote continental sites (Swiss Alps, Amazonian rainforest). Effective average hygroscopicity parameters, κ, describing the influence of chemical composition on the CCN activity of aerosol particles were derived from the measurement data. They varied in the range of 0.3 ± 0.2, were size-dependent, and could be parameterized as a function of organic and inorganic aerosol mass fraction. At low S (≤0.27%), substantial portions of externally mixed CCN-inactive particles with much lower hygroscopicity were observed in polluted air (fresh soot particles with κ ≈ 0.01). Thus, the aerosol particle mixing state needs to be known for highly accurate predictions of N_CCN,S. Nevertheless, the observed CCN number concentrations could be efficiently approximated using measured aerosol particle number size distributions and a simple κ-Köhler model with a single proxy for the effective average particle hygroscopicity. The relative deviations between observations and model predictions were on average less than 20% when a constant average value of κ = 0.3 was used in conjunction with variable size distribution data. With a constant average size distribution, however, the deviations increased up to 100% and more. The measurement and model results demonstrate that the aerosol particle number and size are the major predictors for the variability of the CCN concentration in continental boundary layer air, followed by particle composition and hygroscopicity as relatively minor modulators. Depending on the required and applicable level of detail, the measurement results and parameterizations presented in this study can be directly implemented in detailed process models as well as in large-scale atmospheric and climate models for efficient description of the CCN activity of atmospheric aerosols.
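As a rough illustration of how such a single-κ closure works, the sketch below computes N_CCN,S for one lognormal particle mode using the standard κ-Köhler approximation of Petters and Kreidenweis (2007); the lognormal mode parameters are illustrative placeholders, not values from the study.

```python
# Minimal sketch of a kappa-Koehler CCN closure (Petters & Kreidenweis
# 2007 approximation, valid for kappa >~ 0.2); the lognormal size
# distribution parameters below are illustrative, not measured values.
import math

def kelvin_parameter(T=298.15, sigma=0.072, M_w=0.018, rho_w=1000.0):
    """Kelvin parameter A (m): 4*sigma*M_w / (R*T*rho_w)."""
    R = 8.314
    return 4.0 * sigma * M_w / (R * T * rho_w)

def critical_dry_diameter(s_percent, kappa, T=298.15):
    """Smallest dry diameter (m) activated at supersaturation s (in %)."""
    A = kelvin_parameter(T)
    ln_S = math.log(1.0 + s_percent / 100.0)
    return (4.0 * A**3 / (27.0 * kappa * ln_S**2)) ** (1.0 / 3.0)

def n_ccn_lognormal(s_percent, kappa, N_tot, D_med, sigma_g):
    """N_CCN,S: particles above the critical diameter, for one lognormal
    mode with total number N_tot (cm^-3), median diameter D_med (m) and
    geometric standard deviation sigma_g."""
    D_crit = critical_dry_diameter(s_percent, kappa)
    z = math.log(D_crit / D_med) / (math.sqrt(2.0) * math.log(sigma_g))
    return 0.5 * N_tot * math.erfc(z)

# Example: a polluted-air-like mode with kappa = 0.3 as in the study
print(n_ccn_lognormal(s_percent=0.27, kappa=0.3,
                      N_tot=10000.0, D_med=80e-9, sigma_g=1.8))
```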

Relevance: 30.00%

Abstract:

The aim of the present thesis was to investigate the influence of lower-limb joint models on musculoskeletal model predictions during gait. We started our analysis from a baseline model, i.e., a state-of-the-art lower-limb model (spherical joint at the hip and hinge joints at the knee and ankle) created from MRI of a healthy subject in the Medical Technology Laboratory of the Rizzoli Orthopaedic Institute. We then varied the models of the knee and ankle joints, including: knee and ankle joints with mean instantaneous axis of rotation, a universal joint at the ankle, a scaled-generic-derived planar knee, a subject-specific planar knee model, a subject-specific planar ankle model, a spherical knee, and a spherical ankle. The joint model combinations yielded 10 musculoskeletal models, which were run through a typical inverse dynamics problem (inverse kinematics, inverse dynamics, static optimization and joint reaction analysis, solved using the OpenSim software) to calculate joint angles, joint moments, muscle forces and activations, and joint reaction forces during 5 walking trials. The predicted muscle activations were qualitatively compared to experimental EMG to evaluate the accuracy of the model predictions. The planar joint at the knee, the universal joint at the ankle, and the spherical joints at the knee and at the ankle produced appreciable variations in model predictions during gait trials. The planar knee joint model reduced the discrepancy between the predicted activation of the rectus femoris and the EMG (with respect to the baseline model), and the reduced peak knee reaction force was considered more accurate. The use of the universal joint, with the introduction of the subtalar joint, worsened the agreement of the muscle activations with the EMG, and increased ankle and knee reaction forces were predicted. The spherical joints, in particular at the knee, also worsened the agreement of the muscle activations with the EMG, and a substantial increase of joint reaction forces at all joints was predicted despite the good agreement of the joint kinematics with those of the baseline model. The introduction of the universal joint thus had a negative effect on the model predictions; the cause of this discrepancy is likely to be found in the definition of the subtalar joint and thus in the particular subject's anthropometry used to create the model and define the joint pose. We concluded that the implementation of complex joint models does not have marked effects on the joint reaction forces during gait; computed results were similar in magnitude and in pattern to those reported in the literature. Nonetheless, the introduction of a planar joint model at the knee had a positive effect on the predictions, while the use of a spherical joint at the knee and/or at the ankle is inadvisable, because it predicted unrealistic joint reaction forces.
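As a rough illustration of how such a pipeline is typically driven, the sketch below uses the OpenSim 4.x Python bindings, with each tool configured from a setup file; all file names are hypothetical placeholders, and each setup file would point to one of the 10 musculoskeletal models and the corresponding trial data.

```python
# Hypothetical driver for the inverse-dynamics pipeline described above,
# using the OpenSim 4.x Python bindings; the setup-file names are
# placeholders, not files from the thesis.
import opensim as osim

for trial in range(1, 6):  # 5 walking trials
    # Inverse kinematics: joint angles from marker trajectories
    osim.InverseKinematicsTool(f"ik_setup_trial{trial}.xml").run()
    # Inverse dynamics: net joint moments
    osim.InverseDynamicsTool(f"id_setup_trial{trial}.xml").run()
    # Static optimization and joint reaction analysis would be configured
    # as analyses inside an AnalyzeTool setup file
    osim.AnalyzeTool(f"analyze_setup_trial{trial}.xml").run()
```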

Relevance: 30.00%

Abstract:

Bivalve mollusk shells are useful tools for multi-species and multi-proxy paleoenvironmental reconstructions with high temporal and spatial resolution. Past environmental conditions can be reconstructed from shell growth and stable oxygen and carbon isotope ratios, which provide an archive for temperature, freshwater fluxes and primary productivity. The purpose of this thesis is the reconstruction of Holocene climate and environmental variations in the North Pacific with high spatial and temporal resolution using marine bivalve shells. The thesis focuses on several different Holocene time periods and multiple regions in the North Pacific, including Japan, Alaska (AK), British Columbia (BC) and Washington State, which are affected by the monsoon, the Pacific Decadal Oscillation (PDO) and the El Niño/Southern Oscillation (ENSO). Such high-resolution proxy data from the marine realm of mid- and high latitudes are still rare; this study therefore contributes to the optimization and verification of climate models. However, before bivalves can be used for environmental reconstructions and seasonality studies, their life history traits must be well studied so that the geochemical record can be temporally aligned and interpreted. Such calibration studies are essential to ascertain the usefulness of selected bivalve species as paleoclimate proxy archives. This work focuses on two bivalve species, the short-lived Saxidomus gigantea and the long-lived Panopea abrupta. Sclerochronology and oxygen isotope ratios of different shell layers of P. abrupta were studied in order to test the reliability of this species as a climate archive. The annual increments are clearly discernible in umbonal shell portions, and increment widths should be measured in these shell portions. A reliable reconstruction of paleotemperatures may only be achieved by exclusively sampling the outer shell layer of multiple contemporaneous specimens. Life history traits (e.g., timing of growth line formation, duration of the growing season and growth rates) and stable isotope ratios of recent S. gigantea from AK and BC were analyzed in detail. Furthermore, a growth-temperature model based on S. gigantea shells from Alaska was established, which provides a better understanding of the hydrological changes related to the Alaska Coastal Current (ACC). This approach allows water temperature and salinity to be estimated independently from variations in the width of lunar daily growth increments of S. gigantea; temperature explains 70% of the variability in shell growth. The model was calibrated and tested with modern shells and then applied to archaeological specimens. The time period between 988 and 1447 cal yrs BP was characterized by colder (~1-2°C) and much drier (2-5 PSU) summers, and a likely much slower flowing ACC than at present. In contrast, the summers during the time interval of 599-1014 cal yrs BP were colder (up to 3°C) and fresher (1-2 PSU) than today; the Aleutian Low may have been stronger and the ACC was probably flowing faster during this time.
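For context, δ¹⁸O-based temperature reconstructions from aragonitic shells such as these commonly rest on a calibration of the Grossman and Ku (1986) type; the abstract does not state which calibration was used, so the following is only the standard form, not necessarily the one applied in the thesis:

```latex
% Aragonite palaeotemperature equation (Grossman & Ku 1986 form);
% \delta^{18}O_s = shell carbonate, \delta^{18}O_w = ambient water
T(^{\circ}\mathrm{C}) = 20.6 - 4.34\,\left(\delta^{18}O_{s} - \delta^{18}O_{w}\right)
```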

Relevance: 30.00%

Abstract:

In the first part of the thesis, we propose an exactly solvable one-dimensional model for fermions with long-range p-wave pairing decaying with distance as a power law. We study the phase diagram by analyzing the critical lines, the decay of correlation functions, and the scaling of the von Neumann entropy with the system size. We find two gapped regimes, in which correlation functions decay (i) exponentially at short range and algebraically at long range, or (ii) purely algebraically; in the latter, the entanglement entropy diverges logarithmically. Most interestingly, along the critical lines long-range pairing also breaks conformal symmetry, which can be detected via the dynamics of entanglement following a quench. In the second part of the thesis, we study the time evolution of the entanglement entropy for the Ising model in a transverse field varying linearly in time at different velocities. We find different regimes: an adiabatic one (small velocities), in which the system evolves according to the instantaneous ground state; a sudden quench (large velocities), in which the system is essentially frozen in its initial state; and an intermediate one, in which the entropy starts growing linearly but then displays oscillations (also as a function of the velocity). Finally, we discuss the Kibble-Zurek mechanism for the transition between the paramagnetic and the ordered phase.
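For reference, a sketch of the two Hamiltonians studied here, written in the notation standard in the long-range-pairing literature (the thesis's exact conventions may differ):

```latex
% Long-range Kitaev chain: hopping J, chemical potential mu, and p-wave
% pairing decaying as a power law d_l^{-alpha} with distance d_l
H = -J\sum_{j}\left(a_{j}^{\dagger}a_{j+1} + \mathrm{h.c.}\right)
    -\mu\sum_{j}\left(n_{j}-\tfrac{1}{2}\right)
    +\frac{\Delta}{2}\sum_{j}\sum_{l} d_{l}^{-\alpha}
     \left(a_{j}a_{j+l} + \mathrm{h.c.}\right)

% Transverse-field Ising chain with a field ramped linearly in time
% at velocity v (second part of the thesis)
H(t) = -\sum_{j}\sigma_{j}^{x}\sigma_{j+1}^{x}
       - g(t)\sum_{j}\sigma_{j}^{z},
\qquad g(t) = g_{0} + v\,t
```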

Relevance: 30.00%

Abstract:

The aim of this thesis is to investigate the nature of quantum computation and the question of the quantum speed-up over classical computation by comparing two different quantum computational frameworks: the traditional quantum circuit model and the cluster-state quantum computer. After an introductory survey of the theoretical and epistemological questions concerning quantum computation, the first part of the thesis provides a presentation of cluster-state computation suitable for a philosophical audience. In spite of the computational equivalence between the two frameworks, their differences can be considered structural. Entanglement is shown to play a fundamental role in both quantum circuits and cluster-state computers; this supports, from a new perspective, the argument that entanglement can reasonably explain the quantum speed-up over classical computation. However, quantum circuits and cluster-state computers diverge with regard to one of the explanations of quantum computation that actually accords a central role to entanglement, namely the Everett interpretation. It is argued that, while cluster-state quantum computation does not expose an Everettian failure in accounting for the computational processes, it does threaten to leave that interpretation non-explanatory. The analysis presented here should be integrated into a more general project that also covers further frameworks of quantum computation, e.g. topological quantum computation. What this work reveals, however, is that the speed-up question does not capture all that is at stake: both quantum circuits and cluster-state computers achieve the speed-up, but the challenges they pose go beyond that specific question. The existence of alternative, equivalent quantum computational models then suggests that the ultimate question should be moved from the speed-up to a sort of "representation theorem" for quantum computation, understood as the general goal of identifying the physical features underlying these alternative frameworks that allow them to be labelled "quantum computation".

Relevance: 30.00%

Abstract:

The aim of this thesis is to highlight the connections between monoidal categories, the Yang-Baxter equation, and the integrability of certain models. The principal object of our work was the Frobenius monoid and its connection to C∗-algebras. In this context, all of the proofs exploit the machinery of diagrammatic algebra. In the course of the thesis work, these proofs were reproduced in the more familiar language of multilinear algebra, in order to make the results accessible to a wider range of potential readers.
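For reference, the Yang-Baxter equation at the center of this connection, in its constant R-matrix form acting on the triple tensor product V ⊗ V ⊗ V:

```latex
% Yang-Baxter equation; R_{ij} denotes R acting on the i-th and j-th
% tensor factors of V \otimes V \otimes V (identity on the third)
R_{12}\,R_{13}\,R_{23} \;=\; R_{23}\,R_{13}\,R_{12}
```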

Relevance: 30.00%

Abstract:

In the present work, the theory of analytic second derivatives for the EOMIP-CCSD method is formulated, and its implementation in the quantum chemistry program CFOUR is described. These derivatives are needed for the determination of static polarizabilities and harmonic vibrational frequencies, and this work examines the accuracy of the EOMIP-CCSD approach in computing these properties for various radical systems. Furthermore, first and second derivatives can be used to compute vibronic coupling parameters, which are required for the simulation of molecular spectra in combination with the Köppel-Domcke-Cederbaum (KDC) model, demonstrated here for the example of the formyloxyl radical (HCO2).

The conceptually simple EOMIP-CC ansatz was chosen because the wave function of a radical system is generated from a stable closed-shell state by removal of an electron, so that the problem of symmetry breaking can be circumvented. As part of the implementation, new program modules for solving the required equations for the perturbed EOMIP-CC amplitudes and the perturbed Lagrange multipliers zeta were added to the quantum chemistry program CFOUR. The properties computed with the program are assessed against established methods such as CCSD(T). For polarizabilities and harmonic vibrational frequencies, EOMIP-CCSD theory mostly yields good results that deviate only slightly from the CCSD(T) values. Only for radicals whose corresponding anions are not stable (e.g., NH2⁻ and CH3⁻) does the EOMIP-CCSD approach fail to provide a meaningful description, owing to methodological shortcomings.

The derivatives of the EOMIP-CCSD energy can also be used to simulate vibronic couplings within the KDC model. For the coupling of different radical states in such a model potential, the derivatives of transition matrix elements play a particularly important role. These so-called coupling constants are especially easy to define and compute in EOMIP-CC theory. For the photoelectron spectrum of HCO2⁻, two alternatives are examined: vertical determination at the equilibrium geometry of the HCO2⁻ anion, and the determination of adiabatic force constants at the equilibrium geometries of the radical. When restricted to harmonic force constants, only the adiabatic model yields a qualitatively sensible description of the spectrum. If both models are extended by cubic and quartic force constants, they converge toward each other and allow a complete assignment of the measured spectrum within the first 1500 cm⁻¹, with the adiabatic representation reaching nearly quantitative accuracy.
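For reference, a sketch of the KDC model potential in its linear vibronic coupling form for two coupled states; the actual model used for HCO2 also includes the cubic and quartic force constants mentioned above:

```latex
% Linear vibronic coupling (KDC) form for two coupled electronic states;
% Q_a = normal coordinates, kappa = intrastate gradients,
% lambda = interstate coupling constants
H = \left(T_N + V_0\right)\mathbf{1} +
\begin{pmatrix}
E_1 + \sum_a \kappa_a^{(1)} Q_a & \sum_a \lambda_a Q_a \\[4pt]
\sum_a \lambda_a Q_a & E_2 + \sum_a \kappa_a^{(2)} Q_a
\end{pmatrix}
```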

Relevance: 30.00%

Abstract:

Analyzing and modeling relationships between the structure of chemical compounds, their physico-chemical properties, and biological or toxic effects in chemical datasets is a challenging task for scientific researchers in the field of cheminformatics. To ensure that the resulting models can predict unseen compounds, (Q)SAR model validation is essential. Proper validation is also one of the requirements of regulatory authorities for approving the use of (Q)SAR models in real-world scenarios as an alternative testing method. At the same time, however, the question of how to validate a (Q)SAR model is still under discussion. In this work, we empirically compare k-fold cross-validation with external test set validation. The workflow introduced here makes it possible to apply the built and validated models to large amounts of unseen data, and to compare the performance of the different validation approaches. Our experimental results indicate that cross-validation produces (Q)SAR models with higher predictivity than external test set validation and reduces the variance of the results. Statistical validation is important for evaluating the performance of (Q)SAR models, but it does not help the user better understand the properties of the model or the underlying correlations. We present the 3D molecular viewer CheS-Mapper (Chemical Space Mapper), which arranges compounds in 3D space such that their spatial proximity reflects their similarity. The user can indirectly determine similarity by selecting which features to employ in the process. The tool can use and calculate different kinds of features, such as structural fragments as well as quantitative chemical descriptors. Comprehensive functionality, including clustering, alignment of compounds according to their 3D structure, and feature highlighting, helps the chemist to better understand patterns and regularities and to relate the observations to established scientific knowledge. Even though visualization tools for analyzing (Q)SAR information in small-molecule datasets exist, integrated visualization methods that allow for the investigation of model validation results are still lacking. We propose visual validation as an approach for the graphical inspection of (Q)SAR model validation results. New functionality in CheS-Mapper 2.0 facilitates the analysis of (Q)SAR information and allows the visual validation of (Q)SAR models. The tool enables the comparison of model predictions to the actual activity in feature space. Our approach reveals whether the endpoint is modeled too specifically or too generically and highlights common properties of misclassified compounds. Moreover, the researcher can use CheS-Mapper to inspect how the (Q)SAR model predicts activity cliffs. The CheS-Mapper software is freely available at http://ches-mapper.org.
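As a minimal sketch of the two validation strategies compared here, the following uses scikit-learn on synthetic data; the estimator and dataset are placeholders, not the (Q)SAR models or compound sets of this work.

```python
# Minimal sketch of k-fold cross-validation vs. external test set
# validation with scikit-learn; the data and estimator are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, cross_val_score, train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = RandomForestClassifier(random_state=0)

# k-fold cross-validation: every compound is predicted exactly once
cv = KFold(n_splits=10, shuffle=True, random_state=0)
cv_scores = cross_val_score(model, X, y, cv=cv)
print("10-fold CV accuracy: %.3f +/- %.3f" % (cv_scores.mean(), cv_scores.std()))

# external test set validation: a single held-out split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
print("external test set accuracy: %.3f" % model.fit(X_tr, y_tr).score(X_te, y_te))
```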

Relevance: 30.00%

Abstract:

The future goal of modern physics is the discovery of physics beyond the Standard Model. One of the most significant hints of New Physics can be seen in the anomalous magnetic moment of the muon, one of the most precisely measured quantities in modern physics and the main motivation of this work. This quantity is associated with the coupling of the muon, an elementary particle, to an external electromagnetic field and is defined as a = (g - 2)/2, where g is the gyromagnetic factor of the muon. The muon anomaly has been measured with a relative accuracy of 0.5·10⁻⁶. However, a difference of 3.6 standard deviations between the direct measurement and the Standard Model prediction is observed. This could be a hint of the existence of New Physics, but it is not yet significant enough to claim an observation; thus, more precise measurements and calculations have to be performed.

The muon anomaly has three contributions, of which those from quantum electrodynamics and the weak interaction can be determined from perturbative calculations. This cannot be done for the hadronic contributions at low energies. The leading-order contribution, the hadronic vacuum polarization, can be computed via a dispersion integral, which takes as input hadronic cross section measurements from electron-positron annihilation. Hence, it is essential for a precise prediction of the muon anomaly to measure these hadronic cross sections, σ(e+e-→hadrons), with high accuracy. With a contribution of more than 70%, the final state containing two charged pions is the most important one in this context.

In this thesis, a new measurement of the σ(e+e-→π+π-) cross section and the pion form factor is performed with an accuracy of 0.9% in the dominant ρ(770) resonance region between 600 and 900 MeV at the BESIII experiment. The two-pion contribution to the leading-order (LO) hadronic vacuum polarization part of (g - 2) obtained from the BESIII result of this work is a_μ(ππ, LO, 600-900 MeV) = (368.2 ± 2.5_stat ± 3.3_sys)·10⁻¹⁰. With the result presented in this thesis, we make an important contribution toward solving the (g - 2) puzzle.
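For reference, the dispersion integral mentioned above, in its standard form over the R-ratio of measured hadronic cross sections (K(s) denotes the known QED kernel function):

```latex
% LO hadronic vacuum polarization from e+e- data;
% R(s) = sigma(e+e- -> hadrons) / sigma(e+e- -> mu+mu-)
a_{\mu}^{\mathrm{had,LO}} = \frac{\alpha^{2}}{3\pi^{2}}
\int_{m_{\pi}^{2}}^{\infty} \frac{\mathrm{d}s}{s}\, K(s)\, R(s)
```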

Relevance: 30.00%

Abstract:

The study of dielectric properties concerns the storage and dissipation of electric and magnetic energy in materials. Dielectrics are important for explaining various phenomena in solid-state physics and in the physics of biological materials. Indeed, during the last two centuries, many scientists have tried to explain and model dielectric relaxation. Starting from the Kohlrausch model and passing through the ideal Debye one, they arrived at more complex models that try to explain the experimentally observed distributions of relaxation times, including the classical ones (Cole-Cole, Davidson-Cole and Havriliak-Negami) and the more recent ones (Hilfer, Jonscher, Weron, etc.). The purpose of this thesis is to discuss a variety of such models, carrying out the analysis both in the frequency and in the time domain. Particular attention is devoted to the three classical models, which are studied using a transcendental function known as the Mittag-Leffler function. We highlight that one of the most important properties of this function, its complete monotonicity, is an essential property for the physical acceptability and realizability of the models.
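For reference, the three classical models can be subsumed in the Havriliak-Negami susceptibility, and the Cole-Cole response in the time domain is exactly a Mittag-Leffler function:

```latex
% Havriliak-Negami susceptibility; Cole-Cole (gamma = 1) and
% Davidson-Cole (alpha = 1) are special cases
\chi(\omega) = \frac{\Delta\epsilon}{\left[1 + (i\omega\tau)^{\alpha}\right]^{\gamma}},
\qquad 0 < \alpha \le 1,\; 0 < \gamma \le 1

% Mittag-Leffler function and the Cole-Cole relaxation function in time
E_{\alpha}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + 1)},
\qquad \Psi_{\alpha}(t) = E_{\alpha}\!\left[-\left(t/\tau\right)^{\alpha}\right]
```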

Relevance: 30.00%

Abstract:

The present thesis work proposes a new physical equivalent circuit model for a recently proposed semiconductor transistor, the 2-drain MSET (Multiple State Electrostatically Formed Nanowire Transistor), and presents a new software-based experimental setup developed for carrying out numerical simulations of the device and of equivalent circuits. As of 2015, we have already approached the scaling limits of the ubiquitous CMOS technology that has been at the forefront of mainstream technological advancement, so many researchers are exploring different ideas in the realm of electrical devices for logic applications, among them MSET transistors. The idea underlying MSETs is that a single multiple-terminal device could replace many traditional transistors. In particular, a 2-drain MSET is akin to a silicon multiplexer: it consists of a junction FET with independent gates but with a split drain, so that a voltage-controlled conductive path can connect either of the drains to the source. The first chapter of this work presents the theory of classical JFETs and their common equivalent circuit models: the physical model and its derivation are presented, and the current state of equivalent circuits for the JFET is discussed. A physical model of a JFET with two independent gates, derived from previous results, is presented at the end of the chapter. A review of the characteristics of the MSET device is given in chapter 2, where the proposed physical model and its formulation are presented; a listing of the SPICE model is attached as an appendix at the end of this document. Chapter 3 concerns the results of the numerical simulations of the device: first the search for a suitable geometry is discussed, and then comparisons are made between results from finite-element simulations and equivalent circuit runs. Where points of challenging divergence were found between the two numerical results, the relevant physical processes are discussed. In the fourth chapter the experimental setup is discussed. The GUI-based environments that allow exploration of the four-dimensional solution space and analysis of the physical variables inside the device are described, and it is shown how the software project has been structured to overcome technical challenges in running multiple simulations in sequence and to provide a flexible platform for future research in the field.
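For reference, a sketch of the classical square-law (Shichman-Hodges) JFET characteristics that underlie common SPICE JFET models; the thesis's two-gate and split-drain extensions generalize relations of this kind:

```latex
% Classical n-channel JFET model (Shichman-Hodges):
% triode region (V_DS < V_GS - V_T) and saturation (V_DS >= V_GS - V_T)
I_D = \beta\, V_{DS}\left[2\,(V_{GS}-V_T) - V_{DS}\right]\left(1+\lambda V_{DS}\right),
\qquad
I_D^{\mathrm{sat}} = \beta\,(V_{GS}-V_T)^{2}\left(1+\lambda V_{DS}\right)
```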

Relevance: 30.00%

Abstract:

This thesis studies the emergence of critical events in a simple neural model of the integrate-and-fire type, based on Markovian stochastic dynamical processes defined on a network. The electrical neural signal is modeled as a flow of particles. Attention is focused on the transient phase of the system, seeking to identify phenomena similar to neural synchronization, which can be considered a critical event. Particularly simple networks were studied, and the proposed model was found to be capable of producing cascade effects in the neural activity, due to Self-Organized Criticality (the self-organization of the system into unstable states); such effects are not observed in random walks on the same networks. A small random stimulus was seen to be capable of generating considerable fluctuations in the network activity, especially when the system is in a phase at the edge of equilibrium. The activity peaks thus detected were interpreted as avalanches of neural signal, a phenomenon connected to synchronization.
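As a minimal sketch of the kind of model described above, the following implements a threshold (integrate-and-fire, sandpile-like) cascade on a network; the open-chain topology, threshold and random drive are illustrative choices, not those of the thesis.

```python
# Minimal sketch of a stochastic integrate-and-fire cascade on a network;
# nodes accumulate signal particles and fire when a threshold is reached,
# passing particles to their neighbours and possibly triggering avalanches.
import random

N, THRESHOLD, STEPS = 100, 2, 2000
charge = [0] * N
avalanche_sizes = []

random.seed(0)
for _ in range(STEPS):
    charge[random.randrange(N)] += 1           # small random stimulus
    unstable = [i for i in range(N) if charge[i] >= THRESHOLD]
    size = 0
    while unstable:                            # cascade: a firing node may
        i = unstable.pop()                     # destabilize its neighbours
        if charge[i] < THRESHOLD:
            continue
        charge[i] -= THRESHOLD                 # the node fires
        size += 1
        if charge[i] >= THRESHOLD:             # may need to fire again
            unstable.append(i)
        for j in (i - 1, i + 1):
            if 0 <= j < N:                     # particles leaving the open
                charge[j] += 1                 # ends are dissipated
                if charge[j] >= THRESHOLD:
                    unstable.append(j)
    if size:
        avalanche_sizes.append(size)           # one avalanche of activity

print("largest avalanche:", max(avalanche_sizes))
```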

Relevance: 30.00%

Abstract:

The work presented in this thesis is based on the calculation of dynamical models for dwarf spheroidal galaxies, studying the problem by means of distribution functions. We used a class of distribution functions, "action-based distribution functions", which are functions of the action variables only. Fornax was described with an appropriate distribution function, and the problem of constructing dynamical models was tackled assuming either a dark matter halo with constant density distribution in the inner regions or a cuspy halo. For simplicity, spherical symmetry was assumed and the gravitational potential of the stellar component was not computed explicitly (the stars are tracers in a fixed gravitational potential). Through direct comparison with several observables, namely the projected stellar density profile and the line-of-sight velocity dispersion profile, some models representative of the dynamics of Fornax were found. Models computed with action-based distribution functions make it possible to determine anisotropy profiles self-consistently; all the computed models are characterized by an anisotropy profile with strong tangential anisotropy. The dark matter estimates of these models were then compared with the most common mass estimators used in the literature. The ratio between the total mass of the system (stellar component plus dark matter) and the stellar component of Fornax was also estimated, within 1600 pc and within 3 kpc. As a preliminary exploration, this work also presents some examples of spherical two-component models in which the gravitational field is determined by the self-gravity of the stars and by an external potential representing the dark matter halo.
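For reference, the tangential anisotropy reported above is conventionally quantified by the Binney anisotropy parameter, with β(r) < 0 for tangentially anisotropic orbits:

```latex
% Binney anisotropy parameter from the spherical velocity dispersions;
% beta < 0 means tangentially anisotropic orbits, as found for all models
\beta(r) = 1 - \frac{\sigma_{\theta}^{2}(r) + \sigma_{\varphi}^{2}(r)}{2\,\sigma_{r}^{2}(r)}
```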

Relevance: 30.00%

Abstract:

ABSTRACT: Normal pregnancy corresponds to a procoagulant state. Acute myocardial infarction during pregnancy is rare, yet, considering the low non-pregnant risk scores of women of childbearing age, it is still surprisingly frequent. We report a case of postpartum recurrent non-ST-elevation myocardial infarction in a 40-year-old Caucasian woman with essential thrombocythaemia in the presence of a positive JAK-2 mutation and an elevated anti-cardiolipin IgM antibody titer. In the majority of cases of myocardial infarction in pregnancy or in the peripartum period, atherosclerosis, a thrombus or coronary artery dissection is observed. The combination of essential thrombocythaemia and an elevated anti-cardiolipin IgM antibody titer in the presence of several cardiovascular risk factors seems to be causative in our case. In conclusion, with the continuing trend toward childbearing at older ages, rare or unlikely conditions leading to severe events such as myocardial infarction must be considered in pregnant women.

Relevance: 30.00%

Abstract:

Hypertension is a complex, multifactorial disease and contributes to the major causes of morbidity and mortality in industrialized countries: ischemic and hypertensive heart disease, stroke, peripheral atherosclerosis and renal failure. Current pharmacological therapy of essential hypertension focuses on the regulation of vascular resistance by inhibiting hormones such as catecholamines and angiotensin II, blocking them from receptor activation. The interaction of G-protein coupled receptor kinases (GRKs) and regulator of G-protein signaling (RGS) proteins with activated G-protein coupled receptors (GPCRs) affects the phosphorylation state of the receptor, leading to desensitization, and can profoundly impair signaling. Defects in GPCR regulation via these modulators have severe consequences for GPCR-stimulated biological responses in pathological situations such as hypertension, since these modulators fine-tune and balance the major transmitters of vessel constriction versus dilatation; they thus represent valuable new targets for anti-hypertensive therapeutic strategies. Elevated levels of GRKs are associated with human hypertensive disease and are relevant modulators of blood pressure in animal models of hypertension. This suggests a therapeutic perspective in a disease that has a prevalence of 65 million in the United States and is directly correlated with the occurrence of major adverse cardiac and vascular events. Therefore, therapeutic approaches that inhibit GRKs to regulate GPCRs are intriguing novel options for the treatment of hypertension and heart failure.