12 results for Minimal Supersymmetric Standard Model (MSSM)
at Université de Lausanne, Switzerland
Abstract:
Introduction: Although the pig is a standard model for the evaluation of various diseases in humans, including coagulopathy, it is not clear whether results in animals can be extrapolated to man. Materials and methods: In 75 anesthetized pigs, we assessed reagent-supported thrombelastometry (ExTEM®), platelet-blocked thrombelastometry (FibTEM®), and aprotinin thrombelastometry (ApTEM®). Results were compared to values from 13 anesthetized humans. Results (median, 95% CI): ExTEM®: While clot strength was comparable in pigs (66 mm, 65-67 mm) and in humans (64 mm, 60-68 mm; NS), clotting time in animals was longer (pigs 64 s, 62-66 s; humans 55 s, 49-71 s; P<0.05) and clot formation time shorter (pigs 52 s, 49-54 s; humans 83 s, 67-98 s; P<0.001). The clot lysis index at 30 minutes was lower in animals (96.9%, 95.1-97.3%) than in humans (99.5%, 98.6-99.9%; P<0.001). ApTEM® showed no hyperfibrinolysis in animals. Modification of the anesthesia protocol in animals resulted in significant ExTEM® changes. FibTEM®: Complete platelet inhibition yielded a significantly higher platelet contribution to clot strength in pigs (79%, 76-81%) than in humans (73%, 71-77%; P<0.05), whereas the fibrinogen contribution to clot strength was higher in humans (27%, 24-29%) than in animals (21%, 19-24%; P<0.05). Conclusions: Maximum clot firmness is comparable in human and porcine blood. However, clot lysis, the platelet and fibrinogen contributions to clot strength, and the initiation and propagation of clotting differ considerably between pigs and humans. In addition, anesthetic drugs seem to influence thrombelastometry in animals. Accordingly, coagulation abnormalities in pigs subjected to disease may not necessarily represent the coagulation profile of sick patients. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
Introduction: In my thesis I argue that economic policy is about both economics and politics. Consequently, analysing and understanding economic policy ideally involves at least two parts. The economics part is centered on the expected impact of a specific policy on the real economy, in terms of both efficiency and equity; its insights indicate in which direction the fine-tuning of economic policies should go. However, the fine-tuning of economic policies will most likely be subject to political constraints. That is why, in the politics part, a much better understanding can be gained by taking into account how the incentives of politicians and special interest groups, as well as the role played by different institutional features, affect the formation of economic policies.

The first part and chapter of my thesis concentrates on the efficiency-related impact of economic policies: how does corporate income taxation in general, and corporate income tax progressivity in particular, affect the creation of new firms? Reduced progressivity and flat-rate taxes are in vogue: by 2009, 22 countries were operating flat-rate income tax systems, as were 7 US states and 14 Swiss cantons (for corporate income only). Tax reform proposals in the spirit of the "flat tax" model typically aim to reduce three parameters: the average tax burden, the progressivity of the tax schedule, and the complexity of the tax code. In joint work, Marius Brülhart and I explore the implications of changes in these three parameters for entrepreneurial activity, measured by counts of firm births in a panel of Swiss municipalities. Our results show that lower average tax rates and reduced complexity of the tax code promote firm births. Controlling for these effects, reduced progressivity inhibits firm births. Our reading of these results is that tax progressivity has an insurance effect that facilitates entrepreneurial risk taking. The positive effects of lower tax levels and reduced complexity are estimated to be significantly stronger than the negative effect of reduced progressivity. To the extent that firm births reflect desirable entrepreneurial dynamism, the key to successful tax reform is not the flattening of tax schedules but the lowering of average tax burdens and the simplification of tax codes. Flatness per se is of secondary importance and even appears to be detrimental to firm births.

The second part of my thesis, which corresponds to the second and third chapters, concentrates on how economic policies are formed. By the nature of the analysis, these two chapters draw on a broader literature than the first chapter. Both economists and political scientists have done extensive research on how economic policies are formed, and researchers in both disciplines have recognised the importance of special interest groups trying to influence policy-making through various channels. In general, economists base their analysis on a formal and microeconomically founded approach while abstracting from institutional details, whereas political scientists' frameworks are generally richer in institutional features but lack the theoretical rigour of economists' approaches. I start from the economist's point of view, but I borrow as much as possible from the findings of political science to gain a better understanding of how economic policies are formed in reality.
In the second chapter, I take a theoretical approach and focus on the institutional policy framework to explore how interactions between different political institutions affect the outcome of trade policy in the presence of lobbying by special interest groups. Standard political economy theory treats the government as a single institutional actor that sets tariffs by trading off social welfare against contributions from special interest groups seeking industry-specific protection from imports. However, these models lack important (institutional) features of reality. That is why, in my model, I split the government into a legislative and an executive branch, both of which can be lobbied by special interest groups. Furthermore, the legislative branch has the option to delegate its trade policy authority to the executive, and I allow the executive to compensate the legislative branch in exchange for delegation. Despite ample anecdotal evidence, bargaining over the delegation of trade policy authority has not yet been formally modelled in the literature. I show that delegation affects policy formation in that it leads to lower equilibrium tariffs than in a standard model without delegation. I also show that delegation will only take place if the lobby is not strong enough to prevent it. Furthermore, the option to delegate increases the bargaining power of the legislative branch at the expense of the lobbies. The findings of this model can therefore shed light on why the U.S. Congress often delegates trade policy authority to the executive.

In the final chapter of my thesis, my coauthor, Antonio Fidalgo, and I take a narrower approach and focus on policy-making at the level of the individual politician, exploring how connections to private firms and networks within parliament affect individual politicians' decision-making. Theories in the spirit of the model of the second chapter show how campaign contributions from lobbies to politicians can influence economic policies, and an abundant empirical literature analyses ties between firms and politicians based on campaign contributions. However, the evidence on the impact of campaign contributions is mixed at best. In our paper, we analyse an alternative channel of influence: personal connections between politicians and firms through board membership. We identify a direct effect of board membership on individual politicians' voting behaviour and an indirect leverage effect when politicians with board connections influence non-connected peers. We assess the importance of these two effects using a vote in the Swiss parliament on a government bailout of the national airline, Swissair, in 2001, which serves as a natural experiment. We find that both the direct effect of connections to firms and the indirect leverage effect had a strong and positive impact on the probability that a politician supported the government bailout.
Abstract:
PURPOSE: EEG and somatosensory evoked potentials are highly predictive of poor outcome after cardiac arrest; their accuracy for predicting good recovery is, however, low. We evaluated whether the addition of an automated mismatch negativity-based auditory discrimination paradigm (ADP) to EEG and somatosensory evoked potentials improves the prediction of awakening. METHODS: EEG and ADP were prospectively recorded in 30 adults during therapeutic hypothermia and in normothermia. We studied the progression of auditory discrimination in single-trial multivariate analyses from therapeutic hypothermia to normothermia, and its correlation with outcome at 3 months, assessed with cerebral performance categories. RESULTS: At 3 months, 18 of 30 patients (60%) survived; 5 had severe neurologic impairment (cerebral performance category 3) and 13 had a good recovery (cerebral performance categories 1-2). All 10 subjects showing an improvement of auditory discrimination from therapeutic hypothermia to normothermia regained consciousness: ADP was 100% predictive of awakening. The addition of ADP significantly improved mortality prediction (area under the curve 0.77 for the standard model including clinical examination, EEG and somatosensory evoked potentials, versus 0.86 after adding ADP; P = 0.02). CONCLUSIONS: This automated ADP significantly improves early coma prognostic accuracy after cardiac arrest and therapeutic hypothermia. The progression of auditory discrimination is strongly predictive of favorable recovery and appears complementary to existing prognosticators of poor outcome. Before routine implementation, validation in larger cohorts is warranted.
Abstract:
The function of most proteins is not determined experimentally but is extrapolated from homologs. According to the "ortholog conjecture", or standard model of phylogenomics, protein function changes rapidly after duplication, leading to paralogs with different functions, while orthologs retain the ancestral function. We report here that a comparison of experimentally supported functional annotations among homologs from 13 genomes mostly supports this model. We show that, to analyze GO annotations effectively, several confounding factors need to be controlled for: authorship bias, variation of GO term frequency among species, variation of background similarity among species pairs, and propagated annotation bias. After controlling for these biases, we observe that orthologs generally have more similar functional annotations than paralogs; this effect is especially strong for sub-cellular localization. We observe only a weak decrease in functional similarity with increasing sequence divergence. These findings hold over a large diversity of species; notably, orthologs from model organisms such as E. coli, yeast or mouse have functions conserved with those of human proteins.
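The quantity at the heart of such a comparison is some measure of functional-annotation similarity between a pair of homologs. As a toy illustration only, and not the study's actual pipeline (which additionally controls for the biases listed above), a simple Jaccard index over GO term sets could look like the sketch below; the annotation sets are arbitrary examples chosen for illustration.

def go_jaccard(terms_a, terms_b):
    # Jaccard similarity between two sets of GO term identifiers.
    a, b = set(terms_a), set(terms_b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Arbitrary example annotation sets (illustration only):
human_gene     = {"GO:0005737", "GO:0006096", "GO:0004365"}
mouse_ortholog = {"GO:0005737", "GO:0006096"}
human_paralog  = {"GO:0005634", "GO:0004365"}

print(go_jaccard(human_gene, mouse_ortholog))  # expected to be higher for the ortholog pair
print(go_jaccard(human_gene, human_paralog))   # than for the paralog pair, per the study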
Abstract:
Executive Summary: The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we apply an idea from fuzzy set theory to the optimal portfolio selection problem, while the third part is, to the best of our knowledge, the first empirical application of some general results on asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds.

Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative agent model that addresses some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation intended to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one.

Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that the realized returns have better distributional characteristics than the realized returns of portfolio strategies that are optimal with respect to a single performance measure. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance.
We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to realized portfolio returns that first-order stochastically dominate those resulting from optimization with respect to a single measure such as the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e. the sequence of expected shortfalls over a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures was above the one corresponding to each individual measure, we were led to conclude that the algorithm we propose yields a distribution of portfolio returns that second-order stochastically dominates those obtained from virtually all of the individual performance measures considered.

Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index relative to the base market: the bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests an inherent weakness in any attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results concerning the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
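As a rough illustration of the second-order check just described, the following minimal Python sketch (not taken from the thesis) compares empirical absolute Lorenz curves, i.e. expected-shortfall-type quantities over a grid of quantiles, for two realized-return series; the return series below are simulated placeholders.

import numpy as np

def absolute_lorenz(returns, quantiles):
    # Empirical absolute Lorenz curve: for each quantile q, q times the mean
    # of the worst ceil(q*n) returns (an expected-shortfall-type quantity).
    x = np.sort(np.asarray(returns, dtype=float))
    n = len(x)
    return np.array([q * x[: max(1, int(np.ceil(q * n)))].mean() for q in quantiles])

def second_order_dominates(returns_a, returns_b, n_grid=99):
    # Pointwise check: A second-order stochastically dominates B (on this grid)
    # if A's absolute Lorenz curve lies on or above B's at every quantile.
    qs = np.linspace(0.01, 1.0, n_grid)
    return bool(np.all(absolute_lorenz(returns_a, qs) >= absolute_lorenz(returns_b, qs)))

# Simulated placeholder series standing in for realized portfolio returns:
rng = np.random.default_rng(0)
aggregated = rng.normal(0.010, 0.04, 240)  # aggregated-measure portfolio (placeholder)
single_pm  = rng.normal(0.005, 0.05, 240)  # single-measure portfolio (placeholder)
print(second_order_dominates(aggregated, single_pm))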
Abstract:
We use cryo-electron microscopy (cryo-EM) to study the 3D shapes of 94-bp-long DNA minicircles and address the question of whether cyclization of such short DNA molecules necessitates the formation of sharp, localized kinks in DNA or whether the necessary bending can be redistributed and accomplished within the limits of the elastic, standard model of DNA flexibility. By comparing the shapes of covalently closed, nicked and gapped DNA minicircles, we conclude that 94-bp-long covalently closed and nicked DNA minicircles do not show sharp kinks, while gapped DNA molecules, containing very flexible single-stranded regions, do show sharp kinks. We corroborate the results of cryo-EM studies by using Bal31 nuclease to probe for the existence of kinks in 94-bp-long minicircles.
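For scale, a back-of-the-envelope estimate under the elastic (worm-like-chain) picture, using commonly quoted values of about 50 nm for the DNA bending persistence length and 0.34 nm rise per base pair (values assumed here, not taken from this study): the bending energy of a smoothly closed circle of contour length L is

\[
E_{\mathrm{bend}} \;=\; \tfrac{1}{2}\,\ell_p\,k_BT \oint \kappa^{2}\,ds
\;=\; \frac{2\pi^{2}\,\ell_p}{L}\,k_BT
\;\approx\; \frac{2\pi^{2}\times 50\ \mathrm{nm}}{94\times 0.34\ \mathrm{nm}}\,k_BT
\;\approx\; 31\,k_BT,
\]

which illustrates why it matters whether a 94-bp minicircle bends smoothly within the elastic model or relieves stress through localized kinks.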
Abstract:
The LHCb experiment is being built at the future LHC accelerator at CERN. It is a forward single-arm spectrometer dedicated to precision measurements of CP violation and of rare decays in the b-quark sector. It is presently finishing its R&D and final design stage; construction has already started for the magnet and the calorimeters. In the Standard Model, CP violation arises via the complex phase of the 3x3 CKM (Cabibbo-Kobayashi-Maskawa) quark mixing matrix. The LHCb experiment will test the unitarity of this matrix by measuring, in several theoretically unrelated ways, all angles and sides of the so-called "unitarity triangle". This will make it possible to over-constrain the model and, hopefully, to exhibit inconsistencies that would signal physics beyond the Standard Model. Vertex reconstruction is a fundamental requirement for the LHCb experiment: displaced secondary vertices are a distinctive signature of b-hadron decays, and this signature is used in the LHCb topological trigger. The Vertex Locator (VeLo) has to provide precise measurements of track coordinates close to the interaction region. These are used to reconstruct particle trajectories and the production and decay vertices of beauty hadrons, and to provide accurate measurements of their decay lifetimes. The VeLo electronics is an essential part of the data acquisition system and must conform to the overall LHCb electronics specification. The design of the electronics must maximise the signal-to-noise ratio in order to achieve the best track reconstruction performance in the detector. The electronics was designed in parallel with the development of the silicon detector and went through several prototyping phases, which are described in this thesis.
Abstract:
The Belle experiment is located at the KEK research centre (Japan) and is primarily devoted to the study of CP violation in the B-meson sector. Belle sits on the KEKB collider, one of the two currently running "B factories", which produce B anti-B pairs. KEKB has produced more than 150 million pairs in total, a world record for this kind of collider. This large sample allows very precise measurements in beauty-meson physics, and the present analysis falls within the framework of these precision measurements. One of the most remarkable phenomena in high-energy physics is the ability of the weak interaction to couple a neutral meson to its anti-meson. In this work, we study the coupling of the neutral B meson to the neutral anti-B meson, which induces an oscillation whose frequency Δm_d can be measured accurately. Besides the intrinsic interest of this phenomenon, the measurement plays an important role in the quest for the origin of CP violation, which the standard model of electroweak interactions does not include in a fully satisfactory way. The search for as yet unexplained physical phenomena is therefore the main motivation of the Belle collaboration. Many measurements of Δm_d have been performed before; the present work, however, reaches a precision on Δm_d never attained before, thanks to the excellent performance of KEKB and to an original approach that considerably reduces the contamination of the sample by unwanted events. This approach has already been used successfully by other collaborations, in conditions slightly different from those at Belle. The method consists in partially reconstructing one of the B mesons in the channel B̄⁰ → D*⁺(→ D⁰π⁺) ℓ⁻ ν̄, using only the information on the lepton ℓ and the pion π. The information on the other B meson of the initial B anti-B pair is extracted from a single high-energy lepton. The available sample therefore does not suffer from the large reduction entailed by a complete reconstruction, while the contamination from charged B mesons, which KEKB produces in the same quantity as neutral ones, is strongly reduced compared with an inclusive analysis. We finally obtain Δm_d = 0.513 ± 0.006 ± 0.008 ps⁻¹, where the first error is statistical and the second systematic.

What is matter made of, and how does it hold together? These are the questions that research in high-energy physics tries to answer. This research proceeds on two constantly interacting levels: on the one hand, theoretical models are built to understand and describe the observations; on the other hand, the observations themselves are made through high-energy collisions of elementary particles. In this way the existence of four fundamental forces and of 24 elementary constituents, classified into "quarks" and "leptons", has been established. This is one of the finest achievements of the model in use today, called the "Standard Model". There is, however, a fundamental observation that the Standard Model struggles to explain: the almost complete disappearance of antimatter (the "negative" of matter). At the fundamental level, this must correspond to an asymmetry between particles (the constituents of matter) and antiparticles (the constituents of antimatter), known as CP asymmetry (or CP violation). Although included in the Standard Model, this asymmetry appears to be only partially accounted for, and its origin is unknown. Intense research is therefore under way to shed light on it, and the Belle experiment in Japan is one of the pioneers. Belle studies the physics of a family of particles called "B mesons", which are known to be closely related to CP asymmetry. This thesis is part of that research. We studied a remarkable property of the neutral B meson: its oscillation with its anti-meson, i.e. the ability of this particle to turn into its associated antiparticle. This oscillation is clearly connected to CP asymmetry. Here we determined the frequency of this oscillation with an as yet unequalled precision. The method consists in characterizing a pair of B mesons through decays that each include a lepton; greater precision is obtained by also looking for a particle called a pion, which comes from the decay of one of the mesons. Besides the interest of this oscillatory phenomenon in itself, the measurement helps, directly or indirectly, to refine the Standard Model. In time it may also help to elucidate the mystery of the asymmetry between matter and antimatter.
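For context, the oscillation frequency quoted above enters the standard textbook expressions (given here for illustration, not taken from the thesis, and neglecting CP violation in mixing and the width difference) for the time-dependent probabilities that a meson produced as a B⁰ decays with the same or the opposite flavour:

\[
P_{\mathrm{unmixed}}(t) \;=\; \frac{e^{-t/\tau_{B^0}}}{2\,\tau_{B^0}}\bigl[1+\cos(\Delta m_d\,t)\bigr],
\qquad
P_{\mathrm{mixed}}(t) \;=\; \frac{e^{-t/\tau_{B^0}}}{2\,\tau_{B^0}}\bigl[1-\cos(\Delta m_d\,t)\bigr].
\]

With Δm_d ≈ 0.513 ps⁻¹ the oscillation period 2π/Δm_d is roughly 12 ps, several times the B⁰ lifetime of about 1.5 ps, so only a fraction of the mesons oscillate before decaying.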
Abstract:
The aim of the present study was to identify Candida albicans transcription factors (TFs) involved in virulence. Although mice are considered the gold-standard model for studying fungal virulence, mini-host infection models have been used increasingly. Here, barcoded TF mutants were first screened in mice in pools of strains, and fungal burdens (FBs) were quantified in kidneys. Mutants of unannotated genes that generated a kidney FB significantly different from that of the wild type were selected and examined individually in Galleria mellonella. In addition, mutants that could not be detected in mice were also tested in G. mellonella. Only 25% of these mutants displayed matching phenotypes in both hosts, highlighting a significant discrepancy between the two models. To address the basis of this difference (pool or host effects), a set of 19 mutants tested in G. mellonella was also injected individually into mice. Matching FB phenotypes were observed in 50% of the cases, highlighting the bias due to host effects. In contrast, 33.4% concordance was observed between pool and single-strain infections in mice, thereby highlighting the bias introduced by the "pool effect." After filtering the results obtained from the two infection models, mutants for MBF1 and ZCF6 were selected. Independent marker-free mutants were subsequently tested in both hosts to validate the previous results. The MBF1 mutant showed impaired infection in both models, while the ZCF6 mutant showed a significant phenotype only in mouse infections. The two mutants showed no obvious in vitro phenotypes compared with the wild type, indicating that these genes might be specifically involved in in vivo adaptation.
Abstract:
A method of objectively determining imaging performance for a mammography quality assurance programme for digital systems was developed. The method is based on the assessment of the visibility of a 0.2-mm spherical microcalcification using a quasi-ideal observer model. It requires measurement of the spatial resolution (modulation transfer function) and of the noise power spectra of the systems. The contrast is measured using a 0.2-mm thick Al sheet and polymethylmethacrylate (PMMA) blocks. The minimal image quality was defined as that giving a target contrast-to-noise ratio (CNR) of 5.4. Several evaluations of this objective method for assessing image quality in mammography quality assurance programmes have been carried out on computed radiography (CR) and digital radiography (DR) mammography systems. The measurement gives the threshold CNR necessary to reach the minimum standard image quality required with regard to the visibility of a 0.2-mm microcalcification. This method may replace the CDMAM image evaluation and simplify the threshold contrast visibility test used in mammography quality assurance.
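For orientation only, the following minimal sketch shows one common way a contrast-to-noise ratio of the kind compared against the 5.4 threshold can be computed from two regions of interest in a test image; it is an assumed illustration, not the programme's actual software, and the image-reading helper and ROI coordinates are hypothetical.

import numpy as np

def cnr(image, signal_roi, background_roi):
    # Contrast-to-noise ratio between a "signal" region (e.g. behind the
    # 0.2-mm Al sheet) and a PMMA-only background region. Each ROI is a
    # (row_slice, col_slice) pair; one common convention is
    # |mean_signal - mean_background| / std_background.
    img = np.asarray(image, dtype=float)
    signal = img[signal_roi]
    background = img[background_roi]
    return abs(signal.mean() - background.mean()) / background.std(ddof=1)

# Hypothetical usage (image reader and ROI coordinates are placeholders):
# image = read_raw_for_processing("qc_phantom_image")    # assumed helper, not a real API
# roi_al   = (slice(100, 200), slice(100, 200))          # region under the Al sheet
# roi_pmma = (slice(300, 400), slice(100, 200))          # PMMA background region
# meets_threshold = cnr(image, roi_al, roi_pmma) >= 5.4  # threshold quoted in the abstract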
Abstract:
BACKGROUND/AIMS: The present report examines a new pig model for the progressive induction of high-grade stenosis, for the study of chronic myocardial ischemia and the dynamics of collateral vessel growth. METHODS: Thirty-nine Landrace pigs were instrumented with a novel experimental stent (GVD stent) in the left anterior descending coronary artery. Eight animals underwent transthoracic echocardiography at rest and under low-dose dobutamine. Seven animals were examined by nuclear PET and SPECT analysis. Epi-, mid- and endocardial fibrosis and the number of arterial vessels were examined by histology. RESULTS: Functional analysis showed a significant decrease in global left ventricular ejection fraction (24.5 +/- 1.6%) 3 weeks after implantation. There was a trend toward increased left ventricular ejection fraction after low-dose dobutamine stress (36.0 +/- 6.6%) and a significant improvement of the impaired regional anterior wall motion. PET and SPECT imaging documented chronic hibernation. Myocardial fibrosis increased significantly in the ischemic area, with a gradient from epicardium to endocardium. The number of arterial vessels in the ischemic area increased, and coronary angiography showed abundant collateral vessels of Rentrop class 1. CONCLUSION: The experimental model presented mimics the clinical situation of chronic myocardial ischemia secondary to one-vessel coronary disease.
Abstract:
The available virus-like particle (VLP)-based prophylactic vaccines against specific human papillomavirus (HPV) types afford close to 100% protection against the type-associated lesions and disease. Based on papillomavirus animal models, it is likely that protection against genital lesions in humans is mediated by HPV type-restricted neutralizing antibodies that transudate or exudate at the sites of genital infection. However, a correlate of protection was not established in the clinical trials because few disease cases occurred, and true incident infection could not be reliably distinguished from the emergence or reactivation of prevalent infection. In addition, the current assays for measuring vaccine-induced antibodies, even the gold standard HPV pseudovirion (PsV) in vitro neutralization assay, may not be sensitive enough to measure the minimum level of antibodies needed for protection. Here, we characterize the recently developed model of genital challenge with HPV PsV and determine the minimal amounts of VLP-induced neutralizing antibodies that can afford protection from genital infection in vivo after transfer into recipient mice. Our data show that serum antibody levels >100-fold lower than those detectable by in vitro PsV neutralization assays are sufficient to confer protection against an HPV PsV genital infection in this model. The results clearly demonstrate that, remarkably, the in vivo assay is substantially more sensitive than in vitro PsV neutralization and thus may be better suited for studies to establish correlates of protection.