10 results for: Expectations hypothesis of term structure of interest rates

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

100.00%

Publisher:

Abstract:

We start in Chapter 2 by investigating linear matrix-valued SDEs and the Itô-stochastic Magnus expansion. The Itô-stochastic Magnus expansion provides an efficient numerical scheme for solving matrix-valued SDEs. We show convergence of the expansion up to a stopping time and provide an asymptotic estimate of the cumulative distribution function of this stopping time. Moreover, we show how to apply it to solve SPDEs with one and two spatial dimensions, with high accuracy, by combining it with the method of lines. We will see that the Magnus expansion allows us to use GPU techniques, leading to major performance improvements compared to a standard Euler-Maruyama scheme. In Chapter 3, we study a short-rate model in a Cox-Ingersoll-Ross (CIR) framework for negative interest rates. We define the short rate as the difference of two independent CIR processes and add a deterministic shift to guarantee a perfect fit to the market term structure. We show how to use the Gram-Charlier expansion to efficiently calibrate the model to the market swaption surface and to price Bermudan swaptions with good accuracy. In Chapter 4, we take two different perspectives on rating transition modelling. In Section 4.4, we study inhomogeneous continuous-time Markov chains (ICTMC) as a candidate for a rating model with deterministic rating transitions. We extend this model by taking a Lie group perspective in Section 4.5, to allow for stochastic rating transitions. In both cases, we compare the most popular choices of change-of-measure technique and show how to efficiently calibrate both models to the available historical rating data and market default probabilities. Finally, we apply the techniques presented in this thesis to minimize the collateral-inclusive Credit/Debit Valuation Adjustments under the constraint of small collateral postings, by using a collateral account dependent on rating triggers.
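The two-factor shifted CIR construction described in the abstract lends itself to a short simulation sketch. Below is a minimal Python illustration (all parameter values are invented for the example, not taken from the thesis) using a full-truncation Euler-Maruyama scheme — the baseline scheme the abstract compares against, not the Magnus expansion itself — for two independent CIR factors whose difference, plus a deterministic shift, gives a short rate that can go negative:

```python
import numpy as np

def cir_paths(x0, kappa, theta, sigma, dt, n_steps, n_paths, rng):
    """Full-truncation Euler-Maruyama for a CIR process
    dx = kappa*(theta - x) dt + sigma*sqrt(x) dW."""
    x = np.full(n_paths, x0, dtype=float)
    out = np.empty((n_steps + 1, n_paths))
    out[0] = x
    for i in range(1, n_steps + 1):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        xp = np.maximum(x, 0.0)  # truncate so sqrt stays real
        x = x + kappa * (theta - xp) * dt + sigma * np.sqrt(xp) * dw
        out[i] = x
    return out

# Illustrative parameters (invented, not from the thesis)
rng = np.random.default_rng(0)
dt, n_steps, n_paths = 1 / 252, 252, 10_000
x = cir_paths(0.02, 1.2, 0.020, 0.08, dt, n_steps, n_paths, rng)  # first CIR factor
y = cir_paths(0.01, 0.8, 0.015, 0.06, dt, n_steps, n_paths, rng)  # second CIR factor
shift = -0.005          # deterministic shift (a constant here for simplicity)
r = x - y + shift       # short rate: can take negative values
print(r[-1].mean())     # average simulated short rate after one year
```

Each CIR factor stays (essentially) nonnegative, but their difference plus the shift is unbounded below, which is the point of the construction for negative-rate markets.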

Relevance:

100.00%

Publisher:

Abstract:

This thesis is based on the integration of traditional and innovative approaches aimed at improving the seismogenic identification and characterization of normal faults, focusing mainly on slip-rate estimates as a measure of fault activity. The causative fault of the L'Aquila Mw 6.3, April 6, 2009 earthquake, namely the Paganica - San Demetrio fault system (PSDFS), was used as a test site. We developed a multidisciplinary, scale-based strategy consisting of paleoseismological investigations, detailed geomorphological and geological field studies, shallow geophysical imaging, and an innovative application of physical property measurements. We produced a detailed geomorphological and geological map of the PSDFS, defining its tectonic style, arrangement, kinematics, extent, geometry and internal complexities. The PSDFS is a 19 km-long tectonic structure, characterized by a complex structural setting and arranged in two main sectors: the Paganica sector to the NW, characterized by a narrow deformation zone, and the San Demetrio sector to the SE, where the strain is accommodated by several tectonic structures exhuming and dissecting a wide Quaternary basin, suggesting the occurrence of strain migration through time. The integration of all the fault displacement data and age constraints (radiocarbon dating, optically stimulated luminescence (OSL) and tephrochronology) allowed us to calculate an average Quaternary slip rate for the PSDFS of 0.27-0.48 mm/yr. On the basis of its length (ca. 20 km) and slip per event (up to 0.8 m), we also estimated a maximum expected magnitude of 6.3-6.8 for this fault. All these topics have significant implications for surface faulting hazard in the area and may also contribute to the understanding of the seismic behavior of the PSDFS and of the local seismic hazard.
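The two headline numbers in this abstract (the slip rate and the maximum expected magnitude) come from simple arithmetic. As an illustration only, the sketch below uses the widely cited Wells & Coppersmith (1994) all-slip-type regression between surface rupture length and moment magnitude, together with an invented offset/age pair chosen to land inside the quoted slip-rate range; the thesis's actual displacement and age data are not reproduced here:

```python
import math

def mw_from_srl(length_km):
    """Wells & Coppersmith (1994) all-slip-type regression:
    Mw = 5.08 + 1.16 * log10(surface rupture length in km)."""
    return 5.08 + 1.16 * math.log10(length_km)

def slip_rate_mm_per_yr(offset_m, age_yr):
    """Average slip rate = cumulative offset / age of the offset marker."""
    return offset_m * 1000.0 / age_yr

# ~20 km rupture length, as quoted for the PSDFS
print(round(mw_from_srl(20.0), 2))            # -> 6.59, inside the quoted 6.3-6.8 range

# Invented example: 750 m of offset accumulated over 2 Myr
print(slip_rate_mm_per_yr(750.0, 2_000_000))  # -> 0.375 mm/yr, inside 0.27-0.48
```

The thesis combines several magnitude estimators (length- and slip-based); this single regression is just one common choice for the length-based bound.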

Relevance:

100.00%

Publisher:

Abstract:

This dissertation is about collective action issues in common property resources. Its focus is the threshold hypothesis, which posits the existence of a threshold in group size that drives the process of institutional change. This hypothesis is tested using a six-century dataset concerning the management of the commons by hundreds of communities in the Italian Alps. The analysis seeks to determine the group size threshold and the institutional changes that occur when groups cross this threshold. There are five main findings. First, the number of individuals in villages remained stable for six centuries, despite the population in the region tripling in the same period. Second, the longitudinal analysis of face-to-face assemblies and community size led to the empirical identification of a threshold size that triggered the transition from informal to more formal regimes to manage common property resources. Third, when groups increased in size, gradual organizational changes took place: large groups split into independent subgroups or structured interactions into multiple layers while maintaining a single formal organization. Fourth, resource heterogeneity seemed to have had no significant impact on various institutional characteristics. Fifth, social heterogeneity showed statistically significant impacts, especially on institutional complexity, consensus, and the relative importance of governance rules versus resource management rules. Overall, the empirical evidence from this research supports the threshold hypothesis. These findings shed light on the rationale of institutional change in common property regimes, and clarify the mechanisms of collective action in traditional societies. Further research may generalize these conclusions to other domains of collective action and to present-day applications.

Relevance:

100.00%

Publisher:

Abstract:

In the first chapter, I develop a panel no-cointegration test which extends the bounds test of Pesaran, Shin and Smith (2001) to the panel framework by considering the individual regressions in a Seemingly Unrelated Regression (SUR) system. This makes it possible to take into account unobserved common factors that contemporaneously affect all the units of the panel while providing, at the same time, unit-specific test statistics. Moreover, the approach is particularly suited to panels in which the number of individuals is small relative to the number of time-series observations. I develop the algorithm to implement the test and use Monte Carlo simulations to analyze its properties. The small-sample properties of the test are remarkable compared to those of its single-equation counterpart. I illustrate the use of the test with a test of Purchasing Power Parity in a panel of EU15 countries. In the second chapter of my PhD thesis, I verify the Expectations Hypothesis of the Term Structure (EHTS) in the repurchase agreements (repo) market with a new testing approach. I consider an "inexact" formulation of the EHTS, which models a time-varying component in the risk premia, and I treat the interest rates as a non-stationary cointegrated system. The effect of heteroskedasticity is controlled by means of testing procedures (bootstrap and heteroskedasticity correction) which are robust to variance and covariance shifts over time. I find that the long-run implications of the EHTS are verified. A rolling-window analysis clarifies that the EHTS is only rejected in periods of turbulence in financial markets. The third chapter introduces the Stata command "bootrank", which implements the bootstrap likelihood ratio rank test algorithm developed by Cavaliere et al. (2012). The command is illustrated through an empirical application on the term structure of interest rates in the US.
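The heteroskedasticity-robust bootstrap idea running through the second and third chapters can be illustrated with a toy wild bootstrap. The sketch below is emphatically not the bootrank / Cavaliere et al. (2012) likelihood-ratio rank algorithm; it only shows, on an invented mean-zero test, how Rademacher-weighted resampling keeps a test's reference distribution valid when the variance shifts over time:

```python
import numpy as np

def wild_bootstrap_pvalue(e, stat_fn, n_boot=999, seed=None):
    """Wild bootstrap p-value: resample e*_t = w_t * (e_t - mean(e))
    with Rademacher weights w_t in {-1, +1}. Multiplying pointwise
    preserves the (possibly time-varying) variance profile of e."""
    rng = np.random.default_rng(seed)
    t_obs = stat_fn(e)
    centered = e - e.mean()  # impose the null before resampling
    exceed = 0
    for _ in range(n_boot):
        w = rng.choice([-1.0, 1.0], size=e.shape)
        if stat_fn(centered * w) >= t_obs:
            exceed += 1
    return (exceed + 1) / (n_boot + 1)

# Toy data: mean-zero noise whose variance triples mid-sample
rng = np.random.default_rng(1)
e = np.concatenate([rng.normal(0, 1.0, 100), rng.normal(0, 3.0, 100)])

# |t|-statistic for "the mean equals zero"
tstat = lambda x: abs(x.mean()) / (x.std(ddof=1) / np.sqrt(len(x)))

p = wild_bootstrap_pvalue(e, tstat, seed=2)
print(round(p, 3))  # bootstrap p-value for the toy data
```

The actual rank-test bootstrap resamples from the estimated VAR under the cointegration-rank null rather than reweighting raw residuals, but the robustness-to-variance-shifts mechanism is the same.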

Relevance:

100.00%

Publisher:

Abstract:

The dissertation consists of four papers that aim at providing new contributions in the field of macroeconomics, monetary policy and financial stability. The first paper proposes a new Dynamic Stochastic General Equilibrium (DSGE) model with credit frictions and a banking sector to study the pro-cyclicality of credit and the role of different prudential regulatory frameworks in affecting business cycle fluctuations and in restoring macroeconomic and financial stability. The second paper develops a simple DSGE model capable of evaluating the effects of large purchases of treasuries by central banks. This theoretical framework is employed to evaluate the impact on yields and the macroeconomy of large purchases of medium- and long-term government bonds recently implemented in the US and UK. The third paper studies the effects of ECB communications about unconventional monetary policy operations on the perceived sovereign risk of Italy over the last five years. The empirical results are derived from both an event-study analysis and a GARCH model, which uses Italian long-term bond futures to disentangle expected from unexpected policy actions. The fourth paper proposes a DSGE model with an endogenous term structure of interest rates, which is able to replicate the stylized facts regarding the yield curve and the term premium in the US over the period 1987:3-2011:3, without compromising its ability to match macro dynamics.

Relevance:

100.00%

Publisher:

Abstract:

The recent adoption of IFRS 9 is a highly disruptive accounting reform, with significant impacts on how and when negative news (i.e., negative adjustments to reported earnings) is recognized in the financial statements. Using a unique dataset from two major banks operating in one European country, we provide evidence of a tightening of corporate loan pricing after the IFRS 9 adoption. Furthermore, by focusing on the post-reform period, we show that the tightening is driven by the new staging classification. Higher risk premiums are associated with clients with previously underperforming exposures (stage 2) and higher probabilities of default. We also observe that the staging classification does not affect climate risk premiums. Our results highlight that lenders, as expected by the regulation, change their risk appetite by charging higher spreads to discourage loan origination for clients that became too risky and expensive under the new standard.

Relevance:

100.00%

Publisher:

Abstract:

Galaxy clusters occupy a special position in the cosmic hierarchy as they are the largest bound structures in the Universe. There is now general agreement on a hierarchical picture for the formation of cosmic structures, in which galaxy clusters form by accretion of matter and merging between smaller units. During merger events, shocks are driven by the gravity of the dark matter in the diffuse baryonic component, which is heated up to the observed temperature. Radio and hard X-ray observations have discovered non-thermal components mixed with the thermal Intra Cluster Medium (ICM), and this is of great importance as it calls for a revision of the physics of the ICM. The bulk of the present information comes from radio observations, which have discovered an increasing number of Mpc-sized emissions from the ICM: Radio Halos (at the cluster center) and Radio Relics (at the cluster periphery). These sources are due to synchrotron emission from ultra-relativistic electrons diffusing through μG turbulent magnetic fields. Radio Halos are the most spectacular evidence of non-thermal components in the ICM, and understanding the origin and evolution of these sources represents one of the most challenging goals of the theory of the ICM. Cluster mergers are the most energetic events in the Universe, and a fraction of the energy dissipated during these mergers could be channelled into the amplification of magnetic fields and into the acceleration of high-energy particles via the shocks and turbulence driven by these mergers. Present observations of Radio Halos (and possibly of hard X-rays) can be best interpreted in terms of the re-acceleration scenario, in which MHD turbulence injected during cluster mergers re-accelerates high-energy particles in the ICM.
The physics involved in this scenario is very complex and model details are difficult to test; however, the model clearly predicts some simple properties of Radio Halos (and of the resulting IC emission in the hard X-ray band) which are almost independent of the details of the adopted physics. In particular, in the re-acceleration scenario MHD turbulence is injected and dissipated during cluster mergers, and thus Radio Halos (and also the resulting hard X-ray IC emission) should be transient phenomena (with a typical lifetime < 1 Gyr) associated with dynamically disturbed clusters. The physics of the re-acceleration scenario should produce an unavoidable cut-off in the spectrum of the re-accelerated electrons, due to the balance between turbulent acceleration and radiative losses. The energy at which this cut-off occurs, and thus the maximum frequency at which synchrotron radiation is produced, depends essentially on the efficiency of the acceleration mechanism, so that observations at high frequencies are expected to catch only the most efficient phenomena while, in principle, low-frequency radio surveys may find these phenomena to be much more common in the Universe. These basic properties should leave an important imprint on the statistical properties of Radio Halos (and of non-thermal phenomena in general) which, however, have not yet been addressed by present models. The main focus of this PhD thesis is to calculate, for the first time, the expected statistics of Radio Halos in the context of the re-acceleration scenario. In particular, we shall address the following main questions: Is it possible to model self-consistently the evolution of these sources together with that of the parent clusters? How is the occurrence of Radio Halos expected to change with cluster mass and to evolve with redshift? How does the efficiency of catching Radio Halos in galaxy clusters change with the observing radio frequency? How many Radio Halos are expected to form in the Universe?
At which redshift is the bulk of these sources expected? Is it possible to reproduce in the re-acceleration scenario the observed occurrence and number of Radio Halos in the Universe, and the observed correlations between thermal and non-thermal properties of galaxy clusters? Is it possible to constrain the magnetic field intensity and profile in galaxy clusters, and the energetics of turbulence in the ICM, from the comparison between model expectations and observations? Several astrophysical ingredients are necessary to model the evolution and statistical properties of Radio Halos in the context of the re-acceleration model and to address the points given above. For this reason we devote some space in this PhD thesis to reviewing the aspects of the physics of the ICM which are relevant to our goals. In Chapt. 1 we discuss the physics of galaxy clusters and, in particular, the cluster formation process; in Chapt. 2 we review the main observational properties of non-thermal components in the ICM; and in Chapt. 3 we focus on the physics of magnetic fields and of particle acceleration in galaxy clusters. As a relevant application, the theory of Alfvenic particle acceleration is applied in Chapt. 4, where we report the most important results from calculations we have done in the framework of the re-acceleration scenario. In this Chapter we show that a fraction of the energy of the fluid turbulence driven in the ICM by cluster mergers can be channelled into the injection of Alfven waves at small scales, and that these waves can efficiently re-accelerate particles and trigger Radio Halos and hard X-ray emission. The main part of this PhD work, the calculation of the statistical properties of Radio Halos and non-thermal phenomena as expected in the context of the re-acceleration model and their comparison with observations, is presented in Chapts. 5, 6, 7 and 8.
In Chapt. 5 we present a first approach to semi-analytical calculations of the statistical properties of giant Radio Halos. The main goal of this Chapter is to model cluster formation, the injection of turbulence in the ICM and the resulting particle acceleration process. We adopt the semi-analytic extended Press & Schechter (PS) theory to follow the formation of a large synthetic population of galaxy clusters, and assume that during a merger a fraction of the PdV work done by the infalling subclusters in passing through the most massive one is injected in the form of magnetosonic waves. Then the stochastic acceleration of the relativistic electrons by these waves, and the properties of the ensuing synchrotron (Radio Halos) and inverse Compton (IC, hard X-ray) emission of merging clusters, are computed under the assumption of a constant rms average magnetic field strength in the emitting volume. The main finding of these calculations is that giant Radio Halos are naturally expected only in the more massive clusters, and that the expected fraction of clusters with Radio Halos is consistent with the observed one. In Chapt. 6 we extend the previous calculations by including a scaling of the magnetic field strength with cluster mass. The inclusion of this scaling allows us to derive the expected correlations between the synchrotron radio power of Radio Halos and the X-ray properties (T, LX) and mass of the hosting clusters. For the first time, we show that these correlations, calculated in the context of the re-acceleration model, are consistent with the observed ones for typical μG strengths of the average B intensity in massive clusters. The calculations presented in this Chapter also allow us to derive the evolution of the probability to form Radio Halos as a function of cluster mass and redshift.
The most relevant finding presented in this Chapter is that the luminosity functions of giant Radio Halos at 1.4 GHz are expected to peak around a radio power of ~10^24 W/Hz and to flatten (or cut off) at lower radio powers because of the decrease of the electron re-acceleration efficiency in smaller galaxy clusters. In Chapt. 6 we also derive the expected number counts of Radio Halos and compare them with available observations: we claim that ~100 Radio Halos in the Universe can be observed at 1.4 GHz with deep surveys, while more than 1000 Radio Halos are expected to be discovered in the near future by LOFAR at 150 MHz. This is the first (and so far unique) model expectation for the number counts of Radio Halos at lower frequency, and it makes it possible to design future radio surveys. Based on the results of Chapt. 6, in Chapt. 7 we present work in progress on a revision of the occurrence of Radio Halos. We combine past results from the NVSS radio survey (z ~ 0.05-0.2) with our ongoing GMRT Radio Halo Pointed Observations of 50 X-ray luminous galaxy clusters (at z ~ 0.2-0.4) and discuss the possibility of testing our model expectations with the number counts of Radio Halos at z ~ 0.05-0.4. The most relevant limitation of the calculations presented in Chapts. 5 and 6 is the assumption of an average size of Radio Halos, independent of their radio luminosity and of the mass of the parent clusters. This assumption cannot be relaxed in the context of the PS formalism used to describe the formation process of clusters, while a more detailed analysis of the physics of cluster mergers and of the injection process of turbulence in the ICM would require an approach based on numerical (possibly MHD) simulations of a very large volume of the Universe, which is however well beyond the aim of this PhD thesis.
On the other hand, in Chapt. 8 we report our discovery of novel correlations between the size (RH) of Radio Halos and their radio power, and between RH and the cluster mass within the Radio Halo region, MH. In particular, this last geometrical MH-RH correlation allows us to observationally overcome the limitation of the average size of Radio Halos. Thus in this Chapter, by making use of this geometrical correlation and of a simplified form of the re-acceleration model based on the results of Chapts. 5 and 6, we are able to discuss the expected correlations between the synchrotron power and the thermal cluster quantities relative to the radio-emitting region. This is a powerful new tool of investigation, and we show that all the observed correlations (PR-RH, PR-MH, PR-T, PR-LX, . . . ) now become well understood in the context of the re-acceleration model. In addition, we find that observationally the size of Radio Halos scales non-linearly with the virial radius of the parent cluster; this immediately means that the fraction of the cluster volume which is radio-emitting increases with cluster mass, and thus that the non-thermal component in clusters is not self-similar.

Relevance:

100.00%

Publisher:

Abstract:

Seyfert galaxies are the closest active galactic nuclei. As such, we can use them to test the physical properties of the entire class of objects. To investigate their general properties, I took advantage of different methods of data analysis. In particular, I used three different samples of objects that, despite frequent overlaps, were chosen to best tackle different topics: the heterogeneous BeppoSAX sample was optimized to test the average hard X-ray (E above 10 keV) properties of nearby Seyfert galaxies; the X-CfA sample was optimized to compare the properties of low-luminosity sources to those of higher luminosity and, thus, was also used to test the emission mechanism models; finally, the XMM-Newton sample was extracted from the X-CfA sample so as to ensure a truly unbiased and well-defined sample of objects with which to define the average properties of Seyfert galaxies. Taking advantage of the broad-band coverage of the BeppoSAX MECS and PDS instruments (between ~2-100 keV), I infer the average X-ray spectral properties of nearby Seyfert galaxies, and in particular the photon index (<Gamma>~1.8), the high-energy cut-off (<Ec>~290 keV), and the relative amount of cold reflection (<R>~1.0). Moreover, the unified scheme for active galactic nuclei was positively tested. The distributions of the isotropic indicators used here (photon index, relative amount of reflection, high-energy cut-off and narrow FeK energy centroid) are similar in type I and type II objects, while the absorbing column and the iron line equivalent width significantly differ between the two classes of sources, with type II objects displaying larger absorbing columns. Taking advantage of the XMM-Newton and X-CfA samples, I also deduced from measurements that 30 to 50% of type II Seyfert galaxies are Compton thick. Confirming previous results, the narrow FeK line in Seyfert 2 galaxies is consistent with being produced in the same matter responsible for the observed obscuration.
These results support the basic picture of the unified model. Moreover, the presence of an X-ray Baldwin effect in type I sources has been measured, using for the first time the 20-100 keV luminosity (EW proportional to L(20-100)^(-0.22±0.05)). This finding suggests that the torus covering factor may be a function of source luminosity, thereby suggesting a refinement of the baseline version of the unified model itself. Using the BeppoSAX sample, a possible correlation between the photon index and the amount of cold reflection has also been recorded in both type I and type II sources. At first glance this confirms thermal Comptonization as the most likely origin of the high-energy emission of active galactic nuclei. This relation, in fact, naturally emerges by supposing that the accretion disk penetrates the central corona at different depths depending on the accretion rate (Merloni et al. 2006): the higher-accreting systems host disks down to the last stable orbit, while the lower-accreting systems host truncated disks. On the contrary, the study of the well-defined X-CfA sample of Seyfert galaxies has proved that the intrinsic X-ray luminosity of nearby Seyfert galaxies can span values between 10^(38-43) erg s^-1, i.e. covering a huge range of accretion rates. The less efficient systems have been supposed to host ADAF systems without an accretion disk. However, the study of the X-CfA sample has also proved the existence of correlations between optical emission lines and X-ray luminosity over the entire range of L_X covered by the sample. These relations are similar to the ones obtained if high-L objects are considered. Thus the emission mechanism must be similar in luminous and weak systems. A possible scenario to reconcile these somewhat opposite indications is to assume that the ADAF and the two-phase mechanism co-exist, with different relative importance moving from low- to high-accretion systems (as suggested by the Gamma vs. R relation).
The present data require that no abrupt transition between the two regimes is present. As mentioned above, the possible presence of an accretion disk has been tested using samples of nearby Seyfert galaxies. Here, to investigate in depth the flow patterns close to super-massive black holes, three case-study objects for which sufficient count statistics are available have been analysed using deep X-ray observations taken with XMM-Newton. The results obtained show that the accretion flow can differ significantly between objects when it is analyzed with the appropriate detail. For instance, the accretion disk is well established down to the last stable orbit in a Kerr system for IRAS 13197-1627, where strong light-bending effects have been measured. The accretion disk seems to form spiraling in the inner ~10-30 gravitational radii in NGC 3783, where time-dependent and recursive modulations have been measured both in the continuum emission and in the broad emission line component. Finally, the accretion disk seems to be only weakly detectable in Mrk 509, with its weak broad emission line component. Finally, blueshifted resonant absorption lines have been detected in all three objects. This seems to demonstrate that, around super-massive black holes, there is matter which is not confined to the accretion disk and moves along the line of sight with velocities as large as v~0.01-0.4c (where c is the speed of light). Whether this matter forms winds or blobs is still a matter of debate, together with the assessment of the real statistical significance of the measured absorption lines. Nonetheless, if confirmed, these phenomena are of outstanding interest because they offer new potential probes of the dynamics of the innermost regions of accretion flows, to tackle the formation of ejecta/jets and to place constraints on the rate of kinetic energy injected by AGNs into the ISM and IGM.
Future high energy missions (such as the planned Simbol-X and IXO) will likely allow an exciting step forward in our understanding of the flow dynamics around black holes and the formation of the highest velocity outflows.

Relevance:

100.00%

Publisher:

Abstract:

The aspartic protease BACE1 (β-amyloid precursor protein cleaving enzyme, β-secretase) is recognized as one of the most promising targets in the treatment of Alzheimer's disease (AD). The accumulation of β-amyloid peptide (Aβ) in the brain is a major factor in the pathogenesis of AD. Aβ is formed by initial cleavage of β-amyloid precursor protein (APP) by β-secretase; therefore BACE1 inhibition represents one of the therapeutic approaches to control the progression of AD, by preventing the abnormal generation of Aβ. For this reason, in the last decade, many research efforts have focused on the identification of new BACE1 inhibitors as drug candidates. Generally, BACE1 inhibitors are grouped into two families: substrate-based inhibitors, designed as peptidomimetic inhibitors, and non-peptidomimetic ones. The research on non-peptidomimetic small-molecule BACE1 inhibitors remains the most interesting approach, since these compounds show improved bioavailability after systemic administration, owing to good blood-brain barrier permeability in comparison to peptidomimetic inhibitors. Very recently, our research group discovered a new promising lead compound for the treatment of AD, named lipocrine, a hybrid derivative of lipoic acid and the AChE inhibitor (AChEI) tacrine, characterized by a tetrahydroacridine moiety. Lipocrine is one of the first compounds able to inhibit the catalytic activity of AChE and AChE-induced amyloid-β aggregation, and to protect against reactive oxygen species. Due to this interesting profile, lipocrine was also evaluated for BACE1 inhibitory activity, proving to be a potent lead compound for BACE1 inhibition. Starting from this interesting profile, a series of tetrahydroacridine analogues were synthesised, varying the chain length between the two fragments.
Moreover, following the approach of combining two different pharmacophores in a single molecule, we designed and synthesised different compounds bearing the moieties of known AChEIs (rivastigmine and caproctamine) coupled with lipoic acid, since it was shown that the dithiolane group is an important structural feature of lipocrine for the optimal inhibition of BACE1. All the tetrahydroacridine-, rivastigmine- and caproctamine-based compounds were evaluated for BACE1 inhibitory activity in a FRET (fluorescence resonance energy transfer) enzymatic assay (test A). With the aim of enhancing the biological activity of the lead compound, we applied a molecular simplification approach to design and synthesize novel heterocyclic compounds related to lipocrine, in which the tetrahydroacridine moiety was replaced by 4-amino-quinoline or 4-amino-quinazoline rings. All the synthesized compounds were also evaluated in a modified FRET enzymatic assay (test B), changing the fluorescent substrate for enzymatic BACE1 cleavage. This test method guided detailed structure-activity relationship studies for BACE1 inhibition on the most promising quinazoline-based derivatives. By varying the substituent at the 2-position of the quinazoline ring and by replacing the lipoic acid residue in the lateral chain with different moieties (e.g. trans-ferulic acid, a known antioxidant molecule), a series of quinazoline derivatives were obtained. In order to confirm the inhibitory activity of the most active compounds, they were evaluated with a third FRET assay (test C) which, surprisingly, did not confirm the previous good activity profiles. An evaluation of the kinetic parameters of the three assays revealed that method C is endowed with the best specificity and enzymatic efficiency.
Biological evaluation of the modified 2,4-diamino-quinazoline derivatives, measured with method C, allowed us to obtain a new lead compound bearing the trans-ferulic acid residue coupled to the 2,4-diamino-quinazoline core, endowed with good BACE1 inhibitory activity (IC50 = 0.8 μM). We reported on the variability of the results in the three different FRET assays, which are known to have some disadvantages in terms of interference rates that are strongly dependent on compound properties. The observed variability of the results could also be ascribed to the different enzyme origin, the varied substrate and the different fluorescent groups. Inhibitors should be tested in a parallel screening in order to obtain more reliable data prior to being tested in cellular assays. With this aim, a preliminary cellular BACE1 inhibition assay carried out on lipocrine confirmed a good cellular activity profile (EC50 = 3.7 μM), strengthening the idea of finding a small-molecule non-peptidomimetic compound as a BACE1 inhibitor. In conclusion, the present study allowed us to identify a new lead compound endowed with BACE1 inhibitory activity in the submicromolar range. Further lead optimization of the obtained derivative is needed in order to obtain a more potent and selective BACE1 inhibitor based on the 2,4-diamino-quinazoline scaffold. A side project related to the synthesis of novel enzymatic inhibitors of BACE1, exploring the chemistry of pseudopeptidic transition-state isosteres, was carried out during a research stage at Université de Montréal (Canada) in Hanessian's group. The aim of this work was the synthesis of the -aminocyclohexane carboxylic acid motif with stereochemically defined substitution, in order to incorporate such a constrained core in potential BACE1 inhibitors. This fragment, endowed with reduced peptidic character, is not known in the context of peptidomimetic design.
In particular, we envisioned an alternative route based on an organocatalytic asymmetric conjugate addition of nitroalkanes to cyclohexenone in the presence of D-proline and trans-2,5-dimethylpiperazine. The enantioenriched 3-(-nitroalkyl)-cyclohexanones obtained were further functionalized to give the corresponding -nitroalkyl cyclohexane carboxylic acids. These intermediates were elaborated into the target structures, 3-(-aminoalkyl)-1-cyclohexane carboxylic acids, in a new, readily accessible way.

Relevance:

100.00%

Publisher:

Abstract:

Inflammatory Bowel Diseases (IBD) are chronic relapsing intestinal diseases whose etiopathogenesis remains uncertain. Several groups have attempted to study the role of the factors involved, such as genetic susceptibility, environmental factors such as smoking and diet, sex, immunological factors, as well as the microbiome. None of the available treatments satisfies several criteria at the same time, such as safety, long-term remission, histopathological healing, and specificity. We used two different approaches for the development of new therapeutic treatments for Inflammatory Bowel Disease. The first focuses on understanding the potential role of functional foods and nutraceutical nutrients in the treatment of IBD. To do so, we investigated the role of Curcuma longa in the treatment of chemically induced colitis in a mouse model. Since Curcuma longa has been investigated for its anti-inflammatory role related to the TNF pathway, and investigators have reported a few cases of patients with ulcerative colitis treated with this herb, we harbored the hypothesis of a role for Curcuma longa in the treatment of IBD, and we also decided to assess its role in intestinal motility. The second part is based on an immunological approach to develop new drugs to induce suppression in Crohn's disease or to induce mucosal immunity, as in colorectal tumors. The main idea behind this approach is that we could manipulate relevant cell-cell interactions using synthetic peptides. We demonstrated the role of the unique interaction between molecules expressed on intestinal epithelial cells, such as CD1d and CEACAM5, and on CD8+ T cells. Under normal conditions this interaction has a role in the expansion of suppressor CD8+ T cells. Here, we characterized this interaction, defined the epitopes involved in the binding, and attempted to develop synthetic peptides from the N domain of CEACAM5 in order to manipulate it.