330 results for Acyclic stereocontrol
Abstract:
The distribution of acyclic and cyclic biphytanediols, the putative breakdown products of glycerol dialkyl glycerol tetraethers (GDGTs), was investigated in recent marine sediments from the Nankai Trough, offshore Kii Peninsula. The most abundant diol is tricyclic biphytanediol, whose relative abundance is in the range 32-46%. Its carbon skeleton, with two cyclopentane rings and one cyclohexane ring, is consistent with a crenarchaeol origin. Based on the structure of crenarchaeol, however, the tricyclic biphytanediol is considered to derive not only from crenarchaeol but also from other, unknown sources. The ring distributions of the biphytanediols differ from those of the biphytanes obtained from intact polar lipids by chemical treatment, suggesting that biphytanediols are not solely the diagenetic products of in situ GDGTs.
Abstract:
Glycerol dibiphytanyl glycerol tetraether (GDGT) lipids are part of the cellular membranes of Thaumarchaeota, an archaeal phylum composed of aerobic ammonia oxidizers, and are used in the paleotemperature proxy TEX86. GDGTs in live cells possess polar head groups and are called intact polar lipids (IPL-GDGTs). Their transformation to core lipids (CL) by cleavage of the head group was assumed to proceed rapidly after cell death, but it has been suggested that some of these IPL-GDGTs can, just like the CL-GDGTs, be preserved over geological timescales. Here, we examined IPL-GDGTs in deeply buried (0.2-186 mbsf, ~2.5 Myr) sediments from the Peru Margin. Direct measurements of the most abundant IPL-GDGT, IPL-crenarchaeol, specific for Thaumarchaeota, revealed depth profiles that differed per head group. Shallow sediments (<1 mbsf) contained IPL-crenarchaeol with both glycosidic and phosphate head groups, as also observed in thaumarchaeal enrichment cultures, marine suspended particulate matter and marine surface sediments. However, hexose-phosphohexose-crenarchaeol was no longer detected below 6 mbsf (~7 kyr), suggesting high lability. In contrast, IPL-crenarchaeol with glycosidic head groups is preserved over timescales of Myr. This agrees with previous analyses of deeply buried (>1 m) marine sediments, which reported only glycosidic and no phosphate-containing IPL-GDGTs. TEX86 values of CL-GDGTs did not change markedly with depth, and the TEX86 of IPL-derived GDGTs decreased only where the proportions of monohexose- to dihexose-GDGTs changed, likely due to the enhanced preservation of the monohexose GDGTs. Our results support the hypothesis that in situ GDGT production and differential IPL degradation in sediments do not substantially affect TEX86 paleotemperature estimates based on CL-GDGTs, and indicate that likely only a small fraction of the IPL-GDGTs present in deeply buried sediments is part of the cell membranes of active Archaea. The amount of archaeal biomass in the deep biosphere inferred from these IPLs may therefore have been substantially overestimated.
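For reference, the TEX86 proxy mentioned throughout these abstracts is conventionally computed from the relative abundances of GDGTs containing one to three cyclopentane moieties and the crenarchaeol regioisomer (Cren'); a standard formulation from the paleothermometry literature is:

```latex
\mathrm{TEX}_{86} = \frac{[\mathrm{GDGT\text{-}2}] + [\mathrm{GDGT\text{-}3}] + [\mathrm{Cren'}]}
                         {[\mathrm{GDGT\text{-}1}] + [\mathrm{GDGT\text{-}2}] + [\mathrm{GDGT\text{-}3}] + [\mathrm{Cren'}]}
```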
Abstract:
It has been proposed that North Pacific sea surface temperature (SST) evolution was intimately linked to North Atlantic climate oscillations during the last glacial-interglacial transition. However, during the early deglaciation and the Last Glacial Maximum, the SST development in the subarctic northwest Pacific and the Bering Sea is poorly constrained, as most existing deglacial SST records are based on alkenone paleothermometry, which is limited prior to 15 ka B.P. in the subarctic North Pacific realm. By applying the TEX86L temperature proxy we obtain glacial-to-Holocene SST records for the marginal northwest Pacific and the western Bering Sea. Our TEX86L-based records and existing alkenone data suggest that during the past 15.5 ka, SSTs in the northwest Pacific and the western Bering Sea closely followed the millennial-scale climate fluctuations known from Greenland ice cores, indicating rapid atmospheric teleconnections with abrupt climate changes in the North Atlantic. Our SST reconstructions indicate that western Bering Sea SSTs dropped significantly during Heinrich Stadial 1 (HS1), similar to the known North Atlantic climate history. In contrast, progressively rising SST in the northwest Pacific during HS1 differs from the North Atlantic climate development. Similarities between the northwest Pacific SST and climate records from the Gulf of Alaska point to a stronger influence of Alaskan Stream waters connecting the eastern and western basins of the North Pacific during this time. During the Holocene, dissimilar climate trends point to a reduced influence of the Alaskan Stream in the northwest Pacific.
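The TEX86L variant applied here excludes the crenarchaeol regioisomer and is defined on a logarithmic scale; together with the TEX86H variant used in a later abstract, a standard formulation is:

```latex
\mathrm{TEX}_{86}^{L} = \log_{10}\!\left(\frac{[\mathrm{GDGT\text{-}2}]}
        {[\mathrm{GDGT\text{-}1}] + [\mathrm{GDGT\text{-}2}] + [\mathrm{GDGT\text{-}3}]}\right),
\qquad
\mathrm{TEX}_{86}^{H} = \log_{10}\left(\mathrm{TEX}_{86}\right)
```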
Abstract:
In the reconstruction of sea surface temperature (SST) from sedimentary archives, secondary sources, lateral transport and selective preservation are generally considered negligible influences on the primary signal. This is also true for the archaeal glycerol dialkyl glycerol tetraethers (GDGTs) that form the basis of the TEX86 SST proxy. Our samples represent four years of variability along a transect off Cape Blanc (NW Africa). We studied the subsurface production and the vertical and lateral transport of intact polar lipid and core GDGTs in the water column at high vertical resolution, on the basis of suspended particulate matter (SPM) samples from the photic zone, the subsurface oxygen minimum zone (OMZ), nepheloid layers (NL) and the water column between these. Furthermore, we compared the water column SPM GDGT composition with that in underlying surface sediments. This is the first study to report TEX86 values from the precursor intact polar lipids (IPLs) associated with specific head groups (IPL-specific TEX86). We show a clear deviation from the sea surface GDGT composition in the OMZ between 300 and 600 m. Since neither lateral transport nor selective degradation provides a satisfactory explanation for the observed TEX86-derived temperature profiles, which are biased towards higher temperatures for both core and IPL-specific TEX86 values, we suggest that subsurface in situ production by archaea with a distinct relationship between lipid biosynthesis and temperature is the responsible mechanism. However, in the NW African upwelling system the GDGT contribution of the OMZ to the surface sediments does not seem to affect the sedimentary TEX86, which shows no bias and still reflects the signal of the surface waters between 0 and 60 m.
Abstract:
In this study, we obtained concentrations and abundance ratios of long-chain alkenones and glycerol dialkyl glycerol tetraethers (GDGTs) in a one-year time series of sinking particles collected with a sediment trap moored from December 2001 to November 2002 at 2200 m water depth south of Java in the eastern Indian Ocean. We investigate the seasonality of alkenone and GDGT fluxes as well as the potential habitat depth of the Thaumarchaeota producing the GDGTs entrained in sinking particles. The alkenone flux shows a pronounced seasonality and ranges from 1 µg m⁻² d⁻¹ to 35 µg m⁻² d⁻¹. The highest alkenone flux is observed in late September during the Southeast (SE) monsoon, coincident with high total organic carbon fluxes as well as high net primary productivity. The flux-weighted mean temperature for the high-flux period, derived using the alkenone-based sea surface temperature (SST) index UK'37, is 26.7°C, similar to the satellite-derived SE monsoon SST (26.4°C). The GDGT flux displays a weaker seasonality than that of the alkenones. It is elevated during the SE monsoon period compared to the Northwest (NW) monsoon and intermonsoon periods (by approximately 2.5 times), which is probably related to seasonal variation in the abundance of Thaumarchaeota, or to enhanced export of GDGTs by aggregation with sinking phytoplankton detritus. The flux-weighted mean temperature inferred from the GDGT-based TEX86H index is 26.2°C, which is 1.8°C lower than mean annual (ma) SST but similar to SE monsoon SST. However, as the time series of TEX86H temperature estimates does not record a strong seasonal amplitude, we infer that TEX86H reflects ma upper thermocline temperature at approximately 50 m water depth.
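The two diagnostics used above can be made explicit. A standard formulation of the alkenone unsaturation index, and of the flux-weighted mean temperature over trap intervals i with flux F_i and temperature estimate T_i, is:

```latex
U_{37}^{K'} = \frac{[\mathrm{C}_{37:2}]}{[\mathrm{C}_{37:2}] + [\mathrm{C}_{37:3}]},
\qquad
\bar{T}_{\mathrm{fw}} = \frac{\sum_i F_i\, T_i}{\sum_i F_i}
```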
Abstract:
Increased temperature and precipitation in Arctic regions have led to deeper thawing and structural instability in permafrost soil. The resulting localized disturbances, referred to as active layer detachments (ALDs), may transport organic matter (OM) to more biogeochemically active zones. To examine this further, solid-state cross-polarization magic-angle spinning ¹³C nuclear magnetic resonance (CPMAS NMR) and biomarker analysis were used to evaluate potential shifts in riverine sediment OM composition due to nearby ALDs within the Cape Bounty Arctic Watershed Observatory, Nunavut, Canada. In sedimentary OM near ALDs, NMR analysis revealed signals indicative of unaltered plant-derived material, likely derived from permafrost. Long-chain acyclic aliphatic lipids, steroids, cutin, suberin and lignin occurred in the sediments, consistent with a dominance of plant-derived compounds, some of which may have originated from permafrost-derived OM released by ALDs. OM degradation proxies for sediments near ALDs revealed less alteration in acyclic aliphatic lipids, while constituents such as steroids, cutin, suberin and lignin were found at a relatively advanced stage of degradation. Phospholipid fatty acid analysis indicated that microbial activity was higher near ALDs than downstream, but microbial substrate limitation was prevalent within disturbed regions. Our study suggests that, as these systems recover from disturbance, ALDs likely provide permafrost-derived OM to sedimentary environments. This OM source, enriched in labile material, may alter biogeochemical patterns and enhance microbial respiration within these ecosystems.
Abstract:
One of the important problems in machine learning is determining the complexity of the model to be learned. Too much complexity leads to overfitting, which corresponds to finding structures that do not actually exist in the data, while too little complexity leads to underfitting, meaning that the expressiveness of the model is insufficient to capture all the structures present in the data. For some probabilistic models, model complexity translates into the introduction of one or more hidden variables whose role is to explain the generative process of the data. There are various approaches for identifying the appropriate number of hidden variables of a model. This thesis focuses on Bayesian nonparametric methods for determining the number of hidden variables to use as well as their dimensionality. The popularization of Bayesian nonparametric statistics within the machine learning community is fairly recent. Their main appeal is that they offer highly flexible models whose complexity adjusts in proportion to the amount of available data. In recent years, research on Bayesian nonparametric learning methods has focused on three main aspects: the construction of new models, the development of inference algorithms, and applications. This thesis presents our contributions to these three research topics in the context of learning latent variable models. First, we introduce the Pitman-Yor process mixture of Gaussians, a model for learning infinite mixtures of Gaussians. We also present an inference algorithm for discovering the hidden components of the model, which we evaluate on two concrete robotics applications. Our results show that the proposed approach outperforms classical learning approaches in both performance and flexibility. Second, we propose the extended cascading Indian buffet process, a model serving as a prior probability distribution over the space of directed acyclic graphs. In the context of Bayesian networks, this prior makes it possible to identify both the presence of hidden variables and the network structure among them. A Markov chain Monte Carlo inference algorithm is used for evaluation on structure identification and density estimation problems. Finally, we propose the Indian chefs process, a model more general than the extended cascading Indian buffet process for learning graphs and orders. The advantage of the new model is that it admits connections between observable variables and takes the order of the variables into account. We present a reversible-jump Markov chain Monte Carlo inference algorithm for the joint learning of graphs and orders. Evaluation is carried out on density estimation and independence testing problems. This model is the first Bayesian nonparametric model capable of learning Bayesian networks with a completely arbitrary structure.
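To illustrate how such nonparametric priors let model complexity grow with the data, here is a minimal sketch (ours, not from the thesis) of the Pitman-Yor "Chinese restaurant" seating scheme that governs how many mixture components get instantiated; alpha (concentration) and d (discount) are the standard Pitman-Yor parameters:

```python
import random

def pitman_yor_seating(n, alpha=1.0, d=0.5, rng=random.Random(0)):
    """Sample a partition of n items via the Pitman-Yor Chinese restaurant process.

    Item i joins existing component k with probability (n_k - d) / (i + alpha),
    or a new component with probability (alpha + d * K) / (i + alpha),
    where n_k is the size of component k and K the current number of components.
    """
    counts = []      # counts[k] = number of items in component k
    assignment = []
    for i in range(n):
        # Unnormalized weights: one per existing component, plus a new one.
        weights = [(c - d) for c in counts] + [alpha + d * len(counts)]
        total = i + alpha  # the weights sum to i + alpha by construction
        r = rng.uniform(0, total)
        k, acc = 0, 0.0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if k == len(counts):
            counts.append(1)   # open a new component
        else:
            counts[k] += 1
        assignment.append(k)
    return assignment, counts

assignment, counts = pitman_yor_seating(1000)
print(len(counts), "components for 1000 items")
```

The expected number of components grows roughly like n^d for d > 0 (and logarithmically for d = 0, the Dirichlet process case), which is the sense in which model complexity adjusts to the amount of data.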
Abstract:
The intrinsic gas-phase reactivity of cyclic N-acyliminium ions in Mannich-type reactions with the parent enol silane, vinyloxytrimethylsilane, has been investigated by double- and triple-stage pentaquadrupole mass spectrometric experiments. Remarkably distinct reactivities are observed for cyclic N-acyliminium ions bearing either endocyclic or exocyclic carbonyl groups. NH-Acyliminium ions with endocyclic carbonyl groups locked in s-trans forms participate in a novel tandem N-acyliminium ion reaction: the nascent adduct formed by simple addition is unstable and rearranges by intramolecular trimethylsilyl cation shift to the ring nitrogen, and an acetaldehyde enol molecule is eliminated. An NSi(CH3)3-acyliminium ion is formed, and this intermediate ion reacts with a second molecule of vinyloxytrimethylsilane by simple addition to form a stable acyclic adduct. N-Acyl and N,N-diacyliminium ions with endocyclic carbonyl groups, for which the s-cis conformation is favored, react distinctively by monopolar [4+ + 2] cycloaddition, yielding stable, resonance-stabilized cycloadducts. Product ions were isolated via mass selection and structurally characterized by triple-stage mass spectrometric experiments. B3LYP/6-311G(d,p) calculations corroborate the proposed reaction mechanisms.
Abstract:
Scheduling problems are generally NP-hard combinatorial problems, and much research has been done to solve them heuristically. However, most previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention for solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism for dealing with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs search an encoded solution space and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and restricted to the efficient adjustment of weights for a set of rules used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even before the scheduling process is completed, and thus has the ability to finish a schedule by using flexible, rather than fixed, rules. In this research we intend to design more human-like scheduling algorithms by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule will be constructed step by step. The conditional probabilities are computed from an initial set of promising solutions. Subsequently, a new instance for each node is generated using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings is generated in this way, some of which will replace previous strings based on fitness selection. If the stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, and learning amounts to 'counting' in the case of multinomial distributions.
In the LCS approach, each rule has a strength indicating its current usefulness in the system, and this strength is constantly reassessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, consisting of the following steps. The initialization step assigns each rule at each stage a constant initial strength; rules are then selected using the Roulette Wheel strategy. The reinforcement step strengthens the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step selects fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning in scheduling algorithms, and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation. References: 1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in press). 2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344. 3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois. 4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.
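To make the count-based learning loop concrete, here is a minimal sketch (ours, not the authors' implementation) under the simplifying assumption of a chain-structured network, where the rule chosen at each construction step is conditioned on the rule chosen at the previous step; the fitness function is a placeholder standing in for a real schedule decoder:

```python
import random

rng = random.Random(42)
N_STEPS, N_RULES, POP, KEEP = 20, 4, 60, 20

def fitness(rule_string):
    # Placeholder objective: real use would decode the rule string into a
    # schedule and score it (cost, constraint violations, ...).
    return -sum((r - 1) ** 2 for r in rule_string)

def learn_model(parents):
    # Maximum-likelihood 'counting' with add-one smoothing:
    # marg[t][r]    ~ P(rule r at step t)                    (used at step 0)
    # cond[t][p][r] ~ P(rule r at step t | rule p at step t-1)
    marg = [[1] * N_RULES for _ in range(N_STEPS)]
    cond = [[[1] * N_RULES for _ in range(N_RULES)] for _ in range(N_STEPS)]
    for s in parents:
        for t, r in enumerate(s):
            marg[t][r] += 1
            if t > 0:
                cond[t][s[t - 1]][r] += 1
    return marg, cond

def sample(marg, cond):
    # Generate a new rule string node by node from the learned distributions.
    s = []
    for t in range(N_STEPS):
        weights = marg[t] if t == 0 else cond[t][s[-1]]
        s.append(rng.choices(range(N_RULES), weights=weights)[0])
    return s

# Evolve: keep the best strings, relearn the model, resample the rest.
pop = [[rng.randrange(N_RULES) for _ in range(N_STEPS)] for _ in range(POP)]
for generation in range(30):
    pop.sort(key=fitness, reverse=True)
    marg, cond = learn_model(pop[:KEEP])
    pop = pop[:KEEP] + [sample(marg, cond) for _ in range(POP - KEEP)]

print(fitness(pop[0]))  # approaches 0 as the model learns to favor rule 1
```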
Abstract:
Reconfigurable hardware can be used to build a multitasking system where tasks are assigned to HW resources at run-time according to the requirements of the running applications. These tasks are frequently represented as directed acyclic graphs and their execution is typically controlled by an embedded processor that schedules the graph execution. In order to improve the efficiency of the system, the scheduler can apply prefetch and reuse techniques that greatly reduce the reconfiguration latencies. For an embedded processor all these computations represent a heavy computational load that can significantly reduce system performance. To overcome this problem we have implemented a HW scheduler using reconfigurable resources. In addition, we have implemented both prefetch and replacement techniques that obtain results as good as previous, more complex SW approaches, while demanding just a few clock cycles to carry out the computations. We consider that the HW cost of the system (in our experiments, 3% of a Virtex-II PRO xc2vp30 FPGA) is affordable given the great efficiency of the techniques applied to hide the reconfiguration latency and the negligible run-time penalty introduced by the scheduler computations.
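As a back-of-the-envelope illustration of why prefetch and reuse matter (this is our sketch, not the paper's HW scheduler; names and latencies are hypothetical), the following simulation charges a reconfiguration latency only when a task's configuration is not already resident, and overlaps the next reconfiguration with the current task's execution when prefetching:

```python
from collections import deque

RECONF = 4  # hypothetical reconfiguration latency (time units)

def makespan(tasks, n_slots, prefetch=True):
    """Simulate a linearized DAG schedule.

    tasks: list of (config_name, exec_time) in execution order.
    Returns total time; with prefetch, a pending reconfiguration is
    overlapped with the previous task's execution.
    """
    loaded = deque(maxlen=n_slots)   # resident configurations (FIFO replacement)
    t = 0
    pending = 0                      # reconfiguration time already overlapped
    for i, (name, exec_time) in enumerate(tasks):
        if name not in loaded:                 # reuse check
            t += max(RECONF - pending, 0)      # pay only the un-hidden part
            loaded.append(name)
        pending = 0
        t += exec_time
        nxt = tasks[i + 1][0] if i + 1 < len(tasks) else None
        if prefetch and nxt is not None and nxt not in loaded:
            pending = exec_time                # reconfig ran during this execution
    return t

demo = [("A", 10), ("B", 3), ("A", 10), ("C", 5)]
print(makespan(demo, n_slots=2, prefetch=False),  # 40
      makespan(demo, n_slots=2, prefetch=True))   # 32
```

In the toy schedule, prefetch hides the reconfigurations for B and C behind the 10-unit executions of A, while reuse avoids reloading A's configuration entirely.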
Abstract:
In this dissertation, we apply mathematical programming techniques (i.e., integer programming and polyhedral combinatorics) to develop exact approaches for influence maximization on social networks. We study four combinatorial optimization problems that deal with maximizing influence at minimum cost over a social network. To our knowledge, all previous work to date involving influence maximization problems has focused on heuristics and approximation. We start with the following viral marketing problem, which has attracted a significant amount of interest from the computer science literature. Given a social network, find a target set of customers to seed with a product. A cascade is then caused by these initial adopters, and other people start to adopt the product due to the influence they receive from earlier adopters. The idea is to find the minimum cost that results in the entire network adopting the product. We first study a problem called the Weighted Target Set Selection (WTSS) Problem. In the WTSS problem, the diffusion can take place over as many time periods as needed and a free product is given out to the individuals in the target set. Restricting the diffusion to a single time period, we obtain a problem called the Positive Influence Dominating Set (PIDS) problem. Next, incorporating partial incentives, we consider a problem called the Least Cost Influence Problem (LCIP). The fourth problem studied is the One Time Period Least Cost Influence Problem (1TPLCIP), which is identical to the LCIP except that the diffusion is restricted to a single time period. We apply a common research paradigm to each of these four problems. First, we work on special graphs: trees and cycles. Based on the insights obtained from special graphs, we develop efficient methods for general graphs. On trees, we first propose a polynomial time algorithm. More importantly, we present a tight and compact extended formulation. We also project the extended formulation onto the space of the natural variables, which gives the polytope on trees. Next, building upon the result for trees, we derive the polytope on cycles for the WTSS problem, as well as a polynomial time algorithm on cycles. This leads to our contribution on general graphs. For the WTSS problem and the LCIP, using the observation that the influence propagation network must be a directed acyclic graph (DAG), the strong formulation for trees can be embedded into a formulation on general graphs. We use this to design and implement a branch-and-cut approach for the WTSS problem and the LCIP. In our computational study, we are able to obtain high quality solutions for random graph instances with up to 10,000 nodes and 20,000 edges (40,000 arcs) within a reasonable amount of time.
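For intuition about the diffusion process these four problems share, here is a minimal sketch (ours, with an illustrative threshold model and toy graph): a node adopts once the number of its adopted neighbors reaches its threshold, and one can test whether a candidate seed set eventually activates the whole network:

```python
from collections import deque

def cascade(adj, thresholds, seeds):
    """Run a threshold cascade; return the set of eventually active nodes.

    adj: dict node -> list of neighbors (undirected graph).
    thresholds: dict node -> number of active neighbors needed to adopt.
    seeds: initially active nodes (e.g., those given a free product).
    """
    active = set(seeds)
    influenced = {v: 0 for v in adj}   # active-neighbor count per node
    queue = deque(seeds)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in active:
                continue
            influenced[v] += 1
            if influenced[v] >= thresholds[v]:
                active.add(v)          # v adopts; its influence now propagates
                queue.append(v)
    return active

# Tiny example: a path 0-1-2-3 where each node needs one active neighbor.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
thr = {0: 1, 1: 1, 2: 1, 3: 1}
print(cascade(adj, thr, seeds={0}) == set(adj))  # True: seeding node 0 suffices
```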
Abstract:
This thesis describes work carried out on the synthesis of hodgsonox, a tricyclic sesquiterpene bearing a diallylic ether within a tetrahydropyran ring. The approaches considered are the formation of the five-membered ring followed by formation of the tetrahydropyran, and a more convergent one involving the synthesis of both rings in a single step. The first part of the thesis discusses the synthesis of an acyclic precursor of the five-membered ring, with a view to carrying out a ring-closing metathesis reaction. However, the attempts were inconclusive and this route was abandoned. In the second part, a new approach involving the synthesis of a bicyclic system by a Pauson-Khand reaction was studied. The Pauson-Khand precursor was prepared in 9 steps (30% overall yield) from diethyl tartrate. The cyclization product was also obtained, but it subsequently proved impossible to introduce the isopropyl group. In the last part of this thesis, the work of Lise Brethous on the synthesis of hodgsonox was taken up again. She had shown that the five-membered ring could be obtained from α-acetyl γ-butyrolactone and that the bicyclic system could be formed by a catalytic insertion of a diazo compound into an O-H bond. Some of these steps were optimized and various attempts were made to carry out the final steps of the synthesis of hodgsonox, but without success.
Abstract:
A country's development has normally been explained from a traditional perspective in terms of its economic growth, taking into account macroeconomic indicators such as GDP, inflation and unemployment. Little attention has been paid to the importance that human capital and the leadership process represent for a country's development. For this reason, this case study seeks to understand the success of Japan's export-led growth strategy between 1960 and 1980 by taking these aspects into account. It thus argues that the incorporation of a transformational-transactional style of leadership, together with elements of Japanese culture such as Confucianism and Buddhism, gave a non-economistic dimension to the success of the development model as part of the business-state-university triad. This is done through a qualitative analysis focused on international political economy and on leadership, the latter studied from the disciplines of management, sociology and psychology.