545 results for Acyclic Permutation
Abstract:
In the reconstruction of sea surface temperature (SST) from sedimentary archives, secondary sources, lateral transport and selective preservation are generally considered negligible in their influence on the primary signal. This is also assumed for the archaeal glycerol dialkyl glycerol tetraethers (GDGTs) that form the basis of the TEX86 SST proxy. Our samples represent four years of variability along a transect off Cape Blanc (NW Africa). We studied the subsurface production and the vertical and lateral transport of intact polar lipids and core GDGTs in the water column at high vertical resolution, on the basis of suspended particulate matter (SPM) samples from the photic zone, the subsurface oxygen minimum zone (OMZ), the nepheloid layers (NL) and the water column between these. Furthermore, we compared the GDGT composition of the water-column SPM with that of the underlying surface sediments. This is the first study to report TEX86 values derived from the precursor intact polar lipids (IPLs) associated with specific head groups (IPL-specific TEX86). We show a clear deviation from the sea surface GDGT composition in the OMZ between 300 and 600 m. Since neither lateral transport nor selective degradation satisfactorily explains the observed TEX86-derived temperature profiles, which are biased towards higher temperatures for both core and IPL-specific TEX86 values, we suggest that subsurface in situ production by archaea with a distinct relationship between lipid biosynthesis and temperature is the responsible mechanism. However, in the NW African upwelling system the GDGT contribution of the OMZ to the surface sediments does not seem to affect the sedimentary TEX86, which shows no bias and still reflects the signal of the surface waters between 0 and 60 m.
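For reference, the TEX86 index discussed here is conventionally defined (Schouten et al., 2002) from the relative abundances of the GDGTs containing one to three cyclopentane moieties and of the crenarchaeol regioisomer (Cren'):

\[
\mathrm{TEX}_{86} = \frac{[\text{GDGT-2}] + [\text{GDGT-3}] + [\text{Cren}']}{[\text{GDGT-1}] + [\text{GDGT-2}] + [\text{GDGT-3}] + [\text{Cren}']}
\]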
Abstract:
In this study, we obtained concentrations and abundance ratios of long-chain alkenones and glycerol dialkyl glycerol tetraethers (GDGTs) in a one-year time series of sinking particles collected with a sediment trap moored from December 2001 to November 2002 at 2200 m water depth south of Java in the eastern Indian Ocean. We investigate the seasonality of alkenone and GDGT fluxes as well as the potential habitat depth of the Thaumarchaeota producing the GDGTs entrained in sinking particles. The alkenone flux shows a pronounced seasonality, ranging from 1 µg m⁻² d⁻¹ to 35 µg m⁻² d⁻¹. The highest alkenone flux is observed in late September during the Southeast (SE) monsoon, coincident with high total organic carbon fluxes and high net primary productivity. The flux-weighted mean temperature for the high-flux period, derived from the alkenone-based sea-surface temperature (SST) index UK'37, is 26.7°C, similar to the satellite-derived SE monsoon SST (26.4°C). The GDGT flux displays a weaker seasonality than that of the alkenones. It is approximately 2.5 times higher during the SE monsoon period than during the Northwest (NW) monsoon and intermonsoon periods, which is probably related to seasonal variation in the abundance of Thaumarchaeota, or to enhanced export of GDGTs by aggregation with sinking phytoplankton detritus. The flux-weighted mean temperature inferred from the GDGT-based TEXH86 index is 26.2°C, which is 1.8°C lower than the mean annual (ma) SST but similar to the SE monsoon SST. As the time series of TEXH86 temperature estimates does not, however, record a strong seasonal amplitude, we infer that TEXH86 reflects the ma upper-thermocline temperature at approximately 50 m water depth.
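For context, the two lipid indices used above are conventionally defined as follows (Prahl and Wakeham, 1987; Kim et al., 2010), with C37:2 and C37:3 denoting the di- and tri-unsaturated C37 alkenones:

\[
U^{K'}_{37} = \frac{[\mathrm{C}_{37:2}]}{[\mathrm{C}_{37:2}] + [\mathrm{C}_{37:3}]},
\qquad
\mathrm{TEX}^{H}_{86} = \log_{10}\!\left(\mathrm{TEX}_{86}\right)
\]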
Abstract:
Increased temperature and precipitation in Arctic regions have led to deeper thawing and structural instability in permafrost soil. The resulting localized disturbances, referred to as active layer detachments (ALDs), may transport organic matter (OM) to more biogeochemically active zones. To examine this further, solid state cross polarization magic angle spinning 13C nuclear magnetic resonance (CPMAS NMR) and biomarker analysis were used to evaluate potential shifts in riverine sediment OM composition due to nearby ALDs within the Cape Bounty Arctic Watershed Observatory, Nunavut, Canada. In sedimentary OM near ALDs, NMR analysis revealed signals indicative of unaltered plant-derived material, likely derived from permafrost. Long chain acyclic aliphatic lipids, steroids, cutin, suberin and lignin occurred in the sediments, consistent with a dominance of plant-derived compounds, some of which may have originated from permafrost-derived OM released by ALDs. OM degradation proxies for sediments near ALDs revealed less alteration in acyclic aliphatic lipids, while constituents such as steroids, cutin, suberin and lignin were found at a relatively advanced stage of degradation. Phospholipid fatty acid analysis indicated that microbial activity was higher near ALDs than downstream but microbial substrate limitation was prevalent within disturbed regions. Our study suggests that, as these systems recover from disturbance, ALDs likely provide permafrost-derived OM to sedimentary environments. This source of OM, which is enriched in labile OM, may alter biogeochemical patterns and enhance microbial respiration within these ecosystems.
Abstract:
This paper formulates the linear kernel support vector machine (SVM) as a regularized least-squares (RLS) problem. By defining a set of indicator variables for the errors, the solution to the RLS problem is represented as an equation that relates the error vector to the indicator variables. Through partitioning of the training set, the SVM weights and bias are expressed analytically in terms of the support vectors. It is also shown how this approach naturally extends to SVMs with nonlinear kernels while avoiding the need for Lagrange multipliers and duality theory. A fast iterative algorithm based on Cholesky decomposition, with permutation of the support vectors, is proposed as the solution method. The properties of our SVM formulation are analyzed and compared with standard SVMs using a simple example that can be illustrated graphically. The correctness and behavior of our solution (derived purely in the primal context of RLS) is demonstrated on a set of public benchmarking problems for both linear and nonlinear SVMs.
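A minimal sketch of the idea described above, assuming a squared-hinge primal objective (the abstract does not specify the exact loss): indicator variables flag the margin-violating points, and the weights and bias are recomputed analytically from that subset alone via a Cholesky solve.

```python
import numpy as np

def rls_svm(X, y, lam=1.0, n_iter=50):
    """Iterated regularized least squares for a primal linear SVM (sketch).

    sv marks the 'indicator variables of the errors': only points with
    margin < 1 (the support vectors) enter the least-squares solve.
    For simplicity the bias is regularized along with the weights.
    y must be a vector of +/-1 labels.
    """
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])      # absorb the bias term
    w = np.zeros(d + 1)
    for _ in range(n_iter):
        sv = y * (Xb @ w) < 1.0               # indicator variables of the errors
        if not sv.any():
            break
        Xs, ys = Xb[sv], y[sv]
        # Regularized normal equations restricted to the support vectors,
        # solved via Cholesky factorization.
        A = Xs.T @ Xs + lam * np.eye(d + 1)
        c = np.linalg.cholesky(A)
        w_new = np.linalg.solve(c.T, np.linalg.solve(c, Xs.T @ ys))
        if np.allclose(w_new, w):
            break
        w = w_new
    return w[:-1], w[-1]                      # weights, bias
```

On easy toy data the active set stabilizes after a few iterations, so each solve touches only the current support vectors, which is the source of the speed-up the abstract claims.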
Abstract:
One of the important problems in machine learning is determining the complexity of the model to be learned. Too much complexity leads to overfitting, which corresponds to finding structures that do not actually exist in the data, while too little complexity leads to underfitting, meaning that the expressiveness of the model is insufficient to capture all of the structures present in the data. For some probabilistic models, model complexity translates into the introduction of one or more latent variables whose role is to explain the generative process of the data. Various approaches exist for identifying the appropriate number of latent variables in a model. This thesis focuses on Bayesian nonparametric methods for determining the number of latent variables to use as well as their dimensionality. The popularization of Bayesian nonparametric statistics within the machine learning community is fairly recent. Their main appeal is that they offer highly flexible models whose complexity adjusts in proportion to the amount of available data. In recent years, research on Bayesian nonparametric learning methods has focused on three main aspects: the construction of new models, the development of inference algorithms, and applications. This thesis presents our contributions to these three research topics in the context of learning latent variable models. First, we introduce the Pitman-Yor process mixture of Gaussians, a model for learning infinite mixtures of Gaussians. We also present an inference algorithm for discovering the latent components of the model, which we evaluate on two concrete robotics applications. Our results show that the proposed approach outperforms classical learning approaches in both performance and flexibility. Second, we propose the extended cascading Indian buffet process, a model serving as a prior probability distribution over the space of directed acyclic graphs. In the context of Bayesian networks, this prior makes it possible to identify both the presence of latent variables and the network structure among them. A Markov chain Monte Carlo inference algorithm is used for evaluation on structure identification and density estimation problems. Finally, we propose the Indian chefs process, a model more general than the extended cascading Indian buffet process for learning graphs and orders. The advantage of the new model is that it admits connections between observable variables and takes the order of the variables into account. We present a reversible-jump Markov chain Monte Carlo inference algorithm for the joint learning of graphs and orders. The evaluation is carried out on density estimation and independence testing problems. This model is the first Bayesian nonparametric model capable of learning Bayesian networks with a completely arbitrary structure.
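To make the "complexity grows with the data" property concrete, here is a minimal sketch (not from the thesis) of the Pitman-Yor predictive rule that underlies such infinite mixture models; with discount d and concentration alpha, the number of occupied components grows roughly like n**d.

```python
import numpy as np

def pitman_yor_crp(n, alpha=1.0, d=0.5, rng=None):
    """Sample table assignments from a Pitman-Yor Chinese restaurant process.

    Customer i joins existing table k with probability (n_k - d) / (i + alpha)
    and opens a new table with probability (alpha + d * K) / (i + alpha),
    where K is the current number of occupied tables.
    """
    rng = np.random.default_rng(rng)
    counts = []                                  # n_k for each table
    assignments = []
    for i in range(n):
        probs = [nk - d for nk in counts] + [alpha + d * len(counts)]
        probs = np.array(probs) / (i + alpha)    # normalizes exactly to 1
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)                     # open a new table/component
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments
```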
Abstract:
The intrinsic gas-phase reactivity of cyclic N-acyliminium ions in Mannich-type reactions with the parent enol silane, vinyloxytrimethylsilane, has been investigated by double- and triple-stage pentaquadrupole mass spectrometric experiments. Remarkably distinct reactivities are observed for cyclic N-acyliminium ions bearing either endocyclic or exocyclic carbonyl groups. NH-Acyliminium ions with endocyclic carbonyl groups locked in s-trans forms participate in a novel tandem N-acyliminium ion reaction: the nascent adduct formed by simple addition is unstable and rearranges by intramolecular trimethylsilyl cation shift to the ring nitrogen, eliminating a molecule of acetaldehyde enol. An NSi(CH3)3-acyliminium ion is formed, and this intermediate ion reacts with a second molecule of vinyloxytrimethylsilane by simple addition to form a stable acyclic adduct. N-Acyl and N,N-diacyliminium ions with exocyclic carbonyl groups, for which the s-cis conformation is favored, react distinctively by monopolar [4+ + 2] cycloaddition, yielding stable, resonance-stabilized cycloadducts. Product ions were isolated via mass selection and structurally characterized by triple-stage mass spectrometric experiments. B3LYP/6-311G(d,p) calculations corroborate the proposed reaction mechanisms.
Abstract:
Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve them heuristically. However, most previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention for solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism for dealing with constraints, which arise in most real-world scheduling problems, and making small changes to a solution is difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs search a mapped solution space and separate decoding routines then build solutions to the original problem.

In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, so using fixed rules at each move is unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even before the scheduling process is complete, and thus has the ability to finish a schedule using flexible, rather than fixed, rules.

In this research we intend to design more human-like scheduling algorithms by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule is constructed step by step. The conditional probabilities are computed from an initial set of promising solutions. Subsequently, a new instance for each node is generated using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings is generated in this way, some of which replace previous strings based on fitness selection. If the stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, so learning amounts to 'counting' in the case of multinomial distributions.
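A minimal sketch of the counting-based learning and sampling loop just described, assuming for illustration a simple chain-structured network in which the rule at step t is conditioned on the rule chosen at step t-1 (the actual network structure is problem-specific):

```python
import numpy as np

def learn_cpts(strings, n_rules):
    """Count-based multinomial learning from promising rule strings.

    `strings` is an array of shape (population, steps) of rule indices.
    Counts start at one (Laplace smoothing) so every rule keeps nonzero
    probability.
    """
    strings = np.asarray(strings)
    steps = strings.shape[1]
    first = np.bincount(strings[:, 0], minlength=n_rules) + 1.0
    first /= first.sum()
    cpts = np.ones((steps - 1, n_rules, n_rules))
    for s in strings:
        for t in range(steps - 1):
            cpts[t, s[t], s[t + 1]] += 1.0     # 'counting' = likelihood max
    cpts /= cpts.sum(axis=2, keepdims=True)
    return first, cpts

def sample_string(first, cpts, rng):
    """Generate one new rule string node by node from the conditionals."""
    s = [rng.choice(len(first), p=first)]
    for t in range(cpts.shape[0]):
        s.append(rng.choice(cpts.shape[1], p=cpts[t, s[-1]]))
    return s
```

Strings sampled this way can then replace weaker members of the population by fitness selection, after which the counts are re-accumulated, exactly the update loop described above.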
In the LCS approach, each rule has a strength indicating its current usefulness in the system, and this strength is constantly reassessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, consisting of three steps. The initialization step assigns each rule at each stage a constant initial strength; rules are then selected using the Roulette Wheel strategy. The next step reinforces the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step selects fitter rules for the next generation (a minimal sketch follows the references below). It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented in general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning in scheduling algorithms, and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation.

References
1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in print).
2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344.
3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No 99003, University of Illinois.
4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.
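As a companion to the BOA sketch above, a minimal rendering of the three-step LCS scheme (constant initial strengths, Roulette Wheel selection, reinforcement of used rules); the reward constant is a hypothetical choice:

```python
import numpy as np

def roulette(strengths, rng):
    """Roulette Wheel selection: pick a rule index with probability
    proportional to its current strength."""
    p = strengths / strengths.sum()
    return rng.choice(len(strengths), p=p)

def reinforce(strengths, used_rules, reward=0.1):
    """Reinforce the rules used in the previous solution; the strengths
    of unused rules stay unchanged."""
    out = strengths.copy()
    out[list(used_rules)] *= 1.0 + reward
    return out
```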
Abstract:
Reconfigurable hardware can be used to build a multitasking system where tasks are assigned to HW resources at run-time according to the requirements of the running applications. These tasks are frequently represented as directed acyclic graphs, and their execution is typically controlled by an embedded processor that schedules the graph execution. In order to improve the efficiency of the system, the scheduler can apply prefetch and reuse techniques that greatly reduce the reconfiguration latencies. For an embedded processor, all these computations represent a heavy computational load that can significantly reduce system performance. To overcome this problem, we have implemented a HW scheduler using reconfigurable resources. In addition, we have implemented both prefetch and replacement techniques that obtain results as good as previous, more complex SW approaches, while demanding just a few clock cycles to carry out the computations. We consider the HW cost of the system (in our experiments, 3% of a Virtex-II Pro xc2vp30 FPGA) affordable given the great efficiency of the techniques applied to hide the reconfiguration latency and the negligible run-time penalty introduced by the scheduler computations.
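A minimal software sketch of the kind of prefetch-and-reuse policy such a scheduler implements, assuming one configuration per task, a precomputed topological order of the task graph, and an LRU replacement policy (all of these are illustrative assumptions, not the paper's exact heuristics):

```python
from collections import OrderedDict

def count_reconfigurations(topo_order, n_slots):
    """Replay a topologically ordered task list against n_slots
    reconfigurable slots, prefetching the next task's configuration one
    step ahead and reusing already-loaded ones (LRU replacement).
    Returns how many reconfigurations are actually paid."""
    loaded = OrderedDict()                 # config -> True, in LRU order
    paid = 0

    def fetch(cfg):
        nonlocal paid
        if cfg in loaded:
            loaded.move_to_end(cfg)        # reuse: no latency paid
            return
        if len(loaded) >= n_slots:
            loaded.popitem(last=False)     # evict least recently used
        loaded[cfg] = True
        paid += 1

    for i, task in enumerate(topo_order):
        fetch(task)                        # configuration needed right now
        if n_slots > 1 and i + 1 < len(topo_order):
            fetch(topo_order[i + 1])       # prefetch while the task runs
    return paid
```

Prefetching hides the reconfiguration latency behind the running task, while reuse avoids paying it at all; the counter makes the benefit of both measurable.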
Abstract:
The cultivated strawberry (Fragaria x ananassa) is the berry fruit most consumed worldwide and is well known for its delicate flavour and nutritional properties. However, fruit quality attributes have been lost or reduced after years of traditional breeding focused mainly on agronomical traits. To face the obstacles encountered in the improvement of cultivated crops, new technological tools, such as genomics and high-throughput metabolomics, are becoming essential for the identification of the genetic factors responsible for organoleptic and nutritive traits. Integration of "omics" data will allow a better understanding of the molecular and genetic mechanisms underlying the accumulation of metabolites involved in the flavour and nutritional value of the fruit. To identify genetic components controlling fruit metabolic composition, we present here a quantitative trait loci (QTL) analysis using a 95-individual F1 segregating population derived from genotype '1392', selected for its superior flavour, and genotype '232', selected for its high yield (Zorrilla-Fontanesi et al., 2011; Zorrilla-Fontanesi et al., 2012). Metabolite profiling was performed on red-stage strawberry fruits using gas chromatography coupled to time-of-flight mass spectrometry, a rapid and highly sensitive approach that gives good coverage of the central pathways of primary metabolism. Around 50 primary metabolites, including sugars, sugar derivatives, amino acids and organic acids, were detected and quantified in each individual of the population. QTL mapping was performed on the '232' x '1392' population separately for two successive years, based on the integrated linkage map (Sánchez-Sevilla et al., 2015). First, significant associations between metabolite content and molecular markers were identified by the non-parametric Kruskal-Wallis test. Then, interval mapping (IM) as well as the multiple QTL method (MQM) allowed the identification of QTLs in octoploid strawberry. A permutation test established LOD thresholds for each metabolite and year. A total of 132 QTLs were detected across all the linkage groups over the two years for 42 of the 50 metabolites. Among them, 4 (9.8%) QTLs for sugars, 9 (25%) for acids and 7 (12.7%) for amino acids were stable, being detected in the two successive years. We are now studying the QTL regions in order to find candidate genes that explain the differences in metabolite content among the individuals of the population, and we expect to identify associations between genes and metabolites that will help us understand their role in quality traits of strawberry fruit.
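The permutation test mentioned above follows the usual genome-wide thresholding recipe (Churchill and Doerge, 1994). A minimal sketch, where lod_fn is a hypothetical scan function returning a LOD score per marker:

```python
import numpy as np

def permutation_lod_threshold(lod_fn, genotypes, trait, n_perm=1000,
                              alpha=0.05, rng=None):
    """Genome-wide LOD significance threshold by permutation: shuffle the
    trait values to break any genotype-phenotype association, rescan all
    markers, and keep the maximum LOD from each shuffled scan.  The
    (1 - alpha) quantile of those maxima is the threshold."""
    rng = np.random.default_rng(rng)
    max_lods = np.empty(n_perm)
    for i in range(n_perm):
        shuffled = rng.permutation(trait)
        max_lods[i] = np.max(lod_fn(genotypes, shuffled))
    return np.quantile(max_lods, 1.0 - alpha)
```

Using the maximum over all markers in each permutation is what makes the threshold genome-wide rather than per-marker, which is why a separate threshold per metabolite and year is reported.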
Abstract:
In design and manufacturing, mesh segmentation is required for FACE construction in boundary representation (BRep), which in turn is central to feature-based design, machining, parametric CAD and reverse engineering, among others. Although mesh segmentation is dictated by geometry and topology, this article focuses on the topological aspect (graph spectrum), as we consider that this tool has not been fully exploited. We preprocess the mesh to obtain an edge-length-homogeneous triangle set and calculate its Graph Laplacian. We then produce a monotonically increasing permutation of the Fiedler vector (the 2nd eigenvector of the Graph Laplacian) to encode the connectivity among part-feature submeshes. Within the permuted vector, discontinuities larger than a threshold (interactively set by a human) determine the partition of the original mesh. We present tests of our method on large complex meshes, whose results mostly conform to the BRep FACE partition. The achieved segmentations properly locate most manufacturing features, although human interaction is required to avoid over-segmentation. Future work includes an iterative application of this algorithm to progressively sever features of the mesh left from previous submesh removals.
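A minimal dense-matrix sketch of the partitioning step described above, assuming a connected graph (for large meshes a sparse eigensolver would be used instead): sort the Fiedler vector and start a new segment wherever the jump between consecutive sorted values exceeds the user-set threshold.

```python
import numpy as np

def fiedler_segments(adjacency, gap_threshold):
    """Segment a mesh connectivity graph with the Fiedler vector.

    adjacency: square symmetric 0/1 (or weighted) adjacency matrix.
    Returns an integer segment label per node."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    _, vecs = np.linalg.eigh(L)             # eigenvalues in ascending order
    fiedler = vecs[:, 1]                    # 2nd-smallest eigenpair
    order = np.argsort(fiedler)             # monotonically increasing permutation
    gaps = np.diff(fiedler[order])
    labels = np.empty(len(A), dtype=int)
    # each gap above the threshold opens a new segment along the sorted vector
    labels[order] = np.concatenate(([0], np.cumsum(gaps > gap_threshold)))
    return labels
```

Lowering gap_threshold splits the mesh more finely, which mirrors the over-segmentation trade-off the abstract mentions.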
Abstract:
In this dissertation, we apply mathematical programming techniques (i.e., integer programming and polyhedral combinatorics) to develop exact approaches for influence maximization on social networks. We study four combinatorial optimization problems that deal with maximizing influence at minimum cost over a social network. To our knowledge, all previous work to date on influence maximization problems has focused on heuristics and approximation. We start with the following viral marketing problem, which has attracted a significant amount of interest in the computer science literature: given a social network, find a target set of customers to seed with a product; a cascade is then triggered by these initial adopters, and other people adopt the product due to the influence they receive from earlier adopters. The goal is to find the minimum cost that results in the entire network adopting the product.

We first study the Weighted Target Set Selection (WTSS) problem. In the WTSS problem, the diffusion can take place over as many time periods as needed and a free product is given to the individuals in the target set. Restricting the diffusion to a single time period yields the Positive Influence Dominating Set (PIDS) problem. Next, incorporating partial incentives, we consider the Least Cost Influence Problem (LCIP). The fourth problem studied is the One Time Period Least Cost Influence Problem (1TPLCIP), which is identical to the LCIP except that the diffusion is restricted to a single time period.

We apply a common research paradigm to each of these four problems. First, we work on special graphs: trees and cycles. Based on the insights obtained from special graphs, we develop efficient methods for general graphs. On trees, we first propose a polynomial-time algorithm. More importantly, we present a tight and compact extended formulation. We also project the extended formulation onto the space of the natural variables, which gives the polytope on trees. Next, building upon the result for trees, we derive the polytope on cycles for the WTSS problem, as well as a polynomial-time algorithm on cycles. This leads to our contribution on general graphs. For the WTSS problem and the LCIP, using the observation that the influence propagation network must be a directed acyclic graph (DAG), the strong formulation for trees can be embedded into a formulation on general graphs. We use this to design and implement a branch-and-cut approach for the WTSS problem and the LCIP. In our computational study, we obtain high-quality solutions for random graph instances with up to 10,000 nodes and 20,000 edges (40,000 arcs) within a reasonable amount of time.
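For orientation, one plausible compact way to write the one-period variant (PIDS-style) as an integer program, with b_i the seeding cost of node i, N(i) its neighborhood and g_i its influence threshold; the notation is illustrative and not the dissertation's exact formulation:

\[
\min \sum_{i \in V} b_i x_i
\quad \text{s.t.} \quad
\sum_{j \in N(i)} x_j \;\ge\; g_i \,(1 - x_i) \quad \forall i \in V,
\qquad x \in \{0,1\}^{V}
\]

Each node is either seeded (x_i = 1) or must receive enough influence from seeded neighbors within the single time period; multi-period variants replace this with constraints over a time-expanded DAG.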
Abstract:
The Kendall-τ distance counts the number of pairwise disagreements between two permutations. The distance from a permutation to a set of permutations is simply the sum of the distances between that permutation and each permutation of the set. Given a set of permutations, our goal is to find the permutation, called the median, that minimizes this distance to the set. The problem of the median of permutations under the Kendall-τ distance has applications in bioinformatics, political science, telecommunications and optimization. This seemingly simple problem has been proven hard to solve. In this thesis, we present several approaches for solving the problem, finding good approximate solutions, separating it into characteristic classes, better understanding its complexity, reducing the search space and speeding up the computations. Towards the end of the thesis, we also present a generalization of the problem and study it with these same approaches. Most of the work of this thesis is contained in the three articles that compose it, complemented by two chapters that tie them together.
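A minimal sketch of the two objects defined above; the brute-force median makes the hardness tangible, since the candidate space grows as n!:

```python
from itertools import combinations, permutations

def kendall_tau(p, q):
    """Kendall-tau distance: the number of element pairs that the
    permutations p and q rank in opposite relative order."""
    pos_p = {v: i for i, v in enumerate(p)}
    pos_q = {v: i for i, v in enumerate(q)}
    return sum(1 for a, b in combinations(p, 2)
               if (pos_p[a] - pos_p[b]) * (pos_q[a] - pos_q[b]) < 0)

def median_brute_force(perm_set):
    """Exact Kendall-tau median by exhaustive search over all candidate
    permutations; viable only for tiny n, which is precisely why the
    thesis studies search-space reductions and approximations."""
    return min(permutations(perm_set[0]),
               key=lambda c: sum(kendall_tau(c, p) for p in perm_set))
```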
Abstract:
This thesis describes work carried out on the synthesis of hodgsonox, a tricyclic sesquiterpene bearing a diallylic ether within a tetrahydropyran ring. The approaches considered are the formation of the five-membered ring followed by formation of the tetrahydropyran, and a more convergent one involving the synthesis of both rings in a single step. The first part of the thesis discusses the synthesis of an acyclic precursor of the five-membered ring, with a view to carrying out a ring-closing metathesis reaction. However, the attempts were not conclusive and this route was abandoned. In the second part, a new approach involving the synthesis of a bicycle by a Pauson-Khand reaction was studied. The precursor of the Pauson-Khand reaction was prepared in 9 steps (30% overall yield) from diethyl tartrate. The cyclization product was also obtained, but it subsequently proved impossible to introduce the isopropyl group. In the last part of this thesis, the work of Lise Brethous on the synthesis of hodgsonox was taken up again. She had shown that the five-membered ring could be obtained from α-acetyl-γ-butyrolactone and that the formation of the bicycle could be achieved by a catalytic O-H insertion reaction of a diazo compound. Some of these steps were optimized and various tests were carried out to complete the final steps of the synthesis of hodgsonox, but without success.
Abstract:
The technique of delineating Populus tremuloides (Michx.) clonal colonies based on morphology and phenology has been utilized in many studies and forestry applications since the 1950s. Recently, the availability and robustness of molecular markers has challenged the validity of such approaches for accurate clonal identification. However, genetically sampling an entire stand is largely impractical or impossible. For that reason, it is often necessary to delineate putative genet boundaries for a more selective approach when genetically analyzing a clonal population. Here I re-evaluated the usefulness of phenotypic delineation by: (1) genetically identifying clonal colonies using nuclear microsatellite markers, (2) assessing phenotypic inter- and intraclonal agreement, and (3) determining the accuracy of visible characters in correctly assigning ramets to their respective genets. The long-term soil productivity study plot 28 was chosen for analysis; it is located in the Ottawa National Forest, MI (46° 37'60.0" N, 89° 12'42.7" W). In total, 32 genets were identified from 181 stems using seven microsatellite markers. The average genet size was 5.5 ramets, and the six largest genets were selected for phenotypic analyses. Phenotypic analyses included budbreak timing, DBH, bark thickness, bark color or brightness, leaf senescence, leaf serrations, and leaf length ratio. All phenotypic characters except DBH were useful for the analysis of inter- and intraclonal variation and for phenotypic delineation. Generally, phenotypic expression was related to genotype, with multiple response permutation procedure (MRPP) intraclonal distance values ranging from 0.148 to 0.427 and an observed MRPP delta of 0.221 against an expected delta of 0.5. The phenotypic traits, though, overlapped significantly among some clones. When stems were assigned to phenotypic groups, six groups were identified, each containing a dominant genotype or clonal colony. All phenotypic groups contained stems from at least two clonal colonies, and no clonal colony was entirely contained within one phenotypic group. These results demonstrate that phenotype varies with genotype and that stand clonality can be determined using phenotypic characters, but phenotypic delineation is less precise. I therefore recommend that some genetic identification follow any phenotypic delineation. The amount of genetic identification required for clonal confirmation is likely to vary with stand and environmental conditions. Further analysis, however, is needed to test these findings in other forest stands and populations.
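For readers unfamiliar with MRPP, a minimal sketch of the statistic used above (function and parameter names are illustrative): the observed delta is the group-size-weighted mean of within-group average pairwise distances, and significance comes from re-computing delta under shuffled group labels.

```python
import numpy as np

def mrpp(dist, groups, n_perm=999, rng=None):
    """Multiple response permutation procedure on a square distance matrix.

    Smaller delta means tighter groups, so the p-value is the fraction of
    label permutations whose delta is at least as small as the observed one."""
    dist = np.asarray(dist, dtype=float)
    groups = np.asarray(groups)
    rng = np.random.default_rng(rng)

    def delta(labels):
        total, n = 0.0, len(labels)
        for g in np.unique(labels):
            idx = np.flatnonzero(labels == g)
            if len(idx) < 2:
                continue
            # mean of the upper-triangle pairwise distances within group g,
            # weighted by the group's share of all items
            within = dist[np.ix_(idx, idx)][np.triu_indices(len(idx), k=1)]
            total += len(idx) / n * within.mean()
        return total

    observed = delta(groups)
    perms = np.array([delta(rng.permutation(groups)) for _ in range(n_perm)])
    p_value = (np.sum(perms <= observed) + 1) / (n_perm + 1)
    return observed, p_value
```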