Abstract:
The inherently stochastic character of most physical quantities involved in engineering models has led to an ever-increasing interest in probabilistic analysis. Many approaches to stochastic analysis have been proposed. However, it is widely acknowledged that the only universal method available to solve accurately any kind of stochastic mechanics problem is Monte Carlo Simulation. One of the key parts in the implementation of this technique is the accurate and efficient generation of samples of the random processes and fields involved in the problem at hand. In the present thesis an original method for the simulation of homogeneous, multi-dimensional, multi-variate, non-Gaussian random fields is proposed. The algorithm has proved to be very accurate in matching both the target spectrum and the marginal probability distribution. Its computational efficiency and robustness are also very good, even when dealing with strongly non-Gaussian distributions. Moreover, the resulting samples possess all the relevant, well-defined and desired properties of "translation fields", including crossing rates and distributions of extremes. The second part of the thesis lies in the field of non-destructive parametric structural identification. Its objective is to evaluate the mechanical characteristics of the constituent bars of existing truss structures, using static loads and strain measurements. In the cases of missing data and of damage that affects only a small portion of a bar, Genetic Algorithms have proved to be an effective tool for solving the problem.
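For illustration only, a minimal sketch of the translation-field idea referred to above, in one dimension: a standard Gaussian field is synthesized by the spectral representation method from an assumed power spectrum and then mapped through the Gaussian CDF and the inverse CDF of an assumed non-Gaussian marginal (here lognormal). The spectrum, marginal and all parameters are illustrative assumptions; the thesis's algorithm additionally corrects the underlying Gaussian spectrum so that the resulting non-Gaussian field matches the target spectrum, which this naive mapping does not do.

```python
# Illustrative sketch only: spectral-representation sampling of a Gaussian field,
# then memoryless mapping to a non-Gaussian marginal ("translation field").
# The target spectrum and the lognormal marginal are assumptions for the demo.
import numpy as np
from scipy import stats

def gaussian_field_1d(n_points, dx, psd, n_freq=256, seed=None):
    """Spectral representation: sum of cosines with random phases."""
    rng = np.random.default_rng(seed)
    x = np.arange(n_points) * dx
    w_max = np.pi / dx                        # cutoff frequency
    dw = w_max / n_freq
    w = (np.arange(n_freq) + 0.5) * dw        # frequency grid
    amp = np.sqrt(2.0 * psd(w) * dw)          # amplitude of each harmonic
    phi = rng.uniform(0.0, 2.0 * np.pi, n_freq)
    f = np.sqrt(2.0) * (amp * np.cos(np.outer(x, w) + phi)).sum(axis=1)
    return x, (f - f.mean()) / f.std()        # standardize to N(0, 1)

# Assumed one-sided target spectrum, purely for illustration.
psd = lambda w: np.exp(-0.5 * w**2)
x, g = gaussian_field_1d(n_points=2048, dx=0.05, psd=psd, seed=0)

# Translation step: map the standard Gaussian field through Phi and the
# inverse CDF of the desired (here lognormal) marginal distribution.
target = stats.lognorm(s=0.8)
f_non_gaussian = target.ppf(stats.norm.cdf(g))
print(f_non_gaussian.mean(), f_non_gaussian.std())
```

By construction the mapped samples follow the assumed marginal exactly; matching the target spectrum of the non-Gaussian field (the harder part addressed in the thesis) requires iterating on the underlying Gaussian spectrum.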
Abstract:
Mixed integer programming is today one of the most widely used techniques for dealing with hard optimization problems. On the one hand, many practical optimization problems arising from real-world applications (such as scheduling, project planning, transportation, telecommunications, economics and finance, and timetabling) can be easily and effectively formulated as Mixed Integer linear Programs (MIPs). On the other hand, fifty and more years of intensive research have dramatically improved the capability of the current generation of MIP solvers to tackle hard problems in practice. However, many questions are still open and not fully understood, and the mixed integer programming community is more than active in trying to answer some of them. As a consequence, a huge number of papers are continuously produced and new intriguing questions arise every year. When dealing with MIPs, we have to distinguish between two different scenarios. The first occurs when we are asked to handle a general MIP and cannot assume any special structure for the given problem. In this case, a Linear Programming (LP) relaxation and some integrality requirements are all we have for tackling the problem, and we are "forced" to use general purpose techniques. The second occurs when mixed integer programming is used to address a somehow structured problem. In this context, polyhedral analysis and other theoretical and practical considerations are typically exploited to devise special purpose techniques. This thesis tries to give some insights into both of the above mentioned situations. The first part of the work is focused on general purpose cutting planes, which are probably the key ingredient behind the success of the current generation of MIP solvers. Chapter 1 presents a quick overview of the main ingredients of a branch-and-cut algorithm, while Chapter 2 recalls some results from the literature on disjunctive cuts and their connections with Gomory mixed integer cuts. Chapter 3 presents a theoretical and computational investigation of disjunctive cuts. In particular, we analyze the connections between different normalization conditions (i.e., conditions to truncate the cone associated with disjunctive cutting planes) and other crucial aspects such as cut rank, cut density and cut strength. We give a theoretical characterization of weak rays of the disjunctive cone that lead to dominated cuts, and propose a practical method to possibly strengthen the cuts arising from such weak extremal solutions. Further, we point out how redundant constraints can affect the quality of the generated disjunctive cuts, and discuss possible ways to cope with them. Finally, Chapter 4 presents some preliminary ideas in the context of multiple-row cuts. Very recently, a series of papers have drawn attention to the possibility of generating cuts using more than one row of the simplex tableau at a time. Several interesting theoretical results have been presented in this direction, often revisiting and recalling other important results discovered more than forty years ago. However, it is not at all clear how these results can be exploited in practice. As stated, the chapter is still a work in progress and simply presents a possible way of generating two-row cuts, arising from lattice-free triangles, from the simplex tableau, together with some preliminary computational results.
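As a point of reference for the disjunctive/Gomory connection mentioned for Chapters 2 and 3, here is a minimal sketch of the textbook Gomory mixed integer (GMI) cut derived from a single simplex-tableau row x_B + sum_j abar_j x_j = bbar with fractional bbar, assuming every non-basic variable sits at a zero lower bound. The toy row is made up; this is the standard formula, not the thesis's strengthened cuts.

```python
# Illustrative sketch only: coefficients of a Gomory mixed integer (GMI) cut
# from one tableau row, assuming all non-basic variables are at zero lower bound.
import math

def gmi_cut(abar, is_integer, bbar):
    """Return (coeffs, rhs) of the cut  sum_j coeffs[j] * x_j >= rhs."""
    f0 = bbar - math.floor(bbar)          # fractional part of the right-hand side
    assert 0 < f0 < 1, "the basic variable must take a fractional value"
    coeffs = []
    for a, integer in zip(abar, is_integer):
        if integer:
            f = a - math.floor(a)
            coeffs.append(f if f <= f0 else f0 * (1.0 - f) / (1.0 - f0))
        else:
            coeffs.append(a if a >= 0 else -a * f0 / (1.0 - f0))
    return coeffs, f0

# Toy tableau row (made-up numbers, purely for illustration).
coeffs, rhs = gmi_cut(abar=[0.4, -1.3, 2.25], is_integer=[True, True, False], bbar=3.7)
print(coeffs, ">=", rhs)
```

The GMI cut is the special case of a disjunctive (split) cut obtained from the disjunction x_B <= floor(bbar) or x_B >= ceil(bbar), which is why normalization and ray selection for disjunctive cuts can be compared against it.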
The second part of the thesis is instead focused on the heuristic and exact exploitation of integer programming techniques for hard combinatorial optimization problems in the context of routing applications. Chapters 5 and 6 present an integer linear programming local search algorithm for Vehicle Routing Problems (VRPs). The overall procedure follows a general destroy-and-repair paradigm (i.e., the current solution is first randomly destroyed and then repaired in an attempt to find a new, improved solution), in which a class of exponential neighborhoods is iteratively explored by heuristically solving an integer programming formulation through a general purpose MIP solver. Chapters 7 and 8 deal with exact branch-and-cut methods. Chapter 7 presents an extended formulation for the Traveling Salesman Problem with Time Windows (TSPTW), a generalization of the well-known TSP in which each node must be visited within a given time window. The polyhedral approaches proposed for this problem in the literature typically follow the one that has proven extremely effective in the classical TSP context. Here we present a (quite) general idea based on a relaxed discretization of the time windows. This idea leads to a stronger formulation and to stronger valid inequalities, which are then separated within the classical branch-and-cut framework. Finally, Chapter 8 addresses branch-and-cut in the context of Generalized Minimum Spanning Tree Problems (GMSTPs), a class of NP-hard generalizations of the classical minimum spanning tree problem. In this chapter, we show how some basic ideas (and, in particular, the use of general purpose cutting planes) can be useful to improve on the branch-and-cut methods proposed in the literature.
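To make the destroy-and-repair loop of Chapters 5 and 6 concrete, the following self-contained sketch runs the paradigm on a toy single-route (TSP-like) instance. In the thesis the repair step heuristically solves an ILP neighborhood with a general purpose MIP solver; here it is replaced by greedy cheapest insertion only so that the example runs without a solver. The instance data and parameters are assumptions.

```python
# Illustrative destroy-and-repair loop on a toy single-route instance.
# The thesis repairs by solving an ILP neighborhood with a MIP solver; greedy
# cheapest insertion is used here only to keep the sketch self-contained.
import random
import numpy as np

rng = random.Random(0)
pts = np.array([[rng.random(), rng.random()] for _ in range(20)])
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)

def route_cost(route):
    return sum(dist[a][b] for a, b in zip(route, route[1:] + route[:1]))

def destroy(route, n_remove):
    removed = rng.sample(route, n_remove)           # randomly drop some customers
    return [c for c in route if c not in removed], removed

def repair(route, removed):
    # Cheapest insertion of each removed customer back into the partial route.
    for c in removed:
        best_pos = min(range(len(route)),
                       key=lambda i: dist[route[i - 1]][c] + dist[c][route[i]]
                                     - dist[route[i - 1]][route[i]])
        route = route[:best_pos] + [c] + route[best_pos:]
    return route

best = list(range(len(pts)))
rng.shuffle(best)
for _ in range(500):
    partial, removed = destroy(best, n_remove=4)
    candidate = repair(partial, removed)
    if route_cost(candidate) < route_cost(best):    # accept improvements only
        best = candidate
print(round(route_cost(best), 3))
```

The point of the real method is precisely that the repair step explores an exponentially large reinsertion neighborhood exactly or near-exactly via the MIP solver, rather than greedily as above.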
Abstract:
In this thesis we study three combinatorial optimization problems belonging to the classes of Network Design and Vehicle Routing problems, which are strongly linked in the context of the design and management of transportation networks: the Non-Bifurcated Capacitated Network Design Problem (NBP), the Period Vehicle Routing Problem (PVRP) and the Pickup and Delivery Problem with Time Windows (PDPTW). These problems are NP-hard and contain as special cases some well-known difficult problems, such as the Traveling Salesman Problem and the Steiner Tree Problem. Moreover, they model the core structure of many practical problems arising in logistics and telecommunications. The NBP is the problem of designing the optimum network to satisfy a given set of traffic demands. Given a set of nodes, a set of potential links and a set of point-to-point demands called commodities, the objective is to select the links to install and dimension their capacities so that all the demands can be routed between their respective endpoints, and the sum of link fixed costs and commodity routing costs is minimized. The problem is called non-bifurcated because the solution network must allow each demand to follow a single path, i.e., the flow of each demand cannot be split. Although this is the case in many real applications, the NBP has received significantly less attention in the literature than other capacitated network design problems that allow bifurcation. We describe an exact algorithm for the NBP, based on solving with an integer programming solver a formulation of the problem strengthened by simple valid inequalities, together with four new heuristic algorithms. One of these heuristics is an adaptive memory metaheuristic, based on partial enumeration, that could be applied to a wider class of structured combinatorial optimization problems. In the PVRP a fleet of vehicles of identical capacity must be used to service a set of customers over a planning period of several days. Each customer specifies a service frequency, a set of allowable day-combinations and a quantity of product that must be delivered at every visit. For example, a customer may require to be visited twice during a 5-day period, imposing that these visits take place on Monday-Thursday, Monday-Friday or Tuesday-Friday. The problem consists in simultaneously assigning a day-combination to each customer and designing the vehicle routes for each day so that each customer is visited the required number of times, the number of routes on each day does not exceed the number of vehicles available, and the total cost of the routes over the period is minimized. We also consider a tactical variant of this problem, called the Tactical Planning Vehicle Routing Problem, where customers require a visit on a specific day of the period but a penalty cost, called the service cost, can be paid to postpone the visit to a later day than the one required. To our knowledge, all the algorithms proposed in the literature for the PVRP are heuristics. In this thesis we present for the first time an exact algorithm for the PVRP, based on different relaxations of a set partitioning-like formulation. The effectiveness of the proposed algorithm is tested on a set of instances from the literature and on a new set of instances. Finally, the PDPTW consists of servicing a set of transportation requests using a fleet of identical vehicles of limited capacity located at a central depot.
Each request specifies a pickup location and a delivery location and requires that a given quantity of load be transported from the pickup location to the delivery location. Moreover, each location can be visited only within an associated time window. Each vehicle can perform at most one route, and the problem is to satisfy all the requests using the available vehicles so that each request is serviced by a single vehicle, the load on each vehicle does not exceed its capacity, and all locations are visited within their time windows. We formulate the PDPTW as a set partitioning-like problem with additional cuts, and we propose an exact algorithm based on different relaxations of the mathematical formulation as well as a branch-and-cut-and-price algorithm. The new algorithm is tested on two classes of problems from the literature and compared with a recent branch-and-cut-and-price algorithm.
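For reference, the route-based set partitioning core that both the PVRP and PDPTW approaches above relax has the following generic form; the notation is an illustrative assumption rather than the thesis's exact model (the PDPTW model also carries additional cuts). Here R is the set of feasible routes, c_r the route cost, a_ir = 1 if route r serves customer/request i, and m the number of available vehicles:

```latex
\begin{align*}
\min \quad & \sum_{r \in \mathcal{R}} c_r\, x_r \\
\text{s.t.} \quad & \sum_{r \in \mathcal{R}} a_{ir}\, x_r = 1 \qquad \forall i \in \mathcal{I} \quad \text{(each customer/request served exactly once)} \\
& \sum_{r \in \mathcal{R}} x_r \le m \\
& x_r \in \{0,1\} \qquad \forall r \in \mathcal{R}.
\end{align*}
```

Because R is exponentially large, such formulations are handled through relaxations and column generation, which is the setting of the exact and branch-and-cut-and-price algorithms described above.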
Abstract:
This work studies the impact of two traditional Romanian treatments, Red Petroleum and Propolis, in terms of their real efficiency and their consequences for wooden artifacts. The application of these solutions is still a widely adopted and popular technique in preventive conservation, but their impact is not well known. It is important to know the effect of the treatments on the chemical-physical and structural characteristics of the artifacts, not only to understand their influence on present conditions but also to foresee future behavior. These treatments with traditional Romanian products are compared with a commercial antifungal product, Biotin R, which is used as a reference to assess the effectiveness of Red Petroleum and Propolis. Red Petroleum and Propolis are not active against mould, while Biotin R is very active. Mould attack is mostly concentrated in the painted layer, where the tempera, containing glue and egg, enhances nutrient availability for moulds. Biotin R, even though it is a fungicide rather than a true insecticide, was the most active of the three products against insect attack, followed by Red Petroleum, Propolis and the untreated reference. As for colour, it changed little after the application of Red Petroleum and Biotin R, and the colour difference was barely perceptible. On the contrary, Propolis affected the colour markedly. During exposure at different relative humidities (RH), the colour changed significantly at 100% RH at equilibrium, mainly because of mould attack. Red Petroleum penetrates deeply into wood, while Propolis does not penetrate and remains only on the surface. However, Red Petroleum does not interact chemically with the wood substance and is easily volatilized in oven-dry conditions. On the contrary, Propolis interacts chemically with the wood substance and is hardly volatilized, even in oven-dry conditions; consequently Propolis remains where it penetrated, mostly on the surface. Treatment by immersion has an impact on wood physical parameters, while treatment by brushing does not have a significant impact. In particular, Red Petroleum has an apparent impact on moisture content (MC) due to the penetration of the solution, while Propolis does not penetrate much and remains only on the surface, and therefore does not have as much impact as Red Petroleum. However, if the weight of the solution that penetrated the wood is subtracted, there is no significant difference in MC between treated and untreated samples. Among the physical parameters, dimensional stability is particularly important. The variation of wood moisture content causes shrinkage and swelling of the wood that the polychrome layer can only partially follow. The dimensions of the wooden supports varied under different moisture conditioning; the painted layer cannot completely follow this deformation, and consequently degradation and deterioration caused by detachment occur. Such detachment affects the polychrome stratification of the panel painting and eventually the connections between the different layers of the panel painting.
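For reference, the two quantities discussed above are conventionally computed as follows; the abstract does not state the exact formulas used, so these are the standard definitions assumed here (oven-dry moisture content, and the CIELAB colour difference usually adopted in conservation studies):

```latex
\mathrm{MC}\,[\%] \;=\; \frac{m_{\text{wet}} - m_{\text{oven-dry}}}{m_{\text{oven-dry}}} \times 100,
\qquad
\Delta E^{*}_{ab} \;=\; \sqrt{(\Delta L^{*})^{2} + (\Delta a^{*})^{2} + (\Delta b^{*})^{2}} .
```

Colour differences of only a few CIELAB units are commonly treated as barely perceptible, which is the sense in which the Red Petroleum and Biotin R changes above are described.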
Abstract:
Crew scheduling and crew rostering are similar and related problems that can be solved by similar procedures. So far, existing solution methods usually build a separate model for each of these problems (scheduling and rostering), and when they are solved together an interaction between the models is in some cases considered in order to obtain a better solution. A single set covering model that solves both problems simultaneously is presented here, in which the total number of drivers needed is directly considered and optimized. This integration makes it possible to optimize all the depots at the same time, whereas traditional approaches had to work depot by depot, and it also makes it possible to observe and manage the relationship between scheduling and rostering, which was known to some degree but usually not easy to quantify in the way this model permits. Recent research in the area of crew scheduling and rostering has identified as one of the current challenges the determination of schedules in which crew fatigue, which depends mainly on the quality of the rosters created, is reduced. In this approach rosters are constructed in such a way that stable working hours are used in every week of work, and a change to a different shift is made only with free days in between, to ease the adaptation to the new working hours. Computational results for real-world-based instances are presented. The instances are geographically diverse in order to test the performance of the procedures and the model in different scenarios.
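A generic set covering skeleton of the kind referred to above, with columns indexed by feasible rosters, looks as follows; the notation is an illustrative assumption and not the thesis's exact integrated model. With J the set of feasible rosters, a_ij = 1 if roster j covers duty/trip i, and each selected roster corresponding to one driver:

```latex
\begin{align*}
\min \quad & \sum_{j \in \mathcal{J}} x_j \quad \text{(total number of drivers)} \\
\text{s.t.} \quad & \sum_{j \in \mathcal{J}} a_{ij}\, x_j \ge 1 \qquad \forall i \in \mathcal{I} \quad \text{(every duty/trip covered)} \\
& x_j \in \{0,1\} \qquad \forall j \in \mathcal{J}.
\end{align*}
```

Integrating scheduling and rostering amounts to letting each column encode a full sequence of duties over the planning horizon (including weekly working-hour stability and free days between shift changes), so that minimizing the number of selected columns directly minimizes the number of drivers.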
Abstract:
This work presents exact, hybrid algorithms for mixed resource Allocation and Scheduling problems; in general terms, these consist in assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability, but are mainly motivated by applications in the field of Embedded System Design. In particular, high-performance embedded computing has recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance and low energy consumption, but the programmer must be able to exploit the platform parallelism effectively. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform an optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is in part explained by the complexity of integrated allocation and scheduling, which sets tough challenges for "pure" combinatorial methods. The use of hybrid CP/OR approaches presents the opportunity to exploit the mutual advantages of different methods, while compensating for their weaknesses. In this work, we first consider an Allocation and Scheduling problem for the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation and heuristic-guided search. Next, we face Allocation and Scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to deal effectively with the introduced stochastic elements. Finally, we address Allocation and Scheduling with uncertain, bounded execution times via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account and provide an efficient conflict detection method. The proposed approaches achieve good results on practical-size problems, thus demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
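To make the integrated problem structure concrete, the brute-force toy below enumerates all assignments of a tiny precedence-connected task graph to two processors, list-schedules each assignment, and keeps the assignment with the smallest makespan. It is only a statement of the problem in code, not the hybrid CP/OR, decomposition or conflict-based methods of the thesis; the task graph, durations and two-processor platform are assumptions.

```python
# Toy integrated allocation-and-scheduling: enumerate all task-to-processor
# assignments, list-schedule each one respecting precedences, keep the best.
# Purely illustrative of the problem structure, not of the thesis's hybrid methods.
import itertools

durations = {"a": 3, "b": 2, "c": 4, "d": 2, "e": 3}            # assumed durations
precedences = [("a", "c"), ("b", "c"), ("c", "e"), ("d", "e")]  # assumed task graph
tasks = list(durations)
n_procs = 2

def schedule(assignment):
    """Earliest-start list schedule for a fixed task-to-processor assignment."""
    start, finish = {}, {}
    proc_free = [0] * n_procs
    remaining = set(tasks)
    while remaining:
        # pick a task whose predecessors have all been scheduled (topological order)
        t = next(t for t in tasks
                 if t in remaining
                 and all(u in finish for u, v in precedences if v == t))
        p = assignment[t]
        start[t] = max(proc_free[p],
                       max((finish[u] for u, v in precedences if v == t), default=0))
        finish[t] = start[t] + durations[t]
        proc_free[p] = finish[t]
        remaining.remove(t)
    return max(finish.values())                                  # makespan

best = min(
    (dict(zip(tasks, alloc))
     for alloc in itertools.product(range(n_procs), repeat=len(tasks))),
    key=schedule,
)
print("best allocation:", best, "makespan:", schedule(best))
```

Exhaustive enumeration obviously does not scale; the point of the thesis's decomposition, cut generation and conflict-based search is to explore this coupled allocation/scheduling space exactly without enumerating it.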
Abstract:
The Iberia-Africa plate boundary runs roughly W-E, connecting the eastern Atlantic Ocean from the Azores triple junction to the continental margin of Morocco. The relative movement between the two plates changes along the boundary, from transtensive near the Azores archipelago, through transcurrent movement in the middle at the Gloria Fracture Zone, to transpressive in the Gulf of Cadiz area. This study presents the results of geophysical and geological analyses of the plate boundary area offshore Gibraltar. The main aim is to clarify the geodynamic evolution of this area from the Oligocene to the Quaternary. Recent studies have shown that the new plate boundary is represented by a 600 km long set of aligned, dextral transcurrent faults (the SWIM lineaments) connecting the Gloria Fault to the Rif orogen. The western termination of these lineaments crosscuts the Gibraltar accretionary prism and seems to reach the Moroccan continental shelf. In the past two years, newly acquired bathymetric data collected offshore Morocco have made it possible to constrain the present position of the eastern portion of the plate boundary, previously thought to be diffuse. The evolution of the plate boundary, from the onset of compression in the Oligocene to the Late Pliocene activation of transcurrent structures, is not yet well constrained. The review of available seismic lines, gravity and bathymetric data, together with the analysis of newly acquired bathymetric and high-resolution seismic data offshore Morocco, allows us to understand how deformation acted at the lithospheric scale under the compressive regime. Lithospheric folding in the area is suggested, and a new conceptual model is proposed for the propagation of the deformation acting in the brittle crust during this process. Our results show that lithospheric folding, both in oceanic and in thinned continental crust, produced large-wavelength synclines bounded by short-wavelength, thrust-top anticlines. Two of these anticlines are located in the Gulf of Cadiz and are represented by the Gorringe Ridge and Coral Patch seamounts. Lithospheric folding probably interacted with the Monchique-Madeira hotspot during its 72 Ma to Recent, NNE-SSW transit. Plume-related volcanism is described for the first time on top of the Coral Patch seamount, where nine volcanoes were identified by means of bathymetric data. A 40Ar-39Ar age of 31.4±1.98 Ma was measured on one rock sample from one of these volcanoes. Analyses of biogenic samples show that the Coral Patch has acted as a starved offshore seamount since the Chattian. We propose that compressional stress formed lithospheric-scale structures acting as a preferential pathway for the upwelling of mantle material during the hotspot transit. The interaction between lithospheric folding and hotspot emplacement may also be responsible for the irregular spacing, and anomalous alignments, of individual islands and seamounts belonging to the Monchique-Madeira hotspot.
Abstract:
This thesis focuses on the tectonic evolution and geochronology of part of the Kaoko orogen, which belongs to a network of Pan-African orogenic belts in NW Namibia. By combining geochemical, isotopic and structural analyses, the aim was to gain more information about how and when the Kaoko Belt formed. The first chapter gives a general overview of the study area and the second describes the basis of the Electron Probe Microanalysis dating method. The reworking of Palaeo- to Mesoproterozoic basement during the Pan-African orogeny, as part of the assembly of West Gondwana, is discussed in Chapter 3. In the study area, high-grade rocks occupy a large area, and the belt is marked by several large-scale structural discontinuities. The two major discontinuities, the Sesfontein Thrust (ST) and the Puros Shear Zone (PSZ), subdivide the orogen into three tectonic units: the Eastern Kaoko Zone (EKZ), the Central Kaoko Zone (CKZ) and the Western Kaoko Zone (WKZ). An important lineament, the Village Mylonite Zone (VMZ), has been identified in the WKZ. Since plutonic rocks play an important role in understanding the evolution of a mountain belt, zircons from granitoid gneisses were dated by conventional U-Pb, SHRIMP and Pb-Pb techniques to identify different age provinces. Four different age provinces were recognized within the central and western parts of the belt, occurring in different structural positions. The VMZ seems to mark the limit between Pan-African granitic rocks east of the lineament and Palaeo- to Mesoproterozoic basement to the west. Chapter 4 discusses the tectonic processes that led to the Neoproterozoic architecture of the orogen. The data suggest that the Kaoko Belt experienced three main phases of deformation, D1-D3, during the Pan-African orogeny. Early structures in the central part of the study area indicate that the initial stage of collision was governed by underthrusting of the medium-grade Central Kaoko Zone below the high-grade Western Kaoko Zone, resulting in the development of an inverted metamorphic gradient. The early structures were overprinted by a second phase, D2, which was associated with the development of the PSZ and with extensive partial melting and the intrusion of ~550 Ma granitic bodies in the high-grade WKZ. Transcurrent deformation continued during cooling of the entire belt, giving rise to the localized low-temperature VMZ that separates a segment of elevated Mesoproterozoic basement from the rest of the Western zone, in which only Pan-African ages have so far been observed. The data suggest that the boundary between the Western and Central Kaoko zones represents a modified thrust zone controlling the tectonic evolution of the Kaoko Belt. The geodynamic evolution and the processes that generated this belt system are discussed in Chapter 5. Nd mean crustal residence ages of granitoid rocks permit subdivision of the belt into four provinces. Province I is characterised by mean crustal residence ages <1.7 Ga and is restricted to the Neoproterozoic granitoids. A wide range of initial Sr isotopic values (87Sr/86Sri = 0.7075 to 0.7225) suggests heterogeneous sources for these granitoids. The second province consists of Mesoproterozoic (1516-1448 Ma) and late Palaeoproterozoic (1776-1701 Ma) rocks and is probably related to the Eburnian cycle, with Nd model ages of 1.8-2.2 Ga. The initial εNd values of these granitoids are around zero and suggest a predominantly juvenile source.
Late Archaean and middle Palaeoproterozoic rocks with model ages of 2.5 to 2.8 Ga make up Province III in the central part of the belt and are distinct from two early Proterozoic samples taken near the PSZ, which show even older TDM ages of ~3.3 Ga (Province IV). There is no clear geological evidence for the involvement of oceanic lithosphere in the formation of the Kaoko-Dom Feliciano orogen. Chapter 6 presents the results of isotopic analyses of garnet porphyroblasts from high-grade meta-igneous and metasedimentary rocks of the sillimanite-K-feldspar zone. Minimum P-T conditions for peak metamorphism were calculated at 731±10 °C and 6.7±1.2 kbar, substantially lower than those previously reported. A Sm-Nd garnet-whole rock errorchron obtained on a single meta-igneous rock yielded an unexpectedly old age of 692±13 Ma, which is interpreted as an inherited metamorphic age reflecting an early Pan-African granulite-facies event. The dated garnets survived a younger high-grade metamorphism that occurred between ca. 570 and 520 Ma and apparently maintained their old Sm-Nd isotopic systematics, implying that the closure temperature for garnet in this sample was higher than 730 °C. The metamorphic peak of the younger event was dated by electron microprobe on monazite at 567±5 Ma. From a regional viewpoint, it is possible that these granulites of igneous origin may be unrelated to the early Pan-African metamorphic evolution of the Kaoko Belt and may represent a previously unrecognised exotic terrane.
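For readers unfamiliar with the dating methods cited above (U-Pb, Pb-Pb, Sm-Nd isochrons and errorchrons), the quoted ages rest on the standard radiogenic-decay relation; the notation below is the textbook form, not anything specific to this thesis:

```latex
\frac{D}{D_{\mathrm{ref}}} \;=\; \left(\frac{D}{D_{\mathrm{ref}}}\right)_{\!0} \;+\; \frac{P}{D_{\mathrm{ref}}}\left(e^{\lambda t} - 1\right),
\qquad\text{so}\qquad
t \;=\; \frac{1}{\lambda}\,\ln(\text{slope} + 1),
```

where D is the radiogenic daughter isotope (e.g. 143Nd), D_ref a stable reference isotope of the same element (144Nd), P the parent isotope (147Sm), λ the decay constant, and "slope" the slope of the isochron (or errorchron) in the D/D_ref versus P/D_ref diagram fitted through cogenetic phases such as garnet and whole rock.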
Abstract:
This thesis is a collection of essays on the topic of innovation in the service sector. The choice of this structure serves the purpose of singling out some of the relevant issues and trying to tackle them, first reviewing the state of the literature and then proposing a way forward. Three relevant issues have therefore been selected: (i) the definition of innovation in the service sector and the connected question of how to measure innovation; (ii) the issue of productivity in services; (iii) the classification of innovative firms in the service sector. Facing the first issue, Chapter II shows how the initial breadth of the original Schumpeterian definition of innovation was narrowed and then passed from the manufacturing sector to the service sector in a reduced, technological form. Chapter III tackles the issue of productivity in services, discussing the difficulties of measuring productivity in a context where the output is often immaterial. We reconstruct the dispute around Baumol's cost disease argument and propose two different ways forward in the research on productivity in services: redefining the output along the lines of a characteristics approach, and redefining the inputs, in particular analysing which kinds of input are worth saving. Chapter IV derives an integrated taxonomy of innovative service and manufacturing firms, using data from the 2008 CIS survey for Italy. This taxonomy is based on the enlarged definition of "innovative firm" deriving from the Schumpeterian definition of innovation and classifies firms using cluster analysis techniques. The result is a four-cluster solution, in which firms are differentiated by the breadth of the innovation activities in which they are involved. Chapter V reports the main conclusions of each of the previous chapters and the points worthy of further research in the future.
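The abstract does not state which clustering algorithm Chapter IV uses, so the sketch below illustrates one common choice, a plain k-means with k = 4 run on firm-level indicator vectors of innovation activities; the data and features are entirely synthetic and serve only to show the mechanics behind a "four-cluster solution".

```python
# Illustrative k-means (k = 4) on made-up firm-level innovation-activity indicators.
# The abstract does not specify the clustering technique actually used in Chapter IV.
import numpy as np

rng = np.random.default_rng(0)
# 300 hypothetical firms, 6 binary indicators (e.g. product, process and
# organisational innovation, R&D, training, design) -- purely synthetic data.
X = rng.integers(0, 2, size=(300, 6)).astype(float)

def kmeans(X, k=4, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # assign each firm to the nearest center
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        # recompute centers; keep the old one if a cluster becomes empty
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

labels, centers = kmeans(X, k=4)
print(np.bincount(labels, minlength=4))   # number of firms in each cluster
```

In a real CIS-based taxonomy the cluster centers would then be inspected to characterize each group by the breadth of innovation activities, as described above.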
Abstract:
Over the past years the fruit and vegetable industry has become interested in the application of both osmotic dehydration and vacuum impregnation as mild technologies, because of their low temperature and energy requirements. Osmotic dehydration is a partial dewatering process carried out by immersing cellular tissue in a hypertonic solution. The diffusion of water from the vegetable tissue to the solution is usually accompanied by the simultaneous counter-diffusion of solutes into the tissue. Vacuum impregnation is a unit operation in which porous products are immersed in a solution and subjected to a two-step pressure change. In the first step (application of vacuum) the pressure in the solid-liquid system is reduced and the gas in the product pores expands, partially flowing out. When the atmospheric pressure is restored (second step), the residual gas in the pores is compressed and the external liquid flows into the pores. This unit operation makes it possible to introduce specific solutes into the tissue, e.g. antioxidants, pH regulators, preservatives and cryoprotectants. Fruits and vegetables interact dynamically with the environment, and the present study attempts to enhance our understanding of the structural, physico-chemical and metabolic changes of plant tissues upon the application of technological processes (osmotic dehydration and vacuum impregnation), by following a multianalytical approach. Macrostructural (low-frequency nuclear magnetic resonance), microstructural (light microscopy) and ultrastructural (transmission electron microscopy) measurements, combined with textural and differential scanning calorimetry analyses, allowed evaluation of the effects of individual osmotic dehydration or vacuum impregnation processes on (i) the interaction between air and liquid in real plant tissues, (ii) the water state of the plant tissue and (iii) the cell compartments. Isothermal calorimetry, respiration and photosynthesis determinations made it possible to investigate the metabolic changes upon the application of osmotic dehydration or vacuum impregnation. The proposed multianalytical approach should enable both better design of processing technologies and better estimation of their effects on the tissue.
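A first-order estimate of the liquid volume fraction introduced by the two-step pressure change described above follows from Boyle's law alone, assuming ideal-gas behaviour and neglecting matrix deformation and capillary pressure; it is given here only to make the mechanism concrete and is not a model used in the thesis:

```latex
X \;\approx\; \varepsilon_{e}\left(1 - \frac{p_{1}}{p_{2}}\right),
```

where X is the impregnated volume fraction of the sample, ε_e the effective (gas-filled) porosity, p1 the reduced pressure applied in the first step and p2 the restored (atmospheric) pressure: when pressure is restored the residual gas shrinks by the factor p1/p2, and the freed pore volume is occupied by the external solution.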
Resumo:
Bioinformatics, in the last few decades, has played a fundamental role to give sense to the huge amount of data produced. Obtained the complete sequence of a genome, the major problem of knowing as much as possible of its coding regions, is crucial. Protein sequence annotation is challenging and, due to the size of the problem, only computational approaches can provide a feasible solution. As it has been recently pointed out by the Critical Assessment of Function Annotations (CAFA), most accurate methods are those based on the transfer-by-homology approach and the most incisive contribution is given by cross-genome comparisons. In the present thesis it is described a non-hierarchical sequence clustering method for protein automatic large-scale annotation, called “The Bologna Annotation Resource Plus” (BAR+). The method is based on an all-against-all alignment of more than 13 millions protein sequences characterized by a very stringent metric. BAR+ can safely transfer functional features (Gene Ontology and Pfam terms) inside clusters by means of a statistical validation, even in the case of multi-domain proteins. Within BAR+ clusters it is also possible to transfer the three dimensional structure (when a template is available). This is possible by the way of cluster-specific HMM profiles that can be used to calculate reliable template-to-target alignments even in the case of distantly related proteins (sequence identity < 30%). Other BAR+ based applications have been developed during my doctorate including the prediction of Magnesium binding sites in human proteins, the ABC transporters superfamily classification and the functional prediction (GO terms) of the CAFA targets. Remarkably, in the CAFA assessment, BAR+ placed among the ten most accurate methods. At present, as a web server for the functional and structural protein sequence annotation, BAR+ is freely available at http://bar.biocomp.unibo.it/bar2.0.
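As a minimal sketch of the general idea behind non-hierarchical clustering with a stringent all-against-all metric, the code below joins into connected components every pair of sequences whose alignment exceeds assumed identity and coverage thresholds, using a union-find structure. The thresholds, the toy pair list and the component step are illustrative assumptions, not BAR+'s actual metric, statistical validation or pipeline.

```python
# Illustrative clustering of sequences into connected components: every pair of
# sequences whose pairwise alignment passes a stringent threshold is joined.
# Thresholds and the toy alignment list are assumptions, not BAR+'s actual metric.

parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:                 # path halving
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(x, y):
    parent[find(x)] = find(y)

# (seq_a, seq_b, %identity, %coverage) from a hypothetical all-against-all run.
alignments = [
    ("P1", "P2", 92.0, 95.0),
    ("P2", "P3", 45.0, 91.0),
    ("P4", "P5", 88.0, 60.0),   # low coverage: not joined
    ("P5", "P6", 97.0, 99.0),
]

ID_THRESHOLD, COV_THRESHOLD = 40.0, 90.0  # assumed stringent thresholds
for a, b, identity, coverage in alignments:
    if identity >= ID_THRESHOLD and coverage >= COV_THRESHOLD:
        union(a, b)

clusters = {}
for seq in {s for record in alignments for s in record[:2]}:
    clusters.setdefault(find(seq), set()).add(seq)
print(list(clusters.values()))
```

Within each resulting cluster, annotations (GO terms, Pfam domains, structural templates) can then be transferred from annotated to unannotated members, which is the transfer-by-homology step that BAR+ validates statistically.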
Abstract:
Diamond is the hardest mineral, and also a gemstone, and it crystallizes under very high pressures and high temperatures in deep continental regions of the Earth. Mineral inclusions in diamonds are protected by the physical stability and chemical resistance of the surrounding (actually metastable) diamond phase. Owing to the coexisting phase combinations, they make it possible to study the mineral evolution during which the inclusions and the diamonds crystallized. Phase combinations of diamond with chrome pyrope, chrome diopside, chromite, olivine, graphite, and enstatite coexisting (partly in touching contact) with chrome pyrope inclusions were identified and characterized in twenty-nine diamond samples from six localities in South Africa (the Premier, Koffiefontein, De Beers Pool, Finsch, Venetia and Koingnaas mines) and from Udachnaya (Siberia/Russia). Some of the mineral inclusions show cubo-octahedral forms that can develop independently of their own crystal systems. This means that they are syngenetic inclusions, morphologically constrained by the very high form energy of the surrounding diamond. From two-dimensional measurements of the characteristic first-order Raman bands, relative remnant pressures between diamond and inclusion mineral can be obtained in the diamonds; they have characteristic values of about 0.4 to 0.9 GPa around chrome pyrope inclusions, 0.6 to 2.0 GPa around chrome diopside inclusions, 0.3 to 1.2 GPa around olivine inclusions, 0.2 to 1.0 GPa around chromite inclusions, and 0.5 GPa around graphite inclusions. The crystal-structural relationships between the diamonds and their monomineralic inclusions were investigated by quantifying the angular correlations between the [111] direction of the diamonds and specifically selected directions of their mineral inclusions. The angular correlations between diamond [111] and chrome pyrope [111] or chromite [111] show the smallest deviations, from 2.2 to 3.4. The chrome diopside and olivine inclusions show misorientation values relative to diamond [111] of up to 10.2 and 12.9 for chrome diopside [010] and olivine [100], respectively. The chemical compositions of nine exposed inclusions (prepared by oriented polishing), namely three chrome pyrope inclusions from the Koffiefontein, Finsch and Venetia mines (two of the three coexisting with enstatite), one chromite from Udachnaya (Siberia/Russia), three chrome diopsides from Koffiefontein, Koingnaas and Udachnaya (Siberia/Russia), and two olivine inclusions from De Beers Pool and Koingnaas, were analyzed by EPMA and LA-ICP-MS. On the basis of their chemical composition, the mineral inclusions in the diamonds studied in this work can be assigned to the peridotitic suite. Geothermobarometric investigations were possible owing to the touching coexistence of chrome pyrope and enstatite in individual diamonds. Average temperatures and pressures of formation are interpreted as about 1087 (±15) °C and 5.2 (±0.1) GPa for diamond DHK6.2 from the Koffiefontein mine and about 1041 (±5) °C and 5.0 (±0.1) GPa for diamond DHF10.2 from the Finsch mine, respectively.
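The remnant pressures quoted above are derived from the shift of the first-order diamond Raman band (near 1332 cm⁻¹) relative to unstressed diamond; in its simplest, hydrostatic form the conversion is linear. The symbols below are generic and the pressure coefficient k must be taken from a literature calibration; no specific value is implied here:

```latex
P_{\mathrm{res}} \;\approx\; \frac{\nu - \nu_{0}}{k},
```

where ν is the measured band position near the inclusion, ν₀ the band position in unstressed diamond, and k the pressure coefficient of the band.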
Abstract:
This thesis focuses on the paleomagnetic rotation pattern inside the deforming zone of strike-slip faults, and on the kinematics and geodynamics describing it. The paleomagnetic investigation carried out along both the Liquiñe-Ofqui Fault Zone (LOFZ) and the fore-arc sliver (38°-42°S, southern Chile) revealed an asymmetric rotation pattern. East of the LOFZ and adjacent to it, rotations are up to 170° clockwise (CW) and fade out ~10 km east of the fault. West of the LOFZ at 42°S (Chiloé Island) and around 39°S (Villarrica domain), systematic counterclockwise (CCW) rotations have been observed, while at 40°-41°S (Ranco-Osorno domain) and adjacent to the LOFZ, CW rotations reach up to 136° before evolving to CCW rotations at ~30 km from the fault. These data suggest a direct relation with plate coupling at the subduction interface. Zones of high coupling yield a wide deforming zone (~30 km) west of the LOFZ characterized by CW rotations. Low coupling implies a weak LOFZ and a fore-arc dominated by CCW rotations related to NW-sinistral fault kinematics. The rotation pattern is consistent with quasi-continuous crustal kinematics. However, it seems unlikely that lower crustal flow can control block rotation in the upper crust, considering the cold and thick fore-arc crust. I suggest that the rotations are a consequence of forces applied directly to both the block edges and the main fault, within the upper crust. Farther south, in the Austral Andes (54°S), I measured the anisotropy of magnetic susceptibility (AMS) of 22 Upper Cretaceous to Upper Eocene sites from the internal domains of the Magallanes fold-thrust belt. The data document continuous compression from the Early Cretaceous until the Late Oligocene. The AMS data also show that the tectonic inversion of Jurassic extensional faults during the Late Cretaceous compressive phase may have controlled the Cenozoic kinematic evolution of the Magallanes fold-thrust belt, yielding slip partitioning.