Abstract:
Chemically resolved submicron (PM1) particle mass fluxes were measured by eddy covariance with a high-resolution time-of-flight aerosol mass spectrometer over temperate and tropical forests during the BEARPEX-07 and AMAZE-08 campaigns. Fluxes during AMAZE-08 were small and close to the detection limit (<1 ng m−2 s−1) due to low particle mass concentrations (<1 μg m−3). During BEARPEX-07, concentrations were five times larger, with mean mid-day deposition fluxes of −4.8 ng m−2 s−1 for total nonrefractory PM1 (Vex,PM1 = −1 mm s−1) and emission fluxes of +2.6 ng m−2 s−1 for organic PM1 (Vex,org = +1 mm s−1). Biosphere–atmosphere fluxes of different chemical components are affected by in-canopy chemistry, vertical gradients in gas-particle partitioning due to canopy temperature gradients, emission of primary biological aerosol particles, and wet and dry deposition. As a result of these competing processes, individual chemical components had fluxes of varying magnitude and direction during both campaigns. Oxygenated organic components representing regionally aged aerosol were deposited, while components of fresh secondary organic aerosol (SOA) were emitted. During BEARPEX-07, rapid in-canopy oxidation caused rapid SOA growth on the timescale of biosphere-atmosphere exchange. In-canopy SOA mass yields were 0.5–4%. During AMAZE-08, the net organic aerosol flux was influenced by deposition, in-canopy SOA formation, and thermal shifts in gas-particle partitioning. Wet deposition was estimated to be an order of magnitude larger than dry deposition during AMAZE-08. Small shifts in organic aerosol concentrations from anthropogenic sources such as urban pollution or biomass burning alter the balance between flux terms. The semivolatile nature of the Amazonian organic aerosol suggests a feedback in which warmer temperatures will partition SOA to the gas phase, reducing its light scattering and thus its potential to cool the region.
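The exchange velocities quoted above follow from the standard relation between an eddy-covariance flux and the mean concentration, Vex = F/C. A minimal illustrative sketch is given below; the concentration value is assumed only for the example, consistent in order of magnitude with the "five times larger" figure quoted in the abstract.

```python
# Exchange velocity from an eddy-covariance flux: Vex = F / C.
# Values below are illustrative, not taken from the campaign datasets.

def exchange_velocity_mm_s(flux_ng_m2_s: float, conc_ug_m3: float) -> float:
    """Convert a mass flux (ng m-2 s-1) and concentration (ug m-3) to mm s-1."""
    flux_g = flux_ng_m2_s * 1e-9      # g m-2 s-1
    conc_g = conc_ug_m3 * 1e-6        # g m-3
    return flux_g / conc_g * 1e3      # m s-1 -> mm s-1

# Example: a -4.8 ng m-2 s-1 deposition flux at an assumed ~4.8 ug m-3
# concentration gives Vex ~ -1 mm s-1, matching the order of magnitude above.
print(exchange_velocity_mm_s(-4.8, 4.8))   # -> -1.0
```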
Abstract:
This thesis is based on five papers addressing variance reduction in different ways. The papers have in common that they all present new numerical methods. Paper I investigates quantitative structure-retention relationships from an image processing perspective, using an artificial neural network to preprocess three-dimensional structural descriptions of the studied steroid molecules. Paper II presents a new method for computing free energies. Free energy is the quantity that determines chemical equilibria and partition coefficients. The proposed method may be used for estimating, e.g., chromatographic retention without performing experiments. Two papers (III and IV) deal with correcting deviations from bilinearity by so-called peak alignment. Bilinearity is a theoretical assumption about the distribution of instrumental data that is often violated by measured data. Deviations from bilinearity lead to increased variance, both in the data and in inferences from the data, unless invariance to the deviations is built into the model, e.g., by the use of the method proposed in paper III and extended in paper IV. Paper V addresses a generic problem in classification, namely how to measure the goodness of different data representations, so that the best classifier may be constructed. Variance reduction is one of the pillars on which analytical chemistry rests. This thesis considers two aspects of variance reduction: before and after experiments are performed. Before experimenting, theoretical predictions of experimental outcomes may be used to direct which experiments to perform, and how to perform them (papers I and II). After experiments are performed, the variance of inferences from the measured data is affected by the method of data analysis (papers III–V).
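As an aside on the link between free energy and partition coefficients noted above, the standard thermodynamic relation K = exp(−ΔG/RT) connects a computed free-energy difference to an equilibrium or partition constant. A minimal sketch follows; the ΔG value is an assumed example, not a result from the papers.

```python
import math

R = 8.314  # J mol-1 K-1, gas constant

def partition_constant(delta_g_j_mol: float, temperature_k: float = 298.15) -> float:
    """Equilibrium/partition constant from a free-energy difference: K = exp(-dG / RT)."""
    return math.exp(-delta_g_j_mol / (R * temperature_k))

# Example: an assumed transfer free energy of -5 kJ/mol between two phases
# corresponds to a partition coefficient of roughly 7.5 at 25 degrees C.
print(partition_constant(-5000.0))
```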
Abstract:
This volume collects the work carried out during a three-year PhD focused on the analysis of Central and Southern Adriatic marine sediments from a borehole and numerous cores, obtained thanks to the good seismic-stratigraphic knowledge of the study area. The work was carried out within the European projects EC-EURODELTA (coordinated by Fabio Trincardi, ISMAR-CNR), EC-EUROSTRATAFORM (coordinated by Phil P. E. Weaver, NOC, UK), and PROMESS1 (coordinated by Serge Bernè, IFREMER, France). The analysed sedimentary successions present highly expanded stratigraphic intervals, particularly for the last 400 kyr, 60 kyr and 6 kyr BP. These three time intervals led to a tri-partition of the PhD thesis. The study consisted of the analysis of planktic and benthic foraminiferal assemblages (more than 560 samples analysed), the preparation of material for oxygen and carbon stable isotope analyses, and the interpretation and discussion of the resulting dataset. The chronological framework of the last 400 kyr was established for borehole PRAD1-2 (within work-package WP6 of the PROMESS1 project), collected in 186.5 m water depth. The proposed chronology derives from a multi-disciplinary approach integrating numerous independent proxies, some of which were analysed by other specialists within the project. The final framework is based on micropaleontology (calcareous nannofossil and foraminiferal bioevents), climatic cyclicity (foraminiferal assemblages), geochemistry (oxygen stable isotopes on planktic and benthic records), paleomagnetism, radiometric ages (14C AMS), tephrochronology, and the identification of sapropel-equivalent levels (Se). It is worth noting the good consistency between the oxygen stable isotope curve obtained for borehole PRAD1-2 and other, deeper Mediterranean records. The studied proxies allowed the recognition of all the isotopic intervals from MIS10 to MIS1 in the PRAD1-2 record, and the base of the borehole has been ascribed to early MIS11. Glacial and interglacial intervals identified in the Central Adriatic record were also analysed in detail for the paleo-environmental reconstruction. For instance, glacial stages MIS6, MIS8 and MIS10 present peculiar foraminiferal assemblages, composed of benthic species typical of polar regions and no longer living in the Central Adriatic today. Moreover, a deepening trend in the paleo-bathymetry during glacial intervals was observed, from MIS10 (inner-shelf environment) to MIS4 (mid-shelf environment). Ten sapropel-equivalent levels were recognised in the PRAD1-2 Central Adriatic record. They show different planktic foraminiferal assemblages, which allowed the first distinction between events that occurred under warm-climate (Se5, Se7), cold-climate (Se4, Se6 and Se8) and temperate/intermediate-climate (Se1, Se3, Se9, Se', Se10) conditions, consistently with the literature. Cold-climate sapropel equivalents are characterised by the absence of an oligotrophic phase, whereas warm- and temperate-climate sapropel equivalents present both the oligotrophic and the eutrophic phases (except for Se1). Sea-floor conditions vary, according to the benthic foraminiferal assemblages, from relatively well oxygenated (Se1, Se3), to dysoxic (Se9, Se', Se10), to highly dysoxic (Se4, Se6, Se8), to events during which benthic foraminifers are absent (Se5, Se7). These two latter levels are also characterised by lamination of the sediment, a feature never before reported in the literature for such shallow records.
The enhanced stratification of the water column during events Se8, Se7, Se6, Se5 and Se4, together with the strong concurrent dilution of shallow waters indicated by the isotope record, led to the hypothesis of a period of intense precipitation in the Central Adriatic region, possibly due to a northward shift of the African Monsoon. Finally, the expression of the Central Adriatic PRAD1-2 Se5 equivalent was compared with the same event as recorded in other Eastern Mediterranean areas. Essentially the same sequence of planktic foraminiferal bioevents was consistently recognised, indicating a similar evolution of the water column across the Eastern Mediterranean; however, the synchronism of these events cannot be demonstrated. A high-resolution analysis of late Holocene (last 6000 years BP) climate change was carried out for the Adriatic area through the recognition of planktic and benthic foraminiferal bioevents. In particular, peaks of planktic Globigerinoides sacculifer (four during the last 5500 years BP in the most expanded core) have been interpreted, based on the ecological requirements of this species, as warm, arid intervals corresponding to periods of relative climatic optimum, such as the Medieval Warm Period, the Roman Age, the Late Bronze Age and the Copper Age. Consequently, minima in the abundance of this biomarker may correspond to relatively cooler and rainier periods. These conclusions are in good agreement with the isotopic and pollen data. The Last Occurrence (LO) of G. sacculifer has been dated in this work at an average age of 550 years BP, and it is the best bioevent approximating the base of the Little Ice Age in the Adriatic. Recent literature reports the same bioevent in the Levantine Basin with a rather consistent age; therefore, the LO of G. sacculifer has the potential to be extended to the whole Eastern Mediterranean. Within the Little Ice Age, the benthic foraminifer V. complanata shows two distinct peaks in the shallower Adriatic cores analysed, collected hundreds of kilometres apart within the mud-belt environment. Based on the ecological requirements of this species, these two peaks have been interpreted as the most intense (cold and rainy) oscillations within the LIA. The chronological framework of the analysed cores is robust, being based on several range-finding 14C AMS ages, on estimates of the secular variation of the magnetic field, and on geochemical estimates of the activity depth of the short-lived radionuclide 210Pb (for the core-top ages), and it is in good agreement with the tephrochronological, pollen and foraminiferal data. The intra-Holocene climate oscillations found in the Adriatic have been compared with those reported in the literature from other Northern Hemisphere records, and the chronological match appears quite good. Finally, the sedimentary successions analysed allowed a review and update of the foraminiferal ecobiostratigraphy available in the literature for the Adriatic region, through the definition of 16 ecobiozones for the last 60 kyr BP. Some bioevents are restricted to the Central Adriatic (for instance the LO of benthic Hyalinea balthica, approximating the MIS3/MIS2 boundary), while others occur throughout the Adriatic basin (for instance the LO of planktic Globorotalia inflata during MIS3, identifying Dansgaard-Oeschger cycle 8 (Denekamp)).
Abstract:
Introduction

1.1 Occurrence of polycyclic aromatic hydrocarbons (PAH) in the environment

Worldwide industrial and agricultural developments have released a large number of natural and synthetic hazardous compounds into the environment through careless waste disposal, illegal waste dumping and accidental spills. As a result, there are numerous sites in the world that require cleanup of soils and groundwater. Polycyclic aromatic hydrocarbons (PAHs) are one of the major groups of these contaminants (Da Silva et al., 2003). PAHs constitute a diverse class of organic compounds consisting of two or more aromatic rings with various structural configurations (Prabhu and Phale, 2003). Being derivatives of benzene, PAHs are thermodynamically stable. In addition, these chemicals tend to adhere to particle surfaces, such as soils, because of their low water solubility and strong hydrophobicity, which results in greater persistence under natural conditions. This persistence, coupled with their potential carcinogenicity, makes PAHs problematic environmental contaminants (Cerniglia, 1992; Sutherland, 1992). PAHs are widely found in high concentrations at many industrial sites, particularly those associated with the petroleum, gas production and wood-preserving industries (Wilson and Jones, 1993).

1.2 Remediation technologies

Conventional techniques used for the remediation of soil polluted with organic contaminants include excavation of the contaminated soil and disposal to a landfill, or capping (containment) of the contaminated areas of a site. These methods have drawbacks. The first simply moves the contamination elsewhere and may create significant risks in the excavation, handling and transport of hazardous material; additionally, it is very difficult and increasingly expensive to find new landfill sites for the final disposal of the material. The cap-and-containment method is only an interim solution, since the contamination remains on site, requiring monitoring and maintenance of the isolation barriers long into the future, with all the associated costs and potential liability. A better approach than these traditional methods is to destroy the pollutants completely, if possible, or to transform them into harmless substances. Some technologies that have been used are high-temperature incineration and various types of chemical decomposition (for example, base-catalyzed dechlorination or UV oxidation). However, these methods have significant disadvantages, principally their technological complexity, high cost, and lack of public acceptance. Bioremediation, in contrast, is a promising option for the complete removal and destruction of contaminants.

1.3 Bioremediation of PAH-contaminated soil and groundwater

Bioremediation is the use of living organisms, primarily microorganisms, to degrade or detoxify hazardous wastes into harmless substances such as carbon dioxide, water and cell biomass. Most PAHs are biodegradable under natural conditions (Da Silva et al., 2003; Meysami and Baheri, 2003), and bioremediation for the cleanup of PAH wastes has been extensively studied at both laboratory and commercial levels. It has been implemented at a number of contaminated sites, including the cleanup of the Exxon Valdez oil spill in Prince William Sound, Alaska in 1989, the Mega Borg spill off the Texas coast in 1990 and the Burgan Oil Field, Kuwait in 1994 (Purwaningsih, 2002). Different strategies for PAH bioremediation, such as in situ, ex situ or on-site bioremediation, have been developed in recent years.
In situ bioremediation is a technique applied to soil and groundwater at the site, without removing the contaminated soil or groundwater, based on providing optimum conditions for microbiological breakdown of the contaminants. Ex situ bioremediation of PAHs, on the other hand, is applied to soil and groundwater that have been removed from the site by excavation (soil) or pumping (water); the hazardous contaminants are then converted efficiently into harmless compounds in controlled bioreactors.

1.4 Bioavailability of PAH in the subsurface

Frequently, PAH contamination in the environment occurs as contaminants sorbed onto soil particles rather than as a separate phase (NAPL, non-aqueous phase liquid). It is known that the biodegradation rate of most PAHs sorbed onto soil is far lower than the rates measured in solution cultures of microorganisms with pure solid pollutants (Alexander and Scow, 1989; Hamaker, 1972). It is generally believed that only the fraction of PAHs dissolved in the soil solution can be metabolized by microorganisms in soil. The amount of contaminant that can be readily taken up and degraded by microorganisms is defined as bioavailability (Bosma et al., 1997; Maier, 2000). Two phenomena have been suggested to cause the low bioavailability of PAHs in soil (Danielsson, 2000). The first is strong adsorption of the contaminants to the soil constituents, which leads to very slow release rates of contaminants to the aqueous phase; sorption is often well correlated with soil organic matter content (Means, 1980) and significantly reduces biodegradation (Manilal and Alexander, 1991). The second phenomenon is slow mass transfer of pollutants, such as pore diffusion in the soil aggregates or diffusion in the organic matter in the soil. The complex set of these physical, chemical and biological processes is schematically illustrated in Figure 1: biodegradation processes take place in the soil solution, while diffusion processes occur in the narrow pores in and between soil aggregates (Danielsson, 2000). Seemingly contradictory studies can be found in the literature, indicating that the rate and final extent of metabolism may be either lower or higher for PAHs sorbed onto soil than for pure PAHs (Van Loosdrecht et al., 1990). These contrasting results demonstrate that the bioavailability of organic contaminants sorbed onto soil is far from being well understood. Besides bioavailability, several other factors influence the rate and extent of biodegradation of PAHs in soil, including microbial population characteristics, physical and chemical properties of the PAHs, and environmental factors (temperature, moisture, pH, degree of contamination).

Figure 1: Schematic diagram showing possible rate-limiting processes during bioremediation of hydrophobic organic contaminants in a contaminated soil-water system (not to scale) (Danielsson, 2000).

1.5 Increasing the bioavailability of PAH in soil

Attempts to improve the biodegradation of PAHs in soil by increasing their bioavailability include the use of surfactants, solvents or solubility enhancers. However, the introduction of a synthetic surfactant may result in the addition of one more pollutant (Wang and Brusseau, 1993). A study conducted by Mulder et al. showed that the introduction of hydroxypropyl-β-cyclodextrin (HPCD), a well-known PAH solubility enhancer, significantly increased the solubilization of PAHs but did not improve their biodegradation rate (Mulder et al., 1998), indicating that further research is required in order to develop a feasible and efficient remediation method. Enhancing the extent of PAH mass transfer from the soil phase to the liquid phase might prove an efficient and environmentally low-risk way of addressing the problem of slow PAH biodegradation in soil.
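To make the bioavailability and mass-transfer argument above concrete, a simple two-compartment picture is often used as a teaching illustration: contaminant desorbs from the sorbed pool into solution at a first-order mass-transfer rate, and only the dissolved fraction is biodegraded. The sketch below is a generic illustration with assumed rate constants, not a model fitted to any of the cited studies.

```python
# Illustrative two-compartment model: sorbed PAH desorbs slowly into solution,
# and only the dissolved fraction is biodegraded (first-order kinetics).
# Rate constants are assumed for illustration only.
import numpy as np
from scipy.integrate import solve_ivp

k_des = 0.05   # 1/day, desorption (mass-transfer) rate from soil to water
k_bio = 1.0    # 1/day, biodegradation rate of dissolved PAH

def rhs(t, y):
    sorbed, dissolved = y
    desorption = k_des * sorbed
    biodegradation = k_bio * dissolved
    return [-desorption, desorption - biodegradation]

sol = solve_ivp(rhs, (0.0, 100.0), y0=[100.0, 0.0], t_eval=np.linspace(0, 100, 11))
total_remaining = sol.y[0] + sol.y[1]
print(total_remaining)  # decline is controlled by k_des, not k_bio, when k_des << k_bio
```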
Abstract:
The relation between intercepted light and orchard productivity has been considered linear, although this dependence seems to be governed more by the planting system than by light intensity. At the whole-plant level, an increase in irradiance does not always determine an improvement in productivity. One of the reasons can be the plant's intrinsic inefficiency in using energy: generally, in full light, only 5–10% of the total incoming energy is allocated to net photosynthesis. Preserving or improving this efficiency is therefore pivotal for scientists and fruit growers. Even though a conspicuous amount of energy is reflected or transmitted, plants cannot avoid absorbing photons in excess. Chlorophyll over-excitation promotes the production of reactive species, increasing the risk of photoinhibition. The dangerous consequences of photoinhibition have forced plants to evolve a complex, multilevel machinery able to dissipate excess energy as heat (non-photochemical quenching), to move electrons (the water-water cycle, cyclic transport around PSI, the glutathione-ascorbate cycle and photorespiration) and to scavenge the reactive species generated. The price plants must pay for this equipment is the use of CO2 and reducing power, with a consequent decrease in photosynthetic efficiency, both because some photons are not used for carboxylation and because an effective loss of CO2 and reducing power occurs. Net photosynthesis increases with light up to the saturation point; additional PPFD does not improve carboxylation but raises the contribution of the alternative energy-dissipation pathways, as well as ROS production and photoinhibition risk. The wide photo-protective apparatus is nonetheless unable to cope with all the excess incoming energy, and photodamage therefore occurs. Any event that increases the photon pressure and/or decreases the efficiency of the described photo-protective mechanisms (e.g. thermal stress, water or nutritional deficiency) can exacerbate photoinhibition. In nature only a small fraction of damaged photosystems is usually found, because of an effective, efficient and energy-consuming recovery system. Since damaged PSII is quickly repaired at an energetic cost, it would be interesting to investigate how much PSII recovery costs in terms of plant productivity. This PhD dissertation aims to improve knowledge of the several strategies used to manage incoming energy, and of the implications of excess light for photodamage in peach. The thesis is organized in three scientific units. In the first section a new rapid, non-intrusive, whole-tissue and universal technique for the determination of functional PSII was implemented and validated on different kinds of plants: C3 and C4 species, woody and herbaceous plants, wild type and a chlorophyll b-less mutant, and monocot and dicot plants. In the second unit, using a singular experimental orchard named the "Asymmetric orchard", the relation between light environment and photosynthetic performance, water use and photoinhibition was investigated in peach at the whole-plant level; furthermore, the effect of variations in photon pressure on energy management was considered at the single-leaf level. In the third section the quenching analysis method proposed by Kornyeyev and Hendrickson (2007) was validated on peach. It was then applied in the field, where the influence of moderate light and water reduction on peach photosynthetic performance, water requirements, energy management and photoinhibition was studied.
Using solar energy as fuel for life is intrinsically hazardous for plants because of the constant, high risk of photodamage. This dissertation tries to highlight the complex relation existing between plants, in particular peach, and light, analysing the principal strategies plants have developed to manage incoming light so as to derive the maximum possible benefit while minimizing the risks. First, the new method proposed for the determination of functional PSII, based on P700 redox kinetics, appears to be a valid, non-intrusive, universal and field-applicable technique, also because it probes the whole leaf tissue in depth rather than only the first leaf layers, as fluorescence does. The fluorescence parameter Fv/Fm gives a good estimate of functional PSII, but only when data obtained from the adaxial and abaxial leaf surfaces are averaged. In addition to this method, the energy quenching analysis proposed by Kornyeyev and Hendrickson (2007), combined with the photosynthesis model proposed by von Caemmerer (2000), is a powerful tool to analyse and study, even in the field, the relation between the plant and environmental factors such as water, temperature and, above all, light. The "asymmetric" training system is a good way to study the relations between light energy, photosynthetic performance and water use in the field. At the whole-plant level, net carboxylation increases with PPFD up to a saturation point. Excess light, rather than improving photosynthesis, may exacerbate water and thermal stress, leading to stomatal limitation. Furthermore, too much light does not promote an improvement in net carboxylation but rather PSII damage; in fact, in the most light-exposed plants about 50-60% of the total PSII is inactivated. At the single-leaf level, net carboxylation increases up to the saturation point (1000-1200 μmol m-2 s-1), and excess light is dissipated by non-photochemical quenching and by non-net-carboxylative electron transports. The latter follow a pattern quite similar to the Pn/PPFD curve, reaching saturation at almost the same photon flux density. At middle-to-low irradiance, NPQ seems to be limited by lumen pH, because the incoming photon pressure is not sufficient to generate the lumen pH required for full activation of violaxanthin de-epoxidase (VDE). Peach leaves try to cope with excess light by increasing the non-net-carboxylative transports. As PPFD rises, the xanthophyll cycle becomes more and more activated and the rate of non-net-carboxylative transports is reduced. Some of these alternative transports, such as the water-water cycle, the cyclic transport around PSI and the glutathione-ascorbate cycle, are able to generate additional H+ in the lumen and thus support VDE activation when light is limiting. Moreover, the alternative transports seem to act as an important dissipative route when high temperature and sub-optimal conductance increase photoinhibition risk. In peach, a moderate reduction in water and light does not decrease net carboxylation but, by diminishing the incoming light and the evapo-transpirative demand, it lowers stomatal conductance and improves water-use efficiency. Therefore, lowering light intensity to levels that are still non-limiting could save water without compromising net photosynthesis. The quenching analysis is able to partition the absorbed energy among the several utilization, photoprotection and photo-oxidation pathways. When recovery is permitted, only a few PSII remain unrepaired, although more net PSII damage is recorded in plants placed in full light.
Even in this experiment, at over-saturating light the main dissipation pathway is non-photochemical quenching; at middle-to-low irradiance it seems to be pH-limited, and other routes, such as photorespiration and the alternative transports, are used to support photoprotection and to contribute to creating the optimal trans-thylakoidal ΔpH for violaxanthin de-epoxidase. These alternative pathways become the main quenching mechanisms in very low-light environments. Another aspect pointed out by this study is the role of NPQ as a dissipative pathway when conductance becomes severely limiting. The evidence that in nature only a small amount of damaged PSII is seen indicates the presence of an effective and efficient recovery mechanism that masks the real photodamage occurring during the day. At the single-leaf level, when repair is not allowed, leaves in full light are twofold more photoinhibited than shaded ones. Therefore, light in excess of the photosynthetic optimum does not promote net carboxylation but increases water loss and PSII damage. The greater the photoinhibition, the more photosystems must be repaired and, consequently, the more energy and dry matter must be allocated to this essential activity. Since above the saturation point net photosynthesis is constant while photoinhibition increases, it would be interesting to investigate how much photodamage costs in terms of tree productivity. Another aspect of pivotal importance to be further explored is the combined influence of light and other environmental parameters, such as water status, temperature and nutrition, on light, water and photosynthate management in peach.
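The fluorescence-based quantities referred to above (Fv/Fm and the quenching terms) are computed from a few standard chlorophyll fluorescence levels. The sketch below uses the textbook definitions as an illustration; it is not the specific Kornyeyev and Hendrickson (2007) partitioning scheme.

```python
# Standard chlorophyll fluorescence parameters (textbook definitions, for illustration).
# F0, Fm: minimal and maximal fluorescence of a dark-adapted leaf.
# Fs, Fm_prime: steady-state and maximal fluorescence in the light.

def fv_fm(f0: float, fm: float) -> float:
    """Maximum quantum efficiency of PSII, Fv/Fm = (Fm - F0) / Fm."""
    return (fm - f0) / fm

def phi_psii(fs: float, fm_prime: float) -> float:
    """Effective PSII quantum yield in the light, (Fm' - Fs) / Fm'."""
    return (fm_prime - fs) / fm_prime

def npq(fm: float, fm_prime: float) -> float:
    """Non-photochemical quenching, NPQ = (Fm - Fm') / Fm'."""
    return (fm - fm_prime) / fm_prime

# Example with assumed fluorescence readings (arbitrary units):
print(fv_fm(0.2, 1.0), phi_psii(0.45, 0.6), npq(1.0, 0.6))
```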
Abstract:
The main object of this thesis is the analysis and the quantization of spinning particle models which employ extended "one-dimensional supergravity" on the worldline, and their relation to the theory of higher spin fields (HS). In the first part of this work we have described the classical theory of massless spinning particles with an SO(N) extended supergravity multiplet on the worldline, in flat and, more generally, in maximally symmetric backgrounds. These (non)linear sigma models describe, upon quantization, the dynamics of particles with spin N/2. We have then carefully analyzed the quantization of spinning particles with SO(N) extended supergravity on the worldline, for every N and in every dimension D. The physical sector of the Hilbert space reveals an interesting geometrical structure: the generalized higher spin curvature (HSC). We have shown, in particular, that these models of spinning particles describe a subclass of HS fields whose equations of motion are conformally invariant at the free level; in D = 4 this subclass describes all massless representations of the Poincaré group. In the third part of this work we have considered the one-loop quantization of SO(N) spinning particle models by studying the corresponding partition function on the circle. After gauge fixing of the supergravity multiplet, the partition function reduces to an integral over the corresponding moduli space, which has been computed using orthogonal polynomial techniques. Finally, we have extended our canonical analysis, described previously for flat space, to maximally symmetric target spaces (i.e. (A)dS backgrounds). The quantization of these models produces (A)dS HSC as the physical states of the Hilbert space; we have used an iterative procedure and Pochhammer functions to solve the differential Bianchi identity in maximally symmetric spaces. Motivated by the correspondence between SO(N) spinning particle models and HS gauge theory, and by the notorious difficulty one finds in constructing an interacting theory for fields with spin greater than two, we have used these one-dimensional supergravity models to study and extract information on HS. In the last part of this work we have constructed spinning particle models with sp(2) R-symmetry, coupled to hyper-Kähler and quaternionic-Kähler (QK) backgrounds.
Abstract:
[EN] Freshman students always show lower success rates than students at other levels. Digital Systems is a course usually taught to first-year students, and its success rate is not very high. In this work we introduce three digital tools, designed for easy use, to improve freshman learning; one of them is a tool for mobile terminals that can be used as a game. The first tool, ParTec, implements and tests the partition technique, which is used to eliminate redundant states in finite state machines; this is a repetitive task that students do not like to perform. The second tool, called KarnUMa, is used to simplify logic functions through Karnaugh maps. Simplifying logic functions is a core task of this course and, although students usually perform it better than other tasks, there is still room for improvement. The third tool is a version of KarnUMa designed for mobile devices. All the tools are available online for download and have proven helpful for students.
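For readers unfamiliar with the partition technique that ParTec exercises, the idea is to refine a partition of the FSM states iteratively until states in the same block agree on their outputs and on the blocks their transitions lead to; equivalent states can then be merged. The sketch below is a generic illustration of this classical algorithm, not code from the ParTec tool itself.

```python
# Generic partition-refinement minimization of a Moore-style FSM (illustrative,
# not the ParTec implementation). States with the same output start in the same
# block; blocks are split until transitions are consistent within each block.

def minimize(states, inputs, delta, output):
    """delta[(state, symbol)] -> next state; output[state] -> output value."""
    # Initial partition: group states by output.
    blocks = {}
    for s in states:
        blocks.setdefault(output[s], set()).add(s)
    partition = list(blocks.values())

    changed = True
    while changed:
        changed = False
        new_partition = []
        for block in partition:
            # Split the block by the signature of target blocks for each input.
            groups = {}
            for s in block:
                signature = tuple(
                    next(i for i, b in enumerate(partition) if delta[(s, a)] in b)
                    for a in inputs
                )
                groups.setdefault(signature, set()).add(s)
            new_partition.extend(groups.values())
            if len(groups) > 1:
                changed = True
        partition = new_partition
    return partition  # each block is a set of equivalent states

# Example: states B and C are equivalent and end up in the same block.
states = ["A", "B", "C"]
inputs = ["0", "1"]
delta = {("A", "0"): "B", ("A", "1"): "C", ("B", "0"): "A", ("B", "1"): "C",
         ("C", "0"): "A", ("C", "1"): "B"}
output = {"A": 0, "B": 1, "C": 1}
print(minimize(states, inputs, delta, output))
```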
Abstract:
Tuber borchii (Ascomycota, order Pezizales) is a highly valued truffle sold in local markets in Italy. Despite its economic importance, knowledge of its distribution and population variation is scarce. The objective of this work was to investigate the evolutionary forces shaping the genetic structure of this fungus, using coalescent and phylogenetic methods to reconstruct the evolutionary history of its populations in Italy. To assess population structure, 61 specimens were collected from 11 different provinces of Italy. Sampling was stratified across hosts and habitats to maximize coverage of native oak and pine stands, and both mycorrhizae and fruiting bodies were collected. Samples were identified on the basis of anatomo-morphological characters. DNA was extracted, and both multilocus (AFLP) and single-locus (18 loci from rDNA, nDNA, and mtDNA) approaches were used to look for polymorphisms. When screening AFLP profiles, both the Jaccard and the Dice coefficients of similarity were used to transform the binary matrix into a distance matrix and then to derive Neighbour-Joining trees. Although these are only preliminary examinations, the resulting phylogenetic trees were fully concordant with those derived from the single-locus analyses. Phylogenetic analyses of the nuclear loci were performed using maximum likelihood with PAUP, and a combined phylogenetic inference, using Bayesian estimation with all nuclear gene regions, was carried out. To reconstruct the evolutionary history, we estimated recurrent migration and migration across the history of the sample, and estimated the mutations and the approximate age of mutations in each tree using SNAP Workbench. The combined phylogenetic tree obtained by Bayesian estimation suggests that there are two main haplotypes that are difficult to differentiate on the basis of morphology, ecological parameters or symbiotic host tree. Between these two lineages, which occur in sympatry within T. borchii populations, there is no evidence of recurrent migration. However, migration over the history of the sample was asymmetrical, suggesting that isolation resulted from interrupted gene flow followed by range expansion. Low levels of divergence between the haplotypes indicate that there are likely to be two cryptic species within the T. borchii populations sampled. Our results suggest that isolation between populations of T. borchii could have led to reproductive isolation between the two lineages. This isolation is likely due to sympatric speciation caused by multiple colonizations from different refugia, or to a recent isolation. In attempting to determine whether these haplotypes represent separate species or a partition of the same species, we applied the Biological and Mechanistic Species Concepts. Nevertheless, further analyses are necessary to evaluate whether selection favoured pre-mating or post-mating isolation.
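For clarity, the Jaccard and Dice similarity coefficients mentioned above are simple functions of shared and private bands in binary AFLP profiles. A minimal sketch with made-up profiles follows; the distance transform used here (1 − similarity) is one common choice and is an assumption, since the abstract does not specify it.

```python
# Jaccard and Dice similarities for binary AFLP band profiles (illustrative data).
# a = bands present in both samples, b and c = bands private to one sample.

def band_counts(x, y):
    a = sum(1 for xi, yi in zip(x, y) if xi == 1 and yi == 1)
    b = sum(1 for xi, yi in zip(x, y) if xi == 1 and yi == 0)
    c = sum(1 for xi, yi in zip(x, y) if xi == 0 and yi == 1)
    return a, b, c

def jaccard(x, y):
    a, b, c = band_counts(x, y)
    return a / (a + b + c)

def dice(x, y):
    a, b, c = band_counts(x, y)
    return 2 * a / (2 * a + b + c)

profile_1 = [1, 1, 0, 1, 0, 1]
profile_2 = [1, 0, 0, 1, 1, 1]
print(jaccard(profile_1, profile_2))       # 0.6
print(dice(profile_1, profile_2))          # 0.75
print(1 - jaccard(profile_1, profile_2))   # one possible distance for NJ trees
```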
Abstract:
[EN] The application of Isogeometric Analysis (IA) with T-splines [1] demands a partition of the parametric space, C, into a tiling containing T-junctions, called a T-mesh. The T-splines are used both for the geometric modelling of the physical domain, D, and as the basis of the numerical approximation. They have the advantage over NURBS of allowing local refinement. In this work we propose a procedure to construct T-spline representations of complex domains in order to apply them to the resolution of elliptic PDEs with IA. In previous works [2, 3] we accomplished this task by using a tetrahedral parametrization…
Abstract:
The intensity of regional specialization in specific activities, and conversely the level of industrial concentration in specific locations, has been used as complementary evidence for the existence and significance of externalities. Additionally, economists have mainly focused the debate on disentangling the sources of specialization and concentration processes along three vectors: natural advantages, internal scale economies, and external scale economies. The arbitrariness of the spatial partition plays a key role in capturing these effects, and the chosen partition should reflect the actual characteristics of the economy. Thus, the identification of spatial boundaries for measuring specialization becomes critical, since the model will most likely have to be adapted to different distance scales and will be influenced by different types of externalities or agglomeration economies, which rest on interaction mechanisms with particular requirements of spatial proximity. This work analyses the spatial dimension of economic specialization, using the manufacturing industry as a case study. The main objective is to propose, for discrete and continuous space: i) a measure of global specialization; ii) a local disaggregation of the global measure; and iii) a spatial clustering method for the identification of specialized agglomerations.
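As a concrete reference point for the kind of specialization measure discussed above, a common baseline in this literature is the Krugman-style specialization index, which sums the absolute gaps between a region's industry employment shares and the national shares. The sketch below is only an illustrative baseline with made-up data, not the measure proposed in this work.

```python
# Krugman-style regional specialization index (illustrative baseline, not the
# measure proposed in the thesis). Rows: regions, columns: industries.
import numpy as np

employment = np.array([
    [120.0,  30.0,  50.0],   # region 1 employment by industry
    [ 40.0,  90.0,  70.0],   # region 2
    [ 60.0,  60.0,  80.0],   # region 3
])

region_shares = employment / employment.sum(axis=1, keepdims=True)
national_shares = employment.sum(axis=0) / employment.sum()

# K_r = sum_i |s_{ir} - s_i|; 0 means the region mirrors the national structure,
# values near 2 indicate complete specialization in industries absent elsewhere.
specialization = np.abs(region_shares - national_shares).sum(axis=1)
print(specialization)
```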
Abstract:
The formation of mid-ocean ridge basalts (MORB) is one of the most important mass fluxes on Earth. Each year, more than 20 km3 of new magmatic crust forms along the 75,000 km of mid-ocean ridges, about 90 percent of global magma production. Although ocean ridges and MORB are among the most intensively studied geological topics, several controversies remain. Among the most important are the role of geodynamic boundary conditions, such as spreading rate or proximity to hotspots or transform faults, as well as the absolute degree of melting and the depth at which melting begins beneath the ridges. This dissertation addresses these topics on the basis of major- and trace-element compositions of minerals in oceanic mantle rocks. Geochemical characteristics of MORB suggest that the oceanic mantle begins to melt in the stability field of garnet peridotite. Recent experiments, however, show that the heavy rare earth elements (REE) are compatible in clinopyroxene (Cpx). Because of this garnet-like behaviour of Cpx, garnet is no longer required to explain the MORB data, which shifts the onset of melting to lower pressures. It is therefore important to test whether this hypothesis is consistent with data from abyssal peridotites. These mantle fragments, exposed on the ocean floor, represent the residues of the melting process, and their mineral chemistry carries information about the conditions under which the magmas formed. Major- and trace-element compositions of peridotite samples from the Central Indian Ridge (CIR) were determined by electron microprobe and ion probe and compared with published data. Cpx in the CIR peridotites shows low ratios of middle to heavy REE and high absolute concentrations of the heavy REE. Melting models for a spinel peridotite using conventional, incompatible partition coefficients (Kd's) cannot reproduce the measured fractionation of middle to heavy REE. Applying the new Kd's, which predict compatible behaviour of the heavy REE in Cpx, gives better results but still cannot explain the most strongly fractionated samples. Moreover, very high degrees of melting would be required, which is inconsistent with the major-element data. Low (~3-5%) degrees of melting in the stability field of garnet peridotite, followed by further melting of spinel peridotite, can however largely explain the observations. Garnet must therefore still be regarded as an important phase in the genesis of MORB (Chapter 1). A further obstacle to a quantitative understanding of melting processes beneath mid-ocean ridges is the lack of correlation between major and trace elements in residual abyssal peridotites. The Cr/(Cr+Al) ratio (Cr#) in spinel is generally regarded as a good qualitative indicator of the degree of melting. The mineral chemistry of the CIR peridotites and published data from other abyssal peridotites show that the heavy REE correlate very well (r2 ~ 0.9) with the Cr# of coexisting spinels. Evaluating this correlation yields a quantitative melting indicator for residues based on spinel chemistry.
The degree of melting can thus be expressed as a function of Cr# in spinel: F = 0.10×ln(Cr#) + 0.24 (Hellebrand et al., Nature, in review; Chapter 2). Applying this indicator to mantle samples for which no ion-probe data are available makes it possible to link geochemical and geophysical data. From a geodynamic perspective, the Gakkel Ridge in the Arctic Ocean is of great importance for understanding melting processes, since it has the lowest spreading rate in the world and lacks large transform faults. Published basalt data point to an extremely low degree of melting, consistent with global correlations. Strongly altered mantle peridotites from one locality along the sparsely sampled Gakkel Ridge were therefore examined for primary minerals. Only in one sample are oxidized spinel pseudomorphs with traces of primary spinel preserved. Their Cr# is significantly higher than that of some peridotites from faster-spreading ridges, and their degree of melting is thus higher than inferred from the basalt compositions. The degree of melting obtained with the indicator described above allows the crustal thickness at the Gakkel Ridge to be calculated. This thickness is considerably greater than that derived from gravity data, or than the crustal thickness expected from the global correlation between spreading rate and seismically determined crustal thickness. This unexpected result may be attributable to compositional heterogeneities at low degrees of melting, or to an overall greater depletion of the mantle beneath the Gakkel Ridge (Hellebrand et al., Chem. Geol., in review; Chapter 3). Additional information on the modelling and the analytical methods is given in Appendices A-C.
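The spinel-based melting indicator quoted above is a one-line calculation; a minimal sketch follows, using F = 0.10·ln(Cr#) + 0.24 with Cr# = Cr/(Cr+Al) as a molar ratio. The example Cr# value is assumed, not a measured CIR or Gakkel datum.

```python
import math

# Degree of melting from spinel chemistry, F = 0.10 * ln(Cr#) + 0.24,
# with Cr# = Cr/(Cr+Al) as a molar ratio (Hellebrand et al., cited above).

def melting_degree(cr_number: float) -> float:
    return 0.10 * math.log(cr_number) + 0.24

# Example with an assumed Cr# of 0.30: F is roughly 0.12, i.e. ~12% melting.
print(melting_degree(0.30))
```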
Abstract:
The purpose of this Thesis is to develop a robust and powerful method to classify galaxies from large surveys, in order to establish and confirm the connections between the principal observational parameters of the galaxies (spectral features, colours, morphological indices), and to help unveil the evolution of these parameters from $z \sim 1$ to the local Universe. Within the framework of the zCOSMOS-bright survey, and making use of its large database of objects ($\sim 10\,000$ galaxies in the redshift range $0 < z \lesssim 1.2$) and its great reliability in redshift and spectral property determinations, we first adopt and extend the \emph{classification cube method}, as developed by Mignoli et al. (2009), to exploit the bimodal properties of galaxies (spectral, photometric and morphological) separately, and then combine these three subclassifications. We use this classification method as a test for a newly devised statistical classification, based on Principal Component Analysis and the Unsupervised Fuzzy Partition clustering method (PCA+UFP), which is able to define the galaxy population by exploiting their natural global bimodality, considering simultaneously up to 8 different properties. The PCA+UFP analysis is a very powerful and robust tool to probe the nature and the evolution of galaxies in a survey. It allows the classification of galaxies to be defined with smaller uncertainties and adds the flexibility to be adapted to different parameters: being a fuzzy classification, it avoids the problems of a hard classification, such as the classification cube presented in the first part of the article. The PCA+UFP method can be easily applied to different datasets: it does not rely on the nature of the data and for this reason it can be successfully employed with other observables (magnitudes, colours) or derived properties (masses, luminosities, SFRs, etc.). The agreement between the two classification cluster definitions is very high. ``Early'' and ``late'' type galaxies are well defined by their spectral, photometric and morphological properties, both when considering them separately and then combining the classifications (classification cube) and when treating them as a whole (PCA+UFP cluster analysis). Differences arise in the definition of outliers: the classification cube is much more sensitive to single measurement errors or misclassifications in one property than the PCA+UFP cluster analysis, in which errors are ``averaged out'' during the process. This method allowed us to observe the \emph{downsizing} effect taking place in the PC spaces: the migration from the blue cloud towards the red clump happens at higher redshifts for galaxies of larger mass. The determination of $M_{\mathrm{cross}}$, the transition mass, is in good agreement with other values in the literature.
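As a rough illustration of the PCA+UFP idea described above, the sketch below projects a multi-parameter catalogue onto its principal components and then runs a simple fuzzy c-means loop on the projected data. It is a generic stand-in on assumed synthetic data, not the zCOSMOS pipeline or the specific UFP algorithm.

```python
# Illustrative PCA + fuzzy clustering on synthetic "galaxy properties"
# (generic sketch; not the zCOSMOS PCA+UFP pipeline).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Two synthetic populations ("early"/"late") in 8 correlated observables.
blue = rng.normal(loc=-1.0, scale=0.7, size=(300, 8))
red = rng.normal(loc=+1.0, scale=0.7, size=(200, 8))
X = np.vstack([blue, red])

# Project onto the first two principal components.
scores = PCA(n_components=2).fit_transform(X)

# Basic fuzzy c-means on the PC scores (fuzzifier m = 2).
def fuzzy_cmeans(data, n_clusters=2, m=2.0, n_iter=100):
    n = len(data)
    u = rng.dirichlet(np.ones(n_clusters), size=n)      # membership matrix
    for _ in range(n_iter):
        w = u ** m
        centers = (w.T @ data) / w.sum(axis=0)[:, None]
        dist = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / (dist ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

centers, memberships = fuzzy_cmeans(scores)
print(centers)                      # cluster centres in PC space
print(memberships[:5].round(2))     # soft memberships of the first objects
```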
Abstract:
In this work, the influence of copolymers on the interfacial tension sigma of homopolymer blends was investigated for the system poly(ethylene oxide) / poly(propylene oxide) (PEO / PPO). The additives were triblock copolymers EO-block-PO-block-EO and PO-block-EO-block-PO, diblock copolymers S-block-EO, and random copolymers EO-ran-PO. The additives were chosen such that pairs of additives differ in exactly one property (composition, chain length, block arrangement) while being comparable in all other parameters. The interfacial tension was determined experimentally as a function of temperature using the pendant-drop method, with the denser polymer, PEO, forming the drop phase and PPO the matrix phase. For the interfacial-tension measurements on the ternary systems, the additive was added at different concentrations to either one or both of the homopolymer phases. The concentration dependence of sigma can be described well both by the model of Tang and Huang and by a Langmuir-analogous approach. To investigate the relation between sigma and the phase behaviour, cloud-point curves were recorded at 100 °C for some of the ternary systems. The comparison between the phase diagrams and the corresponding values of sigma indicates that an additive reduces sigma effectively precisely when it is added to a homopolymer with which it is only partially compatible, since the driving force for adsorption at the interface is then particularly pronounced. The previously known phenomenon that the value of the interfacial tension can depend on which phase contains the additive at the start of the measurement was examined in detail. It is assumed that the system does not in every case reach thermodynamic equilibrium and that the observed effect is due to the attainment of stationary states. This behaviour can be described with a model that incorporates the viscosity ratio of the homopolymers and the partition coefficient of the copolymer between the homopolymer phases. From solubility parameters, the binary interaction parameter chi(PEO/PPO) = 0.18 was estimated, and with it the theoretical values of sigma between PEO and PPO were calculated according to the models of Roe and of Helfand and Tagami. Comparison with the experimental data for the binary system shows that both approaches yield sigma values of the same order of magnitude as the experimental data, with the approach of Roe proving particularly suitable; the temperature dependence of the interfacial tension, however, is not reproduced correctly by either approach. With the model of Helfand and Tagami, an interfacial thickness of 7.9 Å and the density profile of the interface were calculated. For the copolymers EO92PO56EO92 and S9EO22 (the indices give the number of monomer units), the interfacial excess concentrations, the critical micelle concentration and the area available to an additive molecule at the interface could be determined. A comparison of the different copolymers with respect to their ability to lower sigma effectively shows that, for the triblock copolymers, the arrangement of the blocks plays a subordinate role compared with the composition. With increasing chain length, the effectiveness as a compatibilizer increases for both block copolymers and random copolymers.
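To illustrate the Langmuir-analogous description of the concentration dependence of sigma mentioned above, the sketch below fits a saturating isotherm-type expression, sigma(c) = sigma0 − Δsigma·K·c/(1 + K·c), to synthetic pendant-drop data. Both the exact functional form and the data are assumptions made for illustration; the thesis does not specify the parametrisation used.

```python
# Fit of a Langmuir-analogous expression for interfacial tension vs. additive
# concentration, sigma(c) = sigma0 - d_sigma * K*c / (1 + K*c).
# Functional form and data are illustrative assumptions only.
import numpy as np
from scipy.optimize import curve_fit

def sigma_langmuir(c, sigma0, d_sigma, K):
    return sigma0 - d_sigma * K * c / (1.0 + K * c)

# Synthetic "measurements": concentration in wt%, interfacial tension in mN/m.
c_data = np.array([0.0, 0.05, 0.1, 0.2, 0.5, 1.0, 2.0])
sigma_data = np.array([2.10, 1.78, 1.60, 1.42, 1.25, 1.18, 1.14])

popt, _ = curve_fit(sigma_langmuir, c_data, sigma_data, p0=[2.0, 1.0, 10.0])
sigma0, d_sigma, K = popt
print(f"sigma0 = {sigma0:.2f} mN/m, max reduction = {d_sigma:.2f} mN/m, K = {K:.1f}")
```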
Abstract:
In recent years, sustainable horticulture has been expanding; however, to be successful this practice needs efficient management of soil fertility in order to maintain high productivity and fruit-quality standards. For this purpose, composted organic materials from the agri-food industry and from municipal solid waste have been used as a source to replace chemical fertilizers and increase soil organic matter. To better understand the influence of compost application on soil fertility and plant growth, we carried out a study comparing organic and mineral nitrogen (N) fertilization in micropropagated plants, potted trees and a commercial peach orchard, with these aims: 1. evaluation of tree development, CO2 fixation and carbon partitioning to the different organs of two-year-old potted peach trees; 2. determination of soil N concentration and of the effect of nitrate-N on plant growth and root oxidative stress in micropropagated plants after increasing rates of N application; 3. assessment of soil chemical and biological fertility, tree growth, yield and fruit quality in a commercial orchard. The addition of compost at a high rate was effective in increasing CO2 fixation and in promoting root growth and shoot and fruit biomass. Furthermore, organic fertilizers influenced C partitioning, favouring C accumulation in roots, wood and fruits. The higher CO2 fixation was the result of a larger tree leaf area, rather than of an increase in leaf photosynthetic efficiency, showing a stimulation of plant growth by the application of compost. High rates of compost increased total soil N concentration but were not effective in increasing soil nitrate-N concentration; in contrast, mineral-N applications linearly increased soil nitrate-N, even at the lowest rate tested. Soil nitrate-N concentration positively influenced plant growth at low rates (60-80 mg kg-1), whereas high concentrations showed negative effects. In this trial, the decrease of root growth in response to excessive soil nitrate-N concentration was not preceded by root oxidative stress. Continuous annual applications of compost for 10 years enhanced soil organic-matter content and total soil N concentration. Additionally, a high rate of compost application (10 t ha-1 year-1) enhanced microbial biomass. On the other hand, the different fertilizer managements did not modify tree yield, but they influenced fruit size and the precocity index. The present data support the idea that organic fertilizers can be used successfully as a substitute for mineral fertilizers in fruit-tree nutrient management, since they promote an increase in soil chemical and biological fertility, prevent excessive soil nitrate-N concentration, and promote plant growth and, potentially, C sequestration in the soil.
Abstract:
There are different ways to carry out cluster analysis of categorical data in the literature, and the choice among them is strongly related to the aim of the researcher, if we do not take into account time and economic constraints. The main approaches to clustering are usually distinguished into model-based and distance-based methods: the former assume that objects belonging to the same class are similar in the sense that their observed values come from the same probability distribution, whose parameters are unknown and need to be estimated; the latter evaluate distances among objects using a defined dissimilarity measure and, based on it, allocate units to the closest group. In clustering, one may be interested in classifying similar objects into groups, or in finding observations that come from the same true homogeneous distribution. But do both of these aims lead to the same clustering? And how good are clustering methods designed to fulfil one of these aims in terms of the other? To answer these questions, two approaches, namely a latent class model (a mixture of multinomial distributions) and a partition-around-medoids approach, are evaluated and compared by the Adjusted Rand Index, the Average Silhouette Width and the Pearson-Gamma index in a fairly wide simulation study. The simulation outcomes are plotted in two-dimensional graphs via Multidimensional Scaling; the size of the points is proportional to the number of overlapping points, and different colours are used according to cluster membership.
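To make the comparison set-up concrete, the sketch below simulates categorical data from a two-component multinomial mixture, clusters it with a small partition-around-medoids loop on simple-matching dissimilarities, and scores the result with the Adjusted Rand Index and the Average Silhouette Width. It is a toy illustration under assumed settings, not the simulation design of the study itself.

```python
# Toy version of the comparison: simulate categorical data from a mixture of
# multinomials, cluster with a basic partition-around-medoids (PAM) loop on
# simple-matching dissimilarities, and evaluate with ARI and silhouette width.
# All settings are illustrative assumptions.
import numpy as np
from sklearn.metrics import adjusted_rand_score, silhouette_score

rng = np.random.default_rng(1)
n_per_class, n_vars, n_levels = 100, 6, 3

# Two latent classes with different category probabilities per variable.
p1 = rng.dirichlet(np.ones(n_levels) * 0.5, size=n_vars)
p2 = rng.dirichlet(np.ones(n_levels) * 0.5, size=n_vars)
X = np.array([[rng.choice(n_levels, p=p[j]) for j in range(n_vars)]
              for p in (p1, p2) for _ in range(n_per_class)])
true_labels = np.repeat([0, 1], n_per_class)

# Simple-matching dissimilarity: fraction of variables on which two rows differ.
D = (X[:, None, :] != X[None, :, :]).mean(axis=2)

def pam(D, k=2, n_iter=20):
    medoids = rng.choice(len(D), size=k, replace=False)
    for _ in range(n_iter):
        labels = D[:, medoids].argmin(axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members) > 0:
                within = D[np.ix_(members, members)].sum(axis=1)
                new_medoids[c] = members[within.argmin()]
        if np.array_equal(np.sort(new_medoids), np.sort(medoids)):
            break
        medoids = new_medoids
    return D[:, medoids].argmin(axis=1)

labels = pam(D)
print("ARI:", adjusted_rand_score(true_labels, labels))
print("ASW:", silhouette_score(D, labels, metric="precomputed"))
```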