967 results for Multiple-trait Evolution
Abstract:
Ideas about the evolution of imperfect mimicry are reviewed. Their relevance to the colour patterns of hoverflies (Diptera, Syrphidae) is discussed in detail. Most if not all of the hoverflies labelled as mimetic actually are mimics. The apparently poor nature of their resemblance does not prevent them from obtaining at least some protection from suitably experienced birds. Mimicry is a dominant theme of this very large family of Diptera, with at least a quarter of all species in Europe being mimetic. Hoverfly mimics fall into three major groups according to their models, involving bumblebees, honeybees and social wasps. There are striking differences in the general levels of mimetic fidelity and relative abundances of the three groups, with accurate mimicry, low abundance and polymorphism characterizing the bumblebee mimics: more than half of all the species of bumblebee mimics are polymorphic. Mimics of social wasps tend to be poor mimics and have high relative abundance, and polymorphism is completely absent among them. Bumblebee models fall into a small number of Muellerian mimicry rings which are very different between the Palaearctic and Nearctic regions. Social wasps and associated models form one large Muellerian complex. Together with honeybees, these complexes probably form real clusters of forms as perceived by many birds. All three groups of syrphid mimics contain both good and poor mimics; some mimics are remarkably accurate, with close morphological and behavioural resemblance. At least some apparently 'poor' mimetic resemblances may be much closer in birds' perception than we imagine, and more work needs to be done on this. Bumblebees are the least noxious and wasps the most noxious of the three main model groups. The basis of noxiousness differs: bumblebees are classified as non-food, whereas honeybees and wasps are nasty-tasting and (rarely) stinging.
The distribution of mimicry is exactly what would be expected from this ordering, with polymorphic and accurate forms being a key feature of mimics of the least noxious models, while highly noxious models have poor-quality mimicry. Even if the high abundance of many syrphid mimics relative to their models is a recent artefact of man-made environmental change, this does not preclude these species from being mimics. It seems unlikely that bird predation actually controls the populations of adult syrphids. Being rare relative to a model may have promoted or accelerated the evolution of perfect mimicry: theoretically this might account for the pattern of rare good mimics and abundant poor ones, but the idea is intrinsically unlikely. Many mimics seem to have hour-to-hour abundances related to those of their models, presumably as a result of behavioural convergence. We need to know much more about the psychology of birds as predators. There are at least four processes that need elucidating: (a) learning about the noxiousness of models; (b) the erasing of that learning through contact with mimics (extinction, or learned forgetting); (c) forgetting; (d) deliberate risk-taking and the physiological states that promote it. Johnstone's (2002) model of the stabilization of imperfect mimicry by kin selection is unlikely to account for the colour patterns of hoverflies. Sherratt's (2002) model of the influence of multiple models potentially accounts for all the patterns of hoverfly mimicry, and is the most promising avenue for testing.
Abstract:
A recent focus on contemporary evolution and the connections between communities has sought to more closely integrate the fields of ecology and evolutionary biology. Studies of coevolutionary dynamics, life history evolution, and rapid local adaptation demonstrate that ecological circumstances can dictate evolutionary trajectories. Thus, variation in species identity, trait distributions, and genetic composition may be maintained among ecologically divergent habitats. New theories and hypotheses (e.g., metacommunity theory and the Monopolization hypothesis) have been developed to understand better the processes occurring in spatially structured environments and how the movement of individuals among habitats contributes to ecology and evolution at broader scales. As few empirical studies of these theories exist, this work seeks to further test these concepts. Spatial and temporal dispersal are the mechanisms that connect habitats to one another. Both processes allow organisms to leave conditions that are suboptimal or unfavorable, and enable colonization and invasion, species range expansion, and gene flow among populations. Freshwater zooplankton are aquatic crustaceans that typically develop resting stages as part of their life cycle. Their dormant propagules allow organisms to disperse both temporally and among habitats. Additionally, because a number of species are cyclically parthenogenetic, they make excellent model organisms for studying evolutionary questions in a controlled environment. Here, I use freshwater zooplankton communities as model systems to explore the mechanisms and consequences of dispersal and to test these nascent theories on the influence of spatial structure in natural systems. In Chapter one, I use field experiments and mathematical models to determine the range of adult zooplankton dispersal over land and what vectors are moving zooplankton. Chapter two focuses on prolonged dormancy of one aquatic zooplankter, Daphnia pulex. 
Using statistical models with field and mesocosm experiments, I show that variation in Daphnia dormant egg hatching is substantial among populations in nature, and some of that variation can be attributed to genetic differences among the populations. Chapters three and four explore the consequences of dispersal at multiple levels of biological organization. Chapter three seeks to understand the population level consequences of dispersal over evolutionary time on current patterns of population genetic differentiation. Nearby populations of D. pulex often exhibit high population genetic differentiation characteristic of very low dispersal. I explore two alternative hypotheses that seek to explain this pattern. Finally, chapter four is a case study of how dispersal has influenced patterns of variation at the community, trait and genetic levels of biodiversity in a lake metacommunity.
Abstract:
Terrestrial planets produce crusts as they differentiate. The Earth’s bi-modal crust, with a high-standing granitic continental crust and a low-standing basaltic oceanic crust, is unique in our solar system and links the evolution of the interior and exterior of this planet. Here I present geochemical observations to constrain processes accompanying crustal formation and evolution. My approach includes geochemical analyses, quantitative modeling, and experimental studies. The Archean crustal evolution project represents my perspective on when Earth’s continental crust began forming. In this project, I utilized critical element ratios in sedimentary records to track the evolution of the MgO content in the upper continental crust as a function of time. The early Archean subaerial crust had >11 wt.% MgO, whereas by the end of the Archean its composition had evolved to about 4 wt.% MgO, suggesting a transition of the upper crust from a basalt-like to a more granite-like bulk composition. Driving this fundamental change of the upper crustal composition is the widespread operation of subduction processes, suggesting the onset of global plate tectonics at ~3 Ga (Abstract figure). Three of the chapters in this dissertation leverage Eu anomalies to track the recycling of crustal materials back into the mantle; the Eu anomaly is a sensitive measure of the element's behavior relative to its neighboring lanthanoids (Sm and Gd) during crustal differentiation. My compilation of Sm-Eu-Gd data for the continental crust shows that the average crust has a net negative Eu anomaly. This result requires recycling of Eu-enriched lower continental crust to the mantle. Mass balance calculations require that about three times the mass of the modern continental crust was returned to the mantle over Earth history, possibly via density-driven recycling.
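For reference, the Eu anomaly used in these chapters is conventionally quantified as Eu/Eu* = Eu_N / sqrt(Sm_N × Gd_N), with chondrite-normalized concentrations; values below 1 indicate a net negative anomaly. A minimal sketch follows — the chondrite normalizing values and the example crustal concentrations are illustrative literature-style numbers, not data from this dissertation:

```python
import math

# CI-chondrite normalizing values in ppm (assumed, after commonly used
# compilations such as McDonough & Sun 1995)
CHONDRITE = {"Sm": 0.148, "Eu": 0.0563, "Gd": 0.199}

def eu_anomaly(sm_ppm, eu_ppm, gd_ppm):
    """Eu/Eu* = Eu_N / sqrt(Sm_N * Gd_N); < 1 means a negative Eu anomaly."""
    sm_n = sm_ppm / CHONDRITE["Sm"]
    eu_n = eu_ppm / CHONDRITE["Eu"]
    gd_n = gd_ppm / CHONDRITE["Gd"]
    return eu_n / math.sqrt(sm_n * gd_n)

# Hypothetical upper-crust-like concentrations (ppm): yields Eu/Eu* < 1,
# i.e. the kind of net negative anomaly described above.
print(round(eu_anomaly(4.7, 1.0, 4.0), 2))
```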
High-precision measurements of Eu/Eu* in selected primitive mid-ocean ridge basalt (MORB) glasses from global MORs, combined with numerical modeling, suggest that the recycled lower crustal materials are not found within the MORB source and may have at least partially sunk into the lower mantle, where they can be sampled by hot spot volcanoes. The Lesser Antilles Li isotope project provides insights into the Li systematics of this young island arc, a representative section of proto-continental crust. Martinique Island lavas, to my knowledge, represent the only clear case in which crustal Li is recycled back into the mantle source, as documented by the isotopically light Li in the Lesser Antilles sediments that feed into the fore-arc subduction trench. As a corollary, the mantle-like Li signal in global arc lavas is likely the result of broadly similar Li isotopic compositions between the upper mantle and bulk subducting sediments in most arcs. My PhD project on the Li diffusion mechanism in zircon is being carried out in extensive collaboration with multiple institutes and employs analytical, experimental and modeling studies. This ongoing project finds that REE and Y play an important role in controlling Li diffusion in natural zircons, with Li partially coupling to REE and Y to maintain charge balance. Access to state-of-the-art instrumentation presented critical opportunities to identify the mechanisms that cause elemental fractionation during laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) analysis. My work here elucidates the elemental fractionation associated with plasma plume condensation during laser ablation and with particle-ion conversion in the ICP.
Abstract:
Applications are subject to a continuous evolution process with a profound impact on their underlying data model, hence requiring frequent updates to the applications' class structure and to the database structure as well. This twofold problem, schema evolution and instance adaptation, usually known as database evolution, is addressed in this thesis. Additionally, we address concurrency and error recovery problems with a novel meta-model and its aspect-oriented implementation. Modern object-oriented databases provide features that help programmers deal with object persistence, as well as with related problems such as database evolution, concurrency and error handling. In most systems there are transparent mechanisms to address these problems; nonetheless, the database evolution problem still requires some human intervention, which consumes much of programmers' and database administrators' work effort. Earlier research has demonstrated that aspect-oriented programming (AOP) techniques enable the development of flexible and pluggable systems. In these earlier works, the schema evolution and instance adaptation problems were addressed as database management concerns. However, none of this research focused on orthogonal persistent systems. We argue that AOP techniques are well suited to address these problems in orthogonal persistent systems. Regarding concurrency and error recovery, earlier research showed that only syntactic obliviousness between the base program and aspects is possible. Our meta-model and framework follow an aspect-oriented approach focused on the object-oriented orthogonal persistent context. The proposed meta-model is characterized by its simplicity, in order to achieve efficient and transparent database evolution mechanisms. It supports multiple versions of a class structure by applying a class versioning strategy, thus enabling bidirectional application compatibility among versions of each class structure.
That is to say, the database structure can be updated while earlier applications continue to work, as do later applications that know only the updated class structure. The specific characteristics of orthogonal persistent systems, as well as a metadata enrichment strategy within the application's source code, complete the inception of the meta-model and have motivated our research work. To test the feasibility of the approach, a prototype was developed. Our prototype is a framework that mediates the interaction between applications and the database, providing them with orthogonal persistence mechanisms. These mechanisms are introduced into applications as an aspect, in the aspect-oriented sense. Objects do not require extending any superclass or implementing an interface, nor do they need to carry a particular annotation. Parametric type classes are also correctly handled by our framework. However, classes that belong to the programming environment must not be handled as versionable, due to restrictions imposed by the Java Virtual Machine. Regarding concurrency support, the framework provides applications with a multithreaded environment that supports database transactions and error recovery. The framework keeps applications oblivious to the database evolution problem, as well as to persistence. Programmers can update the applications' class structure because the framework will produce a new version of it at the database metadata layer. Using our XML-based pointcut/advice constructs, the framework's instance adaptation mechanism can be extended, keeping the framework oblivious to this problem as well. The potential development gains provided by the prototype were benchmarked. In our case study, the results confirm that the transparency of these mechanisms has positive repercussions on programmer productivity, simplifying the entire evolution process at both the application and database levels.
The meta-model itself was also benchmarked in terms of complexity and agility. Compared with other meta-models, it requires fewer meta-object modifications in each schema evolution step. Other types of tests were carried out to validate the robustness of the prototype and meta-model. For these tests, we used a small OO7 database, chosen for its data model complexity. Since the developed prototype offers features not observed in other known systems, direct performance comparisons were not possible; however, the developed benchmark is now available for future performance comparisons with equivalent systems. To test our approach in a real-world scenario, we developed a proof-of-concept application. This application was developed without any persistence mechanisms; using our framework and minor changes to the application's source code, we then added these mechanisms. Furthermore, we tested the application in a schema evolution scenario. This real-world experience using our framework showed that applications remain oblivious to persistence and database evolution. In this case study, our framework proved to be a useful tool for programmers and database administrators. Performance issues and the single Java Virtual Machine concurrency model are the major limitations found in the framework.
Abstract:
Integration of multiple herbicide-resistance genes (trait stacking) into crop plants allows over-the-top application of herbicides that are otherwise fatal to crops. The US has just approved Bollgard II® XtendFlex™ cotton, which stacks dicamba, glyphosate and glufosinate resistance traits. This technology is expected to slow the pace of glyphosate resistance evolution. In addition, over-the-top application of two more herbicides may help to manage hard-to-kill weeds in cotton such as flaxleaf fleabane and milk thistle. However, some issues need to be considered prior to the adoption of this technology. Wherever herbicide-tolerant technology is adopted, volunteer crops can emerge as a weed problem, as can herbicide-resistant weeds. For cotton, seed movement is the most likely way for resistance traits to spread. Management of multiple-stack volunteers may add complexity to volunteer management in cotton fields and along roadsides. This paper evaluates the pros and cons of trait-stacking technology by analysing the available literature from other crop-growing regions across the world. The efficacy of dicamba and glufosinate on common weeds of the Australian cotton system, synergy and antagonism in herbicide mixtures, drift hazards, and the evolution of herbicide resistance to glyphosate, glufosinate and dicamba were analysed based on the available literature.
Abstract:
The rumen is home to a diverse population of microorganisms encompassing all three domains of life: Bacteria, Archaea, and Eukarya. Viruses have also been documented to be present in large numbers; however, little is currently known about their role in the dynamics of the rumen ecosystem. This research aimed to use a comparative genomics approach in order to assess the potential evolutionary mechanisms at work in the rumen environment. We proposed to do this by first assessing the diversity and potential for horizontal gene transfer (HGT) of multiple strains of the cellulolytic rumen bacterium, Ruminococcus flavefaciens, and then by conducting a survey of rumen viral metagenome (virome) and subsequent comparison of the virome and microbiome sequences to ascertain if there was genetic information shared between these populations. We hypothesize that the bacteriophages play an integral role in the community dynamics of the rumen, as well as driving the evolution of the rumen microbiome through HGT. In our analysis of the Ruminococcus flavefaciens genomes, there were several mobile elements and clustered regularly interspaced short palindromic repeat (CRISPR) sequences detected, both of which indicate interactions with bacteriophages. The rumen virome sequences revealed a great deal of diversity in the viral populations. Additionally, the microbial and viral populations appeared to be closely associated; the dominant viral types were those that infect the dominant microbial phyla. The correlation between the distribution of taxa in the microbiome and virome sequences as well as the presence of CRISPR loci in the R. flavefaciens genomes, suggested that there is a “kill-the-winner” community dynamic between the viral and microbial populations in the rumen. Additionally, upon comparison of the rumen microbiome and rumen virome sequences, we found that there are many sequence similarities between these populations indicating a potential for phage-mediated HGT. 
These results suggest that the phages represent a gene pool in the rumen that could potentially contain genes that are important for adaptation and survival in the rumen environment, as well as serving as a molecular ‘fingerprint’ of the rumen ecosystem.
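The "kill-the-winner" dynamic invoked above can be caricatured with a minimal Lotka-Volterra-style host-phage sketch; this is a generic textbook model with made-up parameters, not an analysis from this study. An abundant "winner" host fuels a phage bloom that subsequently crashes that host's population:

```python
def simulate(host0, phage0, steps=1000, dt=0.01):
    """Euler-integrate one host-phage pair (classic Lotka-Volterra form).

    dH/dt = r*H - a*H*P   (host growth minus phage predation)
    dP/dt = b*H*P - m*P   (phage production minus decay)
    All parameters are illustrative, not fitted to rumen data.
    """
    r, a, b, m = 1.0, 0.05, 0.02, 0.5
    h, p = host0, phage0
    hs, ps = [h], [p]
    for _ in range(steps):
        dh = r * h - a * h * p
        dp = b * h * p - m * p
        h, p = h + dt * dh, p + dt * dp
        hs.append(h)
        ps.append(p)
    return hs, ps

# Start with an abundant "winner" host and a rare specialist phage: the phage
# blooms, then the initially dominant host crashes far below its start.
hs, ps = simulate(100.0, 1.0)
print(f"peak phage: {max(ps):.0f}, minimum host: {min(hs):.3f}")
```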
Abstract:
Chemotaxis, the phenomenon in which cells move in response to extracellular chemical gradients, plays a prominent role in the mammalian immune response. During this process, a number of chemical signals, called chemoattractants, are produced at or proximal to sites of infection and diffuse into the surrounding tissue. Immune cells sense these chemoattractants and move in the direction where their concentration is greatest, thereby locating the source of attractants and their associated targets. Leading the assault against new infections is a specialized class of leukocytes (white blood cells) known as neutrophils, which normally circulate in the bloodstream. Upon activation, these cells emigrate out of the vasculature and navigate through interstitial tissues toward target sites. There they phagocytose bacteria and release a number of proteases and reactive oxygen intermediates with antimicrobial activity. Neutrophils recruited by infected tissue in vivo are likely confronted by complex chemical environments consisting of a number of different chemoattractant species. These signals may include end target chemicals produced in the vicinity of the infectious agents, and endogenous chemicals released by local host tissues during the inflammatory response. To successfully locate their pathogenic targets within these chemically diverse and heterogeneous settings, activated neutrophils must be capable of distinguishing between the different signals and employing some sort of logic to prioritize among them. This ability to simultaneously process and interpret multiple signals is thought to be essential for efficient navigation of the cells to target areas. In particular, aberrant cell signaling and defects in this functionality are known to contribute to medical conditions such as chronic inflammation, asthma and rheumatoid arthritis.
To elucidate the biomolecular mechanisms underlying the neutrophil response to different chemoattractants, a number of efforts have been made toward understanding how cells respond to different combinations of chemicals. Most notably, recent investigations have shown that in the presence of both end target and endogenous chemoattractant variants, the cells migrate preferentially toward the former type, even in very low relative concentrations of the latter. Interestingly, however, when the cells are exposed to two different endogenous chemical species, they exhibit a combinatorial response in which distant sources are favored over proximal sources. Some additional results also suggest that cells located between two endogenous chemoattractant sources will respond to the vectorial sum of the combined gradients. In the long run, this peculiar behavior could result in oscillatory cell trajectories between the two sources. To further explore the significance of these and other observations, particularly in the context of physiological conditions, we introduce in this work a simplified phenomenological model of neutrophil chemotaxis. In particular, this model incorporates a trait commonly known as directional persistence - the tendency for migrating neutrophils to continue moving in the same direction (much like momentum) - while also accounting for the dose-response characteristics of cells to different chemical species. Simulations based on this model suggest that the efficiency of cell migration in complex chemical environments depends significantly on the degree of directional persistence. In particular, with appropriate values for this parameter, cells can improve their odds of locating end targets by drifting through a network of attractant sources in a loosely-guided fashion. This corroborates the prediction that neutrophils randomly migrate from one chemoattractant source to the next while searching for their end targets. 
These cells may thus use persistence as a general mechanism to avoid being trapped near sources of endogenous chemoattractants - the mathematical analogue of local maxima in a global optimization problem. Moreover, this general foraging strategy may apply to other biological processes involving multiple signals and long-range navigation.
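The role of directional persistence described above can be sketched with a simple biased-random-walk simulation; the functional forms and parameters here are assumptions for illustration, not the dissertation's actual model. Each step, the cell blends its previous heading with the noisy local gradient direction, and a persistence parameter sets the blend:

```python
import math
import random

def chemotax(persistence, steps=500, seed=0):
    """Distance from a target at the origin after a persistent biased walk.

    persistence in [0, 1]: 0 = heading fully reset toward the gradient each
    step, 1 = pure straight-line motion that ignores the gradient.
    """
    rng = random.Random(seed)
    x, y = 50.0, 0.0                        # start 50 units from the target
    theta = rng.uniform(-math.pi, math.pi)  # random initial heading
    for _ in range(steps):
        grad = math.atan2(-y, -x)           # direction of increasing attractant
        noisy = grad + rng.gauss(0.0, 0.5)  # imperfect gradient sensing
        # wrap the turn angle into (-pi, pi], then turn only partway
        turn = (noisy - theta + math.pi) % (2 * math.pi) - math.pi
        theta += (1.0 - persistence) * turn
        x += math.cos(theta)
        y += math.sin(theta)
    return math.hypot(x, y)

# Moderate persistence homes in on the target; full persistence overshoots it.
print(chemotax(0.3) < chemotax(1.0))
```

In this toy setting, intermediate persistence values let the walker escape local attractant sources while still drifting toward the global target, which is the trade-off the abstract describes.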
Abstract:
The technological world, and wireless networks in particular, are evolving rapidly. Electronic devices have gained more capabilities and resources over the years, making users increasingly demanding. The need to stay connected to the global world has led to the deployment of wireless access points across cities, providing internet access so that people can keep in constant interaction with the world. Vehicular networks arose to support safety-related applications and to improve traffic flow on the roads; nowadays, however, they are also used to provide entertainment to the users travelling in vehicles. The best way to increase the utilization of vehicular networks is to give users what they want: a constant connection to the internet. Despite all the advances in vehicular networks, several issues remain to be solved. Dedicated infrastructure for vehicular networks is not yet widespread, which leads to the need to use the available Wi-Fi hotspots and cellular networks as access networks. A mobility protocol is needed to manage the mobility process and to keep each user's connection and session active. Taking into account the large number of access points within range of a vehicle, for example in a city, it is beneficial to take advantage of all available resources in order to improve the whole vehicular network, both for users and for operators. The concept of multihoming allows all available resources to be exploited through multiple simultaneous connections.
The objectives of this dissertation are: the integration of a mobility protocol, the Network-Proxy Mobile IPv6 protocol, with a per-packet host-multihoming solution in order to increase network performance by using more resources simultaneously; the support of multi-hop communications in both IPv6 and IPv4; the capability of providing internet access to the users of the network; and the integration of the developed protocol in the vehicular environment, with the WAVE, Wi-Fi and cellular technologies. The tests performed focused on the multihoming features implemented in this dissertation and on IPv4 network access for regular users. The results obtained show that adding multihoming to the mobility protocol improves network performance and provides better resource management. The results also show the correct operation of the developed protocol in a vehicular environment.
Abstract:
This research explores the business model (BM) evolution process of entrepreneurial companies and investigates the relationship between BM evolution and firm performance. Recently, it has been increasingly recognised that the innovative design (and re-design) of BMs is crucial to the performance of entrepreneurial firms, as the BM can be associated with superior value creation and competitive advantage. However, there has been limited theoretical and empirical evidence on the micro-mechanisms behind the BM evolution process and the entrepreneurial outcomes of BM evolution. This research seeks to fill this gap by opening up the ‘black box’ of the BM evolution process, exploring the micro-patterns that facilitate the continuous shaping, changing, and renewing of BMs and examining how BM evolution creates and captures value in a dynamic manner. Drawing together the BM and strategic entrepreneurship literatures, this research seeks to understand: (1) how and why companies introduce BM innovations and imitations; (2) how BM innovations and imitations interplay as patterns in the BM evolution process; and (3) how BM evolution patterns affect firm performance. This research adopts a longitudinal multiple case study design that focuses on the emerging phenomenon of BM evolution. Twelve entrepreneurial firms in the Chinese Online Group Buying (OGB) industry were selected for their continuous and intensive development of BMs and their varying success rates in this highly competitive market. Two rounds of data collection were carried out between 2013 and 2014, generating 31 interviews with founders/co-founders and, in total, 5,034 pages of data. Following a three-stage research framework, the data analysis begins by mapping the BM evolution process of the twelve companies and classifying the changes in the BMs into innovations and imitations.
The second stage focuses on the BM level, addressing BM evolution as a dynamic process by exploring how BM innovations and imitations unfold and interplay over time. The final stage focuses on the firm level, providing theoretical explanations of the effects of BM evolution patterns on firm performance. This research provides new insights into the nature of BM evolution by elaborating on the missing link between BM dynamics and firm performance. The findings identify four patterns of BM evolution that have different effects on a firm's short- and long-term performance. This research contributes to the BM literature by presenting what the BM evolution process actually looks like. Moreover, it takes a step towards a process theory of the interplay between BM innovations and imitations, which addresses the role of companies' actions and, more importantly, their reactions to competitors. Insights are also given into how entrepreneurial companies achieve and sustain value creation and capture by successfully combining BM evolution patterns. Finally, the findings on BM evolution contribute to the strategic entrepreneurship literature by increasing our understanding of how companies compete in a more dynamic and complex environment. It reveals that the achievement of superior firm performance is more than a simple question of whether to innovate or imitate; rather, it is a matter of integrating innovation and imitation strategies over time. This study concludes with a discussion of the findings and their implications for theory and practice.
Abstract:
The quality and speed of genome sequencing have advanced in step with the stretching of technology boundaries. This advancement has so far been divided into three generations. First-generation methods enabled sequencing of clonal DNA populations; second-generation methods massively increased throughput by parallelizing many reactions; and third-generation methods allow direct sequencing of single DNA molecules. The first techniques to sequence DNA were not developed until the mid-1970s, when two distinct sequencing methods emerged almost simultaneously, one by Allan Maxam and Walter Gilbert, and the other by Frederick Sanger. The first is a chemical method that cleaves DNA at specific points, while the second uses chain-terminating ddNTPs while synthesizing a copy of the DNA template. Both methods generate fragments of varying lengths that are then separated by electrophoresis. Until the 1990s, DNA sequencing remained relatively expensive and was seen as a long process; the use of radiolabeled nucleotides compounded the problem through safety concerns and prevented automation. Advancements within the first generation include the replacement of radioactive labels with fluorescently labeled ddNTPs and cycle sequencing with thermostable DNA polymerases, which allowed automation and signal amplification, making the process cheaper, safer and faster. Another method is pyrosequencing, which is based on the "sequencing by synthesis" principle. It differs from Sanger sequencing in that it relies on the detection of pyrophosphate release upon nucleotide incorporation. By the end of the last millennium, parallelization of this method started Next Generation Sequencing (NGS), with 454 as the first of many methods that can process multiple samples, known as second-generation sequencing. Here electrophoresis was completely eliminated.
One method that is sometimes used is SOLiD, based on sequencing by ligation of fluorescently dye-labeled di-base probes that compete to ligate to the sequencing primer. Specificity of the di-base probe is achieved by interrogating every 1st and 2nd base in each ligation reaction. The widely used Solexa/Illumina method uses modified dNTPs containing so-called "reversible terminators", which block further polymerization. Each terminator also carries a fluorescent label that can be detected by a camera. A further step towards the third generation came from Ion Torrent, which developed a sequencing-by-synthesis technique whose main feature is the detection of the hydrogen ions released during base incorporation. The third generation proper draws on nanotechnology advancements to process single DNA molecules, as in PacBio's real-time synthesis sequencing system; finally, nanopore sequencing, projected since 1995, uses nanosensors forming channels, obtained from bacteria, that conduct the sample to a sensor able to detect each nucleotide residue in the DNA strand. Technology has advanced so quickly that one has to wonder: how do we imagine the next generation?
Resumo:
The Li-ion rechargeable battery (LIB) is widely used as an energy storage device, but has significant limitations in cycle life and safety. During initial charging, decomposition of the ethylene carbonate (EC)-based electrolytes of the LIB leads to the formation of a passivating layer on the anode known as the solid electrolyte interphase (SEI). The formation of the SEI has a great impact on the cycle life and safety of LIBs, yet mechanistic aspects of SEI formation are not fully understood. In this dissertation, two surface science model systems were created under ultra-high vacuum (UHV) to probe the very initial stage of SEI formation at model carbon anode surfaces of the LIB. The first, Model System I, is a lithium-carbonate electrolyte/graphite C(0001) system. I developed a temperature programmed desorption/temperature programmed reaction spectroscopy (TPD/TPRS) instrument as part of my dissertation to study Model System I in quantitative detail. The binding strengths and film growth mechanisms of key electrolyte molecules on model carbon anode surfaces with varying extents of lithiation were measured by TPD. TPRS was further used to track the gases evolved from different reduction products in early-stage SEI formation. The branching ratio of multiple reaction pathways was quantified for the first time and determined to be 70% organolithium products vs. 30% inorganic lithium product. The obtained branching ratio provides important information on the distribution of lithium salts that form at the very onset of SEI formation. One of the key reduction products formed from EC in early-stage SEI formation is lithium ethylene dicarbonate (LEDC). Despite intensive studies, the LEDC structure in either the bulk or thin-film (SEI) form is unknown. To enable structural study, pure LEDC was synthesized and subjected to synchrotron X-ray diffraction measurements (bulk material) and STM measurements (deposited films).
To enable studies of LEDC thin films, Model System II, a lithium ethylene dicarbonate (LEDC)-dimethylformamide (DMF)/Ag(111) system, was created by a solution microaerosol deposition technique. The produced films were then imaged by ultra-high vacuum scanning tunneling microscopy (UHV-STM). As a control, the DMF/Ag(111) system was first prepared and its complex 2D phase behavior mapped out as a function of coverage. The evolution of three distinct monolayer phases of DMF was observed with increasing surface pressure: a 2D gas phase, an ordered DMF phase, and an ordered Ag(DMF)2 complex phase. The addition of LEDC to this mixture seeded the nucleation of the ordered DMF islands at lower surface pressures (DMF coverages), an effect interpreted through nucleation theory. A structural model of the nucleation seed was proposed, and the implication of ionic SEI products, such as LEDC, in early-stage SEI formation was discussed.
Resumo:
The fossiliferous deposits in the coastal plain of Rio Grande do Sul State, southern Brazil, have been known since the late 19th century; however, their biostratigraphic and chronostratigraphic context is still poorly understood. The present work describes the results of electron spin resonance (ESR) dating of eleven fossil teeth of three extinct taxa (Toxodon platensis, Stegomastodon waringi and Hippidion principale) collected along Chui Creek and on the nearshore continental shelf, in an attempt to assess more accurately the ages of the fossils and their deposits. The method is based on the analysis of paramagnetic defects in biominerals, produced by ionizing radiation emitted by radioactive elements present in the surrounding sediment and by cosmic rays. Three fossils from Chui Creek, collected from the same stratigraphic horizon, exhibit ages between (42 ± 3) ka and (34 ± 7) ka, using the Combination Uptake model for radioisotope uptake, while an incisor of Toxodon platensis collected from a lower stratigraphic level is much older. Fossils from the shelf have ages ranging from (7 ± 1) × 10^5 years to (18 ± 3) ka, indicating the mixing of fossils of different epochs. The submarine fossiliferous deposits seem to result from multiple reworking and redeposition cycles driven by the sea-level changes of the Quaternary glacial-interglacial cycles. The ages indicate that the fossiliferous outcrops at Chui Creek are much younger than previously thought, and that the fossiliferous deposits of the continental shelf encompass Ensenadan to late Lujanian ages (middle to late Pleistocene). (C) 2009 Elsevier Ltd and INQUA. All rights reserved.
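In essence, an ESR age is the accumulated (equivalent) radiation dose recorded by the paramagnetic defects divided by the annual dose rate delivered by the surrounding sediment and cosmic rays. A minimal sketch of that arithmetic, with purely illustrative numbers; real uptake models such as the Combination Uptake model used here treat the dose rate as time-dependent:

```python
def esr_age_ka(equivalent_dose_gy, dose_rate_gy_per_ka):
    """Basic ESR age: accumulated dose divided by the annual dose rate.

    equivalent_dose_gy: total dose (Gy) recorded by paramagnetic defects
    dose_rate_gy_per_ka: environmental dose rate (Gy/ka) from sediment
    radioisotopes plus cosmic rays. Illustrative sketch only; actual
    models account for time-dependent uranium uptake by the tooth.
    """
    return equivalent_dose_gy / dose_rate_gy_per_ka

# Illustrative values: a 100 Gy equivalent dose at 2.5 Gy/ka gives
# an age of 40 ka, comparable in magnitude to the Chui Creek results.
age = esr_age_ka(100.0, 2.5)  # -> 40.0 (ka)
```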
Resumo:
Part 5: Service Orientation in Collaborative Networks
Resumo:
Pitch Estimation, also known as Fundamental Frequency (F0) estimation, has been a popular research topic for many years and is still actively investigated. Its goal is to find the pitch, or fundamental frequency, of a digital recording of speech or musical notes. It plays an important role because it is the key to identifying which notes are being played and at what time. Pitch Estimation for real instruments is a very hard task: each instrument has its own physical characteristics, which are reflected in different spectral characteristics; furthermore, recording conditions vary from studio to studio and background noise must be considered. This dissertation presents a novel approach to Pitch Estimation using Cartesian Genetic Programming (CGP). We take advantage of evolutionary algorithms, in particular CGP, to explore and evolve complex mathematical functions that act as classifiers. These classifiers are used to identify the pitches of piano notes in an audio signal. To encode the problem, we built a highly flexible CGP Toolbox, generic enough to encode different kinds of programs. The encoded evolutionary algorithm is the (1 + λ) strategy, and the value of λ can be chosen. The toolbox is very simple to use: settings such as the mutation probability and the number of runs and generations are configurable. The Cartesian representation of CGP can take multiple forms and is able to encode function parameters. The toolbox can handle different types of fitness functions (minimization and maximization of f(x)) and has a useful system of callbacks. We trained 61 classifiers corresponding to 61 piano notes. A training set of audio signals was used for each classifier: half were signals with the same pitch as the classifier (true positive signals) and the other half were signals with different pitches (true negative signals). F-measure was used as the fitness function.
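The (1 + λ) evolutionary strategy keeps a single parent and, in each generation, evaluates λ mutated offspring, promoting the best offspring whenever its fitness is at least as good as the parent's. A minimal generic sketch of that loop; the interfaces here are placeholders, not the CGP Toolbox API described in the text:

```python
import random

def one_plus_lambda(init, mutate, fitness, lam=4, generations=100):
    """Generic (1 + lambda) evolutionary strategy (maximization).

    init: starting genotype; mutate: returns a mutated copy;
    fitness: maps a genotype to a score. Placeholder interfaces,
    not the actual CGP Toolbox implementation.
    """
    parent, parent_fit = init, fitness(init)
    for _ in range(generations):
        offspring = [mutate(parent) for _ in range(lam)]
        scores = [fitness(o) for o in offspring]
        best_fit = max(scores)
        # Ties favour the offspring: accepting equal fitness allows
        # neutral drift, a common convention in CGP.
        if best_fit >= parent_fit:
            parent = offspring[scores.index(best_fit)]
            parent_fit = best_fit
    return parent, parent_fit

# Toy usage: evolve a real number toward the maximum of -(x - 3)^2.
random.seed(0)  # for a reproducible run
best, fit = one_plus_lambda(
    init=0.0,
    mutate=lambda x: x + random.uniform(-0.5, 0.5),
    fitness=lambda x: -(x - 3.0) ** 2,
)
# best approaches 3.0 and fit approaches 0.0
```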
Signals with the same pitch as the classifier that were correctly identified count as true positives; signals with the same pitch that were not identified count as false negatives. Signals with a different pitch that were not identified count as true negatives; signals with a different pitch that were wrongly identified count as false positives. Our first approach was to evolve classifiers for identifying artificial signals created by mathematical functions: sine, sawtooth and square waves. Our function set is basically composed of filtering operations on vectors and arithmetic operations with constants and vectors. All the classifiers correctly identified the true positive signals and rejected the true negative signals. We then moved to real audio recordings. For testing the classifiers, we picked audio signals different from the ones used during the training phase. For a first approach, the obtained results were very promising, but could be improved. We made slight changes to our approach and the number of false positives was reduced by 33% compared with the first approach. We then applied the evolved classifiers to polyphonic audio signals, and the results indicate that our approach is a good starting point for addressing the problem of Pitch Estimation.
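The counts defined above combine into the F-measure used as the fitness function: the harmonic mean of precision, TP / (TP + FP), and recall, TP / (TP + FN). A minimal sketch of the standard formula, not the dissertation's actual code:

```python
def f_measure(tp, fp, fn):
    """F1 score: harmonic mean of precision and recall.

    tp: same-pitch signals the classifier accepted
    fp: different-pitch signals it wrongly accepted
    fn: same-pitch signals it wrongly rejected
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# A classifier that accepts 9 of 10 same-pitch signals and wrongly
# accepts 1 different-pitch signal: precision 0.9, recall 0.9, F1 ≈ 0.9.
score = f_measure(tp=9, fp=1, fn=1)
```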