932 results for Markov chains. Convergence. Evolutionary Strategy. Large Deviations


Relevance: 30.00%

Abstract:

Flowers of Annonaceae are characterized by fleshy petals, many stamens with hard connective shields, and numerous carpels with sessile stigmas often covered by sticky secretions. In many representatives the petals form a closed pollination chamber during anthesis. Protogynous dichogamy with strong scent emissions, especially during the pistillate stage, is a character of nearly all species; scent emission can be enhanced by thermogenesis. The prevailing reproductive system in the family seems to be self-compatibility. The basal genus Anaxagorea, besides exhibiting several ancestral morphological characters, also has many characters that reappear in other genera. Strong fruit-like scents consisting of fruit esters and alcohols mainly attract small fruit beetles (genus Colopterus, Nitidulidae) as pollinators, as well as several other beetles (Curculionidae, Chrysomelidae) and fruit flies (Drosophilidae), which themselves gnaw on the thick petals or whose larvae are petal or ovule predators. The flowers and the thick petals are thus a floral brood substrate for the visitors, and the thick petals of Anaxagorea have to be interpreted as an antipredator structure. Another function of the closed thick petals is the production of heat from accumulated starch, which enhances scent emission and provides a warm shelter for the attracted beetles. Insight into the floral characters and floral ecology of Anaxagorea, the sister group of the rest of the Annonaceae, is particularly important for understanding the functional evolution and diversification of the family as a whole. As beetle pollination (cantharophily) is plesiomorphic in Anaxagorea and in Annonaceae, characters associated with beetle pollination appear imprinted in members of the whole family, and cantharophily remains the predominant pollination mode in the majority of species worldwide.
Examples are given of diurnal representatives (e.g., Guatteria, Duguetia, Annona) that function on the basis of fruit-imitating flowers attracting mainly fruit-inhabiting nitidulid beetles, as well as of nocturnal species (e.g., large-flowered Annona and Duguetia species), which, in addition to the traits of most diurnal species, exhibit strong flower warming and provide very thick petal tissues for the voracious dynastid scarab beetles (Dynastinae, Scarabaeidae). Further examples show that a few Annonaceae have also adapted their pollination to thrips, flies, cockroaches, and even bees. Although these non-beetle-pollinated species have adapted in flower structure and scent compounds to their respective insects, they still retain some of the specialized cantharophilous characters of their ancestors.

Relevance: 30.00%

Abstract:

This thesis presents a method for measuring population diversity in floating-point-coded evolutionary algorithms and examines its behavior experimentally. Evolutionary algorithms are population-based methods for solving optimization problems. In evolutionary algorithms, controlling population diversity is essential so that the search is both sufficiently reliable and sufficiently fast. Measuring diversity is particularly necessary when studying the dynamic behavior of evolutionary algorithms. The thesis considers diversity measurement in both the search space and the objective-function space. No fully satisfactory diversity measures have existed so far, and the goal of the work is to develop a general-purpose method for measuring the relative and absolute diversity of floating-point-coded evolutionary algorithms in the search space. The behavior and usefulness of the developed measures are examined experimentally by solving optimization problems with a differential evolution algorithm. The implemented measures are based on computing standard deviations over the population. The standard deviations are scaled with respect to either the initial population or the current population, depending on whether absolute or relative diversity is being computed. In the experimental study the developed measures were found to work well and to be useful. Stretching the objective function along the coordinate axes does not affect the measure, nor does rotating the objective function in the coordinate system. The time complexity of the presented method is linear in the population size, so the measure remains fast even for large populations. Relative diversity yields comparable results regardless of the number of parameters or the population size.
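A minimal numerical sketch of the kind of standard-deviation-based diversity measure described above, assuming real-coded individuals stored as rows of an array; the scaling convention (initial population for relative diversity) is one plausible reading of the abstract, and all names are illustrative:

```python
import numpy as np

def population_diversity(pop, initial_pop=None):
    """Standard-deviation-based diversity of a real-coded EA population.

    pop: (n_individuals, n_parameters) array.
    initial_pop: if given, per-parameter standard deviations are scaled by
    those of the initial population, giving a relative diversity measure;
    otherwise the unscaled (absolute) diversity is returned.
    """
    sd = np.std(pop, axis=0)              # one pass per parameter, O(n)
    if initial_pop is None:
        return float(np.mean(sd))         # absolute diversity
    sd0 = np.std(initial_pop, axis=0)
    return float(np.mean(sd / sd0))       # ~1 at start, shrinks as search converges
```

Because only per-parameter standard deviations are computed, the cost grows linearly with the population size, matching the complexity claimed in the abstract.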

Relevance: 30.00%

Abstract:

BACKGROUND: Root-colonizing fluorescent pseudomonads are known for their excellent abilities to protect plants against soil-borne fungal pathogens. Some of these bacteria produce an insecticidal toxin (Fit), suggesting that they may exploit insect hosts as a secondary niche. However, the ecological relevance of insect toxicity and the mechanisms driving the evolution of toxin production remain puzzling. RESULTS: Screening a large collection of plant-associated pseudomonads for insecticidal activity and the presence of the Fit toxin revealed that Fit is highly indicative of insecticidal activity and that Pseudomonas protegens and P. chlororaphis are the exclusive Fit producers. A comparative evolutionary analysis of Fit toxin-producing Pseudomonas, including the insect-pathogenic bacteria Photorhabdus and Xenorhabdus, which produce the Fit-related Mcf toxin, showed that fit genes are part of a dynamic genomic region with substantial presence/absence polymorphism and local variation in GC base composition. The patchy distribution and phylogenetic incongruence of fit genes indicate that the Fit cluster evolved via horizontal transfer, followed by functional integration of vertically transmitted genes, generating a unique Pseudomonas-specific insect toxin cluster. CONCLUSIONS: Our findings suggest that multiple independent evolutionary events led to the formation of at least three versions of the Mcf/Fit toxin, highlighting the dynamic nature of insect toxin evolution.

Relevance: 30.00%

Abstract:

Many models proposed to study the evolution of collective action rely on a formalism that represents social interactions as n-player games between individuals adopting discrete actions such as cooperate and defect. Despite the importance of spatial structure in biological collective action, the analysis of n-player games in spatially structured populations has so far proved elusive. We address this problem by considering mixed strategies and by integrating discrete-action n-player games into the direct fitness approach of social evolution theory. This allows us to conveniently identify convergence stable strategies and to capture the effect of population structure by a single structure coefficient, namely, the pairwise (scaled) relatedness among interacting individuals. As an application, we use our mathematical framework to investigate collective action problems associated with the provision of three different kinds of collective goods, paradigmatic of a vast array of helping traits in nature: "public goods" (both providers and shirkers can use the good, e.g., alarm calls), "club goods" (only providers can use the good, e.g., participation in collective hunting), and "charity goods" (only shirkers can use the good, e.g., altruistic sacrifice). We show that relatedness promotes the evolution of collective action in different ways depending on the kind of collective good and its economies of scale. Our findings highlight the importance of explicitly accounting for relatedness, the kind of collective good, and the economies of scale in theoretical and empirical studies of the evolution of collective action.
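As an illustration of the direct fitness approach mentioned above, the following sketch evaluates a selection gradient for a mixed strategy z with a structure coefficient r (pairwise relatedness): the focal individual's own marginal payoff plus the relatedness-weighted marginal effect of the same deviation in its co-players. The payoff function and parameter values are invented for illustration and are not taken from the paper:

```python
def selection_gradient(payoff, z, r, eps=1e-6):
    # Direct-fitness-style gradient at a monomorphic mixed strategy z:
    # own-strategy effect plus relatedness-weighted co-player effect,
    # both obtained by central finite differences.
    d_self = (payoff(z + eps, z) - payoff(z - eps, z)) / (2 * eps)
    d_others = (payoff(z, z + eps) - payoff(z, z - eps)) / (2 * eps)
    return d_self + r * d_others

def public_good(z_self, z_others, n=5, b=3.0, c=1.0):
    # Linear n-player public good: everyone shares the benefit,
    # only providers pay the cost (a "public good" in the paper's sense).
    return b * (z_self + (n - 1) * z_others) / n - c * z_self
```

With these illustrative numbers the gradient is b/n - c + r * b * (n - 1)/n at any z: negative for r = 0 (shirking favored) and positive for r = 1, showing how relatedness can rescue collective action.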

Relevance: 30.00%

Abstract:

Our consumption of groundwater, in particular as drinking water and for irrigation, has increased considerably over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays we face many problems, ranging from water prospection to sustainable management and the remediation of polluted aquifers. Independently of the hydrogeological problem considered, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of these approaches is the computational cost of performing complex flow simulations for each realization. 
In the first part of the thesis, this issue is explored in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods use approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex, computationally heavy flow model is then run only for this subset, and inference is based on these responses. Our objective is to increase the performance of this approach by using all of the available information, not solely the subset of exact responses. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model, which is then used to correct the remaining approximate responses and to predict the "expected" responses of the exact model. The proposed methodology exploits all the available information without perceptible additional computational cost and makes uncertainty propagation more accurate and more robust. 
The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the approximate and exact flow models. In the second part of the thesis, this strategy is formalized in a rigorous mathematical framework by defining a regression model between functional responses. As this problem is ill-posed, its dimensionality must be reduced. The novelty of the work lies in the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows the quality of the error model to be diagnosed in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid; the error model strongly reduces the computational cost while providing a good estimate of the uncertainty. Moreover, the individual correction of each approximate response by the error model yields an excellent prediction of the exact response, opening the door to many applications. 
The concept of a functional error model is therefore useful not only for uncertainty propagation but also, and maybe even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice for generating geostatistical realizations in accordance with the observations. However, this approach suffers from a very low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. Two-stage MCMC was introduced to decrease this cost by avoiding unnecessary simulations of the exact model thanks to a preliminary evaluation of each proposal. In the third part of the thesis, the approximate flow model coupled with an error model serves as the preliminary evaluation in a two-stage MCMC scheme. We demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to a classical one-stage MCMC implementation. An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires an iterative strategy such that, as new flow simulations are performed, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, where the methodology is applied to a saline-intrusion problem in a coastal aquifer.
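The functional error model described above can be illustrated as follows, assuming responses discretized on a common time grid; plain PCA via SVD stands in for a full functional PCA, and the names and the regression form (linear, on a handful of principal component scores) are simplifying assumptions rather than the thesis's exact implementation:

```python
import numpy as np

def fit_error_model(proxy_train, exact_train, n_pc=3):
    # proxy_train, exact_train: (n_runs, n_times) arrays holding the
    # approximate and exact flow responses of a training subset.
    pm, em = proxy_train.mean(axis=0), exact_train.mean(axis=0)
    _, _, Vp = np.linalg.svd(proxy_train - pm, full_matrices=False)
    _, _, Ve = np.linalg.svd(exact_train - em, full_matrices=False)
    Xp = (proxy_train - pm) @ Vp[:n_pc].T      # proxy PC scores
    Xe = (exact_train - em) @ Ve[:n_pc].T      # exact PC scores
    # Linear regression (with intercept) from proxy scores to exact scores.
    A = np.c_[np.ones(len(Xp)), Xp]
    B, *_ = np.linalg.lstsq(A, Xe, rcond=None)

    def correct(proxy_new):
        # Predict exact curves for realizations where only the
        # approximate response has been simulated.
        scores = (proxy_new - pm) @ Vp[:n_pc].T
        return em + (np.c_[np.ones(len(scores)), scores] @ B) @ Ve[:n_pc]

    return correct
```

The corrected curves can then replace the raw proxy responses in uncertainty propagation, or serve as the cheap first-stage evaluation in a two-stage MCMC scheme.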

Relevance: 30.00%

Abstract:

Context. The understanding of Galactic evolution can be facilitated by population synthesis models, which make it possible to test hypotheses on the star formation history, stellar evolution, and the chemical and dynamical evolution of the Galaxy. Aims. The new version of the Besançon Galaxy Model (hereafter BGM) aims to provide a more flexible and powerful tool to investigate the Initial Mass Function (IMF) and Star Formation Rate (SFR) of the Galactic disc. Methods. We present a new strategy for the generation of thin-disc stars which treats the IMF, SFR, and evolutionary tracks as free parameters. We have updated most of the ingredients for the star count production and, for the first time, binary stars are generated in a consistent way. In this new scheme we keep the local dynamical self-consistency of Bienaymé et al. (1987). We then compare simulations from the new model with Tycho-2 data and the local luminosity function, as a first test to verify and constrain the new ingredients. The effects of changing thirteen different ingredients of the model are systematically studied. Results. For the first time, a full-sky comparison is performed between the BGM and data. This strategy allows us to constrain the IMF slope at high masses, which is found to be close to 3.0, excluding a shallower slope such as Salpeter's. The SFR is found to be decreasing whatever IMF is assumed. The model is compatible with a local dark matter density of 0.011 M⊙ pc⁻³, implying that there is no compelling evidence for a significant amount of dark matter in the disc. While the model is fitted to Tycho-2 data, a magnitude-limited sample with V < 11, we check that it is still consistent with fainter stars. Conclusions. The new model constitutes a new basis for further comparisons with large-scale surveys and is being prepared to become a powerful tool for the analysis of the Gaia mission data.
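To make the role of the IMF slope concrete, here is a small sketch of inverse-CDF sampling from a single power-law IMF; the slope of 3.0 echoes the high-mass value quoted above, while the mass range and names are illustrative assumptions, not the BGM's actual ingredients:

```python
import numpy as np

def sample_imf(n, alpha=3.0, m_min=0.5, m_max=100.0, rng=None):
    # Draw n stellar masses from dN/dm proportional to m**(-alpha)
    # by inverting the cumulative distribution analytically.
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=n)
    k = 1.0 - alpha                      # exponent of the integrated power law
    return (m_min**k + u * (m_max**k - m_min**k)) ** (1.0 / k)
```

Masses cluster near the lower bound: with alpha = 3 and this range, roughly three quarters of the draws fall below one solar mass, which is why the high-mass slope must be constrained with star counts over a wide magnitude range.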

Relevance: 30.00%

Abstract:

Very large molecular systems can be calculated with the so-called CNDOL approximate Hamiltonians, which have been developed by avoiding oversimplifications and using only a priori parameters and formulas from the simpler NDO methods. A new diagonal monoelectronic term named CNDOL/21 shows great consistency and easier SCF convergence when used together with an appropriate function for charge repulsion energies derived from traditional formulas. It is possible to obtain a priori molecular orbitals and electron excitation properties reliably after configuration interaction of singly excited determinants, retaining interpretative power even though the Hamiltonian is simplified. Tests with some unequivocal gas-phase maxima of simple molecules (benzene, furfural, acetaldehyde, hexyl alcohol, methyl amine, 2,5-dimethyl-2,4-hexadiene, and ethyl sulfide) confirm the general quality of this approach in comparison with other methods. Calculations of large systems, such as porphine in the gas phase and a model of the complete retinal binding pocket in rhodopsin with 622 basis functions on 280 atoms at the quantum mechanical level, prove reliable, giving a first allowed transition at 483 nm, very similar to the known experimental value of 500 nm for the "dark state." In this very important case, our model assigns a central role in this excitation to a charge transfer from the neighboring Glu(-) counterion to the retinaldehyde polyene chain. Tests with gas-phase maxima of some important molecules corroborate the reliability of the CNDOL/2 Hamiltonians.

Relevance: 30.00%

Abstract:

Menopause timing has a substantial impact on infertility and risk of disease, including breast cancer, but the underlying mechanisms are poorly understood. We report a dual strategy in ∼70,000 women to identify common and low-frequency protein-coding variation associated with age at natural menopause (ANM). We identified 44 regions with common variants, including two regions harboring additional rare missense alleles of large effect. We found enrichment of signals in or near genes involved in delayed puberty, highlighting the first molecular links between the onset and end of reproductive lifespan. Pathway analyses identified major association with DNA damage response (DDR) genes, including the first common coding variant in BRCA1 associated with any complex trait. Mendelian randomization analyses supported a causal effect of later ANM on breast cancer risk (∼6% increase in risk per year; P = 3 × 10⁻¹⁴), likely mediated by prolonged sex hormone exposure rather than DDR mechanisms.
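The Mendelian randomization step can be illustrated with a generic inverse-variance-weighted (IVW) estimator over per-variant Wald ratios; this is a textbook sketch, not the paper's analysis pipeline, and all names and numbers in the example are made up:

```python
import numpy as np

def ivw_estimate(beta_exposure, beta_outcome, se_outcome):
    # Inverse-variance-weighted causal estimate: each genetic variant
    # contributes a Wald ratio beta_outcome / beta_exposure, weighted
    # by the precision of its outcome association.
    bx, by, se = map(np.asarray, (beta_exposure, beta_outcome, se_outcome))
    w = bx**2 / se**2
    estimate = np.sum(w * (by / bx)) / np.sum(w)
    std_error = np.sqrt(1.0 / np.sum(w))
    return estimate, std_error
```

Here beta_exposure would be per-variant effects on ANM and beta_outcome the corresponding effects on breast cancer risk, with the estimate interpreted as risk change per year of later menopause.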

Relevance: 30.00%

Abstract:

Key Messages: A fundamental failure of high-risk prevention strategies is their inability to prevent disease in the large part of the population that is at relatively small average risk and from which most cases of disease originate. The development of individual predictive medicine and the widening of high-risk categories for numerous (chronic) conditions lead to the application of pseudo-high-risk prevention strategies. Widening the criteria that justify individual preventive interventions, and the related pseudo-high-risk strategies, leads to treating, individually, ever healthier and larger strata of the population. Pseudo-high-risk prevention strategies raise problems similar to those of high-risk strategies, but on a larger scale and without any of the benefits of population-based strategies. Some 30 years ago, the strengths and weaknesses of population-based and high-risk prevention strategies were brilliantly delineated by Geoffrey Rose in several seminal publications (Table 1).1,2 His work had major implications not only for epidemiology and public health but also for clinical medicine. In particular, Rose demonstrated the fundamental failure of high-risk prevention strategies, namely, that they miss a large number of preventable cases.

Relevance: 30.00%

Abstract:

To evaluate whether screening for hypertension should start early in life, information is needed on the risk of diseases associated with the level of blood pressure in childhood or adolescence. The study by Leiba et al. reported in the current issue of Pediatric Nephrology demonstrates convincingly that hypertensive adolescents are at higher risk of cardiovascular death than normotensive adolescents. Nevertheless, it can be shown that this excess risk is not sufficient to justify a screen-and-treat strategy. Since the large majority of cardiovascular deaths occur among normotensive adolescents, measures for primordial prevention of cardiovascular diseases could have a much larger impact at the population level.

Relevance: 30.00%

Abstract:

The genus Prunus L. is large and economically important. However, phylogenetic relationships within Prunus at low taxonomic levels, particularly in the subgenus Amygdalus L. s.l., remain poorly investigated. This paper attempts to document the evolutionary history of Amygdalus s.l. and to establish a temporal framework by assembling molecular data from both conservative and variable molecular markers. The nuclear s6pdh gene, in combination with the plastid trnSG spacer, is analyzed with Bayesian and maximum likelihood methods. Since previous phylogenetic analyses with these markers lacked resolution, we additionally analyzed 13 nuclear SSR loci with the δμ² distance, followed by an unweighted pair-group method with arithmetic mean (UPGMA) algorithm. Our phylogenetic analysis with both sequence and SSR loci confirms the split between sections Amygdalus and Persica, comprising almonds and peaches, respectively. This result is in agreement with biogeographic data showing that the two sections are naturally distributed on either side of the Central Asian Massif chain. Using coalescent-based estimations, divergence times between the two sections varied strongly depending on whether sequence data were considered alone or combined with SSRs. The sequence-only estimate (5 million years ago) was congruent with the Central Asian Massif orogeny and subsequent climate change. Given the low level of differentiation within the two sections using both marker types, the utility of combining microsatellites and sequence data to address phylogenetic relationships at low taxonomic levels within Amygdalus is discussed. The recent evolutionary histories of almond and peach are discussed in view of the domestication processes that arose in these two phenotypically diverging gene pools: almonds and peaches were domesticated from the Amygdalus s.s. and Persica sections, respectively. Such economically important crops may serve as good models to study divergent domestication processes in closely related gene pools.
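For readers unfamiliar with the SSR-based step, the following sketch computes a (δμ)²-style distance matrix from mean microsatellite repeat counts; the data layout and names are hypothetical, and a UPGMA tree could then be built from the matrix with any average-linkage routine (e.g., scipy.cluster.hierarchy.average on its condensed form):

```python
import numpy as np

def delta_mu_squared(allele_means):
    # allele_means: (n_taxa, n_loci) mean repeat counts per SSR locus.
    # Goldstein-style (delta mu)^2 distance: squared difference of locus
    # means, averaged over loci, for every pair of taxa.
    diff = allele_means[:, None, :] - allele_means[None, :, :]
    return (diff ** 2).mean(axis=2)
```

Because the distance depends only on mean repeat counts, it is suited to stepwise-mutating microsatellites but carries far less signal than sequence data at deeper divergences, which is one reason the abstract combines both marker types.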

Relevance: 30.00%

Abstract:

Perceived patient value is often not aligned with the rising expenses for health care services; in other words, costs are often assumed to rise faster than the actual value delivered to patients. This fact is causing major concern to governments, health plans, and individuals. Attempts to solve the problem have habitually come from the operational effectiveness side: increasing patient volume, minimizing costs, rationing, or closing hospitals, usually resulting in a zero-sum game. Only a few approaches come from the strategic positioning side, and "competition" among hospitals is still perceived as a danger rather than as a chance to create a positive-sum game and stimulate patient value. In their 2006 book "Redefining Health Care", the renowned Harvard strategy professor Michael E. Porter and hospital management expert Professor Elizabeth Olmsted Teisberg approach the challenge from the positive-sum perspective: they propose to form Integrated Practice Units (IPUs) and to manage hospitals in a modern, patient-value-oriented way. They argue that creating value-based competition on results should have the same effect on the health care sector that transparency and competition had on other industries with outdated management models (such as the once-inert telecommunication industry), turning them into highly competitive, customer-value-creating businesses. The objective of this paper is to elaborate Care Delivery Value Chains for Integrated Practice Units in ophthalmic clinics and to gather first feedback from Swiss hospital managers, ophthalmologists, and patients on whether such an approach could be a realistic way to improve health care management. First, Porter's definition of competitiveness (the distinction between operational effectiveness and strategic positioning) is explained. Then, the Care Delivery Value Chain is introduced as a key element for understanding value-based management, followed by three practice examples for ophthalmic clinics.
Finally, recommendations are given on how the Care Delivery Value Chain can be managed efficiently and how the obstacles to becoming a patient-oriented organization can be overcome. The conclusion is that increased transparency and value-based competition on results have the potential to change the mindset of hospital managers, which will align patient value with the rising health care expenses. Early adopters of this management approach will gain a competitive advantage. [Author, p. 6]

Relevance: 30.00%

Abstract:

The main objective of this research is to create a performance measurement system for the accounting services of a large paper industry company. The thesis compares several performance measurement systems and then selects two, which are presented and compared in more detail. The Performance Prism is the framework used in this research. The Performance Prism uses success maps to determine objectives, and its target areas are divided into five groups: stakeholder satisfaction, stakeholder contribution, strategy, processes, and capabilities. The creation of the measurement system began by identifying the stakeholders and defining their objectives. A success map was created based on these objectives, and the measures were derived from the objectives and the success map. The data needed for each measure was then defined. The final measurement system contains just over 40 measures, each with a specific target level and an assigned owner. The number of measures is fairly large, but as this is the first version of the measurement system, the amount is acceptable.

Relevance: 30.00%

Abstract:

The objective of the thesis is to enhance understanding of the evolution of convergence. Previous research has shown that the technological interfaces between distinct industries are one of the major sources of new radical cross-industry innovations. Although convergence in industry evolution has attracted substantial managerial interest, conceptual confusion persists within the field. Firstly, this study clarifies the convergence phenomenon and its impact on industry evolution. Secondly, the study creates novel patent analysis methods to analyze technological convergence and provides tools for anticipating its early stages. Overall, the study combines the industry evolution perspective with the convergence view of industrial evolution. The theoretical background for the study consists of industry life cycle theories, technology evolution, and technological trajectories. The study links several important concepts in analyzing industry evolution: technological discontinuities, path dependency, technological interfaces as a source of industry transformation, and the evolutionary stages of convergence. Based on a review of the literature, a generic understanding of industry transformation and industrial dynamics was generated. In the convergence studies, the theoretical basis lies in the discussion of different convergence types and their impacts on industry evolution, and in anticipating and monitoring the stages of convergence. The study is divided into two parts. The first part gives a general overview, and the second part comprises eight research publications. Our case study uses two historically very distinct industries, the paper and electronics companies, as a test environment to evaluate the importance of emerging business sectors and technological convergence as a source of industry transformation. Both qualitative and quantitative research methodologies are utilized.
The results of this study reveal that technological convergence and complementary innovations from different fields have a significant effect on the formation of emerging new business sectors. Patent-based indicators of technological convergence can be utilized in analyzing technology competition, capability and competence development, knowledge accumulation, knowledge spill-overs, and technology-based industry transformation, and they can provide insights into the future competitive environment. The results and conclusions from the empirical part do not appear to conflict with real observations in the industry.
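One simple patent-based convergence indicator consistent with the approach described is the share of patents whose classification codes span both industries; this is a hedged sketch, and the co-classification measure and the IPC codes in the example are illustrative choices, not necessarily the indicators used in the publications:

```python
def convergence_share(patent_ipc_sets, field_a, field_b):
    # Share of patents whose IPC codes span both technology fields --
    # a basic co-classification signal of technological convergence.
    a, b = set(field_a), set(field_b)
    spanning = [p for p in patent_ipc_sets if (a & p) and (b & p)]
    return len(spanning) / len(patent_ipc_sets)
```

Tracking this share over successive time windows would show whether the interface between, say, papermaking and electronics classes is growing, one way of anticipating the early stages of convergence.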

Relevance: 30.00%

Abstract:

In this thesis, I conduct a series of molecular systematic studies on the large phytophagous moth superfamily Noctuoidea (Insecta, Lepidoptera) to clarify deep divergences and evolutionary affinities of the group, based on material from every zoogeographic region of the globe. The Noctuoidea are the most speciose radiation of butterflies and moths on earth, comprising about a quarter of all lepidopteran diversity. The general aim of these studies was to apply suitably conservative genetic markers (DNA sequences of mitochondrial (mtDNA) and nuclear (nDNA) gene regions) to reconstruct, as an initial step, a robust skeleton phylogenetic hypothesis for the superfamily, then to build robust phylogenetic frameworks for the circumscribed monophyletic entities (i.e., families), as well as to clarify the internal classification of monophyletic lineages (subfamilies and tribes), in order to develop an understanding of the major lineages at various taxonomic levels within the superfamily Noctuoidea and their interrelationships. The approaches applied included: i) stabilizing a robust family-level classification for the superfamily; ii) resolving the phylogeny of the most speciose radiation within Noctuoidea, the family Erebidae; iii) reconstructing ancestral feeding behaviors and the evolution of the vampire moths (Erebidae, Calpinae); iv) elucidating the evolutionary relationships within the family Nolidae; and v) clarifying the basal lineages of Noctuidae sensu stricto. Thus, in this thesis I present a well-resolved molecular phylogenetic hypothesis for the higher taxa of Noctuoidea, consisting of six strongly supported families: Oenosandridae, Notodontidae, Euteliidae, Erebidae, Nolidae, and Noctuidae.
The studies in my thesis highlight the importance of molecular data, in particular DNA sequences of nuclear genes, in systematic and phylogenetic studies, and of an extensive sampling strategy that includes representatives of all known major lineages of the entire world fauna of Noctuoidea from every biogeographic region. This is crucial, especially when the model organism is as species-rich, highly diverse, cosmopolitan, and heterogeneous as the Noctuoidea, traits that represent obstacles to the use of morphology at this taxonomic level.