816 results for Adaptive Modelling, Entropy Evolution, Sustainable Design


Relevance:

40.00%

Publisher:

Abstract:

The hypothesis that ornaments can honestly signal quality only if their expression is condition-dependent has dominated the study of the evolution and function of colour traits. Much less interest has been devoted to the adaptive function of colour traits for which the expression is not, or is to a low extent, sensitive to body condition and the environment in which individuals live. The aim of the present paper is to review the current theoretical and empirical knowledge of the evolution, maintenance and adaptive function of colour plumage traits for which the expression is mainly under genetic control. The finding that in many bird species the inheritance of colour morphs follows the laws of Mendel indicates that genetic colour polymorphism is frequent. Polymorphism may have evolved or be maintained because each colour morph facilitates the exploitation of alternative ecological niches as suggested by the observation that individuals are not randomly distributed among habitats with respect to coloration. Consistent with the hypothesis that different colour morphs are linked to alternative strategies is the finding that in a majority of species polymorphism is associated with reproductive parameters, and behavioural, life-history and physiological traits. Experimental studies showed that such covariations can have a genetic basis. These observations suggest that colour polymorphism has an adaptive function. Aviary and field experiments demonstrated that colour polymorphism is used as a criterion in mate-choice decisions and dominance interactions confirming the claim that conspecifics assess each other's colour morphs. The factors favouring the evolution and maintenance of genetic variation in coloration are reviewed, but empirical data are virtually lacking to assess their importance. Although current theory predicts that only condition-dependent traits can signal quality, the present review shows that genetically inherited morphs can reveal the same qualities. 
The study of genetic colour polymorphism will provide important and original insights into the adaptive function of conspicuous traits.

Relevance:

40.00%

Publisher:

Abstract:

Understanding the factors that shape adaptive genetic variation across species niches has become of paramount importance in evolutionary ecology, especially to understand how adaptation to changing climate affects the geographic range of species. The distribution of adaptive alleles in the ecological niche is determined by the emergence of novel mutations, their fitness consequences and gene flow that connects populations across species niches. Striking demographical differences and source sink dynamics of populations between the centre and the margin of the niche can play a major role in the emergence and spread of adaptive alleles. Although some theoretical predictions have long been proposed, the origin and distribution of adaptive alleles within species niches remain untested. In this paper, we propose and discuss a novel empirical approach that combines landscape genetics with species niche modelling, to test whether alleles that confer local adaptation are more likely to occur in either marginal or central populations of species niches. We illustrate this new approach by using a published data set of 21 alpine plant species genotyped with a total of 2483 amplified fragment length polymorphisms (AFLP), distributed over more than 1733 sampling sites across the Alps. Based on the assumption that alleles that were statistically associated with environmental variables were adaptive, we found that adaptive alleles in the margin of a species niche were also present in the niche centre, which suggests that adaptation originates in the niche centre. These findings corroborate models of species range evolution, in which the centre of the niche contributes to the emergence of novel adaptive alleles, which diffuse towards niche margins and facilitate niche and range expansion through subsequent local adaptation. 
Although these results need to be confirmed via fitness measurements in natural populations and functionally characterised genetic sequences, this study provides a first step towards understanding how adaptive genetic variation emerges and shapes species niches and geographic ranges along environmental gradients.
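The centre-versus-margin comparison described above can be sketched in a few lines: classify sampled populations by a niche-centrality score, then ask, for each putatively adaptive allele found in the margin, whether it is also present in the centre. This is a minimal illustration only; the population names, the centrality cutoff, and the allele sets below are invented, not the study's AFLP data.

```python
# Hypothetical sketch of the centre/margin test described above.
# Population names, cutoff and allele sets are illustrative assumptions.

def classify_populations(centrality, cutoff=0.5):
    """Split populations into niche 'centre' and 'margin' by a
    centrality score in [0, 1] (closeness to the niche optimum)."""
    centre = {p for p, c in centrality.items() if c >= cutoff}
    margin = set(centrality) - centre
    return centre, margin

def margin_alleles_also_in_centre(alleles, centre, margin):
    """For each putatively adaptive allele present in the margin,
    record whether it also occurs in centre populations."""
    shared = {}
    for allele, present_in in alleles.items():
        if present_in & margin:
            shared[allele] = bool(present_in & centre)
    return shared

centrality = {"pop1": 0.9, "pop2": 0.8, "pop3": 0.3, "pop4": 0.2}
alleles = {
    "AFLP_017": {"pop1", "pop3"},   # margin + centre
    "AFLP_142": {"pop1", "pop2"},   # centre only
    "AFLP_203": {"pop4"},           # margin-private
}
centre, margin = classify_populations(centrality)
print(margin_alleles_also_in_centre(alleles, centre, margin))
# AFLP_017 -> True (shared with the centre), AFLP_203 -> False
```

Under the paper's reasoning, a preponderance of `True` values (margin alleles also present centrally) is the pattern consistent with adaptive alleles originating in the niche centre.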

Relevance:

40.00%

Publisher:

Abstract:

The genome of the bladderwort Utricularia gibba provides an unparalleled opportunity to uncover the adaptive landscape of an aquatic carnivorous plant with unique phenotypic features such as absence of roots, development of water-filled suction bladders, and a highly ramified branching pattern. Despite its tiny size, the U. gibba genome accommodates approximately as many genes as other plant genomes. To examine the relationship between the compactness of its genome and gene turnover, we compared the U. gibba genome with that of four other eudicot species, defining a total of 17,324 gene families (orthogroups). These families were further classified as either 1) lineage-specific expanded/contracted or 2) stable in size. The U. gibba-expanded families are generically related to three main phenotypic features: 1) trap physiology, 2) key plant morphogenetic/developmental pathways, and 3) response to environmental stimuli, including adaptations to life in aquatic environments. Further scans for signatures of protein functional specialization permitted identification of seven candidate genes with amino acid changes putatively fixed by positive Darwinian selection in the U. gibba lineage. The Arabidopsis orthologs of these genes (AXR, UMAMIT41, IGS, TAR2, SOL1, DEG9, and DEG10) are involved in diverse plant biological functions potentially relevant for U. gibba phenotypic diversification, including 1) auxin metabolism and signal transduction, 2) flowering induction and floral meristem transition, 3) root development, and 4) peptidases. Taken together, our results suggest numerous candidate genes and gene families as interesting targets for further experimental confirmation of their functional and adaptive roles in U. gibba's unique lifestyle and highly specialized body plan.
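The expanded/contracted-versus-stable classification of orthogroups can be illustrated with a simple fold-change rule: compare the family size in the focal genome with its mean size in the comparison genomes. The family names, gene counts, and twofold threshold below are invented for the sketch; the actual analysis covered 17,324 families across five eudicot genomes.

```python
# Illustrative sketch of orthogroup size-change classification.
# All names, counts and the fold-change threshold are assumptions.

def classify_family(focal_count, other_counts, fold=2.0):
    """Label a family 'expanded', 'contracted' or 'stable' for the
    focal genome, relative to the mean size in the other genomes."""
    mean_other = sum(other_counts) / len(other_counts)
    if mean_other == 0:
        return "expanded" if focal_count > 0 else "stable"
    ratio = focal_count / mean_other
    if ratio >= fold:
        return "expanded"
    if ratio <= 1.0 / fold:
        return "contracted"
    return "stable"

families = {
    "OG0001": (12, [3, 4, 2, 3]),   # e.g. a trap-physiology family
    "OG0002": (1,  [4, 5, 3, 4]),   # lineage-specific contraction
    "OG0003": (3,  [3, 2, 4, 3]),   # stable in size
}
labels = {og: classify_family(f, o) for og, (f, o) in families.items()}
print(labels)
# {'OG0001': 'expanded', 'OG0002': 'contracted', 'OG0003': 'stable'}
```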

Relevance:

40.00%

Publisher:

Abstract:

Filtration is a widely used unit operation in chemical engineering. The huge variation in the properties of materials to be filtered makes the study of filtration a challenging task. One of the objectives of this thesis was to show that conventional filtration theories are difficult to use when the system to be modelled contains all of the stages and features that are present in a complete solid/liquid separation process. Furthermore, most of the filtration theories require experimental work to be performed in order to obtain critical parameters required by the theoretical models. Creating a good overall understanding of how the variables affect the final product in filtration is somewhat impossible on a purely theoretical basis. The complexity of solid/liquid separation processes requires experimental work, and when tests are needed it is advisable to use experimental design techniques so that the goals can be achieved. The statistical design of experiments provides the necessary tools for recognising the effects of variables. It also helps to perform experimental work more economically. Design of experiments is a prerequisite for creating empirical models that can describe how the measured response is related to changes in the values of the variables. A software package was developed that provides a filtration practitioner with experimental designs and calculates the parameters for linear regression models, along with the graphical representation of the responses. The developed software consists of two modules, LTDoE and LTRead. The LTDoE module is used to create experimental designs for different filter types. The filter types considered in the software are the automatic vertical pressure filter, double-sided vertical pressure filter, horizontal membrane filter press, vacuum belt filter and ceramic capillary action disc filter.
It is also possible to create experimental designs for cases where the variables are totally user defined, say for a customized filtration cycle or a different piece of equipment. The LTRead module is used to read the experimental data gathered from the experiments, to analyse the data and to create models for each of the measured responses. Introducing the structure of the software in more detail and showing some of the practical applications is the main part of this thesis. This approach to the study of cake filtration processes, as presented in this thesis, has been shown to have good practical value when making filtration tests.
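The kind of design a DoE module like LTDoE generates can be sketched with a two-level full factorial design and main-effect estimation. The factor names (pressure, slurry concentration, filtration time) and the cake-moisture responses below are invented for illustration; they are not the thesis's data or the module's actual interface.

```python
# A minimal sketch of a two-level full factorial design and the
# main-effect estimates a filtration practitioner might compute.
# Factor names and response values are illustrative assumptions.
from itertools import product

def full_factorial(n_factors):
    """All 2^n runs in coded units (-1 = low level, +1 = high level)."""
    return list(product((-1, 1), repeat=n_factors))

def main_effects(design, response):
    """Main effect of each factor: mean response at +1 minus mean at -1."""
    n = len(design[0])
    effects = []
    for j in range(n):
        hi = [y for run, y in zip(design, response) if run[j] == 1]
        lo = [y for run, y in zip(design, response) if run[j] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# Coded factors: pressure, slurry concentration, filtration time.
design = full_factorial(3)
# Hypothetical cake-moisture responses (%) for the 8 runs, in run order:
response = [28, 24, 27, 23, 26, 22, 25, 21]
print(main_effects(design, response))
# [-2.0, -1.0, -4.0]: here, filtration time has the largest effect
```

The effect estimates are the coefficients (times two) of the corresponding first-order linear regression model, which is what the thesis's software fits to the measured responses.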


Relevance:

40.00%

Publisher:

Abstract:

In 1859, Charles Darwin published his theory of evolution by natural selection, a process driven by fitness benefits and fitness costs at the individual level. Traditionally, evolution has been investigated by biologists, but it has also inspired mathematical approaches. For example, adaptive dynamics has proven to be a very applicable framework for this purpose. Its core concept is the invasion fitness, the sign of which tells whether a mutant phenotype can invade the prevalent phenotype. In this thesis, four real-world applications to evolutionary questions are provided. Inspiration for the first two studies arose from a cold-adapted species, the American pika. First, it is studied how global climate change may affect the evolution of dispersal and the viability of pika metapopulations. Based on the results gained here, it is shown that the evolution of dispersal can result in extinction, and indeed the evolution of dispersal should be incorporated into the viability analysis of species living in fragmented habitats. The second study focuses on the evolution of density-dependent dispersal in metapopulations with small habitat patches. It revealed a very surprising and unintuitive evolutionary phenomenon: how non-monotone density-dependent dispersal may evolve. Cooperation is surprisingly common at many levels of life, despite its obvious vulnerability to selfish cheating. This motivated two applications. First, it is shown that density-dependent cooperative investment can evolve to have a qualitatively different, monotone or non-monotone, form depending on modelling details. The last study investigates the evolution of investment into two public-goods resources. The results suggest one general path by which labour division can arise via evolutionary branching. In addition to the applications, two novel methodological derivations of fitness measures in structured metapopulations are given.
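The invasion-fitness concept at the core of adaptive dynamics can be illustrated with a toy model: Lotka-Volterra competition in which a trait value determines carrying capacity, so a rare mutant's initial per-capita growth rate in the resident's environment gives the invasion fitness s(y; x) = r(1 - K(x)/K(y)). The Gaussian trait-to-K map and all parameter values below are assumptions for the sketch, not taken from the thesis.

```python
# Toy illustration of invasion fitness in adaptive dynamics.
# Functions and parameters are invented for this sketch.
import math

def carrying_capacity(trait, optimum=0.0, width=1.0, k_max=100.0):
    """Assumed Gaussian map from trait value to carrying capacity."""
    return k_max * math.exp(-((trait - optimum) ** 2) / (2 * width ** 2))

def invasion_fitness(mutant, resident, r=1.0):
    """Initial growth rate of a rare mutant when the resident sits at
    its equilibrium density K(resident): s(y; x) = r * (1 - K(x)/K(y))."""
    return r * (1.0 - carrying_capacity(resident) / carrying_capacity(mutant))

# A mutant closer to the trait optimum than the resident can invade
# (positive sign); the reverse mutant cannot (negative sign).
print(invasion_fitness(mutant=0.1, resident=0.5) > 0)  # True
print(invasion_fitness(mutant=0.9, resident=0.5) < 0)  # True
```

In this toy model the sign of s(y; x) is all that matters, exactly as in the thesis: a positive sign means the mutant phenotype can invade the prevalent one.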

Relevance:

40.00%

Publisher:

Abstract:

Sustainability and recycling are core values in today's industrial operations. New materials, products and processes need to be designed in such a way as to consume fewer of the diminishing resources we have available and to put as little strain on the environment as possible. An integral part of this is cleaning and recycling, and new processes are to be designed to improve efficiency in this respect. Wastewater, including municipal wastewater, is treated in several steps, including chemical and mechanical cleaning of the water. Well-cleaned water can be recycled and reused; clean water for everyone is one of the greatest challenges we face today. Ferric sulphate, made by oxidation from ferrous sulphate, is used in water purification. The oxidation of ferrous sulphate, FeSO4, to ferric sulphate in acidic aqueous solutions of H2SO4 over finely dispersed active carbon particles was studied in a vigorously stirred batch reactor. Molecular oxygen was used as the oxidation agent and several catalysts were screened: active carbon, and active carbon impregnated with Pt, Rh, Pd and Ru. Both active carbon and noble metal-active carbon catalysts enhanced the oxidation rate considerably. The order of the noble metals according to their effect was: Pt >> Rh > Pd, Ru. By the use of catalysts, the production capacities of existing oxidation units can be considerably increased. Good coagulants have a high charge on a long polymer chain, effectively capturing dirt particles of the opposite charge. Analysis of the reaction product indicated that it is possible to obtain polymeric iron-based products with good coagulation properties. Systematic kinetic experiments were carried out in the temperature and pressure ranges of 60–100 °C and 4–10 bar, respectively. The results revealed that non-catalytic and catalytic oxidation of Fe2+ to Fe3+ take place simultaneously.
The experimental data were fitted to rate equations based on a plausible reaction mechanism: adsorption of dissolved oxygen on active carbon, electron transfer from Fe2+ ions to adsorbed oxygen, and formation of surface hydroxyls. A comparison of the Fe2+ concentrations predicted by the kinetic model with the experimentally observed concentrations indicated that the mechanistic rate equations were able to describe the intrinsic oxidation kinetics of Fe2+ over active carbon and active carbon-noble metal catalysts. Engineering aspects were closely considered and effort was directed to utilizing existing equipment in the production of the new coagulant. Ferrous sulphate can be catalytically oxidized to produce a novel long-chained polymeric iron-based flocculant in an easy and affordable way in existing facilities. The results can be used for modelling the reactors and for scale-up. Ferric iron (Fe3+) was successfully applied to the dissolution of sphalerite. Sphalerite contains indium, gallium and germanium, among others, and the application can promote their recovery. Understanding the reduction of ferric to ferrous iron can be used to develop further the understanding of the dissolution mechanisms and of the oxidation of ferrous sulphate. Indium, gallium and germanium face an ever-increasing demand in the electronics industry, among others, but the supply is very limited. Since most of the material is obtained through secondary production, real production quotas depend on the primary material production, which also sets the pricing. The primary production materials are in most cases zinc and aluminium. Recycling of scrap material and the utilization of industrial waste containing indium, gallium and germanium is a necessity with no real alternatives. As part of this study, plausible methods for the recovery of indium, gallium and germanium were studied.
The results were encouraging and provided information about the precipitation of these valuable metals from highly acidic solutions. Indium and gallium were separated from acidic sulphuric acid solutions by precipitation with basic sulphates such as alunite, or they were precipitated as basic sulphates of their own, as galliunite and indiunite. Germanium may precipitate as a basic sulphate of mixed composition. The precipitation is rapid and the selectivity is good. When the solutions contain both indium and gallium, the results show that gallium should be separated before indium to achieve better selectivity. Germanium was separated from highly acidic sulphuric acid solutions, also containing other metals, by precipitation with tannic acid. This is a highly selective method; according to the study, other metals commonly found in the solution do not affect germanium precipitation. The reduction of ferric iron to ferrous iron, the precipitation of indium, gallium and germanium, and the dissolution of the raw materials depend strongly on temperature and pH. The effects of temperature and pH were studied, which contributed to the understanding and design of the different process steps. Increased temperature and reduced pH improve the reduction rate. Finally, the understanding gained in the studied areas can be employed to develop better industrial processes, not only on a large scale but increasingly also on a smaller scale. The small amounts of indium, gallium and germanium may favour smaller and more locally bound recovery.
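The kinetic-fitting step described above can be sketched in its simplest form: estimating an apparent rate constant for Fe2+ oxidation from concentration-time data by linear regression of ln(C0/C) against time. The data points below are synthetic, and the pseudo-first-order form is a deliberate simplification; the thesis fitted full mechanistic rate equations derived from the adsorption/electron-transfer mechanism.

```python
# A hedged sketch of kinetic parameter estimation from batch data.
# Synthetic data; a pseudo-first-order simplification of the thesis's
# mechanistic rate equations.
import math

def fit_first_order(times, concentrations):
    """Least-squares slope of ln(C0/C) vs t through the origin:
    k = sum(t_i * y_i) / sum(t_i^2), with y_i = ln(C0 / C_i)."""
    c0 = concentrations[0]
    y = [math.log(c0 / c) for c in concentrations]
    num = sum(t * yi for t, yi in zip(times, y))
    den = sum(t * t for t in times)
    return num / den

# Synthetic Fe2+ concentration profile generated with k = 0.05 1/min:
times = [0, 10, 20, 40, 60]          # min
conc = [1.0 * math.exp(-0.05 * t) for t in times]   # mol/L
k = fit_first_order(times, conc)
print(round(k, 3))
```

Comparing model-predicted and measured concentration profiles, as the thesis does for the mechanistic model, is then a matter of plugging the fitted parameters back into the rate equation.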

Relevance:

40.00%

Publisher:

Abstract:

The aim of this research is to develop a tool for organizing coopetitive relationships between organizations on the basis of a two-sided Internet platform. The main result of this master's thesis is a detailed description of the concept of lead-generating, Internet-platform-based coopetition. Using agent-based modelling and simulation, results were obtained which suggest that the developed concept can have a positive effect on certain industries (e.g. the web-design studio market) and can potentially bring benefits and extra profitability to most companies operating in such an industry. The results also suggest that the developed instrument can increase the degree of transparency of the market to which it is applied.
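The style of agent-based simulation mentioned above can be illustrated with a deliberately minimal model: firms on a platform either keep excess leads (no coopetition) or pass them to idle peers through the platform, and aggregate profit is compared between the two regimes. Every parameter (firm count, capacity, demand distribution) is invented here; this only shows the flavour of such a model, not the thesis's actual simulation.

```python
# A minimal agent-based sketch of platform-mediated lead sharing.
# All parameters are illustrative assumptions.
import random

def simulate(n_firms=10, rounds=100, share_leads=True, seed=42):
    rng = random.Random(seed)
    profits = [0.0] * n_firms
    capacity = 2                      # jobs a firm can serve per round
    for _ in range(rounds):
        leads = [rng.randrange(5) for _ in range(n_firms)]  # demand per firm
        overflow = 0
        for i, k in enumerate(leads):
            served = min(k, capacity)
            profits[i] += served
            overflow += k - served    # leads the firm cannot serve itself
        if share_leads:               # platform routes overflow to idle firms
            for i, k in enumerate(leads):
                spare = capacity - min(k, capacity)
                take = min(spare, overflow)
                profits[i] += take
                overflow -= take
    return sum(profits)

with_platform = simulate(share_leads=True)
without_platform = simulate(share_leads=False)
print(with_platform > without_platform)  # True: sharing recovers lost leads
```

In this toy setting the platform regime dominates because overflow demand that would otherwise be lost is served by firms with spare capacity, which is the intuition behind the claimed industry-level benefit.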

Relevance:

40.00%

Publisher:

Abstract:

Software is in constant evolution, requiring continuous maintenance and development. It undergoes changes throughout its life, whether while adding new features or correcting bugs in the code. As software evolves, its architecture tends to degrade over time and becomes less adaptable to new user requirements. It becomes more complex and harder to maintain. In some cases, developers prefer to redesign these architectures from scratch rather than prolong their lives, which leads to a significant increase in development and maintenance costs. Consequently, developers must understand the factors that lead to architectural degradation, in order to take proactive measures that ease future changes and slow the degradation. Architectural degradation occurs when developers who do not understand the original design of the software make changes to it. On the one hand, making changes without understanding their impact can lead to the introduction of bugs and to the premature retirement of the software. On the other hand, developers who lack knowledge and/or experience in solving a design problem can introduce design defects. These defects make software harder to maintain and evolve. Consequently, developers need mechanisms to understand the impact of a change on the rest of the software, and tools to detect design defects so that they can be corrected. In this thesis, we propose three main contributions. The first contribution concerns the evaluation of the degradation of software architectures.
This evaluation uses a diagram-matching technique, applied to diagrams such as class diagrams, to identify structural changes between several versions of a software architecture. This step requires identifying class renamings. Consequently, the first step of our approach is to identify class renamings during the evolution of the software architecture. The second step is to match several versions of an architecture in order to identify its stable parts and those that are degrading. We propose bit-vector and clustering algorithms to analyse the correspondence between several versions of an architecture. The third step is to measure the degradation of the architecture during the evolution of the software; we propose a set of metrics on the stable parts of the software to evaluate this degradation. The second contribution relates to the analysis of the impact of changes in software. In this context, we present a new metaphor inspired by seismology to identify the impact of changes. Our approach treats a change to a class as an earthquake that propagates through the software along a long chain of intermediary classes. It combines the analysis of the structural dependencies of classes with the analysis of their history (co-change relationships) to measure the extent of change propagation in the software, i.e., how a change propagates from the modified class to other classes. The third contribution concerns the detection of design defects. We propose a metaphor inspired by the natural immune system. Like any living creature, system designs are exposed to diseases, which are design defects.
Detection approaches are defence mechanisms for system designs. A natural immune system can detect similar pathogens with good precision. This good precision has inspired a family of classification algorithms, called artificial immune systems (AIS), which we use to detect design defects. The various contributions were evaluated on open-source object-oriented software, and the results obtained allow us to draw the following conclusions: • The Tunnel Triplets Metric (TTM) and Common Triplets Metric (CTM) give developers good indications of architectural degradation. A decrease in TTM indicates that the original design of the architecture has degraded. Stability of TTM indicates the stability of the original design, meaning that the system is adapted to new user requirements. • Seismology is an interesting metaphor for change impact analysis. Indeed, changes propagate through systems like earthquakes: the impact of a change is greatest around the changed class and decreases progressively with distance from that class. Our approach helps developers identify the impact of a change. • The immune system is an interesting metaphor for design defect detection. Experimental results showed that the precision and recall of our approach are comparable to or better than those of existing approaches.
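The seismology metaphor, in which a change's impact is greatest at the modified class and decays with distance, can be sketched as a breadth-first propagation over a class-dependency graph. The graph, the class names, and the decay factor below are invented; the thesis additionally weights propagation with co-change history, which this sketch omits.

```python
# A hypothetical sketch of earthquake-style change impact propagation.
# Graph, class names and decay factor are illustrative assumptions;
# the thesis also uses co-change history, not modelled here.
from collections import deque

def change_impact(deps, changed, decay=0.5):
    """Breadth-first propagation: impact = decay ** distance from the
    changed class over the dependency graph."""
    impact = {changed: 1.0}
    queue = deque([changed])
    while queue:
        cls = queue.popleft()
        for neighbour in deps.get(cls, ()):
            if neighbour not in impact:
                impact[neighbour] = impact[cls] * decay
                queue.append(neighbour)
    return impact

deps = {
    "Order": ["Invoice", "Customer"],
    "Invoice": ["Printer"],
    "Customer": [],
    "Printer": [],
}
print(change_impact(deps, "Order"))
# {'Order': 1.0, 'Invoice': 0.5, 'Customer': 0.5, 'Printer': 0.25}
```

The monotone decay with graph distance is the property the thesis's conclusions describe: impact is largest around the changed class and diminishes progressively.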

Relevance:

40.00%

Publisher:

Abstract:

Changes are continuously made to the source code of software to meet client needs and to correct faults. Continuous change can introduce code and design defects. Design defects are poor solutions to recurring design or implementation problems, usually in object-oriented development. During comprehension and change activities, and because of time-to-market pressure, lack of understanding, and their level of experience, developers cannot always follow design standards and coding techniques such as design patterns. Consequently, they introduce design defects into their systems. In the literature, several authors have argued that design defects make object-oriented systems harder to understand, more fault-prone, and harder to change than systems without design defects. Yet only a few of these authors have empirically studied the impact of design defects on comprehension, and none of them has studied the impact of design defects on developers' effort to correct faults. In this thesis, we propose three main contributions. The first contribution is an empirical study providing evidence of the impact of design defects on comprehension and change. We design and conduct two experiments with 59 subjects to evaluate the impact of the combination of two occurrences of Blob or two occurrences of spaghetti code on the performance of developers carrying out comprehension and change tasks. We measure developer performance using: (1) the NASA task load index for their effort, (2) the time they spent completing their tasks, and (3) their percentages of correct answers.
The results of the two experiments showed that two occurrences of Blob or spaghetti code are a significant obstacle to developer performance during comprehension and change tasks. The results obtained justify earlier research on the specification and detection of design defects. Software development teams should warn developers against high numbers of design defect occurrences and recommend refactorings at each step of the development process to remove these defects when possible. In the second contribution, we study the relation between design defects and faults, specifically the impact of the presence of design defects on the effort required to correct faults. We measure fault-correction effort using three indicators: (1) the duration of the correction period, (2) the number of fields and methods touched by the fault correction, and (3) the entropy of fault corrections in the source code. We conduct an empirical study with 12 design defects detected in 54 versions of four systems: ArgoUML, Eclipse, Mylyn, and Rhino. Our results showed that the correction period is longer for faults involving classes with design defects. Moreover, correcting faults in classes with design defects changes more files, fields, and methods. We also observed that, after a fault is corrected, the number of design defect occurrences in the classes involved in the correction decreases.
Understanding the impact of design defects on developers' fault-correction effort is important to help development teams better evaluate and predict the impact of their design decisions, and thus channel their efforts into improving the quality of their systems. Development teams should monitor and remove design defects from their systems, because these defects are likely to increase change effort. The third contribution concerns the detection of design defects. During maintenance activities, it is important to have a tool able to detect design defects incrementally and iteratively. Such an incremental and iterative detection process could reduce costs, effort, and resources by allowing practitioners to identify and handle design defect occurrences as they find them during comprehension and change. Researchers have proposed approaches to detect design defect occurrences, but these approaches currently have four limitations: (1) they require in-depth knowledge of design defects, (2) they have limited precision and recall, (3) they are not iterative and incremental, and (4) they cannot be applied to subsets of systems. To overcome these limitations, we introduce SMURF, a new approach to detect design defects, based on a machine-learning technique (support vector machines) and taking practitioners' feedback into account. Through an empirical study of three systems and four design defects, we showed that the precision and recall of SMURF are higher than those of DETEX and BDTEX when detecting design defect occurrences.
We also showed that SMURF can be applied in both intra-system and inter-system configurations. Finally, we showed that the precision and recall of SMURF improve when practitioners' feedback is taken into account.
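The SVM-based detection step that SMURF builds on can be sketched with a linear SVM trained by Pegasos-style stochastic sub-gradient descent on class-level code metrics, labelling classes as defect-like (+1) or clean (-1). The metrics, labels, scaling, and hyper-parameters below are invented; SMURF's actual feature set, kernel, and feedback loop are described in the thesis.

```python
# A hedged sketch of linear-SVM defect classification on code metrics.
# Features (methods/10, kLOC), labels and hyper-parameters are invented.
import random

def train_linear_svm(xs, ys, lam=0.01, epochs=300, seed=0):
    """Pegasos-style sub-gradient descent on hinge loss + L2 penalty."""
    rng = random.Random(seed)
    w = [0.0] * len(xs[0])
    b = 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(xs)), len(xs)):
            t += 1
            eta = 1.0 / (lam * t)
            margin = ys[i] * (sum(wj * xj for wj, xj in zip(w, xs[i])) + b)
            w = [(1.0 - eta * lam) * wj for wj in w]   # L2 shrinkage
            if margin < 1:                             # hinge-loss step
                w = [wj + eta * ys[i] * xj for wj, xj in zip(w, xs[i])]
                b += eta * ys[i]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Scaled features per class: (methods / 10, kLOC); +1 = Blob-like.
xs = [(4.0, 1.2), (3.5, 0.9), (5.0, 1.5), (0.5, 0.1), (0.8, 0.05), (0.3, 0.08)]
ys = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(xs, ys)
print([predict(w, b, x) for x in xs])
```

Taking practitioner feedback into account, as SMURF does, would amount to adding the corrected labels to `xs`/`ys` and retraining, which is what makes the process incremental.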

Relevance:

40.00%

Publisher:

Abstract:

Most adaptive linearization circuits for nonlinear amplifiers have a feedback loop that returns the output signal of the amplifier to the linearizer. The loop delay of the linearizer must be controlled precisely so that the convergence of the linearizer is assured. In this Letter a delay control circuit is presented. It is a delay lock loop (DLL) with a modified early-late gate and can be easily applied to a DSP implementation. The proposed DLL circuit is applied to an adaptive linearizer that uses a polynomial predistorter, and a simulation for a 16-QAM signal is performed. The simulation results show that the proposed DLL eliminates the delay between the reference input signal and the delayed feedback signal of the linearizing circuit perfectly, so that the predistorter polynomial coefficients converge to their optimum values and a high degree of linearization is achieved.
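The early-late gate idea behind such a delay lock loop can be illustrated at baseband with integer sample lags: the correlation between the reference and the fed-back signal is evaluated one sample early, on time, and one sample late, and the delay estimate is stepped until the on-time correlation dominates. The signals, the binary symbols, and the integer-step update below are invented simplifications; the Letter's DLL tracks fractional delay on 16-QAM samples inside a DSP.

```python
# An illustrative sketch of early-late delay tracking. The signals and
# integer-lag update rule are assumptions, not the Letter's DLL.

def xcorr_at(ref, fb, lag):
    """Correlation of ref with the feedback signal delayed by `lag`."""
    n = min(len(ref), len(fb) - lag)
    return sum(ref[i] * fb[i + lag] for i in range(max(n, 0)))

def early_late_track(ref, fb, lag0, steps=20):
    """Step the delay estimate until the on-time correlation beats
    both the early and the late correlation (lock)."""
    lag = lag0
    for _ in range(steps):
        early = xcorr_at(ref, fb, lag - 1)
        on_time = xcorr_at(ref, fb, lag)
        late = xcorr_at(ref, fb, lag + 1)
        if early > on_time and early >= late:
            lag -= 1
        elif late > on_time:
            lag += 1
        else:
            break          # on-time correlation dominates: locked
    return lag

ref = [1, -1, 1, 1, -1, -1, 1, -1, 1, 1]
true_delay = 3
fb = [0] * true_delay + ref      # feedback arrives 3 samples late
print(early_late_track(ref, fb, lag0=2))  # 3
```

Once the estimated lag matches the loop delay, the reference and feedback samples are aligned, which is the precondition for the predistorter coefficients to converge.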

Relevance:

40.00%

Publisher:

Abstract:

This thesis investigates the potential use of zero-crossing information for speech sample estimation. It provides a new method to estimate speech samples using composite zero-crossings. A simple linear interpolation technique is developed for this purpose. By using this method, the A/D converter can be avoided in a speech coder. The newly proposed zero-crossing sampling theory is supported by results of computer simulations using real speech data. The thesis also presents two methods for voiced/unvoiced classification. One of these methods is based on a distance measure which is a function of the short-time zero-crossing rate and short-time energy of the signal. The other is based on the attractor dimension and entropy of the signal. Of these two methods, the first is simple and requires only very few computations compared to the other. This method is used in a later chapter to design an enhanced Adaptive Transform Coder. The later part of the thesis addresses a few problems in Adaptive Transform Coding and presents an improved ATC. The transform coefficient with maximum amplitude is considered as 'side information'; this enables more accurate bit assignment and step-size computation. A new bit reassignment scheme is also introduced in this work. Finally, an ATC which applies switching between the Discrete Cosine Transform and the Discrete Walsh-Hadamard Transform for voiced and unvoiced speech segments, respectively, is presented. Simulation results are provided to show the improved performance of the coder.
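The first voiced/unvoiced classifier described above can be sketched as a frame-level decision on the short-time zero-crossing rate and short-time energy: voiced speech tends to have high energy and a low zero-crossing rate, unvoiced speech the opposite. The thresholds and the two synthetic frames below are illustrative assumptions, not the thesis's distance measure or its real speech data.

```python
# A small sketch of ZCR + short-time-energy voiced/unvoiced
# classification. Thresholds and frames are invented for illustration.
import math

def zero_crossing_rate(frame):
    crossings = sum(
        1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(frame) - 1)

def short_time_energy(frame):
    return sum(x * x for x in frame) / len(frame)

def classify(frame, zcr_thresh=0.25, energy_thresh=0.05):
    """Voiced speech: high energy, low ZCR. Unvoiced: the opposite."""
    if short_time_energy(frame) > energy_thresh and \
            zero_crossing_rate(frame) < zcr_thresh:
        return "voiced"
    return "unvoiced"

# Synthetic frames: a strong low-frequency sine (voiced-like) and a
# weak, rapidly alternating signal (unvoiced-like).
voiced = [math.sin(2 * math.pi * 3 * i / 200) for i in range(200)]
unvoiced = [0.05 * (-1) ** i for i in range(200)]
print(classify(voiced), classify(unvoiced))  # voiced unvoiced
```

The thesis combines the two features into a single distance measure rather than thresholding them separately, but the underlying discriminative cues are the same.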

Relevance:

40.00%

Publisher:

Abstract:

Abstract taken from the publication.

Relevance:

40.00%

Publisher:

Abstract:

This paper presents a first approach to an Evaluation Engine Architecture (EEA), proposed to support adaptive integral assessment in the context of a virtual learning environment. The goal of our research is to design an evaluation engine tool to assist in the whole assessment process within the A2UN@ project, linking that tool with the other key elements of a learning design (learning tasks, learning resources and learning support). Teachers would define the relations between knowledge, competencies, activities, resources and types of assessment. Given these relations, it is possible to obtain more accurate estimations of a student's knowledge for adaptive evaluations and future recommendations. The process is supported by the use of educational standards and specifications and by integral user modelling.