928 results for Robotics Computer simulation


Relevance:

80.00%

Publisher:

Abstract:

We propose a new approach to the mathematical modelling of microbial growth. Our approach differs from familiar Monod-type models by considering two phases in the physiological states of the microorganisms and makes use of basic relations from enzyme kinetics. Such an approach may be useful in the modelling and control of biotechnological processes in which microorganisms are used for various biodegradation purposes and are often put under extreme inhibitory conditions. Some computational experiments are performed in support of our modelling approach.
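
As a concrete illustration of the kind of model described above, the sketch below integrates a two-phase growth system in which biomass switches between an active and a lag physiological state and substrate uptake follows a Michaelis-Menten (Monod-type) rate law. All parameter values, and the specific two-state split, are illustrative assumptions rather than the authors' actual equations.

```python
# Minimal sketch of a two-phase microbial growth model (illustrative only).
from scipy.integrate import solve_ivp

mu_max, K_s = 0.5, 2.0      # Monod / Michaelis-Menten constants (assumed)
k_act, k_deact = 0.3, 0.05  # transition rates between the two phases (assumed)
Y = 0.4                     # biomass yield on substrate (assumed)

def rhs(t, y):
    s, x_active, x_lag = y                 # substrate, active and lag biomass
    uptake = mu_max * s / (K_s + s)        # enzyme-kinetics rate form
    growth = uptake * x_active
    return [-growth / Y,                                   # substrate consumed
            growth + k_act * x_lag - k_deact * x_active,   # active phase
            k_deact * x_active - k_act * x_lag]            # lag phase

sol = solve_ivp(rhs, (0.0, 48.0), [10.0, 0.05, 0.05])
print(sol.y[:, -1])  # final substrate, active biomass, lag biomass
```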

Relevance:

80.00%

Publisher:

Abstract:

In this paper, we demonstrate through computer simulation and experiment a novel subcarrier coding scheme combined with pre-electrical dispersion compensation (pre-EDC) for fiber nonlinearity mitigation in coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. As the frequency spacing in CO-OFDM systems is usually small (tens of MHz), neighbouring subcarriers tend to experience correlated nonlinear distortions after propagation over a fiber link. As a consequence, nonlinearity mitigation can be achieved by encoding and processing neighbouring OFDM subcarriers simultaneously. Herein, we propose to adopt the concept of dual phase-conjugated twin waves for CO-OFDM transmission. Simulation and experimental results show that this simple technique, combined with 50% pre-EDC, can effectively offer up to 1.5 and 0.8 dB performance gains in CO-OFDM systems with BPSK and QPSK modulation formats, respectively.
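
The following toy sketch illustrates the core idea of phase-conjugated twin waves on neighbouring subcarriers: each symbol is transmitted together with its complex conjugate, and the receiver averages one with the conjugate of the other so that a nonlinear phase rotation common to the pair cancels to first order. The channel model here (a shared random phase plus independent additive noise) is a deliberate simplification of fiber propagation, not the paper's transmission model.

```python
# Toy illustration of the phase-conjugated twin-wave idea (not the paper's model).
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 1000
data = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], n_pairs) / np.sqrt(2)  # QPSK

tx_a = data            # original subcarrier
tx_b = np.conj(data)   # phase-conjugated twin on the neighbouring subcarrier

# Neighbouring subcarriers see nearly the same nonlinear phase rotation:
phi = 0.2 * rng.standard_normal(n_pairs)   # correlated nonlinear phase
noise = 0.05 * (rng.standard_normal((2, n_pairs))
                + 1j * rng.standard_normal((2, n_pairs)))
rx_a = tx_a * np.exp(1j * phi) + noise[0]
rx_b = tx_b * np.exp(1j * phi) + noise[1]

# Averaging rx_a with conj(rx_b) gives data * cos(phi): the common phase
# rotation cancels to first order, leaving only a small amplitude penalty.
recovered = 0.5 * (rx_a + np.conj(rx_b))
print(np.mean(np.abs(recovered - data) ** 2),   # combined-twin error
      np.mean(np.abs(rx_a - data) ** 2))        # vs. single-carrier error
```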

Relevance:

80.00%

Publisher:

Abstract:

This paper presents the European Leonardo da Vinci Transfer of Innovation project “Teacher training to improve attractiveness and quality of management education through the simulation tool ‘Emerald Forest’”, which emphasizes using a computer simulation tool to increase the attractiveness of teaching and learning in economics. An overview of the use of computer systems, and of serious games in particular, in education is also provided. “Education is not the filling of a pail, but the lighting of a fire” - William Butler Yeats

Relevance:

80.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 62F15.

Relevance:

80.00%

Publisher:

Abstract:

The peculiarities of Roman architecture, town planning, and landscape architecture are visible in many of the empire's remaining cities. However, an evaluation of the landscape and an analysis of the urban fabric, spatial composition, and the concepts and characteristics of its open spaces are missing for Jerash (Gerasa in antiquity) in Jordan. Those missing elements will be discussed in this work, as an example of an urban arrangement that survived through different civilizations in history.

To address the characteristics of the exterior spaces in Jerash, a study of the major concepts of planning in Classical Antiquity will be conducted, followed by a comparative analysis of the quality of space and architectural composition in Jerash. Through intensive investigation of the data available for the area under study, the historical method used in this paper illustrates the uniqueness of the site's urban morphology and architectural disposition.

An analysis will be performed to compare the design composition of the landscape, urban fabric, and open space of Jerash as a provincial Roman city with its existing excavated remains. Such an analysis will provide new information about the roles these factors and their relationships played in determining the design layout of the city. Information such as the relationship between void and solid, the shaping of space, the ground and ceiling planes, the composition of city elements, the ancient landscapes, and the relationship between the land and architecture will be acquired.

A computer simulation of a portion of the city will be developed to enable researchers, students, and citizens interested in Jordan's past to visualize more clearly what the city looked like in its prime. Such a simulation could contribute to the revival of the old city of Jerash and help promote its tourism.

Relevance:

80.00%

Publisher:

Abstract:

Clusters are aggregations of atoms or molecules, generally intermediate in size between individual atoms and aggregates that are large enough to be called bulk matter. Clusters can also be called nanoparticles, because their size is on the order of nanometers or tens of nanometers. A new field called nanostructured materials has begun to take shape, which takes advantage of these atom clusters. The ultra-small size of the building blocks leads to dramatically different properties, and it is anticipated that such atomically engineered materials will be able to be tailored to perform as no previous material could.

The idea of the ionized cluster beam (ICB) thin film deposition technique was first proposed by Takagi in 1972. It was based upon using a supersonic jet source to produce, ionize, and accelerate beams of atomic clusters onto substrates in a vacuum environment. Conditions for the formation of cluster beams suitable for thin film deposition have only recently been established, following twenty years of effort. Zinc clusters over 1,000 atoms in average size have been synthesized both in our lab and in that of Gspann. More recently, other methods of synthesizing clusters and nanoparticles, using different types of cluster sources, have come under development.

In this work, we studied different aspects of nanoparticle beams. The work includes refinement of a model of the cluster formation mechanism, development of a new real-time, in situ cluster size measurement method, and study of the use of ICB in the fabrication of semiconductor devices.

The formation process of the vaporized-metal cluster beam was simulated and investigated using classical nucleation theory and one-dimensional gas flow equations. Zinc cluster sizes predicted at the nozzle exit are in good quantitative agreement with experimental results in our laboratory.

A novel in situ real-time mass, energy, and velocity measurement apparatus has been designed, built, and tested. This small time-of-flight mass spectrometer is suitable for use in our cluster deposition systems and does not suffer from problems that affect other methods of cluster size measurement, such as the requirement for specialized ionizing lasers, inductive electrical or electromagnetic coupling, dependency on the assumption of homogeneous nucleation, limits on the measurable size, and the lack of real-time capability. Ion energies measured using the electrostatic energy analyzer are in good accordance with values obtained from computer simulation. The velocity v is measured by pulsing the cluster beam and measuring the time delay between the pulse and the analyzer output current. The mass of a particle is then calculated from m = 2E/v². The error in the measured value of the background gas mass is on the order of 28% of the mass of one N₂ molecule, which is negligible for the measurement of large clusters. This resolution in cluster size measurement is very acceptable for our purposes.

Selective area deposition onto conducting patterns overlying insulating substrates was demonstrated using intense, fully-ionized cluster beams. Parameters influencing the selectivity are ion energy, repelling voltage, the ratio of the conductor to insulator dimension, and substrate thickness.
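
A short worked example of the mass determination quoted above, m = 2E/v², with the velocity obtained from the beam-pulsing time of flight; the energy, drift distance, and delay values are assumed for illustration only.

```python
# Worked example of cluster mass from energy and time-of-flight velocity.
# All input numbers are illustrative, not measurements from the apparatus.
J_PER_EV = 1.602e-19       # J per eV
AMU = 1.661e-27            # kg per atomic mass unit

E_eV = 2.0e5               # ion energy from the electrostatic analyzer (assumed)
flight_path_m = 0.5        # drift distance from pulse point to detector (assumed)
delay_s = 1.0e-5           # measured time of flight (assumed)

v = flight_path_m / delay_s          # m/s
m = 2.0 * E_eV * J_PER_EV / v ** 2   # kg, from E = (1/2) m v^2
print(f"cluster mass ~ {m / AMU:.0f} amu "
      f"~ {m / (65.4 * AMU):.0f} zinc atoms")  # Zn is about 65.4 amu
```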

Relevance:

80.00%

Publisher:

Abstract:

The physics of self-organization and complexity is manifested on a variety of biological scales, from large ecosystems to the molecular level. Protein molecules exhibit characteristics of complex systems in terms of their structure, dynamics, and function. Proteins have the extraordinary ability to fold to a specific functional three-dimensional shape, starting from a random coil, in a biologically relevant time. How they accomplish this is one of the secrets of life. In this work, theoretical research into understanding this remarkable behavior is discussed. Thermodynamic and statistical mechanical tools are used in order to investigate the protein folding dynamics and stability. Theoretical analyses of the results from computer simulation of the dynamics of a four-helix bundle show that the excluded volume entropic effects are very important in protein dynamics and crucial for protein stability. The dramatic effects of changing the size of sidechains imply that a strategic placement of amino acid residues with a particular size may be an important consideration in protein engineering. Another investigation deals with modeling protein structural transitions as a phase transition. Using finite size scaling theory, the nature of unfolding transition of a four-helix bundle protein was investigated and critical exponents for the transition were calculated for various hydrophobic strengths in the core. It is found that the order of the transition changes from first to higher order as the strength of the hydrophobic interaction in the core region is significantly increased. Finally, a detailed kinetic and thermodynamic analysis was carried out in a model two-helix bundle. The connection between the structural free-energy landscape and folding kinetics was quantified. I show how simple protein engineering, by changing the hydropathy of a small number of amino acids, can enhance protein folding by significantly changing the free energy landscape so that kinetic traps are removed. The results have general applicability in protein engineering as well as understanding the underlying physical mechanisms of protein folding.
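
For readers unfamiliar with the finite size scaling analysis mentioned above, the sketch below shows the generic data-collapse procedure: order-parameter curves for several system sizes are rescaled with trial values of the critical temperature and exponents, and the quality of the collapse is scored. The data and exponent values are synthetic placeholders, not results from the four-helix bundle study.

```python
# Generic finite-size scaling data collapse (synthetic data, trial exponents).
import numpy as np

def collapse_quality(T, Q, L, Tc, nu, beta):
    """Rescale Q(T; L) for each size and return the spread between sizes."""
    xs = [(T - Tc) * L_i ** (1.0 / nu) for L_i in L]
    ys = [Q[i] * L[i] ** (beta / nu) for i in range(len(L))]
    grid = np.linspace(-1, 1, 50)
    curves = [np.interp(grid, x, y) for x, y in zip(xs, ys)]
    return np.mean(np.var(curves, axis=0))   # small value = good collapse

# Synthetic data obeying Q = L^(-beta/nu) * f((T - Tc) * L^(1/nu)):
L = [16, 32, 64]
Tc, nu, beta = 1.0, 1.0, 0.125
T = np.linspace(0.8, 1.2, 200)
Q = [L_i ** (-beta / nu) / (1 + np.exp((T - Tc) * L_i ** (1 / nu))) for L_i in L]

print(collapse_quality(T, Q, L, Tc=1.0, nu=1.0, beta=0.125))  # near zero
print(collapse_quality(T, Q, L, Tc=0.9, nu=0.8, beta=0.2))    # visibly worse
```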

Relevance:

80.00%

Publisher:

Abstract:

This research is motivated by the need to consider lot sizing while accepting customer orders in a make-to-order (MTO) environment, in which each customer order must be delivered by its due date. The job shop is the typical operation model used in an MTO operation, where the production planner must make three concurrent decisions: order selection, lot sizing, and job scheduling. These decisions are usually treated separately in the literature and are mostly addressed with heuristic solutions. The first phase of the study is focused on a formal definition of the problem. Mathematical programming techniques are applied to model the problem in terms of its objective, decision variables, and constraints. A commercial solver, CPLEX, is applied to solve the resulting mixed-integer linear programming model on small instances to validate the mathematical formulation. The computational results show that solving problems of industrial size with a commercial solver is not practical. The second phase of this study is focused on the development of an effective solution approach to the large-scale problem. The proposed solution approach is an iterative process involving three sequential decision steps: order selection, lot sizing, and lot scheduling. A range of simple sequencing rules is identified for each of the three subproblems. Using computer simulation as the tool, an experiment is designed to evaluate their performance against a set of system parameters. For order selection, the proposed weighted-most-profit rule performs best. The shifting-bottleneck and earliest-operation-finish-time rules are both the best scheduling rules. For lot sizing, the proposed minimum-cost-increase heuristic, based on the Dixon-Silver method, performs best when the demand-to-capacity ratio at the bottleneck machine is high, while the proposed minimum-cost heuristic, based on the Wagner-Whitin algorithm, is the best lot-sizing heuristic for shops with a low demand-to-capacity ratio. The proposed heuristic is applied to an industrial case to further evaluate its performance. The results show that it improves total profit by an average of 16.62%. This research contributes to the production planning research community a complete mathematical definition of the problem and an effective solution approach for solving it at industry scale.
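
To make the problem structure concrete, here is a minimal mixed-integer sketch of joint order selection and lot sizing, solvable with the open-source CBC solver through PuLP. The data, cost structure, and single-machine capacity model are hypothetical simplifications of the dissertation's job-shop formulation.

```python
# Minimal order-selection + lot-sizing MILP sketch (hypothetical data).
import pulp

orders = {  # order: (profit, demand, due_period)  -- hypothetical data
    "A": (100, 30, 1), "B": (150, 50, 2), "C": (90, 40, 2),
}
periods = [1, 2]
capacity = 70          # units producible per period (assumed)
setup_cost = 25.0      # fixed cost when a lot is produced in a period (assumed)

m = pulp.LpProblem("order_selection_lot_sizing", pulp.LpMaximize)
accept = pulp.LpVariable.dicts("accept", orders, cat="Binary")
setup = pulp.LpVariable.dicts("setup", periods, cat="Binary")
lot = pulp.LpVariable.dicts("lot", periods, lowBound=0)

# Objective: profit of accepted orders minus setup costs.
m += (pulp.lpSum(orders[o][0] * accept[o] for o in orders)
      - pulp.lpSum(setup_cost * setup[t] for t in periods))

for t in periods:
    m += lot[t] <= capacity * setup[t]   # capacity, linked to the setup decision
    # Cumulative production must cover accepted demand due by period t:
    m += (pulp.lpSum(lot[u] for u in periods if u <= t)
          >= pulp.lpSum(orders[o][1] * accept[o]
                        for o in orders if orders[o][2] <= t))

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({o: int(accept[o].value()) for o in orders}, pulp.value(m.objective))
```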

Relevance:

80.00%

Publisher:

Abstract:

There is a growing societal need to address the increasing prevalence of behavioral health issues, such as obesity, alcohol or drug use, and general lack of treatment adherence for a variety of health problems. The statistics, worldwide and in the USA, are daunting. Excessive alcohol use is the third leading preventable cause of death in the United States (with 79,000 deaths annually), and is responsible for a wide range of health and social problems. On the positive side, though, these behavioral health issues (and associated possible diseases) can often be prevented with relatively simple lifestyle changes, such as losing weight through diet and/or physical exercise, or learning how to reduce alcohol consumption. Medicine has therefore started to move toward finding ways of preventively promoting wellness, rather than solely treating already established illness. Evidence-based patient-centered Brief Motivational Interviewing (BMI) interventions have been found particularly effective in helping people find intrinsic motivation to change problem behaviors after short counseling sessions, and to maintain healthy lifestyles over the long term. A lack of locally available personnel well-trained in BMI, however, often limits access to successful interventions for people in need. To fill this accessibility gap, Computer-Based Interventions (CBIs) have started to emerge. The success of CBIs, however, critically relies on ensuring the engagement and retention of CBI users so that they remain motivated to use these systems and come back to them over the long term as necessary. Because of their text-only interfaces, current CBIs can express only limited empathy and rapport, which are among the most important factors of successful health interventions. Fortunately, in the last decade, computer science research has progressed in the design of simulated human characters with anthropomorphic communicative abilities. Virtual characters interact using humans’ innate communication modalities, such as facial expressions, body language, speech, and natural language understanding. By advancing research in Artificial Intelligence (AI), we can improve the ability of artificial agents to help us solve CBI problems. To facilitate successful communication and social interaction between artificial agents and human partners, it is essential that aspects of human social behavior, especially empathy and rapport, be considered when designing human-computer interfaces. Hence, the goal of the present dissertation is to provide a computational model of rapport to enhance an artificial agent’s social behavior, and to provide an experimental tool for the psychological theories shaping the model. Parts of this thesis were already published in [LYL+12, AYL12, AL13, ALYR13, LAYR13, YALR13, ALY14].

Relevance:

80.00%

Publisher:

Abstract:

The purpose of this research is to analyze different daylighting systems in schools in the city of Natal/RN, Brazil. Although daylight is abundantly available locally, architectural recommendations relating sky conditions, the dimensions of daylighting systems, shading, the fraction of sky visibility, required illuminance, glare, occupation period, and the depth of the lit area are scarce and diffuse. This research explores different selected aperture systems to assess the potential of natural light for each system. The method is divided into three phases. The first phase is modeling, which involves the construction of a three-dimensional model of a classroom in Sketchup 2014, following recommendations presented in the literature for achieving good environmental comfort in school settings. The second phase is the dynamic computer simulation of daylight performance using the Daysim software. The input data are the 2009 climate file for the city of Natal/RN, the classroom volume in 3ds format with the optical properties assigned to each surface, the sensor mapping file, and the user load file. The results produced by the simulation are organized in a spreadsheet prepared by Carvalho (2014) to determine the occurrence of useful daylight illuminance (UDI) in the range of 300 to 3000 lux, and to build illuminance curves and UDI contour plots that identify the uniformity of light distribution, compliance with the minimum illuminance level, and the occurrence of glare.
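
The UDI tally itself is straightforward once the simulated illuminances are available. The sketch below classifies synthetic hourly sensor illuminances against the 300-3000 lux band; the random array stands in for Daysim's .ill output, whose parsing is omitted, and this is not the Carvalho (2014) spreadsheet itself.

```python
# Minimal useful daylight illuminance (UDI) tally over a sensor grid.
import numpy as np

rng = np.random.default_rng(1)
hours, sensors = 3650, 25                      # occupied hours x sensor grid (assumed)
lux = rng.lognormal(mean=6.0, sigma=0.8, size=(hours, sensors))  # stand-in data

udi_low = (lux < 300).mean(axis=0) * 100       # % of hours underlit
udi_useful = ((lux >= 300) & (lux <= 3000)).mean(axis=0) * 100
udi_high = (lux > 3000).mean(axis=0) * 100     # % of hours with glare risk

print(f"UDI 300-3000 lux, worst sensor: {udi_useful.min():.1f}%")
print(f"UDI 300-3000 lux, mean over grid: {udi_useful.mean():.1f}%")
```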

Relevance:

80.00%

Publisher:

Abstract:

The shoulder is the most mobile and most unstable joint of the human body, owing to the small number of bony constraints and to the role of the soft tissues, which give it at least ten or so degrees of freedom. Shoulder mobility is a performance factor in several sports, but its instability causes musculoskeletal disorders, among which rotator cuff tears are frequent and the most disabling. Joint range of motion is a common index of shoulder function; however, its assessment is often limited to a few planar measurements in which the degrees of freedom vary independently of one another. When these values are used in musculoskeletal simulation models, they can lead to non-physiological solutions. The objective of this thesis was to develop tools for characterizing the three-dimensional joint mobility of the shoulder, by i) providing a method, and its experimental approach, for assessing the three-dimensional range of motion of the shoulder, including the interactions between degrees of freedom; ii) proposing a representation for interpreting the three-dimensional data obtained; iii) presenting normalized ranges of motion; iv) implementing a three-dimensional range of motion in a numerical simulation model in order to generate more realistic optimal sports movements; v) predicting safe ranges of motion; and vi) designing safe rehabilitation exercises for patients who have undergone rotator cuff repair.

i) Sixteen subjects performed series of active maximal-amplitude movements combining the shoulder's different degrees of freedom. A motion analysis system coupled with a kinematic model of the upper limb was used to estimate the three-dimensional joint kinematics. ii) The set of orientations, each defined by a sequence of three angles, was enclosed in a non-convex polyhedron representing the joint mobility space while accounting for the interactions between degrees of freedom. The combination of elevation and rotation series is recommended for assessing the complete range of motion of the shoulder. iii) A normalized mobility space was also defined, encompassing the positions reached by at least 50% of the subjects and having the average volume. iv) This average space, defining physiological mobility, was used in a kinematic simulation model to optimize the technique of an acrobatic bar-release element performed by gymnasts. With the planar joint limits commonly used to constrain shoulder mobility, only 17% of the optimal solutions are physiological. Besides ensuring the realism of the solutions, our three-dimensional joint constraint did not increase the computational cost of the optimization. v) and vi) The sixteen participants also performed series of passive range-of-motion movements and passive rehabilitation exercises. The stress in each of the rotator cuff muscles during these movements was estimated using a musculoskeletal model reproducing different types and sizes of tears. Safe stress thresholds were used to distinguish the ranges of motion that did or did not put the integrity of the surgical repair at risk.

Larger tears, as well as tears affecting several muscles, reduced the safe joint mobility space. In particular, glenohumeral elevations below 38° or above 65°, or elevations performed with the arm held in internal rotation, generated excessive stresses for most injury types and sizes during abduction, scaption, or flexion movements. This thesis developed an innovative representation of shoulder mobility that accounts for the interactions between degrees of freedom. With this representation, clinical assessment can be more comprehensive and can thus broaden the possibilities for diagnosing shoulder disorders. Movement simulation can now be more realistic. Finally, we showed the importance of personalizing patients' rehabilitation in terms of range of motion, since early passive rehabilitation exercises can contribute to re-tearing because of the excessive stress they impose on the tendons.
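
As a rough illustration of how a measured mobility space can be queried, the sketch below encloses synthetic (plane-of-elevation, elevation, rotation) triplets in a convex hull and tests whether a candidate orientation falls inside. Note that the thesis uses a non-convex polyhedron, so a convex hull is a simplification that overestimates the admissible space; all angle data here are invented.

```python
# Toy containment test for a shoulder orientation mobility space.
# Simplification: convex hull instead of the thesis's non-convex polyhedron.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(2)
# Stand-in for orientations recorded during maximal-amplitude trials (degrees):
measured = rng.uniform([-60, 10, -80], [120, 160, 60], size=(500, 3))

hull = Delaunay(measured)  # triangulation of the sampled mobility space

def is_physiological(orientation):
    """True if the 3-angle orientation falls inside the mobility space."""
    return hull.find_simplex(np.asarray(orientation)) >= 0

print(is_physiological([30.0, 90.0, 0.0]))    # inside the sampled ranges
print(is_physiological([200.0, 90.0, 0.0]))   # outside -> non-physiological
```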

Relevance:

80.00%

Publisher:

Abstract:

The goal of modern radiotherapy is to precisely deliver a prescribed radiation dose to delineated target volumes that contain a significant amount of tumor cells while sparing the surrounding healthy tissues/organs. Precise delineation of treatment and avoidance volumes is the key to precision radiation therapy. In recent years, considerable clinical and research efforts have been devoted to integrating MRI into the radiotherapy workflow, motivated by its superior soft tissue contrast and functional imaging possibilities. Dynamic contrast-enhanced MRI (DCE-MRI) is a noninvasive technique that measures properties of tissue microvasculature. Its sensitivity to radiation-induced vascular pharmacokinetic (PK) changes has been preliminarily demonstrated. In spite of its great potential, two major challenges have limited DCE-MRI’s clinical application in radiotherapy assessment: the technical limitations of accurate DCE-MRI imaging implementation and the need for novel DCE-MRI data analysis methods that provide richer functional heterogeneity information.

This study aims at improving current DCE-MRI techniques and developing new DCE-MRI analysis methods for radiotherapy assessment, and is therefore naturally divided into two parts. The first part focuses on DCE-MRI temporal resolution as one of the key technical factors and proposes several improvements; the second part explores the potential value of image heterogeneity analysis and the combination of multiple PK models for therapeutic response assessment, and develops several novel DCE-MRI data analysis methods.

I. Improvement of DCE-MRI temporal resolution. First, the feasibility of improving DCE-MRI temporal resolution via image undersampling was studied. Specifically, a novel MR image iterative reconstruction algorithm was studied for DCE-MRI reconstruction. This algorithm was built on the recently developed compressed sensing (CS) theory. By utilizing a limited k-space acquisition with shorter imaging time, images can be reconstructed in an iterative fashion under the regularization of a newly proposed total generalized variation (TGV) penalty term. In a retrospective study of brain radiosurgery patient DCE-MRI scans under IRB approval, the clinically obtained image data were selected as reference data, and the simulated accelerated k-space acquisition was generated by undersampling the full k-space of the reference images with designed sampling grids. Two undersampling strategies were proposed: 1) a radial multi-ray grid with a special angular distribution was adopted to sample each slice of the full k-space; 2) a Cartesian random sampling grid series with spatiotemporal constraints from adjacent frames was adopted to sample the dynamic k-space series at a slice location. Two sets of PK parameter maps were generated, one from the undersampled data and one from the fully-sampled data. Multiple quantitative measurements and statistical studies were performed to evaluate the accuracy of the PK maps generated from the undersampled data with reference to those generated from the fully-sampled data. Results showed that at a simulated acceleration factor of four, PK maps could be faithfully calculated from the DCE images reconstructed using undersampled data, and no statistically significant differences were found between the regional PK mean values from the undersampled and fully-sampled data sets. DCE-MRI acceleration using the investigated image reconstruction method appears feasible and promising.
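
The two sampling strategies reduce, at their simplest, to binary k-space masks. The sketch below generates a radial multi-ray mask and a Cartesian random-line mask; matrix size, ray count, sampling fraction, and the fully-sampled center band are assumptions for illustration, and the CS/TGV reconstruction step itself is not shown.

```python
# Binary k-space undersampling masks: radial multi-ray and Cartesian random.
import numpy as np

def radial_mask(n, n_rays):
    """Binary k-space mask sampling n_rays radial lines through the center."""
    mask = np.zeros((n, n), dtype=bool)
    c = (n - 1) / 2.0
    for ang in np.linspace(0, np.pi, n_rays, endpoint=False):
        t = np.linspace(-c, c, 2 * n)                      # walk along the ray
        rows = np.clip(np.round(c + t * np.sin(ang)).astype(int), 0, n - 1)
        cols = np.clip(np.round(c + t * np.cos(ang)).astype(int), 0, n - 1)
        mask[rows, cols] = True
    return mask

def cartesian_random_mask(n, fraction, rng):
    """Randomly sampled phase-encode lines, always keeping central k-space."""
    mask = np.zeros((n, n), dtype=bool)
    keep = rng.choice(n, size=int(fraction * n), replace=False)
    mask[keep, :] = True
    mask[n // 2 - 4: n // 2 + 4, :] = True                 # fully sample center
    return mask

rng = np.random.default_rng(3)
m1, m2 = radial_mask(128, 32), cartesian_random_mask(128, 0.25, rng)
print(m1.mean(), m2.mean())   # achieved sampling fractions (roughly 1/4)
```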

Second, for high temporal resolution DCE-MRI, a new PK model fitting method was developed to solve for PK parameters with better accuracy and efficiency. This method is based on a derivative-based deformation of the commonly used Tofts PK model, which is presented as an integrative expression. The method also includes an advanced Kolmogorov-Zurbenko (KZ) filter to remove potential noise effects in the data, and solves for the PK parameters as a linear problem in matrix form. In a computer simulation study, PK parameters representing typical intracranial values were selected as references to simulate DCE-MRI data at different temporal resolutions and data noise levels. Results showed that at both high temporal resolutions (<1 s) and clinically feasible temporal resolution (~5 s), the new method calculated PK parameters more accurately than current methods at clinically relevant noise levels; at high temporal resolutions, its calculation efficiency was superior to current methods by roughly a factor of 10². In a retrospective study of clinical brain DCE-MRI scans, the PK maps derived from the proposed method were comparable with the results from current methods. Based on these results, it can be concluded that this new method enables accurate and efficient PK model fitting for high temporal resolution DCE-MRI.
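
A common way to obtain such a linear matrix-form fit is to integrate the Tofts equation dCt/dt = Ktrans·Cp − kep·Ct, giving Ct(t) = Ktrans·∫Cp − kep·∫Ct, which is linear in the two unknowns. The sketch below demonstrates this integral-form least-squares fit on synthetic data; it is in the same spirit as, but not necessarily identical to, the dissertation's formulation, and the KZ-filter denoising step is omitted.

```python
# Integral-form linear least-squares fit of the Tofts model (synthetic data).
import numpy as np

dt, n = 1.0, 300                             # 1 s frames, 5 min acquisition
t = np.arange(n) * dt
cp = 5.0 * (t / 60.0) * np.exp(-t / 60.0)    # toy arterial input function

ktrans_true, kep_true = 0.25 / 60, 0.5 / 60  # per second (illustrative values)
ct = np.zeros(n)
for i in range(1, n):                        # forward-Euler "ground truth"
    ct[i] = ct[i - 1] + dt * (ktrans_true * cp[i - 1] - kep_true * ct[i - 1])
ct_noisy = ct + 0.002 * np.random.default_rng(4).standard_normal(n)

# Linear system: Ct = Ktrans * cumint(Cp) - kep * cumint(Ct)
cum = lambda y: np.cumsum(y) * dt
A = np.column_stack([cum(cp), -cum(ct_noisy)])
ktrans_fit, kep_fit = np.linalg.lstsq(A, ct_noisy, rcond=None)[0]
print(ktrans_fit * 60, kep_fit * 60)         # per-minute, near 0.25 and 0.5
```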

II. Development of DCE-MRI analysis methods for therapeutic response assessment. This part pursues methodological developments along two approaches. The first is to develop model-free analysis methods for evaluating DCE-MRI functional heterogeneity. This approach is inspired by the rationale that radiotherapy-induced functional change can be heterogeneous across the treatment area. The first effort was a translational investigation of classic fractal dimension theory for DCE-MRI therapeutic response assessment. In a small-animal anti-angiogenesis drug therapy experiment, randomly assigned treatment/control groups received multiple-fraction treatments with one pre-treatment and multiple post-treatment high-spatiotemporal-resolution DCE-MRI scans. In the post-treatment scan two weeks after the start, the investigated Rényi dimensions of the classic PK rate constant map demonstrated significant differences between the treatment and control groups; when the Rényi dimensions were adopted for treatment/control group classification, the achieved accuracy was higher than that obtained using conventional PK parameter statistics. Following this pilot work, two novel texture analysis methods were proposed. First, a new technique called the Gray Level Local Power Matrix (GLLPM) was developed. It addresses the lack of temporal information and the poor calculation efficiency of the commonly used Gray Level Co-Occurrence Matrix (GLCOM) techniques. In the same small-animal experiment, the dynamic curves of Haralick texture features derived from the GLLPM had an overall better performance than the corresponding curves derived from current GLCOM techniques in treatment/control separation and classification. The second developed method is dynamic Fractal Signature Dissimilarity (FSD) analysis. Inspired by classic fractal dimension theory, this method quantitatively measures the dynamics of tumor heterogeneity during contrast agent uptake on DCE images. In the small-animal experiment mentioned above, selected parameters from the dynamic FSD analysis showed significant differences between treatment and control groups as early as after one treatment fraction; in contrast, metrics from conventional PK analysis showed significant differences only after three treatment fractions. Using the dynamic FSD parameters, treatment/control classification after the first treatment fraction was better than with conventional PK statistics. These results suggest that this novel method is promising for capturing early therapeutic response.
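
For reference, a Rényi (generalized) dimension of a map can be estimated by box counting: partition the normalized map at several box sizes and regress the q-dependent log partition sums against log box size. The sketch below applies this classic estimator to a random stand-in for a PK rate-constant map; it illustrates the Rényi dimension only, not the GLLPM or FSD methods.

```python
# Box-counting estimate of Renyi dimensions D_q for a 2-D nonnegative map.
import numpy as np

def renyi_dimension(img, q, sizes=(2, 4, 8, 16, 32)):
    """Estimate D_q of a nonnegative 2-D map via box counting."""
    img = img / img.sum()                        # normalize to a measure
    logs, loge = [], []
    for s in sizes:
        n = (img.shape[0] // s) * s
        boxes = img[:n, :n].reshape(n // s, s, n // s, s).sum(axis=(1, 3))
        p = boxes[boxes > 0].ravel()             # box masses
        if q == 1:                               # information-dimension limit
            logs.append((p * np.log(p)).sum())
        else:
            logs.append(np.log((p ** q).sum()) / (q - 1))
        loge.append(np.log(s / img.shape[0]))    # box size relative to map
    return np.polyfit(loge, logs, 1)[0]          # slope = D_q

rng = np.random.default_rng(5)
kmap = rng.random((128, 128)) ** 3               # stand-in PK rate-constant map
print([round(renyi_dimension(kmap, q), 2) for q in (0, 1, 2)])  # near 2
```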

The second approach to developing novel DCE-MRI methods is to combine PK information from multiple PK models. Currently, the classic Tofts model or its alternative version is widely adopted for DCE-MRI analysis as the gold-standard approach for therapeutic response assessment. Previously, a shutter-speed (SS) model was proposed to incorporate the transcytolemmal water exchange effect into the quantification of contrast agent concentration. In spite of its richer biological assumptions, its application in therapeutic response assessment has been limited. It is intriguing to combine the information from the SS model and the classic Tofts model to explore potential new biological information for treatment assessment. The feasibility of this idea was investigated in the same small-animal experiment. The SS model was compared against the Tofts model for therapeutic response assessment using comparisons of regional mean PK parameter values. Based on the modeled transcytolemmal water exchange rate, a biological subvolume was proposed and automatically identified using histogram analysis. Within the biological subvolume, the PK rate constant derived from the SS model proved superior to that from the Tofts model in treatment/control separation and classification. Furthermore, a novel biomarker was designed to integrate the PK rate constants from the two models. When evaluated in the biological subvolume, this biomarker was able to reflect significant treatment/control differences in both post-treatment evaluations. These results confirm the potential value of the SS model, as well as its combination with the Tofts model, for therapeutic response assessment.
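
The histogram-based subvolume selection can be pictured as thresholding the modeled water exchange rate map at a valley between histogram modes. The sketch below does this on synthetic bimodal data; the threshold rule and all numbers are plausible assumptions, not the dissertation's exact procedure.

```python
# Toy histogram-driven subvolume selection from a water exchange rate map.
import numpy as np

rng = np.random.default_rng(6)
exchange = np.concatenate([rng.normal(1.0, 0.3, 6000),     # low-exchange voxels
                           rng.normal(4.0, 0.8, 4000)])    # high-exchange voxels
ktrans = 0.1 + 0.02 * exchange + 0.01 * rng.standard_normal(exchange.size)

counts, edges = np.histogram(exchange, bins=64)
mid = (edges[:-1] + edges[1:]) / 2
between = (mid > 1.5) & (mid < 3.5)                        # search between modes
valley = mid[between][np.argmin(counts[between])]

subvolume = exchange > valley                  # "biological" subvolume mask
print(f"threshold ~ {valley:.2f}, subvolume fraction = {subvolume.mean():.2f}")
print(f"K_trans inside vs. whole map: {ktrans[subvolume].mean():.3f} "
      f"vs. {ktrans.mean():.3f}")
```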

In summary, this study addressed two problems in the application of DCE-MRI to radiotherapy assessment. In the first part, a method of accelerating DCE-MRI acquisition for better temporal resolution was investigated, and a novel PK model fitting algorithm was proposed for high temporal resolution DCE-MRI. In the second part, two model-free texture analysis methods and a multiple-model analysis method were developed for DCE-MRI therapeutic response assessment. The presented work could benefit future routine clinical application of DCE-MRI in radiotherapy assessment.

Relevance:

80.00%

Publisher:

Abstract:

Recent theoretical advances predict the existence, deep into the glass phase, of a novel phase transition, the so-called Gardner transition. This transition is associated with the emergence of a complex free energy landscape composed of many marginally stable sub-basins within a glass metabasin. In this study, we explore several methods to detect numerically the Gardner transition in a simple structural glass former, the infinite-range Mari-Kurchan model. The transition point is robustly located from three independent approaches: (i) the divergence of the characteristic relaxation time, (ii) the divergence of the caging susceptibility, and (iii) the abnormal tail in the probability distribution function of cage order parameters. We show that the numerical results are fully consistent with the theoretical expectation. The methods we propose may also be generalized to more realistic numerical models as well as to experimental systems.
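
Of the three indicators, the caging susceptibility is the easiest to state compactly: one plausible reading is the system-size-scaled sample-to-sample variance of a cage order parameter Δ. The sketch below computes it from synthetic Δ samples; in a real simulation, Δ would be a mean squared displacement between configurations within the same glass metabasin.

```python
# Caging susceptibility as the scaled variance of a cage order parameter
# (synthetic stand-in data; one plausible reading of the quantity above).
import numpy as np

def caging_susceptibility(delta, n_particles):
    """chi = N * (<Delta^2> - <Delta>^2), estimated over samples/replicas."""
    return n_particles * delta.var(ddof=1)

rng = np.random.default_rng(7)
for spread in (0.01, 0.05, 0.25):                  # growing spread mimics the
    delta = rng.normal(1.0, spread, size=200)      # approach to the transition
    print(spread, caging_susceptibility(delta, n_particles=1000))
```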

Relevance:

80.00%

Publisher:

Abstract:

Lactase persistence, the ability to digest the milk sugar lactose in adulthood, is highly associated with a T allele situated 13,910 bp upstream from the actual lactase gene in Europeans. The frequency of this allele rose rapidly in Europe after transition from hunter–gatherer to agriculturalist lifestyles and the introduction of milkable domestic species from Anatolia some 8000 years ago. Here we first introduce the archaeological and historic background of early farming life in Europe, then summarize what is known of the physiological and genetic mechanisms of lactase persistence. Finally, we compile the evidence for a co-evolutionary process between dairying culture and lactase persistence. We describe the different hypotheses on how this allele spread over Europe and the main evolutionary forces shaping this process. We also summarize three different computer simulation approaches, which offer a means of developing a coherent and integrated understanding of the process of spread of lactase persistence and dairying.