937 results for STRUCTURE-BASED DRUG DESIGN
Abstract:
Loess is the most important collapsible soil, and possibly the only engineering soil in which true collapse occurs. A true collapse involves a diminution in volume: an open, metastable packing is reduced to a more closely packed, more stable structure. Metastability is at the heart of the collapsible-soils problem. To envisage and to model the collapse process in a metastable medium, knowledge is required of the nature and shape of the particles, the types of packings they assume (real and ideal), and the nature of the collapse process: a packing transition upon a change in the effective stress in a medium of double porosity. Particle packing science has made little progress in the geoscience disciplines since the initial packing paradigms set by Graton and Fraser (1935), but it is relatively well established in soft-matter physics. The collapse process can be represented by mathematical modelling of packing, including Monte Carlo simulations, but relating representation to process remains difficult. This paper revisits the problem of sudden packing transition from a micro-physico-mechanical viewpoint (i.e. collapse in terms of structure-based effective stress). This cross-disciplinary approach allows a generalization about collapsible soils to be made: loess is the only truly collapsible soil, because only loess is so totally influenced by the packing essence of its formation process.
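Since the abstract appeals to Monte Carlo modelling of packings, a minimal sketch of the generic technique follows (an illustration, not the paper's model; all parameters are arbitrary): a toy 2D Metropolis-style compaction of hard disks in which randomly displaced, downward-biased trial moves are accepted only when they create no overlaps, so an initially open, metastable packing settles into a denser, more stable one.

```python
import random

N = 60            # number of disks
R = 0.5           # disk radius
BOX_W = 10.0      # box width; gravity acts toward y = 0
STEPS = 20000     # Monte Carlo trial moves
BIAS = 0.3        # downward bias of trial moves (drives compaction)

def overlaps(p, others):
    """True if a disk centred at p overlaps any disk in others."""
    return any((p[0]-q[0])**2 + (p[1]-q[1])**2 < (2*R)**2 for q in others)

# Build a loose initial packing by random sequential addition high in the box.
disks = []
while len(disks) < N:
    p = (random.uniform(R, BOX_W - R), random.uniform(R, 4 * BOX_W))
    if not overlaps(p, disks):
        disks.append(p)

def mean_height(ds):
    return sum(q[1] for q in ds) / len(ds)

print("initial mean height:", round(mean_height(disks), 3))

for _ in range(STEPS):
    i = random.randrange(N)
    x, y = disks[i]
    # Trial move: small random displacement with a downward bias.
    nx = x + random.uniform(-0.2, 0.2)
    ny = y + random.uniform(-0.2, 0.2) - BIAS * random.random()
    if not (R <= nx <= BOX_W - R) or ny < R:
        continue
    rest = disks[:i] + disks[i+1:]
    if not overlaps((nx, ny), rest):
        disks[i] = (nx, ny)  # accept: no overlap, so the disk settles

print("final mean height:", round(mean_height(disks), 3))
```

The falling mean height is a crude proxy for the packing transition: the hard-disk constraint is the metastable skeleton, and the biased moves play the role of the change in effective stress.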
Abstract:
Contemporary integrated circuits are designed and manufactured in a globalized environment, leading to concerns of piracy, overproduction, and counterfeiting. One class of techniques to combat these threats is circuit obfuscation, which seeks to modify the gate-level (or structural) description of a circuit without affecting its functionality, in order to increase the complexity and cost of reverse engineering. Most existing circuit obfuscation methods are based on inserting additional logic (called "key gates") or camouflaging existing gates to make it difficult for a malicious user to obtain the complete layout information without extensive computation to determine key-gate values. However, when the netlist or the circuit layout, although camouflaged, is available to the attacker, he or she can use advanced logic analysis and circuit simulation tools and Boolean SAT solvers to reveal the unknown gate-level information without exhaustively trying all input vectors, thus reducing the complexity of reverse engineering. To counter this problem, some 'provably secure' logic encryption algorithms that emphasize methodical selection of camouflaged gates have been proposed in the literature [1-3]. The contribution of this paper is the creation and simulation of a new layout obfuscation method that uses don't care conditions. We also present a proof of concept of a new functional (logic) obfuscation technique that not only conceals but modifies the circuit functionality in addition to the gate-level description, and that can be applied automatically during the design process. Our layout obfuscation technique uses don't care conditions (namely, Observability and Satisfiability Don't Cares) inherent in the circuit to camouflage selected gates and modify sub-circuit functionality while meeting the overall circuit specification. Here, camouflaging or obfuscating a gate means replacing the candidate gate with a 4x1 multiplexer that can be configured to perform all possible 2-input/1-output functions, as proposed by Bao et al. [4]. It is important to emphasize that our approach not only obfuscates but alters sub-circuit-level functionality in an attempt to make IP piracy difficult. The choice of gates to obfuscate determines the effort required to reverse engineer or brute-force the design. We therefore propose a method of camouflaged-gate selection based on the intersection of output logic cones. By choosing candidate gates methodically, the complexity of reverse engineering can be made exponential, making it computationally very expensive to determine the true circuit functionality. We propose several heuristic algorithms that maximize reverse engineering complexity based on don't care based obfuscation and methodical gate selection. Thus, the goal of protecting the design IP from malicious end users is achieved, and it becomes significantly harder for rogue elements in the supply chain to use, copy, or replicate the design with different logic. We analyze reverse engineering complexity by applying our obfuscation algorithm to the ISCAS-85 benchmarks. Our experimental results indicate that significant reverse engineering complexity can be achieved at minimal design overhead (the average area overhead for the proposed layout obfuscation methods is 5.51% and the average delay overhead is about 7.732%). We discuss the strengths and limitations of our approach and suggest directions that may lead to improved logic encryption algorithms in the future. References: [1] R.
Chakraborty and S. Bhunia, "HARPOON: An Obfuscation-Based SoC Design Methodology for Hardware Protection," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 28, no. 10, pp. 1493-1502, 2009. [2] J. A. Roy, F. Koushanfar, and I. L. Markov, "EPIC: Ending Piracy of Integrated Circuits," in Design, Automation and Test in Europe (DATE), 2008, pp. 1069-1074. [3] J. Rajendran, M. Sam, O. Sinanoglu, and R. Karri, "Security Analysis of Integrated Circuit Camouflaging," in ACM Conference on Computer and Communications Security (CCS), 2013. [4] B. Liu and B. Wang, "Embedded Reconfigurable Logic for ASIC Design Obfuscation Against Supply Chain Attacks," in Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, pp. 1-6.
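To make the camouflaging primitive concrete: replacing a 2-input gate with a 4x1 multiplexer whose data inputs are configuration bits turns the gate into a two-input lookup table, so the visible layout reveals nothing about which of the 16 possible 2-input functions it implements. A minimal sketch of this primitive and of the resulting attacker search space (illustrative only, not the paper's tooling):

```python
from itertools import product

def mux4_gate(config, a, b):
    """A 4x1 MUX as a camouflaged 2-input gate: the circuit inputs (a, b)
    drive the select lines and the 4-bit config supplies the data inputs,
    i.e. a 2-input lookup table (LUT2)."""
    return config[(a << 1) | b]

# Each 4-bit config realizes one of the 16 possible 2-input functions.
NAND = (1, 1, 1, 0)   # truth-table rows for (a, b) = 00, 01, 10, 11
XOR  = (0, 1, 1, 0)

for a, b in product((0, 1), repeat=2):
    assert mux4_gate(NAND, a, b) == 1 - (a & b)
    assert mux4_gate(XOR, a, b) == a ^ b

# From the attacker's view each camouflaged gate hides 4 unknown bits, so
# k methodically chosen gates (e.g. on intersecting output logic cones)
# leave up to 16^k candidate netlists to distinguish by simulation.
print("candidate functions for 10 camouflaged gates:", 16 ** 10)
```

The exponential count is an upper bound; the point of methodical gate selection is to keep SAT-style pruning from collapsing it.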
Abstract:
COSTA, Umberto Souza; MOREIRA, Anamaria Martins; MUSICANTE, Martin A.; SOUZA NETO, Plácido A. JCML: A specification language for the runtime verification of Java Card programs. Science of Computer Programming. [S.l.: s.n.], 2010.
Abstract:
COSTA, Umberto Souza da; MOREIRA, Anamaria Martins; MUSICANTE, Martin A. Specification and Runtime Verification of Java Card Programs. Electronic Notes in Theoretical Computer Science. [S.l.: s.n.], 2009.
Abstract:
Animal-associated microbiotas form complex communities that are suspected to play crucial roles in host fitness. However, the biodiversity of these communities, including differences between host species and individuals, has scarcely been studied, especially in the case of skin-associated communities. In addition, intraindividual variability (i.e. between body parts) has never been assessed to date. The objective of this study was to characterize the skin bacterial communities of two teleostean fish species, the European seabass (Dicentrarchus labrax) and the gilthead seabream (Sparus aurata), using a high-throughput DNA sequencing method. In order to focus on intrinsic factors of host-associated bacterial community variability, individuals of the two species were raised under controlled conditions. Bacterial diversity was assessed using a set of four complementary indices describing the taxonomic and phylogenetic facets of biodiversity and their respective composition (based on presence/absence data) and structure (based on species relative abundances) components. Variability of bacterial diversity was quantified at the interspecific, interindividual and intraindividual scales. We demonstrated that fish surfaces host highly diverse bacterial communities whose composition is very different from that of the surrounding bacterioplankton. This high total biodiversity of skin-associated communities was supported by substantial variability between host species, individuals and body parts (dorsal, anal, pectoral and caudal fins).
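To illustrate the composition/structure distinction drawn above (a generic sketch, not necessarily the four indices used in the study): a composition index compares communities on presence/absence alone, whereas a structure index also weights relative abundances.

```python
import numpy as np

def jaccard_dissimilarity(x, y):
    """Composition component: presence/absence only."""
    px, py = x > 0, y > 0
    return 1 - np.sum(px & py) / np.sum(px | py)

def bray_curtis(x, y):
    """Structure component: weighted by relative abundances."""
    x = x / x.sum()
    y = y / y.sum()
    return 0.5 * np.abs(x - y).sum()

# Toy OTU counts for a skin sample and a bacterioplankton sample
# (hypothetical data, not the study's).
skin = np.array([120, 30, 0, 5, 60], dtype=float)
water = np.array([2, 0, 200, 80, 1], dtype=float)

print("composition (Jaccard):", round(jaccard_dissimilarity(skin, water), 3))
print("structure (Bray-Curtis):", round(bray_curtis(skin, water), 3))
```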
Abstract:
In the present study, Korean-English bilingual (KEB) and Korean monolingual (KM) children between the ages of 8 and 13 years, and KEB adults, ages 18 and older, were examined with one speech perception task, the Nonsense Syllable Confusion Matrix (NSCM) task (Allen, 2005), and two production tasks, the Nonsense Syllable Imitation Task (NSIT) and the Nonword Repetition Task (NRT; Dollaghan & Campbell, 1998). The present study examined (a) which English sounds on the NSCM task were identified less well, presumably due to interference from Korean phonology, in bilinguals learning English as a second language (L2) and in monolinguals learning English as a foreign language (FL); (b) which English phonemes on the NSIT were more challenging for bilinguals and monolinguals to produce; (c) whether perception on the NSCM task is related to production on the NSIT, or to phonological awareness, as measured by the NRT; and (d) whether perception and production differ across three age-language status groups (i.e., KEB children, KEB adults, and KM children) and across three proficiency subgroups of KEB children (i.e., English-dominant, ED; balanced, BAL; and Korean-dominant, KD). In order to determine English proficiency in each group, language samples were extensively and rigorously analyzed using the Systematic Analysis of Language Transcripts (SALT) software. Length of samples in complete and intelligible utterances, number of different and total words (NDW and NTW, respectively), speech rate in words per minute (WPM), and number of grammatical errors, mazes, and abandoned utterances were measured and compared among the three initial groups and the three proficiency subgroups. Results of the language sample analysis (LSA) showed significant group differences only between the KEBs and the KM children, not between the KEB children and adults. Nonetheless, compared to normative means (from a sample length- and age-matched database provided by SALT), the KEB adult group and the KD subgroup produced English at significantly slower speech rates than expected for monolingual, English-speaking counterparts. Two existing models of bilingual speech perception and production, the Speech Learning Model (SLM; Flege, 1987, 1992) and the Perceptual Assimilation Model (PAM; Best, McRoberts, & Sithole, 1988; Best, McRoberts, & Goodell, 2001), were considered to see whether they could account for the perceptual and production patterns evident in the present study. The English sounds selected as stimuli for the NSCM task and the NSIT were 10 consonants, /p, b, k, g, f, θ, s, z, ʧ, ʤ/, and 3 vowels, /ɪ, ɛ, æ/, which were used to create 30 nonsense syllables with a consonant-vowel structure. Based on phonetic or phonemic differences between the two languages, English sounds were categorized either as familiar sounds, i.e., English sounds that are similar, but not identical, to L1 Korean sounds, including /p, k, s, ʧ, ɛ/, or as unfamiliar sounds, i.e., English sounds that are new to L1 speakers, including /b, g, f, θ, z, ʤ, ɪ, æ/.
The results of the NSCM task showed that (a) consonants were perceived correctly more often than vowels, (b) familiar sounds were perceived correctly more often than unfamiliar ones, and (c) familiar consonants were perceived correctly more often than unfamiliar ones across the three age-language status groups and across the three proficiency subgroups; and (d) the KEB children perceived correctly more often than the KEB adults, the KEB children and adults perceived correctly more often than the KM children, and the ED and BAL subgroups perceived correctly more often than the KD subgroup. The results of the NSIT showed (a) consonants were produced more accurately than vowels, and (b) familiar sounds were produced more accurately than unfamiliar ones, across the three age-language status groups. Also, (c) familiar consonants were produced more accurately than unfamiliar ones in the KEB and KM child groups, and (d) unfamiliar vowels were produced more accurately than a familiar one in the KEB child group, but the reverse was true in the KEB adult and KM child groups. The KEB children produced sounds correctly significantly more often than the KM children and the KEB adults, though the percent correct differences were smaller than for perception. Production differences were not found among the three proficiency subgroups. Perception on the NSCM task was compared to production on the NSIT and NRT. Weak positive correlations were found between perception and production (NSIT) for unfamiliar consonants and sounds, whereas a weak negative correlation was found for unfamiliar vowels. Several correlations were significant for perceptual performance on the NSCM task and overall production performance on the NRT: for unfamiliar consonants, unfamiliar vowels, unfamiliar sounds, consonants, vowels, and overall performance on the NSCM task. Nonetheless, no significant correlation was found between production on the NSIT and NRT. Evidently these are two very different production tasks, where immediate imitation of single syllables on the NSIT results in high performance for all groups. Findings of the present study suggest that (a) perception and production of L2 consonants differ from those of vowels; (b) perception and production of L2 sounds involve an interaction of sound type and familiarity; (c) a weak relation exists between perception and production performance for unfamiliar sounds; and (d) L2 experience generally predicts perceptual and production performance. The present study yields several conclusions. The first is that familiarity of sounds is an important influence on L2 learning, as claimed by both SLM and PAM. In the present study, familiar sounds were perceived and produced correctly more often than unfamiliar ones in most cases, in keeping with PAM, though experienced L2 learners (i.e., the KEB children) produced unfamiliar vowels better than familiar ones, in keeping with SLM. Nonetheless, the second conclusion is that neither SLM nor PAM consistently and thoroughly explains the results of the present study. This is because both theories assume that the influence of L1 on the perception of L2 consonants and vowels works in the same way as for production of them. The third and fourth conclusions are two proposed arguments: that perception and production of consonants are different than for vowels, and that sound type interacts with familiarity and L2 experience. These two arguments can best explain the current findings. 
These findings may help us to develop educational curricula for bilingual individuals listening to and articulating English. Further, the extensive analysis of spontaneous speech in the present study should contribute to the specification of parameters for normal language development and function in Korean-English bilingual children and adults.
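For readers unfamiliar with confusion-matrix tasks such as the NSCM, percent-correct perception scores are read off the matrix diagonal. A minimal sketch with hypothetical counts (not the study's data):

```python
import numpy as np

# Rows = presented syllable onset, columns = response (hypothetical counts).
phonemes = ["p", "b", "f"]
confusions = np.array([
    [18, 1, 1],   # /p/ presented
    [4, 14, 2],   # /b/ presented
    [2, 3, 15],   # /f/ presented
])

per_phoneme = confusions.diagonal() / confusions.sum(axis=1)
overall = confusions.diagonal().sum() / confusions.sum()

for ph, pc in zip(phonemes, per_phoneme):
    print(f"/{ph}/ percent correct: {pc:.0%}")
print(f"overall: {overall:.0%}")
```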
Abstract:
Artificial immune systems (AISs) to date have generally been inspired by naive biological metaphors, which has limited their effectiveness. In this position paper, two ways in which AISs could be made more biologically realistic are discussed. We propose that AISs should draw their inspiration from organisms that possess only innate immune systems, and that AISs should employ systemic models of the immune system to structure their overall design. An outline of plant and invertebrate immune systems is presented, and a number of contemporary systemic models are reviewed. The implications that more biologically realistic AISs could have for interdisciplinary research are also discussed.
Abstract:
Virtual Screening (VS) methods can considerably aid clinical research by predicting how ligands interact with drug targets. However, the accuracy of most VS methods is constrained by limitations in the scoring functions that describe biomolecular interactions, and these uncertainties are still not completely understood. To improve the accuracy of the scoring functions used in most VS methods, we propose a novel hybrid approach in which neural network (NNET) and support vector machine (SVM) models are trained on databases of known active (drug) and inactive compounds, and this information is then exploited to improve VS predictions.
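A minimal sketch of this kind of hybrid rescoring, assuming compounds have already been featurized (e.g. as fingerprints or docking-derived descriptors); the data, features, and model settings below are placeholders, not the authors' pipeline:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Placeholder features: rows = compounds, columns = interaction descriptors.
X_active = rng.normal(1.0, 1.0, size=(200, 16))    # known actives (drugs)
X_inactive = rng.normal(0.0, 1.0, size=(200, 16))  # known inactives
X = np.vstack([X_active, X_inactive])
y = np.array([1] * 200 + [0] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

nnet = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
svm = SVC(probability=True, random_state=0)
nnet.fit(X_tr, y_tr)
svm.fit(X_tr, y_tr)

# Hybrid score: average the two probability estimates, then use it to
# re-rank (rescore) virtual-screening hits.
p_hybrid = 0.5 * (nnet.predict_proba(X_te)[:, 1] + svm.predict_proba(X_te)[:, 1])
print("hybrid rescoring AUC:", round(roc_auc_score(y_te, p_hybrid), 3))
```

In a real pipeline the learned probability would be combined with, or substituted for, the physics-based scoring function when ranking the screened library.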
Abstract:
Master's dissertation, Electronics and Telecommunications Engineering, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2016.
Abstract:
The purpose of this degree project is the evaluation, diagnosis, and preparation of an improvement plan to optimize the treatment processes of the wastewater treatment plants (WWTPs) serving the Pambadel and Zhuringualo sectors of Girón Canton. For the diagnosis of the WWTPs, the influents and effluents were characterized, and the values obtained were compared against the TULSMA environmental standard to assess compliance. Laboratory results from 2014, 2015 and 2016, together with on-site assessment, made it possible to determine the removal efficiencies of the plants. The efficiencies achieved in 2014 were 70.98% for Pambadel and 69.14% for Zhuringualo; in 2015, -266.94% for the Pambadel plant and 66.03% for Zhuringualo; and in 2016, 40.45% and 71.23%, respectively. Comparison against the standard revealed non-compliance for parameters such as phosphorus and total and thermotolerant coliforms. A social analysis was also carried out, surveying residents of the areas directly affected by the WWTPs to identify the needs and nuisances the plants generate. It is concluded that the improvement plan should be implemented, comprising the following optimization measures: an emergency maintenance program and refurbishment of deteriorated infrastructure, the installation of a basic laboratory, the construction of a pretreatment system based on the design drawings, and subsequent technical studies.
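For context on the reported figures, removal efficiency is conventionally computed as 100 x (Cin - Cout) / Cin, so a negative value (such as the -266.94% observed at Pambadel in 2015) means the effluent was more concentrated than the influent. A quick sketch with hypothetical concentrations:

```python
def removal_efficiency(c_in, c_out):
    """Percent removal: 100 * (Cin - Cout) / Cin."""
    return 100.0 * (c_in - c_out) / c_in

# Hypothetical BOD concentrations in mg/L, not the study's raw data.
print(removal_efficiency(250.0, 72.5))   # 71.0   -> plant is removing load
print(removal_efficiency(100.0, 366.9))  # -266.9 -> effluent worse than influent
```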
Abstract:
Introduction: Radical nephroureterectomy (RNU) is the primary treatment for patients with non-metastatic upper tract urothelial carcinoma (UTUC). Either an open or a laparoscopic approach may be considered. Although several studies have compared perioperative and oncological outcomes between the two approaches, none is based on a population-level cohort. Objective: Our aim was to compare perioperative morbidity between open and laparoscopic RNU using a population-level cohort. Methods: We used the Nationwide Inpatient Sample (NIS) database to identify all patients with non-metastatic UTUC treated with open or laparoscopic RNU between 1998 and 2009. In total, 7,401 (90.8%) and 754 (9.2%) patients underwent open and laparoscopic RNU, respectively. To control for inherent differences between the two groups, we used propensity-score matching, whereby 3,016 (80%) open-RNU patients were matched to 754 (20%) laparoscopic-RNU patients. Intervention: All patients underwent RNU. Measurements: Rates of intraoperative and postoperative complications, blood transfusion, prolonged hospital stay, and in-hospital mortality were measured. Logistic regression analyses were applied to the propensity-score-matched cohort. Results and Limitations: For patients treated with the open vs. laparoscopic approach, the following rates were observed: blood transfusion, 15 vs. 10% (p<0.001); intraoperative complications, 4.7 vs. 2.1% (p=0.002); postoperative complications, 17 vs. 15% (p=0.24); prolonged hospital stay (≥5 days), 47 vs. 28% (p<0.001); in-hospital mortality, 1.3 vs. 0.7% (p=0.12). In logistic regression analyses, patients treated with laparoscopic RNU were less likely to receive a blood transfusion (odds ratio [OR]: 0.6, p<0.001), to experience an intraoperative complication (OR: 0.4, p=0.002), and to have a prolonged hospital stay (OR: 0.4, p<0.001). Overall, postoperative complication rates were equivalent; however, the laparoscopic approach was associated with fewer pulmonary complications (OR: 0.4, p=0.007). The study is limited by its retrospective nature. Conclusion: After adjustment for potential selection biases, laparoscopic RNU is associated with fewer intraoperative and perioperative complications than open RNU.
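A minimal sketch of the propensity-score matching workflow the study describes, with synthetic data standing in for the NIS cohort; the covariates, coefficients, and 4:1 matching ratio (cf. 3,016 vs. 754) are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic cohort: covariates (age, comorbidity score); treatment = True
# for laparoscopic RNU. Treated patients skew younger/healthier by design.
n = 2000
age = rng.normal(70, 10, n)
comorb = rng.poisson(2, n).astype(float)
logit = -0.05 * (age - 70) - 0.3 * (comorb - 2) - 2.0
treated = rng.random(n) < 1 / (1 + np.exp(-logit))

# Propensity score: P(laparoscopic | covariates) via logistic regression.
X = np.column_stack([age, comorb])
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 4:1 nearest-neighbour matching on the score, without replacement.
t_idx = np.where(treated)[0]
c_idx = np.where(~treated)[0]
c_ps = ps[c_idx]
used = np.zeros(len(c_idx), dtype=bool)
matched = []
for i in t_idx:
    d = np.abs(c_ps - ps[i])
    d[used] = np.inf
    nearest = np.argsort(d)[:4]   # the 4 closest unused controls
    used[nearest] = True
    matched.extend(c_idx[nearest])

print("treated:", len(t_idx), "matched controls:", len(matched))
```

Outcome rates (transfusion, complications, length of stay) would then be compared within the matched sample, e.g. by the logistic regressions reported above.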
Abstract:
Liver cancer accounts for nearly 10% of all cancers in the US. Intrahepatic Arterial Radiomicrosphere Therapy (RMT), also known as Selective Internal Radiation Therapy (SIRT), is one of the evolving treatment modalities. Successful clinical outcomes require suitable treatment planning followed by delivery of the microspheres for therapy. The production and in vitro evaluation of various polymer (PGCD, CHS and CHSg) microspheres for RMT and RMT planning are described. Microparticles with a 30±10 µm size distribution were prepared by an emulsion method. The in vitro half-life of the particles was determined in PBS buffer and porcine plasma, and their potential application (treatment or treatment planning) established. Further, the fast-degrading microspheres (≤ 48 hours in vitro half-life) were labeled with 68Ga and/or 99mTc, as they are suitable for the imaging component of treatment planning, which is the primary emphasis of this dissertation. Labeling kinetics demonstrated that 68Ga-PGCD, 68Ga-CHSg and 68Ga-NOTA-CHSg can be labeled with more than 95% yield in 15 minutes; 99mTc-PGCD and 99mTc-CHSg can also be labeled with high yield within 15-30 minutes. In vitro stability after four hours was more than 90% in saline and PBS buffer for all of them. Experiments in reconstituted hemoglobin lysate were also performed. Two successful imaging (RMT planning) agents were found: 99mTc-CHSg and 68Ga-NOTA-CHSg. For 99mTc-PGCD a successful perfusion image was obtained after 10 minutes; however, the in vivo degradation was very fast (half-life), releasing the 99mTc from the lungs. Slow-degrading CHS microparticles (> 21 days half-life) were modified with p-SCN-Bn-DOTA and labeled with 90Y to produce 90Y-DOTA-CHS. Radiochemical purity was evaluated in vitro and in vivo, showing more than 90% stability after 72 and 24 hours, respectively. All agents were compared to their respective gold standards (99mTc-MAA for 68Ga-NOTA-CHSg and 99mTc-CHSg; 90Y-SirTEX for 90Y-DOTA-CHS), showing superior in vivo stability. RMT and RMT planning agents (therapy, PET and SPECT imaging) were designed and successfully evaluated in vitro and in vivo.
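In vitro half-lives like those reported above are typically obtained by fitting a first-order decay, m(t) = m0 * exp(-λt), to degradation data and taking t1/2 = ln 2 / λ. A brief sketch with made-up measurements (not the dissertation's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, m0, lam):
    """First-order degradation: m(t) = m0 * exp(-lam * t)."""
    return m0 * np.exp(-lam * t)

# Hypothetical mass-remaining measurements in PBS (hours, fraction).
t = np.array([0, 6, 12, 24, 36, 48], dtype=float)
m = np.array([1.00, 0.82, 0.70, 0.48, 0.35, 0.24])

(m0, lam), _ = curve_fit(first_order, t, m, p0=(1.0, 0.03))
print(f"half-life: {np.log(2) / lam:.1f} h")  # ~23 h: a fast-degrading particle
```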
Abstract:
Web applications in general have undergone major technological changes over the past two decades, and with them the habits and expectations of the so-called digital generation. Paradoxically, despite these technological and behavioral upheavals, teaching and learning software (LEA, from the French "logiciels d'enseignement et d'apprentissage") has not followed the same curve of technological evolution. Indeed, its design model has remained so static that its pedagogical usefulness is questioned by pedagogy experts, according to whom current LEA do not take sufficient account of pedagogical theory. How, then, can these aspects be better incorporated into the LEA design process? Several approaches make it possible to design robust LEA. However, both pedagogy experts and software engineering experts have shown particular interest in using the pattern concept in this design process. Indeed, this concept captures the experience of experts and also greatly simplifies the design process, and thereby its cost. A comparison of works using patterns to design LEA showed that there is no framework for synergy between the different actors of the design team: pedagogy experts on one side and software engineering experts on the other. Moreover, the life cycles proposed in these works are neither complete nor rigorously enough described to allow efficient LEA to be developed. Finally, the compared works do not show how pedagogical requirements can coexist with software requirements. Can the pattern concept help build robust LEA that satisfy pedagogical requirements? As a solution, this thesis proposes a pattern-based design approach for designing LEA adapted to Web technologies. More specifically, the proposed methodical approach sets out the sequential steps required to design an LEA that meets pedagogical requirements. In addition, a repository is presented, containing 110 patterns organized into packages. These patterns can easily be retrieved using the search guide described, for use in the design process. The design approach was validated with two application examples, leading to the conclusions that the LEA design approach is realistic and that the patterns are valid and functional. The proposed LEA design approach is original and stands apart from those found in the literature because it is entirely based on the pattern concept. The approach also takes pedagogical requirements into account. It is generic, being independent of any software or hardware platform. However, the process of translating pedagogical requirements is not yet very intuitive or linear. Further work is needed to complete the results obtained, so that the most complex and abstract pedagogical requirements can be translated into artifacts usable by software engineers. As follow-up to this thesis, an instantiation of the proposed patterns would be of interest, as would the definition of a pattern-based metamodel that could allow the specification of a modeling language specific to LEA.
The addition of patterns providing a semantic layer for LEA could also be considered. This semantic layer would make it possible not only to adapt pedagogical scenarios, but also to automate the process of adapting them to the needs of a particular learner. The transformation of the proposed patterns into ontologies could also be envisaged, which could facilitate the assessment of the learner's knowledge and the delivery of structured information that is useful for his or her learning and matched to his or her learning needs.