960 results for equivalent web thickness method


Relevance: 30.00%

Abstract:

In traditional terminological resources, the description of terms is limited to certain information, such as the term (mainly nominal), its definition, and its equivalent in a foreign language. This description rarely provides other information that can be very useful to users, especially when they consult resources to deepen their knowledge of a specialized domain, to master professional writing, or to find contexts in which the term under consideration occurs. Information useful in this respect includes the description of the actantial structure of terms, contexts drawn from authentic sources, and the inclusion of other parts of speech such as verbs. Verbs and deverbal nouns, or predicative terminological units (PTUs), often ignored by classical terminology, are of great importance when it comes to expressing an action, a process, or an event. However, the description of these units requires a terminological description model that accounts for their particularities. A number of terminologists (Condamines 1993, Mathieu-Colas 2002, Gross and Mathieu-Colas 2001, and L'Homme 2012, 2015) have proposed description models based on different theoretical frameworks. Our research proposes a methodology for the terminological description of PTUs in Arabic, specifically Modern Standard Arabic (MSA), according to Fillmore's theory of Frame Semantics (1976, 1977, 1982, 1985) and its application, the FrameNet project (Ruppenhofer et al. 2010). The specialized domain of interest is computing. In our research, we rely on a corpus collected from the web and draw on an existing terminological resource, the DiCoInfo (L'Homme 2008), to compile our own resource. Our objectives can be summarized as follows. First, we aim to lay the groundwork for an MSA version of this resource. This version has its own particularities: (1) we target very specific units, namely verbal and deverbal PTUs; (2) the methodology developed for compiling the original DiCoInfo must be adapted to accommodate a Semitic language. Next, we aim to create a frame-based version of this resource, in which PTUs are grouped into semantic frames, following the FrameNet model. To this resource we add English and French PTUs, since this part of the work has a multilingual scope. The methodology consists of automatically extracting verbal and nominal terminological units (VTUs and NTUs), such as Ham~ala (حمل) (to download) and taHmiyl (تحميل) (downloading). To do this, we adapted an existing automatic term extractor, TermoStat (Drouin 2004). Then, using terminological validation criteria (L'Homme 2004), we validated the terminological status of a subset of the candidates. After validation, we created terminological records, using an XML editor, for each retained VTU and NTU. These records include elements such as the actantial structure of the PTUs and up to twenty annotated contexts. The final step consists of creating semantic frames from the MSA PTUs. We also associate English and French PTUs with the frames created.
This association led to the creation of a terminological resource called "DiCoInfo: A Framed Version". In this resource, PTUs that share the same semantic properties and actantial structures are grouped into semantic frames. For example, the semantic frame Product_development groups PTUs such as Taw~ara (طور) (to develop), to develop, and développer. Following these steps, we obtained a total of 106 MSA PTUs compiled in the MSA version of the DiCoInfo and 57 semantic frames associated with these units in the framed version of the DiCoInfo. Our research shows that MSA can be described with the methodology we developed.
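As an illustration of the frame-based grouping described above, the following minimal Python sketch groups predicative terminological units into semantic frames. The frame and unit names come from the abstract; the data model itself is a hypothetical simplification, not the DiCoInfo's actual structure.

```python
# Minimal sketch of grouping predicative terminological units (PTUs) into
# semantic frames, in the spirit of "DiCoInfo: A Framed Version". The data
# model is hypothetical; only the frame/unit names come from the abstract.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class PTU:
    lemma: str        # e.g. "Taw~ara" (طور), "to develop", "développer"
    language: str     # "msa", "en", "fr"
    actants: tuple    # actantial structure, e.g. ("Agent", "Patient")

def group_into_frames(units_with_frames):
    """Group (frame_name, PTU) pairs into a frame -> PTUs mapping."""
    frames = defaultdict(list)
    for frame_name, unit in units_with_frames:
        frames[frame_name].append(unit)
    return dict(frames)

annotated = [
    ("Product_development", PTU("Taw~ara", "msa", ("Agent", "Patient"))),
    ("Product_development", PTU("to develop", "en", ("Agent", "Patient"))),
    ("Product_development", PTU("développer", "fr", ("Agent", "Patient"))),
]
print(group_into_frames(annotated))
```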

Relevance: 30.00%

Abstract:

Abstract: Highway bridges are of great value to a country because, in the event of a natural disaster, they can serve as lifelines. Since bridges are vulnerable to significant seismic loads, different methods can be considered to design resistant highway bridges and to rehabilitate existing ones. In this study, base isolation is considered as one efficient method in this regard, which in some cases significantly reduces the seismic load effects on the structure. By reducing the ductility demand on the structure without a notable increase in strength, the structure can be designed to remain elastic under seismic loads. The problem associated with isolated bridges, especially those on elastomeric bearings, is their potentially excessive displacements under service and seismic loads. This can defeat the purpose of using elastomeric bearings for small- to medium-span typical bridges, where expansion joints and clearances may significantly increase initial and maintenance costs. Thus, supplementing the structure with dampers that provide some stiffness can serve as a solution, although this may in turn increase the structure's base shear. The main objective of this thesis is to provide a simplified method for evaluating optimal damper parameters in isolated bridges. Firstly, through a parametric study, some directions are given for the use of simple isolation devices such as elastomeric bearings to rehabilitate existing bridges of high importance. Parameters such as the geometry of the bridge, code provisions, and the type of soil on which the structure is constructed were introduced to a typical two-span bridge. It is concluded that the stiffness of the substructure, the soil type, and special provisions in the code can determine whether base isolation is suitable for retrofitting a bridge. Secondly, based on the elastic response coefficient of isolated bridges, a simplified design method of dampers for seismically isolated regular highway bridges is presented. By setting objectives for the reduction of displacement and the variation of base shear, the required stiffness and damping of a hysteretic damper can be determined. Numerical analyses of a model of a typical two-span bridge were carried out to verify the effectiveness of the method. The method was used to identify equivalent linear parameters and, subsequently, nonlinear parameters of the hysteretic damper for various designated scenarios of displacement and base shear requirements. Comparison of the results of the nonlinear numerical model with and without the damper showed that the method is sufficiently accurate. Finally, an innovative and simple hysteretic steel damper was designed. Five specimens were fabricated from two steel grades and tested alongside a full-scale elastomeric isolator in the structural laboratory of the Université de Sherbrooke. The test procedure was to characterize the specimens by cyclic displacement-controlled tests and subsequently to test them by the real-time dynamic substructuring (RTDS) method. The test results were then used to establish a numerical model of the system, which was subjected to nonlinear time-history analyses under several earthquakes. The experimental and numerical outcomes showed acceptable conformity with the simplified method.
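To illustrate the equivalent linear parameters mentioned above, the following sketch applies the standard textbook relations for a hysteretic device: effective stiffness as peak force over peak displacement, and equivalent viscous damping from the energy dissipated per cycle. This is a generic illustration under those assumptions, not the thesis's actual design procedure.

```python
import math

def equivalent_linear_parameters(f_max, d_max, e_dissipated):
    """Equivalent linear properties of a hysteretic damper/isolator.

    f_max: peak force in the cycle (kN)
    d_max: peak displacement in the cycle (m)
    e_dissipated: energy dissipated per cycle, i.e. hysteresis loop area (kN*m)
    """
    k_eff = f_max / d_max                      # effective (secant) stiffness
    e_strain = 0.5 * k_eff * d_max ** 2        # elastic strain energy at d_max
    xi_eq = e_dissipated / (4.0 * math.pi * e_strain)  # equiv. damping ratio
    return k_eff, xi_eq

# Example (illustrative numbers): 200 kN peak force at 0.05 m
# with a 10 kN*m hysteresis loop area.
k_eff, xi_eq = equivalent_linear_parameters(200.0, 0.05, 10.0)
print(f"k_eff = {k_eff:.0f} kN/m, xi_eq = {xi_eq:.1%}")
```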

Relevance: 30.00%

Abstract:

A number of research groups are now developing and using finite volume (FV) methods for computational solid mechanics (CSM). These methods are proving to be equivalent, and in some cases superior, to their finite element (FE) counterparts. In this paper we describe a vertex-based FV method on arbitrarily structured meshes for modelling the elasto-plastic deformation of solid materials undergoing small strains in complex geometries. Comparisons with rational FE methods will be given.
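As a one-dimensional illustration of the vertex-based flux-balance idea (the paper itself treats arbitrarily structured multi-dimensional meshes and elasto-plasticity), the following sketch solves a linear elastic bar with vertex-centred control volumes; all parameter values are arbitrary assumptions.

```python
import numpy as np

# 1D vertex-centred finite volume discretisation of a linear elastic bar:
# d/dx(E A du/dx) + b = 0, fixed at x=0, traction F at x=L.
E, A, L, b, F, n = 200e9, 1e-4, 1.0, 0.0, 1e4, 11  # SI units, 11 vertices
dx = L / (n - 1)
k = E * A / dx

K = np.zeros((n, n))
f = np.full(n, b * dx)
for i in range(1, n - 1):          # interior control volumes: flux balance
    K[i, i - 1] -= k
    K[i, i] += 2 * k
    K[i, i + 1] -= k
f[-1] = F + b * dx / 2             # traction on the boundary half-volume
K[-1, -1] += k
K[-1, -2] -= k
K[0, :] = 0.0                      # Dirichlet condition: u(0) = 0
K[0, 0] = 1.0
f[0] = 0.0

u = np.linalg.solve(K, f)
print(u[-1], F * L / (E * A))      # numeric tip displacement vs. exact
```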

Relevance: 30.00%

Abstract:

The business system known as Pyramid does not today provide its users with a reasonable case-management system for support issues. The current system requires the customer to contact the provider by telephone to register new cases. In addition, the current system does not include any way for users to view their current cases without contacting the provider.

A solution to this issue is to migrate the current case-management system from telephone contact to a web-based platform, where customers can more easily access their current cases and also create new cases directly through the website. This new system would reduce the time required to manually manage each individual case, for both customer and provider, resulting in an overall cost reduction for both parties.

The result is a system divided into two sections: the first is an API created in Pyramid that acts as a web service, and the second is a website to which customers can connect. The website allows users to view their current cases and also to create new cases directly through the site. All the information used by the website is obtained through the web service inside Pyramid. Analyzing the final design of the system, the developers were able to identify both positive and negative aspects of the design. Whether the chosen platform was the optimal choice, and what could be included if the system is developed further, are discussed. The development process and the method used during development are also analyzed and discussed, including the positive and negative aspects that were encountered. In addition, the cause and effect of a development team smaller than the suggested size are analyzed. Lastly, actions that could have been taken to prevent certain issues from occurring are analyzed.
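A minimal sketch of how the website side might consume such a case-management web service is shown below; the base URL, routes, authentication scheme, and field names are illustrative assumptions, not the actual Pyramid API.

```python
import requests

# Hypothetical client for the kind of case-management web service described
# above: an API exposed by Pyramid, consumed by the customer-facing website.
BASE_URL = "https://example.com/pyramid-api"  # placeholder, not a real host

def list_cases(customer_id: str, token: str) -> list[dict]:
    """Fetch the customer's current support cases."""
    resp = requests.get(
        f"{BASE_URL}/customers/{customer_id}/cases",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

def create_case(customer_id: str, token: str, subject: str, body: str) -> dict:
    """Register a new support case directly from the website."""
    resp = requests.post(
        f"{BASE_URL}/customers/{customer_id}/cases",
        headers={"Authorization": f"Bearer {token}"},
        json={"subject": subject, "description": body},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```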

Relevance: 30.00%

Abstract:

There is still a lack of an engineering approach for building Web systems, and the field of Web measurement is not yet mature. In particular, there is uncertainty in the selection of evaluation methods, and there are risks of standardizing inadequate evaluation practices. It is important to know whether we are evaluating the Web as a whole or specific website(s). We need a new categorization system, a different focus on evaluation methods, and an in-depth analysis that reveals the strengths and weaknesses of each method. As a contribution to the field of Web evaluation, this study proposes a novel approach to viewing and selecting evaluation methods based on the purpose and platforms of the evaluation. It is shown that the choice of appropriate evaluation method(s) depends greatly on the purpose of the evaluation.

Relevance: 30.00%

Abstract:

This dissertation research points out major challenges with current Knowledge Organization (KO) systems, such as subject gateways and web directories: (1) the current systems use traditional knowledge organization schemes based on controlled vocabulary, which are not well suited to web resources, and (2) information is organized by professionals rather than users, so it does not reflect users' intuitively and instantaneously expressed current needs. In order to explore users' needs, I examined social tags, which are user-generated uncontrolled vocabulary. As investment in professionally developed subject gateways and web directories diminishes (support for both BUBL and Intute, examined in this study, is being discontinued), understanding the characteristics of social tagging becomes even more critical. Several researchers have discussed social tagging behavior and its usefulness for classification or retrieval; however, further research is needed to investigate social tagging qualitatively and quantitatively in order to verify its quality and benefit. This research examined the indexing consistency of social tagging in comparison to professional indexing, to assess the quality and efficacy of tagging. The data analysis was divided into three phases: analysis of indexing consistency, analysis of tagging effectiveness, and analysis of tag attributes. Most indexing consistency studies have been conducted with a small number of professional indexers, and they have tended to exclude users; furthermore, they have mainly focused on physical library collections. This dissertation research bridged these gaps by (1) extending the scope of resources to various web documents indexed by users and (2) employing the Information Retrieval (IR) Vector Space Model (VSM) based indexing consistency method, since it is suitable for dealing with a large number of indexers. As a second phase, an analysis of tagging effectiveness in terms of tagging exhaustivity and tag specificity was conducted to ameliorate the drawbacks of consistency analysis based only on quantitative measures of vocabulary matching. Finally, to investigate tagging patterns and behaviors, a content analysis of tag attributes was conducted based on the FRBR model. The findings revealed greater consistency across all subjects among taggers than between two groups of professionals. Examination of the exhaustivity and specificity of social tags provided insights into particular characteristics of tagging behavior and its variation across subjects. To further investigate the quality of tags, a Latent Semantic Analysis (LSA) was conducted to determine to what extent tags are conceptually related to professionals' keywords; it was found that tags of higher specificity tended to have higher semantic relatedness to professionals' keywords. This leads to the conclusion that a term's power as a differentiator is related to its semantic relatedness to documents. The findings on tag attributes identified important bibliographic attributes of tags beyond describing the subjects or topics of a document, and showed that tags have essential attributes matching those defined in FRBR.
Furthermore, in terms of specific subject areas, the findings identified that taggers exhibited different tagging behaviors, with distinctive features and tendencies, on web documents characterizing heterogeneous digital media resources. These results lead to the conclusion that there should be increased awareness of diverse user needs by subject in order to improve metadata in practical applications. This dissertation research is a first necessary step toward utilizing social tagging in digital information organization by verifying its quality and efficacy. It combined quantitative (statistical) and qualitative (content analysis using FRBR) approaches to the vocabulary analysis of tags, providing a more complete examination of the quality of tags. Through the detailed analysis of tag properties undertaken in this dissertation, we have a clearer understanding of the extent to which social tagging can be used to replace (and in some cases improve upon) professional indexing.
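The VSM-based consistency idea can be sketched as follows: each indexer's term set becomes a vector, and two indexers' consistency is their cosine similarity. The weighting and aggregation details here are simplifications, not the dissertation's exact procedure.

```python
import math
from collections import Counter

# Vector-space-model (VSM) sketch of indexing consistency: represent each
# indexer's term assignments as a term-frequency vector and compare indexers
# by cosine similarity. Example terms below are made up for illustration.
def cosine_consistency(terms_a: list[str], terms_b: list[str]) -> float:
    va, vb = Counter(terms_a), Counter(terms_b)
    dot = sum(va[t] * vb[t] for t in va.keys() & vb.keys())
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

tagger = ["python", "tutorial", "programming", "web"]
professional = ["programming", "languages", "web", "instruction"]
print(f"consistency = {cosine_consistency(tagger, professional):.2f}")
```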

Relevance: 30.00%

Abstract:

Master's dissertation, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Civil e Ambiental, 2015.

Relevance: 30.00%

Abstract:

Aim: Rather than being rigid, habitual behaviours may be determined by dynamic mental representations that can adapt to context changes. This adaptive potential may result from particular conditions dependent on the interaction between two sources of mental construct activation: perceived context applicability and cognitive accessibility. Method: Two web-shopping simulations offering the choice between habitually chosen and non-habitually chosen food products were presented to participants. This involved two choice contexts differing in the perceived applicability of the habitual behaviour (low vs. high) and a measure of habitual behaviour chronicity. Results: Study 1 demonstrated a perceived applicability effect, with more habitual (non-organic) than non-habitual (organic) food products chosen in a high perceived applicability (familiar) context than in a low perceived applicability (new) context. The adaptive potential of habitual behaviour was evident in the consistency of habitual product choices across three successive choices, despite the decrease in perceived applicability. Study 2 evidenced this adaptive potential in participants with strong habitual behaviour (high chronic accessibility), who chose a habitual product (milk) more than a non-habitual product (orange juice) even when perceived applicability was reduced (new context). Conclusion: The results portray consumers as adaptive decision makers who can flexibly cope with changes in their (inner and outer) choice contexts.

Relevance: 30.00%

Abstract:

Master's dissertation, Ciências da Linguagem, Faculdade de Ciências Humanas e Sociais, Universidade do Algarve, 2010.

Relevance: 30.00%

Abstract:

Credible spatial information characterizing the structure and site quality of forests is critical to sustainable forest management and planning, especially given the increasing demands on, and threats to, forest products and services. Forest managers and planners are required to evaluate forest conditions over a broad range of scales, contingent on operational or reporting requirements. Traditionally, forest inventory estimates are generated via a design-based approach that involves generalizing sample plot measurements to characterize an unknown population across a larger area of interest. However, field plot measurements are costly, and as a consequence spatial coverage is limited. Remote sensing technologies have shown remarkable success in augmenting limited sample plot data to generate stand- and landscape-level spatial predictions of forest inventory attributes. Further enhancement of forest inventory approaches that couple field measurements with cutting-edge remotely sensed and geospatial datasets is essential to sustainable forest management. We evaluated a novel Random Forest based k-Nearest Neighbors (RF-kNN) imputation approach to couple remote sensing and geospatial data with field inventory collected by different sampling methods, in order to generate forest inventory information across large spatial extents. Forest inventory data collected by the FIA program of the US Forest Service were integrated with optical remote sensing and other geospatial datasets to produce biomass distribution maps for a part of the Lake States and species-specific site index maps for the entire Lake States region. Targeting a small-area application of state-of-the-art remote sensing, LiDAR (light detection and ranging) data were integrated with field data collected by an inexpensive method, called variable plot sampling, in the Ford Forest of Michigan Tech to derive a standing volume map in a cost-effective way. The outputs of the RF-kNN imputation were compared with independent validation datasets and extant map products based on different sampling and modeling strategies. The RF-kNN modeling approach was found to be very effective, especially for large-area estimation, and produced results statistically equivalent to the field observations or to estimates derived from secondary data sources. The models are useful to resource managers for operational and strategic purposes.
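The RF-kNN idea can be sketched as follows: a random forest fitted on reference plots defines a proximity (how often two samples fall in the same leaf), and a target pixel is imputed from its k most proximate reference plots. This is a simplified illustration with synthetic data, not the study's implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Sketch of Random Forest based k-nearest-neighbour (RF-kNN) imputation:
# reference plots with both field-measured attributes (y) and remote-sensing
# predictors (X) supply values for target pixels that have predictors only.
rng = np.random.default_rng(0)
X_ref = rng.normal(size=(100, 5))            # e.g. spectral + terrain metrics
y_ref = X_ref[:, 0] * 50 + 200 + rng.normal(scale=5, size=100)  # e.g. biomass
X_target = rng.normal(size=(10, 5))          # pixels to impute

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_ref, y_ref)
leaves_ref = rf.apply(X_ref)                 # (n_ref, n_trees) leaf indices
leaves_tgt = rf.apply(X_target)

k = 5
imputed = np.empty(len(X_target))
for i, row in enumerate(leaves_tgt):
    proximity = (leaves_ref == row).mean(axis=1)   # share of trees co-leafed
    neighbours = np.argsort(proximity)[-k:]        # k most similar ref plots
    imputed[i] = y_ref[neighbours].mean()          # impute attribute value
print(imputed.round(1))
```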

Relevance: 30.00%

Abstract:

Early intervention is the key to spoken language for hearing-impaired children. A severe hearing loss diagnosis in young children raises the urgent question of the optimal type of hearing aid device. As there are no recent data comparing selection criteria for a specific hearing aid device, the goal of the Hearing Evaluation of Auditory Rehabilitation Devices (hEARd) project (Coninx & Vermeulen, 2012) is to collect and analyze interlingually comparable normative data on the speech perception performance of children with hearing aids and children with cochlear implants (CI). METHOD: In various institutions for hearing rehabilitation in Belgium, Germany, and the Netherlands, the Adaptive Auditory Speech Test (AAST) was used in the hEARd project to determine speech perception abilities in kindergarten- and school-aged hearing-impaired children. Results of the speech audiometric procedures were matched to the unaided hearing loss values of children using hearing aids and compared to the results of children using CI. 277 data sets of hearing-impaired children were analyzed. Results of children using hearing aids were grouped by their unaided hearing loss values. The grouping followed the World Health Organization's (WHO) grading of hearing impairment, from mild (25–40 dB HL) to moderate (41–60 dB HL), severe (61–80 dB HL), and profound (80 dB HL and higher). RESULTS: AAST speech recognition in quiet showed significantly better performance for the CI group compared with the group of profoundly impaired hearing aid users as well as the group of severely impaired hearing aid users. However, the CI users' performance in speech perception in noise did not differ from that of the hearing aid users. Analyses of the collected data showed that children with a CI perform equivalently in speech perception in quiet to children using hearing aids with a "moderate" hearing impairment.
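The WHO-style grouping described above can be expressed as a small classification function; the band edges follow the abstract, while the handling of boundary values is an assumption.

```python
# Sketch of the WHO-style grading used above to group children by unaided
# hearing loss (dB HL). Band edges follow the abstract (25-40 mild, 41-60
# moderate, 61-80 severe, >80 profound); boundary handling is an assumption.
def who_grade(unaided_db_hl: float) -> str:
    if unaided_db_hl < 25:
        return "no impairment (per this grading)"
    if unaided_db_hl <= 40:
        return "mild"
    if unaided_db_hl <= 60:
        return "moderate"
    if unaided_db_hl <= 80:
        return "severe"
    return "profound"

print([who_grade(v) for v in (30, 55, 70, 95)])
# -> ['mild', 'moderate', 'severe', 'profound']
```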

Relevance: 30.00%

Abstract:

The synthetic control method (SCM) is a popular method recently developed for estimating the effect of an intervention when only a single unit has been exposed. Other similar, unexposed units are combined into a synthetic control unit intended to mimic the evolution of the exposed unit had it not been subject to exposure. As the inference relies on only a single observational unit, statistical inference is a challenge. In this paper, we examine the statistical properties of the estimator, study a number of features potentially yielding uncertainty in the estimator, discuss the rationale for statistical inference in relation to SCM, and provide a Web app to aid researchers in deciding whether SCM is powerful for a specific case study. We conclude that SCM is powerful with a limited number of controls in the donor pool and a fairly short pre-intervention time period. This holds as long as the parameter of interest is a parametric specification of the intervention effect, the post-intervention period is reasonably long, and the fit of the synthetic control unit to the exposed unit in the pre-intervention period is good.
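The fitting step at the core of SCM can be sketched as a constrained least-squares problem: choose non-negative donor weights summing to one that minimize pre-intervention prediction error. Covariate matching and inference are omitted in this simplified illustration on synthetic data.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the core synthetic control computation: find non-negative donor
# weights summing to one so the synthetic unit tracks the exposed unit over
# the pre-intervention period.
rng = np.random.default_rng(1)
T0, J = 20, 8                                  # pre-period length, donors
donors = rng.normal(size=(T0, J)).cumsum(axis=0)
true_w = np.array([.5, .3, .2, 0, 0, 0, 0, 0])
exposed = donors @ true_w + rng.normal(scale=0.1, size=T0)

def pre_period_mspe(w):
    """Mean squared pre-intervention prediction error of the synthetic unit."""
    return np.mean((exposed - donors @ w) ** 2)

res = minimize(
    pre_period_mspe,
    x0=np.full(J, 1 / J),
    bounds=[(0, 1)] * J,
    constraints={"type": "eq", "fun": lambda w: w.sum() - 1},
    method="SLSQP",
)
print(res.x.round(2))   # recovered donor weights, approx. (.5, .3, .2, 0, ...)
```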

Relevance: 30.00%

Abstract:

Web applications in general have undergone major technological changes over the past two decades, and with them the habits and expectations of the so-called digital generation. Paradoxically, despite these technological and behavioural upheavals, teaching and learning software (logiciels d'enseignement et d'apprentissage, LEA) has not followed the same curve of technological evolution. Indeed, its design model has remained so static that its pedagogical usefulness is questioned by pedagogy experts, according to whom current LEA do not take sufficient account of pedagogical theory. But how can these aspects be better taken into account in the LEA design process? Several approaches allow robust LEA to be designed. However, both pedagogy experts and software engineering experts have shown particular interest in the use of the pattern concept in this design process. Indeed, this concept captures the experience of experts and also elegantly simplifies the design process, and thereby its cost. A comparison of work using patterns to design LEA showed that there is no framework for synergy between the different actors on the design team, pedagogy experts on one side and software engineering experts on the other. Moreover, the life cycles proposed in this work are neither complete nor rigorously described in a way that would allow efficient LEA to be developed. Finally, the compared work does not show how pedagogical requirements can coexist with software requirements. Can the pattern concept help build robust LEA that satisfy pedagogical requirements? As a solution, this thesis proposes a pattern-based design approach for designing LEA suited to Web technologies. More specifically, the proposed methodical approach sets out the sequential steps required to design an LEA that meets pedagogical requirements. In addition, a repository is presented containing 110 patterns, collected and organized into packages. These patterns can easily be retrieved using the search guide described, for use in the design process. The design approach was validated with two application examples, allowing the conclusion that the LEA design approach is realistic and that the patterns are valid and functional. The proposed LEA design approach is original and stands apart from those found in the literature because it is entirely based on the pattern concept. The approach also takes pedagogical requirements into account. It is generic because it is independent of any software or hardware platform. However, the process of translating pedagogical requirements is not yet very intuitive or linear. Further work is needed to complete the results obtained, in order to translate the most complex and abstract pedagogical requirements into artifacts usable by software engineers. As a continuation of this thesis, an instantiation of the proposed patterns would be interesting, as would the definition of a pattern-based metamodel that could allow the specification of a modelling language specific to LEA.
Adding patterns that introduce a semantic layer to LEA could also be considered. This semantic layer would make it possible not only to adapt pedagogical scenarios, but also to automate the process of adapting to the needs of a particular learner. The proposed patterns could also be transformed into ontologies, which could facilitate the assessment of the learner's knowledge and provide the learner with structured information that is useful for learning and matches their learning needs.
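A pattern repository of the kind described (110 patterns organized into packages, retrieved via a search guide) could be represented along the following lines; the class names, fields, and example pattern are illustrative assumptions, not the thesis's actual catalogue.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a searchable design-pattern repository, hypothetical
# in all its details: names, packages, and fields are made-up examples.
@dataclass
class DesignPattern:
    name: str
    package: str                     # e.g. "pedagogical", "architectural"
    intent: str
    keywords: set[str] = field(default_factory=set)

class PatternRepository:
    def __init__(self, patterns: list[DesignPattern]):
        self._patterns = patterns

    def search(self, keyword: str, package: str | None = None):
        """Retrieve patterns by keyword, optionally within one package."""
        return [
            p for p in self._patterns
            if keyword.lower() in p.keywords
            and (package is None or p.package == package)
        ]

repo = PatternRepository([
    DesignPattern("Guided Tour", "pedagogical",
                  "Sequence learning resources along a narrative path",
                  {"sequence", "navigation", "scenario"}),
])
print([p.name for p in repo.search("scenario", package="pedagogical")])
```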

Relevance: 30.00%

Abstract:

Congenital vertebral malformations are common in brachycephalic "screw-tailed" dog breeds such as French bulldogs, English bulldogs, Boston terriers, and pugs. These malformations disrupt normal vertebral column anatomy and biomechanics, potentially leading to deformity of the vertebral column and subsequent neurological dysfunction. The initial aim of this work was to determine whether the congenital vertebral malformations identified in these breeds could be translated into a radiographic classification scheme used in humans, giving an improved classification with clear and well-defined terminology, in the expectation that this would facilitate future study and clinical management in the veterinary field. Two observers who were blinded to the neurologic status of the dogs classified each vertebral malformation based on the human classification scheme of McMaster, and were able to translate it successfully into a new classification scheme for veterinary use. The next aim was to assess the nature and impact of the vertebral column deformity engendered by these congenital malformations in the target breeds. As no gold standard exists in veterinary medicine for calculating the degree of deformity, the human equivalent, termed the Cobb angle, was adapted as a potential standard reference tool for veterinary practice. To validate the Cobb angle measurement method, a computerised semi-automatic technique was used and assessed by multiple independent observers. They observed not only that kyphosis was the most common vertebral column deformity, but also that patients with such deformity were more likely to suffer from neurological deficits, especially if their Cobb angle was above 35 degrees.
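The Cobb angle measurement can be sketched as the angle between two endplate lines, each defined by two landmark points on a radiograph; this is a simplified illustration of the semi-automatic measurement idea, not the study's software.

```python
import math

# Sketch of a Cobb angle computation: the angle between the endplate line of
# the cranial end vertebra and that of the caudal end vertebra, each given by
# two landmark points (x, y) on a lateral radiograph.
Point = tuple[float, float]

def cobb_angle(endplate_a: tuple[Point, Point],
               endplate_b: tuple[Point, Point]) -> float:
    """Angle in degrees between two endplate lines."""
    (ax1, ay1), (ax2, ay2) = endplate_a
    (bx1, by1), (bx2, by2) = endplate_b
    theta_a = math.atan2(ay2 - ay1, ax2 - ax1)
    theta_b = math.atan2(by2 - by1, bx2 - bx1)
    angle = abs(math.degrees(theta_a - theta_b)) % 180.0
    return min(angle, 180.0 - angle)

# Endplates tilted +20 and -20 degrees give a 40-degree Cobb angle,
# above the 35-degree threshold associated with neurological deficits.
a = ((0.0, 0.0), (1.0, math.tan(math.radians(20))))
b = ((0.0, 0.0), (1.0, -math.tan(math.radians(20))))
print(round(cobb_angle(a, b), 1))   # 40.0
```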