975 results for Performances cognitives
Abstract:
A mobile ad hoc network (MANET) is a decentralized and infrastructure-less network. This thesis aims to provide system-level support for developers of applications or protocols in such networks. To this end, we propose contributions in both the algorithmic realm and the practical realm. In the algorithmic realm, we contribute to the field by proposing different context-aware broadcast and multicast algorithms for MANETs, namely six-shot broadcast, six-shot multicast, PLAN-B and a generic algorithmic approach to optimize the power consumption of existing algorithms. We compare each algorithm we propose to existing algorithms that are either probabilistic or context-aware, and we evaluate their performance based on simulations. We demonstrate that in some cases context-aware information, such as location or signal strength, can improve efficiency. In the practical realm, we propose a testbed framework, namely ManetLab, to implement and deploy MANET-specific protocols and to evaluate their performance. This testbed framework aims to increase the accuracy of performance evaluation compared to simulations, while keeping the ease of use that simulators offer for reproducing a performance evaluation. By evaluating the performance of different probabilistic algorithms with ManetLab, we observe that simulations and testbeds should be used in a complementary way. In addition to the above original contributions, we also provide two surveys about system-level support for ad hoc communications in order to establish the state of the art. The first covers existing broadcast algorithms; the second covers existing middleware solutions and the way they deal with privacy, especially location privacy.
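The six-shot algorithms themselves are not specified in the abstract; as a point of reference, here is a minimal sketch of the kind of probabilistic (gossip) broadcast the thesis compares against, where each node forwards a newly received message with a fixed probability p. The topology, the value of p and all names are illustrative assumptions.

```python
import random

def gossip_broadcast(neighbors, source, p=0.7, seed=42):
    """Probabilistic flooding: each node rebroadcasts on first receipt with probability p.

    neighbors maps node -> iterable of neighbouring nodes; returns the set of
    nodes the message reached.
    """
    rng = random.Random(seed)
    reached = {source}
    frontier = [source]               # nodes that will rebroadcast
    while frontier:
        node = frontier.pop()
        for nb in neighbors[node]:
            if nb not in reached:
                reached.add(nb)
                # context-unaware decision: no location or signal strength used
                if rng.random() < p:
                    frontier.append(nb)
    return reached

# toy 6-node MANET snapshot (adjacency at one instant)
topo = {0: [1, 2], 1: [0, 3], 2: [0, 3, 4], 3: [1, 2, 5], 4: [2, 5], 5: [3, 4]}
print(sorted(gossip_broadcast(topo, source=0)))
```

Context-aware variants of the kind the thesis studies would replace the fixed p with a decision based on, for example, the receiver's position or the received signal strength.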
Abstract:
General introduction: Of late, anger has been brewing among shareholders. Some of them feel wrongly excluded from certain important decisions and complain that they can exert no influence over how the company, of which they are nonetheless the owners, is managed. This feeling of powerlessness and even injustice is exacerbated by the award, to certain executives who are at times unscrupulous, of astronomical remuneration out of step with the results achieved. Although the general meeting is, under art. 698 para. 1 CO, the supreme governing body of the company, directors and officers give the impression of being omnipotent and exempt from all liability. Some shareholders consider, in other words, that public limited companies suffer from a lack of control. Does this feeling match reality? Our study attempts to answer this question by examining the possible hierarchical relationship between the general meeting and the board of directors, the duties of the latter, the conditions under which it may delegate management and, finally, the liability of its members. Given the breadth of the subject, we were forced to make choices, inevitably arbitrary ones. We decided to set aside the issue of corporate groups. Likewise, legislation on stock exchanges, banks and mergers will only be mentioned in passing. Finally, note that some of the issues addressed by our study are currently occupying the legislature; we have therefore taken account of the preparatory work carried out up to the end of 2008. In a first part, we begin by studying the relations and the possible hierarchical relationship between the general meeting, the supreme governing body of the company, and the board of directors, charged with exercising overall direction and managing the company's affairs. Determining their respective hierarchical positions should allow us to establish whether and how the general meeting may intervene in the powers of the board of directors. We then turn to the management of the company, the legislature's premise being that it must be exercised jointly by all the members of the board of directors insofar as it has not been delegated. Since joint exercise by all directors suits only the smallest companies, management is in practice very frequently delegated. We therefore examine the formal conditions and substantive limits of the delegation of management. In particular, we study the respective scope and content of the authorization in the articles of association and of the organizational regulations, and then review the list of non-transferable and inalienable powers of the board of directors set out in art. 716a para. 1 CO. We then dwell on the various recipients of the delegation, emphasizing the flexibility of the Swiss system, before considering the issue of combined functions at the head of the company and asking whether management may be delegated to the general meeting. We conclude the first part by studying how the general meeting can participate in the management of the company, and present in this regard the recent proposals of the Conseil fédéral.

In a second part, we observe that, given the scale and complexity of the tasks incumbent on it, the board of directors of a large company is nowadays widely advised to set up certain committees in order to rationalize its working methods and thereby optimize its performance. Unlike the developments in the first part, which concern all public limited companies regardless of size, those devoted to board committees are aimed mainly at publicly traded companies and large unlisted companies. Small and medium-sized enterprises would nonetheless be well advised to draw on them. We deal with the composition, role and tasks of each of the three usual committees: the audit committee, the compensation committee and the nomination committee. In this regard, we present the recommendations of the Swiss Code of Best Practice for Corporate Governance as well as certain rules in force in Great Britain and the United States, pioneering states in matters of corporate governance. The study of the committees' tasks also allows us to determine the extent of their own decision-making power. Finally, we address the particularly sensitive issue of the allocation of powers over the remuneration of executive bodies. Our third and final part is devoted to the liability of directors. We first set out the system of directors' liability in general, addressing the many controversies surrounding it and drawing in particular on recent decisions of the Tribunal fédéral. As management is only rarely exercised jointly by all directors, we then deal with the liability of directors who have delegated it. In this regard, we also dwell on the consequences of a delegation that does not meet the formal conditions. We end our work with a study of directors' liability in connection with tasks entrusted to a board committee. Since the board of directors has non-transferable and inalienable powers, and the principles of good corporate governance recommend entrusting some of these tasks to specialized committees, the question is whether and to what extent a division of tasks within the board of directors entails a division of liability.
Abstract:
Three case studies are presented to investigate the possibility of evaluating the memory and cognitive capacities of people with severe intellectual disability, with attention given to the ecological environment. Two 22-year-old male patients and one 27-year-old male patient, all three with severe intellectual disability and no verbal communication skills, were evaluated with a new and original paradigm adapted from experimental paradigms to study cognition in humans. We developed a test based on animal models to complement the "home" scale of the Adolescent and Adult Psychoeducational Profile (AAPEP), an assessment instrument designed for adolescents and adults with severe developmental disabilities. Results show that the new instrument is helpful, not only to staff members, who can better understand the poor performance of their patients in daily-life activities, but also in the elaboration of individual acquisition plans. These preliminary results demonstrate the value of developing a larger controlled study and of publishing our procedure.
Abstract:
Recent findings suggest that the visuo-spatial sketchpad (VSSP) may be divided into two sub-components processing dynamic or static visual information. This model may help to clarify the conflicting data concerning the functioning of the VSSP in schizophrenia. The present study examined patients with schizophrenia and matched controls in a new working memory paradigm involving dynamic (the Ball Flight Task, BFT) or static (the Static Pattern Task, SPT) visual stimuli. In the BFT, the responses of the patients were apparently based on the retention of the last set of segments of the perceived trajectory, whereas control subjects relied on a more global strategy. We propose that the patients' performance reflects a reduced capacity for chunking visual information, since they relied mainly on the retention of the last set of segments. This interpretation is supported by the patients' poor performance in the static task (SPT), which requires combining stimulus components into object representations. We suggest that the static/dynamic distinction may help us to understand the VSSP deficits in schizophrenia. This distinction also raises questions about the hypothesis that visuo-spatial working memory can simply be dissociated into visual and spatial sub-components.
Abstract:
A simple and sensitive liquid chromatography-electrospray ionization mass spectrometry method was developed for the simultaneous quantification in human plasma of all selective serotonin reuptake inhibitors (citalopram, fluoxetine, fluvoxamine, paroxetine and sertraline) and their main active metabolites (desmethyl-citalopram and norfluoxetine). A stable isotope-labeled internal standard was used for each analyte to compensate for the overall method variability, including extraction and ionization variations. After sample (250 μl) pre-treatment with acetonitrile (500 μl) to precipitate proteins, a fast solid-phase extraction procedure was performed using a mixed-mode Oasis MCX 96-well plate. Chromatographic separation was achieved in less than 9.0 min on an XBridge C18 column (2.1 × 100 mm; 3.5 μm) using a gradient of ammonium acetate (pH 8.1; 50 mM) and acetonitrile as mobile phase at a flow rate of 0.3 ml/min. The method was fully validated according to Société Française des Sciences et Techniques Pharmaceutiques protocols and the latest Food and Drug Administration guidelines. Six-point calibration curves were used to cover a large concentration range: 1-500 ng/ml for citalopram, desmethyl-citalopram, paroxetine and sertraline; 1-1000 ng/ml for fluoxetine and fluvoxamine; and 2-1000 ng/ml for norfluoxetine. Good quantitative performance was achieved in terms of trueness (84.2-109.6%), repeatability (0.9-14.6%) and intermediate precision (1.8-18.0%) over the entire assay range, including the lower limit of quantification. Internal-standard-normalized matrix effects were lower than 13%. The accuracy profiles (total error) were mainly included within the acceptance limits of ±30% for biological samples. The method was successfully applied to routine therapeutic drug monitoring of more than 1600 patient plasma samples over 9 months. The β-expectation tolerance intervals determined during the validation phase were consistent with the results of quality control samples analyzed during routine use. This method is therefore precise and suitable both for therapeutic drug monitoring and for pharmacokinetic studies in most clinical laboratories.
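For readers unfamiliar with the validation vocabulary used above, the sketch below shows one common way to compute trueness, repeatability and intermediate precision from replicated quality-control runs, using the usual one-way ANOVA decomposition with runs as groups. The QC values are invented for illustration; the SFSTP protocols prescribe the exact design, which is not reproduced here.

```python
import statistics as st

def validation_metrics(runs, nominal):
    """runs: list of lists, replicate concentrations per run (balanced design)."""
    all_vals = [v for run in runs for v in run]
    grand_mean = st.mean(all_vals)
    trueness = 100 * grand_mean / nominal              # mean recovery vs. nominal

    n = len(runs[0])                                   # replicates per run
    run_means = [st.mean(r) for r in runs]
    ms_within = st.mean([st.variance(r) for r in runs])  # within-run mean square
    ms_between = n * st.variance(run_means)              # between-run mean square
    var_rep = ms_within                                  # repeatability variance
    var_between = max((ms_between - ms_within) / n, 0.0)
    cv_rep = 100 * var_rep ** 0.5 / grand_mean           # repeatability CV (%)
    cv_ip = 100 * (var_rep + var_between) ** 0.5 / grand_mean  # intermediate precision CV (%)
    return trueness, cv_rep, cv_ip

qc = [[9.6, 10.1, 9.9], [10.4, 10.2, 10.6], [9.8, 10.0, 10.3]]  # 3 runs x 3 replicates
print(validation_metrics(qc, nominal=10.0))
```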
Abstract:
AIM: MRI and PET with 18F-fluoro-ethyl-tyrosine (FET) have been increasingly used to evaluate patients with gliomas. Our purpose was to assess the additive value of MR spectroscopy (MRS), diffusion imaging and dynamic FET-PET for glioma grading. PATIENTS, METHODS: 38 patients (age 42 ± 15 years; F/M ratio 0.46) with untreated, histologically proven brain gliomas were included. All underwent conventional MRI, MRS, diffusion sequences and FET-PET within 3-4 weeks. The performances of the tumour FET time-activity curve, the early-to-middle SUVmax ratio, the choline/creatine ratio and the ADC histogram distribution pattern for glioma grading were assessed against histology. Combinations of these parameters and their respective odds were also evaluated. RESULTS: The tumour time-activity curve reached the best accuracy (67%) when taken alone to distinguish between low- and high-grade gliomas, followed by ADC histogram analysis (65%). Combining the time-activity curve with ADC histogram analysis improved the sensitivity from 67% to 86% and the specificity from 63-67% to 100% (p < 0.008). On multivariate logistic regression analysis, a negative slope of the tumour FET time-activity curve nevertheless remained the best predictor of high-grade glioma (odds 7.6, SE 6.8, p = 0.022). CONCLUSION: The combination of dynamic FET-PET and diffusion MRI reached good performance for glioma grading. The use of FET-PET/MR may be highly relevant in the initial assessment of primary brain tumours.
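As a toy illustration of how two binary markers can be scored against histology, the sketch below computes sensitivity and specificity for each marker alone and for one joint rule. The study's actual combination rule and patient data are not given here, so all labels are invented.

```python
def sens_spec(pred, truth):
    """pred/truth: 1 = high-grade positive; returns (sensitivity, specificity)."""
    tp = sum(p and t for p, t in zip(pred, truth))
    tn = sum(not p and not t for p, t in zip(pred, truth))
    return tp / sum(truth), tn / (len(truth) - sum(truth))

truth = [1, 1, 1, 1, 0, 0, 0, 0]   # histology: 1 = high-grade glioma
tac   = [1, 1, 0, 1, 0, 1, 0, 0]   # negative FET time-activity-curve slope
adc   = [1, 1, 1, 0, 0, 0, 1, 0]   # suspicious ADC histogram pattern
both  = [a and b for a, b in zip(tac, adc)]   # one plausible joint rule
for name, pred in [("TAC", tac), ("ADC", adc), ("TAC&ADC", both)]:
    print(name, sens_spec(pred, truth))
```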
Abstract:
We present a study on the development and evaluation of a fully automated radio-frequency glow discharge system devoted to the deposition of amorphous thin-film semiconductors and insulators. The following aspects were carefully addressed in the design of the reactor: (1) cross-contamination by dopants and unstable gases, (2) capability of fully automated operation, (3) precise control of the discharge parameters, particularly the substrate temperature, and (4) high chemical purity. The new reactor, named ARCAM, is a multiplasma-monochamber system consisting of three separate plasma chambers located inside the same isothermal vacuum vessel. Thus, the system benefits from the advantages of multichamber systems but keeps the simplicity and low cost of monochamber systems. The evaluation of the reactor's performance showed that the oven-like structure, combined with differential dynamic pumping, provides high chemical purity in the deposition chamber. Moreover, studies of the effects associated with plasma recycling of material from the walls and with the thermal decomposition of diborane showed that the multiplasma-monochamber design is efficient for producing abrupt interfaces in hydrogenated amorphous silicon (a-Si:H) based devices. Special attention was also paid to the optimization of plasma conditions for the deposition of low-density-of-states a-Si:H. Hence, we also present results concerning the effects of the geometry, the substrate temperature, the radio-frequency power and the silane pressure on the properties of the a-Si:H films. In particular, we found that low-density-of-states a-Si:H can be deposited over a wide range of substrate temperatures (100°C
Abstract:
Modeling water movement in non-saturated soil usually requires a large number of parameters and variables, such as initial soil water content, saturated water content and saturated hydraulic conductivity, which can be assessed relatively easily. Water flow in soil is usually modeled by a nonlinear partial differential equation known as the Richards equation. Since this equation cannot be solved analytically in certain cases, one way to approach its solution is through numerical algorithms. The success of numerical models in describing the dynamics of water in the soil is closely related to the accuracy with which the water-physical parameters are determined. That has been a major challenge in the use of numerical models, because these parameters are generally difficult to determine, presenting great spatial variability in the soil. It is therefore necessary to develop and use methods that properly incorporate the uncertainties inherent in water displacement in soils. In this paper, a model based on fuzzy logic is used as an alternative to describe water flow in the vadose zone. This fuzzy model was developed to simulate the displacement of water in a non-vegetated crop soil during the period known as the emergence phase. The model consists of a Mamdani fuzzy rule-based system in which the rules are based on the moisture content of adjacent soil layers. The fuzzy system's performance was evaluated by comparing the simulated evolution of moisture profiles over time with those obtained in the field. The results obtained with the fuzzy model reproduced the soil moisture profiles satisfactorily.
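For reference, the one-dimensional Richards equation the abstract refers to can be written (with θ the volumetric water content, h the pressure head, K(h) the unsaturated hydraulic conductivity and z the vertical coordinate, positive upward) as

$$\frac{\partial \theta}{\partial t} = \frac{\partial}{\partial z}\left[K(h)\left(\frac{\partial h}{\partial z} + 1\right)\right],$$

whose nonlinearity in K(h) and θ(h) is what usually forces a numerical treatment. The sketch below is a minimal Mamdani-style inference step, assuming triangular membership functions, min/max rule strengths and centroid defuzzification over singleton outputs; the single rule pair only illustrates the adjacent-layer idea, not the paper's actual rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def mamdani_flux(theta_upper, theta_lower):
    """Fuzzy estimate of downward moisture flux between two adjacent layers."""
    # fuzzify the volumetric moisture contents (illustrative 0..0.5 range)
    dry_u, wet_u = tri(theta_upper, 0.0, 0.1, 0.25), tri(theta_upper, 0.15, 0.35, 0.5)
    dry_l, wet_l = tri(theta_lower, 0.0, 0.1, 0.25), tri(theta_lower, 0.15, 0.35, 0.5)
    # rule strengths (min for AND, max for OR)
    high_flux = min(wet_u, dry_l)                          # wet above dry: strong flux
    low_flux = max(min(dry_u, dry_l), min(wet_u, wet_l))   # similar layers: weak flux
    # centroid defuzzification over two singleton outputs (cm/day, illustrative)
    num = high_flux * 1.0 + low_flux * 0.1
    den = high_flux + low_flux
    return num / den if den else 0.0

print(mamdani_flux(0.40, 0.08))   # wet over dry: flux near 1.0
print(mamdani_flux(0.12, 0.10))   # both dry: flux near 0.1
```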
Abstract:
Using a test-retest procedure, the long-term stability of the French WISC-IV index scores was evaluated. The average test-retest interval was 2.33 years. The sample consisted of 96 non-clinical children aged 8 to 12 years. Comparisons of the means of the two assessments show no statistically significant difference for the Verbal Comprehension Index (VCI), Perceptual Reasoning Index (PRI), Working Memory Index (WMI), General Ability Index (GAI) and Full Scale IQ (FSIQ). At the inter-individual level, test-retest correlations indicate good long-term stability for VCI, GAI and FSIQ (ranging from .81 to .82). An analysis of performance differences between the two assessments indicates satisfactory intra-individual stability for WMI and GAI. In sum, only GAI demonstrates satisfactory long-term stability at both the inter- and intra-individual level.
Abstract:
Advanced kernel methods for remote sensing image classification. Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009.
The technical developments of recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are available to users. However, even if these advances open more and more possibilities in the use of digital imagery, they also raise several problems of storage and treatment. The latter is considered in this thesis: the processing of very high spatial and spectral resolution images is treated with approaches based on data-driven algorithms relying on kernel methods. In particular, the problem of image classification, i.e. the categorization of the image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented. The emphasis is put on algorithmic efficiency and on the simplicity of the proposed approaches, to avoid overly complex models that would not be adopted by users. The major challenge of the thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the models proposed have been developed keeping in mind the need for such a synergy. Four models are proposed: first, an adaptive model learning the relevant image features is proposed to solve the problem of high dimensionality and collinearity of the image features. This model automatically provides an accurate classifier and a ranking of the relevance of the single features (the spectral bands). The scarcity and unreliability of labeled information are the common root of the second and third models: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase the robustness and quality of the description of the data. Both solutions have been explored, resulting in two methodological contributions based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs, so far never considered in remote sensing, is addressed in the last model, which, by integrating output similarity into the model, opens new challenges and opportunities for remote sensing image processing.
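A minimal sketch of the active-learning loop described above: the machine repeatedly queries the user (the "oracle") for the label of the sample it is least certain about. A nearest-centroid classifier over two-feature "pixels" stands in for the thesis's kernel machines and image data; all names and values are illustrative assumptions.

```python
def centroid(pts):
    return tuple(sum(c) / len(pts) for c in zip(*pts))

def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def active_learning(labeled, pool, oracle, rounds=3):
    """labeled: {point: class}; pool: unlabeled points; oracle(point) -> class."""
    for _ in range(rounds):
        cents = {c: centroid([p for p, l in labeled.items() if l == c])
                 for c in set(labeled.values())}
        # uncertainty sampling: small gap between the two nearest class
        # centroids means the classifier is unsure about that point
        def uncertainty(p):
            d = sorted(dist2(p, c) for c in cents.values())
            return d[1] - d[0]
        query = min(pool, key=uncertainty)
        labeled[query] = oracle(query)    # the user supplies the label
        pool.remove(query)
    return labeled

oracle = lambda p: "water" if p[0] < 0.5 else "soil"   # stands in for the user
labeled = {(0.1, 0.2): "water", (0.9, 0.8): "soil"}
pool = [(0.45, 0.5), (0.2, 0.1), (0.55, 0.6), (0.8, 0.9)]
print(active_learning(labeled, pool, oracle))
```

Note how the first query lands on (0.45, 0.5), the point nearest the boundary between the two centroids, which is exactly where a user's label is most informative.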
Abstract:
Delta(9)-Tetrahydrocannabinol (THC) is frequently found in the blood of drivers suspected of driving under the influence of cannabis or involved in traffic crashes. The present study used a double-blind crossover design to compare the effects of medium (16.5 mg THC) and high doses (45.7 mg THC) of hemp milk decoctions, or of a medium dose of dronabinol (20 mg synthetic THC, Marinol), on several skills required for safe driving. Forensic interpretation of cannabinoid blood concentrations was attempted using the models proposed by Daldrup (cannabis influencing factor, or CIF) and by Huestis and coworkers. First, the time-concentration profiles of THC, 11-hydroxy-Delta(9)-tetrahydrocannabinol (11-OH-THC, the active metabolite of THC) and 11-nor-9-carboxy-Delta(9)-tetrahydrocannabinol (THCCOOH) in whole blood were determined by gas chromatography-mass spectrometry with negative ion chemical ionization. Compared to smoking studies, relatively low concentrations were measured in blood. The highest mean THC concentration (8.4 ng/mL) was reached 1 h after ingestion of the strongest decoction. The mean maximum 11-OH-THC level (12.3 ng/mL) slightly exceeded that of THC. THCCOOH reached its highest mean concentration (66.2 ng/mL) 2.5-5.5 h after intake. Individual blood levels showed considerable intersubject variability. The willingness to drive was influenced by the importance of the requested task: under significant cannabinoid influence, the participants refused to drive when asked whether they would agree to accomplish several unimportant tasks (e.g., driving a friend to a party). Most of the participants reported a significant feeling of intoxication and did not appreciate the effects, notably those felt after drinking the strongest decoction. Road-sign and tracking tests revealed obvious and statistically significant differences between placebo and treatments, with marked impairment detected after ingestion of the strongest decoction. A CIF value greater than 10, the CIF relying on the molar ratio of the main active to inactive cannabinoids, was found to correlate with a strong feeling of intoxication; it also coincided with a significant decrease in the willingness to drive and with significant impairment in tracking performance. The mathematical model II proposed by Huestis et al. (1992) provided at best a rough estimate of the time of oral administration, with 27% of actual values falling outside the 95% confidence interval. The sum of the THC and 11-OH-THC blood concentrations provided a better estimate of impairment than THC alone. This controlled clinical study points out the negative influence on fitness to drive of medium or high oral doses of THC or dronabinol.
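The abstract describes the CIF as relying on the molar ratio of the main active to inactive cannabinoids. The sketch below implements just that ratio from blood concentrations, using standard molar masses; Daldrup's published CIF additionally involves the THC-COOH glucuronide, so this is an illustration of the idea rather than the forensic formula.

```python
MW = {"THC": 314.5, "OH_THC": 330.5, "THC_COOH": 344.5}   # g/mol, standard values

def molar_ratio(thc, oh_thc, thc_cooh):
    """Concentrations in ng/mL; returns the active/inactive molar ratio."""
    active = thc / MW["THC"] + oh_thc / MW["OH_THC"]       # nmol/L
    inactive = thc_cooh / MW["THC_COOH"]                   # nmol/L
    return active / inactive

# peak mean values reported in the abstract (note: the peaks occur at
# different times, so this only exercises the function, not a real case)
print(round(molar_ratio(8.4, 12.3, 66.2), 2))
```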
Abstract:
In Switzerland, 9.1% of the general population report regular consumption of benzodiazepines (BZD). In the 65-74 age group, 2% of cases display high-risk alcohol consumption, and moderate-risk consumption is present in 5% of cases. With BZD, cognitive difficulties set in insidiously; with alcohol, daily consumption of fairly high quantities may lead to cognitive deterioration. At early stages, alcohol abusers show preserved neuropsychological performance. Gradually, the deficits affect the remaining cognitive functions and become irreversible. This review indicates that chronic consumption of alcohol and BZD in old age is at the origin of major clinical difficulties that call for specific training of both psychiatrists and general practitioners.
Abstract:
The purpose of this article is to treat a currently much-debated issue: the effects of age on second language learning. To do so, we contrast data collected by our research team from over one thousand seven hundred young and adult learners with four popular beliefs or generalizations which, while deeply rooted in this society, are not always corroborated by our data. Two of these generalizations about Second Language Acquisition (languages spoken in the social context) seem to be widely accepted: a) older children, adolescents and adults are quicker and more efficient at the first stages of learning than are younger learners; b) in a natural context, children with an early start are more liable to attain higher levels of proficiency. However, in the context of Foreign Language Acquisition, the context in which we collect our data, this second generalization is difficult to verify due to the low number of instructional hours (a maximum of some 800 hours) and the lower levels of language exposure time provided. The design of our research project has allowed us to study differences observed with respect to the age of onset (ranging from 2 to 18+), but in this article we focus on students who began English instruction at the age of 8 (LOGSE Educational System) and those who began at the age of 11 (EGB). We have collected data from both groups after a period of 200 (Time 1) and 416 instructional hours (Time 2), and we are currently collecting data after a period of 726 instructional hours (Time 3). We have designed and administered a variety of tests: tests of English production and reception, both oral and written, within both academic and communicatively oriented approaches, and tests of the learners' L1 (Spanish and Catalan), as well as a questionnaire eliciting personal and sociolinguistic information. The questions we address and the relevant empirical evidence are as follows: 1. "For young children, learning languages is a game. They enjoy it more than adults." Our data demonstrate that the situation is not quite so. Firstly, at the levels of both Primary and Secondary education (ranging from 70.5% in 11-year-olds to 89% in 14-year-olds), students have a positive attitude towards learning English. Secondly, there is a difference between the two groups with respect to the factors they cite as responsible for their motivation to learn English: the younger students cite intrinsic factors, such as the games they play, the methodology used and the teacher, whereas the older students cite extrinsic factors, such as the role of their knowledge of English in the achievement of their future professional goals. 2. "Young children have more resources to learn languages." Here our data suggest just the opposite. The ability to employ learning strategies (actions or steps used) increases with age. Older learners' strategies are more varied and cognitively more complex. In contrast, younger learners depend more on their interlocutor and on external resources, and therefore have a lower level of autonomy in their learning. 3. "Young children don't talk much but understand a lot." This third generalization does seem to be confirmed, at least to a certain extent, by our data on differences due to the age factor in productive use of the target language. As seen above, the comparably slower progress of the younger learners is confirmed. Our analysis of interpersonal receptive abilities also demonstrates the advantage of the older learners.
Nevertheless, with respect to passive receptive activities (for example, simple recognition of words or sentences), no great differences are observed. Statistical analyses suggest that in this test, in contrast to the others analyzed, the dominance of the subjects' L1s (reflecting a cognitive capacity that grows with age) has no significant influence on the learning process. 4. "The sooner they begin, the better their results will be in written language." This is not completely confirmed by our research either. First of all, we observe that certain compensatory strategies disappear only with age, not with the number of instructional hours. Secondly, given an identical number of instructional hours, the older subjects obtain better results. With respect to our analysis of data from subjects of the same age (12 years old) but with a different number of instructional hours (200 and 416 respectively, as they began at the ages of 11 and 8), we observe that those who began earlier excel only in the area of lexical fluency. In conclusion, the faster progress of older learners appears to be due to their higher level of cognitive development, a factor which allows them to benefit more from formal or explicit instruction in the school context. Younger learners, however, do not benefit from the quantity and quality of linguistic exposure typical of a natural acquisition context, in which they would be able to make use of implicit learning abilities. It seems clear, then, that the initiative in this country to begin foreign language instruction earlier will have positive effects only if it is combined with either higher levels of exposure time to the foreign language or, alternatively, with its use as the language of instruction in other areas of the curriculum.
Abstract:
OBJECTIVE: To evaluate an automated seizure detection (ASD) algorithm in EEGs with periodic and other challenging patterns. METHODS: Selected EEGs recorded in patients over 1 year old were classified into four groups: A. periodic lateralized epileptiform discharges (PLEDs) with intermixed electrical seizures; B. PLEDs without seizures; C. electrical seizures and no PLEDs; D. no PLEDs or seizures. Recordings were analyzed by the Persyst P12 software and compared to the raw EEG interpreted by two experienced neurophysiologists; positive percent agreement (PPA) and false-positive rates per hour (FPR) were calculated. RESULTS: We assessed 98 recordings (group A: 21 patients; B: 29; C: 17; D: 31). Total duration was 82.7 h (median: 1 h), containing 268 seizures. The software detected 204 (76.1%) of the seizures; all ictal events were captured in 29/38 (76.3%) patients, and in only 3 (7.7%) were no seizures detected. Median PPA was 100% (range 0-100; interquartile range 50-100), and the median FPR was 0/h (range 0-75.8; interquartile range 0-4.5); however, lower performance was seen in the groups containing periodic discharges. CONCLUSION: This analysis provides data regarding the yield of ASD in a particularly difficult subset of EEG recordings, showing that periodic discharges may bias the results. SIGNIFICANCE: Ongoing refinements in this technique might enhance its utility and lead to more extensive application.
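The two agreement metrics named in the abstract are easy to state precisely. In the sketch below (invented event times), PPA is the percentage of reference seizures overlapped by at least one detection, and FPR is the number of detections overlapping no reference seizure divided by the recording duration; simple interval overlap stands in for whatever matching criterion the study actually used.

```python
def overlaps(a, b):
    """True if half-open intervals a and b (start, end) intersect."""
    return a[0] < b[1] and b[0] < a[1]

def ppa_fpr(reference, detections, hours):
    """reference/detections: lists of (start, end) seizure intervals in seconds."""
    detected = sum(any(overlaps(r, d) for d in detections) for r in reference)
    false_pos = sum(not any(overlaps(d, r) for r in reference) for d in detections)
    ppa = 100 * detected / len(reference) if reference else float("nan")
    return ppa, false_pos / hours

ref = [(100, 160), (400, 470), (900, 950)]    # expert-marked seizures
det = [(110, 150), (405, 460), (1200, 1230)]  # software detections
print(ppa_fpr(ref, det, hours=1.0))           # approx. (66.7, 1.0)
```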
Abstract:
Over the past few years, technological breakthroughs have helped competitive sports attain new levels. Training techniques, athlete management and methods to analyse specific technique and performance have sharpened, leading to performance improvement. Alpine skiing is no different. The objective of the present work was to study the technique of highly skilled alpine skiers performing giant slalom, in order to determine the quantity of energy that skiers can produce to increase their speed. To reach this goal, several tools were developed to allow field testing on ski slopes; a multi-camera system, a wireless synchronization system, an aerodynamic drag model and force platforms were specifically designed and built. The analyses performed using these different tools highlighted the possibility for several athletes to increase their energy by approximately 1.5% using muscular work. Nevertheless, the athletes were on average not able to use their muscular work in an efficient way. By offering functional tools such as drift analysis using combined data from GPS and inertial sensors, or trajectory analysis based on tracking morphological points, this research makes possible the analysis of alpine skiers' technique and performance in real training conditions. The author wishes this work to serve as a basis for continued knowledge and understanding of alpine skiing technique.
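The energy figure quoted above can be made concrete with a simple mechanical-energy balance: any change in kinetic plus potential energy between two gates that is not explained by gravity and by drag/friction losses is attributed to the skier's muscular work. All numbers and the lumped losses term below are illustrative assumptions, not measurements from the thesis.

```python
def mechanical_energy(mass, speed, height, g=9.81):
    """Kinetic + potential energy (J) of the skier at one point on the slope."""
    return 0.5 * mass * speed ** 2 + mass * g * height

def muscular_work(mass, v1, h1, v2, h2, losses):
    """Energy (J) the skier added between two gates, once losses are restored."""
    e1 = mechanical_energy(mass, v1, h1)
    e2 = mechanical_energy(mass, v2, h2)
    return e2 - e1 + losses          # losses: drag + snow friction (J, positive)

m = 80.0                              # skier + equipment (kg), illustrative
w = muscular_work(m, v1=15.0, h1=50.0, v2=18.0, h2=40.0, losses=4600.0)
e1 = mechanical_energy(m, 15.0, 50.0)
print(f"{w:.0f} J, {100 * w / e1:.1f}% of initial energy")
```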