998 results for Bachman, Julie
Abstract:
BACKGROUND: Dyslipidemia has been linked to vascular complications of Type 1 diabetes (T1DM). We investigated the prospective associations of nuclear magnetic resonance-determined lipoprotein subclass profiles (NMR-LSP) and conventional lipid profiles with carotid intima-media thickness (IMT) in T1DM.
METHODS: NMR-LSP and conventional lipids were measured in a subset of Diabetes Control and Complications Trial (DCCT) participants (n = 455) at study entry ('baseline', 1983-89) and were related to carotid IMT determined by ultrasonography during the observational follow-up of the DCCT, the Epidemiology of Diabetes Interventions and Complications (EDIC) study, at EDIC Year 12 (2004-2006). Associations were assessed using multiple linear regression, stratified by gender and adjusted for HbA1c, diabetes duration, body mass index, albuminuria, DCCT randomization group, smoking status, statin use, and ultrasound device.
RESULTS: In men, significant positive associations were observed between some baseline NMR-subclasses of LDL (total IDL/LDL and large LDL) and common and/or internal carotid IMT, and between conventional total- and LDL-cholesterol and non-HDL-cholesterol and common carotid IMT, at EDIC Year 12; these persisted in adjusted analyses (p < 0.05). Large LDL particles and conventional triglycerides were positively associated with common carotid IMT changes over 12 years (p < 0.05). Inverse associations of mean HDL diameter and large HDL concentrations, and positive associations of small LDL with common and/or internal carotid IMT (all p < 0.05) were found, but did not persist in adjusted analyses. No significant associations were observed in women.
CONCLUSION: NMR-LSP-derived LDL particles, in addition to conventional lipid profiles, may help in identifying men with T1DM at highest risk for vascular disease.
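A minimal sketch of the sex-stratified, covariate-adjusted linear regression described in the METHODS above (not the authors' code; the data frame and column names are hypothetical):

```python
# Hypothetical sketch: one adjusted OLS model per sex, returning the
# coefficient and p-value for a chosen baseline lipid measure.
import pandas as pd
import statsmodels.formula.api as smf

def fit_stratified_model(df: pd.DataFrame, lipid: str) -> dict:
    """Fit an adjusted model in each sex stratum (column names are assumed)."""
    formula = (
        f"common_carotid_imt ~ {lipid} + hba1c + diabetes_duration_yrs + bmi "
        "+ albuminuria + C(dcct_treatment_group) + C(smoking_status) "
        "+ C(statin_use) + C(ultrasound_device)"
    )
    results = {}
    for sex, subset in df.groupby("sex"):  # stratify by sex
        fit = smf.ols(formula, data=subset).fit()
        results[sex] = (fit.params[lipid], fit.pvalues[lipid])
    return results

# Example: association of a baseline large-LDL concentration with IMT
# associations = fit_stratified_model(edic_df, "large_ldl_nmol_l")
```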
Abstract:
Accurately encoding the duration and temporal order of events is essential for survival and important to everyday activities, from holding conversations to driving in fast-flowing traffic. Although there is a growing body of evidence that the timing of brief events (< 1 s) is encoded by modality-specific mechanisms, it is not clear how such mechanisms register event duration. One approach gaining traction is a channel-based model; this envisages narrowly tuned, overlapping timing mechanisms that respond preferentially to different durations. The channel-based model predicts that adapting to a given event duration will result in overestimating and underestimating the duration of longer and shorter events, respectively. We tested the model by having observers judge the duration of a brief (600 ms) visual test stimulus following adaptation to longer (860 ms) and shorter (340 ms) stimulus durations. The channel-based model predicts perceived duration compression of the test stimulus in the former condition and perceived duration expansion in the latter condition. Duration compression occurred in both conditions, suggesting that the channel-based model does not adequately account for perceived duration of visual events.
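An illustrative numerical sketch of the channel-based prediction the abstract describes (not the authors' implementation; channel count, tuning width and adaptation strength are assumed values chosen only to show the predicted direction of bias):

```python
# Channels are Gaussian-tuned on log duration; adaptation reduces the gain of
# channels near the adapted duration, and perceived duration is decoded from
# the gain-weighted population response.
import numpy as np

prefs = np.logspace(np.log10(0.1), np.log10(2.0), 20)  # preferred durations (s)
sigma = 0.25                                            # tuning width (log units)

def responses(duration, gains):
    d = np.log(duration) - np.log(prefs)
    return gains * np.exp(-d**2 / (2 * sigma**2))

def adapted_gains(adaptor, strength=0.5):
    d = np.log(adaptor) - np.log(prefs)
    return 1.0 - strength * np.exp(-d**2 / (2 * sigma**2))

def decode(duration, gains):
    r = responses(duration, gains)
    return float(np.exp(np.sum(r * np.log(prefs)) / np.sum(r)))

test = 0.6
print("no adaptation:", decode(test, np.ones_like(prefs)))
print("after 860 ms adaptor:", decode(test, adapted_gains(0.86)))  # predicted compression
print("after 340 ms adaptor:", decode(test, adapted_gains(0.34)))  # predicted expansion
# The paper reports compression in both conditions, contrary to this prediction.
```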
Abstract:
OBJECTIVE: The present study aimed to evaluate the precision, ease of use and likelihood of future use of portion size estimation aids (PSEA).
DESIGN: A range of PSEA were used to estimate the serving sizes of a range of commonly eaten foods and rated for ease of use and likelihood of future usage.
SETTING: For each food, participants selected their preferred PSEA from a range of options including: quantities and measures; reference objects; measuring; and indicators on food packets. These PSEA were used to serve out various foods (e.g. liquid, amorphous, and composite dishes). Ease of use and likelihood of future use were noted. The foods were weighed to determine the precision of each PSEA.
SUBJECTS: Males and females aged 18-64 years (n 120).
RESULTS: The quantities and measures were the most precise PSEA (lowest range of weights for estimated portion sizes). However, participants preferred household measures (e.g. a 200 ml disposable cup), which were deemed easy to use (median rating of 5), likely to be used again in future (all scored either 4 or 5 on a scale from 1 = 'not very likely' to 5 = 'very likely to use again') and precise (narrow range of weights for estimated portion sizes). The majority indicated they would most likely use the PSEA when preparing a meal (94 %), particularly dinner (86 %), in the home (89 %; all P < 0.001) for amorphous grain foods.
CONCLUSIONS: Household measures may be precise, easy to use and acceptable aids for estimating the appropriate portion size of amorphous grain foods.
Abstract:
Objective: Postsecondary educational attainment is key to a successful transition to adulthood, economic self-sufficiency, and good mental and physical health. Method: Secondary analyses of school leavers' data were carried out to establish the postsecondary educational trajectories of students on the autism spectrum in the United Kingdom. Results: Findings show that students with autism who had attended mainstream secondary schools enter Further Education (post-16 vocational training) and Higher Education (University) institutions at a similar rate to other students, to study the full range of subjects on offer. However, they are more likely to be younger, to study at a lower academic level, and to remain living at home. Conclusion: While course completion data were not yet available, attainment data showed that prospects were improving, although more needs to be done to enable these young adults to achieve their postsecondary educational potential.
Abstract:
Biodiversity continues to decline in the face of increasing anthropogenic pressures such as habitat destruction, exploitation, pollution and introduction of alien species. Existing global databases of species’ threat status or population time series are dominated by charismatic species. The collation of datasets with broad taxonomic and biogeographic extents, and that support computation of a range of biodiversity indicators, is necessary to enable better understanding of historical declines and to project – and avert – future declines. We describe and assess a new database of more than 1.6 million samples from 78 countries representing over 28,000 species, collated from existing spatial comparisons of local-scale biodiversity exposed to different intensities and types of anthropogenic pressures, from terrestrial sites around the world. The database contains measurements taken in 208 (of 814) ecoregions, 13 (of 14) biomes, 25 (of 35) biodiversity hotspots and 16 (of 17) megadiverse countries. The database contains more than 1% of the total number of all species described, and more than 1% of the described species within many taxonomic groups – including flowering plants, gymnosperms, birds, mammals, reptiles, amphibians, beetles, lepidopterans and hymenopterans. The dataset, which is still being added to, is therefore already considerably larger and more representative than those used by previous quantitative models of biodiversity trends and responses. The database is being assembled as part of the PREDICTS project (Projecting Responses of Ecological Diversity In Changing Terrestrial Systems – www.predicts.org.uk). We make site-level summary data available alongside this article. The full database will be publicly available in 2015.
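A hedged sketch of the kind of site-level summary such a database supports, for example collapsing per-taxon sample records into per-site richness and abundance; the column names are illustrative assumptions, not the actual PREDICTS schema:

```python
# Hypothetical column names: "site_id", "land_use", "taxon", "abundance".
import pandas as pd

def site_summaries(samples: pd.DataFrame) -> pd.DataFrame:
    """Collapse per-taxon records into per-site richness and total abundance."""
    return (
        samples.groupby(["site_id", "land_use"])
        .agg(
            species_richness=("taxon", "nunique"),
            total_abundance=("abundance", "sum"),
        )
        .reset_index()
    )

# summaries = site_summaries(pd.read_csv("predicts_samples.csv"))
# summaries.groupby("land_use")["species_richness"].mean()
```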
Abstract:
In this paper, a spiking neural network (SNN) architecture is presented that simulates the sound localization ability of the mammalian auditory pathways using the interaural intensity difference cue. The lateral superior olive was the inspiration for the architecture, which required the integration of an auditory periphery (cochlea) model and a model of the medial nucleus of the trapezoid body. The SNN uses leaky integrate-and-fire excitatory and inhibitory spiking neurons, facilitating synapses and receptive fields. Experimentally derived head-related transfer function (HRTF) acoustical data from adult domestic cats were employed to train and validate the localization ability of the architecture; training used a supervised learning algorithm, the remote supervision method, to determine the azimuthal angles. The experimental results demonstrate that the architecture performs best when localizing high-frequency sound data, in agreement with the biology, and also shows a high degree of robustness when the HRTF acoustical data are corrupted by noise.
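Not the paper's architecture, but a minimal leaky integrate-and-fire sketch of the LSO-style computation it describes: excitatory drive from the ipsilateral ear and inhibitory drive from the contralateral ear make the unit's spike count reflect the interaural intensity difference. All constants below are illustrative assumptions.

```python
# Single LIF unit driven by the difference between ipsilateral (excitatory)
# and contralateral (inhibitory) sound intensity; Euler integration.
def lif_spike_count(i_ipsi, i_contra, t_sim=0.2, dt=1e-4,
                    tau=0.02, v_rest=-65e-3, v_thresh=-50e-3, v_reset=-65e-3):
    v = v_rest
    spikes = 0
    drive = 2.0e-3 * (i_ipsi - i_contra)   # net input (V) per unit intensity difference
    for _ in range(int(t_sim / dt)):
        dv = (-(v - v_rest) + drive) / tau
        v += dv * dt
        if v >= v_thresh:
            spikes += 1
            v = v_reset
    return spikes

# A sound louder on the ipsilateral side drives more spikes than the reverse.
print(lif_spike_count(i_ipsi=10.0, i_contra=2.0))
print(lif_spike_count(i_ipsi=2.0, i_contra=10.0))
```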
Abstract:
Just as readers feel immersed when a story line adheres to their experiences, users will more easily feel immersed in a virtual environment if the behavior of the characters in that environment adheres to their expectations, based on their lifelong observations of the real world. This paper introduces a framework that allows authors to establish natural, human-like behavior, physical interaction and emotional engagement of characters living in a virtual environment. Represented by realistic virtual characters, this framework allows people to feel immersed in an Internet-based virtual world in which they can meet and share experiences as naturally as they would in real life. Rather than just being visualized in a 3D space, the virtual characters (autonomous agents as well as avatars representing users) in the immersive environment facilitate social interaction and multi-party collaboration, mixing the virtual with the real.
Abstract:
Social experiences realized through teleconferencing systems are still quite different from face-to-face meetings. The awareness that we are online, in a world that is to some extent less real, prevents us from truly engaging in and enjoying the event. Several reasons for these differences have been identified. We think it is now time to bridge these gaps and to propose inspiring and innovative solutions that provide realistic, believable and engaging online experiences. We present a distributed and scalable framework named REVERIE that faces these challenges and provides a mix of such solutions. Applications built on top of the framework will be able to provide interactive, truly immersive, photo-realistic experiences to a multitude of users, experiences that will feel much more like face-to-face meetings than those offered by conventional teleconferencing systems.
Abstract:
The most biologically inspired artificial neurons are those of the third generation, termed spiking neurons, as individual pulses or spikes are the means by which stimuli are communicated. In essence, a spike is a short-term change in electrical potential and is the basis of communication between biological neurons. Unlike previous generations of artificial neurons, spiking neurons operate in the temporal domain and exploit time as a resource in their computation. In 1952, Alan Lloyd Hodgkin and Andrew Huxley produced the first model of a spiking neuron; their model describes the complex electro-chemical process that enables spikes to propagate through, and hence be communicated by, spiking neurons. Since then, improvements in experimental procedures in neurobiology, particularly with in vivo experiments, have provided an increasingly detailed understanding of biological neurons. For example, it is now well understood that the propagation of spikes between neurons requires neurotransmitter, which is typically in limited supply. When the supply is exhausted, neurons become unresponsive. The morphology of neurons and the number of receptor sites, among many other factors, mean that neurons consume the supply of neurotransmitter at different rates. This in turn produces variations over time in the responsiveness of neurons, yielding various computational capabilities. Such improvements in the understanding of the biological neuron have culminated in a wide range of neuron models, ranging from the computationally efficient to the biologically realistic. These models enable the modeling of neural circuits found in the brain.
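For reference, the Hodgkin-Huxley model mentioned above is conventionally written in its standard textbook form (this is not notation specific to this chapter):

$$C_m \frac{dV}{dt} = I_{\mathrm{ext}} - \bar{g}_{\mathrm{Na}}\, m^3 h\,(V - E_{\mathrm{Na}}) - \bar{g}_{\mathrm{K}}\, n^4\,(V - E_{\mathrm{K}}) - \bar{g}_{L}\,(V - E_{L}),$$

$$\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\,x, \qquad x \in \{m, h, n\},$$

where the gating variables $m$ and $h$ control the sodium conductance and $n$ controls the potassium conductance.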
Abstract:
The most biologically inspired artificial neurons are those of the third generation, termed spiking neurons, as individual pulses or spikes are the means by which stimuli are communicated. In essence, a spike is a short-term change in electrical potential and is the basis of communication between biological neurons. Unlike previous generations of artificial neurons, spiking neurons operate in the temporal domain and exploit time as a resource in their computation. In 1952, Alan Lloyd Hodgkin and Andrew Huxley produced the first model of a spiking neuron; their model describes the complex electro-chemical process that enables spikes to propagate through, and hence be communicated by, spiking neurons. Since then, improvements in experimental procedures in neurobiology, particularly with in vivo experiments, have provided an increasingly detailed understanding of biological neurons. For example, it is now well understood that the propagation of spikes between neurons requires neurotransmitter, which is typically in limited supply. When the supply is exhausted, neurons become unresponsive. The morphology of neurons and the number of receptor sites, among many other factors, mean that neurons consume the supply of neurotransmitter at different rates. This in turn produces variations over time in the responsiveness of neurons, yielding various computational capabilities. Such improvements in the understanding of the biological neuron have culminated in a wide range of neuron models, ranging from the computationally efficient to the biologically realistic. These models enable the modelling of neural circuits found in the brain. In recent years, much of the focus in neuron modelling has moved to the study of the connectivity of spiking neural networks. Spiking neural networks provide a vehicle for understanding, from a computational perspective, aspects of the brain's neural circuitry. This understanding can then be used to tackle some of the historically intractable issues with artificial neurons, such as scalability and the lack of variable binding. Current knowledge of feed-forward, lateral, and recurrent connectivity of spiking neurons, and of the interplay between excitatory and inhibitory neurons, is beginning to shed light on these issues through an improved understanding of the temporal processing capabilities and synchronous behaviour of biological neurons. This research topic aims to amalgamate current research aimed at tackling these phenomena.
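The limited-supply behaviour described above is often captured with a short-term synaptic depression model. The sketch below is a hedged, Tsodyks-Markram-style simplification (not code from the works collected here; parameters are illustrative): repeated presynaptic spikes deplete a releasable resource, shrinking the synaptic response until it recovers.

```python
# Depression-only short-term plasticity: each spike uses a fraction U of the
# available resources x, which recover toward 1 with time constant tau_rec.
import numpy as np

def depressed_psc_amplitudes(spike_times, U=0.5, tau_rec=0.8):
    """Relative postsynaptic response to each spike under resource depletion."""
    x = 1.0            # fraction of releasable resources
    last_t = None
    amplitudes = []
    for t in spike_times:
        if last_t is not None:
            x = 1.0 - (1.0 - x) * np.exp(-(t - last_t) / tau_rec)  # recovery
        amplitudes.append(U * x)   # response scales with resources released
        x -= U * x                 # resources consumed by this spike
        last_t = t
    return amplitudes

# A 20 Hz spike train: responses shrink as the resource runs down.
print(depressed_psc_amplitudes(np.arange(0, 0.5, 0.05)))
```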
Abstract:
REVERIE (REal and Virtual Engagement in Realistic Immersive Environments [1]) targets novel research addressing the demanding challenges involved in developing state-of-the-art technologies for online human interaction. The REVERIE framework enables users to meet, socialise and share experiences online by integrating cutting-edge technologies for 3D data acquisition and processing, networking, autonomy and real-time rendering. In this paper, we describe the innovative research showcased in the REVERIE integrated framework through richly defined use cases that demonstrate the validity of, and potential for, natural interaction in a virtual, immersive and safe environment. Previews of the REVERIE demo and its key research components can be viewed at www.youtube.com/user/REVERIEFP7.
Abstract:
Innovation in virtual reality and motion-sensing devices is pushing the development of virtual communication platforms towards completely immersive scenarios, which require full user interaction and create complex sensory experiences. This evolution influences user experiences and creates new paradigms for interaction, increasing the importance of user evaluation and assessment of new system interfaces and usability in order to validate platform design and development from the users' point of view. The REVERIE research project aims to develop a virtual environment service for realistic inter-personal interaction. This paper describes the design challenges faced during the development of the user interfaces and the methodological approach adopted for user evaluation and assessment.
Abstract:
REVERIE (REal and Virtual Engagement in Realistic Immersive Environments) [1] is a multimedia and multimodal framework that supports the creation of immersive games. The framework supports games that integrate technologies such as 3D spatial audio, detection of the player's body movement using Kinect and WIMO sensors, and NPCs (Non-Playable Characters) with advanced AI capabilities, featuring various levels of representation and gameplay within an immersive 3D environment. A demonstration game was developed for REVERIE: an adapted version of the popular Simon Says game. In the REVERIE version, a player tries to follow physical instructions issued by two autonomous agents with different degrees of realism. If a player follows a physical instruction correctly, they are awarded one point; if not, one point is deducted. This paper presents a technical overview of the game technologies integrated in the Simon Says demo and its evaluation by players with varying computer literacy skills. Finally, the potential of REVERIE as an immersive framework for gaming is discussed, followed by recommendations for improvements in future versions of the framework.
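A hypothetical sketch of the ±1 scoring rule described above; the function and move names are illustrative assumptions, not part of the REVERIE API.

```python
# +1 when the player's detected movement matches the agent's instruction,
# -1 otherwise.
def update_score(score: int, instructed_move: str, detected_move: str) -> int:
    return score + 1 if detected_move == instructed_move else score - 1

score = 0
rounds = [("raise_left_hand", "raise_left_hand"), ("jump", "crouch")]
for instructed, detected in rounds:
    score = update_score(score, instructed, detected)
print(score)  # 0: one correct (+1), one incorrect (-1)
```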
Abstract:
Considering the difficulties young people experience in entering written language and the many benefits that music brings to human beings, we wanted to explore what learning to write and learning music have in common. The two theoretical models retained (Ferreiro, 2000; Upitis, 1992) allow us to understand that children conceptualize each writing system by passing through four main levels. To determine the level of conceptualization of the alphabetic and musical writing systems (SEA and SEM) and how it evolves over the school year, 32 subjects from regular classes and 20 from special classes (language difficulties), all in the first cycle of elementary school, took a test three times (October, February and April). The results show, among other things, that students who are more advanced in their conceptualization of the SEA are also more advanced in that of the SEM at the beginning of the school year.
Abstract:
This research examines the written annotations (traces écrites) left by Secondary 4 students on a text while preparing for a summative assessment. Given the backgrounds they come from and their prior knowledge, these students' reading skills vary widely, with important consequences not only in their French class but in every other subject: the further students progress through school, the longer and more complex the texts teachers ask them to read. Yet reading results among Secondary 4 and 5 students remain weak. Numerous studies have shown that reading narrative texts mobilizes every aspect of reading and improves overall reading competence. A common way for French teachers to assess reading in the second cycle of secondary school is therefore to hand out a short narrative text (tale, short story, novel excerpt) to students in advance and to suggest that they mark it with written traces of their thinking or of the strategies they used, with the aim of supporting their comprehension and interpretation of the text. These traces may take the form of underlining, highlighting, marginal annotations or diagrams, for example. However, little scientific research has examined students' opinions, conceptions and practices when preparing for this kind of assessment. Our exploratory and descriptive study pursues two specific objectives: first, to describe the investment, opinions and attitudes, conceptions, and mode of investment of Secondary 4 students and, second, to describe the relations between these four dimensions in order to draw an overall portrait of students' relationship to written annotations when preparing for a reading assessment. To do so, we collected data in three phases: after administering questionnaires with Likert-scale items to a sample of 41 volunteer students, we examined the texts on which traces of their preparation work were visible. Finally, we met four of these students in semi-structured interviews, chosen according to their questionnaire responses and the type of traces they had left. To strengthen the validity of the results, the data collected in each phase were cross-referenced to obtain as much information as possible about students' relationship to written annotations. The results show that, although students generally hold fairly positive perceptions of using written annotations, the traces they typically leave on their texts are of limited relevance, in that most do not help establish the links essential to a global understanding of the narrative text, such as causal links. The interviews reveal that some students do indeed sense that certain annotations are more relevant than others, but they often do not know which ones, or how to use them while reading.
The relationship to written annotations is complex; it depends on the situation and varies from one individual to another according to several contextual factors: the student's prior experiences and knowledge, the resistance of the assigned text, and contact with peers. All of these factors influence what the student thinks, feels and does when preparing for a reading assessment.