12 results for Low Speed Switched Reluctance Machine
at Université de Lausanne, Switzerland
Abstract:
Recent studies have shown that in humans the germinal center reactions produce three types of V(D)J-mutated B cells in similar proportions, i.e. Ig-switched, IgD-IgM+ (IgM-only) and IgD+IgM+ cells, and that together they form the CD27+ compartment of recirculating B cells. We investigated the Ig isotype switch capacity of these cells. Peripheral blood B cell subsets were sorted, and IgG subclass secretion in the presence or absence of IL-4 was compared in B cell assays that lead to Ig secretion in all B cells (coculture with EL-4 thymoma cells) or only in CD27+ B cells (CD40L stimulation). Already-switched IgG+ B cells showed no significant sequential switch, and IgM-only cells also had a low switch capacity, but IgD+CD27+ B cells switched as much as IgD+CD27- B cells to all IgG subclasses. Thus, in switched B cells alterations compromising further switch options occur frequently; IgM-only cells may result from an aborted switch. However, IgD+CD27+ human B cells, extensively V(D)J-mutated and "naive" regarding switch, build up a repertoire of B cells combining (1) novel cross-reactive specificities, (2) increased differentiation capacity (including after T-independent stimulation by Staphylococcus aureus Cowan I) and (3) the capacity to produce appropriate isotypes when they respond to novel pathogens.
Abstract:
It is established that the ratio between step length (SL) and step frequency (SF) is constant over a large range of walking speeds. However, few data are available on the spontaneous variability of this ratio during unconstrained outdoor walking, in particular over a sufficient number of steps. The purpose of the present study was to assess the inter- and intra-subject variability of spatio-temporal gait characteristics [SL, SF and walk ratio (WR = SL/SF)] while walking at different freely selected speeds. Twelve healthy subjects walked three times along a 100-m athletic track at (1) a slower than preferred speed, (2) their preferred speed and (3) a faster than preferred speed. Two professional GPS receivers providing 3D positions assessed the walking speed and SF with high precision (less than 0.5% error). Intra-subject variability was calculated as the variation among eight consecutive 5-s samples. WR was found to be constant at preferred and fast speeds [0.41 (0.04) m·s and 0.41 (0.05) m·s, respectively] but was higher at slow speeds [0.44 (0.05) m·s]. In other words, between slow and preferred speed, the speed increase was mediated more by a change in SF than in SL. The intra-subject variability of WR was low under the preferred [CV, coefficient of variation = 1.9 (0.6)%] and fast [CV = 1.8 (0.5)%] speed conditions, but higher under the slow speed condition [CV = 4.1 (1.5)%]. On the other hand, the inter-subject variability of WR was 11%, 10% and 12% at slow, preferred and fast walking speeds, respectively. It is concluded that the GPS method is able to capture basic gait parameters over a short period of time (5 s). A specific gait pattern for slow walking was observed. Furthermore, it seems that walking patterns in free-living conditions exhibit low intra-individual variability, but that there is substantial variability between subjects.
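The two quantities at the heart of this study are simple to compute from step data; a minimal sketch (the eight WR samples below are hypothetical, not the study's data):

```python
import numpy as np

def walk_ratio(step_length_m, step_freq_hz):
    """Walk ratio WR = SL / SF (units: m·s)."""
    return step_length_m / step_freq_hz

def intra_subject_cv(samples):
    """Coefficient of variation (%) across consecutive samples."""
    samples = np.asarray(samples, dtype=float)
    return 100.0 * samples.std(ddof=1) / samples.mean()

# Hypothetical WR values from eight consecutive 5-s samples (m·s)
wr_samples = [0.41, 0.42, 0.40, 0.41, 0.42, 0.41, 0.40, 0.41]
print(round(intra_subject_cv(wr_samples), 1))  # → 1.8
```

A CV near 2% matches the low intra-subject variability reported at preferred speed.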
Abstract:
The development of model observers for mimicking human detection strategies has progressed from symmetric signals in simple noise to increasingly complex backgrounds. In this study we implement different model observers for the complex task of detecting a signal in a 3D image stack. The backgrounds come from real breast tomosynthesis acquisitions and the signals were simulated and reconstructed within the volume. Two tasks relevant to the early detection of breast cancer were considered: detecting an 8 mm mass and detecting a cluster of microcalcifications. The model observers were calculated using a channelized Hotelling observer (CHO) with dense difference-of-Gaussian channels, and a modified (partial prewhitening [PPW]) observer adapted to realistic signals that are not circularly symmetric. The sustained temporal sensitivity function was used to filter the images before applying the spatial templates. For a frame rate of five frames per second, the only CHO that we calculated performed worse than the humans in a 4-AFC experiment. The other observers were variations of PPW and outperformed human observers in every case. This initial frame rate was rather low, and the temporal filtering did not affect the results compared to a data set with no human temporal effects taken into account. We subsequently investigated two higher speeds, for a total of 5, 15 and 30 frames per second. We observed that for large masses, the two types of model observers investigated outperformed the human observers and would be suitable with the appropriate addition of internal noise. However, for microcalcifications only the PPW observer consistently outperformed the humans. The study demonstrated the possibility of using a model observer that takes into account the temporal effects of scrolling through an image stack while being able to effectively detect a range of mass sizes and distributions.
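A channelized Hotelling observer reduces each image to a handful of channel responses and applies the Hotelling template in that reduced space. A minimal 2D sketch with spatial-domain difference-of-Gaussian channels; the channel parameters, image size, white-noise backgrounds and signal amplitude are illustrative assumptions (the study used dense DOG channels on real tomosynthesis backgrounds in 3D):

```python
import numpy as np

rng = np.random.default_rng(0)

def dog_channels(size, n_channels=8, sigma0=1.5, alpha=1.4, q=1.67):
    """Radial difference-of-Gaussian channels (spatial domain; parameters assumed)."""
    y, x = np.mgrid[:size, :size] - size // 2
    r2 = x**2 + y**2
    chans = []
    for j in range(n_channels):
        s = sigma0 * alpha**j
        chans.append((np.exp(-r2 / (2 * (q * s) ** 2))
                      - np.exp(-r2 / (2 * s**2))).ravel())
    return np.array(chans)                       # (n_channels, size*size)

def cho_dprime(present, absent, channels):
    """Hotelling detectability index d' computed in channel space."""
    vs, vn = present @ channels.T, absent @ channels.T
    S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    dmu = vs.mean(0) - vn.mean(0)
    w = np.linalg.solve(S, dmu)                  # Hotelling template
    return float(dmu @ w / np.sqrt(w @ S @ w))

size = 32
chan = dog_channels(size)
yy, xx = np.mgrid[:size, :size] - size // 2
signal = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2)).ravel()   # Gaussian "mass"
absent = rng.normal(0.0, 1.0, (200, size * size))          # signal-absent images
present = rng.normal(0.0, 1.0, (200, size * size)) + 0.5 * signal
dprime = cho_dprime(present, absent, chan)
```

With a clearly visible signal, `dprime` comes out well above zero; internal noise would be added to bring such a model down to human performance.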
Abstract:
Several lines of research have documented early-latency non-linear response interactions between audition and touch in humans and non-human primates. That these effects have been obtained under anesthesia, passive stimulation, as well as speeded reaction time tasks would suggest that some multisensory effects are not directly influencing behavioral outcome. We investigated whether the initial non-linear neural response interactions have a direct bearing on the speed of reaction times. Electrical neuroimaging analyses were applied to event-related potentials in response to auditory, somatosensory, or simultaneous auditory-somatosensory multisensory stimulation that were in turn averaged according to trials leading to fast and slow reaction times (using a median split of individual subject data for each experimental condition). Responses to multisensory stimulus pairs were contrasted with each unisensory response as well as summed responses from the constituent unisensory conditions. Behavioral analyses indicated that neural response interactions were only implicated in the case of trials producing fast reaction times, as evidenced by facilitation in excess of probability summation. In agreement, supra-additive non-linear neural response interactions between multisensory and the sum of the constituent unisensory stimuli were evident over the 40-84 ms post-stimulus period only when reaction times were fast, whereas subsequent effects (86-128 ms) were observed independently of reaction time speed. Distributed source estimations further revealed that these earlier effects followed from supra-additive modulation of activity within posterior superior temporal cortices. These results indicate the behavioral relevance of early multisensory phenomena.
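"Facilitation in excess of probability summation" is commonly tested with Miller's race-model inequality, F_AS(t) ≤ F_A(t) + F_S(t), comparing the multisensory reaction-time CDF with the sum of the unisensory CDFs. A minimal sketch using empirical CDFs (the RT samples used in testing are illustrative):

```python
import numpy as np

def ecdf(sample, t):
    """Empirical CDF of `sample` evaluated at time points `t` (ms)."""
    s = np.sort(np.asarray(sample, dtype=float))
    return np.searchsorted(s, t, side="right") / s.size

def race_model_violation(rt_a, rt_s, rt_as, t):
    """Positive where multisensory RTs exceed the probability-summation bound."""
    bound = np.minimum(1.0, ecdf(rt_a, t) + ecdf(rt_s, t))
    return np.maximum(ecdf(rt_as, t) - bound, 0.0)
```

A violation at early time points is the behavioural signature of neural response interactions, as in the fast-RT trials reported above.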
Abstract:
PURPOSE: To assess the inter/intraobserver variability of apparent diffusion coefficient (ADC) measurements in treated hepatic lesions and to compare ADC measurements in the whole lesion and in the area with the most restricted diffusion (MRDA). MATERIALS AND METHODS: Twenty-five patients with treated malignant liver lesions were examined on a 3.0T machine. After agreeing on the best ADC image, two readers independently measured the ADC values in the whole lesion and in the MRDA. These measurements were repeated 1 month later. The Bland-Altman method, Spearman correlation coefficients, and the Wilcoxon signed-rank test were used to evaluate the measurements. RESULTS: Interobserver variability for ADC measurements in the whole lesion and in the MRDA was 0.17 ×10⁻³ mm²/s [-0.17, +0.17] and 0.43 ×10⁻³ mm²/s [-0.45, +0.41], respectively. Intraobserver limits of agreement could be as low as [-0.10, +0.12] ×10⁻³ mm²/s and [-0.20, +0.33] ×10⁻³ mm²/s for measurements in the whole lesion and in the MRDA, respectively. CONCLUSION: A limited variability in ADC measurements does exist, and it should be considered when interpreting ADC values of hepatic malignancies. This is especially true for the measurements of the minimal ADC.
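The Bland-Altman limits of agreement used here are the mean difference between readers ± 1.96 SD of the paired differences; a minimal sketch (the ADC readings are hypothetical, in ×10⁻³ mm²/s):

```python
import numpy as np

def bland_altman_limits(reader1, reader2):
    """Bias and 95% limits of agreement between two readers' measurements."""
    d = np.asarray(reader1, dtype=float) - np.asarray(reader2, dtype=float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Hypothetical whole-lesion ADC readings (x10^-3 mm^2/s) from two readers
r1 = [1.10, 1.25, 0.95, 1.40]
r2 = [1.00, 1.35, 0.85, 1.50]
bias, lower, upper = bland_altman_limits(r1, r2)
```

Wide limits for the MRDA relative to the whole lesion would reproduce the pattern reported in the results.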
Abstract:
Choosing what to eat is a complex activity for humans. Determining a food's pleasantness requires us to combine information about what is available at a given time with knowledge of the food's palatability, texture, fat content, and other nutritional information. It has been suggested that humans may have an implicit knowledge of a food's fat content based on its appearance; Toepel et al. (Neuroimage 44:967-974, 2009) reported visual-evoked potential modulations after participants viewed images of high-energy, high-fat food (HF), as compared to viewing low-fat food (LF). In the present study, we investigated whether there are any immediate behavioural consequences of these modulations for human performance. HF, LF, or non-food (NF) images were used to exogenously direct participants' attention to either the left or the right. Next, participants made speeded elevation discrimination responses (up vs. down) to visual targets presented either above or below the midline (and at one of three stimulus onset asynchronies: 150, 300, or 450 ms). Participants responded significantly more rapidly following the presentation of a HF image than following the presentation of either LF or NF images, despite the fact that the identity of the images was entirely task-irrelevant. Similar results were found when comparing response speeds following images of high-carbohydrate (HC) food items to low-carbohydrate (LC) food items. These results support the view that people rapidly process (i.e. within a few hundred milliseconds) the fat/carbohydrate/energy value or, perhaps more generally, the pleasantness of food. Potentially as a result of HF/HC food items being more pleasant and thus having a higher incentive value, it seems as though seeing these foods results in a response readiness, or an overall alerting effect, in the human brain.
Abstract:
The research considers the problem of spatial data classification using machine learning algorithms: probabilistic neural networks (PNN) and support vector machines (SVM). The simple k-nearest neighbour algorithm is considered as a benchmark model. PNN is a neural network reformulation of well-known nonparametric principles of probability density modelling, using a kernel density estimator and Bayesian optimal or maximum a posteriori decision rules. PNN is well suited to problems where not only predictions but also quantification of accuracy and integration of prior information are necessary. An important property of PNN is that it can easily be used in decision support systems dealing with problems of automatic classification. The support vector machine is an implementation of the principles of statistical learning theory for classification tasks. Recently SVMs have been successfully applied to different environmental topics: classification of soil types and hydro-geological units, optimization of monitoring networks, and susceptibility mapping of natural hazards. In the present paper both simulated and real data case studies (low- and high-dimensional) are considered. The main attention is paid to the detection and learning of spatial patterns by the algorithms applied.
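A PNN is essentially a Gaussian Parzen-window density estimate per class combined with the Bayes (MAP) decision rule; a minimal sketch (the kernel width and the toy data in the test are assumptions):

```python
import numpy as np

def pnn_classify(X_train, y_train, X_test, sigma=0.5, priors=None):
    """PNN: Gaussian Parzen density per class + Bayes (MAP) decision rule."""
    X_train, X_test = np.asarray(X_train, float), np.asarray(X_test, float)
    y_train = np.asarray(y_train)
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        Xc = X_train[y_train == c]
        # squared distances from each test point to each training point of class c
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        density = np.exp(-d2 / (2 * sigma**2)).mean(1)  # kernel density estimate
        prior = np.mean(y_train == c) if priors is None else priors[c]
        scores.append(prior * density)
    return classes[np.argmax(scores, axis=0)]
```

Because the class scores are scaled densities, the same machinery also quantifies prediction confidence and accommodates prior information, the properties highlighted above.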
Abstract:
The aim of the present study was to establish and compare the durations of the seminiferous epithelium cycle of the common shrew Sorex araneus, which is characterized by a high metabolic rate and multiple paternity, and the greater white-toothed shrew Crocidura russula, which is characterized by a low metabolic rate and a monogamous mating system. Twelve S. araneus males and fifteen C. russula males were injected intraperitoneally with 5-bromodeoxyuridine, and the testes were collected. For cycle length determination, we applied both the classical method of estimation and, as a new method, linear regression. With regard to variance, and even with a relatively small sample size, the new method appears to be more precise. In addition, the regression method yields an estimate for every animal tested, enabling comparisons of different factors with cycle lengths. Our results show that increased testis size not only leads to increased sperm production but also reduces the duration of spermatogenesis. The calculated cycle lengths were 8.35 days for S. araneus and 12.12 days for C. russula. The data obtained in the present study provide the basis for future investigations into the effects of metabolic rate and mating systems on the speed of spermatogenesis.
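The regression method can be sketched as follows: regress the position of the most advanced BrdU-labelled germ cells, expressed in seminiferous cycles traversed, on time since injection; the reciprocal of the slope is the cycle length. The data points below are hypothetical, chosen only to illustrate the arithmetic:

```python
import numpy as np

t_days = np.array([1.0, 3.0, 5.0, 7.0])               # time post-injection (days)
cycles_traversed = np.array([0.12, 0.36, 0.60, 0.84])  # hypothetical label front
slope, intercept = np.polyfit(t_days, cycles_traversed, 1)
cycle_length_days = 1.0 / slope                        # days per full cycle
print(round(cycle_length_days, 2))                     # → 8.33
```

Because each animal yields its own regression line, per-animal cycle lengths can be compared against factors such as testis size, as done in the study.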
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies. This thesis is devoted to the analysis, modelling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modelling tools. They can solve classification, regression and probability density modelling problems in high-dimensional geo-feature spaces, composed of geographical coordinates and additional relevant spatially referenced features ("geo-features"). They are well suited to implementation as predictive engines in decision support systems for environmental data mining, including pattern recognition, modelling, prediction and automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, and they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for the geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to their software implementation. The main algorithms and models considered are the multilayer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, experimental variography, and machine learning. Experimental variography, which studies the relations between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and helps to detect spatial patterns, at least those describable by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbours (k-NN) method, which is simple and has very good interpretation and visualisation properties. An important part of the thesis deals with a current hot topic, the automatic mapping of geospatial data. General regression neural networks are proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters: theory, applications, software tools and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals; classification of soil types and hydrogeological units; decision-oriented mapping with uncertainties; and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well, with a user-friendly and easy-to-use interface.
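The GRNN proposed for automatic mapping is equivalent to Nadaraya-Watson kernel regression over the measurement locations; a minimal spatial-interpolation sketch (the coordinates, values and bandwidth are illustrative; in practice the bandwidth would be tuned, e.g. by cross-validation):

```python
import numpy as np

def grnn_predict(xy_train, z_train, xy_query, sigma):
    """GRNN / Nadaraya-Watson: Gaussian-kernel weighted average of observations."""
    xy_train = np.asarray(xy_train, float)
    xy_query = np.asarray(xy_query, float)
    z_train = np.asarray(z_train, float)
    d2 = ((xy_query[:, None, :] - xy_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma**2))          # kernel weights per training point
    return (w * z_train).sum(1) / w.sum(1)

# Hypothetical pollution measurements at three locations
xy = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
z = [1.0, 3.0, 5.0]
est = grnn_predict(xy, z, [[0.0, 0.0], [0.5, 0.5]], sigma=0.1)
```

The single bandwidth parameter is what makes the model attractive for automatic mapping: tuning reduces to a one-dimensional search.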
Abstract:
Three standard radiation qualities (RQA 3, RQA 5 and RQA 9) and two screens, Kodak Lanex Regular and Insight Skeletal, were used to compare the imaging performance and dose requirements of the new Kodak Hyper Speed G and the current Kodak T-MAT G/RA medical x-ray films. The noise equivalent quanta (NEQ) and detective quantum efficiencies (DQE) of the four screen-film combinations were measured at three gross optical densities and compared with the characteristics of the Kodak CR 9000 system with GP (general purpose) and HR (high resolution) phosphor plates. The new Hyper Speed G film has double the intrinsic sensitivity of the T-MAT G/RA film and a higher contrast in the high optical density range for comparable exposure latitude. By providing both high sensitivity and high spatial resolution, the new film significantly improves the compromise between dose and image quality. As expected, the new film has a higher noise level and a lower signal-to-noise ratio than the standard film, although in the high-frequency range this is compensated for by better resolution, giving better DQE results, especially at high optical density. Both screen-film systems outperform the phosphor plates in terms of MTF and DQE for standard imaging conditions (Regular screen at RQA 5 and RQA 9 beam qualities). At low energy (RQA 3), the CR system has a low-frequency DQE comparable to that of the screen-film systems when used with a fine screen at low and middle optical densities, and a superior low-frequency DQE at high optical density.
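The metrics compared here are linked by the standard (digital-style) definitions NEQ(f) = MTF(f)²/NNPS(f) and DQE(f) = NEQ(f)/q, with q the incident photon fluence; for screen-film the measured quantities additionally involve the film gradient, which is folded into the MTF/NNPS estimates. A minimal sketch (the MTF, NNPS and fluence values are illustrative):

```python
import numpy as np

def neq_dqe(mtf, nnps, q):
    """NEQ(f) = MTF(f)^2 / NNPS(f); DQE(f) = NEQ(f) / q (q: photons per mm^2)."""
    mtf = np.asarray(mtf, dtype=float)
    nnps = np.asarray(nnps, dtype=float)
    neq = mtf**2 / nnps
    return neq, neq / q

# Illustrative values at spatial frequencies of 0 and 2 cycles/mm
neq, dqe = neq_dqe(mtf=[1.0, 0.5], nnps=[1e-5, 1e-5], q=1e5)
```

The trade-off described in the abstract is visible in these formulas: higher noise (larger NNPS) lowers DQE, while better resolution (higher MTF at high f) raises it, so the two can compensate at high frequencies.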
Abstract:
OBJECTIVES: Given recent progress, musculoskeletal ultrasound (US) will probably soon be integrated into the standard care of patients with rheumatoid arthritis (RA). However, in daily care, the quality of US machines and the level of experience of sonographers vary. We conducted a study to assess the reproducibility and feasibility of a US score for RA, including US devices of different quality and rheumatologists with various levels of expertise in US, as would be the case in daily care. METHODS: The Swiss Sonography in Arthritis and Rheumatism (SONAR) group has developed a semi-quantitative score using OMERACT criteria for synovitis and erosion in RA. The score was taught to 108 rheumatologists trained in US. One year after the last workshop, 19 rheumatologists participated in the study. Scans were performed on 6 US machines ranging from low to high quality, each with a different patient. Weighted kappa was calculated for each pair of readers. RESULTS: Overall, the agreement was fair to moderate. Quality of the device, experience of the sonographers and practice with the score before the study substantially improved the agreement. Agreement assessed on higher-quality machines among sonographers with good experience in US increased to substantial (median kappa for B-mode and Doppler: 0.64 and 0.41 for erosion). CONCLUSIONS: This study demonstrated the feasibility and reproducibility of the Swiss US SONAR score for RA. Our results confirm the importance of the quality of the US machine and the training of sonographers for the implementation of US scoring in routine daily care of RA.
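Weighted kappa down-weights near-miss disagreements between pairs of readers scoring on an ordinal scale; a minimal sketch for two raters on a 0..n-1 semi-quantitative scale (linear weights by default, quadratic as an option):

```python
import numpy as np

def weighted_kappa(r1, r2, n_levels, weights="linear"):
    """Weighted Cohen's kappa for two raters scoring 0..n_levels-1."""
    obs = np.zeros((n_levels, n_levels))
    for a, b in zip(r1, r2):
        obs[a, b] += 1
    obs /= obs.sum()                              # observed joint proportions
    exp = np.outer(obs.sum(1), obs.sum(0))        # chance-agreement proportions
    i, j = np.mgrid[:n_levels, :n_levels]
    w = np.abs(i - j) if weights == "linear" else (i - j) ** 2
    return 1.0 - (w * obs).sum() / (w * exp).sum()
```

Values around 0.6 and above correspond to the "substantial" agreement reported for experienced sonographers on high-quality machines.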
Abstract:
PURPOSE: Walking in patients with chronic low back pain (cLBP) is characterized by motor control adaptations as a protective strategy against further injury or pain. The purpose of this study was to compare the preferred walking speed, the biomechanical and the energetic parameters of walking at different speeds between patients with cLBP and healthy men individually matched for age, body mass and height. METHODS: Energy cost of walking was assessed with a breath-by-breath gas analyser; mechanical and spatiotemporal parameters of walking were computed using two inertial sensors equipped with a triaxial accelerometer and gyroscope, and compared in 13 men with cLBP and 13 control men (CTR) during treadmill walking at standard (0.83, 1.11, 1.38, 1.67 m·s⁻¹) and preferred (PWS) speeds. Low back pain intensity (visual analogue scale, cLBP only) and perceived exertion (Borg scale) were assessed at each walking speed. RESULTS: PWS was slower in the cLBP [1.17 (SD = 0.13) m·s⁻¹] than in the CTR group [1.33 (SD = 0.11) m·s⁻¹; P = 0.002]. No significant difference was observed between groups in mechanical work (P ≥ 0.44), spatiotemporal parameters (P ≥ 0.16) or energy cost of walking (P ≥ 0.36). At the end of the treadmill protocol, perceived exertion was significantly higher in the cLBP [11.7 (SD = 2.4)] than in the CTR group [9.9 (SD = 1.1); P = 0.01]. Pain intensity did not significantly increase over time (P = 0.21). CONCLUSIONS: These results do not support the hypothesis of a less efficient walking pattern in patients with cLBP and imply that high walking speeds are well tolerated by patients with moderately disabling cLBP.
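The energy cost of walking per unit distance is typically derived from net oxygen uptake divided by speed; a minimal sketch (the energy equivalent of 20.9 J per ml O2 and all example values are assumptions, not the study's data):

```python
def net_cost_of_walking(vo2_gross, vo2_standing, speed, j_per_ml_o2=20.9):
    """Net energy cost of walking in J·kg⁻¹·m⁻¹.

    vo2_gross, vo2_standing: oxygen uptake in ml O2·kg⁻¹·min⁻¹
    speed: walking speed in m·s⁻¹
    """
    net_vo2_per_s = (vo2_gross - vo2_standing) / 60.0   # ml O2·kg⁻¹·s⁻¹
    return net_vo2_per_s * j_per_ml_o2 / speed           # J·kg⁻¹·m⁻¹

print(round(net_cost_of_walking(12.0, 5.0, 1.17), 2))    # → 2.08
```

Comparing this per-metre cost between groups at matched speeds is what allows the conclusion that walking efficiency did not differ between cLBP patients and controls.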