20 results for Tutorial on Computing
at Université de Lausanne, Switzerland
Abstract:
BAFF, a member of the TNF family, is a fundamental survival factor for transitional and mature B cells. BAFF overexpression leads to an expanded B cell compartment and autoimmunity in mice, and elevated amounts of BAFF can be found in the serum of autoimmune patients. APRIL is a related factor that shares receptors with BAFF yet appears to play a different biological role. The BAFF system provides not only potential insight into the development of autoreactive B cells but also a relatively simple paradigm with which to begin considering the balancing act between survival, growth, and death that affects all cells.
Abstract:
Showing smokers their own atherosclerotic plaques might increase motivation for smoking cessation, since smokers underestimate their own risk of smoking-related diseases. To assess the feasibility and optimal processes of studying the impact of carotid atherosclerotic plaque screening in smokers, we enrolled 30 daily cigarette smokers, aged 40-70 years, in an observational pre-post pilot study. All smokers underwent smoking cessation counseling, nicotine replacement therapy, a carotid ultrasound, an educational tutorial on atherosclerosis, an assessment of motivation to change at baseline and at 2 months, and an assessment of smoking cessation at 2 months. Participants had a mean smoking duration of 34 years (SD = 7). Carotid plaques were present in 22 smokers (73%). Between baseline and 2 months after plaque screening, motivation for smoking cessation increased from 7.4 to 8.4 out of 10 (p = .02), particularly in those with plaques (7.2 to 8.7, p = .008). At 2 months, the smoking quit rate was 63%, with a quit rate of 73% in those with plaques vs. 38% in those without plaques (p = .10). Perceived stress, anxiety, and depression did not increase after screening. At baseline and after 2 months, 96% of respondents answered at least 80% of the atherosclerosis knowledge questions correctly. In conclusion, studying the process of screening for carotid plaques to increase motivation for smoking cessation, in addition to counseling and drug therapy for smoking cessation in long-term smokers, appears feasible. The impact of carotid plaque screening on smoking cessation should be examined in larger randomized controlled trials with sufficient power to assess the impact on long-term smoking cessation rates.
Abstract:
Numerous studies have shown that individuals are willing to commit discriminatory acts provided they can justify them (Crandall & Eshleman, 2003). We propose to contribute to the understanding of this phenomenon through the concept of moral disengagement for discriminatory acts (MDD). We define moral disengagement as justifying one's own immoral acts in a way that makes them acceptable. This concept originates in the work of Bandura et al. (1996) on aggressive behaviour in children. It comprises eight mechanisms (e.g., displacement of responsibility). Our research goes beyond the theoretical framework developed by Bandura et al. to place moral disengagement within the field of intergroup discrimination. Furthermore, by conceptualizing moral disengagement as an individual difference, we also present the first steps in the development of a scale to measure MDD. The MDD scale was developed in three stages, following the procedure proposed by Hinkin (1998). First, a list of 72 items was generated using a deductive method. Then, following a study (n = 13) on the consistency of the items with the concept and its mechanisms, this list was reduced to 40 items (5 per mechanism). Finally, 118 university students took part in a study aimed at conducting factor analyses (exploratory and confirmatory) and testing the convergent, divergent and predictive validity of the scale. The first part of this study consisted of several scales (e.g., a personality measure, anti-immigrant prejudice, etc.). The second part was an experiment on the evaluation of proposed methods (discriminatory versus meritocratic) for selecting Swiss and foreign students at the university, with the aim of reducing overcrowding in lecture halls. The results obtained are promising for the development of the scale, both in terms of its structure (e.g., α = .82) and its validity. For example, the higher the participants' level of MDD, the more favourable they were toward a discriminatory method of selecting university students. The full set of results will be presented at the conference. We will also discuss the potential contributions of this scale to future research projects. References: Bandura, A., Barbaranelli, C., Caprara, G. V., & Pastorelli, C. (1996). Mechanisms of moral disengagement in the exercise of moral agency. Journal of Personality and Social Psychology, 71(2), 364-374. Crandall, C. S., & Eshleman, A. (2003). A justification-suppression model of the expression and experience of prejudice. Psychological Bulletin, 129(3), 414-446. Hinkin, T. R. (1998). A brief tutorial on the development of measures for use in survey questionnaires. Organizational Research Methods, 1(1), 104-121.
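The abstract above reports the scale's internal consistency as α = .82. For readers unfamiliar with that statistic, here is a minimal, self-contained sketch of how Cronbach's alpha is computed from an item-response matrix; the data below are simulated for illustration and are not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                                   # number of items
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical 7-point Likert responses for a 40-item scale (n = 118 respondents).
rng = np.random.default_rng(0)
latent = rng.normal(size=(118, 1))                       # shared trait driving all items
responses = np.clip(np.round(4 + 0.4 * latent + rng.normal(size=(118, 40))), 1, 7)
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```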
Abstract:
This paper introduces a nonlinear measure of dependence between random variables in the context of remote sensing data analysis. The Hilbert-Schmidt Independence Criterion (HSIC) is a kernel method for evaluating statistical dependence. HSIC is based on computing the Hilbert-Schmidt norm of the cross-covariance operator of mapped samples in the corresponding Hilbert spaces. The HSIC empirical estimator is very easy to compute and has good theoretical and practical properties. We exploit the capabilities of HSIC to explain nonlinear dependences in two remote sensing problems: temperature estimation and chlorophyll concentration prediction from spectra. Results show that, when the relationship between the random variables is nonlinear or when few samples are available, the HSIC criterion outperforms other standard methods, such as linear correlation or mutual information.
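The HSIC empirical estimator mentioned above is indeed simple to compute: with kernel matrices K and L on the two samples and the centering matrix H = I - (1/n)11ᵀ, the (biased) estimate is tr(KHLH)/(n-1)². The sketch below illustrates this on synthetic data with a quadratic dependence; the kernel bandwidths and data are illustrative choices, not those used in the paper.

```python
import numpy as np

def rbf_kernel(x: np.ndarray, sigma: float) -> np.ndarray:
    """Gaussian (RBF) kernel matrix for samples x of shape (n, d)."""
    sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def hsic(x: np.ndarray, y: np.ndarray, sigma_x: float = 1.0, sigma_y: float = 1.0) -> float:
    """Biased empirical HSIC estimate: tr(K H L H) / (n - 1)^2."""
    n = x.shape[0]
    K = rbf_kernel(x, sigma_x)
    L = rbf_kernel(y, sigma_y)
    H = np.eye(n) - np.ones((n, n)) / n                  # centering matrix
    return float(np.trace(K @ H @ L @ H)) / (n - 1) ** 2

# Toy nonlinear dependence: y = x^2 + noise (linear correlation is near zero).
rng = np.random.default_rng(0)
x = rng.normal(size=(500, 1))
y = x ** 2 + 0.1 * rng.normal(size=(500, 1))
print(f"HSIC(x, y)        = {hsic(x, y):.4f}")
print(f"HSIC(x, shuffled) = {hsic(x, rng.permutation(y)):.4f}")   # ~0 under independence
```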
Abstract:
The goal of this study was to investigate the impact of computing parameters and of the location of volumes of interest (VOI) on the calculation of the 3D noise power spectrum (NPS), in order to determine an optimal set of computing parameters and propose a robust method for evaluating the noise properties of imaging systems. Noise stationarity in noise volumes acquired with a water phantom on a 128-MDCT and a 320-MDCT scanner was analyzed in the spatial domain in order to define locally stationary VOIs. The influence of the computing parameters on the 3D NPS measurement (the sampling distances bx,y,z, the VOI lengths Lx,y,z, the number of VOIs NVOI and the structured noise) was investigated to minimize measurement errors. The effect of the VOI locations on the NPS was also investigated. Results showed that the noise (standard deviation) varies more along the r-direction (phantom radius) than along the z-direction. A 25 × 25 × 40 mm³ VOI associated with DFOV = 200 mm (Lx,y,z = 64, bx,y = 0.391 mm with a 512 × 512 matrix) and a first-order detrending method to reduce structured noise led to an accurate NPS estimation. NPS estimated from off-centered small VOIs had a directional dependency, contrary to NPS obtained from large VOIs located in the center of the volume or from small VOIs located on a concentric circle. This showed that VOI size and location play a major role in the determination of the NPS when images are not stationary. This study emphasizes the need for consistent measurement methods to assess and compare image quality in CT.
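As a rough illustration of the measurement described above, the sketch below estimates a 3D NPS by first-order detrending of each VOI, taking the 3D DFT of the residual noise, and ensemble-averaging the squared magnitudes with the usual voxel-size normalisation. The voxel spacing, VOI dimensions and synthetic white-noise volumes are placeholders, not the phantom data or the exact parameters of the study.

```python
import numpy as np

def detrend_first_order(voi: np.ndarray) -> np.ndarray:
    """Remove a first-order (linear) trend from a VOI to suppress structured noise."""
    nz, ny, nx = voi.shape
    z, y, x = np.meshgrid(np.arange(nz), np.arange(ny), np.arange(nx), indexing="ij")
    A = np.column_stack([np.ones(voi.size), x.ravel(), y.ravel(), z.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, voi.ravel(), rcond=None)
    return voi - (A @ coeffs).reshape(voi.shape)

def nps_3d(vois: list, spacing=(0.391, 0.391, 0.625)) -> np.ndarray:
    """Average 3D noise power spectrum over an ensemble of VOIs.
    spacing = (bx, by, bz) voxel sizes in mm (bz here is an assumed placeholder)."""
    bx, by, bz = spacing
    spectra = []
    for voi in vois:
        resid = detrend_first_order(voi)
        spectra.append(np.abs(np.fft.fftn(resid)) ** 2)
    nz, ny, nx = vois[0].shape
    return (bx * by * bz) / (nx * ny * nz) * np.mean(spectra, axis=0)

# Hypothetical ensemble of white-noise VOIs standing in for water-phantom data (sigma = 10 HU).
rng = np.random.default_rng(0)
vois = [rng.normal(scale=10.0, size=(40, 64, 64)) for _ in range(20)]
nps = nps_3d(vois)
print(nps.shape)
print(f"mean NPS = {nps.mean():.2f}  (white noise: ~ variance x voxel volume = {100 * 0.391 * 0.391 * 0.625:.2f})")
```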
Abstract:
The identification and quantification of proteins and lipids is of major importance for the diagnosis, prognosis and understanding of the molecular mechanisms involved in disease development. Owing to its selectivity and sensitivity, mass spectrometry has become a key technique in analytical platforms for proteomic and lipidomic investigations. Using this technique, many strategies have been developed, based on unbiased or targeted approaches, to highlight or monitor molecules of interest from biomatrices. Although these approaches have largely been employed in cancer research, this type of investigation has met with growing interest in the field of cardiovascular disorders, potentially leading to the discovery of novel biomarkers and the development of new therapies. In this paper, we review the different mass spectrometry-based proteomic and lipidomic strategies applied in cardiovascular diseases, especially atherosclerosis. Particular attention is given to recent developments and the role of bioinformatics in data treatment. This review will be of broad interest to the medical community by providing a tutorial on how mass spectrometric strategies can support clinical trials.
Abstract:
Metabolic problems lead to numerous failures during clinical trials, and much effort is now devoted to developing in silico models predicting metabolic stability and metabolites. Such models are well known for cytochromes P450 and some transferases, whereas less has been done to predict the activity of human hydrolases. The present study was undertaken to develop a computational approach able to predict the hydrolysis of novel esters by human carboxylesterase hCES2. The study involved first a homology modeling of the hCES2 protein based on the model of hCES1, since the two proteins share a high degree of homology (≅73%). A set of 40 known substrates of hCES2 was taken from the literature; the ligands were docked in both their neutral and ionized forms using GriDock, a parallel tool based on the AutoDock 4.0 engine, which can perform efficient and easy virtual screening analyses of large molecular databases exploiting multi-core architectures. Useful statistical models (e.g., r² = 0.91 for substrates in their unprotonated state) were calculated by correlating experimental pKm values with the distance between the carbon atom of the substrate's ester group and the hydroxy function of Ser228. Additional parameters in the equations accounted for hydrophobic and electrostatic interactions between substrates and contributing residues. The negatively charged residues in the hCES2 cavity explained the preference of the enzyme for neutral substrates and, more generally, suggested that ligands which interact too strongly through ionic bonds (e.g., ACE inhibitors) cannot be good CES2 substrates because they are trapped in the cavity in unproductive modes and behave as inhibitors. The effects of protonation on substrate recognition and the contrasting behavior of substrates and products were finally investigated by MD simulations of some CES2 complexes.
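The statistical models mentioned above correlate experimental pKm values with docking-derived geometric and interaction descriptors. The following sketch shows the general form of such a fit (ordinary least squares of pKm on the ester-carbon-to-Ser228 distance plus one interaction score); all numbers are invented for illustration, and the r² obtained bears no relation to the published value.

```python
import numpy as np

# Hypothetical docking descriptors for a handful of ester substrates:
# distance (Angstrom) from the ester carbon to the Ser228 hydroxyl, plus a
# hydrophobic-contact score; the pKm values are likewise invented here.
distance  = np.array([3.1, 3.4, 3.8, 4.2, 4.9, 5.5, 6.1, 6.8])
hydrophob = np.array([0.8, 0.7, 0.9, 0.5, 0.6, 0.4, 0.3, 0.2])
pkm       = np.array([5.2, 5.0, 4.8, 4.5, 4.1, 3.8, 3.5, 3.1])

# Ordinary least squares: pKm ~ intercept + distance + hydrophobic score.
X = np.column_stack([np.ones_like(distance), distance, hydrophob])
beta, *_ = np.linalg.lstsq(X, pkm, rcond=None)
pred = X @ beta
r2 = 1.0 - np.sum((pkm - pred) ** 2) / np.sum((pkm - pkm.mean()) ** 2)
print(f"coefficients: {beta}")
print(f"r^2 = {r2:.2f}")
```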
Abstract:
Diagnosis of several neurological disorders is based on the detection of typical pathological patterns in the electroencephalogram (EEG). This is a time-consuming task requiring significant training and experience. Automatic detection of these EEG patterns would greatly assist in quantitative analysis and interpretation. We present a method for automatic detection of epileptiform events and their discrimination from eye blinks, based on features derived using a novel application of independent component analysis. The algorithm was trained and cross-validated using seven EEGs with epileptiform activity. For epileptiform events with compensation for eye blinks, the sensitivity was 65 ± 22% at a specificity of 86 ± 7% (mean ± SD). With feature extraction by PCA or classification of raw data, specificity dropped to 76% and 74%, respectively, at the same sensitivity. On exactly the same data, the commercially available software Reveal had a maximum sensitivity of 30% and a concurrent specificity of 77%. Our algorithm performed well at detecting epileptiform events in this preliminary test and offers a flexible tool that is intended to be generalized to the simultaneous classification of many waveforms in the EEG.
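As a toy illustration of the general idea (ICA-derived features feeding a classifier), the sketch below unmixes synthetic multichannel epochs with FastICA and classifies spike-containing epochs against background using simple per-component statistics. It is not the authors' algorithm, feature set, or EEG data.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def make_epoch(spike: bool, n_channels: int = 8, n_samples: int = 128) -> np.ndarray:
    """Synthetic multichannel epoch: background noise plus, optionally, a sharp
    spike-like transient mixed into the channels (a stand-in for an epileptiform event)."""
    background = rng.normal(scale=1.0, size=(n_channels, n_samples))
    if spike:
        t = np.arange(n_samples)
        transient = 5.0 * np.exp(-0.5 * ((t - 64) / 3.0) ** 2)      # narrow spike
        background += rng.normal(size=(n_channels, 1)) * transient
    return background

epochs = [make_epoch(spike=(i % 2 == 0)) for i in range(200)]
labels = np.array([i % 2 == 0 for i in range(200)], dtype=int)

def epoch_features(epoch: np.ndarray) -> np.ndarray:
    """Unmix one epoch with ICA and summarise each component by its peak amplitude and spread."""
    ica = FastICA(n_components=4, random_state=0, max_iter=500)
    sources = ica.fit_transform(epoch.T).T                           # (components, samples)
    peak = np.max(np.abs(sources), axis=1)
    spread = np.std(sources, axis=1)
    return np.concatenate([np.sort(peak)[::-1], np.sort(spread)[::-1]])

X = np.array([epoch_features(e) for e in epochs])
scores = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```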
Abstract:
Cortical folding (gyrification) is determined during the first months of life, so that adverse events occurring during this period leave traces that will be identifiable at any age. As recently reviewed by Mangin and colleagues(2), several methods exist to quantify different characteristics of gyrification. For instance, sulcal morphometry can be used to measure shape descriptors such as the depth, length or indices of inter-hemispheric asymmetry(3). These geometrical properties have the advantage of being easy to interpret. However, sulcal morphometry relies tightly on the accurate identification of a given set of sulci and hence provides a fragmented description of gyrification. A more fine-grained quantification of gyrification can be achieved with curvature-based measurements, where smoothed absolute mean curvature is typically computed at thousands of points over the cortical surface(4). The curvature is however not straightforward to interpret, as it remains unclear whether there is any direct relationship between the curvedness and a biologically meaningful correlate such as cortical volume or surface area. To address the diverse issues raised by the measurement of cortical folding, we previously developed an algorithm to quantify local gyrification with an exquisite spatial resolution and a simple interpretation. Our method is inspired by the Gyrification Index(5), a method originally used in comparative neuroanatomy to evaluate cortical folding differences across species. In our implementation, which we name the local Gyrification Index (lGI(1)), we measure the amount of cortex buried within the sulcal folds as compared with the amount of visible cortex in circular regions of interest. Given that the cortex grows primarily through radial expansion(6), our method was specifically designed to identify early defects of cortical development. In this article, we detail the computation of the local Gyrification Index, which is now freely distributed as a part of the FreeSurfer software (http://surfer.nmr.mgh.harvard.edu/, Martinos Center for Biomedical Imaging, Massachusetts General Hospital). FreeSurfer provides a set of automated tools for reconstructing the brain's cortical surface from structural MRI data. The cortical surface, extracted in the native space of the images with sub-millimeter accuracy, is then further used for the creation of an outer surface, which serves as a basis for the lGI calculation. A circular region of interest is then delineated on the outer surface, and its corresponding region of interest on the cortical surface is identified using a matching algorithm, as described in our validation study(1). This process is iterated repeatedly with largely overlapping regions of interest, resulting in cortical maps of gyrification for subsequent statistical comparisons (Fig. 1). Of note, another measurement of local gyrification with a similar inspiration was proposed by Toro and colleagues(7), where the folding index at each point is computed as the ratio of the cortical area contained in a sphere divided by the area of a disc with the same radius. The two implementations differ in that the one by Toro et al. is based on Euclidean distances and thus considers discontinuous patches of cortical area, whereas ours uses a strict geodesic algorithm and includes only the continuous patch of cortical area opening at the brain surface in a circular region of interest.
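Conceptually, the lGI in a region of interest is the ratio of the (folded) pial cortical area to the area of its matched patch on the outer hull surface. The toy sketch below computes that ratio on two synthetic triangle meshes; the real maps are produced by the FreeSurfer lGI pipeline with geodesic ROI matching, not by code like this.

```python
import numpy as np

def mesh_area(vertices: np.ndarray, faces: np.ndarray) -> float:
    """Total surface area of a triangle mesh (vertices: (n, 3), faces: (m, 3) indices)."""
    v0, v1, v2 = vertices[faces[:, 0]], vertices[faces[:, 1]], vertices[faces[:, 2]]
    return 0.5 * np.sum(np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1))

def local_gyrification_index(pial_v, pial_f, outer_v, outer_f) -> float:
    """Area of the folded pial patch divided by the area of its outer-hull counterpart."""
    return mesh_area(pial_v, pial_f) / mesh_area(outer_v, outer_f)

# Toy example: a heavily folded (sinusoidal) patch versus its flat outer hull.
n = 50
x, y = np.meshgrid(np.linspace(0, 20, n), np.linspace(0, 20, n))
z_folded = 3.0 * np.sin(2.0 * x) * np.cos(2.0 * y)     # stand-in for sulcal folding
z_flat = np.zeros_like(z_folded)

def grid_to_mesh(x, y, z):
    """Triangulate a regular height-field grid."""
    verts = np.column_stack([x.ravel(), y.ravel(), z.ravel()])
    faces = []
    for i in range(n - 1):
        for j in range(n - 1):
            a, b, c, d = i * n + j, i * n + j + 1, (i + 1) * n + j, (i + 1) * n + j + 1
            faces += [[a, b, c], [b, d, c]]
    return verts, np.array(faces)

pial_v, pial_f = grid_to_mesh(x, y, z_folded)
hull_v, hull_f = grid_to_mesh(x, y, z_flat)
print(f"toy lGI = {local_gyrification_index(pial_v, pial_f, hull_v, hull_f):.2f}")
```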
Abstract:
This paper presents a new non-parametric atlas registration framework, derived from the optical flow model and active contour theory, applied to automatic subthalamic nucleus (STN) targeting in deep brain stimulation (DBS) surgery. In a previous work, we demonstrated that the STN position can be predicted from the position of surrounding visible structures, namely the lateral and third ventricles. An STN targeting process can thus be obtained by registering these structures of interest between a brain atlas and the patient image. Here we aim to improve on the results of state-of-the-art targeting methods and at the same time to reduce the computational time. Our simultaneous segmentation and registration model shows mean STN localization errors statistically similar to those of the best-performing registration algorithms tested so far and to the targeting experts' variability. Moreover, the computational time of our registration method is much lower, which is a worthwhile improvement from a clinical point of view.
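For readers unfamiliar with non-parametric, optical-flow-style registration, the sketch below implements a generic demons-type scheme on 2D toy images (an iterative optical-flow update regularised by Gaussian smoothing of the displacement field). It is only a didactic stand-in, not the joint segmentation-registration model or atlas data of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_register(fixed: np.ndarray, moving: np.ndarray,
                    n_iter: int = 200, sigma: float = 2.0, step: float = 1.0):
    """Generic demons-style non-parametric registration of 2D images:
    optical-flow-like update regularised by Gaussian smoothing of the field."""
    gy, gx = np.gradient(fixed)
    ident = np.indices(fixed.shape).astype(float)
    u = np.zeros((2,) + fixed.shape)                     # displacement field (dy, dx)
    for _ in range(n_iter):
        warped = map_coordinates(moving, ident + u, order=1, mode="nearest")
        diff = warped - fixed
        denom = gx ** 2 + gy ** 2 + diff ** 2 + 1e-9
        u[0] -= step * diff * gy / denom
        u[1] -= step * diff * gx / denom
        u[0] = gaussian_filter(u[0], sigma)              # fluid/elastic-like regularisation
        u[1] = gaussian_filter(u[1], sigma)
    return u, map_coordinates(moving, ident + u, order=1, mode="nearest")

# Toy example: register a shifted blob back onto the original.
yy, xx = np.mgrid[0:64, 0:64]
fixed = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 50.0)
moving = np.exp(-((yy - 36) ** 2 + (xx - 29) ** 2) / 50.0)
u, warped = demons_register(fixed, moving)
print(f"MSE before: {np.mean((moving - fixed) ** 2):.5f}, after: {np.mean((warped - fixed) ** 2):.5f}")
```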
Abstract:
This study looks at how increased memory utilisation affects throughput and energy consumption in scientific computing, especially in high-energy physics. Our aim is to minimise the energy consumed by a set of jobs without increasing the processing time. Earlier tests indicated that, especially in data analysis, throughput can increase by over 100% and energy consumption decrease by 50% when multiple jobs are processed in parallel per CPU core. Since jobs are heterogeneous, it is not possible to find a single optimal number of parallel jobs. A better solution is based on memory utilisation, but finding an optimal memory threshold is not straightforward. Therefore, a fuzzy-logic-based algorithm was developed that can dynamically adapt the memory threshold based on the overall load. In this way, it is possible to keep memory consumption stable under different workloads while achieving significantly higher throughput and energy efficiency than with a traditional fixed number of jobs or a fixed memory threshold.
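A minimal sketch of the kind of fuzzy-logic threshold adaptation described above follows: triangular membership functions map the observed memory utilisation to LOW/OK/HIGH degrees, and a weighted combination of rule outputs nudges the memory threshold up or down. The membership breakpoints, rule base and adjustment sizes are illustrative assumptions, not those of the thesis.

```python
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def adapt_memory_threshold(mem_utilisation: float, current_threshold_gb: float) -> float:
    """Illustrative fuzzy rules (not the thesis' actual rule base):
    LOW utilisation  -> raise the threshold (start more jobs);
    OK utilisation   -> keep the threshold;
    HIGH utilisation -> lower the threshold (throttle new jobs)."""
    low = triangular(mem_utilisation, -0.2, 0.0, 0.6)
    ok = triangular(mem_utilisation, 0.4, 0.7, 0.9)
    high = triangular(mem_utilisation, 0.8, 1.0, 1.2)
    # Weighted (centroid-like) combination of the rule outputs, in GB of adjustment.
    adjustments = {+2.0: low, 0.0: ok, -2.0: high}
    total = sum(adjustments.values()) or 1.0
    delta = sum(adj * w for adj, w in adjustments.items()) / total
    return max(1.0, current_threshold_gb + delta)

# Example: the threshold reacts to the observed fraction of memory in use.
for util in (0.3, 0.7, 0.95):
    print(util, "->", round(adapt_memory_threshold(util, 16.0), 1), "GB")
```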
Abstract:
Motivation: Genome-wide association studies have become widely used tools to study the effects of genetic variants on complex diseases. While it is of great interest to extend existing analysis methods by considering interaction effects between pairs of loci, the large number of possible tests presents a significant computational challenge. The number of computations is further multiplied in the study of gene expression quantitative trait mapping, in which tests are performed for thousands of gene phenotypes simultaneously. Results: We present FastEpistasis, an efficient parallel solution extending the PLINK epistasis module, designed to test for epistasis effects when analyzing continuous phenotypes. Our results show that the algorithm scales with the number of processors and offers a reduction in computation time when several phenotypes are analyzed simultaneously. FastEpistasis is capable of testing the association of a continuous trait with all single nucleotide polymorphism (SNP) pairs from 500,000 SNPs, totaling 125 billion tests, in a population of 5000 individuals in 29, 4 or 0.5 days using 8, 64 or 512 processors, respectively.
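Per SNP pair, an epistasis test for a continuous trait typically amounts to fitting a linear model with two main effects and an interaction term and testing the interaction coefficient; FastEpistasis parallelises a very large number of such tests. The sketch below shows one such single-pair test on simulated genotypes; it illustrates the statistical test, not the FastEpistasis implementation.

```python
import numpy as np
from scipy import stats

def epistasis_test(pheno: np.ndarray, snp1: np.ndarray, snp2: np.ndarray):
    """Test the SNP x SNP interaction term in: pheno ~ 1 + snp1 + snp2 + snp1*snp2.
    Returns the interaction t-statistic and two-sided p-value."""
    X = np.column_stack([np.ones_like(pheno), snp1, snp2, snp1 * snp2])
    beta, *_ = np.linalg.lstsq(X, pheno, rcond=None)
    resid = pheno - X @ beta
    dof = len(pheno) - X.shape[1]
    sigma2 = resid @ resid / dof
    cov = sigma2 * np.linalg.inv(X.T @ X)
    t = beta[3] / np.sqrt(cov[3, 3])
    return t, 2 * stats.t.sf(abs(t), dof)

# Hypothetical genotypes (0/1/2 minor-allele counts) and a phenotype with a true interaction.
rng = np.random.default_rng(0)
n = 5000
snp1 = rng.integers(0, 3, size=n).astype(float)
snp2 = rng.integers(0, 3, size=n).astype(float)
pheno = 0.1 * snp1 + 0.1 * snp2 + 0.2 * snp1 * snp2 + rng.normal(size=n)
t, p = epistasis_test(pheno, snp1, snp2)
print(f"interaction t = {t:.2f}, p = {p:.2e}")
```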
Abstract:
Traditionally, studies dealing with muscle shortening have concentrated on assessing its impact on conduction velocity, and to this end, electrodes have been located between the end-plate and tendon regions. Possible morphologic changes in surface motor unit potentials (MUPs) as a result of muscle shortening have not, as yet, been evaluated or characterized. Using a convolutional MUP model, we investigated the effects of muscle shortening on the shape, amplitude, and duration characteristics of MUPs for different electrode positions relative to the fibre-tendon junction and for different depths of the MU in the muscle (MU-to-electrode distance). It was found that the effects of muscle shortening on MUP morphology depended not only on whether the electrodes were between the end-plate and the tendon junction or beyond the tendon junction, but also on the specific distance to this junction. When the electrodes lie between the end-plate and tendon junction, it was found that (1) the muscle shortening effect is not important for superficial MUs, (2) the sensitivity of MUP amplitude to muscle shortening increases with MU-to-electrode distance, and (3) the amplitude of the MUP negative phase is not affected by muscle shortening. This study provides a basis for the interpretation of the changes in MUP characteristics in experiments where both physiological and geometrical aspects of the muscle are varied.
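The abstract discusses how electrode position relative to the fibre-tendon junction and MU depth shape the recorded potentials. The toy sketch below is not the convolutional MUP model used in the study; it is only a geometric illustration of one ingredient of the argument: a point source travelling from the end-plate and extinguishing at the tendon, seen by a surface electrode with inverse-distance weighting, so that shortening the fibre changes the simulated waveform's amplitude and timing.

```python
import numpy as np

def toy_fibre_potential(fibre_length_mm: float, electrode_x_mm: float, depth_mm: float,
                        v_mm_per_ms: float = 4.0, dt_ms: float = 0.05, t_max_ms: float = 30.0):
    """Toy single-fibre potential: a point source starts at the end-plate (x = 0),
    travels toward the tendon at conduction velocity v and is extinguished at the
    fibre-tendon junction (x = fibre_length_mm). The electrode sees the source
    amplitude weighted by the inverse of the source-to-electrode distance."""
    t = np.arange(0.0, t_max_ms, dt_ms)
    x_source = np.minimum(v_mm_per_ms * t, fibre_length_mm)     # stops at the tendon
    active = v_mm_per_ms * t <= fibre_length_mm                  # extinguished afterwards
    r = np.sqrt((x_source - electrode_x_mm) ** 2 + depth_mm ** 2)
    return t, np.where(active, 1.0 / r, 0.0)

# Same electrode position and depth, two fibre lengths (resting-length vs shortened muscle).
for L in (120.0, 80.0):
    t, pot = toy_fibre_potential(fibre_length_mm=L, electrode_x_mm=100.0, depth_mm=10.0)
    print(f"fibre length {L:.0f} mm: peak {pot.max():.3f}, peak time {t[np.argmax(pot)]:.1f} ms")
```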
Abstract:
The motivation for this research arose from the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and for business applications due to their significantly lower cost than their predecessors, the mainframes. Later, industrial automation developed its own vertically integrated hardware and software to address the application needs of uninterrupted operations, real-time control and resilience to harsh environmental conditions. This has led to the creation of an independent industry, namely industrial automation as used in PLC, DCS, SCADA and robot control systems. This industry employs today over 200,000 people in a profitable slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communications networks and hand-held devices. Already in the 1990s it was foreseen that IT and communications would merge into one information and communication technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC continued to evolve in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption System-on-Chip (SoC) architecture. Unlike in the CISC world, the RISC processor architecture business is a separate industry from the RISC chip manufacturing industry. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface and application market, which give customers more choice through hardware-independent, real-time-capable software applications. An architecture disruption emerged, and the smartphone and tablet markets were formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other kind of computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers and is now being further extended by tablets. An underlying additional element of this transition is the increasing role of open source technologies in both software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominating closed operating-system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, and the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based or licensed, all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact during the disruptions based on SoC and related software platforms in the ICT industries.
Industrial automation incumbents continue to supply systems based on vertically integrated stacks consisting of proprietary software and proprietary, mainly microprocessor-based, hardware. They enjoy admirable profitability levels on a very narrow customer base, thanks to strong technology-enabled customer lock-in and the customers' high risk exposure, as their production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation (ICAT) industry. Lately, the Internet of Things (IoT) and weightless networks, a new standard leveraging frequency channels earlier occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will be created that the industrial automation market will in due course face an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation and the competition between the incumbents, firstly through research on cost-competitiveness efforts in captive outsourcing of engineering, research and development, and secondly through research on process re-engineering in the case of global software support for complex systems. Thirdly, we investigate the views of the industry actors, namely customers, incumbents and newcomers, on the future direction of industrial automation, and we conclude with our assessment of the possible routes industrial automation could take, considering the looming rise of the Internet of Things (IoT) and weightless networks. Industrial automation is an industry dominated by a handful of global players, each of them focused on maintaining its own proprietary solutions. The rise of de facto standards like the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, has created new markets for personal computers, smartphones and tablets and will eventually also impact industrial automation through game-changing commoditization and related control-point and business-model changes. This trend will inevitably continue, but the transition to a commoditized industrial automation will not happen in the near future.