89 results for COHERENT OTDR
at Université de Lausanne, Switzerland
Abstract:
Functional neuroimaging has undergone spectacular developments in recent years. Paradoxically, its neurobiological bases have remained elusive, resulting in an intense debate around the cellular mechanisms taking place upon activation that could contribute to the signals measured. Taking advantage of a modeling approach, we propose here a coherent neurobiological framework that not only explains several in vitro and in vivo observations but also provides a physiological basis to interpret imaging signals. First, based on a model of compartmentalized energy metabolism, we show that complex kinetics of NADH changes observed in vitro can be accounted for by distinct metabolic responses in two cell populations reminiscent of neurons and astrocytes. Second, extended application of the model to an in vivo situation allowed us to reproduce the evolution of intraparenchymal oxygen levels upon activation as measured experimentally without substantially altering the initial parameter values. Finally, applying the same model to functional neuroimaging in humans, we were able to determine that the early negative component of the blood oxygenation level-dependent response recorded with functional MRI, known as the initial dip, critically depends on the oxidative response of neurons, whereas the late aspects of the signal correspond to a combination of responses from cell types with two distinct metabolic profiles that could be neurons and astrocytes. In summary, our results, obtained with such a modeling approach, support the concept that both neuronal and glial metabolic responses form essential components of neuroimaging signals.
Abstract:
To permit the tracking of turbulent flow structures in an Eulerian frame from single-point measurements, we make use of a generalization of conventional two-dimensional quadrant analysis to three-dimensional octants. We characterize flow structures using the sequences of these octants and show how significance may be attached to particular sequences using statistical null models. We analyze an example experiment and show how a particular dominant flow structure can be identified from the conditional probability of octant sequences. The frequency of this structure corresponds to the dominant peak in the velocity spectra and accounts for a high proportion of the total shear stress. We link this structure explicitly to the propensity for sediment entrainment and show that greater insight into sediment entrainment can be obtained by disaggregating those octants that occur within the identified macroturbulence structure from those that do not. Hence, this work goes beyond critiques of Reynolds stress approaches to bed load entrainment that highlight the importance of outward interactions, to identifying and prioritizing the quadrants/octants that define particular flow structures.
Key Points:
- A new method for analysing single-point velocity data is presented
- Flow structures are identified by a sequence of flow states (termed octants)
- The identified structure exerts high stresses and causes bed-load entrainment
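The octant classification described in the abstract can be sketched in a few lines: each velocity sample is assigned one of eight states from the signs of its three fluctuating components, and structures are then sought in the resulting symbol sequence. The sketch below is illustrative only; the function names and the synthetic data are not from the paper.

```python
import numpy as np

def octant_series(u, v, w):
    """Classify each velocity sample into one of eight octants (0-7)
    from the signs of its fluctuating components u', v', w'."""
    up, vp, wp = u - u.mean(), v - v.mean(), w - w.mean()
    return ((up > 0).astype(int) * 4
            + (vp > 0).astype(int) * 2
            + (wp > 0).astype(int))

# Synthetic single-point velocity record (stand-in for measured data)
rng = np.random.default_rng(0)
u, v, w = rng.normal(size=(3, 1000))
octants = octant_series(u, v, w)

# Count occurrences of each two-octant sequence; conditional
# probabilities of sequences would be compared against a null model
transitions = np.zeros((8, 8), dtype=int)
for a, b in zip(octants[:-1], octants[1:]):
    transitions[a, b] += 1
```

A null model could then be built by shuffling the octant series and recomputing the transition counts, attaching significance to sequences that occur more often than chance.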
Abstract:
We propose a novel formulation to solve the problem of intra-voxel reconstruction of the fibre orientation distribution function (FOD) in each voxel of the white matter of the brain from diffusion MRI data. The majority of the state-of-the-art methods in the field perform the reconstruction on a voxel-by-voxel level, promoting sparsity of the orientation distribution. Recent methods have proposed a global denoising of the diffusion data using spatial information prior to reconstruction, while others promote spatial regularisation through an additional empirical prior on the diffusion image at each q-space point. Our approach reconciles voxelwise sparsity and spatial regularisation and defines a spatially structured FOD sparsity prior, where the structure originates from the spatial coherence of the fibre orientation between neighbour voxels. The method is shown, through both simulated and real data, to enable accurate FOD reconstruction from a much lower number of q-space samples than the state of the art, typically 15 samples, even for quite adverse noise conditions.
Abstract:
We conducted an experiment to assess the use of olfactory traces for spatial orientation in an open environment in rats, Rattus norvegicus. We trained rats to locate a food source at a fixed location from different starting points, in the presence or absence of visual information. A single food source was hidden in an array of 19 petri dishes regularly arranged in an open-field arena. Rats were trained to locate the food source either in white light (with full access to distant visuospatial information) or in darkness (without any visual information). In both cases, the goal was in a fixed location relative to the spatial frame of reference. The results of this experiment revealed that the presence of noncontrolled olfactory traces coherent with the spatial frame of reference enables rats to locate a unique position as accurately in darkness as with full access to visuospatial information. We hypothesize that the olfactory traces complement the use of other orientation mechanisms, such as path integration or the reliance on visuospatial information. This experiment demonstrates that rats can rely on olfactory traces for accurate orientation, and raises questions about the establishment of such traces in the absence of any other orientation mechanism. Copyright 1998 The Association for the Study of Animal Behaviour.
Abstract:
Male and female Wistar rats were treated postnatally (PND 5-16) with BSO (l-buthionine-(S,R)-sulfoximine) to provide a rat model of schizophrenia based on transient glutathione deficit. In the watermaze, BSO-treated male rats perform very efficiently in conditions where a diversity of visual information is continuously available during orientation trajectories [1]. Our hypothesis is that the treatment impairs proactive strategies anticipating future sensory information, while supporting a tight visual adjustment on memorized snapshots, i.e. compensatory reactive strategies. To test this hypothesis, BSO rats' performance was assessed in two conditions using an 8-arm radial maze task: a semi-transparent maze with no available view of the environment from the maze centre [2], and a modified 2-parallel maze known to induce a neglect of the parallel pair in normal rats [3-5]. Male rats, but not females, were affected by the BSO treatment. In the semi-transparent maze, BSO males expressed a higher error rate, especially in completing the maze after an interruption. In the 2-parallel maze, BSO males, unlike controls, expressed no neglect of the parallel arms. This second result is consistent with a reactive strategy using accurate memory images of the contextual environment instead of a representation based on integrating relative directions. These results are coherent with a treatment-induced deficit in proactive decision strategy based on multimodal cognitive maps, compensated by accurate reactive adaptations based on the memory of local configurations. Control females did not express an efficient proactive capacity in the semi-transparent maze, nor did they show the significant neglect of the parallel arms, which might have masked the BSO-induced effect. Their reduced sensitivity to BSO treatment is discussed with regard to a sex-biased basal cognitive style.
Abstract:
When speech is degraded, word report is higher for semantically coherent sentences (e.g., her new skirt was made of denim) than for anomalous sentences (e.g., her good slope was done in carrot). Such increased intelligibility is often described as resulting from "top-down" processes, reflecting an assumption that higher-level (semantic) neural processes support lower-level (perceptual) mechanisms. We used time-resolved sparse fMRI to test for top-down neural mechanisms, measuring activity while participants heard coherent and anomalous sentences presented in speech envelope/spectrum noise at varying signal-to-noise ratios (SNR). The timing of BOLD responses to more intelligible speech provides evidence of hierarchical organization, with earlier responses in peri-auditory regions of the posterior superior temporal gyrus than in more distant temporal and frontal regions. Despite Sentence content × SNR interactions in the superior temporal gyrus, prefrontal regions respond after auditory/perceptual regions. Although we cannot rule out top-down effects, this pattern is more compatible with a purely feedforward or bottom-up account, in which the results of lower-level perceptual processing are passed to inferior frontal regions. Behavioral and neural evidence that sentence content influences perception of degraded speech does not necessarily imply "top-down" neural processes.
Abstract:
Matrix effects, which represent an important issue in liquid chromatography coupled to mass spectrometry or tandem mass spectrometry detection, should be closely assessed during method development. In the case of quantitative analysis, the use of stable isotope-labelled internal standard with physico-chemical properties and ionization behaviour similar to the analyte is recommended. In this paper, an example of the choice of a co-eluting deuterated internal standard to compensate for short-term and long-term matrix effect in the case of chiral (R,S)-methadone plasma quantification is reported. The method was fully validated over a concentration range of 5-800 ng/mL for each methadone enantiomer with satisfactory relative bias (-1.0 to 1.0%), repeatability (0.9-4.9%) and intermediate precision (1.4-12.0%). From the results obtained during validation, a control chart process during 52 series of routine analysis was established using both intermediate precision standard deviation and FDA acceptance criteria. The results of routine quality control samples were generally included in the +/-15% variability around the target value and mainly in the two standard deviation interval illustrating the long-term stability of the method. The intermediate precision variability estimated in method validation was found to be coherent with the routine use of the method. During this period, 257 trough concentration and 54 peak concentration plasma samples of patients undergoing (R,S)-methadone treatment were successfully analysed for routine therapeutic drug monitoring.
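The control-chart logic described above, combining the FDA ±15% acceptance limits with a two-standard-deviation interval derived from the intermediate precision, can be sketched as follows. The function name and all numerical values are hypothetical illustrations, not the validated method itself.

```python
def qc_status(measured, target, sd_intermediate):
    """Flag a routine QC result against FDA +/-15% acceptance limits
    and a two-standard-deviation control interval (illustrative)."""
    within_fda = abs(measured - target) <= 0.15 * target
    within_2sd = abs(measured - target) <= 2 * sd_intermediate
    return within_fda, within_2sd

# Hypothetical QC sample: target 100 ng/mL, intermediate-precision SD 6 ng/mL
print(qc_status(113.0, 100.0, 6.0))  # → (True, False): accepted, but outside 2 SD
```

Plotting such flags series by series over the 52 routine runs would reproduce the kind of control chart described in the abstract.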
Abstract:
We show how an ultrafast pump-pump excitation induces strong fluorescence depletion in biological samples, such as bacteria-containing droplets, in contrast with fluorescent interferents, such as polycyclic aromatic compounds, despite similar spectroscopic properties. Application to the optical remote discrimination of biotic versus non-biotic particles is proposed. Further improvement is required to allow the discrimination of one pathogenic among other non-pathogenic micro-organisms. This improved selectivity may be reached with optimal coherent control experiments, as discussed in the paper.
Abstract:
Continuing developments in science and technology mean that the amount of information forensic scientists are able to provide for criminal investigations is ever increasing. The commensurate increase in complexity creates difficulties for scientists and lawyers with regard to evaluation and interpretation, notably with respect to issues of inference and decision. Probability theory, implemented through graphical methods, and specifically Bayesian networks, provides powerful methods to deal with this complexity. Extensions of these methods to elements of decision theory provide further support and assistance to the judicial system. Bayesian Networks for Probabilistic Inference and Decision Analysis in Forensic Science provides a unique and comprehensive introduction to the use of Bayesian decision networks for the evaluation and interpretation of scientific findings in forensic science, and for the support of decision-makers in their scientific and legal tasks. The book:
- Includes self-contained introductions to probability and decision theory.
- Develops the characteristics of Bayesian networks, object-oriented Bayesian networks and their extension to decision models.
- Features implementation of the methodology with reference to commercial and academically available software.
- Presents standard networks and their extensions that can be easily implemented and that can assist in the reader's own analysis of real cases.
- Provides a technique for structuring problems and organizing data based on methods and principles of scientific reasoning.
- Contains a method for the construction of coherent and defensible arguments for the analysis and evaluation of scientific findings and for decisions based on them.
- Is written in a lucid style, suitable for forensic scientists and lawyers with minimal mathematical background.
- Includes a foreword by Ian Evett.
The clear and accessible style of this second edition makes this book ideal for all forensic scientists, applied statisticians and graduate students wishing to evaluate forensic findings from the perspective of probability and decision analysis. It will also appeal to lawyers and other scientists and professionals interested in the evaluation and interpretation of forensic findings, including decision making based on scientific information.
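At the core of the probabilistic evaluation of scientific findings is Bayes' theorem in odds form, where a likelihood ratio updates the prior odds on a proposition. A minimal sketch with illustrative values (not drawn from the book):

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = LR * prior odds."""
    return likelihood_ratio * prior_odds

def odds_to_prob(odds):
    """Convert odds in favour of a proposition to a probability."""
    return odds / (1.0 + odds)

# Illustrative: prior odds 1:1000, evidence with likelihood ratio 10,000
post = posterior_odds(1 / 1000, 10_000)
print(round(odds_to_prob(post), 3))  # → 0.909
```

A Bayesian network generalizes this one-step update to many interdependent propositions and items of evidence, propagating the same rule through the graph.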
Abstract:
The globalization of markets, changes in the economic context and the impact of new information technologies have forced companies to rethink the way they manage their intellectual capital (knowledge management) and human capital (competence management). It is now commonly accepted that these assets play a particularly strategic role in the organization. A company wishing to establish a policy for managing them faces several problems: a long capitalization process is required, passing through stages such as the identification, extraction and representation of knowledge and competences. Various knowledge and competence management methods exist for this purpose, such as MASK, CommonKADS and KOD. Unfortunately, these methods are cumbersome to implement, are confined to certain types of knowledge and are consequently limited in the functionalities they can offer. Moreover, competence management and knowledge management are treated as two separate fields, whereas it would be valuable to unify the two approaches: competences are very close to knowledge, as the following definition underlines: "a set of knowledge in action in a given context". We therefore chose to base our proposal on the concept of competence. Competence is among the most crucial kinds of knowledge in a company, in particular for avoiding the loss of know-how and for anticipating the company's future needs, because behind employees' competences lies the efficiency of the organization.
Furthermore, many other organizational concepts, such as jobs, missions, projects and training, can be described in terms of competences. Unfortunately, there is no real consensus on the definition of competence, and the existing definitions, even when fully satisfactory to experts, do not allow an operational system to be built. In our approach, we address competence management by means of a knowledge management method; by their very nature, knowledge and competence are closely linked, so such a method is well suited to managing competences. In order to exploit this knowledge and these competences, we first had to define the organizational concepts in a clear and computational way. On this basis, we propose a methodology for building the various company repositories (competence, mission and job repositories, among others). To model these repositories we chose ontologies, because they provide coherent and consensual definitions of the concepts while also supporting linguistic diversity. We then map the company's knowledge (training, missions, jobs and so on) onto these ontologies so that it can be exploited and disseminated. Our combined approach to knowledge management and competence management led to the realization of a tool offering numerous functionalities, such as the management of mobility areas, strategic analysis, directories and CV management.
Abstract:
In less than half a century, allergy, originally perceived as a rare disease, has become a major public health threat, today affecting the lives of more than 60 million people in Europe, and probably close to one billion worldwide, thereby heavily impacting the budgets of public health systems. More disturbingly, its prevalence and impact are on the rise, a development that has been associated with environmental and lifestyle changes accompanying the continuous process of urbanization and globalization. Therefore, there is an urgent need to prioritize and concert research efforts in the field of allergy, in order to achieve sustainable results on prevention, diagnosis and treatment of this most prevalent chronic disease of the 21st century. The European Academy of Allergy and Clinical Immunology (EAACI) is the leading professional organization in the field of allergy, promoting excellence in clinical care, education, training and basic and translational research, all with the ultimate goal of improving the health of allergic patients. The European Federation of Allergy and Airways Diseases Patients' Associations (EFA) is a non-profit network of allergy, asthma and Chronic Obstructive Pulmonary Disease (COPD) patients' organizations. In support of their missions, the present EAACI Position Paper, in collaboration with EFA, highlights the most important research needs in the field of allergy to serve as key recommendations for future research funding at the national and European levels. Although allergies may involve almost every organ of the body and an array of diverse external factors act as triggers, there are several common themes that need to be prioritized in research efforts. As in many other chronic diseases, effective prevention, curative treatment and accurate, rapid diagnosis represent major unmet needs.
Detailed phenotyping/endotyping stands out as widely required in order to arrange or re-categorize clinical syndromes into more coherent, uniform and treatment-responsive groups. Research efforts to unveil the basic pathophysiologic pathways and mechanisms, thus leading to the comprehension and resolution of the pathophysiologic complexity of allergies, will allow for the design of novel patient-oriented diagnostic and treatment protocols. Several allergic diseases require well-controlled epidemiological description and surveillance, using disease registries, pharmacoeconomic evaluation, as well as large biobanks. Additionally, there is a need for extensive studies to bring promising new biotechnological innovations, such as biological agents, vaccines of modified allergen molecules and engineered components for allergy diagnosis, closer to clinical practice. Finally, particular attention should be paid to the difficult-to-manage, precarious and costly severe disease forms and/or exacerbations. Nonetheless, currently arising treatments, mainly in the fields of immunotherapy and biologicals, hold great promise for targeted and causal management of allergic conditions. Active involvement of all stakeholders, including Patient Organizations and policy makers, is necessary to achieve the aims emphasized herein.
Abstract:
The objective of this work is to present a multitechnique approach to define the geometry, the kinematics, and the failure mechanism of a retrogressive large landslide (upper part of the La Valette landslide, South French Alps) by the combination of airborne and terrestrial laser scanning data and ground-based seismic tomography data. The advantage of combining different methods is to constrain the geometrical and failure mechanism models by integrating different sources of information. Because of a high point density at the ground surface (4.1 points m⁻²), a small laser footprint (0.09 m) and an accurate three-dimensional positioning (0.07 m), airborne laser scanning data are well suited as a source of information to analyze morphological structures at the surface. Seismic tomography surveys (P-wave and S-wave velocities) may highlight the presence of low-seismic-velocity zones that characterize the presence of dense fracture networks at the subsurface. The surface displacements measured from the terrestrial laser scanning data over a period of 2 years (May 2008–May 2010) allow one to quantify the landslide activity in the direct vicinity of the identified discontinuities. A substantial subsidence of the crown area, with an average subsidence rate of 3.07 m year⁻¹, is determined. The displacement directions indicate that the retrogression is controlled structurally by the preexisting discontinuities. A conceptual structural model is proposed to explain the failure mechanism and the retrogressive evolution of the main scarp. Uphill, the crown area is affected by planar sliding included in a deeper wedge failure system constrained by two preexisting fractures. Downhill, the landslide body acts as a buttress for the upper part. Consequently, the progression of the landslide body downhill allows the development of dip-slope failures, and coherent blocks start sliding along planar discontinuities.
The volume of the failed mass in the crown area is estimated at 500,000 m3 with the sloping local base level method.
Abstract:
High-throughput technologies are now used to generate more than one type of data from the same biological samples. To properly integrate such data, we propose using co-modules, which describe coherent patterns across paired data sets, and conceive several modular methods for their identification. We first test these methods using in silico data, demonstrating that the integrative scheme of our Ping-Pong Algorithm uncovers drug-gene associations more accurately when considering noisy or complex data. Second, we provide an extensive comparative study using the gene-expression and drug-response data from the NCI-60 cell lines. Using information from the DrugBank and the Connectivity Map databases, we show that the Ping-Pong Algorithm predicts drug-gene associations significantly better than other methods. Co-modules provide insights into possible mechanisms of action for a wide range of drugs and suggest new targets for therapy.
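As a rough illustration of the alternating idea behind such integrative methods, the sketch below passes scores back and forth between a gene-expression matrix and a drug-response matrix through their shared cell lines. This is a hedged toy version, not the published Ping-Pong Algorithm; all names, matrices and the renormalization scheme are invented for illustration.

```python
import numpy as np

def ping_pong_sketch(E, R, g0, n_iter=10):
    """Toy alternating refinement: a gene score vector is projected
    into cell-line space via expression matrix E (genes x lines),
    matched against drug-response matrix R (drugs x lines), and
    projected back, renormalising each round."""
    g = g0 / np.linalg.norm(g0)
    d = np.zeros(R.shape[0])
    for _ in range(n_iter):
        lines = E.T @ g            # cell-line profile implied by the gene scores
        d = R @ lines              # drugs whose response matches that profile
        d /= np.linalg.norm(d)
        lines = R.T @ d            # back to cell-line space
        g = E @ lines              # genes matching the drug scores
        g /= np.linalg.norm(g)
    return g, d

# Random stand-ins for expression (50 genes) and response (30 drugs)
# over 20 shared cell lines
rng = np.random.default_rng(1)
E = rng.normal(size=(50, 20))
R = rng.normal(size=(30, 20))
genes, drugs = ping_pong_sketch(E, R, rng.normal(size=50))
```

High-scoring entries of `genes` and `drugs` would jointly define one candidate co-module; the published method additionally enforces sparsity and significance testing.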
Abstract:
Phenoxyalkanoic acid degradation is well studied in Beta- and Gammaproteobacteria, but the genetic background has not been elucidated so far in Alphaproteobacteria. We report the isolation of several genes involved in dichlorprop and mecoprop degradation from the alphaproteobacterium Sphingomonas herbicidovorans MH and propose that the degradation proceeds analogously to that previously reported for 2,4-dichlorophenoxyacetic acid (2,4-D). Two genes for alpha-ketoglutarate-dependent dioxygenases, sdpA(MH) and rdpA(MH), were found, both of which were adjacent to sequences with potential insertion elements. Furthermore, a gene for a dichlorophenol hydroxylase (tfdB), a putative regulatory gene (cadR), two genes for dichlorocatechol 1,2-dioxygenases (dccA(I/II)), two for dienelactone hydrolases (dccD(I/II)), part of a gene for maleylacetate reductase (dccE), and one gene for a potential phenoxyalkanoic acid permease were isolated. In contrast to other 2,4-D degraders, the sdp, rdp, and dcc genes were scattered over the genome and their expression was not tightly regulated. No coherent pattern was derived on the possible origin of the sdp, rdp, and dcc pathway genes. rdpA(MH) was 99% identical to rdpA(MC1), an (R)-dichlorprop/alpha-ketoglutarate dioxygenase from Delftia acidovorans MC1, which is evidence for a recent gene exchange between Alpha- and Betaproteobacteria. Conversely, DccA(I) and DccA(II) did not group within the known chlorocatechol 1,2-dioxygenases, but formed a separate branch in clustering analysis. This suggests a different reservoir and reduced transfer for the genes of the modified ortho-cleavage pathway in Alphaproteobacteria compared with the ones in Beta- and Gammaproteobacteria.
Abstract:
The application of two approaches for high-throughput, high-resolution X-ray phase contrast tomographic imaging being used at the tomographic microscopy and coherent radiology experiments (TOMCAT) beamline of the SLS is discussed and illustrated. Differential phase contrast (DPC) imaging, using a grating interferometer and a phase-stepping technique, is integrated into the beamline environment at TOMCAT in terms of the fast acquisition and reconstruction of data and the availability to scan samples within an aqueous environment. A second phase contrast method is a modified transfer of intensity approach that can yield the 3D distribution of the decrement of the refractive index of a weakly absorbing object from a single tomographic dataset. The two methods are complementary to one another: the DPC method is characterised by a higher sensitivity and by moderate resolution with larger samples; the modified transfer of intensity approach is particularly suited for small specimens when high resolution (around 1 μm) is required. Both are being applied to investigations in the biological and materials science fields.
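The phase-stepping technique mentioned above records a sinusoidal intensity oscillation as one grating is stepped over a period; the differential phase is the argument of the first Fourier coefficient of that stepping curve. A minimal sketch with synthetic data (the function name and all values are illustrative, not from the TOMCAT pipeline):

```python
import numpy as np

def retrieve_phase(steps):
    """Recover the phase of the intensity oscillation over one grating
    period: the argument of the first Fourier coefficient of the
    phase-stepping curve."""
    return np.angle(np.fft.fft(steps, axis=-1)[..., 1])

# Synthetic stepping curve: N equidistant steps over one period,
# mean intensity 10, modulation 3, known phase 0.7 rad
N, phi = 8, 0.7
k = np.arange(N)
intensity = 10 + 3 * np.cos(2 * np.pi * k / N + phi)
print(round(float(retrieve_phase(intensity)), 3))  # ≈ 0.7
```

In an imaging setting `steps` would carry one stepping curve per detector pixel, and the retrieved phase map (relative to a reference scan without the sample) gives the differential phase image.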