58 results for Warren abstract machine
Abstract:
Abstract In humans, the skin is the largest organ of the body, covering up to 2 m² and weighing up to 4 kg in an average adult. Its function is to protect the body from external insults and also to retain water inside. This barrier function, termed the epidermal permeability barrier (EPB), is localized in the functional part of the skin: the epidermis. For this, evolution has built a complex structure of cells and lipids sealing the surface, the stratum corneum. The formation of this structure is finely tuned, since it is not only formed once at birth but renewed throughout life. This active process gives the skin high plasticity and reactivity, but also leads to various pathologies. ENaC is a sodium channel extensively studied in organs such as the kidney and lung because of its importance in regulating sodium homeostasis and fluid volume. It is composed of three subunits, α, β and γ, which form a sodium-selective channel through the cell membrane. Its presence in the skin has been demonstrated, but little is known about its physiological role. Previous work has shown that αENaC knockout mice display an abnormal epidermis, suggesting a role in differentiation processes that might be implicated in the EPB. The principal aim of this thesis has been to study the consequences for EPB function in mice deficient for αENaC by molecular and physiological means, and to investigate the underlying molecular mechanisms. We show here that the barrier function of αENaC knockout pups is impaired: not immediately after birth (permeability test), but 24 h later, when evident differences in water loss appeared compared with wildtypes. Neither the structural proteins of the epithelium nor the tight junctions showed any obvious alterations. Instead, stratum corneum lipid disorders, accompanied by an impairment of skin surface acidification, are most likely responsible for the barrier defect.
To analyze this EPB defect in detail, several hypotheses were examined: reduced sensitivity to calcium, which is the key activator of epidermal formation, or modification of ENaC-mediated ion fluxes/currents inside the epidermis. The cellular localization of ENaC and the action in the skin of CAP1, a positive regulator of ENaC, were also studied in detail. In summary, this study clearly demonstrates that ENaC is a key player in EPB maintenance, because αENaC knockout pups are not able to adapt to the new environment (ex utero) as efficiently as wildtypes, most likely due to impaired sodium handling inside the epidermis. Résumé In humans, the skin is the largest organ, covering almost 2 m² and weighing nearly 4 kg in an adult. Its main function is to protect the organism from external aggressions and also to keep water inside the body. This function, called the epithelial barrier, is localized in the functional part of the skin: the epidermis. To this end, evolution has produced a complex structure of cells and lipids covering the surface, the stratum corneum. Its formation is finely regulated, since it is not only produced at birth but constantly renewed throughout life, which gives it great plasticity but is also the cause of numerous pathologies. ENaC is a sodium channel extensively studied in the kidney and the lung for its importance in the regulation of sodium homeostasis and of the volume of the internal milieu. It is composed of three subunits, α, β and γ, which form a sodium-selective pore in membranes. This channel is present in the skin, but its function there is not known. Previous work has shown that mice in which the gene encoding αENaC has been inactivated display a pathological epidermis, suggesting a role in differentiation and possibly an involvement in the epithelial barrier.
The aim of this thesis was to study the barrier in these knockout mice with molecular and physiological methods and to characterize the molecular mechanisms involved. This work showed that the mutant mice displayed a barrier defect. The defect is not visible immediately at birth (permeability test), but 24 h later, when transepidermal water loss measurements show a clear difference from control animals. Neither the structural proteins nor the tight junctions of the epidermis showed major abnormalities. Conversely, the stratum corneum lipids showed a maturation defect (explaining the barrier phenotype), most likely secondary to the defect in skin-surface acidification that we observed. Other mechanisms were explored to investigate this barrier anomaly, such as reduced sensitivity to calcium, the main activator of epidermal formation, or altered ion fluxes between the layers of the epidermis. The cellular localization of ENaC and the action of its activator CAP1 were also studied in detail. In summary, this study clearly demonstrates that ENaC is an important player in the formation of the epithelial barrier, since knockout skin does not adapt to the new ex utero environment as well as wildtype skin, owing to the function of ENaC in sodium movements within the epidermis itself. Résumé tout public In humans, the skin is the largest organ, covering almost 2 m² and weighing nearly 4 kg in an adult. Its main function is to protect the organism from external aggressions and also to keep water inside the body. This function, called the epithelial barrier, is localized in the functional part of the skin: the epidermis.
To this end, evolution has produced a complex structure of cells and lipids covering the surface, the stratum corneum. Its formation is finely regulated, since it is not only produced at birth but constantly renewed throughout life, which gives it great plasticity but is also the cause of numerous diseases. ENaC is a protein that forms a channel allowing the selective passage of sodium ions across the cell membrane. It is extensively studied in the kidney for its importance in salt recovery during urine concentration. This channel is present in the skin, but its function there is not known. Previous work has shown that mice in which the gene encoding αENaC has been inactivated display a pathological epidermis, suggesting a role in the skin and more particularly in the barrier function of the epidermis. The aim of this thesis was to study the barrier function in these mutant mice at the tissue and cellular levels. Using a device that measures water loss across the skin, this work showed that the mutant mice had more permeable skin than control animals. The defect is only visible 24 h after birth, but we were able to show that the mutant animals lost almost twice as much water as the controls. At the molecular level, we showed that this defect arose from a maturation problem of the lipids that make up the skin barrier. This maturation is incomplete, most likely because of defective ion movements in the most superficial layers of the epidermis, owing to the absence of the ENaC channel.
In summary, this study clearly demonstrates that ENaC is an important player in the formation of the epithelial barrier, since mutant skin does not adapt to the new ex utero environment as well as wildtype skin, owing to the function of ENaC in sodium movements within the epidermis itself.
Abstract:
Automatic environmental monitoring networks, supported by wireless communication technologies, nowadays provide large and ever-increasing volumes of data. Using this information in natural hazard research is an important issue. Spatial maps of hazard-related parameters, produced from point observations and available auxiliary information, are particularly useful for risk assessment and decision making. The purpose of this article is to present and explore appropriate tools to process large amounts of available data and produce predictions at fine spatial scales. These are the algorithms of machine learning, which are aimed at non-parametric, robust modelling of non-linear dependencies from empirical data. The computational efficiency of these data-driven methods allows prediction maps to be produced in real time, which makes them superior to physical models for operational use in risk assessment and mitigation. This situation is encountered in particular in the spatial prediction of climatic variables (topo-climatic mapping). In the complex topography of mountainous regions, meteorological processes are strongly influenced by the relief. The article shows how these relations, possibly regionalized and non-linear, can be modelled from data using information from digital elevation models. The methodology is illustrated by the mapping of temperatures (including Föhn and temperature-inversion situations) from measurements taken by the Swiss meteorological monitoring network. The methods used in the study include data-driven feature selection, support vector algorithms and artificial neural networks.
Abstract:
The book presents the state of the art in machine learning algorithms (artificial neural networks of different architectures, support vector machines, etc.) as applied to the classification and mapping of spatially distributed environmental data. Basic geostatistical algorithms are presented as well. New trends in machine learning and their application to spatial data are discussed, and real case studies based on environmental and pollution data are carried out. The book includes a CD-ROM with the Machine Learning Office software, including sample data sets, that allows both students and researchers to put the concepts rapidly into practice.
Abstract:
OBJECTIVE: The purpose of this study was to adapt and improve a minimally invasive two-step postmortem angiographic technique for use on human cadavers. Detailed mapping of the entire vascular system is almost impossible with conventional autopsy tools. The technique described should be valuable in the diagnosis of vascular abnormalities. MATERIALS AND METHODS: Postmortem perfusion with an oily liquid is established with a circulation machine. An oily contrast agent is introduced as a bolus injection, and radiographic imaging is performed. In this pilot study, the upper or lower extremities of four human cadavers were perfused. In two cases, the vascular system of a lower extremity was visualized with anterograde perfusion of the arteries. In the other two cases, in which the suspected cause of death was drug intoxication, the veins of an upper extremity were visualized with retrograde perfusion of the venous system. RESULTS: In each case, the vascular system was visualized up to the level of the small supplying and draining vessels. In three of the four cases, vascular abnormalities were found. In one instance, a venous injection mark engendered by the self-administration of drugs was rendered visible by exudation of the contrast agent. In the other two cases, occlusion of the arteries and veins was apparent. CONCLUSION: The method described is readily applicable to human cadavers. After establishment of postmortem perfusion with paraffin oil and injection of the oily contrast agent, the vascular system can be investigated in detail and vascular abnormalities rendered visible.
Abstract:
The paper presents some contemporary approaches to spatial environmental data analysis. The main topics concern decision-oriented problems of environmental spatial data mining and modelling: valorization and representativity of data with the help of exploratory data analysis, spatial predictions, probabilistic and risk mapping, and the development and application of conditional stochastic simulation models. The innovative part of the paper presents an integrated/hybrid model: machine learning (ML) residuals sequential simulations (MLRSS). The models are based on multilayer perceptron and support vector regression ML algorithms, used for modelling long-range spatial trends, and sequential simulations of the residuals. ML algorithms deliver non-linear solutions for spatially non-stationary problems, which are difficult for the geostatistical approach. Geostatistical tools (variography) are used to characterize the performance of ML algorithms by analyzing the quality and quantity of the spatially structured information extracted from the data. Sequential simulations provide an efficient assessment of uncertainty and spatial variability. A case study on the Chernobyl fallout illustrates the performance of the proposed model. It is shown that probability mapping, provided by the combination of ML data-driven and geostatistical model-based approaches, can be used efficiently in the decision-making process. (C) 2003 Elsevier Ltd. All rights reserved.
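The core MLRSS idea, an ML model captures the long-range trend and the residuals are left for geostatistical sequential simulation, rests on a trend/residual decomposition that can be sketched in a few lines. For brevity the sketch below fits the trend with ordinary least squares instead of a multilayer perceptron, and all data are invented; it shows only the decomposition step, not the residual simulation.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic deposition-like data: long-range trend plus short-range residual.
coords = rng.uniform(0, 10, size=(200, 2))
trend_true = 3.0 + 0.4 * coords[:, 0] - 0.2 * coords[:, 1]
z = trend_true + rng.normal(0, 0.3, 200)        # observed values

# Step 1 (stand-in for the MLP/SVR trend model): least-squares plane fit.
A = np.column_stack([np.ones(len(z)), coords])
beta, *_ = np.linalg.lstsq(A, z, rcond=None)
trend_hat = A @ beta

# Step 2: the residuals carry the spatially structured short-range part;
# in MLRSS these would be modelled by conditional sequential simulation.
residuals = z - trend_hat
```

Removing the trend first is what makes the remaining field closer to stationary, which is the regime where geostatistical simulation works well.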
Abstract:
The decision-making process regarding drug dose, a regular part of everyday medical practice, is critical to patients' health and recovery. It is a challenging process, especially for drugs with narrow therapeutic ranges, in which a medical doctor decides the quantity (dose amount) and frequency (dose interval) on the basis of a set of available patient features and the doctor's clinical experience (a priori adaptation). Computer support in drug dose administration makes the prescription procedure faster, more accurate, more objective, and less expensive, with a tendency to reduce the number of invasive procedures. This paper presents an advanced integrated Drug Administration Decision Support System (DADSS) to help clinicians and patients with dose computation. Based on a support vector machine (SVM) algorithm enhanced with the random sample consensus (RANSAC) technique, the system predicts drug concentration values and computes the ideal dose amount and dose interval for a new patient. With an extension that combines the SVM method with an explicit analytical model, the advanced integrated DADSS can compute drug concentration-to-time curves for a patient under different conditions. A feedback loop updates the curve with newly measured concentration values to make it more personalized (a posteriori adaptation).
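The random sample consensus step mentioned above can be sketched on a toy dose-to-concentration problem: repeatedly fit a model to a minimal random sample, keep the fit with the largest consensus set, then refit on the inliers and invert the model to get a dose for a target concentration. The sketch replaces the SVM with a straight line for brevity, and all patient data, thresholds and the target level are invented assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical records: dose amount -> measured drug concentration,
# with a few gross outlier measurements (e.g. assay errors).
dose = rng.uniform(50, 400, 80)
conc = 0.04 * dose + 1.5 + rng.normal(0, 0.2, 80)
conc[:6] += 8.0                                   # outliers

def ransac_line(x, y, n_iter=200, thresh=0.6, rng=rng):
    """RANSAC consensus fit of y = a*x + b, robust to outliers."""
    best_inliers = np.zeros(len(x), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(x), 2, replace=False)
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        inliers = np.abs(y - (a * x + b)) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the consensus set only.
    a, b = np.polyfit(x[best_inliers], y[best_inliers], 1)
    return a, b, best_inliers

a, b, inliers = ransac_line(dose, conc)

# Invert the fitted model: dose that would reach a target concentration.
target = 10.0
dose_star = (target - b) / a
```

The consensus step is what keeps the occasional bad measurement from corrupting the dose recommendation.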
Abstract:
Résumé : Intensity-modulated radiotherapy (IMRT) is a treatment technique that uses beams whose radiation fluence is modulated. IMRT, widely used in industrialized countries, achieves better dose homogeneity within the target volume and reduces the dose to organs at risk. A common way to implement beam modulation in practice is to sum small beams (segments) sharing the same incidence. This technique is called step-and-shoot IMRT. In the clinical context, patient treatment plans must be verified before the first irradiation. This problem has not yet been solved satisfactorily. Indeed, an independent calculation of the monitor units (representative of the weight of each segment) cannot be performed for step-and-shoot IMRT treatments, because the segment weights are not known a priori but are computed during inverse planning. Moreover, verifying treatment plans by comparison with measurements is time-consuming and does not reproduce the exact treatment geometry. In this work, an independent calculation method for step-and-shoot IMRT treatment plans is described. The method is based on the Monte Carlo code EGSnrc/BEAMnrc, whose model of the linear accelerator head has been validated over a wide range of situations. The segments of an IMRT treatment plan are simulated individually in the exact treatment geometry. The dose distributions are then converted into absorbed dose to water per monitor unit. The total treatment dose in each volume element of the patient (voxel) can be expressed as a linear matrix equation of the monitor units and the dose per monitor unit of each beam.
This equation is solved by matrix inversion using the Non-Negative Least Squares (NNLS) algorithm. Since not all voxels in the patient volume can be used in the calculation, owing to computational limitations, several selection strategies were tested. The best choice is to use the voxels contained in the Planning Target Volume (PTV). The method proposed in this work was tested on eight clinical cases representative of typical radiotherapy treatments. The monitor units obtained lead to overall dose distributions clinically equivalent to those produced by the treatment planning software. This independent monitor unit calculation method for step-and-shoot IMRT is thus validated for clinical use. By analogy, a similar method could be applied to other treatment modalities, such as tomotherapy. Abstract : Intensity Modulated RadioTherapy (IMRT) is a treatment technique that uses modulated beam fluence. IMRT is now widespread in more advanced countries, owing to its improved dose conformation around the target volume and its ability to lower doses to organs at risk in complex clinical cases. One way to carry out beam modulation is to sum smaller beams (beamlets) with the same incidence. This technique is called step-and-shoot IMRT. In a clinical context, it is necessary to verify treatment plans before the first irradiation. Plan verification is still an issue for this technique. Independent monitor unit calculation (representative of the weight of each beamlet) cannot be performed for step-and-shoot IMRT, because beamlet weights are not known a priori but are calculated by inverse planning.
Besides, treatment plan verification by comparison with measured data is time-consuming and performed in a simple geometry, usually in a cubic water phantom with all machine angles set to zero. In this work, an independent method for monitor unit calculation for step-and-shoot IMRT is described. The method is based on the Monte Carlo code EGSnrc/BEAMnrc. The Monte Carlo model of the head of the linear accelerator is validated by comparison of simulated and measured dose distributions in a large range of situations. The beamlets of an IMRT treatment plan are calculated individually by Monte Carlo, in the exact geometry of the treatment. Then, the dose distributions of the beamlets are converted into absorbed dose to water per monitor unit. The dose of the whole treatment in each volume element (voxel) can be expressed through a linear matrix equation of the monitor units and the dose per monitor unit of every beamlet. This equation is solved by a Non-Negative Least Squares (NNLS) algorithm. However, not every voxel inside the patient volume can be used to solve this equation, because of computer limitations. Several ways of voxel selection were tested, and the best choice consists of using the voxels inside the Planning Target Volume (PTV). The method presented in this work was tested with eight clinical cases representative of usual radiotherapy treatments. The monitor units obtained lead to clinically equivalent global dose distributions. Thus, this independent monitor unit calculation method for step-and-shoot IMRT is validated and can be used in clinical routine. It would be possible to apply a similar method to other treatment modalities, such as tomotherapy or volumetric modulated arc therapy.
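The linear system at the heart of this method, total voxel dose equals the per-MU beamlet dose matrix times the monitor-unit vector, solved under a non-negativity constraint, can be sketched with SciPy's NNLS solver. The dose matrix and monitor-unit values below are invented placeholders, not Monte Carlo output.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
# D[v, s]: dose per monitor unit delivered by beamlet s to PTV voxel v
# (in the described method these entries come from per-beamlet simulations).
n_voxels, n_segments = 500, 12
D = rng.uniform(0.0, 1.0, size=(n_voxels, n_segments))

# "True" monitor units (as a treatment planning system would output them).
mu_true = rng.uniform(5, 50, n_segments)
d_total = D @ mu_true               # total dose in each selected voxel

# Independent MU calculation: solve d_total = D @ mu subject to mu >= 0.
mu_hat, resid = nnls(D, d_total)
```

Restricting the rows of `D` to PTV voxels mirrors the voxel-selection choice the abstract reports as best; the non-negativity constraint encodes that monitor units cannot be negative.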
Abstract:
OBJECTIVES: Given recent progress, musculoskeletal ultrasound (US) will probably soon be integrated into the standard care of patients with rheumatoid arthritis (RA). However, in daily care, the quality of US machines and the experience of sonographers vary. We conducted a study to assess the reproducibility and feasibility of a US score for RA, using US devices of different quality and rheumatologists with various levels of US expertise, as in daily care. METHODS: The Swiss Sonography in Arthritis and Rheumatism (SONAR) group has developed a semi-quantitative score using OMERACT criteria for synovitis and erosion in RA. The score was taught to 108 rheumatologists trained in US. One year after the last workshop, 19 rheumatologists participated in the study. Scans were performed on 6 US machines ranging from low to high quality, each with a different patient. Weighted kappa was calculated for each pair of readers. RESULTS: Overall, agreement was fair to moderate. Device quality, sonographer experience and prior practice with the score substantially improved agreement. On higher-quality machines and among sonographers with good US experience, agreement increased to substantial (median kappa for B-mode and Doppler: 0.64; 0.41 for erosion). CONCLUSIONS: This study demonstrated the feasibility and reproducibility of the Swiss US SONAR score for RA. Our results confirm the importance of US machine quality and sonographer training for the implementation of US scoring in the routine daily care of RA.
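The pair-wise agreement statistic used in this study, weighted kappa, compares observed to chance-expected disagreement, penalising larger disagreements on an ordinal scale more heavily. The sketch below uses linear weights and invented reader scores on a 0-3 semi-quantitative scale; it is not data from the study.

```python
import numpy as np

def weighted_kappa(a, b, n_cat=4):
    """Linear-weighted Cohen's kappa for two raters on an ordinal scale."""
    O = np.zeros((n_cat, n_cat))
    for i, j in zip(a, b):                     # observed joint frequencies
        O[i, j] += 1
    O /= O.sum()
    E = np.outer(O.sum(1), O.sum(0))           # expected under independence
    idx = np.arange(n_cat)
    W = np.abs(idx[:, None] - idx[None, :])    # linear disagreement weights
    return 1 - (W * O).sum() / (W * E).sum()

# Hypothetical semi-quantitative synovitis scores (0-3) from two readers.
r1 = np.array([0, 1, 1, 2, 3, 2, 0, 1, 3, 2, 1, 0])
r2 = np.array([0, 1, 2, 2, 3, 1, 0, 1, 3, 2, 2, 0])
kappa = weighted_kappa(r1, r2)
```

Perfect agreement gives kappa = 1, chance-level agreement gives 0, and near-miss scores (1 vs 2) are penalised less than distant ones (0 vs 3).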
Abstract:
BACKGROUND: There are limited data on the composition and smoke emissions of 'herbal' shisha products and on the air quality of establishments where they are smoked. METHODS: Three studies of 'herbal' shisha were conducted: (1) samples of 'herbal' shisha products were chemically analysed; (2) 'herbal' and tobacco shisha were burned in a waterpipe smoking machine and mainstream and sidestream smoke analysed by standard methods and (3) the air quality of six waterpipe cafes was assessed by measurement of CO, particulate and nicotine vapour content. RESULTS: We found considerable variation in heavy metal content between the three products sampled, one being particularly high in lead, chromium, nickel and arsenic. A similar pattern emerged for polycyclic aromatic hydrocarbons (PAHs). Smoke emission analyses indicated that the toxic byproducts produced by the combustion of 'herbal' shisha were equivalent to or greater than those produced by tobacco shisha. Our air quality assessment demonstrated that mean PM2.5 levels and CO content were significantly higher in waterpipe establishments than in a casino where cigarette smoking was permitted. Nicotine vapour was detected in one of the waterpipe cafes. CONCLUSIONS: The 'herbal' shisha products tested contained toxic trace metals and PAH levels equivalent to, or in excess of, those found in cigarettes. Their mainstream and sidestream smoke emissions contained carcinogens equivalent to, or in excess of, those of tobacco products. The air in the waterpipe cafes tested was potentially hazardous. These data, in aggregate, suggest that smoking 'herbal' shisha may well be dangerous to health.
Abstract:
We tested and compared the performance of the Roach formula, the Partin tables and three decision-tree-based machine learning (ML) algorithms in identifying N+ prostate cancer (PC). 1,555 cN0 and 50 cN+ PC cases were analyzed. Results were also verified on an independent population of 204 operated cN0 patients with a known pN status (187 pN0, 17 pN1). ML performed better, also when tested on the surgical population, with accuracy, specificity and sensitivity ranging between 48% and 86%, 35% and 91%, and 17% and 79%, respectively. ML potentially allows better prediction of the nodal status of PC, enabling a better tailoring of pelvic irradiation.
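As a toy illustration of the decision-tree family compared above, and of the accuracy, specificity and sensitivity metrics reported, here is a one-level tree (a decision stump) on invented patient features. The variables, threshold rule and labels are synthetic assumptions, not study data.

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical cohort: PSA-like marker and Gleason-like grade -> pN status.
n = 300
psa = rng.uniform(1, 40, n)
gleason = rng.integers(6, 11, n)
pN1 = ((psa > 20) & (gleason >= 8)).astype(int)   # toy ground truth

def best_stump(x, y):
    """One-level decision tree: the threshold maximising training accuracy."""
    thresholds = np.unique(x)
    accs = [((x > t).astype(int) == y).mean() for t in thresholds]
    return thresholds[int(np.argmax(accs))]

t = best_stump(psa, pN1)
pred = (psa > t).astype(int)

# The three metrics reported in the abstract.
tp = ((pred == 1) & (pN1 == 1)).sum()
tn = ((pred == 0) & (pN1 == 0)).sum()
fp = ((pred == 1) & (pN1 == 0)).sum()
fn = ((pred == 0) & (pN1 == 1)).sum()
accuracy = (tp + tn) / n
specificity = tn / (tn + fp)
sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
```

A full tree recursively applies such splits to the resulting subsets; the stump shows why a single-feature rule trades specificity against sensitivity, the trade-off visible in the ranges the abstract reports.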