830 results for Sensing devices
Abstract:
In Pseudomonas aeruginosa, N-acylhomoserine lactone signals regulate the expression of several hundred genes via the transcriptional regulator LasR and, in part, also via the subordinate regulator RhlR. This regulatory network, termed quorum sensing, contributes to the virulence of P. aeruginosa as a pathogen. The fact that two supposed PAO1 wild-type strains from strain collections were found to be defective for LasR function because of independent point mutations in the lasR gene led to the hypothesis that loss of quorum sensing might confer a selective advantage on P. aeruginosa under certain environmental conditions. A convenient plate assay for LasR function was devised, based on the observation that lasR mutants did not grow on adenosine as the sole carbon source because a key degradative enzyme, nucleoside hydrolase (Nuh), is positively controlled by LasR. The wild-type PAO1 and lasR mutants showed similar growth rates when incubated in nutrient yeast broth at pH 6.8 and 37°C with good aeration. However, after termination of growth, during 30 to 54 h of incubation, when the pH rose to ≥ 9, the lasR mutants were significantly more resistant to cell lysis and death than was the wild type. As a consequence, the lasR mutant-to-wild-type ratio increased about 10-fold in mixed cultures incubated for 54 h. In a PAO1 culture, five consecutive cycles of 48 h of incubation sufficed to enrich for about 10% of spontaneous mutants with a Nuh(-) phenotype, and five of these mutants, which were functionally complemented by lasR(+), had mutations in lasR. The observation that, in buffered nutrient yeast broth, the wild type and lasR mutants exhibited similar low tendencies to undergo cell lysis and death suggests that alkaline stress may be a critical factor providing a selective survival advantage to lasR mutants.
Abstract:
Summary: Analysis of agricultural ecosystems and crop yield forecasting by means of remote sensing
Abstract:
Abstract: The occupational health risk involved with handling nanoparticles is the probability that a worker will experience an adverse health effect: this is calculated as a function of the worker's exposure relative to the potential biological hazard of the material. Addressing the risks of nanoparticles therefore requires knowledge of occupational exposure and of the release of nanoparticles into the environment, as well as toxicological data. However, information on exposure is currently not systematically collected; therefore this risk assessment lacks quantitative data. This thesis aimed, first, at creating the fundamental data necessary for a quantitative assessment and, second, at evaluating methods to measure occupational nanoparticle exposure. The first goal was to determine what is being used where in Swiss industries. This was followed by an evaluation of the adequacy of existing measurement methods to assess workplace nanoparticle exposure to complex size distributions and concentration gradients. The study was conceived as a series of methodological evaluations aimed at better understanding nanoparticle measurement devices and methods. It focused on inhalation exposure to airborne particles, as respiration is considered to be the most important entrance pathway for nanoparticles into the body in terms of risk. The targeted survey (pilot study) was conducted as a feasibility study for a later nationwide survey on the handling of nanoparticles and the application of specific protection means in industry. The study consisted of targeted phone interviews with health and safety officers of Swiss companies that were believed to use or produce nanoparticles. This was followed by a representative survey on the level of nanoparticle usage in Switzerland. It was designed based on the results of the pilot study. The study was conducted among a representative selection of clients of the Swiss National Accident Insurance Fund (SUVA), covering about 85% of Swiss production companies. The third part of this thesis focused on the methods to measure nanoparticles. Several pre-studies were conducted studying the limits of commonly used measurement devices in the presence of nanoparticle agglomerates. This focus was chosen because several discussions with users and producers of the measurement devices raised questions about their accuracy in measuring nanoparticle agglomerates and because, at the same time, the two survey studies revealed that such powders are frequently used in industry. The first preparatory experiment focused on the accuracy of the scanning mobility particle sizer (SMPS), which showed an improbable size distribution when measuring powders of nanoparticle agglomerates. Furthermore, the thesis includes a series of smaller experiments that took a closer look at problems encountered with other measurement devices in the presence of nanoparticle agglomerates: condensation particle counters (CPC), the portable aerosol spectrometer (PAS), a device to estimate the aerodynamic diameter, as well as diffusion size classifiers. Some initial feasibility tests of the efficiency of filter-based sampling and subsequent counting of carbon nanotubes (CNT) were conducted last. The pilot study provided a detailed picture of the types and amounts of nanoparticles used and of the knowledge of the health and safety experts in the companies.
Considerable maximal quantities (> 1'000 kg/year per company) of Ag, Al-Ox, Fe-Ox, SiO2, TiO2, and ZnO (mainly first-generation particles) were declared by the contacted Swiss companies. The median quantity of handled nanoparticles, however, was 100 kg/year. The representative survey was conducted by contacting by post a representative selection of 1'626 SUVA clients (Swiss National Accident Insurance Fund). It allowed estimation of the number of companies and workers dealing with nanoparticles in Switzerland. The extrapolation from the surveyed companies to all companies of the Swiss production sector suggested that 1'309 workers (95% confidence interval 1'073 to 1'545) of the Swiss production sector are potentially exposed to nanoparticles in 586 companies (145 to 1'027). These numbers correspond to 0.08% (0.06% to 0.09%) of all workers and to 0.6% (0.2% to 1.1%) of companies in the Swiss production sector. To measure airborne concentrations of sub-micrometre-sized particles, a few well-known methods exist. However, it was unclear how well the different instruments perform in the presence of the often quite large agglomerates of nanostructured materials. The evaluation of devices and methods therefore focused on nanoparticle agglomerate powders. It allowed the identification of the following potential sources of inaccurate measurements at workplaces with considerably high concentrations of airborne agglomerates:
- A standard SMPS showed bimodal particle size distributions when measuring large nanoparticle agglomerates.
- Differences in the range of a factor of a thousand were found between diffusion size classifiers and CPC/SMPS.
- The agreement between CPC/SMPS and the portable aerosol spectrometer (PAS) was much better, but depending on the concentration, size, or type of the powders measured, the differences could still reach an order of magnitude.
- Specific difficulties and uncertainties in the assessment of workplaces were identified: background particles can interact with particles created by a process, which makes the handling of the background concentration difficult.
- Electric motors produce high numbers of nanoparticles and confound the measurement of the process-related exposure.
Conclusion: The surveys showed that nanoparticle applications exist in many industrial sectors in Switzerland and that some companies already use high quantities of them. The representative survey demonstrated a low prevalence of nanoparticle usage in most branches of Swiss industry and led to the conclusion that the introduction of applications using nanoparticles (especially outside industrial chemistry) is only beginning. Even though the number of potentially exposed workers was reportedly rather small, it nevertheless underscores the need for exposure assessments. Understanding exposure and how to measure it correctly is very important because the potential health effects of nanomaterials are not yet fully understood. The evaluation showed that many devices and methods for measuring nanoparticles need to be validated for nanoparticle agglomerates before large exposure assessment studies can begin.
Summary: The occupational health risk of nanoparticles is the probability that a worker suffers an adverse health effect when exposed to this material; it is usually calculated as the product of hazard and exposure. A thorough assessment of the possible risks of nanomaterials therefore requires information on the release of such materials into the environment on the one hand, and on the exposure of workers on the other. Much of this information is not yet collected systematically and is therefore missing from risk analyses. The aim of this doctoral thesis was to create the basis for a quantitative estimate of occupational exposure to nanoparticles and to evaluate the methods needed to measure such exposure. The study was to investigate to what extent nanoparticles are already used in Swiss industry, how many workers potentially come into contact with them, and whether the available measurement technology is adequate for the necessary workplace exposure measurements. The study focused on exposure to airborne particles, because inhalation is regarded as the main entry route for particles into the body. The thesis builds on three phases: a qualitative survey (pilot study), a representative Swiss survey, and several technical studies aimed at a specific understanding of the possibilities and limits of individual measurement devices and techniques. The qualitative telephone survey was conducted as a preliminary study for a national, representative survey of Swiss industry. It aimed at information on the occurrence of nanoparticles and the protective measures applied. The study consisted of targeted telephone interviews with occupational health and safety specialists of Swiss companies. The companies were selected on the basis of publicly available information indicating that they handle nanoparticles. The second part of the thesis was the representative study evaluating the prevalence of nanoparticle applications in Swiss industry. The study built on information from the pilot study and was carried out with a representative selection of companies insured by the Swiss National Accident Insurance Fund (SUVA); the majority of Swiss companies in the industrial sector were thereby covered. The third part of the thesis focused on the methodology for measuring nanoparticles. Several preliminary studies were carried out to explore the limits of commonly used nanoparticle measurement devices when they have to measure larger quantities of nanoparticle agglomerates. This focus was chosen for two reasons: several discussions with users and producers of the measurement devices suggested a weakness there, raising doubts about the accuracy of the instruments, and the two surveys showed that such nanoparticle agglomerates are frequently handled. A first preliminary study addressed the accuracy of the scanning mobility particle sizer (SMPS); in the presence of nanoparticle agglomerates this instrument reported an implausible bimodal particle size distribution. A series of short experiments followed, focusing on other measurement devices and their problems when measuring nanoparticle agglomerates: the condensation particle counter (CPC), the portable aerosol spectrometer (PAS), a device for estimating the aerodynamic diameter of particles, and the diffusion size classifier were tested.
Finally, some initial feasibility tests were carried out to determine the efficiency of filter-based sampling of airborne carbon nanotubes (CNT). The pilot study provided a detailed picture of the types and quantities of nanoparticles used in Swiss companies and documented the state of knowledge of the interviewed health and safety specialists. The following types of nanoparticles were reported by the contacted companies in maximal quantities (> 1'000 kg per year per company): Ag, Al-Ox, Fe-Ox, SiO2, TiO2, and ZnO (mainly first-generation nanoparticles). The quantities of nanoparticles used varied widely, with a median of 100 kg per year. In the quantitative questionnaire study, 1'626 companies, all clients of the Swiss National Accident Insurance Fund (SUVA), were contacted by post. The results of the survey allowed an estimate of the number of companies and workers handling nanoparticles in Switzerland. The extrapolation to the Swiss industrial sector gave the following picture: in 586 companies (95% confidence interval: 145 to 1'027 companies), 1'309 workers are potentially exposed to nanoparticles (95% CI: 1'073 to 1'545). These figures correspond to 0.6% of Swiss companies (95% CI: 0.2% to 1.1%) and 0.08% of the workforce (95% CI: 0.06% to 0.09%). Some well-established technologies exist for measuring the airborne concentration of sub-micrometre particles. However, it is doubtful to what extent these technologies can also be used to measure engineered nanoparticles. For this reason, the preparatory studies for the workplace assessments focused on measuring powders containing nanoparticle agglomerates. They allowed the identification of the following possible sources of erroneous measurements at workplaces with elevated airborne concentrations of nanoparticle agglomerates:
- A standard SMPS showed an implausible bimodal particle size distribution when measuring larger nanoparticle agglomerates.
- Large differences, in the range of a factor of a thousand, were found between a diffusion size classifier and several CPCs (and the SMPS).
- The differences between CPC/SMPS and the PAS were smaller, but depending on the size or type of the powder measured they could still amount to an order of magnitude.
- Specific difficulties and uncertainties in workplace measurements were identified: background particles can interact with particles released during a work process, which makes a correct treatment of the background particle concentration in the measurement data difficult.
- Electric motors produce large numbers of nanoparticles and can thus confound the measurement of the process-related exposure.
Conclusion: The surveys showed that nanoparticles are already a reality in Swiss industry and that some companies already use large quantities of them. The representative survey, however, put this finding into perspective by showing that the number of such companies across Swiss industry as a whole is relatively small. In most branches (especially outside the chemical industry) few or no applications were found, suggesting that the introduction of this new technology is only at the beginning of its development. Even though the number of potentially exposed workers is still relatively small, the study nevertheless underscores the need for exposure measurements at these workplaces. Knowledge of the exposure, and of how to measure it correctly, is very important, especially because the possible health effects are not yet fully understood. The evaluation of several devices and methods showed, however, that there is still a need for validation: before larger measurement studies can be carried out, the devices and methods must be validated for use with nanoparticle agglomerates.
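The survey extrapolation with its 95% confidence intervals lends itself to a small worked example. The Python sketch below is not taken from the thesis; it only illustrates, under simplified assumptions (simple random sampling, normal approximation), how a count observed in a sample of companies might be scaled to a whole sector with a 95% interval. All numbers and names are hypothetical.

```python
import math

def extrapolate_count(sample_hits, sample_size, population_size, z=1.96):
    """Scale a count of 'positive' companies in a simple random sample up to
    the population, with a normal-approximation 95% confidence interval.
    Illustrative only; the thesis's survey design was more involved."""
    p_hat = sample_hits / sample_size
    se = math.sqrt(p_hat * (1 - p_hat) / sample_size)
    estimate = p_hat * population_size
    lower = max(0.0, (p_hat - z * se) * population_size)
    upper = (p_hat + z * se) * population_size
    return estimate, lower, upper

# Hypothetical example: 12 of 1626 surveyed companies report nanoparticle use,
# scaled to an assumed 100,000 companies in the production sector.
print(extrapolate_count(sample_hits=12, sample_size=1626, population_size=100_000))
```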
Abstract:
Calcium has a pivotal role in biological functions, and serum calcium levels have been associated with numerous disorders of bone and mineral metabolism, as well as with cardiovascular mortality. Here we report results from a genome-wide association study of serum calcium, integrating data from four independent cohorts including a total of 12,865 individuals of European and Indian Asian descent. Our meta-analysis shows that serum calcium is associated with SNPs in or near the calcium-sensing receptor (CASR) gene on 3q13. The top hit with a p-value of 6.3 × 10^-37 is rs1801725, a missense variant, explaining 1.26% of the variance in serum calcium. This SNP had the strongest association in individuals of European descent, while for individuals of Indian Asian descent the top hit was rs17251221 (p = 1.1 × 10^-21), a SNP in strong linkage disequilibrium with rs1801725. The strongest locus in CASR was shown to replicate in an independent Icelandic cohort of 4,126 individuals (p = 1.02 × 10^-4). This genome-wide meta-analysis shows that common CASR variants modulate serum calcium levels in the adult general population, which confirms previous results in some candidate gene studies of the CASR locus. This study highlights the key role of CASR in calcium regulation.
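The pooling of four cohorts described above can be illustrated with the standard fixed-effect (inverse-variance) meta-analysis. The sketch below is a generic textbook formulation in Python, not the study's pipeline; the per-cohort effect estimates are invented for illustration.

```python
import math

def fixed_effect_meta(betas, ses):
    """Inverse-variance weighted (fixed-effect) meta-analysis.
    Returns the pooled effect, its standard error, and a two-sided p-value."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    z = pooled / pooled_se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value from the normal distribution
    return pooled, pooled_se, p

# Invented per-cohort estimates of a SNP's effect on serum calcium (per-allele):
betas = [0.060, 0.055, 0.048, 0.070]
ses = [0.010, 0.012, 0.015, 0.020]
print(fixed_effect_meta(betas, ses))
```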
Abstract:
A mathematical model that describes the behavior of low-resolution Fresnel lenses encoded in any low-resolution device (e.g., a spatial light modulator) is developed. The effects of low-resolution codification, such as the appearance of new secondary lenses, are studied for a general case. General expressions for the phase of these lenses are developed, showing that each lens behaves as if it were encoded through all pixels of the low-resolution device. Simple expressions for the light distribution in the focal plane and its dependence on the encoded focal length are developed and commented on in detail. For a given codification device an optimum focal length is found for best lens performance. An optimization method for codification of a single lens with a short focal length is proposed.
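As a rough numerical companion to this abstract (not the paper's model), the sketch below builds the wrapped quadratic phase of a thin lens on a pixelated device and evaluates the classic Nyquist bound on the encodable focal length; the device parameters are arbitrary assumptions. Encoding a focal length shorter than this bound under-samples the outer lens zones, which is the aliasing that gives rise to the secondary lenses discussed in the abstract.

```python
import numpy as np

# Minimal numerical sketch (assumed parameters, for illustration only).
N = 256              # pixels per side of the device
pitch = 20e-6        # pixel pitch [m]
wavelength = 633e-9  # [m]

# The local spatial frequency of the lens phase grows linearly with radius,
# reaching r_max / (wavelength * f) at the aperture edge.  Requiring that this
# stay below the device Nyquist frequency 1 / (2 * pitch) gives the shortest
# focal length the device can encode without aliasing:
r_max = N * pitch / 2
f_min = 2 * r_max * pitch / wavelength      # = N * pitch**2 / wavelength
print(f"shortest alias-free focal length: {f_min:.3f} m")

# Wrapped (mod 2*pi) thin-lens phase sampled at the pixel centres for an
# encoded focal length f >= f_min:
f = 2 * f_min
x = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(x, x)
phase = np.mod(-np.pi * (X**2 + Y**2) / (wavelength * f), 2 * np.pi)
```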
Abstract:
This brochure explains Iowa's laws concerning the use of cell phones and other electronic communication devices while driving.
Abstract:
Summary: Following recent technological advances, digital image archives have grown in quality and quantity to an unprecedented extent. Despite the enormous possibilities they offer, these advances raise new questions about the processing of the resulting masses of data. This question is at the core of this thesis: the problems of processing digital information at very high spatial and/or spectral resolution are addressed with statistical learning approaches, namely kernel methods. The thesis studies image classification problems, that is, the categorization of pixels into a reduced number of classes reflecting the spectral and contextual properties of the objects they represent. The emphasis is on the efficiency of the algorithms as well as on their simplicity, so as to increase their potential for adoption by users. Moreover, the challenge of this thesis is to remain close to the concrete problems of satellite image users without losing sight of the interest of the proposed methods for the machine learning community from which they originate. In this sense, the work is deliberately transdisciplinary, maintaining a strong link between the two fields in all the developments proposed. Four models are proposed. The first addresses the problem of high dimensionality and data redundancy with a model that optimizes classification performance while adapting to the particularities of the image; this is made possible by a ranking of the variables (the bands) that is optimized together with the base model, so that only the variables important for solving the problem are used by the classifier. The scarcity of labelled information and the uncertainty about its relevance to the problem motivate the next two models, based respectively on active learning and semi-supervised methods: the former improves the quality of a training set through direct interaction between the user and the machine, while the latter uses unlabelled pixels to improve the description of the available data and the robustness of the model. Finally, the last model considers the more theoretical question of structure among the outputs: the integration of this source of information, never before considered in remote sensing, opens new research challenges. Advanced kernel methods for remote sensing image classification. Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009. Abstract: The technical developments in recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are available to users. However, even if these advances open more and more possibilities in the use of digital imagery, they also raise several problems of storage and processing. The latter is considered in this thesis: the processing of very high spatial and spectral resolution images is treated with approaches based on data-driven algorithms relying on kernel methods. In particular, the problem of image classification, i.e., the categorization of the image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented.
The emphasis is on algorithmic efficiency and on the simplicity of the proposed approaches, to avoid overly complex models that users would not adopt. The major challenge of the thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the models proposed have been developed keeping in mind the need for such a synergy. Four models are proposed. First, an adaptive model learning the relevant image features is proposed to solve the problem of high dimensionality and collinearity of the image features; this model automatically provides an accurate classifier and a ranking of the relevance of the single features. The scarcity and unreliability of labeled information are the common root of the second and third models: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase the robustness and quality of the description of the data. Both solutions have been explored, resulting in two methodological contributions based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs is considered in the last model, which, by integrating output similarity into the model, opens new challenges and opportunities for remote sensing image processing.
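One of the recurring themes above, building a training set by interaction between the user and the machine, corresponds to the generic active-learning loop sketched below. This is a minimal illustration with scikit-learn on synthetic data, not the thesis's algorithm; the RBF-kernel SVM, the least-confident uncertainty-sampling criterion, and all parameters are assumptions made for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Synthetic stand-in for pixel spectra: 2000 "pixels", 10 "bands", 3 classes.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

rng = np.random.default_rng(0)
labeled = list(rng.choice(len(X), size=20, replace=False))   # tiny initial training set
pool = [i for i in range(len(X)) if i not in labeled]

for iteration in range(10):
    clf = SVC(kernel="rbf", gamma="scale", probability=True, random_state=0)
    clf.fit(X[labeled], y[labeled])

    # Uncertainty sampling: query the pool pixels whose most probable class
    # has the lowest posterior probability (least-confident criterion).
    proba = clf.predict_proba(X[pool])
    uncertainty = 1.0 - proba.max(axis=1)
    query = [pool[i] for i in np.argsort(uncertainty)[-5:]]   # 5 queries per round

    labeled.extend(query)                 # an oracle/user would label these pixels
    pool = [i for i in pool if i not in query]
    print(f"round {iteration}: training size {len(labeled)}, "
          f"pool accuracy {clf.score(X[pool], y[pool]):.3f}")
```

In a real remote-sensing setting the queried pixels would be shown to the analyst for labelling rather than taken from a held-out ground truth, which is the interaction loop the abstract describes.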
Abstract:
BACKGROUND: Suction-based wound healing devices with open-pore foam interfaces are widely used to treat complex tissue defects. The impact of changes in physicochemical parameters of the wound interfaces has not been investigated. METHODS: Full-thickness wounds in diabetic mice were treated with occlusive dressing or a suction device with a polyurethane foam interface varying in mean pore size diameter. Wound surface deformation on day 2 was measured on fixed tissues. Histologic cross-sections were analyzed for granulation tissue thickness (hematoxylin and eosin), myofibroblast density (α-smooth muscle actin), blood vessel density (platelet endothelial cell adhesion molecule-1), and cell proliferation (Ki67) on day 7. RESULTS: Polyurethane foam-induced wound surface deformation increased with polyurethane foam pore diameter: 15 percent (small pore size), 60 percent (medium pore size), and 150 percent (large pore size). The extent of wound strain correlated with granulation tissue thickness that increased 1.7-fold in small pore size foam-treated wounds, 2.5-fold in medium pore size foam-treated wounds, and 4.9-fold in large pore size foam-treated wounds (p < 0.05) compared with wounds treated with an occlusive dressing. All polyurethane foams increased the number of myofibroblasts over occlusive dressing, with maximal presence in large pore size foam-treated wounds compared with all other groups (p < 0.05). CONCLUSIONS: The pore size of the interface material of suction devices has a significant impact on the wound healing response. Larger pores increased wound surface strain, tissue growth, and transformation of contractile cells. Modification of the pore size is a powerful approach for meeting biological needs of specific wounds.
Abstract:
Acid-sensing ion channels (ASICs) are non-voltage-gated sodium channels activated by extracellular acidification. They are widely expressed in neurons of the central and peripheral nervous system. ASICs have a role in learning, in the expression of fear, in neuronal death after cerebral ischemia, and in pain sensation. Tissue damage leads to the release of inflammatory mediators. There is a subpopulation of sensory neurons that are able to release the neuropeptides calcitonin gene-related peptide (CGRP) and substance P (SP). Neurogenic inflammation refers to the process whereby peripheral release of the neuropeptides CGRP and SP induces vasodilation and extravasation of plasma proteins, respectively. Our laboratory has previously shown that calcium-permeable homomeric ASIC1a channels are present in a majority of CGRP- or SP-expressing small-diameter sensory neurons. In the first part of my thesis, we tested the hypothesis that a local acidification can produce an ASIC-mediated, calcium-dependent neuropeptide secretion. We first verified the co-expression of ASICs and CGRP/SP using immunochemistry and in-situ hybridization on dissociated rat dorsal root ganglion (DRG) neurons. We found that most CGRP/SP-positive neurons also expressed ASIC1a and ASIC3 subunits. Calcium imaging experiments with the Fura-2 dye showed that an extracellular acidification can induce an increase of the intracellular Ca2+ concentration, which is essential for secretion. This increase of intracellular Ca2+ concentration is, at least in some cells, ASIC-dependent, as it can be prevented by amiloride, an ASIC antagonist, and by psalmotoxin (PcTx1), a specific ASIC1a antagonist. We identified a sub-population of neurons whose acid-induced Ca2+ entry was completely abolished by amiloride, an amiloride-resistant population that does not express ASICs but rather another acid-sensing channel, possibly transient receptor potential vanilloid 1 (TRPV1), and a population expressing both H+-gated channel types. Voltage-gated calcium channels (Cavs) may also mediate Ca2+ entry. Co-application of the Cav inhibitors ω-conotoxin MVIIC, mibefradil, and nifedipine reduced the Ca2+ increase in neurons expressing ASICs during an acidification to pH 6. This indicates that ASICs can depolarise the neuron and activate Cavs. Homomeric ASIC1a channels are Ca2+-permeable and allow a direct entry of Ca2+ into the cell; other ASICs mediate an indirect entry of Ca2+ by inducing a membrane depolarisation that activates Cavs. We showed with a secretion assay that CGRP secretion can be induced by extracellular acidification in cultured rat DRG neurons. Amiloride and PcTx1 were not able to inhibit the secretion at acidic pH, but BCTC, a TRPV1 inhibitor, was able to decrease the secretion induced by an extracellular acidification in our in vitro secretion assay. In conclusion, these results show that in DRG neurons a mild extracellular acidification can induce a calcium-dependent neuropeptide secretion. Even if our data show that ASICs can mediate an increase of intracellular Ca2+ concentration, this appears not to be sufficient to trigger neuropeptide secretion. TRPV1, a calcium channel whose activation induces a sustained current, in contrast to ASICs, played a predominant role in neurosecretion under our experimental conditions. In the second part of my thesis, we focused on the role of ASICs in neuropathic pain.
We used the spared nerve injury (SNI) model, which consists of a nerve injury that induces symptoms of neuropathic pain such as mechanical allodynia. We had previously shown that the SNI model modifies ASIC currents in dissociated rat DRG neurons. We hypothesized that ASICs could play a role in the development of mechanical allodynia. The SNI model was applied to ASIC1a, ASIC2, and ASIC3 knock-out mice and wild-type littermates. We measured mechanical allodynia in these mice with calibrated von Frey filaments. There were no differences between the wild-type and the ASIC1a or ASIC2 knock-out mice. ASIC3-null mice were less sensitive than wild-type mice at 21 days after SNI, indicating a role for ASIC3. Finally, to investigate other possible roles of ASICs in the perception of the environment, we measured baseline heat responses. We used two different models, the tail-flick model and the hot-plate model. ASIC1a-null mice showed increased thermal allodynia behaviour in the hot-plate test at three different temperatures (49, 52, 55°C) compared with their wild-type littermates. In contrast, ASIC2-null mice showed reduced thermal allodynia behaviour in the hot-plate test compared with their wild-type littermates at the same three temperatures. We conclude that ASIC1a and ASIC2 in mice can play a role in temperature sensing. It is currently not understood how ASICs are involved in temperature sensing and what the reason is for the opposite effects in the two knock-out models.
Abstract:
Iowa Traffic Control Devices and Pavement Markings: A Manual for Cities and Counties has been developed to provide state and local transportation agencies with suggestions and examples related to traffic control devices and pavement markings. Both rural and urban applications are included. The primary source of information for this document is the Manual on Uniform Traffic Control Devices (MUTCD), but many additional references have also been used. A complete listing of these is included in the appendix to this manual, and the reader is invited to consult these references for more in-depth information. The contents of this manual are not intended to represent standard practice or to imply legal requirements for installation in any particular manner. This document should be used as a supplement to the MUTCD, not as a substitute for any requirements contained therein. Engineering judgement should be applied to all decisions regarding traffic control devices and pavement markings. All references to the MUTCD in this manual apply to the millennium edition. The reader should be aware that many millennium revisions are allowed phase-in periods by the Federal Highway Administration (FHWA), ranging from two to ten years. These extended compliance periods should be considered when making decisions regarding traffic control devices and pavement markings. A new addition to the MUTCD, Part 5, “Traffic Control Devices for Low-Volume Roads,” also contains valuable recommendations for signing and marking low-volume roads. This manual is presented in an easy-to-use three-ring format. Topics included in the complete manual may not apply to all jurisdictions and can easily be removed or modified as desired. Desired millennium MUTCD sections may be added for quick reference using the divider at the end of this document. Contents may also be available on CD-ROM in the future.
Abstract:
The objective of this project was to promote and facilitate analysis and evaluation of the impacts of road construction activities in Smart Work Zone Deployment Initiative (SWZDI) states. The two primary objectives of this project were to assess urban freeway work-zone impacts through use of remote monitoring devices, such as radar-based traffic sensors, traffic cameras, and traffic signal loop detectors, and evaluate the effectiveness of using these devices for such a purpose. Two high-volume suburban freeway work zones, located on Interstate 35/80 (I-35/I-80) through the Des Moines, Iowa metropolitan area, were evaluated at the request of the Iowa Department of Transportation (DOT).
Abstract:
Transportation agencies in Iowa are responsible for a significant public investment with the installation and maintenance of traffic control devices and pavement markings. Included in this investment are thousands of signs and other inventory items, equipment, facilities, and staff. The proper application of traffic control devices and pavement markings is critical to public safety on streets and highways, and local governments have a prescribed responsibility under the Code of Iowa to properly manage these assets. This research report addresses current traffic control and pavement marking application, maintenance, and management in Iowa.
Abstract:
The final decision on cell fate, survival versus cell death, relies on complex and tightly regulated checkpoint mechanisms. The caspase-3 protease is a predominant player in the execution of apoptosis. However, recent progress has shown that this protease paradoxically can also protect cells from death. Here, we discuss the underappreciated, protective, and prosurvival role of caspase-3 and detail the evidence showing that caspase-3, through differential processing of p120 Ras GTPase-activating protein (RasGAP), can modulate a given set of proteins to generate, depending on the intensity of the input signals, opposite outcomes (survival vs death).
Abstract:
The primary goal of this project is to demonstrate the accuracy and utility of a freezing drizzle algorithm that can be implemented on roadway environmental sensing systems (ESSs). The problems caused by freezing precipitation range from simple traffic delays to major accidents involving fatalities. Freezing drizzle can also have economic impacts on communities through lost work hours, vehicular damage, and downed power lines. Transportation agencies have means to perform preventive and reactive treatments of roadways, but freezing drizzle can be difficult to forecast accurately or even to detect, because weather radar and surface observation networks observe these conditions poorly. The detection of freezing precipitation is problematic and requires special instrumentation and analysis. The Federal Aviation Administration (FAA) development of aircraft anti-icing and deicing technologies has led to a freezing drizzle algorithm that utilizes air temperature data and a specialized sensor capable of detecting ice accretion. However, at present, roadway ESSs are not capable of reporting freezing drizzle. This study investigates the use of the methods developed for the FAA and the National Weather Service (NWS) within a roadway environment to detect the occurrence of freezing drizzle using a combination of icing detection equipment and available ESS sensors. The work performed in this study incorporated the algorithm initially developed, and further modified, for the FAA for aircraft icing. The freezing drizzle algorithm developed for the FAA was applied using data from standard roadway ESSs. The work performed in this study lays the foundation for addressing the central question of interest to winter maintenance professionals: whether roadside freezing precipitation detection (e.g., icing detection) sensors can be used to determine the occurrence of pavement icing during freezing precipitation events and the rates at which it occurs.
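The abstract does not spell out the decision rule, but the kind of sensor fusion it describes, combining an ice-accretion signal with standard ESS observations, can be sketched as a simple flag. The Python function below is purely illustrative: the inputs and thresholds are hypothetical and do not reproduce the FAA/NWS algorithm referenced above.

```python
def freezing_drizzle_flag(air_temp_c, ice_accretion_rate_mm_hr,
                          precip_detected, radar_echo):
    """Illustrative decision rule only; thresholds and inputs are hypothetical
    and do not reproduce the FAA/NWS freezing drizzle algorithm.

    Flags freezing drizzle when ice is accreting at sub-freezing temperature
    while precipitation is light enough to be missed by radar, which is the
    hard-to-detect situation described in the abstract."""
    subfreezing = air_temp_c <= 0.0
    accreting = ice_accretion_rate_mm_hr > 0.0
    drizzle_like = precip_detected and not radar_echo
    return subfreezing and accreting and drizzle_like

# Example reading from a roadway ESS record (values are made up):
print(freezing_drizzle_flag(air_temp_c=-1.5, ice_accretion_rate_mm_hr=0.2,
                            precip_detected=True, radar_echo=False))
```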