Abstract:
Six gases (N(CH3)3, NH2OH, CF3COOH, HCl, NO2, O3) were selected to probe the surfaces of seven combustion aerosols (amorphous carbon, flame soot) and three types of TiO2 nanoparticles using heterogeneous (gas-surface) reactions. The uptake of the probe gases to saturation was measured under molecular flow conditions in a Knudsen flow reactor and expressed as a density of surface functional groups on a particular aerosol, namely acidic (carboxylic) and basic (conjugated oxides such as pyrones, N-heterocycles) sites, carbonyl (R1-C(O)-R2) groups, and oxidizable (olefinic, -OH) groups. The limit of detection was generally well below 1% of a formal monolayer of adsorbed probe gas. With few exceptions, the investigated aerosol samples interacted with all probe gases, which points to the coexistence of different functional groups, such as acidic and basic groups, on the same aerosol surface. The carbonaceous particles generally displayed significant differences in surface group density: Printex 60 amorphous carbon had the lowest density of surface functional groups throughout, whereas Diesel soot recovered from a Diesel particulate filter had the largest. The presence of basic oxides on carbonaceous aerosol particles was inferred from the ratio of the uptakes of CF3COOH and HCl, owing to the greater stability of the acetate counterion compared with the chloride counterion in the resulting pyrylium salt. The soots generated from both a rich and a lean hexane diffusion flame had a large density of oxidizable groups, similar to amorphous carbon FS 101. Among the three studied TiO2 nanoparticles, TiO2 15 had the lowest density of functional groups for all probe gases, despite having the smallest primary particles. The technique also enabled the measurement of the uptake probability of the probe gases on the various supported aerosol samples.
The initial uptake probability, γ0, of the probe gas on the supported nanoparticles differed significantly among the investigated aerosol samples but, as expected, was roughly correlated with the density of surface groups.
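As a rough illustration of how a saturation uptake translates into a surface functional-group density in this kind of Knudsen-reactor experiment, the sketch below uses entirely hypothetical numbers (the sample mass, specific surface area, and uptake are not taken from the study):

```python
# Convert an integrated probe-gas uptake into a surface functional-group
# density, as done conceptually in Knudsen flow reactor studies.
# All numerical inputs below are hypothetical illustrations.

AVOGADRO = 6.022e23  # molecules per mole

def surface_group_density(moles_taken_up, sample_mass_g, ssa_m2_per_g):
    """Density of probe-accessible functional groups (groups per cm^2)."""
    total_area_cm2 = sample_mass_g * ssa_m2_per_g * 1e4  # m^2 -> cm^2
    return moles_taken_up * AVOGADRO / total_area_cm2

# Hypothetical example: 2.0e-8 mol of probe gas saturates 50 mg of soot
# with a specific surface area of 100 m^2/g.
density = surface_group_density(2.0e-8, 0.050, 100.0)
```

A formal monolayer is on the order of 1e15 sites/cm², so a density near 2e11 groups/cm² corresponds to well under 1% of a monolayer, the scale of the detection limits quoted above.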
Abstract:
Midazolam is a widely accepted probe for phenotyping cytochrome P450 3A. A gas chromatography-mass spectrometry (GC-MS) negative chemical ionization method is presented that allows the measurement of very low levels of midazolam (MID), 1-OH-midazolam (1OHMID), and 4-OH-midazolam (4OHMID) in plasma after derivatization with the reagent N-tert-butyldimethylsilyl-N-methyltrifluoroacetamide. The standard curves were linear over a working range of 20 pg/ml to 5 ng/ml for the three compounds, with mean coefficients of correlation of the calibration curves (n = 6) of 0.999 for MID and 1OHMID and 1.000 for 4OHMID. The mean recoveries, measured at 100 pg/ml, 500 pg/ml, and 2 ng/ml, ranged from 76 to 87% for MID, from 76 to 99% for 1OHMID, from 68 to 84% for 4OHMID, and from 82 to 109% for N-ethyloxazepam (internal standard). Intra-day (n = 7) and inter-day (n = 8) coefficients of variation, determined at three concentrations, ranged from 1 to 8% for MID, from 2 to 13% for 1OHMID, and from 1 to 14% for 4OHMID. Accuracy (percent of theoretical concentration) was within ±8% for MID and 1OHMID, within ±9% for 4OHMID at 500 pg/ml and 2 ng/ml, and within ±28% for 4OHMID at 100 pg/ml. The limit of quantitation was 10 pg/ml for all three compounds. This method can be used for phenotyping cytochrome P450 3A in humans following administration of a very low oral dose of midazolam (75 μg), without central nervous system side effects.
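The linearity and accuracy figures above come from standard calibration practice. The following sketch (with made-up concentrations and detector responses, not the published data) shows how a calibration line is fitted by least squares and a quality-control sample back-calculated:

```python
# Fit a linear calibration curve and back-calculate a QC sample's accuracy.
# Concentrations (pg/mL) and peak-area ratios below are hypothetical.

def linfit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

# Hypothetical calibration: concentration vs. analyte/IS peak-area ratio
conc = [20, 100, 500, 1000, 2000, 5000]
resp = [0.010, 0.051, 0.249, 0.502, 1.001, 2.499]

slope, intercept = linfit(conc, resp)

# Back-calculate a 500 pg/mL QC sample and express accuracy as % of nominal
qc_resp, qc_nominal = 0.250, 500.0
qc_found = (qc_resp - intercept) / slope
accuracy_pct = 100.0 * qc_found / qc_nominal
```

The "percent theoretical concentration" quoted in the abstract is exactly this kind of back-calculated accuracy, evaluated at several QC levels.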
Abstract:
A firm's competitive advantage can arise from internal resources as well as from an interfirm network. This dissertation investigates the competitive advantage of a firm involved in an innovation network by integrating strategic management theory and social network theory. It develops theory and provides empirical evidence illustrating how a networked firm enables network value and appropriates that value optimally, according to its strategic purpose. The four inter-related essays in this dissertation provide a framework that sheds light on extracting value from an innovation network by managing and designing the network proactively. The first essay reviews research in social network theory and knowledge transfer management, and identifies the crucial factors of innovation network configuration for a firm's learning performance or innovation output. The findings suggest that network structure, network relationships, and network position all affect a firm's performance. Although the previous literature reveals disagreements about the impact of dense versus sparse structures, and of strong versus weak ties, case evidence from Chinese software companies shows that dense and strong connections with partners are positively associated with firm performance. The second essay is a theoretical essay that illustrates the limitations of social network theory in explaining the source of network value and offers a new theoretical model that applies the resource-based view to network environments. It suggests that network configurations, such as network structure, network relationships, and network position, can be considered important network resources. In addition, this essay introduces the concept of network capability and suggests that four types of network capabilities play an important role in unlocking the potential value of network resources and in determining the distribution of network rents between partners.
This essay also highlights the contingent effects of network capability on a firm's innovation output and explains how the different impacts of network capability depend on a firm's strategic choices. This new theoretical model was pre-tested with a case study of the Chinese software industry, which enhances its internal validity. The third essay addresses what impact network capability has on firm innovation performance and what the antecedents of network capability are. It employs structural equation modelling on a sample of 211 Chinese high-tech firms. It develops a measurement of network capability and reveals that networked firms handle cooperation and coordination with partners at different levels depending on their level of network capability. The empirical results also suggest that IT maturity, openness of culture, the management system involved, and experience with network activities are antecedents of network capability. Furthermore, a two-group analysis of the role of international partners shows that when there is a gap in culture and norms with foreign partners, a firm must mobilize more resources and effort to improve its performance within its innovation network. The fourth essay addresses the way in which network capabilities influence firm innovation performance. Using hierarchical multiple regression with data from Chinese high-tech firms, the findings suggest that knowledge transfer partially mediates the relationship between network capabilities and innovation performance. The findings also reveal that the impact of network capabilities varies with the environment and with the strategic choice the firm has made: exploration or exploitation.
Network constructing capability has a greater positive impact on, and contributes more to, innovation performance than network operating capability in an exploration network, whereas network operating capability matters more than network constructing capability for innovative firms in an exploitation network. These findings therefore highlight that a firm can proactively shape its innovation network to its benefit, but in doing so it should adjust its focus and efforts in accordance with its innovation purpose and strategic orientation.
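The partial mediation reported in the fourth essay (capability → knowledge transfer → innovation) can be illustrated with a small simulation. Everything below is synthetic: the variable names and effect sizes are illustrative only, not estimates from the dissertation's data.

```python
# Synthetic sketch of partial-mediation logic: if part (but not all) of the
# capability -> innovation effect flows through knowledge transfer, the
# direct effect (controlling for transfer) is smaller than the total effect.
import random

random.seed(1)
n = 2000
capability = [random.gauss(0, 1) for _ in range(n)]
# a-path: capability raises knowledge transfer
transfer = [0.6 * c + random.gauss(0, 1) for c in capability]
# direct (c') and b paths: both capability and transfer raise innovation
innovation = [0.5 * c + 0.4 * t + random.gauss(0, 1)
              for c, t in zip(capability, transfer)]

def cov(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)

# Total effect: simple regression of innovation on capability
total = cov(capability, innovation) / cov(capability, capability)

# Direct effect: two-predictor regression via 2x2 normal equations
sxx, stt = cov(capability, capability), cov(transfer, transfer)
sxt = cov(capability, transfer)
sxy, sty = cov(capability, innovation), cov(transfer, innovation)
det = sxx * stt - sxt ** 2
direct = (sxy * stt - sty * sxt) / det
# direct < total: part of the effect is carried by knowledge transfer,
# i.e. partial (not full) mediation
```

In the simulation the total effect is roughly 0.74 (0.5 direct plus 0.6 × 0.4 indirect), so the gap between the two slopes is the mediated share.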
Abstract:
The role of busulfan (Bu) metabolites in the adverse events seen during hematopoietic stem cell transplantation and in drug interactions has not been explored, and the lack of established analytical methods limits our understanding in this area. The present work describes a novel gas chromatography-tandem mass spectrometry assay for the analysis of sulfolane (Su) in the plasma of patients receiving high-dose Bu. Su and Bu were extracted from a single 100 μL plasma sample by liquid-liquid extraction, and Bu was separately derivatized with the fluorinated agent 2,3,5,6-tetrafluorothiophenol. Mass spectrometric detection of the analytes was performed in selected reaction monitoring mode on a triple quadrupole instrument after electron impact ionization. Bu and Su were analyzed with separate chromatographic programs lasting 5 min each. The assay for Su was linear over the concentration range of 20-400 ng/mL. The method has satisfactory sensitivity (lower limit of quantification, 20 ng/mL) and precision (relative standard deviation below 15%) at all concentrations tested, with good trueness (100 ± 5%). The method was applied to measure Su in pediatric patients, with samples collected 4 h after the dose 1 infusion (n = 46), before dose 7 (n = 56), and after dose 9 (n = 54). Su was detectable in plasma 4 h after dose 1, and higher levels (mean ± SD, 249.9 ± 123.4 ng/mL) were observed after dose 9. This method may be used in clinical studies investigating the role of Su in the adverse events and drug interactions associated with Bu therapy.
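The precision and trueness criteria quoted above are typically computed from replicate QC measurements. The sketch below shows the standard calculations; the replicate values are hypothetical, not the study's data:

```python
# Precision (relative standard deviation) and trueness (% of nominal)
# from replicate QC measurements; the replicates below are hypothetical.
import statistics

def rsd_pct(values):
    """Relative standard deviation as a percentage of the mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def trueness_pct(values, nominal):
    """Mean measured value expressed as a percentage of the nominal value."""
    return 100.0 * statistics.mean(values) / nominal

# Hypothetical replicates of a 200 ng/mL sulfolane QC sample
qc = [195.0, 204.0, 199.0, 210.0, 192.0, 201.0]
precision = rsd_pct(qc)             # acceptance: below 15%
trueness = trueness_pct(qc, 200.0)  # acceptance: within 100 +/- 5%
```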
Abstract:
A gas chromatographic-mass spectrometric (GC-MS) method has been developed for the determination of trimipramine (TRI), desmethyltrimipramine (DTRI), didesmethyltrimipramine (DDTRI), 2-hydroxytrimipramine (2-OH-TRI), and 2-hydroxydesmethyltrimipramine (2-OH-DTRI). The method includes two derivatization steps, with trifluoroacetic anhydride and N-methyl-N-(tert-butyldimethylsilyl)trifluoroacetamide, and the use of an SE-54 fused-silica capillary column. The limits of quantitation were 2 ng/ml for DTRI and 4 ng/ml for all other substances. In addition, methods were optimized for the hydrolysis of the glucuronic acid-conjugated metabolites. This specific detection method is useful because polymedication is common in clinical practice, and its sensitivity allows its use in single-dose pharmacokinetic studies.
Abstract:
Imaging is increasingly used in forensic medicine. At present, however, the knowledge needed to interpret post-mortem images, and above all post-mortem artifacts, remains limited. The radiological tool most widely used in forensic medicine is multi-detector computed tomography (MDCT). One of its advantages is the detection of gas in the body. This technique is useful for diagnosing gas embolism, but its very high sensitivity makes gas visible even in small quantities. Early experience shows that almost all scanned bodies present gas, especially in the vascular system. The forensic pathologist is therefore confronted with a new problem: distinguishing gas of post-mortem origin from a true gas embolism. To make this distinction, it is essential to study the post-mortem distribution of these gases; no systematic study of this subject had been carried out to date.

We studied the incidence and distribution of post-mortem gas in the blood vessels, bones, subcutaneous tissues, subdural space, and the cranial, thoracic, and abdominal cavities (82 sites in total) in order to identify factors that could distinguish artifactual post-mortem gas from a gas embolism.

MDCT data from 119 cadavers were studied retrospectively. The inclusion criteria were the absence of bodily lesions allowing contamination with outside air, and documentation of the interval between death and the CT scan (e.g., police report, resuscitation protocol, or witness). The presence of gas was assessed semi-quantitatively by two radiologists and coded as follows: grade 0 = no gas; grade 1 = one to a few gas bubbles; grade 2 = structure partially filled with gas; grade 3 = structure completely filled with gas.

Seventy-four of the 119 cadavers presented gas (62.2%), and 56 (75.7%) showed gas in the heart. Gas was detected most frequently in the hepatic parenchyma (40%), the right heart (ventricle 38%, atrium 35%), the inferior vena cava (infrarenal 30%, suprarenal 26%), the hepatic veins (left 26%, middle 29%, right 22%), and the portal spaces (29%). We found that a large amount of putrefaction-related gas in the right heart (grade 3) was associated with collections of gas in the hepatic parenchyma (sensitivity = 100%, specificity = 89.7%). To illustrate our results, we constructed an animation sequence depicting the putrefaction process and the appearance of gas on post-mortem MDCT.

This study is the first to show that the post-mortem appearance of gas follows a specific distribution pattern. The association between intracardiac gas and gas in the hepatic parenchyma could make it possible to distinguish artifactual gas of post-mortem origin from a true gas embolism. This study provides a key for diagnosing death due to cardiac gas embolism on the basis of post-mortem MDCT.

Abstract

Purpose: We investigated the incidence and distribution of post-mortem gas detected with multidetector computed tomography (MDCT) to identify factors that could distinguish artifactual gas from cardiac air embolism.

Material and Methods: MDCT data of 119 cadavers were retrospectively examined. Gas was semi-quantitatively assessed in selected blood vessels, organs, and body spaces (82 sites in total).

Results: Seventy-four of the 119 cadavers displayed gas (62.2%; 95% CI 52.8 to 70.9), and 56 (75.7%) displayed gas in the heart. Most gas was detected in the hepatic parenchyma (40%); the right heart (38% ventricle, 35% atrium), inferior vena cava (30% infrarenally, 26% suprarenally), hepatic veins (26% left, 29% middle, 22% right), and portal spaces (29%). Male cadavers displayed gas more frequently than female cadavers. Gas was detected from 5 to 84 h after death; the amount of gas correlated with the post-mortem interval (rho = 0.719, p < 0.0001), but the interval alone could not reliably predict gas distribution. We found that a large amount of putrefaction-generated gas in the right heart was associated with aggregated gas bubbles in the hepatic parenchyma (sensitivity = 100%, specificity = 89.7%). In contrast, gas in the left heart (sensitivity = 41.7%, specificity = 100%) or in peri-umbilical subcutaneous tissues (sensitivity = 50%, specificity = 96.3%) could not predict gas due to putrefaction.

Conclusion: This study is the first to show that the appearance of post-mortem gas follows a specific distribution pattern. An association between intracardiac gas and hepatic parenchymal gas could distinguish between post-mortem-generated gas and vital air embolism. We propose that this finding provides a key for diagnosing death due to cardiac air embolism.
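Sensitivity and specificity figures like those above come from a 2x2 contingency table. The sketch below uses hypothetical counts, chosen only so the specificity works out near the reported 89.7%; the actual group sizes are not given in the abstract:

```python
# Sensitivity and specificity of a radiological sign from a 2x2 table.
# The counts below are hypothetical, not the study's data.

def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical: all 12 putrefied cadavers show the hepatic-gas sign (no
# false negatives), while 3 of 29 non-putrefied cadavers also show it.
sens, spec = sens_spec(tp=12, fn=0, tn=26, fp=3)
```

With these illustrative counts, sensitivity is 1.0 (100%) and specificity is 26/29 ≈ 0.897 (89.7%).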
Abstract:
Duchenne muscular dystrophy (DMD) is an X-linked genetic disease caused by the absence of the dystrophin protein. Although many novel therapies are under development for DMD, there is currently no cure, and affected individuals are often confined to a wheelchair by their teens and die in their twenties or thirties. DMD is a rare disease (prevalence <5/10,000). Even the largest countries do not have enough affected patients to rigorously assess novel therapies, unravel genetic complexities, and determine patient outcomes. TREAT-NMD is a worldwide network for neuromuscular diseases that provides an infrastructure to support the delivery of promising new therapies for patients. The harmonized implementation of national and ultimately global patient registries has been central to the success of TREAT-NMD. For the DMD registries within TREAT-NMD, individual countries have chosen to collect patient information in the form of standardized patient registries to increase the overall patient population on which clinical outcomes and new technologies can be assessed. The registries comprise more than 13,500 patients from 31 different countries. Here, we describe how the TREAT-NMD national patient registries for DMD were established. We look at their continued growth and assess how successful they have been at fostering collaboration between academia, patient organizations, and industry.
Abstract:
Agricultural workers are exposed to folpet, but biomonitoring data are limited. Phthalimide (PI), phthalamic acid (PAA), and phthalic acid (PA) are the ring metabolites of this fungicide according to animal studies, but they have not previously been measured in human urine as metabolites of folpet (only PA, as a metabolite of phthalates). The objective of this study was thus to develop a reliable gas chromatography-tandem mass spectrometry (GC-MS/MS) method to quantify the sum of the PI, PAA, and PA ring metabolites of folpet in human urine. Briefly, the method consisted of adding p-methylhippuric acid as an internal standard, performing an acid hydrolysis at 100 °C to convert the ring metabolites into PA, purifying the samples by ethyl acetate extraction, and derivatizing with N,O-bis(trimethylsilyl)trifluoroacetamide prior to GC-MS/MS analysis. The method had a detection limit of 60.2 nmol/L (10 ng/mL); it was accurate (mean recovery, 97%), precise (inter- and intra-day relative standard deviations <13%), and linear (R² > 0.98). Validation was conducted using unexposed people's urine spiked at concentrations ranging from 4.0 to 16.1 μmol/L, along with urine samples from volunteers dosed with folpet and from exposed workers. The method proved to be (1) suitable and accurate for determining the kinetic profile of PA equivalents in the urine of volunteers administered folpet orally and dermally and (2) relevant for the biomonitoring of worker exposure.
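The two forms of the detection limit quoted above (10 ng/mL and 60.2 nmol/L) are related through the molar mass of phthalic acid (about 166.13 g/mol). A small sketch of the conversion:

```python
# Convert a concentration from ng/mL to nmol/L using the analyte's molar
# mass; here, phthalic acid at about 166.13 g/mol.

PA_MOLAR_MASS_G_MOL = 166.13

def ng_per_ml_to_nmol_per_l(c_ng_ml, molar_mass_g_mol):
    """ng/mL equals ug/L; dividing by g/mol gives umol/L, x1000 -> nmol/L."""
    return c_ng_ml / molar_mass_g_mol * 1000.0

lod_nmol_l = ng_per_ml_to_nmol_per_l(10.0, PA_MOLAR_MASS_G_MOL)  # ~60.2
```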
Abstract:
A simple method for determining airborne monoethanolamine has been developed. Monoethanolamine determination has traditionally been difficult due to analytical separation problems. Even in recent, sophisticated methods, this difficulty remains the major issue, often resulting in time-consuming sample preparation. Impregnated glass fiber filters were used for sampling. Desorption of monoethanolamine was followed by capillary GC analysis with nitrogen-phosphorus selective detection. Separation was achieved using a column specific for monoethanolamine (35% diphenyl/65% dimethyl polysiloxane). The internal standard was quinoline. No derivatization steps were needed. The calibration range was 0.5-80 μg/mL with good correlation (R² = 0.996). Averaged overall precisions and accuracies were 4.8% and -7.8% for intraday (n = 30), and 10.5% and -5.9% for interday (n = 72). Mean recovery from spiked filters was 92.8% for the intraday variation and 94.1% for the interday variation. Monoethanolamine on stored spiked filters was stable for at least 4 weeks at 5°C. This newly developed method was applied among professional cleaners; air concentrations (n = 4) were 0.42 and 0.17 mg/m³ for personal measurements and 0.23 and 0.43 mg/m³ for stationary measurements. The monoethanolamine air concentration method described here is simple, sensitive, and convenient in terms of both sampling and analysis.
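An air concentration such as those reported is obtained by dividing the mass desorbed from the filter by the sampled air volume, optionally correcting for desorption recovery. The desorbed mass, flow rate, and sampling time below are hypothetical:

```python
# Air concentration (mg/m^3) from an impregnated-filter sample; all input
# values (desorbed mass, recovery, flow rate, duration) are hypothetical.

def air_concentration_mg_m3(mass_ug, recovery, flow_l_min, minutes):
    """mg/m^3 from the recovery-corrected mass and the sampled volume."""
    corrected_mass_mg = (mass_ug / recovery) / 1000.0  # ug -> mg
    volume_m3 = flow_l_min * minutes / 1000.0          # L -> m^3
    return corrected_mass_mg / volume_m3

# Hypothetical personal sample: 50 ug desorbed, 92.8% recovery,
# 1.0 L/min pump running for 120 min.
conc_mg_m3 = air_concentration_mg_m3(50.0, 0.928, 1.0, 120.0)
```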
Abstract:
The motivation for this research arose from the abrupt rise and fall of minicomputers, which were initially used for both industrial automation and business applications owing to their significantly lower cost than their predecessors, the mainframes. Industrial automation later developed its own vertically integrated hardware and software to address the application needs of uninterrupted operation, real-time control, and resilience to harsh environmental conditions. This led to the creation of an independent industry, industrial automation, embodied in PLC, DCS, SCADA, and robot control systems. This industry today employs over 200'000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communications networks and hand-held devices. Already in the 1990s it was foreseen that IT and communications would merge into a single information and communication technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC evolved in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption system-on-chip (SoC) architecture. Unlike the CISC processor business, RISC processor architecture is an industry separate from RISC chip manufacturing. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface, and application market, which gives customers more choice through hardware-independent, real-time-capable software applications.
An architecture disruption emerged, and the smartphone and tablet markets were formed with new rules and new key players in the ICT industry. Today more RISC computer systems run Linux (or other Unix variants) than any other operating system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers, and it is now being further extended by tablets. An additional underlying element of this transition is the increasing role of open-source technologies in both software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominating closed operating system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, and the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based, or licensed, all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact through the SoC-based and software-platform disruptions in the ICT industries. Industrial automation incumbents continue to supply vertically integrated systems consisting of proprietary software and proprietary, mainly microprocessor-based hardware.
They enjoy admirable profitability on a very narrow customer base, owing to strong technology-enabled customer lock-in and customers' high risk exposure, as their production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication, and automation (ICAT) industry. Lately, the Internet of Things (IoT) and weightless networks, a new standard leveraging frequency channels earlier occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will be created that the industrial automation market will in due course face an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation and the competition among incumbents, first through research on cost-competitiveness efforts in captive outsourcing of engineering, research, and development, and second through research on process re-engineering in the case of global software support for complex systems. Third, we investigate the views of the industry actors, namely customers, incumbents, and newcomers, on the future direction of industrial automation, and we conclude with our assessment of the possible routes industrial automation could take, given the looming rise of the Internet of Things (IoT) and weightless networks. Industrial automation is an industry dominated by a handful of global players, each focused on maintaining its own proprietary solutions.
The rise of de facto standards such as the IBM PC, Unix, Linux, and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung, and others, has created the new markets of personal computers, smartphones, and tablets, and it will eventually also reshape industrial automation through game-changing commoditization and the related control-point and business-model changes. This trend will inevitably continue, but the transition to a commoditized industrial automation will not happen in the near future.
Abstract:
Because of the extensive alteration caused by long-term decomposition, the gases in human bodies buried for more than a year had not previously been investigated. For the first time, results are presented from gas analysis of bodies recently exhumed after 30 years of burial. Adipocere formation had protected the bodies from excessive alteration, and gaseous areas could be identified. Sampling was performed in those specific areas with airtight syringes, guided by multi-detector computed tomography (MDCT). The large amount of methane (CH4), coupled with small amounts of hydrogen (H2) and carbon dioxide (CO2), the usual gaseous indicators of alteration, confirmed a methanogenic mechanism over long periods of alteration: the H2 and CO2 produced during the first stages of the alteration process were consumed through anaerobic oxidation by methanogenic bacteria, generating CH4.
Abstract:
Ethyl glucuronide (EtG) is a minor, direct metabolite of ethanol. EtG is incorporated into growing hair, allowing retrospective investigation of chronic alcohol abuse. In this study, we report the development and validation of a method using gas chromatography-negative chemical ionization tandem mass spectrometry (GC-NCI-MS/MS) for the quantification of EtG in hair. EtG was extracted from about 30 mg of hair by aqueous incubation, purified by solid-phase extraction (SPE) on mixed-mode cartridges, and derivatized with perfluoropentanoic anhydride (PFPA). The analysis was performed in selected reaction monitoring (SRM) mode using the transitions m/z 347→163 (quantification) and m/z 347→119 (identification) for EtG, and m/z 352→163 for the internal standard, EtG-d5. For validation, we prepared quality controls (QC) from hair samples taken post mortem from two subjects with a known history of alcoholism; these samples were confirmed by a proficiency test with 7 participating laboratories. The assay linearity of EtG was confirmed over the range from 8.4 to 259.4 pg/mg hair, with a coefficient of determination (r²) above 0.999. The limit of detection (LOD) was estimated at 3.0 pg/mg, and the lower limit of quantification (LLOQ) was fixed at 8.4 pg/mg. Repeatability and intermediate precision (relative standard deviation, RSD%), tested at 4 QC levels, were below 13.2%. The analytical method was applied to several hair samples obtained from autopsy cases with a history of alcoholism and/or alcohol-related lesions. EtG concentrations in hair ranged from 60 to 820 pg/mg.
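Quantification in an SRM assay of this kind rests on the analyte/internal-standard peak-area ratio and a linear calibration. The peak areas and calibration slope below are hypothetical, not values from the study:

```python
# Stable-isotope internal-standard quantification: concentration from the
# analyte/IS peak-area ratio via a linear calibration. All numbers are
# hypothetical illustrations.

def quantify(area_analyte, area_istd, slope, intercept=0.0):
    """Concentration from a calibration of the form:
    ratio = slope * concentration + intercept."""
    ratio = area_analyte / area_istd
    return (ratio - intercept) / slope

# Hypothetical: calibration slope of 0.004 per (pg/mg); a hair extract
# gives an EtG area of 5200 against an EtG-d5 area of 26000.
conc = quantify(5200, 26000, slope=0.004)
```

Because the deuterated internal standard co-elutes and fragments analogously, the area ratio cancels most extraction and ionization variability, which is what makes pg/mg-level quantification in a 30 mg hair sample feasible.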