916 results for Instrumental variable probit
Abstract:
This PhD thesis describes the application of several instrumental analytical techniques suited to the study of foods that are fundamental to the human diet, such as extra virgin olive oil and dairy products. These products, widespread on the market and of high nutritional value, are increasingly recognized for their healthy properties, although their lipid fraction may contain some components unfavorable to human health. The research activity was structured into the following investigations: "Comparison of different techniques for trans fatty acid analysis"; "Fatty acid analysis of milk cream samples obtained by natural creaming, with particular emphasis on the content of conjugated linoleic acid (CLA) and trans fatty acids (TFA), using a 100 m high-polarity capillary column"; "Evaluation of the oxidized fatty acid (OFA) content during Parmigiano-Reggiano cheese ripening"; "Direct analysis of 4-desmethyl sterols and two dihydroxy triterpenes in saponified vegetable oils (olive oil and others) using liquid chromatography-mass spectrometry"; "Quantitation of long-chain polyunsaturated fatty acids (LC-PUFA) in base infant formulas by gas chromatography, and evaluation of the accuracy of the blending phases during their preparation"; "Fatty acid composition of Parmigiano Reggiano cheese samples, with emphasis on trans isomers (TFA)".
Abstract:
Nuclear Magnetic Resonance (NMR) is a branch of spectroscopy based on the fact that many atomic nuclei can be oriented by a strong magnetic field and will absorb radiofrequency radiation at characteristic frequencies. The parameters that can be measured on the resulting spectral lines (line positions, intensities, line widths, multiplicities and transients in time-dependent experiments) can be interpreted in terms of molecular structure, conformation, molecular motion and other rate processes. In this way, high-resolution (HR) NMR allows qualitative and quantitative analysis of samples in solution, in order to determine the structure of molecules in solution and beyond. In the past, high-field NMR spectroscopy was mainly concerned with the elucidation of chemical structure in solution, but today it is emerging as a powerful exploratory tool for probing biochemical and physical processes, and it represents a versatile tool for the analysis of foods. In the literature, many NMR studies have been reported on different types of food, such as wine, olive oil, coffee, fruit juices, milk, meat, egg, starch granules and flour, using different NMR techniques. Traditionally, univariate analytical methods have been used to explore spectroscopic data. Such a method measures or selects a single descriptive variable from the whole spectrum and, in the end, only this variable is analyzed. This univariate approach, applied to HR-NMR data, leads to several problems, due especially to the complexity of an NMR spectrum. The latter is composed of different signals belonging to different molecules, while the same molecule can be represented by several, generally strongly correlated, signals. Univariate methods, in this case, take into account only one or a few variables, causing a loss of information.
Thus, when dealing with complex samples like foodstuffs, univariate analysis of spectral data is not powerful enough. Spectra need to be considered in their entirety and, to analyse them, the whole data matrix must be taken into consideration: chemometric methods are designed to treat such multivariate data. Multivariate data analysis is used for a number of distinct purposes, and its aims can be divided into three main groups: • data description (explorative data-structure modelling of any generic n-dimensional data matrix, e.g. PCA); • regression and prediction (PLS); • classification and prediction of class membership for new samples (LDA, PLS-DA and ECVA). The aim of this PhD thesis was to verify the possibility of identifying and classifying plants or foodstuffs into different classes, based on the concerted variation in metabolite levels detected by NMR spectra, using multivariate data analysis as a tool to interpret the NMR information. It is important to underline that the results obtained are useful to point out the metabolic consequences of a specific modification of foodstuffs, avoiding the use of a targeted analysis for the different metabolites. The data analysis is performed by applying chemometric multivariate techniques to the dataset of acquired NMR spectra. The research work presented in this thesis is the result of a three-year PhD study. The thesis reports the main results obtained from two main activities: A1) evaluation of a data pre-processing system to minimize unwanted sources of variation due to different instrumental set-ups, manual spectra processing and sample-preparation artefacts; A2) application of multivariate chemometric models in data analysis.
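The explorative PCA step mentioned above can be sketched in a few lines. This is a hedged toy example on an invented "spectra" matrix, not the thesis's dataset: two correlated peak variables separate two sample classes, something a single hand-picked variable could only capture partially.

```python
import numpy as np

# Hypothetical toy "spectra": 6 samples x 50 variables. Two correlated peaks
# (variables 10 and 30) distinguish the first three samples from the rest,
# mimicking the correlated resonances of one molecule in an NMR spectrum.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 0.05, size=(6, 50))
X[:3, 10] += 1.0   # class A: peak at variable 10 ...
X[:3, 30] += 0.8   # ... co-varying with a peak at variable 30

# PCA via SVD of the mean-centred data matrix (the "data description" step).
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                      # sample coordinates on the PCs
explained = s**2 / np.sum(s**2)     # fraction of variance per component

# The first PC separates the two classes using the whole spectrum at once,
# instead of a single hand-picked descriptive variable.
```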
Abstract:
The work presented in this thesis falls within the context of constraint programming, a paradigm for modelling and solving combinatorial search problems that require finding solutions in the presence of constraints. A large class of these problems is naturally formulated in the language of set variables. Since the domain of such variables can be exponential in the number of elements, an explicit representation is often impractical. Recent studies have therefore focused on finding efficient ways to represent these variables. It is thus customary to represent these domains through interval-based approximations (hereafter, representations), specified by a lower bound and an upper bound under an appropriate ordering relation. The recent evolution of research on constraint programming over sets has clearly shown that combining different representations achieves performance orders of magnitude better than traditional encoding techniques. Numerous proposals have been made in this direction. These works differ in how consistency is maintained between the different representations and in how constraints are propagated in order to reduce the search space. Unfortunately, no formal tool exists for comparing these combinations. The main goal of this work is to provide such a tool, in which we precisely define the notion of a combination of representations, bringing out the common aspects that have characterised previous work. In particular, we identify two possible types of combination, a strong one and a weak one, defining the notions of bound consistency on constraints and of synchronisation between representations.
Our study offers some interesting insights into the existing combinations, highlighting their limits and revealing a few surprises. We also provide a complexity analysis of the synchronisation between minlex, a representation able to propagate lexicographic constraints optimally, and the main existing representations.
Abstract:
Several diagnostic techniques are presented for the detection of electrical faults in induction motor variable-speed drives. These techniques are developed taking into account the impact of the control system on the machine variables and non-stationary operating conditions.
Abstract:
The aim of the thesis is to formulate a suitable Item Response Theory (IRT) based model to measure HRQoL (as a latent variable) using a mixed-response questionnaire and relaxing the hypothesis of a normally distributed latent variable. The new model is a combination of two models already presented in the literature, namely a latent trait model for mixed responses and an IRT model for a Skew-Normal latent variable. It is developed in a Bayesian framework; a Markov chain Monte Carlo procedure is used to generate samples from the posterior distribution of the parameters of interest. The proposed model is tested on a questionnaire composed of five discrete items and one continuous item for measuring HRQoL in children, the EQ-5D-Y questionnaire, using a large sample of children collected in schools. In comparison with a model for only discrete responses and a model for mixed responses with a normal latent variable, the new model performs better in terms of the deviance information criterion (DIC), chain convergence times and precision of the estimates.
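The MCMC machinery behind such a model can be illustrated on a much simpler case. The sketch below is a hedged stand-in, not the thesis's skew-normal mixed-response model: Metropolis sampling of one person's latent trait in a one-parameter (Rasch) IRT model, with invented item difficulties and responses.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed (invented) item difficulties and one person's 0/1 responses.
difficulties = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
responses = np.array([1, 1, 1, 0, 0])

def log_post(theta):
    # Standard-normal prior on the latent trait + Bernoulli (logistic)
    # likelihood of the observed item responses.
    p = 1.0 / (1.0 + np.exp(-(theta - difficulties)))
    return -0.5 * theta**2 + np.sum(
        responses * np.log(p) + (1 - responses) * np.log(1 - p))

# Random-walk Metropolis: propose, accept with probability min(1, ratio).
theta, chain = 0.0, []
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.5)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)

posterior_mean = np.mean(chain[1000:])   # discard burn-in
```

The full model replaces this single logistic likelihood with a mixed discrete/continuous likelihood and a skew-normal prior, but the sample-propose-accept loop is the same idea.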
Abstract:
The thesis studies the economic and financial conditions of Italian households, using microeconomic data from the Survey on Household Income and Wealth (SHIW) over the period 1998-2006. It develops along two lines of enquiry. First, it studies the determinants of households' holdings of assets and liabilities and estimates their degree of correlation. After a review of the literature, it estimates two non-linear multivariate models of the interactions between assets and liabilities on repeated cross-sections. Second, it analyses households' financial difficulties: it defines a quantitative measure of financial distress and tests, by means of non-linear dynamic probit models, whether the probability of experiencing financial difficulties is persistent over time. Chapter 1 provides a critical review of the theoretical and empirical literature on the estimation of asset and liability holdings, on their interactions and on households' net wealth. The review stresses that a large part of the literature explains households' debt holdings as a function of, among other variables, net wealth, an assumption that runs into possible endogeneity problems. Chapter 2 defines two non-linear multivariate models to study the interactions between assets and liabilities held by Italian households; estimation refers to a pooling of SHIW cross-sections. The first model is a bivariate tobit that estimates the factors affecting assets and liabilities and their degree of correlation, with results coherent with theoretical expectations. To tackle the presence of non-normality and heteroskedasticity in the error term, which make the tobit estimators inconsistent, semi-parametric estimates are provided that confirm the results of the tobit model. The second model is a quadrivariate probit on three different assets (safe, risky and real) and total liabilities; the results show the patterns of interdependence suggested by theoretical considerations.
Chapter 3 reviews the methodologies for estimating non-linear dynamic panel data models, drawing attention to the problems to be dealt with in order to obtain consistent estimators. Specific attention is given to the initial-conditions problem raised by the inclusion of the lagged dependent variable in the set of explanatory variables. The advantage of dynamic panel data models lies in the fact that they allow one to simultaneously account for true state dependence, via the lagged variable, and for unobserved heterogeneity, via the specification of individual effects. Chapter 4 applies the models reviewed in Chapter 3 to analyse the financial difficulties of Italian households, using the information on net wealth provided in the panel component of the SHIW. The aim is to test whether households persistently experience financial difficulties over time. A thorough discussion is provided of the alternative approaches proposed in the literature (subjective/qualitative indicators versus quantitative indexes) to identify households in financial distress. Households in financial difficulty are identified as those holding amounts of net wealth lower than the value corresponding to the first quartile of the net-wealth distribution. Estimation is conducted via four different methods: the pooled probit model, the random-effects probit model with exogenous initial conditions, the Heckman model and the recently developed Wooldridge model. Results obtained from all estimators support the hypothesis of true state dependence and show that, in accordance with the literature, the less sophisticated models, namely the pooled and exogenous-initial-conditions models, over-estimate such persistence.
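The simplest of the four estimators, the pooled probit, can be sketched by direct maximum likelihood. This is a hedged illustration on synthetic data (not the SHIW, and without the panel structure): the latent model is y* = b0 + b1·x + e with e ~ N(0,1), and y = 1 when y* > 0.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Synthetic data with known coefficients (invented for illustration).
rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)
b_true = np.array([0.3, 0.8])
y = (b_true[0] + b_true[1] * x + rng.normal(size=n) > 0).astype(float)

X = np.column_stack([np.ones(n), x])   # intercept + regressor

def neg_loglik(b):
    # Probit log-likelihood: P(y=1|x) = Phi(X b).
    p = norm.cdf(X @ b)
    p = np.clip(p, 1e-10, 1 - 1e-10)   # guard against log(0)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

res = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
b_hat = res.x   # should recover b_true up to sampling noise
```

The dynamic-panel estimators of Chapter 4 extend this likelihood with a lagged dependent variable and individual effects, which is precisely where the initial-conditions problem arises.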
Abstract:
The consumer demand for natural, minimally processed, fresh-like and functional food has led to an increasing interest in emerging technologies. The aim of this PhD project was to study three innovative food processing technologies currently used in the food sector. Ultrasound-assisted freezing, vacuum impregnation and pulsed electric fields have been investigated through laboratory-scale systems and semi-industrial pilot plants. Furthermore, analytical and sensory techniques have been developed to evaluate the quality of food and vegetable matrices obtained by traditional and emerging processes. Ultrasound was found to be a valuable technique to improve the freezing process of potatoes, anticipating the beginning of the nucleation process, mainly when applied during the supercooling phase. A study of the effects of pulsed electric fields on the phenol and enzymatic profile of melon juice was carried out, and the statistical treatment of the data was performed through a response surface method. Next, flavour enrichment of apple sticks was carried out applying different techniques, such as atmospheric, vacuum and ultrasound technologies and their combinations. The second section of the thesis deals with the development of analytical methods for the discrimination and quantification of phenol compounds in vegetable matrices, such as chestnut bark extracts and olive mill waste water. The management of waste disposal in the mill sector has been approached with the aim of reducing the amount of waste while recovering valuable by-products to be used in different industrial sectors. Finally, the sensory analysis of boiled potatoes has been carried out through the development of a quantitative descriptive procedure for the study of Italian and Mexican potato varieties.
An update on flavour development in fresh and cooked potatoes has been produced, and a sensory glossary, including general and specific definitions related to organic products, used in the European project Ecropolis, has been drafted.
Abstract:
My PhD project has been focused on the study of the pulsating variable stars in two ultra-faint dwarf spheroidal satellites of the Milky Way, namely, Leo IV and Hercules; and in two fields of the Large Magellanic Cloud (namely, the Gaia South Ecliptic Pole calibration field, and the 30 Doradus region) that were repeatedly observed in the KS band by the VISTA Magellanic Cloud (VMC, PI M.R. Cioni) survey of the Magellanic System.
Abstract:
This dissertation deals with the whole-rock analysis of stable silicon isotopes using a multi-collector ICP-MS. The analyses were carried out in cooperation with the Royal Museum for Central Africa in Belgium. One focus of the first chapter is the first analysis of the δ30Si value on a conventional Nu Plasma™ multi-collector ICP-MS instrument, achieved by eliminating the 14N16O interference that overlaps the 30Si peak. The δ30Si analysis was made possible by technical modifications of the instrument that allowed a higher mass resolution. Careful characterisation of an adequate reference material is indispensable for assessing the accuracy of a measurement; the determination of U.S. Geological Survey reference materials forms the second focus of this chapter. The analysis of two Hawaiian standards (BHVO-1 and BHVO-2) demonstrates precise and accurate δ30Si determination and provides comparative data for quality control in other laboratories. The second chapter deals with combined silicon and oxygen isotopes for investigating the origin of the silicification of volcanic rocks of the Barberton Greenstone Belt, South Africa. In contrast to today, silicification of near-surface layers, including chert formation, was a widespread process on the Precambrian ocean floor. These horizons bear witness to an extreme mobilisation of silicon in the early history of the Earth. This chapter covers the analysis of silicon and oxygen isotopes in three rock profiles with variably silicified basalts and overlying bedded cherts of the 3.54, 3.45 and 3.33 Ga old Theespruit, Kromberg and Hooggenoeg Formations.
In all three rock profiles, the silicon isotopes, oxygen isotopes and SiO2 contents show a positive correlation with the degree of silicification, but with different slopes of the δ30Si-δ18O relationships. Seawater is considered the source of the silicon for the silicification process. Calculations have shown that classical water-rock interaction cannot account for the silicon isotope variation, because the concentration of Si in seawater is too low (49 ppm). The data are consistent with a two-endmember mixing, with basalt and chert as the respective endmembers. Our present data on the cherts confirm an increase in the isotopic composition over time. Possible factors responsible for the different slopes of the δ30Si-δ18O relationships are changes in the seawater isotopic composition, in the water temperature, or secondary alteration effects. The last chapter addresses potential variations in the source region of Archean granitoids: the Si-isotope perspective. Sodic tonalite-trondhjemite-granodiorite (TTG) intrusives represent large portions of the Archean crust. In contrast, the present-day crust is more potassic (GMS group: granite-monzonite-syenite). The processes that led to the change from sodic to potassic crust are the topic of this chapter. Silicon isotope measurements were combined with major- and trace-element analyses on different generations of the 3.55 to 3.10 Ga old TTG and GMS intrusives from the study area. The δ30Si values in the different plutonic generations show a slight increase with time, with the sodic intrusives having the lowest Si-isotope composition. The slight increase in the silicon isotope composition over time could point to different temperature conditions in the source region of the granitoids.
The formation of Na-rich, isotopically light (δ30Si) granitoids would accordingly occur at higher temperatures. The similarity of the δ30Si values in Archean K-rich plutonites and Phanerozoic K-rich plutonites is also evident.
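The two-endmember mixing interpretation invoked for the silicified basalts can be sketched as a simple linear mass balance on Si. This is a hedged illustration: the function below ignores concentration weighting, and all delta values are invented, not the thesis's measurements.

```python
# Two-endmember mixing: the isotopic composition of a silicified sample as a
# linear mass balance between a basalt and a chert endmember.
# f_chert is the fraction of Si contributed by the chert endmember (0..1).
def mixing_delta(f_chert, delta_chert, delta_basalt):
    return f_chert * delta_chert + (1.0 - f_chert) * delta_basalt

# Invented endmember values (permil): a heavy chert and a light basalt.
d_chert, d_basalt = 1.5, -0.3
half_mixed = mixing_delta(0.5, d_chert, d_basalt)   # midway between endmembers
```

With real data, f_chert would be tied to the SiO2 content of each sample, which is why δ30Si and SiO2 correlate with the degree of silicification.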
Abstract:
Due to its practical importance and inherent complexity, the optimisation of distribution networks for supplying drinking water has been the subject of extensive study for the past 30 years. The optimisation concerns sizing the pipes of the water distribution network (WDN), and/or optimising specific parts of the network such as pumps and tanks, and/or analysing and optimising the reliability of the WDN. In this thesis, the author has analysed two different WDNs (the Anytown and Cabrera city networks), solving and optimising a multi-objective optimisation problem (MOOP). The two main objectives in both cases were the minimisation of energy cost (€) or energy consumption (kWh), together with the total number of pump switches (TNps) during a day. For this purpose, GANetXL, a decision-support-system generator for multi-objective optimisation developed by the Centre for Water Systems at the University of Exeter, was used. GANetXL works by calling the EPANET hydraulic solver each time a hydraulic analysis has to be performed. The main algorithm used was NSGA-II, a second-generation algorithm for multi-objective optimisation, which provided the Pareto fronts of each configuration. The first experiment carried out concerned the Anytown network, a large network with a pumping station of four fixed-speed parallel pumps boosting the flow. The main intervention was to replace these pumps with variable-speed-driven pumps (VSDPs), by installing inverters capable of varying their speed during the day. Great energy and cost savings were thereby achieved, along with a reduction in the number of pump switches. The results of the research are thoroughly illustrated in chapter 7, with comments and a variety of graphs for the different configurations. The second experiment concerned the Cabrera city network, a smaller WDN with a single fixed-speed (FS) pump.
The optimisation problem was the same: minimisation of the energy consumption and, in parallel, minimisation of the TNps; the same optimisation tool (GANetXL) was used. The main scope was to carry out several different experiments over a vast variety of configurations, using different pumps (this time keeping the FS mode), different tank levels, different pipe diameters and different emitter coefficients. All these different modes produced a large number of results, which are compared in chapter 8. Concluding, it should be said that the optimisation of WDNs is a very interesting field with a vast space of options to deal with: a large number of algorithms to choose from, different techniques and configurations, and different decision-support-system generators. The researcher has to be ready to "roam" among these choices until a satisfactory result convinces him/her that a good optimisation point has been reached.
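At the core of NSGA-II is the Pareto-dominance test between candidate solutions. As a hedged illustration (invented numbers, plain Python, not GANetXL's implementation), the sketch below extracts the non-dominated front from a set of (energy, pump-switch) pairs, with both objectives minimised:

```python
# A solution dominates another if it is no worse in every objective and
# strictly better in at least one (both objectives minimised here).
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    # Keep every solution that no other solution dominates.
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Hypothetical (energy_kwh, pump_switches) pairs for candidate pump schedules.
candidates = [(120.0, 10), (100.0, 14), (100.0, 8), (90.0, 12), (130.0, 6)]
front = pareto_front(candidates)
```

NSGA-II repeats this non-dominated sorting generation after generation, also using a crowding-distance measure to keep the front well spread; the surviving front is what the thesis reports as the trade-off between energy and pump switches.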
Abstract:
Food suppliers currently measure apple quality by means of basic pomological descriptors. Sensory analysis is expensive, does not permit the analysis of many samples, and cannot be implemented for measuring quality properties in real time. However, sensory analysis is the best way to describe food eating quality precisely, since it is able to define, measure and explain what is really perceivable by the human senses, using a language that closely reflects the consumers' perception. On the basis of such observations, we developed a detailed protocol for apple sensory profiling by descriptive sensory analysis and instrumental measurements. The collected sensory data were validated by applying rigorous scientific criteria for sensory analysis. The method was then applied to studying the sensory properties of apples and their changes in relation to different pre- and post-harvest factors affecting fruit quality, and it was demonstrated to be able to discriminate fruit varieties and to highlight differences in sensory properties. The instrumental measurements confirmed these results. Moreover, the correlation between sensory and instrumental data was studied, and a new effective approach was defined for the reliable prediction of sensory properties by instrumental characterisation. It is therefore possible to propose the application of this sensory-instrumental tool to all the stakeholders involved in apple production and marketing, in order to obtain a reliable description of apple fruit quality.
Abstract:
An extensive study of the morphology and dynamics of the equatorial ionosphere over South America is presented here. A multi-parametric approach is used to describe the physical characteristics of the ionosphere in the regions where the combination of the thermospheric electric field and the horizontal geomagnetic field creates the so-called Equatorial Ionization Anomalies. Ground-based measurements from GNSS receivers are used to link the Total Electron Content (TEC), its spatial gradients and the phenomenon known as scintillation, which can lead to GNSS signal degradation or even to a 'loss of lock' on the GNSS signal. A new algorithm to highlight the features characterizing the TEC distribution is developed in the framework of this thesis, and the results obtained are validated and used to improve the performance of a GNSS positioning technique (long-baseline RTK). In addition, the correlation between scintillation and the dynamics of the ionospheric irregularities is investigated. By means of software implemented for this work, the velocity of the ionospheric irregularities is evaluated using high-sampling-rate GNSS measurements. The results highlight the parallel behaviour of the amplitude scintillation index (S4) occurrence and the zonal velocity of the ionospheric irregularities, at least under severe scintillation conditions (post-sunset hours). This suggests that scintillations are driven by TEC gradients as well as by the dynamics of the ionospheric plasma. Finally, given the importance of such studies for technological applications (e.g. GNSS high-precision applications), a validation of the NeQuick model (the model used in the new GALILEO satellites for TEC modelling) is performed. The NeQuick performance dramatically improves when data from HF radar sounding (ionograms) are ingested. A custom-designed algorithm, based on image recognition techniques, is developed to properly select the ingested data, leading to further improvement of the NeQuick performance.
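The amplitude scintillation index S4 mentioned above is conventionally defined as the normalised standard deviation of the received signal intensity over an averaging window, S4 = sqrt((⟨I²⟩ − ⟨I⟩²) / ⟨I⟩²). A minimal sketch on synthetic intensity samples (invented numbers, not the thesis's receiver data):

```python
import numpy as np

def s4_index(intensity):
    # S4 = sqrt((<I^2> - <I>^2) / <I>^2): normalised intensity variance
    # over the averaging window (commonly ~60 s of detrended samples).
    i = np.asarray(intensity, dtype=float)
    return np.sqrt((np.mean(i**2) - np.mean(i)**2) / np.mean(i)**2)

# A steady signal gives S4 ~ 0; a strongly fluctuating one gives S4 >> 0.
rng = np.random.default_rng(3)
steady = np.full(600, 100.0)
disturbed = 100.0 * (1.0 + 0.5 * rng.standard_normal(600)).clip(min=0.01)
```

In practice the raw intensity is first detrended to remove slow power variations, so that S4 reflects only the diffraction-driven fluctuations attributed to ionospheric irregularities.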
Abstract:
The work of this thesis concerns the implementation of a variable-stiffness joint antagonistically actuated by a pair of twisted-string actuators (TSAs). This type of joint can be applied in the field of robotics, for example in the UB Hand IV (the anthropomorphic robotic hand developed by the University of Bologna). The aims of the activity were to build the joint's dynamic model and to control its position and stiffness simultaneously. Three different control approaches (feedback linearization, PID, PID plus feedforward) are proposed and validated in simulation. To improve the joint-stiffness properties, a joint with an elastic element is also considered and discussed. Finally, the experimental setup developed for the validation of the proposed control approaches is described.
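Of the three approaches, the PID loop is the simplest to sketch. The example below is a hedged toy: a discrete PID position controller driving an invented first-order plant (x' = u), not the joint/TSA dynamic model of the thesis, and with made-up gains.

```python
# Minimal discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy integrator plant x' = u standing in for the joint position dynamics.
pid = PID(kp=4.0, ki=1.0, kd=0.1, dt=0.01)
x = 0.0
for _ in range(2000):                 # simulate 20 s
    u = pid.update(1.0, x)            # drive the position toward 1.0 rad
    x += u * 0.01                     # Euler step of the toy plant
```

In the antagonistic TSA joint the same idea is applied to two coupled loops (position and stiffness), with the feedforward term compensating the known part of the actuator model.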
Abstract:
The atmospheric cycle of reactive nitrogen compounds concerns both natural scientists and policy makers, in particular because reactive nitrogen oxides control the formation of ground-level ozone. Reactive nitrogen compounds also play an important role as gaseous precursors of fine particulate matter, and the long-range transport of reactive nitrogen alters the planet's biogeochemical carbon cycle by fertilising remote ecosystems with nitrogen. Measurements of stable nitrogen isotope ratios (15N/14N) provide a tool to identify the sources of reactive nitrogen compounds and to investigate the reactions involved in the nitrogen cycle through their reaction-specific isotope fractionation. In this doctoral thesis I demonstrate that nano-scale secondary ion mass spectrometry (NanoSIMS) makes it possible to analyse and identify, with a spatial resolution of less than one micrometre, various nitrogen-containing compounds typically found in atmospheric aerosol particles. The different nitrogen compounds are distinguished on the basis of the relative signal intensities of the positive and negative secondary ion signals observed when the aerosol samples are bombarded with a Cs+ or O- primary ion beam. The aerosol samples can be introduced into the mass spectrometer directly on the sampling substrate, without chemical or physical pre-treatment. The method was tested with nitrate, nitrite, ammonium sulfate, urea, amino acids, biological aerosol samples (fungal spores) and imidazole.
I showed that NO2 secondary ions are generated only upon bombardment of nitrate and nitrite (salts) with positive primary ions, whereas NH4+ secondary ions are released only upon bombardment of amino acids, urea and ammonium salts with positive primary ions, but not upon bombardment of biological samples such as fungal spores. CN- secondary ions are observed upon bombardment of all nitrogen-containing compounds with positive primary ions, since almost all samples are contaminated with traces of carbon near the surface. The relative signal intensity of the CN- secondary ions is highest for carbon-containing organic nitrogen compounds. Furthermore, I showed that species-specific stable nitrogen isotope ratios can be measured precisely and accurately on pure nitrate salt samples (NaNO3 and KNO3) deposited on gold foils, using the 15N16O2- / 14N16O2- secondary ion ratio. The measurement precision on fields with a raster size of 5×5 µm2 was determined as ±0.6‰ from long-term measurements of an in-house NaNO3 standard. The difference in matrix-specific instrumental mass fractionation between NaNO3 and KNO3 was 7.1 ± 0.9‰. 23Na12C2- secondary ions can constitute a serious interference when 15N16O2- secondary ions are to be used to measure nitrate-specific heavy nitrogen and sodium and carbon are internally mixed in the same aerosol particle, or when a sodium-containing sample is deposited on a carbon-containing substrate. Even when no such interference exists, as in the case of KNO3, internal mixing with carbon in the same aerosol particle leads to a matrix-specific instrumental mass fractionation that can be described by the equation 15Nbias = (101 ± 4) · f − (101 ± 3) ‰, with f = 14N16O2- / (14N16O2- + 12C14N-).
If the 12C15N- / 12C14N- secondary ion ratio is used instead to measure the stable nitrogen isotope composition, the sample matrix does not affect the results, even when nitrogen and carbon are present in the aerosol particles at variable N/C ratios, and interferences play no role either. To ensure that the measurement remains specific to nitrate species, a 14N16O2- mask can be applied during data evaluation. Collecting the samples on a carbon-containing, nitrogen-free sampling substrate increases the signal intensity for pure nitrate aerosol particles.
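The matrix-specific fractionation relation quoted in the abstract, 15Nbias = (101 ± 4) · f − (101 ± 3) ‰ with f = 14N16O2- / (14N16O2- + 12C14N-), can be evaluated numerically. This sketch uses only the central values of the fitted coefficients; the quoted ±4 and ±3 ‰ uncertainties and any real count rates are not modelled.

```python
# d15N bias (permil) as a function of the NO2-/(NO2- + CN-) signal fraction f,
# using the central values of the relation quoted in the abstract.
def n15_bias_permil(no2_counts, cn_counts, slope=101.0, intercept=-101.0):
    f = no2_counts / (no2_counts + cn_counts)
    return slope * f + intercept

# A carbon-free nitrate particle (f -> 1) shows ~0 bias, while strong
# internal mixing with carbon (small f) pulls the measured ratio light.
pure_nitrate_bias = n15_bias_permil(1.0, 0.0)
mixed_particle_bias = n15_bias_permil(1.0, 1.0)   # f = 0.5
```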
Abstract:
BACKGROUND: Loss-of-function mutations in SCN5A, the gene encoding Na(v)1.5 Na+ channel, are associated with inherited cardiac conduction defects and Brugada syndrome, which both exhibit variable phenotypic penetrance of conduction defects. We investigated the mechanisms of this heterogeneity in a mouse model with heterozygous targeted disruption of Scn5a (Scn5a(+/-) mice) and compared our results to those obtained in patients with loss-of-function mutations in SCN5A. METHODOLOGY/PRINCIPAL FINDINGS: Based on ECG, 10-week-old Scn5a(+/-) mice were divided into 2 subgroups, one displaying severe ventricular conduction defects (QRS interval>18 ms) and one a mild phenotype (QRS< or = 18 ms; QRS in wild-type littermates: 10-18 ms). Phenotypic difference persisted with aging. At 10 weeks, the Na+ channel blocker ajmaline prolonged QRS interval similarly in both groups of Scn5a(+/-) mice. In contrast, in old mice (>53 weeks), ajmaline effect was larger in the severely affected subgroup. These data matched the clinical observations on patients with SCN5A loss-of-function mutations with either severe or mild conduction defects. Ventricular tachycardia developed in 5/10 old severely affected Scn5a(+/-) mice but not in mildly affected ones. Correspondingly, symptomatic SCN5A-mutated Brugada patients had more severe conduction defects than asymptomatic patients. Old severely affected Scn5a(+/-) mice but not mildly affected ones showed extensive cardiac fibrosis. Mildly affected Scn5a(+/-) mice had similar Na(v)1.5 mRNA but higher Na(v)1.5 protein expression, and moderately larger I(Na) current than severely affected Scn5a(+/-) mice. As a consequence, action potential upstroke velocity was more decreased in severely affected Scn5a(+/-) mice than in mildly affected ones. CONCLUSIONS: Scn5a(+/-) mice show similar phenotypic heterogeneity as SCN5A-mutated patients. In Scn5a(+/-) mice, phenotype severity correlates with wild-type Na(v)1.5 protein expression.