993 results for Instrumental analysis
Abstract:
in the everyday clinical practice. Having this in mind, the choice of a simple setup would not be enough because, even if the setup is quick and simple, the instrumental assessment would still be in addition to the daily routine. The will to overcome this limit has led to the idea of instrumenting already existing and widely used functional tests. In this way the sensor-based assessment becomes an integral part of the clinical assessment. Reliable and validated signal processing methods have been successfully implemented in Personal Health Systems based on smartphone technology. At the end of this research project there is evidence that such a solution can really and easily be used in clinical practice in both supervised and unsupervised settings. Smartphone-based solutions, together with or in place of dedicated wearable sensing units, can truly become a pervasive and low-cost means of providing suitable testing solutions for quantitative movement analysis with clear clinical value, ultimately providing enhanced balance and mobility support to an aging population.
Abstract:
Food suppliers currently measure apple quality using basic pomological descriptors. Sensory analysis is expensive, does not allow many samples to be analysed, and cannot be implemented for measuring quality properties in real time. However, sensory analysis is the best way to precisely describe food eating quality, since it is able to define, measure, and explain what is really perceivable by human senses, using a language that closely reflects the consumers’ perception. On the basis of such observations, we developed a detailed protocol for apple sensory profiling by descriptive sensory analysis and instrumental measurements. The collected sensory data were validated by applying rigorous scientific criteria for sensory analysis. The method was then applied to study the sensory properties of apples and their changes in relation to different pre- and post-harvest factors affecting fruit quality, and was shown to be able to discriminate fruit varieties and to highlight differences in terms of sensory properties. The instrumental measurements confirmed these results. Moreover, the correlation between sensory and instrumental data was studied, and a new effective approach was defined for the reliable prediction of sensory properties by instrumental characterisation. It is therefore possible to propose the application of this sensory-instrumental tool to all the stakeholders involved in apple production and marketing, to provide a reliable description of apple fruit quality.
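A minimal sketch of how such a sensory-instrumental prediction could be set up, assuming hypothetical instrumental predictors (firmness, soluble solids, acidity) and a panel-averaged sensory attribute; the abstract does not name the actual model, so partial least squares regression is used here purely for illustration.

```python
# Illustrative sketch: predicting a sensory attribute from instrumental measurements.
# Column meanings and the use of PLS regression are assumptions, not the authors' method.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical instrumental data: firmness (N), soluble solids (°Brix), acidity (g/L)
X = rng.normal(loc=[60.0, 13.0, 5.0], scale=[10.0, 1.5, 1.0], size=(40, 3))
# Hypothetical panel-averaged "crunchiness" scores on a 0-100 scale
y = 0.8 * X[:, 0] + 2.0 * X[:, 1] - 1.5 * X[:, 2] + rng.normal(0, 5, size=40)

model = PLSRegression(n_components=2)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("Cross-validated R^2:", scores.mean().round(2))
```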
Abstract:
The atmospheric cycle of reactive nitrogen compounds occupies both natural scientists and policy makers, above all because reactive nitrogen oxides control the formation of ground-level ozone. Reactive nitrogen compounds also play an important role as gaseous precursors of fine particulate matter, and the long-range transport of reactive nitrogen alters the planet's biogeochemical carbon cycle by fertilising remote ecosystems with nitrogen. Measurements of stable nitrogen isotope ratios (15N/14N) provide a tool that makes it possible to identify the sources of reactive nitrogen compounds and to examine the reactions involved in the nitrogen cycle more closely by means of their reaction-specific isotope fractionation.
In this doctoral thesis I demonstrate that nano-scale secondary ion mass spectrometry (NanoSIMS) can be used to analyse and identify various nitrogen-containing compounds that commonly occur in atmospheric aerosol particles, at a spatial resolution of less than one micrometre. The different nitrogen-containing compounds are distinguished on the basis of the relative signal intensities of the positive and negative secondary ions observed when the particulate samples are bombarded with a Cs+ or O- primary ion beam. The particulate samples can be introduced into the mass spectrometer directly on the sampling substrate, without any chemical or physical preparation. The method was tested on nitrate, nitrite, ammonium sulfate, urea, amino acids, biological particles (fungal spores) and imidazole. I showed that NO2- secondary ions are produced only when nitrate and nitrite (salts) are bombarded with positive primary ions, whereas NH4+ secondary ions are released only when amino acids, urea and ammonium salts are bombarded with positive primary ions, but not when biological samples such as fungal spores are bombarded. CN- secondary ions are observed when any nitrogen-containing compound is bombarded with positive primary ions, since almost all samples are contaminated with traces of carbon near the surface. The relative signal intensity of the CN- secondary ions is highest for carbon-containing organic nitrogen compounds.
Furthermore, I showed that species-specific stable nitrogen isotope ratios can be measured precisely and accurately on pure nitrate salt samples (NaNO3 and KNO3) deposited on gold foils, using the 15N16O2- / 14N16O2- secondary ion ratio. The measurement precision on fields with a raster size of 5×5 µm2 was determined to be ± 0.6 ‰ from long-term measurements of an in-house NaNO3 standard. The difference in matrix-specific instrumental mass fractionation between NaNO3 and KNO3 was 7.1 ± 0.9 ‰. 23Na12C2- secondary ions can constitute a serious interference when 15N16O2- secondary ions are to be used to measure nitrate-specific heavy nitrogen and sodium and carbon are present as an internal mixture within the same aerosol particle, or when the sodium-containing sample has been deposited on a carbon-containing substrate.
Even when, as in the case of KNO3, no such interference is present, an internal mixture with carbon in the same aerosol particle leads to a matrix-specific instrumental mass fractionation that can be described by the following equation: 15Nbias = (101 ± 4) · f − (101 ± 3) ‰, with f = 14N16O2- / (14N16O2- + 12C14N-).
If the 12C15N- / 12C14N- secondary ion ratio is used to measure the stable nitrogen isotope composition, the sample matrix does not affect the measurement results, even if nitrogen and carbon are present in the aerosol particles at variable N/C ratios, and interferences likewise play no role. To ensure that the measurement nevertheless remains restricted to nitrate species, a 14N16O2- mask can be applied during data evaluation. Collecting the samples on a carbon-containing, nitrogen-free sampling substrate increases the signal intensity for pure nitrate particles.
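A short worked example of how the reported matrix-bias relation could be applied; only the linear relation 15Nbias = (101 ± 4) · f − (101 ± 3) ‰ with f = 14N16O2- / (14N16O2- + 12C14N-) comes from the abstract, while the count rates, isotope ratios and the simple subtraction-style correction below are illustrative assumptions.

```python
# Sketch: applying the matrix-specific mass-fractionation relation quoted above.
# The count rates and ratios below are made-up numbers for illustration only.

def matrix_bias_permil(no2_counts: float, cn_counts: float) -> float:
    """15N bias (per mil) as a function of f = NO2- / (NO2- + CN-), central values only."""
    f = no2_counts / (no2_counts + cn_counts)
    return 101.0 * f - 101.0  # central values of (101 ± 4) and (101 ± 3)

def delta15n_permil(r_sample: float, r_standard: float) -> float:
    """Conventional delta notation for a 15N16O2- / 14N16O2- (i.e. 15N/14N) ratio."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Hypothetical measurement: secondary ion count rates and isotope ratios
raw_delta = delta15n_permil(r_sample=0.003695, r_standard=0.003677)
bias = matrix_bias_permil(no2_counts=2.0e5, cn_counts=3.0e5)
print(f"raw delta15N = {raw_delta:.1f} permil, matrix bias = {bias:.1f} permil")
print(f"bias-corrected delta15N = {raw_delta - bias:.1f} permil")
```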
Abstract:
Instrumental daily series of temperature are often affected by inhomogeneities. Several methods are available for their correction at monthly and annual scales, whereas few exist for daily data. Here, an improved version of the higher-order moments (HOM) method, the higher-order moments for autocorrelated data (HOMAD), is proposed. HOMAD addresses the main weaknesses of HOM, namely, data autocorrelation and the subjective choice of regression parameters. Simulated series are used for the comparison of both methodologies. The results reveal that HOMAD outperforms HOM for small samples. Additionally, three daily temperature time series from stations in the eastern Mediterranean are used to show the impact of homogenization procedures on trend estimation and the assessment of extremes. HOMAD provides an improved correction of daily temperature time series and further supports the use of corrected daily temperature time series prior to climate change assessment.
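The HOM/HOMAD algorithms themselves are not described in the abstract; the sketch below only illustrates the general idea of adjusting the segment of a daily series before a detected break so that its distribution matches the homogeneous segment, using simple quantile matching as a stand-in for the authors' higher-order-moments regression.

```python
# Illustrative quantile-matching adjustment of a daily temperature series around a
# known break point. This is NOT the HOM/HOMAD algorithm, only a simplified stand-in.
import numpy as np

rng = np.random.default_rng(1)
n, break_idx = 3650, 1825
series = 15 + 8 * np.sin(2 * np.pi * np.arange(n) / 365.25) + rng.normal(0, 2, n)
series[:break_idx] += 1.2          # artificial inhomogeneity before the break

before, after = series[:break_idx], series[break_idx:]
quantiles = np.linspace(0.05, 0.95, 19)
q_before = np.quantile(before, quantiles)
q_after = np.quantile(after, quantiles)

# Map each pre-break value onto the post-break distribution, quantile by quantile
corrected_before = np.interp(before, q_before, q_after)
homogenized = np.concatenate([corrected_before, after])
print("mean shift removed:", round(before.mean() - corrected_before.mean(), 2))
```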
Abstract:
We use long instrumental temperature series together with available field reconstructions of sea-level pressure (SLP) and three-dimensional climate model simulations to analyze relations between temperature anomalies and atmospheric circulation patterns over much of Europe and the Mediterranean for the late winter/early spring (January–April, JFMA) season. A Canonical Correlation Analysis (CCA) investigates interannual to interdecadal covariability between a new gridded SLP field reconstruction and seven long instrumental temperature series covering the past 250 years. We then present and discuss prominent atmospheric circulation patterns related to anomalously warm and cold JFMA conditions within different European areas spanning the period 1760–2007. Next, using a data assimilation technique, we link gridded SLP data with a climate model (EC-Bilt-Clio) for a better dynamical understanding of the relationship between large-scale circulation and European climate. We thus present an alternative approach to reconstruct climate for the pre-instrumental period based on the assimilated model simulations. Furthermore, we present an independent method to extend the dynamic circulation analysis for anomalously cold European JFMA conditions back to the sixteenth century. To this end, we use documentary records that are spatially representative for the long instrumental records and derive, through modern analogs, large-scale SLP, surface temperature and precipitation fields. The skill of the analog method is tested in the virtual world of two three-dimensional climate simulations (ECHO-G and HadCM3). This endeavor offers new possibilities both to constrain climate models into a reconstruction mode (through the assimilation approach) and to better assess documentary data in a quantitative way.
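The canonical correlation step between the gridded SLP reconstruction and the long station series could be sketched as follows; the grid size, the number of years and the data are synthetic placeholders, and in practice the SLP field would typically be reduced (e.g., by principal components) before the CCA.

```python
# Sketch: canonical correlation analysis between an SLP field and station temperatures.
# Shapes and data are synthetic placeholders, not the reconstruction used in the study.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(2)
years = 248                             # e.g. JFMA means for 1760-2007
slp = rng.normal(size=(years, 50))      # 50 SLP grid points (assumed already PCA-reduced)
temp = rng.normal(size=(years, 7))      # 7 long station temperature series

cca = CCA(n_components=3)
slp_modes, temp_modes = cca.fit_transform(slp, temp)
for k in range(3):
    r = np.corrcoef(slp_modes[:, k], temp_modes[:, k])[0, 1]
    print(f"canonical pair {k + 1}: correlation = {r:.2f}")
```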
Abstract:
The purpose of the study was to examine the effect of teacher experience on student progress and performance quality in an introductory applied lesson. Nine experienced teachers and 15 pre-service teachers taught an adult beginner to play ‘Mary Had a Little Lamb’ on a wind instrument. The lessons were videotaped for subsequent analysis of teaching behaviors and performance achievement. Following instruction, a random sample of teachers was interviewed about their perceptions of the lesson. A panel of adjudicators rated final pupil performances. No significant difference was found between pupils taught by experienced and pre-service teachers in the quality of their final performance. Systematic observation of the videotaped lessons showed that participant teachers provided relatively frequent and highly positive reinforcement during the lessons. Pupils of experienced teachers talked significantly more during the lessons than did pupils of pre-service teachers. Pre-service teachers modeled significantly more on their instruments than did experienced teachers.
Abstract:
One of the most influential statements in the anomie theory tradition has been Merton’s argument that the volume of instrumental property crime should be higher where there is a greater imbalance between the degree of commitment to monetary success goals and the degree of commitment to legitimate means of pursuing such goals. Contemporary anomie theories stimulated by Merton’s perspective, most notably Messner and Rosenfeld’s institutional anomie theory (IAT), have expanded the scope conditions by emphasizing lethal criminal violence as an outcome to which anomie theory is highly relevant, and virtually all contemporary empirical studies have focused on applying the perspective to explaining spatial variation in homicide rates. In the present paper, we argue that current explications of Merton’s theory and IAT have not adequately conveyed the relevance of the core features of the anomie perspective to lethal violence. We propose an expanded anomie model in which an unbalanced pecuniary value system – the core causal variable in Merton’s theory and IAT – translates into higher levels of homicide primarily in indirect ways by increasing levels of firearm prevalence, drug market activity, and property crime, and by enhancing the degree to which these factors stimulate lethal outcomes. Using aggregate-level data collected during the mid-to-late 1970s for a sample of relatively large social aggregates within the U.S., we find a significant effect on homicide rates of an interaction term reflecting high levels of commitment to monetary success goals and low levels of commitment to legitimate means. Virtually all of this effect is accounted for by higher levels of property crime and drug market activity that occur in areas with an unbalanced pecuniary value system. Our analysis also reveals that property crime is more apt to lead to homicide under conditions of high levels of structural disadvantage. These and other findings underscore the potential value of elaborating the anomie perspective to explicitly account for lethal violence.
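The core statistical idea, an interaction between commitment to monetary success goals and commitment to legitimate means predicting homicide rates, with property crime and drug-market activity as intervening variables, could be examined with regressions of the following form; the variable names and simulated data are hypothetical and do not reproduce the study's estimator or controls.

```python
# Sketch: homicide-rate regression with a goals-by-means interaction and mediators.
# Variable names and simulated data are placeholders, not the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 200
df = pd.DataFrame({
    "monetary_goals": rng.normal(size=n),     # commitment to monetary success goals
    "legit_means": rng.normal(size=n),        # commitment to legitimate means
    "property_crime": rng.normal(size=n),
    "drug_market": rng.normal(size=n),
    "disadvantage": rng.normal(size=n),
})
df["log_homicide"] = (0.3 * df.monetary_goals * (1 - df.legit_means)
                      + 0.4 * df.property_crime + 0.3 * df.drug_market
                      + rng.normal(scale=0.5, size=n))

# Model 1: the unbalanced-value-system interaction alone
m1 = smf.ols("log_homicide ~ monetary_goals * legit_means", data=df).fit()
# Model 2: adding the hypothesized intervening variables
m2 = smf.ols("log_homicide ~ monetary_goals * legit_means + property_crime"
             " + drug_market + disadvantage", data=df).fit()
print(m1.params["monetary_goals:legit_means"], m2.params["monetary_goals:legit_means"])
```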
Abstract:
The meteorological circumstances that led to the Blizzard of March 1888 that hit New York are analysed in Version 2 of the “Twentieth Century Reanalysis” (20CR). The potential of this data set for studying historical extreme events has not yet been fully explored. A detailed analysis of 20CR data alongside other data sources (including historical instrumental data and weather maps) for historical extremes such as the March 1888 blizzard may give insights into the limitations of 20CR. We find that 20CR reproduces the circulation pattern as well as the temperature development very well. Regarding the absolute values of variables such as snowfall or minimum and maximum surface pressure, there is an underestimation of the observed extremes, which may be due to the low spatial resolution of 20CR and the fact that only the ensemble mean is considered. Despite this drawback, the data set allows us to gain new information due to its complete spatial and temporal coverage.
Abstract:
The Genesis mission Solar Wind Concentrator was built to enhance fluences of solar wind by an average of 20x over the 2.3 years that the mission exposed substrates to the solar wind. The Concentrator targets survived the hard landing upon return to Earth and were used to determine the isotopic composition of solar-wind—and hence solar—oxygen and nitrogen. Here we report on the flight operation of the instrument and on simulations of its performance. Concentration and fractionation patterns obtained from simulations are given for He, Li, N, O, Ne, Mg, Si, S, and Ar in SiC targets, and are compared with measured concentrations and isotope ratios for the noble gases. Carbon is also modeled for a Si target. Predicted differences in instrumental fractionation between elements are discussed. Additionally, as the Concentrator was designed only for ions ≤22 AMU, implications of analyzing elements as heavy as argon are discussed. Post-flight simulations of instrumental fractionation as a function of radial position on the targets incorporate solar-wind velocity and angular distributions measured in flight, and predict fractionation patterns for various elements and isotopes of interest. A tighter angular distribution, mostly due to better spacecraft spin stability than assumed in pre-flight modeling, results in a steeper isotopic fractionation gradient between the center and the perimeter of the targets. Using the distribution of solar-wind velocities encountered during flight, which are higher than those used in pre-flight modeling, results in elemental abundance patterns slightly less peaked at the center. Mean fractionations trend with atomic mass, with differences relative to the measured isotopes of neon of +4.1±0.9 ‰/amu for Li, between -0.4 and +2.8 ‰/amu for C, +1.9±0.7 ‰/amu for N, +1.3±0.4 ‰/amu for O, -7.5±0.4 ‰/amu for Mg, -8.9±0.6 ‰/amu for Si, and -22.0±0.7 ‰/amu for S (uncertainties reflect Monte Carlo statistics). The slopes of the fractionation trends depend to first order only on the relative differential mass ratio, Δm/m. This article and a companion paper (Reisenfeld et al. 2012, this issue) provide post-flight information necessary for the analysis of the Genesis solar wind samples, and thus serve to complement the Space Science Review volume, The Genesis Mission (v. 105, 2003).
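As a brief illustration of how a per-amu fractionation slope translates into an isotope-ratio correction: only the ‰/amu values quoted above come from the abstract, while the function, the measured ratio and the simple linear correction (which ignores the normalization to neon) are assumptions.

```python
# Sketch: correcting a measured isotope ratio for Concentrator-induced fractionation,
# using a per-amu slope like those quoted in the abstract. The measured ratio is made up.

def correct_ratio(measured_ratio: float, permil_per_amu: float, delta_mass: float) -> float:
    """Remove a linear instrumental fractionation of `permil_per_amu` over `delta_mass` amu."""
    fractionation = permil_per_amu * delta_mass / 1000.0
    return measured_ratio / (1.0 + fractionation)

# Example: nitrogen, 15N/14N, with the +1.9 permil/amu instrumental fractionation
measured_15n_14n = 2.27e-3                     # hypothetical measured ratio
true_15n_14n = correct_ratio(measured_15n_14n, permil_per_amu=1.9, delta_mass=1.0)
print(f"fractionation-corrected 15N/14N = {true_15n_14n:.6e}")
```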
Abstract:
This thesis project is motivated by the potential problem of using observational data to draw inferences about a causal relationship in observational epidemiology research when controlled randomization is not applicable. The instrumental variable (IV) method is one of the statistical tools to overcome this problem. A Mendelian randomization study uses genetic variants as IVs in a genetic association study. In this thesis, the IV method, as well as standard logistic and linear regression models, is used to investigate the causal association between risk of pancreatic cancer and the circulating levels of soluble receptor for advanced glycation end-products (sRAGE). Higher levels of serum sRAGE were found to be associated with a lower risk of pancreatic cancer in a previous observational study (255 cases and 485 controls). However, such a novel association may be biased by unknown confounding factors. In a case-control study, we aimed to use the IV approach to confirm or refute this observation in a subset of study subjects for whom genotyping data were available (178 cases and 177 controls). A two-stage IV analysis using generalized method of moments-structural mean models (GMM-SMM) was conducted and the relative risk (RR) was calculated. In the first-stage analysis, we found that the single nucleotide polymorphism (SNP) rs2070600 of the receptor for advanced glycation end-products (AGER) gene meets all three general assumptions for a genetic IV in examining the causal association between sRAGE and risk of pancreatic cancer. The variant allele of SNP rs2070600 of the AGER gene was associated with lower levels of sRAGE, and it was neither associated with risk of pancreatic cancer nor with the confounding factors. It was a potentially strong IV (F statistic = 29.2). However, in the second-stage analysis, the GMM-SMM model failed to converge due to non-concavity, probably because of the small sample size. Therefore, the IV analysis could not support the causality of the association between serum sRAGE levels and risk of pancreatic cancer. Nevertheless, these analyses suggest that rs2070600 was a potentially good genetic IV for testing the causality between the risk of pancreatic cancer and sRAGE levels. A larger sample size is required to conduct a credible IV analysis.
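The two-stage instrumental-variable logic can be sketched as below; the thesis uses GMM-SMM for a binary case-control outcome, so the ordinary two-stage least squares on simulated continuous data shown here is only a simplified illustration of using a SNP genotype as the instrument.

```python
# Sketch: two-stage IV estimation with a SNP as instrument (simplified stand-in for
# the GMM-SMM estimator used in the thesis). All data below are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 355
snp = rng.binomial(2, 0.3, size=n)            # rs2070600-like genotype coded 0/1/2
confounder = rng.normal(size=n)               # unmeasured confounding
srage = 1.0 - 0.4 * snp + 0.5 * confounder + rng.normal(size=n)
risk = -0.3 * srage + 0.5 * confounder + rng.normal(size=n)  # continuous risk score

# Stage 1: regress exposure on the instrument; the F statistic gauges instrument strength
stage1 = sm.OLS(srage, sm.add_constant(snp)).fit()
print("first-stage F:", round(stage1.fvalue, 1))

# Stage 2: regress outcome on the fitted (exogenous) part of the exposure
stage2 = sm.OLS(risk, sm.add_constant(stage1.fittedvalues)).fit()
print("IV estimate of sRAGE effect:", round(stage2.params[1], 2))
```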
Abstract:
Non-failure analysis aims at inferring that predicate calls in a program will never fail. This type of information has many applications in functional/logic programming. It is essential for determining lower bounds on the computational cost of calls, useful in the context of program parallelization, instrumental in partial evaluation and other program transformations, and has also been used in query optimization. In this paper, we recast the non-failure analysis proposed by Debray et al. as an abstract interpretation, which not only allows us to investigate it from a standard and well understood theoretical framework, but also has several practical advantages. It allows us to incorporate non-failure analysis into a standard, generic abstract interpretation engine. The analysis thus benefits from the fixpoint propagation algorithm, which leads to improved information propagation. Also, the analysis takes advantage of the multi-variance of the generic engine, so that it is now able to infer separate non-failure information for different call patterns. Moreover, the implementation is simpler, and allows non-failure and covering analyses to be performed alongside other analyses, such as those for modes and types, in the same framework. Finally, besides the precision improvements and the additional simplicity, our implementation (in the Ciao/CiaoPP multiparadigm programming system) also shows better efficiency.
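To make the notion of multi-variant, per-call-pattern non-failure information concrete, here is a toy sketch of joining analysis results over a two-point lattice; it is not the CiaoPP implementation, and the predicate and call-pattern names are invented.

```python
# Toy illustration of multi-variant non-failure information: results are kept per
# (predicate, call pattern) rather than per predicate. Not the CiaoPP analysis itself.
from typing import Dict, Tuple

# Two-point domain ordered NOT_FAILS < MAY_FAIL (top). The join (lub) is simply max.
NOT_FAILS, MAY_FAIL = 0, 1
LUB = max

AnalysisTable = Dict[Tuple[str, str], int]   # (predicate, call pattern) -> value

def record(table: AnalysisTable, pred: str, pattern: str, value: int) -> None:
    """Join a new result into the table, so repeated analyses only move up the lattice."""
    key = (pred, pattern)
    table[key] = LUB(table.get(key, NOT_FAILS), value)

table: AnalysisTable = {}
record(table, "partition/4", "ground-list input", NOT_FAILS)
record(table, "partition/4", "free-variable input", MAY_FAIL)
print(table)   # separate non-failure info is kept for each call pattern
```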
Abstract:
The studies carried out so far for the determination of the quality of measurement of geodetic instruments have been aimed primarily at the measurement of angles and distances. However, in recent years it has become common to use GNSS (Global Navigation Satellite System) equipment in the field of geomatic applications without a methodology having been established for obtaining the calibration correction and its uncertainty for this equipment. The purpose of this Thesis is to establish the requirements that a network must meet to be considered a Standard Network with metrological traceability, as well as the methodology for the verification and calibration of GNSS instruments in such standard networks. To this end, a technical calibration procedure for GNSS equipment has been designed and developed in which the contributions to the measurement uncertainty are defined. The procedure, which has been applied in different networks for different equipment, has allowed the expanded uncertainty of such equipment to be determined following the recommendations of the Guide to the Expression of Uncertainty in Measurement of the Joint Committee for Guides in Metrology. In addition, the three-dimensional coordinates of the bases which constitute the networks considered in the investigation have been determined by satellite observation techniques, and simulations have been developed for different values of the experimental standard deviations of the fixed points used in the least squares adjustment of the vectors or baselines. The results have shown the importance of knowing the experimental standard deviations when calculating the uncertainties of the three-dimensional coordinates of the bases. Based on studies and observations of high technical quality carried out previously in these networks, an exhaustive analysis has been performed that has made it possible to determine the conditions that a standard network must satisfy. In addition, technical calibration procedures have been developed that allow the expanded measurement uncertainty to be calculated for geodetic instruments that provide angles and distances obtained by electromagnetic methods, since these instruments are the ones that disseminate metrological traceability to the standard networks used for the verification and calibration of GNSS equipment. In this way, it has been possible to determine local calibration corrections for high-accuracy GNSS equipment in the standard networks. In this Thesis, the uncertainty of the calibration correction has been obtained using two different methodologies: in the first, the law of propagation of uncertainty has been applied, while in the second, the propagation of distributions has been applied using the Monte Carlo method for the simulation of random variables. The analysis of the results obtained confirms the validity of both methodologies for determining the calibration uncertainty of GNSS equipment.
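A minimal sketch of the Monte Carlo evaluation of a calibration-correction uncertainty in the spirit of the propagation of distributions (JCGM 101); the measurement model, the input quantities and their standard uncertainties are invented for illustration.

```python
# Sketch: Monte Carlo propagation of distributions for a calibration correction,
# in the spirit of the GUM / JCGM 101. Model and input uncertainties are invented.
import numpy as np

rng = np.random.default_rng(5)
trials = 200_000

# Calibration correction = reference baseline length - GNSS-measured baseline length
reference = rng.normal(1000.0000, 0.0005, trials)    # m, standard-network value
measured = rng.normal(1000.0032, 0.0020, trials)     # m, GNSS-derived value
correction = reference - measured

mean = correction.mean()
u = correction.std(ddof=1)                           # standard uncertainty
low, high = np.quantile(correction, [0.025, 0.975])  # 95 % coverage interval
print(f"correction = {mean * 1000:.2f} mm, u = {u * 1000:.2f} mm")
print(f"95 % coverage interval: [{low * 1000:.2f}, {high * 1000:.2f}] mm")
```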
Abstract:
Abstract interpretation has been widely used for the analysis of object-oriented languages and, more precisely, Java source and bytecode. However, while most of the existing work deals with the problem of finding expressive abstract domains that track accurately the characteristics of a particular concrete property, the underlying fixpoint algorithms have received comparatively less attention. In fact, many existing (abstract interpretation based) fixpoint algorithms rely on relatively inefficient techniques to solve inter-procedural call graphs or are specific and tied to particular analyses. We argue that the design of an efficient fixpoint algorithm is pivotal to support the analysis of large programs. In this paper we introduce a novel algorithm for analysis of Java bytecode which includes a number of optimizations in order to reduce the number of iterations. Also, the algorithm is parametric in the sense that it is independent of the abstract domain used and it can be applied to different domains as "plug-ins". It is also incremental in the sense that, if desired, analysis data can be saved so that only a reduced amount of reanalysis is needed after a small program change, which can be instrumental for large programs. The algorithm is also multivariant and flow-sensitive. Finally, another interesting characteristic of the algorithm is that it is based on a program transformation, prior to the analysis, that results in a highly uniform representation of all the features in the language and therefore simplifies analysis. Detailed descriptions of decompilation solutions are provided and discussed with an example.
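A compact sketch of a domain-parametric worklist fixpoint of the kind discussed above; the call graph, the domain interface (bottom, join, leq) and the transfer function are placeholders rather than the paper's algorithm.

```python
# Sketch of a worklist fixpoint over a call graph, parametric in the abstract domain
# ("plug-in"): the solver only needs bottom, join, leq and a per-node transfer function.
# Graph, domain and transfer function below are placeholders, not the paper's algorithm.
from typing import Callable, Dict, Hashable, List

def fixpoint(nodes: List[Hashable],
             preds: Dict[Hashable, List[Hashable]],
             bottom,
             join: Callable,
             leq: Callable,
             transfer: Callable) -> Dict[Hashable, object]:
    state = {n: bottom for n in nodes}
    worklist = list(nodes)
    while worklist:
        n = worklist.pop()
        incoming = bottom
        for p in preds.get(n, []):
            incoming = join(incoming, state[p])
        new = transfer(n, incoming)
        if not leq(new, state[n]):                 # value grew: propagate to successors
            state[n] = join(state[n], new)
            worklist.extend(s for s in nodes if n in preds.get(s, []))
    return state

# Example plug-in domain: sets of "possibly initialized" fields (join = set union)
nodes = ["entry", "init", "use"]
preds = {"init": ["entry"], "use": ["init", "entry"]}
result = fixpoint(nodes, preds, bottom=frozenset(),
                  join=lambda a, b: a | b,
                  leq=lambda a, b: a <= b,
                  transfer=lambda n, inp: inp | ({"f"} if n == "init" else set()))
print(result)
```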
Abstract:
The consumption of melon (Cucumis melo L.) was, until several years ago, regional, seasonal and without commercial interest. Recent commercial changes and worldwide transportation have changed this situation. Melons from 3 different ripeness stages at harvest and 7 cold storage periods have been analysed by destructive and non-destructive tests. Chemical, physical, mechanical (non-destructive impact, compression, skin puncture and Magness-Taylor) and sensory tests were carried out in order to select the best test to assess quality and to determine the optimal ripeness stage at harvest. Analysis of variance and Principal Component Analysis were performed to study the data. The mechanical properties based on non-destructive impact and compression can be used to monitor cold storage evolution. They can also be used at harvest to segregate the highest ripeness stage (41 days after anthesis, DAA) from the less ripe stages (34 and 28 DAA). Only the 34 and 41 DAA stages reach a sensory evaluation above 50 on a scale from 0 to 100.
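As an illustration of the multivariate treatment mentioned (Principal Component Analysis of destructive and non-destructive measurements), a minimal PCA sketch follows; the feature set and the simulated values are stand-ins for the real impact, compression, puncture and chemical data.

```python
# Sketch: PCA of melon quality measurements to separate ripeness stages at harvest.
# Feature names and simulated values are placeholders for the real data set.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
stages = np.repeat([28, 34, 41], 20)           # days after anthesis (DAA)
# Columns: impact firmness, compression force, skin puncture, soluble solids
X = np.column_stack([
    80 - 1.2 * stages + rng.normal(0, 4, stages.size),
    60 - 0.9 * stages + rng.normal(0, 3, stages.size),
    12 - 0.1 * stages + rng.normal(0, 1, stages.size),
    6 + 0.15 * stages + rng.normal(0, 0.5, stages.size),
])

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
for stage in (28, 34, 41):
    print(stage, "DAA, mean PC1 score:", scores[stages == stage, 0].mean().round(2))
```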