965 results for nonlinear least-square fit
Abstract:
We conducted a six-week investigation of the sea ice inorganic carbon system during the winter-spring transition in the Canadian Arctic Archipelago. Samples for the determination of sea ice geochemistry were collected in conjunction with physical and biological parameters as part of the 2010 Arctic-ICE (Arctic - Ice-Covered Ecosystem in a Rapidly Changing Environment) program, a sea ice-based process study in Resolute Passage, Nunavut. The goal of Arctic-ICE was to determine the physical-biological processes controlling the timing of primary production in Arctic landfast sea ice and to better understand the influence of these processes on the drawdown and release of climatically active gases. The field study was conducted from 1 May to 21 June, 2010.
Abstract:
Detailed information about the sediment properties and microstructure can be provided through the analysis of digital ultrasonic P wave seismograms recorded automatically during full waveform core logging. The physical parameter which predominantly affects the elastic wave propagation in water-saturated sediments is the P wave attenuation coefficient. The related sedimentological parameter is the grain size distribution. A set of high-resolution ultrasonic transmission seismograms (ca. 50-500 kHz), which indicate downcore variations in the grain size by their signal shape and frequency content, is presented. Layers of coarse-grained foraminiferal ooze can be identified by highly attenuated P waves, whereas almost unattenuated waves are recorded in fine-grained areas of nannofossil ooze. Color-encoded pixel graphics of the seismograms and instantaneous frequencies present full waveform images of the lithology and attenuation. A modified spectral difference method is introduced to determine the attenuation coefficient and its power law a = kf^n. Applied to synthetic seismograms derived using a "constant Q" model, even low attenuation coefficients can be quantified. A downcore analysis gives an attenuation log which ranges from ca. 700 dB/m at 400 kHz and a power of n = 1-2 in coarse-grained sands to a few decibels per meter and n ≤ 0.5 in fine-grained clays. A least-squares fit of a second-degree polynomial describes the mutual relationship between the mean grain size and the attenuation coefficient. When it is used to predict the mean grain size, an almost perfect agreement with the values derived from sedimentological measurements is achieved.
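A power law of the form a = kf^n, as used for the attenuation coefficient above, can be fitted by ordinary least squares after a logarithmic transform. The sketch below uses synthetic values for k and n, not data from the study:

```python
import numpy as np

# Hypothetical attenuation values (dB/m) at several frequencies (kHz);
# generated from a = k * f^n with k = 0.004 and n = 1.5, purely for illustration.
f = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
a = 0.004 * f**1.5

# Taking logarithms turns the power law a = k * f^n into a linear model,
# log(a) = log(k) + n * log(f), which ordinary least squares can solve.
n, log_k = np.polyfit(np.log(f), np.log(a), 1)
k = np.exp(log_k)
```

The same `np.polyfit` call with degree 2 would produce the second-degree polynomial relating mean grain size to the attenuation coefficient.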
Abstract:
Vertical permeability and sediment consolidation measurements were taken on seven whole-round drill cores from Sites 1253 (three samples), 1254 (one sample), and 1255 (three samples) drilled during Ocean Drilling Program Leg 205 in the Middle America Trench off Costa Rica's Pacific coast. Consolidation behavior, including the slopes of the elastic rebound and virgin compression curves (Cc), was measured by constant-rate-of-strain tests. Permeabilities were determined from flow-through experiments during stepped-load tests and from coefficient of consolidation (Cv) values obtained continuously during loading. Consolidation curves and the Casagrande method were used to determine the maximum preconsolidation stress. Elastic slopes of the consolidation curves ranged from 0.097 to 0.158 in pelagic sediments and from 0.0075 to 0.018 in hemipelagic sediments. Cc values ranged from 1.225 to 1.427 for pelagic carbonates and from 0.504 to 0.826 for hemipelagic clay-rich sediments. In samples consolidated to an axial stress of ~20 MPa, permeabilities determined by flow-through experiments ranged from a low of 7.66 × 10^-20 m^2 in hemipelagic sediments to a maximum of 1.03 × 10^-16 m^2 in pelagic sediments. Permeabilities calculated from Cv values in the hemipelagic sediments ranged from 4.81 × 10^-16 to 7.66 × 10^-20 m^2 for porosities of 49.9%-26.1%.
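One standard way to obtain permeability from consolidation-test parameters, in the spirit of the Cv-based estimates above, is the Terzaghi relation k = Cv · mv · γw, converted to intrinsic permeability via the fluid properties. The parameter values below are illustrative, not measurements from the Leg 205 cores:

```python
# Sketch of permeability from consolidation parameters (Terzaghi theory);
# all numerical values are hypothetical, for illustration only.
c_v = 1.0e-8          # coefficient of consolidation, m^2/s
m_v = 5.0e-7          # coefficient of volume compressibility, 1/Pa
gamma_w = 9810.0      # unit weight of water, N/m^3

k_hydraulic = c_v * m_v * gamma_w            # hydraulic conductivity, m/s

# Intrinsic permeability (m^2): k_int = k_hydraulic * mu / (rho * g)
mu, rho, g = 1.0e-3, 1000.0, 9.81            # water viscosity (Pa s) and density (kg/m^3)
k_intrinsic = k_hydraulic * mu / (rho * g)
```

With these illustrative inputs the intrinsic permeability comes out near 5 × 10^-18 m^2, within the range reported in the abstract.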
Abstract:
Transition probabilities and oscillator strengths of 176 spectral lines of astrophysical interest arising from the 5d^10 ns (n = 7, 8), 5d^10 np (n = 6, 7), 5d^10 nd (n = 6, 7), 5d^10 5f, 5d^10 5g, 5d^10 nh (n = 6, 7, 8), 5d^9 6s^2, and 5d^9 6s6p configurations, and radiative lifetimes for 43 levels of Pb IV, have been calculated. These values were obtained in intermediate coupling (IC) using relativistic Hartree–Fock calculations that include core-polarization effects. For the IC calculations, we use the standard method of least-squares fitting to experimental energy levels by means of the Cowan computer code. The inclusion in these calculations of the 5d^10 7p and 5d^10 5f configurations has facilitated a complete assignment of the energy levels of Pb IV. The transition probabilities, oscillator strengths, and radiative lifetimes obtained are generally in good agreement with the experimental data.
Abstract:
This paper aims to analyze the different adjustment methods commonly used to characterize circular features in indirect metrology: the least-squares circle, the minimum zone circle, the maximum inscribed circle and the minimum circumscribed circle. The analysis was performed on images obtained by digital optical machines. The self-developed calculation algorithms were implemented in Matlab® and take as study variables the amplitude of the angular sector of the circular feature, its nominal radius and the magnification used by the optical machine. Under different conditions, the radius and circularity error of different circular standards were determined. Comparing the results obtained by the different adjustment methods with the certified values for the standards has allowed us to determine the accuracy of each method and its scope of application.
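One common formulation of the least-squares circle mentioned above is the algebraic (Kasa) fit, which reduces the problem to a linear system. This is a minimal sketch on synthetic points sampled from a partial angular sector, not the paper's own Matlab® implementation:

```python
import numpy as np

def least_squares_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit: solves a linear system
    for the center (cx, cy) and radius r minimising the algebraic residual."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

# Points on a 60-degree angular sector of a circle of radius 5 centred at
# (1, 2) -- illustrative data, not a measured calibration standard.
t = np.linspace(0.0, np.pi / 3, 50)
x = 1.0 + 5.0 * np.cos(t)
y = 2.0 + 5.0 * np.sin(t)
cx, cy, r = least_squares_circle(x, y)
```

The minimum zone, maximum inscribed and minimum circumscribed circles are min-max problems rather than least-squares ones, so they need an optimization routine instead of a single linear solve.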
Abstract:
The objective of this study was to assess the potential of visible and near infrared spectroscopy (VIS+NIRS) combined with multivariate analysis for identifying the geographical origin of cork. The study was carried out on cork planks and natural cork stoppers from the most representative cork-producing areas in the world. Two training sets of international and national cork planks were studied. The first set comprised a total of 479 samples from Morocco, Portugal, and Spain, while the second set comprised a total of 179 samples from the Spanish regions of Andalusia, Catalonia, and Extremadura. A training set of 90 cork stoppers from Andalusia and Catalonia was also studied. Original spectroscopic data were obtained for the transverse sections of the cork planks and for the body and top of the cork stoppers by means of a 6500 Foss-NIRSystems SY II spectrophotometer using a fiber optic probe. Remote reflectance was employed in the wavelength range of 400 to 2500 nm. After analyzing the spectroscopic data, discriminant models were obtained by means of partial least squares (PLS) with 70% of the samples. The best models were then validated using the remaining 30% of the samples. At least 98% of the international cork plank samples and 95% of the national samples were correctly classified in the calibration and validation stages. The best model for the cork stoppers was obtained for the top of the stoppers, with at least 90% of the samples being correctly classified. The results demonstrate the potential of VIS+NIRS technology as a rapid and accurate method for predicting the geographical origin of cork planks and stoppers.
Abstract:
For an adequate assessment of the safety margins of nuclear facilities, e.g. nuclear power plants, it is necessary to consider all the uncertainties that affect their design, performance and accident response. Nuclear data are one such source of uncertainty, entering into neutronics, fuel depletion and activation calculations. These calculations predict the critical response functions during operation and in the event of an accident, such as the decay heat and the neutron multiplication factor.
Thus, the impact of nuclear data uncertainties on these response functions needs to be addressed for a proper evaluation of the safety margins. Methodologies for performing uncertainty propagation calculations need to be implemented in order to analyse the impact of nuclear data uncertainties. It is also necessary to understand the current status of nuclear data and their uncertainties, in order to be able to handle this type of data. Great efforts are underway to enhance the European capability to analyse, process and produce covariance data, especially for isotopes which are of importance for advanced reactors. At the same time, new methodologies and codes are being developed and implemented for using and evaluating the impact of uncertainty data. These were the objectives of the European ANDES (Accurate Nuclear Data for nuclear Energy Sustainability) project, which provided a framework for the development of this PhD Thesis. Accordingly, first a review of the state of the art of nuclear data and their uncertainties is conducted, focusing on the three kinds of data: decay, fission yields and cross sections. A review of the current methodologies for propagating nuclear data uncertainties is also performed. The Nuclear Engineering Department of UPM has proposed a methodology for propagating uncertainties in depletion calculations, the Hybrid Method, which has been taken as the starting point of this thesis. This methodology has been implemented, developed and extended, and its advantages, drawbacks and limitations have been analysed. It is used in conjunction with the ACAB depletion code, and is based on Monte Carlo sampling of variables with uncertainties. Different approaches are presented depending on the cross-section energy structure: one-group, one-group with correlated sampling and multi-group. Differences and applicability criteria are presented.
Sequences have been developed for using different nuclear data libraries in different storage formats: ENDF-6 (for evaluated libraries) and COVERX (for multi-group libraries of SCALE), as well as the EAF format (for activation libraries). A revision of the state of the art of fission yield data shows inconsistencies in uncertainty data, specifically with regard to complete covariance matrices. Furthermore, the international community has expressed a renewed interest in the issue through the Working Party on International Nuclear Data Evaluation Co-operation (WPEC) and its Subgroup 37 (SG37), which is dedicated to assessing the need for complete nuclear data. This motivated a review of the state of the art of methodologies for generating covariance data for fission yields. A Bayesian/generalised least-squares (GLS) updating sequence has been selected and implemented to address this need. Once the Hybrid Method had been implemented, developed and extended, along with the fission yield covariance generation capability, different applications were studied. The Fission Pulse Decay Heat problem is tackled first because of its importance during events after shutdown and because it is a clean exercise for showing the impact and importance of decay and fission yield data uncertainties in conjunction with the new covariance data. Two fuel cycles of advanced reactors are studied: the European Facility for Industrial Transmutation (EFIT) and the European Sodium Fast Reactor (ESFR), and response function uncertainties such as isotopic composition, decay heat and radiotoxicity are addressed. Different nuclear data libraries are used and compared. These applications serve as frameworks for comparing the different approaches of the Hybrid Method, and also for comparing with other methodologies: Total Monte Carlo (TMC), developed at NRG by A.J. Koning and D. Rochman, and NUDUNA, developed at AREVA GmbH by O. Buss and A. Hoefer.
These comparisons reveal the advantages, limitations and the range of application of the Hybrid Method.
Abstract:
In the field of dimensional metrology, the use of optical measuring machines requires the handling of a large number of measurement points, or scanning points, taken from the image of the measurand. The presence of correlation between these measurement points has a significant influence on the uncertainty of the result. The aim of this work is the development of a procedure for estimating the uncertainty of measurement of a geometrically elliptical shape, taking into account the correlation between the scanning points. These points are obtained from an image produced using a commercial flatbed scanner. The characteristic parameters of the ellipse (coordinates of the center, semi-axes and the angle of the semi-major axis with regard to the horizontal) are determined using a least-squares fit and orthogonal distance regression. The uncertainty is estimated using the information from the auto-correlation function of the residuals and is propagated through the fitting algorithm according to the rules described in Evaluation of Measurement Data—Supplement 2 to the ‘Guide to the Expression of Uncertainty in Measurement’—Extension to any number of output quantities. By introducing the concept of a cut-off length, the correlation can be taken into account in the uncertainty estimation in a very simple way while avoiding underestimation.
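The empirical auto-correlation function of the fit residuals, central to the method above, can be sketched as follows. The residual sequence here is synthetic (a moving average of white noise, so it has short-range correlation by construction), not scanner data:

```python
import numpy as np

def autocorrelation(res):
    """Empirical auto-correlation function of a residual sequence,
    normalised so that the value at lag 0 equals 1."""
    res = res - res.mean()
    acf = np.correlate(res, res, mode="full")[len(res) - 1:]
    return acf / acf[0]

# Illustrative correlated residuals: a 5-point moving average of white noise.
rng = np.random.default_rng(0)
noise = rng.standard_normal(500)
res = np.convolve(noise, np.ones(5) / 5, mode="valid")
acf = autocorrelation(res)
```

The lag beyond which the auto-correlation falls to a negligible level plays the role of the cut-off length in the uncertainty estimation.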
Abstract:
The first rigorous processing performed with the Bernese scientific software, following the strictest internationally recommended computation standards, yielded a point field of high accuracy based on the integration and standardization of the data of a GPS network located in Costa Rica. The processing covered a total of 119 weeks of daily data, about 2.3 years, from January 2009 to April 2011, for a total of 30 GPS stations, of which 22 are located in the national territory of Costa Rica and 8 are international stations belonging to the network of the Geocentric System for the Americas (SIRGAS). The so-called semi-free solutions generated, week by week, a GPS network with a high internal accuracy defined by the vectors between stations and the final coordinates of the satellite constellation. The weekly evaluation given by the repeatability of the solutions yielded average errors of 1.7 mm, 1.4 mm and 5.1 mm in the [n e u] components, confirming the high consistency of these solutions. Although the semi-free solutions have a high internal accuracy, they cannot be used for kinematic analysis, since they lack a reference frame. In Latin America, the densification of the International Terrestrial Reference Frame (ITRF) is represented by the SIRGAS network of continuously operating GNSS stations, known as SIRGAS-CON. By means of the weekly final coordinates of the 8 stations considered as ties, each of the 119 solutions was referred to the SIRGAS frame. Introducing the SIRGAS reference frame into the semi-free solutions produces deformations in these solutions, which are the product of the kinematics of each of the plates on which the tie stations are located. 
After the weekly tie to the SIRGAS coordinates, the velocity vector of each station was estimated, including the tie stations, whose velocities are known with high accuracy. To determine the velocities of the Costa Rican stations, a routine based on a least-squares fit was programmed in a MatLab environment. Compared with the official values, the values obtained within this project showed average differences of the order of 0.06 cm/yr, -0.08 cm/yr and -0.10 cm/yr for the [X Y Z] coordinates, respectively. In this way it was possible to determine the geocentric coordinates [X Y Z]T and their temporal variations [vX vY vZ]T for the set of 22 GPS stations of Costa Rica, within the IGS05 datum at reference epoch 2010.5. Although a high accuracy was achieved in the geocentric coordinate vectors of the 22 stations, for some stations the computed velocities were not representative because of the relatively short span (less than one year) of data files. On this basis, the eight stations located in the south of the country were excluded, which meant estimating the local velocity field with only twenty national stations plus three stations in Panama and one in Nicaragua. 
The algorithm used was Least Squares Collocation, which allows data to be estimated or interpolated from effectively known data, and it was likewise programmed as a MatLab routine. The resulting field was estimated with a resolution of 30' × 30' and is highly uniform, with an average resulting velocity of 2.58 cm/yr in a direction of 40.8° towards the northeast. This field was validated against the data of the VEMOS2009 model recommended by SIRGAS. The average velocity differences for the stations used as input for the field computation were of the order of +0.63 cm/yr and +0.22 cm/yr for the velocity values in latitude and longitude, which indicates a good determination of the velocity values and of the empirical covariance function needed to apply the collocation method. In addition, the grid used as the basis for the interpolation showed differences of about -0.62 cm/yr and -0.12 cm/yr in latitude and longitude. The results of this work were also used as input for an approximation to the definition of the boundary of the so-called Panama Block within the national territory of Costa Rica. The computation of the Euler pole components, by means of a routine programmed in a MatLab environment and applied to different combinations of points, contributed little to the physical definition of this boundary; the strategy merely confirmed the difference in direction of all the velocity vectors and did not reveal in any greater detail the location of this zone within the national territory of Costa Rica.
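The least-squares velocity estimation used above, i.e. fitting a straight line to a station's coordinate time series, can be sketched as follows. The weekly series is synthetic, not actual SIRGAS data:

```python
import numpy as np

# Synthetic weekly time series for one station's north component:
# 119 weekly epochs over ~2.3 years, moving at 2.5 cm/yr (illustrative values).
t_years = np.arange(119) * 7.0 / 365.25
north_m = 0.10 + 0.025 * t_years   # metres: 10 cm offset, 2.5 cm/yr rate

# The station velocity is the slope of a least-squares straight-line fit.
rate, intercept = np.polyfit(t_years, north_m, 1)
velocity_cm_per_yr = rate * 100.0
```

In practice the fit would also account for the correlated weekly coordinate errors; this sketch shows only the basic slope estimate.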
Abstract:
The core of this thesis is a theoretical model of the interaction of light with a particular type of optical biosensor. This biosensor consists of two regions: the lower region may contain horizontally stacked layers of materials with different thicknesses and optical properties; in the upper region, on which the light beam is directly incident, there may be structures that make the optical properties change both in the horizontal plane and in the vertical direction. When illuminated, these biosensors respond optically in different ways depending on the extent to which their external surface is coated with different types of biological material. In this thesis an approximate analytical model is defined that makes it possible to simulate the optical response of biosensors with structures in their outermost region. After verifying the practical validity of the model by comparison with experimental measurements, it is used in the design of biosensors of optimal performance and in the definition of new optical interrogation techniques. In particular, the IROP (Increased Relative Optical Power) transduction system, based on the effect that the presence of biological material has on the total power reflected by the biosensor cell in certain spectral intervals, is one of the systems that has been patented and is being developed by the technology-based company BIOD [www.biod.es/], with several diagnostic devices based on this idea already available. Devices based on this transduction system have proven their efficiency in the detection of proteins and of infectious agents such as rotavirus and dengue virus. 
Finally, the developed theoretical model is used to characterize the optical properties of some of the materials from which biosensors are fabricated, as well as the optical properties of the biological material layers formed at different stages of an immunoassay. The optical parameters of the layers above are obtained by the general method of least squares fit to the experimental curves obtained in immunoassays.
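The least-squares extraction of layer optical parameters described in this abstract can be sketched in a few lines. The following is a minimal illustration, not the thesis model (which handles laterally structured multilayer biosensors): it assumes a single homogeneous transparent layer between air and a glass substrate at normal incidence, generates a synthetic reflectance spectrum, and recovers the layer's refractive index and thickness with `scipy.optimize.curve_fit`. All numerical values (indices, thickness, wavelength range) are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def film_reflectance(wl_nm, n1, d_nm, n0=1.0, n2=1.52):
    """Normal-incidence reflectance of a single transparent layer
    (index n1, thickness d_nm) between ambient n0 and substrate n2."""
    r01 = (n0 - n1) / (n0 + n1)                  # Fresnel coefficient, ambient/layer
    r12 = (n1 - n2) / (n1 + n2)                  # Fresnel coefficient, layer/substrate
    delta = 2.0 * np.pi * n1 * d_nm / wl_nm      # phase thickness of the layer
    num = r01 + r12 * np.exp(-2j * delta)
    den = 1.0 + r01 * r12 * np.exp(-2j * delta)
    return np.abs(num / den) ** 2

# Synthetic "measured" spectrum for a layer with n1 = 1.45, d = 120 nm
wl = np.linspace(400.0, 1000.0, 301)
measured = film_reflectance(wl, 1.45, 120.0)

# Least-squares fit of (n1, d) starting from a rough initial guess
popt, _ = curve_fit(film_reflectance, wl, measured, p0=[1.4, 100.0])
n_fit, d_fit = popt
```

With noisy experimental curves the same call applies; only the initial guess and, if needed, parameter bounds have to be chosen sensibly, since thin-film interference models can have multiple local minima in thickness.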
Resumo:
The relationship between the chemical composition and the multispectral reflectance values of chromite in the VNIR (Visible and Near-Infra-Red) range is tested and mathematically analysed. Statistical tools such as Pearson's correlation coefficients, linear stepwise regression analysis and least-squares adjustments are applied to two populations of data obtained from 14 selected samples of chromite: multielemental microprobe analyses and multispectral reflectance values (400-1000 nm). The results show that the two data sets correlate, and suggest that VNIR reflectance spectra can be used as a tool to determine the chemical composition of chromites.
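The correlation-plus-regression workflow named in this abstract can be illustrated compactly. The sketch below uses invented toy data (14 samples, 3 reflectance bands, and a Cr2O3 wt% response chosen as an exact linear combination): it computes Pearson's correlation of each band with the composition and then solves the least-squares regression for the coefficients.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Toy data: reflectance at 3 bands (predictors) for 14 hypothetical
# chromite samples, and a Cr2O3 wt% response built as an exact
# linear combination of the bands plus an intercept
reflectance = rng.uniform(0.05, 0.40, size=(14, 3))
true_coeffs = np.array([80.0, -30.0, 10.0])
cr2o3 = reflectance @ true_coeffs + 20.0

# Pearson correlation of each band with the composition
r_values = [pearsonr(reflectance[:, j], cr2o3)[0] for j in range(3)]

# Least-squares regression: coefficients for each band plus intercept
X = np.column_stack([reflectance, np.ones(len(cr2o3))])
coeffs, *_ = np.linalg.lstsq(X, cr2o3, rcond=None)
```

In a real calibration the bands would be selected by the stepwise procedure and the fit quality judged on held-out samples rather than on the training set.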
Resumo:
Optimization of Ti-Si-X systems requires that the binary systems be kept up to date. The Ti-Si system has been investigated experimentally since the 1950s, but few studies have used the experimental data to calculate the Ti-Si phase diagram through thermodynamic modelling. The most recent optimization of the Ti-Si system was carried out in 1998; it described the Ti5Si3 phase as a non-stoichiometric intermetallic with three sublattices and included the stoichiometric intermetallic phase Ti3Si. Given the recent dispute over the precipitation kinetics and the stability of the Ti3Si and Ti5Si3 phases in Ti-Si and Ti-Si-X systems, the titanium-rich corner of the Ti-Si system (stable and metastable) was optimized in the present work. The phase-stability limits, the least-squares error values of the optimization procedure and the relative standard deviations of the calculated variables are discussed, in order to motivate further experimental work on the stable and/or metastable eutectoid reactions β → α + Ti3Si and β → α + Ti5Si3, and to progressively improve the thermodynamic optimizations of the Ti-Si phase diagram.
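The least-squares error values mentioned in this abstract come from fitting thermodynamic model parameters to experimental data. A full CALPHAD optimization couples Gibbs-energy models for all phases; the sketch below is only a schematic stand-in, fitting the two interaction parameters of a Redlich-Kister excess term to synthetic "experimental" excess-enthalpy points with `scipy.optimize.least_squares` and reporting the residual sum of squares. The parameter values are invented.

```python
import numpy as np
from scipy.optimize import least_squares

def h_excess(x, L0, L1):
    """Two-parameter Redlich-Kister excess enthalpy (J/mol) of a
    binary solution at mole fraction x."""
    return x * (1.0 - x) * (L0 + L1 * (1.0 - 2.0 * x))

# Synthetic "experimental" data generated with L0 = -25000, L1 = 5000
x = np.linspace(0.05, 0.95, 19)
data = h_excess(x, -25000.0, 5000.0)

# Least-squares optimization of (L0, L1) from a rough starting point
res = least_squares(lambda p: h_excess(x, *p) - data, x0=[-20000.0, 0.0])
L0_fit, L1_fit = res.x
sse = np.sum(res.fun ** 2)   # residual sum of squares of the fit
```

In a real optimization the residual vector would mix heterogeneous data (invariant temperatures, phase boundaries, thermochemical measurements), each with its own weight.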
Resumo:
In the present paper, a methodology is proposed for obtaining empirical equations that describe the sound absorption characteristics of an absorbing material made from natural fibers, specifically coconut fiber. The method, previously applied to other materials, requires measurements of the air-flow resistivity and of the acoustic impedance of samples of the material under study. The equations that govern the acoustic behavior of the material are then derived by means of a least-squares fit of the acoustic impedance and of the propagation constant. These results are useful because the empirically obtained analytical equations can easily be incorporated into prediction and simulation models of acoustic systems for noise control that include the studied material.
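Empirical models of this kind are typically power laws in the normalized frequency X = rho0*f/sigma (the Delany-Bazley form). The sketch below assumes that form for the real part of the normalized characteristic impedance and recovers the two power-law coefficients by least squares; the synthetic data are generated with the classic Delany-Bazley values (0.0571, 0.754), not with the paper's coconut-fiber coefficients, which are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def re_impedance(X, a, b):
    """Power-law empirical form for the real part of the normalized
    characteristic impedance: Re(Zc)/(rho0*c0) = 1 + a * X**(-b),
    with X = rho0 * f / sigma (Delany-Bazley style)."""
    return 1.0 + a * X ** (-b)

# Synthetic "measured" values over a typical validity range of X
X = np.linspace(0.02, 1.0, 50)
measured = re_impedance(X, 0.0571, 0.754)

# Least-squares fit of the power-law coefficients
popt, _ = curve_fit(re_impedance, X, measured, p0=[0.05, 0.7])
a_fit, b_fit = popt
```

The imaginary part of the impedance and the two components of the propagation constant are fitted with the same power-law form, each yielding its own coefficient pair.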
Resumo:
Marine organic matter (OM) sinks from surface waters to the seafloor via the biological pump. Benthic communities, which use this sedimented OM as an energy and carbon source, produce dissolved organic matter (DOM) in the process of remineralization, enriching the sediment porewater with fresh DOM compounds. We hypothesized that in the oligotrophic deep Arctic basin the molecular signal of freshly deposited primary-produced OM is restricted to the surface sediment pore waters, which should differ in DOM composition from bottom water and deeper sediment pore water. This study focused on: 1) the molecular composition of the DOM in sediment pore waters of the deep Eurasian Arctic basins, 2) whether the signal of marine vs. terrigenous DOM is represented by different compounds preserved in the sediment pore waters, and 3) whether there is any relation between Arctic Ocean ice cover and DOM composition. Molecular data, obtained via a 15 Tesla Fourier-transform ion cyclotron resonance mass spectrometer, were correlated with environmental parameters by partial least-squares analysis. The fresher marine detrital OM signal from surface waters was limited to pore waters from < 5 cm sediment depth. The productive ice-margin stations showed higher abundances of peptide, unsaturated aliphatic and saturated fatty acid formulae, indicative of fresh OM/pigment deposition, compared with the northernmost stations, which had stronger aromatic signals. This study contributes to the understanding of the coupling between Arctic Ocean productivity and its depositional regime, and of how this coupling will be altered in response to sea ice retreat and increasing river runoff.