906 results for Classical measurement error model


Relevance:

40.00%

Publisher:

Abstract:

The construction of a reliable, practically useful prediction rule for a future response depends heavily on the "adequacy" of the fitted regression model. In this article, we consider the absolute prediction error, the expected value of the absolute difference between the future and predicted responses, as the model evaluation criterion. This prediction error is easier to interpret than the average squared error and is equivalent to the misclassification error for binary outcomes. We show that the distributions of the apparent error and its cross-validation counterparts are approximately normal even under a misspecified fitted model. When the prediction rule is "unsmooth", the variance of the above normal distribution can be estimated well via a perturbation-resampling method. We also show how to approximate the distribution of the difference of the estimated prediction errors from two competing models. With two real examples, we demonstrate that the resulting interval estimates for prediction errors provide much more information about model adequacy than the point estimates alone.
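As a rough illustration of these quantities, the sketch below contrasts the apparent absolute prediction error with its cross-validated counterpart for an ordinary linear model and attaches a normal-approximation interval to the latter. The data are simulated, and the bootstrap used here is only a simplified stand-in for the perturbation-resampling variance estimator described in the article.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -0.5, 0.2]) + rng.normal(scale=1.0, size=200)

# Apparent absolute prediction error: evaluated on the same data used for fitting.
model = LinearRegression().fit(X, y)
apparent_ape = np.mean(np.abs(y - model.predict(X)))

# Cross-validated counterpart: absolute error on held-out folds.
cv_errors = np.empty(len(y))
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    fold_model = LinearRegression().fit(X[train], y[train])
    cv_errors[test] = np.abs(y[test] - fold_model.predict(X[test]))
cv_ape = cv_errors.mean()

# Normal-approximation interval for the CV error; this bootstrap is a
# simplified stand-in for the perturbation-resampling variance estimator.
boot = [rng.choice(cv_errors, size=len(cv_errors), replace=True).mean() for _ in range(500)]
se = np.std(boot)
print(f"apparent={apparent_ape:.3f}  cv={cv_ape:.3f}  "
      f"95% CI=({cv_ape - 1.96 * se:.3f}, {cv_ape + 1.96 * se:.3f})")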

Relevance:

40.00%

Publisher:

Abstract:

Radon plays an important role in human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimating mean radon exposure in the Swiss population: model-based predictions at the individual level and measurement-based predictions based on measurements aggregated at the municipality level. A nationwide model was used to predict radon levels in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted by the population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m³. Measurement-based predictions yielded an average exposure of 78 Bq/m³. This study demonstrates that the model- and measurement-based predictions provide similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing the exposure distribution in a population. The model-based approach allows radon levels to be predicted at specific sites, which is needed in an epidemiological study, and its results do not depend on how the measurement sites were selected.
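A minimal sketch of the measurement-based aggregation step, assuming a hypothetical municipality table with mean measured radon (already corrected for the floor distribution) and population counts; the numbers are placeholders, not the Swiss data.

import pandas as pd

# Hypothetical municipality-level table: mean measured radon and population.
df = pd.DataFrame({
    "municipality": ["A", "B", "C"],
    "mean_radon_bq_m3": [65.0, 110.0, 82.0],
    "population": [12000, 3500, 48000],
})

# Population-weighted national mean exposure (Bq/m3).
weighted_mean = (df["mean_radon_bq_m3"] * df["population"]).sum() / df["population"].sum()
print(f"population-weighted mean radon exposure: {weighted_mean:.1f} Bq/m3")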

Relevance:

40.00%

Publisher:

Abstract:

In the context of expensive numerical experiments, a promising solution for alleviating the computational cost consists of using partially converged simulations instead of exact solutions. The gain in computational time comes at the price of precision in the response. This work addresses the issue of fitting a Gaussian process model to partially converged simulation data for further use in prediction. The main challenge is the adequate approximation of the error due to partial convergence, which is correlated in both the design-variable and time directions. Here, we propose fitting a Gaussian process in the joint space of design parameters and computational time. The model is constructed by building a nonstationary covariance kernel that accurately reflects the actual structure of the error. Practical solutions are proposed for the parameter estimation issues associated with the proposed model. The method is applied to a computational fluid dynamics test case and shows significant improvement in prediction compared with a classical kriging model.
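The toy sketch below illustrates the general idea of a Gaussian process over the joint (design parameter, computational time) space with a covariance that is nonstationary in time. The specific kernel form, with an error amplitude decaying as the simulation converges, is an assumption for illustration and not the kernel constructed in the paper.

import numpy as np

def kernel(X1, X2, ls_x=0.5, ls_t=2.0, s0=1.0, decay=0.3):
    """Covariance over joint (design x, computational time t) inputs.
    The error amplitude sigma(t) = s0 * exp(-decay * t) shrinks as the
    simulation converges, making the kernel nonstationary in t."""
    x1, t1 = X1[:, :1], X1[:, 1:]
    x2, t2 = X2[:, :1], X2[:, 1:]
    k_x = np.exp(-0.5 * (x1 - x2.T) ** 2 / ls_x ** 2)   # design correlation
    k_t = np.exp(-0.5 * (t1 - t2.T) ** 2 / ls_t ** 2)   # time correlation
    sig = (s0 * np.exp(-decay * t1)) @ (s0 * np.exp(-decay * t2)).T
    # converged signal plus a decaying convergence-error component
    return k_x * (1.0 + sig * k_t)

# Training data: partially converged runs at several (x, t) pairs.
rng = np.random.default_rng(1)
X = np.column_stack([rng.uniform(0, 1, 20), rng.uniform(1, 10, 20)])
y = np.sin(4 * X[:, 0]) + np.exp(-0.3 * X[:, 1]) * rng.normal(size=20)

# Standard GP prediction evaluated at large t, i.e., for the converged response.
K = kernel(X, X) + 1e-8 * np.eye(len(X))
Xs = np.column_stack([np.linspace(0, 1, 50), np.full(50, 50.0)])
mean = kernel(Xs, X) @ np.linalg.solve(K, y)
print(mean[:5])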

Relevance:

40.00%

Publisher:

Abstract:

Classical swine fever (CSF) outbreaks can cause enormous losses in naïve pig populations. How best to minimize the economic damage and the number of culled animals caused by CSF is therefore an important research area. The baseline CSF control strategy in the European Union and Switzerland consists of culling all animals in infected herds, movement restrictions for animals, material and people within a given distance of the infected herd, and epidemiological tracing of transmission contacts. Additional disease control measures such as pre-emptive culling or vaccination have been recommended based on the results of several simulation models; however, these models were parameterized for areas with high animal densities. The objective of this study was to explore whether pre-emptive culling and emergency vaccination should also be recommended in low- to moderate-density areas such as Switzerland. Additionally, we studied the influence of initial outbreak conditions on outbreak severity to improve the efficiency of disease prevention and surveillance. A spatial, stochastic, individual-animal-based simulation model using all registered Swiss pig premises in 2009 (n=9770) was implemented to quantify these relationships. The model simulates within-herd and between-herd transmission (direct and indirect contacts and local area spread). By varying four parameters, (a) control measures, (b) index herd type (breeding, fattening, weaning or mixed herd), (c) detection delay for secondary cases during an outbreak and (d) contact tracing probability, 112 distinct scenarios were simulated. To assess the impact of the scenarios on outbreak severity, daily transmission rates were compared between scenarios. Compared with the baseline strategy (stamping out and movement restrictions), vaccination and pre-emptive culling reduced neither outbreak size nor outbreak duration. Outbreaks starting in a herd with weaning piglets or fattening pigs caused higher losses in terms of the number of culled premises and lasted longer than those starting in the other two index herd types. Similarly, larger transmission rates were estimated for outbreaks starting in these index herd types. A longer detection delay resulted in more culled premises and a longer outbreak duration, while better transmission tracing increased the number of short outbreaks. Based on the simulation results, the baseline control strategy appears sufficient to control CSF in areas of low to medium animal density. Early detection of outbreaks is crucial, and risk-based surveillance should focus on weaning piglet and fattening pig premises.
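For intuition about how detection delay, contact tracing, and stamping out interact, here is a deliberately simplified stochastic between-herd sketch; the structure (herd count, transmission rate, tracing probability) is hypothetical and far coarser than the individual-animal-based Swiss model described above.

import numpy as np

rng = np.random.default_rng(42)

def simulate_outbreak(n_herds=500, beta=0.002, detection_delay=7,
                      tracing_prob=0.5, max_days=365):
    """Toy stochastic between-herd model: susceptible -> infected -> culled.
    Each infected herd infects susceptible herds at rate beta per day until
    it is detected (after detection_delay days) and stamped out; traced
    contacts of a detected herd are culled immediately with tracing_prob."""
    status = np.zeros(n_herds, dtype=int)      # 0 susceptible, 1 infected, 2 culled
    inf_day = np.full(n_herds, -1)
    status[0], inf_day[0] = 1, 0               # index herd
    contacts = {0: []}
    for day in range(1, max_days):
        infected = np.where(status == 1)[0]
        if infected.size == 0:
            break
        for src in infected:
            if status[src] != 1:
                continue
            susceptible = np.where(status == 0)[0]
            if susceptible.size:
                n_new = min(rng.poisson(beta * susceptible.size), susceptible.size)
                for tgt in rng.choice(susceptible, size=n_new, replace=False):
                    status[tgt], inf_day[tgt] = 1, day
                    contacts.setdefault(src, []).append(tgt)
            # detection and stamping out, plus probabilistic contact tracing
            if day - inf_day[src] >= detection_delay:
                status[src] = 2
                for tgt in contacts.get(src, []):
                    if status[tgt] == 1 and rng.random() < tracing_prob:
                        status[tgt] = 2
    return (status == 2).sum(), day

culled, duration = simulate_outbreak()
print(f"culled herds: {culled}, outbreak duration: {duration} days")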

Relevance:

40.00%

Publisher:

Abstract:

Discrepancies in finite-element model predictions of bone strength may be attributed to the simplified modeling of bone as an isotropic structure, a consequence of the resolution limitations of clinical-level Computed Tomography (CT) data. The aim of this study is to calculate the preferential orientations of bone (the principal directions) and the extent to which bone is deposited more in one direction than another (the degree of anisotropy). Using 100 femoral trabecular samples, the principal directions and degree of anisotropy were calculated with a Gradient Structure Tensor (GST) and a Sobel Structure Tensor (SST) using clinical-level CT. The results were compared against those calculated with the gold-standard Mean-Intercept-Length (MIL) fabric tensor using micro-CT. There was no significant difference between the GST and SST in the calculation of the main principal direction (median error = 28°), and the error was inversely correlated with the degree of transverse isotropy (r = −0.34, p < 0.01). The degree of anisotropy measured using the structure tensors was weakly correlated with the MIL-based measurements (r = 0.2, p < 0.001). Combining the principal directions with the degree of anisotropy resulted in a significant increase in the correlation of the tensor distributions (r = 0.79, p < 0.001). Both structure tensors were robust against simulated noise, kernel size, and bone volume fraction. We recommend the use of the GST because of its computational efficiency and ease of implementation. This methodology shows promise for predicting the structural anisotropy of bone in areas with a high degree of anisotropy, and it may improve the in vivo characterization of bone.
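A compact sketch of a gradient structure tensor computed from a 3D image, returning a main principal direction and a simple largest-to-smallest eigenvalue ratio as the degree of anisotropy; the smoothing scale and the synthetic test volume are illustrative assumptions, not the study's processing pipeline.

import numpy as np
from scipy import ndimage

def gradient_structure_tensor(volume, sigma=1.5):
    """Averaged gradient structure tensor of a 3D image: returns a main
    principal direction and a simple degree-of-anisotropy measure."""
    grads = np.gradient(volume.astype(float))                 # gx, gy, gz
    T = np.zeros(volume.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            # local outer products of the gradient, smoothed over a neighbourhood
            T[..., i, j] = ndimage.gaussian_filter(grads[i] * grads[j], sigma)
    # average the tensor over the whole sample (could also be done per region)
    T_mean = T.reshape(-1, 3, 3).mean(axis=0)
    eigval, eigvec = np.linalg.eigh(T_mean)                   # ascending eigenvalues
    # gradients are strongest across the trabeculae, so the main structural
    # direction is taken as the eigenvector of the smallest eigenvalue
    main_direction = eigvec[:, 0]
    degree_of_anisotropy = eigval[2] / max(eigval[0], 1e-12)
    return main_direction, degree_of_anisotropy

# Example with a synthetic anisotropic volume (plates stacked along the z axis)
vol = np.tile(np.sin(np.linspace(0, 8 * np.pi, 64))[None, None, :], (64, 64, 1))
direction, doa = gradient_structure_tensor(vol)
print(direction, doa)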

Relevance:

40.00%

Publisher:

Abstract:

Experience with anidulafungin against Candida krusei is limited. Immunosuppressed mice were injected with 1.3 × 10⁷ to 1.5 × 10⁷ CFU of C. krusei. Animals were treated with saline, 40 mg/kg fluconazole, 1 mg/kg amphotericin B, or 10 and 20 mg/kg anidulafungin for 5 days. Anidulafungin improved survival and significantly reduced the number of CFU/g in kidneys and serum beta-glucan levels.

Relevance:

40.00%

Publisher:

Abstract:

We present an independent calibration model for the determination of biogenic silica (BSi) in sediments, developed from the analysis of synthetic sediment mixtures and the application of Fourier transform infrared spectroscopy (FTIRS) and partial least squares regression (PLSR) modeling. In contrast to current FTIRS applications for quantifying BSi, this new calibration is independent of conventional wet-chemical techniques and their associated measurement uncertainties. This approach also removes the need to develop internal calibrations between the two methods for individual sediment records. For the independent calibration, we produced six series of synthetic sediment mixtures using two purified diatom extracts, with one extract mixed with quartz sand, calcite, 60/40 quartz/calcite and two different natural sediments, and a second extract mixed with one of the natural sediments. A total of 306 samples (51 samples per series) yielded BSi contents ranging from 0 to 100 %. The resulting PLSR calibration model between the FTIR spectral information and the defined BSi concentrations of the synthetic sediment mixtures exhibits a strong cross-validated correlation (R²cv = 0.97) and a low root-mean-square error of cross-validation (RMSECV = 4.7 %). Application of the independent calibration to natural lacustrine and marine sediments yields robust BSi reconstructions. At present, the synthetic mixtures do not include the variation in organic matter that occurs in natural samples, which may explain the somewhat lower prediction accuracy of the calibration model for organic-rich samples.
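A brief sketch of a PLSR calibration and its cross-validated skill metrics (R²cv and RMSECV); the spectra below are synthetic stand-ins rather than FTIRS measurements, and the number of latent components is an arbitrary choice.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Hypothetical data: rows are spectra of synthetic mixtures, y is the defined
# biogenic silica content in percent (0-100).
rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 306, 1800
y = rng.uniform(0, 100, n_samples)
X = np.outer(y, rng.normal(size=n_wavenumbers)) + rng.normal(scale=5.0, size=(n_samples, n_wavenumbers))

pls = PLSRegression(n_components=8)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()

# Cross-validated skill of the calibration model
r2_cv = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"R2cv = {r2_cv:.2f}, RMSECV = {rmsecv:.1f} %")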

Relevance:

40.00%

Publisher:

Abstract:

We investigate the transition from unitary to dissipative dynamics in the relativistic O(N) vector model with the λ(φ²)² interaction using the nonperturbative functional renormalization group in the real-time formalism. In thermal equilibrium, the theory is characterized by two scales, the interaction range for coherent scattering of particles and the mean free path determined by the rate of incoherent collisions with excitations in the thermal medium. Their competition determines the renormalization group flow and the effective dynamics of the model. Here we quantify the dynamic properties of the model in terms of the scale-dependent dynamic critical exponent z in the limit of large temperatures and in 2 ≤ d ≤ 4 spatial dimensions. We contrast our results to the behavior expected at vanishing temperature and address the question of the appropriate dynamic universality class for the given microscopic theory.
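For reference, one common convention for the action of the relativistic O(N) vector model with the quartic interaction reads as below; normalizations of the coupling differ between papers, so this is only a generic form and not necessarily the one used by the authors.

S[\phi] = \int \mathrm{d}^{d+1}x \left[ \tfrac{1}{2}\,\partial_\mu \phi_a \,\partial^\mu \phi_a - \tfrac{m^2}{2}\,\phi_a \phi_a - \tfrac{\lambda}{4N}\,\big(\phi_a \phi_a\big)^2 \right], \qquad a = 1,\dots,N .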

Relevance:

40.00%

Publisher:

Abstract:

Geostrophic surface velocities can be derived from the gradients of the mean dynamic topography, the difference between the mean sea surface and the geoid. Therefore, independently observed mean dynamic topography data are valuable input parameters and constraints for ocean circulation models. For a successful fit to observational dynamic topography data, not only the mean dynamic topography on the particular ocean model grid is required, but also information about its inverse covariance matrix. The calculation of the mean dynamic topography from satellite-based gravity field models and altimetric sea surface height measurements, however, is not straightforward. For this purpose, we previously developed an integrated approach to combining these two different observation groups in a consistent way without using the common filter approaches (Becker et al. in J Geodyn 59(60):99-110, 2012, doi:10.1016/j.jog.2011.07.0069; Becker in Konsistente Kombination von Schwerefeld, Altimetrie und hydrographischen Daten zur Modellierung der dynamischen Ozeantopographie, 2012, http://nbn-resolving.de/nbn:de:hbz:5n-29199). Within this combination method, the full spectral range of the observations is considered. Furthermore, it allows the direct determination of the normal equations (i.e., the inverse of the error covariance matrix) of the mean dynamic topography on arbitrary grids, which is one of the requirements for ocean data assimilation. In this paper, we report progress through the selection and improved processing of altimetric data sets. We focus on the preprocessing steps applied to along-track altimetry data from Jason-1 and Envisat to obtain a mean sea surface profile. During this procedure, a rigorous variance propagation is accomplished, so that, for the first time, the full covariance matrix of the mean sea surface is available. The combination of the mean profile and a combined GRACE/GOCE gravity field model yields a mean dynamic topography model for the North Atlantic Ocean that is characterized by a defined set of assumptions. We show that including the geodetically derived mean dynamic topography, with its full error structure, in a 3D stationary inverse ocean model improves modeled oceanographic features over previous estimates.
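The basic relations involved, mean dynamic topography as mean sea surface minus geoid and geostrophic velocities from its gradients, can be sketched as follows; the grids and surfaces are synthetic placeholders, and a real computation would of course carry the estimated covariance information as well.

import numpy as np

OMEGA, G, R = 7.2921e-5, 9.81, 6371e3        # Earth's rotation rate, gravity, radius

lat = np.linspace(30, 60, 61)                 # degrees north
lon = np.linspace(-60, 0, 121)                # degrees east
LON, LAT = np.meshgrid(lon, lat)

# Hypothetical mean sea surface and geoid heights [m]
mss = 1.0 + 0.5 * np.sin(np.radians(LAT)) + 0.1 * np.cos(np.radians(LON))
geoid = 0.8 + 0.45 * np.sin(np.radians(LAT))
mdt = mss - geoid                             # mean dynamic topography

# Metric spacing of the grid in metres
dy = np.gradient(np.radians(lat)) * R
dx = np.gradient(np.radians(lon)) * R * np.cos(np.radians(LAT))

deta_dy = np.gradient(mdt, axis=0) / dy[:, None]
deta_dx = np.gradient(mdt, axis=1) / dx

f = 2 * OMEGA * np.sin(np.radians(LAT))       # Coriolis parameter
u = -(G / f) * deta_dy                        # eastward geostrophic velocity
v = (G / f) * deta_dx                         # northward geostrophic velocity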

Relevance:

40.00%

Publisher:

Abstract:

A portable Fourier transform spectrometer (FTS), model EM27/SUN, was deployed onboard the research vessel Polarstern to measure the column-averaged dry-air mole fractions of carbon dioxide (XCO2) and methane (XCH4) by means of direct-sunlight absorption spectrometry. We report on technical developments as well as the data calibration and reduction measures required to achieve the targeted accuracy of fractions of a percent in retrieved XCO2 and XCH4 while operating the instrument under field conditions onboard the moving platform during a 6-week cruise on the Atlantic from Cape Town (South Africa, 34° S, 18° E; 5 March 2014) to Bremerhaven (Germany, 54° N, 8° E; 14 April 2014). We demonstrate that our solar tracker typically achieved a tracking precision of better than 0.05° toward the center of the sun throughout the ship cruise, which facilitates accurate XCO2 and XCH4 retrievals even under harsh ambient wind conditions. We define several quality filters that screen spectra, e.g., when the field of view was partially obstructed by ship structures or when the lines of sight crossed the ship exhaust plume. The measurements in clean oceanic air can be used to characterize a spurious air-mass dependency. After the campaign, deployment of the spectrometer alongside the TCCON (Total Carbon Column Observing Network) instrument at Karlsruhe, Germany, allowed a calibration factor to be determined that makes the entire campaign record traceable to World Meteorological Organization (WMO) standards. Comparisons with observations of the GOSAT satellite and with concentration fields modeled by the European Centre for Medium-Range Weather Forecasts (ECMWF) Copernicus Atmosphere Monitoring Service (CAMS) demonstrate that the observational setup is well suited to provide validation opportunities above the ocean and along interhemispheric transects.
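A minimal sketch of how a single scaling factor from a side-by-side comparison can tie a campaign record to a reference scale; the values and the simple ratio-mean estimator are illustrative assumptions, not the calibration procedure or data of this campaign.

import numpy as np

# Hypothetical side-by-side comparison with a TCCON reference instrument:
# one scaling factor ties the campaign record to the WMO-traceable scale.
xco2_em27 = np.array([398.2, 399.1, 397.8, 398.6])    # ppm, EM27/SUN retrievals
xco2_tccon = np.array([399.0, 399.9, 398.5, 399.4])   # ppm, coincident reference values

calibration_factor = np.mean(xco2_tccon / xco2_em27)

# Apply the factor to the whole campaign record (values are illustrative only)
campaign_xco2 = np.array([396.5, 397.2, 398.0])
campaign_xco2_calibrated = campaign_xco2 * calibration_factor
print(calibration_factor, campaign_xco2_calibrated)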

Relevance:

40.00%

Publisher:

Abstract:

Dating of sediment cores from the Baltic Sea has proven difficult due to uncertainties surrounding the ¹⁴C reservoir age and a scarcity of macrofossils suitable for dating. Here we present the results of multiple dating methods carried out on cores from the Gotland Deep area of the Baltic Sea. Particular emphasis is placed on the Littorina stage (8 ka ago to the present) of the Baltic Sea and possible changes in the ¹⁴C reservoir age of our dated samples. Three geochronological methods are used. Firstly, palaeomagnetic secular variations (PSV) are reconstructed, whereby ages are transferred to PSV features through comparison with varved-lake-sediment-based PSV records. Secondly, lead (Pb) content and stable isotope analysis are used to identify past peaks in anthropogenic atmospheric Pb pollution. Lastly, ¹⁴C determinations were carried out on benthic foraminifera (Elphidium spec.) samples from the brackish Littorina stage of the Baltic Sea. Determinations carried out on smaller samples (as low as 4 µg C) employed an experimental, state-of-the-art method involving the direct measurement of CO₂ from samples by a gas ion source without the need for a graphitisation step, the first time this method has been applied to foraminifera in an applied study. The PSV chronology, based on the uppermost Littorina stage sediments, produced ten age constraints between 6.29 and 1.29 cal ka BP, and the Pb depositional analysis produced two age constraints associated with the Medieval pollution peak. Analysis of the PSV data shows that adequate directional data can be derived both from the present Littorina saline-phase muds and from the Baltic Ice Lake stage varved glacial sediments. Ferrimagnetic iron sulphides, most likely authigenic greigite (Fe₃S₄), present in the intermediate Ancylus Lake freshwater stage sediments acquire a gyroremanent magnetisation during static alternating field (AF) demagnetisation, preventing the identification of a primary natural remanent magnetisation for these sediments. An inferred marine reservoir age offset (ΔR) is calculated by comparing the foraminifera ¹⁴C determinations to a PSV and Pb age model. This ΔR is found to trend towards younger values upwards in the core, possibly due to a gradual change in hydrographic conditions brought about by a reduction in marine water exchange with the open sea due to continued isostatic rebound.
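A schematic of a reservoir-offset calculation, assuming an independent calendar age from the PSV/Pb age model and a marine calibration curve; the curve values below are placeholders, and a real calculation would interpolate the published marine calibration data and propagate uncertainties.

import numpy as np

# Placeholder marine calibration curve: calendar age vs. conventional 14C age.
cal_age_bp = np.array([0, 1000, 2000, 3000, 4000, 5000, 6000])        # cal yr BP
marine_14c_age = np.array([400, 1350, 2300, 3200, 4100, 4900, 5700])  # 14C yr BP

def delta_r(measured_14c_age, calendar_age_from_age_model):
    """Offset between a measured foraminifera 14C age and the marine curve
    evaluated at the independently derived calendar age."""
    curve_age = np.interp(calendar_age_from_age_model, cal_age_bp, marine_14c_age)
    return measured_14c_age - curve_age

print(delta_r(measured_14c_age=3500, calendar_age_from_age_model=3100))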

Relevance:

40.00%

Publisher:

Abstract:

We introduce a simple and innovative method to compare any two texture maps, regardless of their sizes, aspect ratios, or even masks, as long as they are both meant to be mapped onto the same 3D mesh. Our system is based on a zero-distortion 3D mesh unwrapping technique that compares two new adapted texture atlases with the same mask but different texel colors, and in which every texel covers the same area in 3D. Once these adapted atlases are created, we measure their difference with ITEM-RMSE, a slightly modified version of the standard RMSE defined for images. ITEM-RMSE is more meaningful and reliable than RMSE because it only takes into account the texels inside the mask, since they are the only ones actually used during rendering. Our method is not only very useful for comparing the space efficiency of different texture atlas generation algorithms, but also for quantifying texture loss in compression schemes for multi-resolution textured 3D meshes.
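The core of the comparison, an RMSE restricted to texels inside the shared mask, can be sketched as follows; this only illustrates the masked-RMSE idea behind ITEM-RMSE, not the full pipeline with zero-distortion unwrapping and atlas adaptation.

import numpy as np

def masked_rmse(atlas_a, atlas_b, mask):
    """RMSE restricted to texels inside the shared mask: texels outside the
    mask never reach the renderer, so they are excluded from the comparison.
    Both atlases must share the same size and mask."""
    a = atlas_a[mask].astype(float)
    b = atlas_b[mask].astype(float)
    return np.sqrt(np.mean((a - b) ** 2))

# Toy example with two 4x4 single-channel atlases and a boolean mask
mask = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [1, 1, 1, 1], [1, 1, 1, 1]], dtype=bool)
atlas_a = np.random.default_rng(0).integers(0, 256, (4, 4))
atlas_b = atlas_a + np.random.default_rng(1).integers(-5, 6, (4, 4))
print(masked_rmse(atlas_a, atlas_b, mask))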

Relevance:

40.00%

Publisher:

Abstract:

One important task in the design of an antenna is to carry out an analysis to find the antenna characteristics that best fulfill the specifications fixed by the application. After that, a prototype is manufactured, and the next stage in the design process is to check whether the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters such as directivity, gain, impedance, beamwidth, efficiency, and polarization must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know the actual electromagnetic behavior of the antenna under test. For this reason, most measurements are performed in anechoic chambers, which are closed, normally shielded areas covered with radiation-absorbing material that simulates free-space propagation conditions. Moreover, these facilities can be used regardless of weather conditions and allow measurements free from interference. Despite all the advantages of anechoic chambers, the results obtained from both far-field and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms that improve the quality of antenna measurement results using post-processing techniques, without requiring additional measurements. First, a thorough review of the state of the art was carried out to give a general view of the possibilities for characterizing or reducing the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated both theoretically and numerically. The basis of all of them is the same: to perform a transformation from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors. The four errors analyzed are noise, reflections, truncation errors, and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative algorithms to extrapolate functions. The main idea of all the methods is therefore to modify the classical near-field-to-far-field transformations by including additional steps with which errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, additional measurements are not required. Noise is the most widely studied error in this Thesis; a total of three alternatives are proposed to filter out an important part of the noise contribution before obtaining the far-field pattern. The first is based on modal filtering. The second uses a source reconstruction technique to obtain the extreme near field, where a spatial filter can be applied. The last back-propagates the measured field to a surface with the same geometry as the measurement surface but closer to the AUT, and then also applies a spatial filter. All the alternatives are analyzed for the three most common near-field systems, including comprehensive statistical noise analyses to determine the signal-to-noise ratio improvement achieved in each case.
The method to suppress reflections in antenna measurements is also based on a source reconstruction technique; the main idea is to reconstruct the field over a surface larger than the antenna aperture in order to identify, and later suppress, the virtual sources related to the reflected waves. The truncation error present in the results obtained from planar, cylindrical, and partial spherical near-field measurements is the third error analyzed in this Thesis. The method to reduce this error is based on an iterative algorithm that extrapolates the reliable region of the far-field pattern from knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm, as well as other critical aspects of the method, is also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method estimates the leakage bias constant added by the receiver's quadrature detector to every near-field data point and then suppresses its effect on the far-field pattern. The second method can be divided into two parts: the first finds the position of the faulty component that radiates or receives unwanted radiation, making its identification within the measurement environment and its later substitution easier; the second computationally removes the leakage effect without requiring the substitution of the faulty component.
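As a flavour of the modal-filtering idea used against noise, the sketch below transforms a sampled planar near-field into its plane-wave spectrum and discards the non-propagating modes before any near-field-to-far-field transformation; the grid, frequency, and field values are assumptions, and the thesis's actual algorithms are considerably more elaborate.

import numpy as np

def modal_filter_planar(E, dx, dy, wavelength):
    """Simplified modal filtering of a planar near-field scan: transform the
    sampled field to its plane-wave spectrum and discard non-propagating
    (evanescent) modes, where measurement noise dominates."""
    ny, nx = E.shape
    k0 = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    spectrum = np.fft.fft2(E)
    visible = KX ** 2 + KY ** 2 <= k0 ** 2      # keep propagating modes only
    return np.fft.ifft2(spectrum * visible)

# Toy usage: a noisy sampled aperture field at 10 GHz, lambda/2 sampling
wavelength = 0.03
rng = np.random.default_rng(0)
E = np.ones((64, 64), dtype=complex) + 0.2 * (rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64)))
E_filtered = modal_filter_planar(E, dx=wavelength / 2, dy=wavelength / 2, wavelength=wavelength)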

Relevance:

40.00%

Publisher:

Abstract:

Computing the modal parameters of structural systems often requires processing data from multiple non-simultaneously recorded setups of sensors. These setups share some sensors in common, the so-called reference sensors, which are fixed for all measurements, while the other sensors change their position from one setup to the next. One possibility is to process the setups separately, resulting in different modal parameter estimates for each setup. The reference sensors are then used to merge, or glue, the different parts of the mode shapes into global mode shapes, while the natural frequencies and damping ratios are usually averaged. In this paper we present a new state space model that processes all setups at once. As a result, the global mode shapes are obtained automatically, and only one value for the natural frequency and damping ratio of each mode is estimated. We also investigate the estimation of this model using maximum likelihood and the Expectation Maximization algorithm, and we apply this technique to simulated and measured data corresponding to different structures.
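For context, the classical reference-sensor gluing that the proposed state space model replaces can be sketched as follows; the least-squares scaling and the toy arrays are assumptions for illustration only.

import numpy as np

def glue_mode_shapes(parts, ref_idx):
    """Merge mode-shape parts from several setups using shared reference sensors.
    parts: list of 1D arrays, each holding one setup's mode-shape estimate;
    ref_idx: for each setup, the positions of the reference sensors inside that
    setup's array (same reference order in every setup)."""
    base = parts[0]
    glued = [base]
    for part, idx in zip(parts[1:], ref_idx[1:]):
        # least-squares scale factor aligning this setup's reference values
        # with those of the first setup
        ref0 = base[ref_idx[0]]
        refk = part[idx]
        scale = (refk @ ref0) / (refk @ refk)
        scaled = scale * part
        # keep only the non-reference (roving) sensors from this setup
        mask = np.ones(len(part), dtype=bool)
        mask[idx] = False
        glued.append(scaled[mask])
    return np.concatenate(glued)

# Example: two setups sharing two reference sensors (the first two entries)
setup1 = np.array([1.0, 0.8, 0.5, 0.2])
setup2 = np.array([2.0, 1.6, 0.9, -0.3])
phi = glue_mode_shapes([setup1, setup2], [np.array([0, 1]), np.array([0, 1])])
print(phi)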

Relevance:

40.00%

Publisher:

Abstract:

Electric probes are widely employed for plasma diagnostics. This dissertation concerns the operation of collecting and emissive Langmuir probes in low-density cold plasmas. The study is focused on the determination of the plasma potential, Vsp, by means of the floating potential of emissive probes. This technique consists of measuring the probe potential corresponding to zero net probe current, the so-called floating potential, VF. This potential shifts towards the plasma potential as the thermionic electron emission increases, until it saturates near Vsp. Experiments carried out in the plasma plume of an ion thruster and in a glow-discharge plasma show that the thermionic electron current of the emissive Langmuir probe is higher than the collected electron current for a probe biased below Vsp, which is inconsistent with the traditionally accepted theory. To investigate these results, a parameter R is introduced as the ratio between the emitted and the collected electron currents. This parameter, which is related to the difference VF - Vsp, is also useful for describing the operation modes of the emissive Langmuir probe (weak, strong, and beyond strong). The experimental results give the inconsistency R > 1, which is resolved by a modification of the theory for emissive probes introducing an effective electron population. With this new electron group, the new model for the total probe current agrees with the experimental data. The origin of this electron group remains an open question, but it might be caused by a new potential structure near the emissive probe when it operates in the strong emission regime. A simple one-dimensional model consisting of a potential minimum near the probe surface is discussed for strongly emitting probes. The results indicate that this potential structure appears at very high probe temperatures, and the potential well might reduce the population of emitted electrons reaching the plasma bulk. The experimental issues involved in the floating potential method are also studied, such as the different techniques for obtaining VF, the signal-to-noise ratio, the signal coupling of the I-V curve measurement system, and the experimental evidence of the probe operation modes. These empirical observations concern all aspects of probe operation: electron collection, the floating potential, the accuracy of the I-V curves, and the electron emission. This last issue is also investigated in this dissertation, because super emission takes place in the strong emission regime. In this operation mode, the experimental results indicate that the thermionic electron currents might be higher than those predicted by the classical Richardson-Dushman equation. Finally, plasma diagnosis using electric probes in the presence of dust grains (dusty plasmas) in low-density cold plasmas is also addressed. The application of the floating potential technique of the emissive probe in a non-conventional complex plasma is investigated numerically; the results indicate that the floating potential of the emissive probe might be shifted at high dust densities or for large dust particles.
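A minimal sketch of the floating potential determination from an I-V curve, together with the ratio R of emitted to collected electron current; the current model and numbers are synthetic illustrations, not the dissertation's measurements.

import numpy as np

# Synthetic net I-V curve of a probe: current crosses zero at the floating potential.
V = np.linspace(-20, 10, 301)                      # probe bias [V]
I = 1e-3 * (np.exp((V - 2.0) / 3.0) - 1.2)         # net probe current [A], toy model

# Floating potential: interpolate the bias at which the net current crosses zero
sign_change = np.where(np.diff(np.sign(I)) != 0)[0][0]
V_F = np.interp(0.0, I[sign_change:sign_change + 2], V[sign_change:sign_change + 2])
print(f"floating potential ~ {V_F:.2f} V")

# Ratio R = I_emitted / I_collected at a bias below the plasma potential,
# assuming the two contributions have been separated experimentally
I_emitted, I_collected = 2.4e-3, 1.8e-3            # illustrative values [A]
R = I_emitted / I_collected
print(f"R = {R:.2f}")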