8 results for postprocessing

at Universidad Politécnica de Madrid


Relevance:

20.00%

Publisher:

Abstract:

Three different methods to reduce the noise power in the far-field pattern of an antenna measured in a cylindrical near-field system are presented and compared. The first is based on modal filtering, while the other two are based on spatial filtering, either on an antenna plane or on a cylinder of smaller radius. Simulated and measured results are presented.
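
A minimal sketch may help illustrate the modal-filtering idea: cylindrical modes with azimuthal index above roughly k·a (with a the radius of the smallest cylinder enclosing the antenna) carry essentially no antenna information, so zeroing them removes mostly noise. The array shapes, names, and cutoff margin below are illustrative assumptions, not the authors' measurement code.

```python
# Illustrative sketch only: modal (azimuthal-spectrum) filtering of cylindrical
# near-field data; names, shapes and the cutoff margin are assumptions.
import numpy as np

def modal_filter(e_near, enclosing_radius, wavelength, margin=1.1):
    """e_near: complex field samples of shape (n_z, n_phi) taken on the
    measurement cylinder. Modes with |n| > margin * k * a are zeroed."""
    k = 2 * np.pi / wavelength
    n_max = int(np.ceil(margin * k * enclosing_radius))   # visible-mode cutoff
    spectrum = np.fft.fft(e_near, axis=1)                  # azimuthal mode spectrum
    n_idx = np.fft.fftfreq(e_near.shape[1]) * e_near.shape[1]
    spectrum[:, np.abs(n_idx) > n_max] = 0.0               # discard non-radiating modes
    return np.fft.ifft(spectrum, axis=1)                   # filtered near field
```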

Relevance:

10.00%

Publisher:

Abstract:

This paper describes a system that allows precision agriculture techniques to be applied. The application is based on the deployment of a team of unmanned aerial vehicles that take georeferenced pictures in order to create a full map by applying mosaicking procedures during postprocessing. The main contribution of this work is the practical experimentation with an integrated tool. Contributions in different fields are also reported. Among them is a new one-phase automatic task-partitioning manager, based on negotiation among the aerial vehicles and considering their state and capabilities. Once the individual tasks are assigned, an optimal path-planning algorithm determines the best path for each vehicle to follow. In addition, a robust flight controller based on a control law that improves the maneuverability of the quadrotors has been designed. A set of field tests was performed in order to analyze all the capabilities of the system, from task negotiation to final performance. These experiments also allowed testing control robustness under different weather conditions.
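
The abstract does not detail the negotiation protocol; the sketch below only illustrates the general idea of a one-phase, bid-based task partitioning in which each vehicle's bid reflects its state (position and remaining battery). All names and the bid function are hypothetical, not the paper's actual manager.

```python
# Hypothetical sketch of one-phase, bid-based task partitioning among UAVs.
from dataclasses import dataclass
import math

@dataclass
class UAV:
    name: str
    x: float
    y: float
    battery: float  # remaining capacity, 0..1

def bid(uav, area_center):
    """Lower bids win: distance to the sub-area, penalised by low battery."""
    d = math.dist((uav.x, uav.y), area_center)
    return d / max(uav.battery, 1e-3)

def partition(uavs, area_centers):
    """Greedy one-phase assignment: each sub-area goes to the best bidder."""
    assignment = {}
    for center in area_centers:
        winner = min(uavs, key=lambda u: bid(u, center))
        assignment.setdefault(winner.name, []).append(center)
    return assignment

if __name__ == "__main__":
    fleet = [UAV("q1", 0, 0, 0.9), UAV("q2", 50, 0, 0.6)]
    cells = [(10, 10), (40, 5), (25, 30)]
    print(partition(fleet, cells))
```

A path planner would then order each vehicle's assigned cells into a flight route; that step is omitted here.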

Relevance:

10.00%

Publisher:

Abstract:

Diabetes mellitus is a disease characterized by absent or insufficient insulin production, or by the body's resistance to insulin. Insulin is a hormone that helps glucose (for example, that obtained from ingested food) reach the peripheral tissues and the nervous system to supply energy. Today's technology makes it possible to tackle the development of the so-called "artificial endocrine pancreas", which consists of a continuous subcutaneous glucose sensor, a subcutaneous insulin infusion pump, and a closed-loop control algorithm that computes the insulin dose required by the patient at every moment, according to the glucose reading provided by the sensor and a set of targets. The main problem of closed-loop control systems is delay: the subcutaneous sensor measures the glucose of the interstitial fluid, which reflects the blood glucose some time earlier, so a change in blood glucose, caused for example by a meal, takes some time to be detected by the sensor. In addition, an insulin dose administered to the patient takes roughly 20-30 minutes to reach the bloodstream. To work around these delays as far as possible, the aim is to predict the glucose level in the near future by means of a subcutaneous glucose predictor fed with the available glucose and insulin information. The goal of this project is to design a methodology to estimate the future value of the glucose level obtained from a subcutaneous sensor, based on recursive identification of the glucoregulatory system through linear models, determining an optimal prediction horizon and analyzing the influence of insulin on the prediction results. A parametric predictor based on an autoregressive ARX model has been implemented; it predicts with better accuracy and lower RMSE than a ZOH predictor over a thirty-minute prediction horizon. Using the insulin information has no effect on the prediction. Preprocessing, postprocessing, and the treatment of stability have a very beneficial effect on the prediction.
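
As a rough illustration of the methodology (recursive identification of a linear model, then multi-step prediction compared against a ZOH predictor), the following sketch uses recursive least squares on an autoregressive model with 5-minute samples, so a horizon of 6 steps corresponds to 30 minutes. The model order, forgetting factor, and function names are assumptions; the insulin input is omitted, consistent with the finding that it does not improve the prediction.

```python
# Illustrative sketch, not the project's implementation: RLS-identified AR model
# used for 30-minute-ahead glucose prediction (5-minute sampling, horizon = 6).
import numpy as np

def rls_arx_predict(glucose, na=3, horizon=6, lam=0.99):
    """Recursively fit y(k) = a1*y(k-1) + ... + a_na*y(k-na), then iterate the
    model `horizon` steps ahead at every sample."""
    glucose = np.asarray(glucose, dtype=float)
    theta = np.zeros(na)
    P = np.eye(na) * 1e3
    preds = np.full(len(glucose), np.nan)
    for k in range(na, len(glucose)):
        phi = glucose[k - na:k][::-1]           # regressor: most recent samples first
        err = glucose[k] - phi @ theta          # one-step prediction error
        gain = P @ phi / (lam + phi @ P @ phi)  # RLS gain
        theta += gain * err
        P = (P - np.outer(gain, phi @ P)) / lam
        hist = list(glucose[k - na + 1:k + 1])  # iterate the model forward
        for _ in range(horizon):
            hist.append(np.array(hist[-na:][::-1]) @ theta)
        if k + horizon < len(glucose):
            preds[k + horizon] = hist[-1]       # prediction made at time k for k+horizon
    return preds

def rmse(pred, ref):
    mask = ~np.isnan(pred)
    return float(np.sqrt(np.mean((pred[mask] - ref[mask]) ** 2)))

# Usage sketch with a CGM trace `cgm` (mg/dl); the ZOH predictor simply holds
# the last measured value for the whole horizon:
#   arx_pred = rls_arx_predict(cgm)
#   zoh_pred = np.r_[np.full(6, np.nan), cgm[:-6]]
#   print(rmse(arx_pred, cgm), rmse(zoh_pred, cgm))
```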

Relevance:

10.00%

Publisher:

Abstract:

Generation of a complete damage energy and dpa cross-section library up to 150 MeV based on JEFF-3.1.1 and suitable approximations (UPM). Postprocessing of photonuclear libraries (by CCFE) and thermal scattering tables (by UPM) at the back end of the calculational system (CCFE/UPM).
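
For context, the conversion from a damage-energy cross section to a dpa cross section is commonly done with the NRT model, sigma_dpa = 0.8/(2·E_d) · sigma_dam. The snippet below is only a hedged illustration with an iron-like displacement threshold and a hypothetical function name; it is not code from the library generation itself.

```python
# Hedged sketch of the NRT relation between damage-energy and dpa cross sections.
def dpa_from_damage_energy(sigma_damage_ev_barn, e_d_ev=40.0):
    """sigma_dpa = 0.8 / (2 * E_d) * sigma_damage (NRT model); E_d is the
    material displacement threshold (about 40 eV for iron)."""
    return 0.8 / (2.0 * e_d_ev) * sigma_damage_ev_barn

# Example: a damage-energy cross section of 1e5 eV*barn -> ~1000 barn of dpa.
print(dpa_from_damage_energy(1.0e5))
```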

Relevance:

10.00%

Publisher:

Abstract:

Background: Analysis of exhaled volatile organic compounds (VOCs) in breath is an emerging approach for cancer diagnosis, but little is known about its potential use as a biomarker for colorectal cancer (CRC). We investigated whether a combination of VOCs could distinguish CRC patients from healthy volunteers. Methods: In a pilot study, we prospectively analyzed the breath exhalations of 38 CRC patients and 43 healthy controls, all scheduled for colonoscopy, older than 50 and in the average-risk category. The samples were ionized and analyzed using Secondary ElectroSpray Ionization (SESI) coupled with a Time-of-Flight Mass Spectrometer (SESI-MS). After a minimum of 2 hours of fasting, volunteers exhaled deeply into the system. Each test requires three soft exhalations and takes less than ten minutes. No breath condensate or collection is required, and VOC masses are detected in real time, which also allows a spirometric profile to be analyzed along with the VOCs. A new sampling system precludes ambient air from entering the system, so background contamination is reduced by an overall factor of ten. Potential confounding variables from the patient or the environment that could interfere with the results were analyzed. Results: 255 VOCs, with masses ranging from 30 to 431 Dalton, were identified in the exhaled breath. Using a classification technique based on the ROC curve for each VOC, a set of 9 biomarkers discriminating the presence of CRC from healthy volunteers was obtained, showing an average recognition rate of 81.94%, a sensitivity of 87.04% and a specificity of 76.85%. Conclusions: A combination of qualitative and quantitative analysis of VOCs in the exhaled breath could be a powerful diagnostic tool for the average-risk CRC population. These results should be taken with caution, as many endogenous or exogenous contaminants could interfere as confounding variables. On-line analysis with SESI-MS is less time-consuming and does not require sample preparation. We are recruiting for a new pilot study that includes breath-cleaning procedures and spirometric analysis incorporated into the postprocessing algorithms, to better control for confounding variables.
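
The abstract describes a per-VOC ROC screening followed by a combined set of 9 biomarkers; the sketch below reproduces only that generic idea on synthetic data. The data, labels, classifier choice and all names are illustrative assumptions, not the study's pipeline.

```python
# Illustrative sketch of per-feature ROC screening and a combined classifier.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(81, 255))        # 81 subjects x 255 VOC intensities (synthetic)
y = rng.integers(0, 2, size=81)       # 1 = CRC, 0 = healthy (synthetic labels)

# Rank each VOC by how well it separates the two groups on its own
auc_per_voc = np.array([roc_auc_score(y, X[:, j]) for j in range(X.shape[1])])
top9 = np.argsort(np.abs(auc_per_voc - 0.5))[-9:]   # 9 most discriminative VOCs

# Combine the selected VOCs in a simple classifier and cross-validate
clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X[:, top9], y, cv=5).mean()
print(f"selected VOCs: {sorted(top9)}, CV accuracy: {acc:.2f}")
```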

Relevance:

10.00%

Publisher:

Abstract:

Quantum key distribution is carving its place among the tools used to secure communications. Although a difficult technology, it enjoys benefits that set it apart from the rest, the most prominent being its provable security based on the laws of physics. QKD requires not only the mastering of signals at the quantum level, but also classical processing to extract a secret key from them. This postprocessing has customarily been studied in terms of efficiency, a figure of merit that offers a biased view of the performance of real devices. Here we argue that throughput is the significant quantity in practical QKD, especially in the case of high-speed devices, where the differences are more marked, and we give some examples contrasting the usual postprocessing schemes with new ones drawn from modern coding theory. A good understanding of its implications is very important for the design of modern QKD devices.
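
A toy calculation can make the throughput argument concrete: with an asymptotic BB84-style secret fraction r = 1 - h(QBER) - f·h(QBER), a reconciler that is slightly less efficient but digests sifted bits much faster can deliver far more secret key per second. All figures and names below are illustrative assumptions, not results from the paper.

```python
# Hedged sketch: secret-key output capped by reconciliation throughput.
import math

def h2(p):
    """Binary entropy."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def secret_key_rate(sifted_rate_bps, qber, f_ec, reconc_throughput_bps):
    """Asymptotic BB84-style estimate r = 1 - h(e) - f*h(e), applied only to the
    sifted bits the postprocessing engine can actually process per second."""
    r = max(0.0, 1.0 - h2(qber) - f_ec * h2(qber))
    processed = min(sifted_rate_bps, reconc_throughput_bps)
    return processed * r

# A slightly less efficient but much faster reconciler can deliver more key:
print(secret_key_rate(1e8, 0.02, 1.05, 1e6))   # very efficient, but slow
print(secret_key_rate(1e8, 0.02, 1.15, 5e7))   # less efficient, high throughput
```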

Relevance:

10.00%

Publisher:

Abstract:

Quantum key distribution performs the trick of growing a secret key in two distant places connected by a quantum channel. The key remains secret mainly because the legitimate users can bound the information gathered by the eavesdropper. In practical systems, whether because of finite resources or external conditions, the quantum channel is subject to fluctuations. A rate-adaptive information reconciliation protocol, which adapts to the changes in the communication channel, is then required to minimize the information leaked during the classical postprocessing. Here we consider the leakage of a rate-adaptive information reconciliation protocol. The length of the exchanged messages is larger than that of an optimal protocol; however, we prove that the min-entropy reduction is limited. The simulation results, both in the asymptotic and in the finite-length regime, show that this protocol increases the amount of distillable secret key.
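
As a rough sketch of the rate-adaptation bookkeeping, the snippet below follows the usual puncturing/shortening convention for rate-adaptive LDPC reconciliation, in which the leakage charged to the key is the syndrome length minus the punctured (undisclosed) positions, and picks p and s for a target efficiency. The mother-code parameters and the convention details are assumptions, not the protocol analyzed in the paper.

```python
# Hedged sketch: rate adaptation of a mother LDPC code by puncturing/shortening.
import math

def h2(x):
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def rate_adaptive_leakage(n, r0, d, qber, target_eff=1.1):
    """Mother code of length n and rate r0; d = p + s symbols are reserved for
    adaptation. Choose p so the leakage ~ target_eff * payload * h(QBER)."""
    payload = n - d                                   # key bits reconciled per frame
    p = round(n * (1 - r0) - target_eff * payload * h2(qber))
    p = min(max(p, 0), d)                             # punctured symbols
    s = d - p                                         # shortened symbols
    leak = n * (1 - r0) - p                           # bits counted against the key
    eff = leak / (payload * h2(qber))
    return {"punctured": p, "shortened": s, "leakage": leak, "efficiency": eff}

print(rate_adaptive_leakage(n=2000, r0=0.8, d=200, qber=0.02))
```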

Relevance:

10.00%

Publisher:

Abstract:

The postprocessing or secret-key distillation process in quantum key distribution (QKD) mainly involves two well-known procedures: information reconciliation and privacy amplification. Information or key reconciliation has customarily been studied in terms of efficiency. During reconciliation, some information must be disclosed in order to reconcile the discrepancies between the exchanged keys. This leakage is lower bounded by a theoretical limit and is usually parameterized by the reconciliation efficiency (or inefficiency), i.e., the ratio of the information disclosed to the Shannon limit. Most techniques for reconciling errors in QKD try to optimize this parameter. For instance, the well-known Cascade protocol (probably the most widely used procedure for reconciling errors in QKD) was recently shown to have an average efficiency of 1.05, at the cost of high interactivity (number of exchanged messages). Modern coding techniques, such as rate-adaptive low-density parity-check (LDPC) codes, have been shown to achieve similar efficiency values while exchanging only one message, or even better values with little interactivity and shorter block-length codes.
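
To make the figure of merit explicit: efficiency is f = leak_EC / (n · h(QBER)), so reconciling n key bits at the Shannon limit (f = 1) discloses n · h(QBER) bits. The small sketch below compares the leakage implied by the Cascade-like f ≈ 1.05 quoted above with an illustrative one-message LDPC value; the 1.10 figure is an assumption used only for comparison.

```python
# Sketch of the reconciliation efficiency figure of merit; values illustrative.
import math

def h2(p):
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def efficiency(disclosed_bits, n, qber):
    """Reconciliation (in)efficiency: 1.0 is the Shannon limit."""
    return disclosed_bits / (n * h2(qber))

def disclosed(f, n, qber):
    """Bits that must be revealed to reconcile n key bits at efficiency f."""
    return f * n * h2(qber)

n, qber = 10**6, 0.02
print(disclosed(1.05, n, qber))   # Cascade-like: low leakage, many messages
print(disclosed(1.10, n, qber))   # one-message LDPC-like: slightly more leakage
```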