990 results for Acquisition system


Relevance:

60.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Biological processes are very complex mechanisms, most of them being accompanied by, or manifested as, signals that reflect their essential characteristics and qualities. The development of diagnostic techniques based on signal and image acquisition from the human body is commonly regarded as one of the propelling factors of the recent advances in medicine and the biosciences. The instruments used for biological signal and image recording, like any other acquisition system, are affected by non-idealities which, to different degrees, negatively impact the accuracy of the recording. This work discusses how these effects can be attenuated and, ideally, removed, with particular attention to ultrasound imaging and extracellular recordings. Original algorithms developed during the Ph.D. research activity are examined and compared to algorithms in the literature tackling the same problems; conclusions are drawn on the basis of comparative tests on both synthetic and in-vivo acquisitions, evaluating standard metrics in the respective fields of application. All the developed algorithms share an adaptive approach to signal analysis, meaning that their behavior is driven not only by design choices but also by the characteristics of the input signal. Performance comparisons with the state of the art in image quality assessment, contrast gain estimation and resolution gain quantification, as well as visual inspection, showed very good results for the proposed ultrasound image deconvolution and restoration algorithms: axial resolutions up to 5 times better than those of algorithms in the literature are achievable. For extracellular recordings, the proposed denoising technique, compared to other signal processing algorithms, improved on the state of the art by almost 4 dB.
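The abstract does not reproduce the restoration algorithms themselves; as a rough illustration of the kind of deconvolution involved, the sketch below applies a plain (non-adaptive) Wiener filter to a synthetic RF line. The Gaussian pulse, the noise-to-signal constant and the scatterer layout are all invented for the example.

```python
import numpy as np

def wiener_deconvolve(rf, psf_padded, nsr=1e-3):
    """Frequency-domain Wiener deconvolution of one RF line.
    `nsr` is an assumed noise-to-signal power ratio; the adaptive
    algorithms discussed in the thesis would estimate such quantities
    from the input signal instead of fixing them."""
    H = np.fft.rfft(psf_padded)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)   # regularized inverse filter
    return np.fft.irfft(np.fft.rfft(rf) * G, len(rf))

# Synthetic line: two point scatterers blurred by a Gaussian pulse.
n = 256
x = np.zeros(n)
x[100], x[115] = 1.0, 0.8
t = np.arange(-16, 17)
pulse = np.exp(-0.5 * (t / 4.0) ** 2)
rf = np.convolve(x, pulse, mode="same")

# Zero-phase, zero-padded copy of the pulse for the FFT division.
psf_padded = np.roll(np.pad(pulse, (0, n - pulse.size)), -16)
restored = wiener_deconvolve(rf, psf_padded)
```

After deconvolution the two scatterers reappear as narrow, well-separated peaks, which is the sense in which such methods improve axial resolution.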

Study of K isomerism in the transfermium region around the deformed shells at N=152, Z=102 and N=162, Z=108 provides important information on the structure of heavy nuclei. Recent calculations suggest that K isomerism can enhance the stability of such nuclei against alpha emission and spontaneous fission. Nuclei showing K isomerism have neutron and proton orbitals with large spin projections on the symmetry axis, giving rise to multi-quasiparticle states with aligned spin projection K. Quasiparticle states are formed by breaking pairs of nucleons and raising one or two nucleons to orbitals near the Fermi surface above the gap, forming high-K (multi-)quasiparticle states, mainly at low excitation energies. Experimental examples are the recently studied two-quasiparticle K isomers in 250,256-Fm, 254-No, and 270-Ds. Nuclei in this region are produced with cross sections ranging from several nb up to µb, which are high enough for a detailed decay study. In this work, K isomerism in Sg and No isotopes was studied at the velocity filter SHIP of GSI, Darmstadt. The data were obtained using a new data acquisition system which was developed and installed during this work. 252,254-No and 260-Sg were produced in fusion-evaporation reactions of 48-Ca and 54-Cr projectiles with 206,208-Pb targets at beam energies close to the Coulomb barrier. A new K isomer was discovered in 252-No at an excitation energy of 1.25 MeV; it decays to the ground-state rotational band via gamma emission and has a half-life of about 100 ms. The population of the isomeric state was about 20% of the ground-state population. Detailed investigations were performed on 254-No, in which two isomeric states (275 ms and 198 µs) had already been discovered by R.-D. Herzberg; thanks to the larger number of observed gamma decays, more detailed information about the decay path of these isomers was obtained in the present work.
In 260-Sg, we observed no statistically significant component with a half-life different from that of the ground state. A comparison between the experimental results and theoretical calculations of the single-particle energies shows fair agreement. The structure of the nuclei studied here is particularly important because the single-particle levels involved are relevant for the next shell closure, expected to form the region of shell-stabilized superheavy elements at proton numbers 114, 120, or 126 and neutron number 184. K isomers, in particular, could be an ideal tool for the synthesis and study of these isotopes, since enhanced spontaneous fission lifetimes could result in higher alpha-to-spontaneous-fission branching ratios and longer half-lives.
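Half-lives such as the ~100 ms quoted above are extracted from distributions of correlated decay times; the minimal version of that estimate is the maximum-likelihood relation T1/2 = ln 2 · ⟨t⟩ for an exponential decay. The sketch below applies it to synthetic decay times drawn for a 100 ms isomer; background and dead-time corrections, which a real analysis needs, are ignored.

```python
import math
import random

def half_life_mle(decay_times):
    """Maximum-likelihood half-life for exponential decay: the MLE of
    the mean lifetime tau is the sample mean, and T1/2 = tau * ln 2."""
    tau = sum(decay_times) / len(decay_times)
    return tau * math.log(2)

# Synthetic data set for a 100 ms isomer (the value reported for 252-No).
random.seed(1)
true_half_life = 0.100                        # seconds
tau = true_half_life / math.log(2)
times = [random.expovariate(1.0 / tau) for _ in range(2000)]
estimate = half_life_mle(times)
```

With 2000 events the statistical spread of the estimate is about 2%, consistent with quoting the result as "about 100 ms".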

Perfusion CT imaging of the liver has the potential to improve the evaluation of tumour angiogenesis. Quantitative parameters can be obtained by applying mathematical models to the Time Attenuation Curve (TAC). However, accurate quantification of perfusion parameters is still difficult, owing for example to the algorithm employed, the mathematical model, the patient's weight and cardiac output, and the acquisition system. In this thesis, new parameters and alternative methodologies for liver perfusion CT are presented in order to investigate the sources of variability of this technique. First, analyses were made to assess the variability related to the mathematical model used to compute arterial Blood Flow (BFa) values. Results were obtained by implementing algorithms based on the "maximum slope method" and on a dual-input one-compartment model. Statistical analysis on simulated data demonstrated that the two methods are not interchangeable; however, the slope method is always applicable in a clinical context. The variability related to TAC processing in the application of the slope method was then analyzed. Comparison with manual selection allowed the best automatic algorithm for computing BFa to be identified. The consistency of the Standardized Perfusion Value (SPV) was evaluated and a simplified calibration procedure was proposed. Finally, the quantitative value of the perfusion maps was analyzed: the ROI approach and the map approach provide correlated BFa values, which means that the pixel-by-pixel algorithm gives reliable quantitative results; also in the pixel-by-pixel approach the slope method gives better results. In conclusion, the development of new automatic algorithms for a consistent computation of BFa, together with the analysis and definition of a simplified technique to compute the SPV parameter, represents an improvement in the field of liver perfusion CT analysis.
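As a sketch of the first of the two models, the maximum slope method computes BFa as the peak slope of the liver TAC divided by the peak arterial enhancement. The curves below are smooth synthetic stand-ins (a Gaussian first pass for the aorta, a sigmoid uptake for the liver); real TACs are noisy and would be filtered before the gradient is taken.

```python
import numpy as np

def bfa_max_slope(t, tac_liver, tac_aorta):
    """Maximum slope method: BFa = max d(liver TAC)/dt divided by the
    peak aortic enhancement. The synthetic curves used here are
    noise-free by construction."""
    slope = np.gradient(tac_liver, t)
    return slope.max() / tac_aorta.max()

# Synthetic stand-in curves (enhancement in HU, 1 s sampling).
t = np.arange(0.0, 40.0, 1.0)
tac_aorta = 200.0 * np.exp(-0.5 * ((t - 12.0) / 4.0) ** 2)   # first pass
tac_liver = 30.0 / (1.0 + np.exp(-(t - 16.0) / 2.0))         # uptake ramp
bfa = bfa_max_slope(t, tac_liver, tac_aorta)   # 1/s, before unit scaling
```

The conventional ml/min per 100 ml scaling is a fixed multiplicative factor and is omitted here.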

The A4 experiment determines the contribution of the strange quarks to the electromagnetic form factors of the nucleon by measuring parity violation in elastic electron-nucleon scattering. These measurements are carried out with the spin-polarized electron beam of the Mainz Microtron (MAMI) at beam energies between 315 and 1508 MeV. Determining the degree of beam polarization is essential for the data analysis, in order to extract the physical asymmetry from the measured parity-violating asymmetry. For this reason the A4 collaboration is developing a novel Compton laser-backscattering polarimeter that allows a non-destructive measurement of the beam polarization in parallel with the running parity experiment. To enable reliable continuous operation, the polarimeter was further developed within this work. The data acquisition system for the photon and electron detectors was rebuilt and optimized for handling high rates. A novel detector (LYSO) was commissioned for detecting the backscattered photons. In addition, GEANT4 simulations of the detectors were performed and an analysis framework for extracting Compton asymmetries from the backscattering data was developed. The analysis method exploits the possibility of energy-tagging the backscattered photons through coincident detection of the scattered electrons. The differential energy scale introduced by this tagging enables a precise determination of the analyzing power. In the present work the analyzing power of the polarimeter was determined, so that the product of electron and laser beam polarization can now be measured at a beam current of 20 µA, in parallel with the running parity experiment, with a statistical accuracy of 1% within 24 hours at 855 MeV, and below 1% within 12 hours at 1508 MeV.
Combined with the determination of the laser polarization to 1% in a parallel work (Y. Imai), the statistical uncertainty of the beam polarization in the A4 experiment can be reduced from the previous 5% to 1.5% at 1508 MeV. For the parity-violating electron scattering data at a four-momentum transfer of $Q^2 = 0.6\,(\mathrm{GeV}/c)^2$, the raw asymmetry at the current state of the analysis is $A_{PV}^{\mathrm{raw}} = (-20.0 \pm 0.9_{\mathrm{stat}}) \cdot 10^{-6}$. For a beam polarization of 80% this gives a total error of $1.68 \cdot 10^{-6}$ for $\Delta P_e/P_e = 5\,\%$. As a result of this work, analysis of the data from the Compton laser-backscattering polarimeter will reduce this error by 29% to $1.19 \cdot 10^{-6}$ ($\Delta P_e/P_e = 1.5\,\%$).
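In a Compton polarimeter the measured asymmetry factorizes as A_meas = P_e · P_laser · A_analyzing, so the beam polarization follows by division once the analyzing power is known, with a statistical error that shrinks as 1/sqrt(N). The numbers below are invented to show the scaling and are not A4 results.

```python
import math

def beam_polarization(a_measured, da_measured, analyzing_power, laser_pol=1.0):
    """Electron polarization from a Compton asymmetry:
    A_meas = P_e * P_laser * A_analyzing, so P_e follows by division,
    and the absolute error divides by the same factor."""
    scale = analyzing_power * laser_pol
    return a_measured / scale, da_measured / scale

# Invented numbers illustrating 1/sqrt(N) counting statistics:
n_events = 4.0e8                       # backscattering events collected
a_true = 0.02                          # measured asymmetry
da = 1.0 / math.sqrt(n_events)         # ideal counting error, ~5e-5
p_e, dp_e = beam_polarization(a_true, da, analyzing_power=0.025)
rel_err = dp_e / p_e                   # relative accuracy on P_e
```

This is why the quoted 1% accuracy is a matter of integrated luminosity: quadrupling the event count halves the statistical error.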

The quench characteristics of second-generation (2G) YBCO Coated Conductor (CC) tapes are of fundamental importance for the design and safe operation of superconducting cables and magnets based on this material. Their ability to transport high current densities at high temperature, up to 77 K, and at very high fields, over 20 T, together with the increasing knowledge of their manufacturing, which is reducing their cost, is pushing the use of this innovative material in numerous applications, from high-field magnets for research to motors, generators and cables. The aim of this Ph.D. thesis is the experimental analysis and numerical simulation of quench in superconducting HTS tapes and coils. A measurement facility for the characterization of superconducting tapes and coils was designed, assembled and tested. The facility consists of a cryostat, a cryocooler, a vacuum system, resistive and superconducting current leads, and signal feedthroughs. Moreover, the data acquisition system and the software for critical current and quench measurements were developed. A 2D model was developed using the finite element code COMSOL Multiphysics. The problem of modeling the high aspect ratio of the tape is tackled by multiplying the tape thickness by a constant factor and compensating the heat and electrical balance equations by introducing a material anisotropy. The model was then validated against the results of a 1D quench model based on a non-linear electric circuit coupled to a thermal model of the tape, against measurements from the literature, and against critical current and quench measurements made in the cryogenic facility. Finally, the model was extended to the study of coils and windings through the definition of homogenized tape and stack properties. The procedure allows the definition of a multi-scale hierarchical model able to simulate the windings with different degrees of detail.
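The finite-element model itself cannot be reconstructed from the abstract; as a sketch of the physics such quench models solve, here is a minimal explicit 1-D heat-balance simulation of normal-zone propagation along a tape, with Joule heating switched on wherever the temperature exceeds the critical temperature. All material numbers are placeholders, not measured YBCO coated-conductor properties.

```python
import numpy as np

# Minimal explicit 1-D quench model: heat diffusion along the tape with
# Joule heating above Tc. All numbers are illustrative placeholders.
nx, dx, dt = 200, 1.0e-3, 1.0e-4       # cells, m, s
k, rho_c = 100.0, 2.0e6                # W/(m K), J/(m^3 K), effective
t_bath, t_c = 77.0, 92.0               # bath and critical temperature, K
q_joule = 5.0e8                        # W/m^3 dissipated in the normal zone

temp = np.full(nx, t_bath)
temp[95:105] = 120.0                   # hot spot that triggers the quench

for _ in range(2000):                  # 0.2 s of simulated time
    lap = np.zeros(nx)
    lap[1:-1] = (temp[2:] - 2.0 * temp[1:-1] + temp[:-2]) / dx**2
    source = np.where(temp > t_c, q_joule, 0.0)
    temp = temp + dt * (k * lap + source) / rho_c
    temp[0] = temp[-1] = t_bath        # tape ends held at bath temperature

normal_zone_cells = int(np.sum(temp > t_c))   # the normal zone has grown
```

The thesis's thickness-scaling trick addresses exactly this kind of balance equation in 2D, where the tape's extreme aspect ratio would otherwise force an impractically fine mesh.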

This thesis presents a CMOS amplifier with high common-mode rejection designed in UMC 130 nm technology. The goal is to achieve a high amplification factor for a wide range of biological signals (with frequencies in the range of 10 Hz-1 kHz) and to reject the common-mode noise signal. A data acquisition system is presented, composed of a Delta-Sigma-like modulator and an antenna, which is the core of a portable low-complexity radio system; the amplifier is designed to interface the data acquisition system with a sensor that acquires the electrical signal. The modulator asynchronously acquires and samples human muscle activity, sending a quasi-digital pattern that encodes the acquired signal. Translating the muscle activity with this pattern entails only a minor loss of information compared to an encoding technique that uses a standard digital signal via Impulse-Radio Ultra-Wide Band (IR-UWB). The biological signals needed for electromyographic analysis have an amplitude of 10-100 µV and must be highly amplified and separated from the overwhelming 50 mV common-mode noise signal. Various proof-of-concept tests are presented, as well as evidence that the design works with different sensors, such as radiation measurement for dosimetry studies.
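The rejection requirement can be quantified with the common-mode rejection ratio, CMRR = 20·log10(A_d/A_cm). The calculation below combines the signal and noise amplitudes quoted in the abstract with an invented 10x output margin, just to show the order of magnitude involved.

```python
import math

def cmrr_db(diff_gain, cm_gain):
    """Common-mode rejection ratio in decibels."""
    return 20 * math.log10(diff_gain / cm_gain)

# Illustrative sizing (the 10x output margin is an invented figure):
# a 10 uV EMG signal rides on a 50 mV common-mode disturbance, and we
# ask the amplified signal to exceed the common-mode residue by 10x:
#   A_d * 10 uV >= 10 * A_cm * 50 mV  ->  A_d / A_cm >= 50000.
required_ratio = 10 * 50e-3 / 10e-6
required_cmrr = 20 * math.log10(required_ratio)   # roughly 94 dB
```

This is why EMG front-ends are specified with CMRR figures around 90-100 dB rather than the 60-70 dB typical of general-purpose amplifiers.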

In the present thesis we address the problem of detecting and localizing, with MWI, a small spherical target with characteristic electrical properties inside a volume of cylindrical shape representing the female breast. One of the main contributions of this project is the extension of the existing linear inversion algorithm from planar-slice to volume reconstruction; results obtained under the same conditions and experimental setup are reported for the two approaches. A preliminary comparison and performance analysis of the reconstruction algorithms is performed via numerical simulations in a software-created environment: a single dipole antenna illuminates the virtual breast phantom from different positions and, for each position, the corresponding scattered field value is recorded. The collected data are then exploited to reconstruct the investigation domain, along with the scatterer position, in the form of an image called a pseudospectrum. In this process the tumor is modeled as a dielectric sphere of small radius and, for electromagnetic scattering purposes, it is treated as a point-like source. To improve the performance of the reconstruction technique, the acquisition is repeated for a number of frequencies in a given range: the different pseudospectra, reconstructed from single-frequency data, are incoherently combined with the MUltiple SIgnal Classification (MUSIC) method, which returns an overall enhanced image. We exploit this multi-frequency approach to test the performance of the 3D linear inversion reconstruction algorithm while varying the source position inside the phantom and the height of the antenna plane. Analysis results and reconstructed images are reported. Finally, we perform 3D reconstruction from experimental data gathered with the acquisition system in the microwave laboratory at DIFA, University of Bologna, for a recently developed breast-phantom prototype; the resulting pseudospectrum and a performance analysis for the real model are reported.
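MUSIC itself is standard: estimate the data covariance, split signal and noise subspaces, and plot the inverse projection of candidate steering vectors onto the noise subspace, which peaks at the source location. The thesis combines such pseudospectra incoherently over multiple frequencies; the sketch below is a single-frequency toy with a uniform 8-sensor line array and one point source, all parameters invented.

```python
import numpy as np

def music_pseudospectrum(snapshots, n_sources, steering):
    """MUSIC: eigendecompose the sample covariance, keep the noise
    subspace, and return 1 / ||E_n^H a(u)||^2 over the candidate grid.
    `steering` has shape (n_candidates, n_sensors)."""
    r = snapshots @ snapshots.conj().T / snapshots.shape[1]
    _, vecs = np.linalg.eigh(r)                  # eigenvalues ascending
    noise = vecs[:, : r.shape[0] - n_sources]    # noise-subspace basis
    proj = noise.conj().T @ steering.T           # (n_noise, n_candidates)
    return 1.0 / np.sum(np.abs(proj) ** 2, axis=0)

def steering_vec(u, n_sensors=8):
    """Plane-wave steering vector for a half-wavelength-spaced array."""
    return np.exp(1j * np.pi * np.arange(n_sensors) * u)

# Toy scene: one point source at grid index 30 of a sin(theta) grid.
rng = np.random.default_rng(0)
n_snap = 200
grid = np.linspace(-1.0, 1.0, 61)
signal = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
noise = 0.1 * (rng.standard_normal((8, n_snap))
               + 1j * rng.standard_normal((8, n_snap)))
data = np.outer(steering_vec(grid[30]), signal) + noise
steering = np.array([steering_vec(u) for u in grid])
spectrum = music_pseudospectrum(data, 1, steering)
```

The pseudospectrum peaks sharply at the true source index; in the imaging problem the same projection is evaluated over a spatial grid instead of an angle grid.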

STUDY DESIGN Single-centre retrospective study of prospectively collected data, nested within the Eurospine Spine Tango data acquisition system. OBJECTIVE The aim of this study was to assess the patient-rated outcome and complication rates associated with lumbar fusion procedures in three different age groups. SUMMARY OF BACKGROUND DATA There is a general reluctance to consider spinal fusion procedures in elderly patients due to the increased likelihood of complications. METHODS Before and at 3, 12, and 24 months after surgery, patients completed the multidimensional Core Outcome Measures Index (COMI). At the 3-, 12-, and 24-month follow-ups they also rated the Global Treatment Outcome (GTO) and their satisfaction with care. Patients were divided into three age groups: younger (≥50 and <65 y; n = 317), older (≥65 and <80 y; n = 350), and geriatric (≥80 y; n = 40). RESULTS 707 consecutive patients were included. The preoperative comorbidity status differed significantly (p < 0.0001) between the age groups, with the highest scores in the geriatric group. Medical complications during surgery were lower in the younger age group (7%) than in the older (13.4%; p = 0.006) and geriatric groups (17.5%; p = 0.007); surgical complications tended to be higher in the geriatric group (younger, 6.3%; older, 6.0%; geriatric, 15.0%; p = 0.09). There were no significant group differences (p > 0.05) in the scores on any of the COMI domains, GTO, or patient-rated satisfaction at the 3-, 12-, or 24-month follow-ups. CONCLUSIONS Despite greater comorbidity and complication rates in geriatric patients, the patient-rated outcome was as good in the elderly as in the younger age groups up to two years after surgery. These data indicate that geriatric age requires careful consideration of the associated risks but is not per se a contraindication to fusion for lumbar degenerative disease. LEVEL OF EVIDENCE 4.
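The complication-rate comparisons above can be reproduced approximately with a pooled two-proportion z-test; the counts below are reconstructed from the reported percentages (7% of 317 vs 13.4% of 350 medical complications) and are therefore only approximate.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with a pooled variance estimate;
    returns (z, p). The two-sided p-value is erfc(|z| / sqrt(2))."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    return z, math.erfc(abs(z) / math.sqrt(2))

# Counts reconstructed from the reported rates: 22/317 vs 47/350.
z, p_value = two_proportion_z(22, 317, 47, 350)
```

The resulting p-value lands near the 0.006 reported for the younger-vs-older comparison, which is consistent with a test of this kind having been used.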

Next-generation PET scanners must fulfil very demanding requirements in terms of spatial, energy and timing resolution. The performance of modern scanners is inherently limited by the use of standard photomultiplier tubes. The use of Silicon Photomultipliers (SiPMs) is proposed for the construction of a 4D PET module of 4.8×4.8 cm², aimed at replacing the standard PMT-based PET block detector. The module will be based on a continuous LYSO crystal read out on two faces by Silicon Photomultipliers. A high-granularity detection surface made of SiPM matrices with 1.5 mm pitch will be used to determine the x-y photon hit position with submillimetric accuracy, while a low-granularity surface consisting of 16 mm² SiPM pixels will provide the fast timing information (t) used to implement the Time-of-Flight (TOF) technique. The spatial information collected by the two detector layers will be combined to measure the Depth of Interaction (DOI) of each event (z). The use of large-area multi-pixel SiPM detectors requires the development of a multichannel Data Acquisition system (DAQ) as well as of a dedicated front-end, in order not to degrade the intrinsic detector capabilities and to manage the many channels. The paper describes the progress made on the development of the proof-of-principle module under construction at the University of Pisa.
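The benefit of the fast timing layer comes from the TOF relation Δx = c·Δt/2, which confines each event to a short segment of the line of response. The sketch below uses a generic 400 ps coincidence timing resolution, not a measured figure for this module.

```python
# TOF-PET localizes the annihilation along the line of response (LOR)
# from the arrival-time difference of the two 511 keV photons:
# dx = c * dt / 2. The 400 ps value is a generic coincidence timing
# resolution, not a measurement of this module.
C = 299_792_458.0  # speed of light, m/s

def tof_offset_mm(dt_ps):
    """Offset from the LOR midpoint (mm) for a timing difference in ps."""
    return C * (dt_ps * 1e-12) / 2.0 * 1e3

segment_fwhm_mm = tof_offset_mm(400.0)   # localization segment, mm FWHM
```

A 400 ps FWHM timing resolution thus confines each event to roughly 6 cm along the LOR, which is what improves the signal-to-noise ratio of the reconstructed image.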

Crop diseases are sometimes related to the irradiance that the crop receives. When an experiment requires the measurement of irradiance, it usually calls for an expensive data acquisition system, and if many test points must be checked, the use of traditional sensors further increases the cost of the experiment. By using low-cost sensors based on the photovoltaic effect, it is possible to perform a precise irradiance test at a reduced price. This work presents an experiment performed in Ademuz (Valencia, Spain) in September 2011 to check the validity of low-cost sensors based on solar cells.
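Such sensors rely on the near-linear dependence of a cell's short-circuit current on irradiance, so G can be read off as G = G_STC · Isc/Isc,STC. The rated and measured currents below are invented for illustration, and the small temperature coefficient of Isc is ignored.

```python
def irradiance(isc, isc_stc, g_stc=1000.0):
    """Irradiance in W/m^2 from a cell's short-circuit current, using
    the near-linear Isc-G relation: G = G_STC * Isc / Isc_STC.
    The (small) temperature dependence of Isc is neglected."""
    return g_stc * isc / isc_stc

# Invented example: a cell rated at 3.0 A under Standard Test
# Conditions (1000 W/m^2) that currently reads 1.8 A.
g = irradiance(1.8, 3.0)   # W/m^2
```

Because only a current measurement is needed per test point, many such sensors can be deployed for the cost of a single calibrated pyranometer.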

The World Health Organization (WHO) predicts that by the year 2020, Acquired Brain Injury (ABI) will be among the ten most common causes of disability. These injuries dramatically change the life of patients and their families due to their physical, sensory, cognitive, emotional and socio-economic consequences. New techniques of early intervention and the development of intensive ABI care have noticeably improved the survival rate. However, in spite of these advances, brain injuries still have no surgical or pharmacological treatment to re-establish the lost functions; rehabilitation therapies instead aim to restore, minimize or compensate the functional alterations produced by the injury. One of the main objectives of neurorehabilitation is therefore to provide patients with the capacity to perform the Activities of Daily Living (ADL) required for an independent life, especially those in which the Upper Limb (UL) is directly involved, given its importance in manipulating objects within the patient's environment.

The incorporation of new technological aids into the neurorehabilitation process aims at a new paradigm focused on offering personalized, monitored and ubiquitous practice, with continuous assessment of both the efficacy and the efficiency of the procedures and with the capacity to generate new knowledge. The new targets are to minimize the impact of the diseases affecting a person's functional capabilities, to decrease the duration of disability, and to allow more efficient management of resources. These targets, of great socio-economic impact, can only be achieved through new technologies, methodologies and algorithms able to bring about the technological breakthrough needed to overcome the barriers that have so far prevented the universal penetration of technology into the field of rehabilitation. This PhD thesis has achieved the following results:

1. ADL modeling: as a step prior to the incorporation of technological aids into the rehabilitation process, a first phase of modeling and formalization of the knowledge associated with the execution of the therapy activities is necessary. In particular, the most complex and therapeutically relevant tasks are the ADLs, whose formalization yields healthy motion models to be used as a reference for future technological developments aimed at people with ABI. Following a methodology based on UML state-chart diagrams, the ADLs 'serving water from a jar' and 'picking up a bottle' have been modeled.

2. Ubiquitous monitoring of UL movement: a motion acquisition system based on inertial technology has been designed, developed and validated that overcomes the limitations of current commercial devices (very high cost and inability to work in uncontrolled environments); the high correlation coefficients and the low error levels obtained in co-registration sessions with the commercial system BTS SMART-D demonstrate the high precision of the system. In addition, an exploratory study of a very low-cost motion capture system based on stereoscopic vision has been carried out, identifying the key points that must be addressed from a technological point of view before it can be deployed in a real environment.

3. Inverse Kinematics (IK) problem solving: a solution to the IK problem has been designed, developed and validated for a manipulator corresponding to a human UL, studying two alternatives, one based on a Multilayer Perceptron (MLP) and the other on Adaptive Neuro-Fuzzy Inference Systems (ANFIS). Validation, carried out using the previously generated ADL motion models, indicates that an MLP-based solution with 3 neurons in the input layer, one hidden layer of 3 neurons, and an output layer with as many neurons as the UL model has Degrees of Freedom (DoFs) provides the best results, both in precision and in processing time, making it suitable for integration into a system with real-time requirements.

4. Assisted-as-needed intelligent control: an assisted-as-needed control algorithm with anticipatory actuation capabilities has been designed, developed and validated for a robotic orthosis of which a prototype has already been implemented. The results obtained demonstrate that the control system is able to adapt to the dysfunctional profile of the patient by triggering assistance just before an incorrect movement takes place. This strategy increases the participation of the patient and therefore his or her muscle activity, encouraging the brain plasticity processes responsible for motor relearning.

5. Robotic simulators for planning: a robotic assisted-as-needed simulator is proposed as a tool for planning personalized rehabilitation sessions with a defined clinical objective in which a robotic orthosis is involved.

Obtained results indicate that, after the execution of simple parameter-selection algorithms, it is possible to automatically choose a configuration that makes the assisted-as-needed control algorithm adapt both to the clinical criteria and to the patient under study. These results invite further work on more advanced parameter-selection algorithms based on batteries of simulations. Together, these results have corroborated the research hypotheses set out at the beginning of the thesis and have opened new lines of research in all the studied application fields.
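The MLP branch of result 3 can be sketched as follows: train a small network to map end-effector positions back to joint angles. For brevity this toy uses a 2-DoF planar arm and a 16-unit hidden layer trained by plain batch gradient descent, rather than the 3-3-N architecture and upper-limb model of the thesis; all numeric choices are stand-ins.

```python
import numpy as np

# Toy version of the MLP inverse-kinematics idea: learn joint angles
# from end-effector positions of a 2-DoF planar arm.
rng = np.random.default_rng(0)
L1 = L2 = 1.0                                   # link lengths (arbitrary)

def forward_kinematics(q):
    """End-effector position for joint angles q of shape (n, 2)."""
    x = L1 * np.cos(q[:, 0]) + L2 * np.cos(q[:, 0] + q[:, 1])
    y = L1 * np.sin(q[:, 0]) + L2 * np.sin(q[:, 0] + q[:, 1])
    return np.stack([x, y], axis=1)

# Restrict angles to one elbow configuration so the inverse is unique.
q_train = rng.uniform([0.2, 0.3], [1.2, 2.0], size=(2000, 2))
p_train = forward_kinematics(q_train)

# One tanh hidden layer, linear output, plain batch gradient descent.
w1 = 0.5 * rng.standard_normal((2, 16))
b1 = np.zeros(16)
w2 = 0.5 * rng.standard_normal((16, 2))
b2 = np.zeros(2)
lr = 0.01
losses = []
for _ in range(4000):
    h = np.tanh(p_train @ w1 + b1)
    err = (h @ w2 + b2) - q_train               # prediction error
    losses.append(float(np.mean(err ** 2)))
    gh = (err @ w2.T) * (1.0 - h ** 2)          # backprop through tanh
    w2 -= lr * (h.T @ err) / len(err)
    b2 -= lr * err.mean(axis=0)
    w1 -= lr * (p_train.T @ gh) / len(err)
    b1 -= lr * gh.mean(axis=0)
```

Once trained, the forward pass is a handful of matrix products, which is what makes the MLP approach attractive under real-time constraints.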

This work presents a study of the operation and applications of PEM (proton-exchange-membrane) fuel cells fed with pure hydrogen and with oxygen obtained from compressed air. After evaluating the cell process and the variables involved in it, such as pressure, humidity and temperature, a variety of methods is presented for instrumenting those variables, together with methods and systems for stabilising and controlling them around the optimal values that maximise process efficiency.

Taking temperature as the main controlled variable, with an operating range centred around 80 °C, a model of the heating process and of the temperature evolution as a function of the resistive heater power is derived in the complex frequency domain, and a measurement system based on type-K thermocouples, whose response is almost linear, is implemented. The sensor signal is differentially amplified with INA2126 instrumentation amplifiers, and a cold-junction error-correction algorithm is developed (this error is produced by the additional connector metals taking part in the thermocouple effect). Test data for the temperature measurement system are included, together with the deviations or errors with respect to the ideal measured values. Data acquisition and the control algorithms run on a PC with National Instruments LabVIEW software, which allows intuitive, versatile and visual programming and the creation of simple graphical user interfaces. The instrumentation and control hardware of the cell is connected to the PC through a USB NI 6800 data acquisition interface with a large number of analog inputs and outputs.

Once the measured signal has been digitised and the cold-junction error corrected, a PID (proportional-integral-derivative) controller is implemented in that software; it is one of the most suitable methods for controlling this kind of variable because of its programming simplicity and its effectiveness. To evaluate the behaviour of the system, simulations are presented using Matlab and Simulink, thereby determining the best strategies for the PID control as well as the expected outcomes of the process. The fluids are heated by a resistive heater element whose power is controlled by an electronic circuit comprising a zero-crossing detector of the AC supply wave and a TRIAC with its drive circuit. Analogously, the gas pressure in the circuit, a variable that stays around 3 atmospheres, is instrumented with a pressure sensor with a 4-20 mA current-loop output and a simple current-to-voltage converter at the input of the data acquisition system. The scheme and the components needed for piping, heating and humidifying the gases used in the process are presented, together with the placement of sensors and actuators. Finally, the work lists the algorithms developed and includes an appendix with information on the LabVIEW software.
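The software PID loop described in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the thesis code: the first-order thermal plant model, the gains, the 0-100 heater-power range and the anti-windup strategy (integrating only while the output is unsaturated) are all assumed values chosen for the example.

```python
# Hypothetical sketch of a software PID loop holding a PEM cell near
# 80 degrees C. Plant model, gains and limits are illustrative only.

def clip(x, lo, hi):
    return max(lo, min(hi, x))

class PID:
    """Discrete PID with conditional integration as anti-windup."""

    def __init__(self, kp, ki, kd, dt, out_min=0.0, out_max=100.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        derivative = (error - self.prev_error) / self.dt
        raw = (self.kp * error
               + self.ki * (self.integral + error * self.dt)
               + self.kd * derivative)
        out = clip(raw, self.out_min, self.out_max)
        if raw == out:                       # integrate only when unsaturated
            self.integral += error * self.dt
        self.prev_error = error
        return out

def simulate(steps, dt=0.5, t_amb=25.0, tau=50.0, setpoint=80.0):
    """Euler simulation of a first-order heater: dT/dt = (P - (T - t_amb)) / tau."""
    pid = PID(kp=4.0, ki=0.2, kd=2.0, dt=dt)
    temp = t_amb
    for _ in range(steps):
        power = pid.update(setpoint, temp)           # heater power, 0..100
        temp += dt * (power - (temp - t_amb)) / tau  # plant update
    return temp
```

With these illustrative gains the simulated temperature settles on the setpoint; in the real system the gains would come from the Matlab/Simulink study mentioned above.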

Relevância:

60.00% 60.00%

Publicador:

Resumo:

The solar irradiation that a crop receives is directly related to the physical and biological processes that affect the crop. However, assessing solar irradiation poses certain problems when it must be measured at the fruit inside the canopy of a tree. In such cases, many test points must be checked, which usually requires an expensive data acquisition system. The use of conventional irradiance sensors increases the cost of the experiment, making them unsuitable. Nevertheless, a precise irradiance test can still be performed at a reduced price by using low-cost sensors based on the photovoltaic effect. The aim of this work is to develop a low-cost sensor that permits the measurement of irradiance inside the tree canopy. Two solar-cell technologies were analyzed for measuring solar irradiation levels inside tree canopies, and two data acquisition system setups were also tested and compared. Experiments were performed in Ademuz (Valencia, Spain) in September 2011 and September 2012 to check the validity of the low-cost sensors based on solar cells and of their associated data acquisition systems. The observed difference between solar irradiation at high and low positions was 18.5% ± 2.58% at a 95% confidence interval. Large differences were observed between the operation of the two tested sensors: in the mini-modules based on a-Si cells, a partial-shadowing effect was detected due to the larger size of the devices, so the use of individual c-Si cells is recommended over a-Si mini-modules.
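Reading a photovoltaic cell as a low-cost irradiance sensor can be sketched as follows. The calibration constants (reference short-circuit current, temperature coefficient, shunt resistance) are hypothetical placeholders, not the values of the experiment; the sketch relies only on the fact that a c-Si cell operated near short circuit is approximately linear in irradiance, so a single calibration point suffices.

```python
# Illustrative sketch of a low-cost irradiance sensor based on a c-Si
# solar cell. All calibration constants below are assumed values.

G_REF = 1000.0     # W/m^2, reference irradiance used for calibration
I_SC_REF = 0.120   # A, hypothetical short-circuit current at G_REF and 25 C
ALPHA = 0.0005     # 1/C, hypothetical temperature coefficient of Isc
R_SHUNT = 0.1      # ohm, shunt resistor read by the data acquisition system

def irradiance(v_shunt, cell_temp_c, t_ref_c=25.0):
    """Estimate irradiance (W/m^2) from the shunt voltage across a c-Si cell."""
    i_sc = v_shunt / R_SHUNT
    # remove the (small) temperature dependence of the short-circuit current
    i_sc_corrected = i_sc / (1.0 + ALPHA * (cell_temp_c - t_ref_c))
    return G_REF * i_sc_corrected / I_SC_REF
```

For example, a shunt voltage of 12 mV at 25 °C maps to the full reference irradiance of 1000 W/m² under these assumed constants.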

Relevância:

60.00% 60.00%

Publicador:

Resumo:

The aim of the current work is the development of a method to characterize force sensors under sinusoidal excitations using a primary standard as the source of traceability. During this work the influence factors have been studied, and a method to minimise their contributions, as well as the corrections to be performed under dynamic conditions, has been established. These results allow an adequate characterization of force sensors under sinusoidal excitations, which is essential for their subsequent proper use under dynamic conditions. The traceability of the sensor characterization is based on the direct definition of force as mass multiplied by acceleration. To do so, the sensor is loaded with different calibrated loads and is maintained under different sinusoidal accelerations by means of a vibration shaker system able to generate accelerations of up to 100 m/s² with frequencies from 5 Hz up to 2400 Hz. The acceleration is measured by means of a laser vibrometer with traceability to the units of time and length. A multiple-channel data acquisition system is also required to simultaneously acquire, in real time, the electrical output signals of the instruments involved.
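The traceability chain above (force defined as mass times acceleration) can be illustrated with a small sketch. The least-squares sine fit and all signal parameters are illustrative assumptions, not the actual measurement procedure of the work: the reference force amplitude comes from the calibrated mass and the vibrometer's acceleration amplitude, and dividing the sensor's output amplitude by it gives a dynamic sensitivity at the excitation frequency.

```python
import math

# Illustrative sketch (not the procedure of the work itself): the
# reference force is F(t) = m * a(t), with a(t) taken from the laser
# vibrometer. Fitting the known-frequency sine amplitude on both the
# acceleration and the sensor-output channels yields the sensor's
# dynamic sensitivity at that excitation frequency.

def sine_amplitude(samples, fs, freq):
    """Least-squares amplitude of a sine of known frequency in a sampled signal."""
    n = len(samples)
    # orthogonal projection onto the sin/cos pair at `freq`
    c = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(samples))
    s = sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(samples))
    return math.hypot(2 * c / n, 2 * s / n)

def dynamic_sensitivity(v_sensor, accel, mass_kg, fs, freq):
    """Sensor output amplitude (V) divided by the reference force amplitude (N)."""
    force_amp = mass_kg * sine_amplitude(accel, fs, freq)   # F = m * a
    return sine_amplitude(v_sensor, fs, freq) / force_amp
```

Repeating this at each excitation frequency of the shaker (5 Hz to 2400 Hz in the setup described) would trace out the sensor's frequency-dependent sensitivity.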