944 results for Data processing methods
Abstract:
In this work, a platform for conditioning, digitizing, visualizing, and recording EMG signals was developed. After acquisition, the analysis can be carried out with signal processing techniques. The platform consists of two modules which acquire electromyography (EMG) signals through surface electrodes, limit the frequency band of interest, filter out power-grid interference, and digitize the signals with the analog-to-digital converter of each module's microcontroller. The data are then sent to the computer over the USB interface using the HID specification, displayed in real time in graphical form, and stored in files. The processing resources implemented are signal rectification (absolute value), effective (RMS) value computation, Fourier analysis, digital (IIR) filtering, and adaptive filtering. Initial tests of the platform were performed with signals from lower and upper limbs in order to compare EMG signal laterality. The open platform is intended for educational activities and academic research, allowing the addition of other processing methods that a researcher may want to evaluate, or other required analyses.
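As a rough illustration of the processing chain this abstract lists (IIR filtering of grid interference, rectification, RMS), the following Python sketch applies those steps to a raw EMG array. The sampling rate, notch frequency, and window length are assumptions for the example, not values from the platform described above.

```python
# Sketch of the EMG processing steps named in the abstract: IIR notch
# filtering of power-grid interference, full-wave rectification, and a
# moving RMS envelope. Sampling rate and window size are assumptions.
import numpy as np
from scipy import signal

FS = 2000          # assumed sampling rate, Hz
NOTCH_HZ = 60.0    # assumed power-grid frequency (50 Hz in many countries)

def process_emg(raw, fs=FS, window_s=0.050):
    # IIR notch filter to suppress power-grid interference
    b, a = signal.iirnotch(NOTCH_HZ, Q=30.0, fs=fs)
    filtered = signal.filtfilt(b, a, raw)
    # Full-wave rectification (absolute value)
    rectified = np.abs(filtered)
    # Moving RMS envelope over a short window
    n = int(window_s * fs)
    kernel = np.ones(n) / n
    return np.sqrt(np.convolve(rectified**2, kernel, mode="same"))

emg = np.random.randn(4 * FS)   # stand-in for an acquired EMG record
envelope = process_emg(emg)
print(envelope[:5])
```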
Abstract:
In order to reduce delays caused by debonding of any accessory during Lingual Orthodontic treatment, bond strength is considered very important, especially at the different interfaces present between the bracket and the PAD resin, between the PAD resin and the light-cured resin cement, and between this cement and the dental enamel. This study therefore focused on determining the bond strength at the interface between the resin of the PAD base and the light-cured resin cement, using hydrofluoric acid and aluminum oxide as surface treatments prior to the indirect bonding of the lingual technique. MATERIALS AND METHODS: The study was experimental, "in vitro", with a sample of 30 test specimens made of Transbond XT resin, fabricated using a bracket blister. Three different protocols were followed: G1, the control group, with no preparation; G2, with 50-micron aluminum oxide applied to the specimen surface for 10 seconds; and G3, with 9% hydrofluoric acid applied to the specimen surface for 10 minutes. Before the bond-strength test, precision cuts were made in each specimen, yielding 45 test strips; each sample was attached to a holder for the microtensile test, which was performed on a Mini-Instron model 5942 universal testing machine at a constant strain rate of 0.5 mm/min. The data were subjected to the Shapiro-Wilk test for normality of residuals (p > 0.05) and to Levene's test for homogeneity of variances. Bond strength was compared between groups by one-way analysis of variance (ANOVA). For all analyses, the significance level was 5% (p < 0.05) with a 95% confidence level (95% CI); values below 0.05 were considered statistically significant. RESULTS AND CONCLUSIONS: The ANOVA revealed that the surface-treatment factor, F(2,12) = 2.52, p = 0.12, is not significant; the different surface treatments used (aluminum oxide and hydrofluoric acid) are therefore equivalent to the control group, indicating that they do not significantly influence the bond strength (BS) values at the interface between the resin of the PAD base and the light-cured resin cement. It is concluded that any of the surface-treatment protocols described in this study may be used.
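The statistical pipeline described above (Shapiro-Wilk on residuals, Levene's test for variance homogeneity, then one-way ANOVA) maps directly onto SciPy. The sketch below assumes three arrays of bond-strength values, one per group, with made-up numbers standing in for the study's measurements.

```python
# Sketch of the abstract's test sequence: normality of residuals
# (Shapiro-Wilk), homogeneity of variances (Levene), one-way ANOVA.
# Group data are placeholders, not the study's measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
g1 = rng.normal(20, 3, 5)   # control
g2 = rng.normal(21, 3, 5)   # aluminum oxide
g3 = rng.normal(19, 3, 5)   # hydrofluoric acid

residuals = np.concatenate([g - g.mean() for g in (g1, g2, g3)])

print("Shapiro-Wilk:", stats.shapiro(residuals))      # want p > 0.05
print("Levene:", stats.levene(g1, g2, g3))            # want p > 0.05
print("One-way ANOVA:", stats.f_oneway(g1, g2, g3))   # F(2, 12) here, as in the study
```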
Abstract:
The only method used to date to measure dissolved nitrate concentration (NITRATE) with sensors mounted on profiling floats is based on the absorption of light at ultraviolet wavelengths by the nitrate ion (Johnson and Coletti, 2002; Johnson et al., 2010; 2013; D'Ortenzio et al., 2012). Nitrate has a modest UV absorption band with a peak near 210 nm, which overlaps with the stronger absorption band of bromide, which has a peak near 200 nm. In addition, there is a much weaker absorption due to dissolved organic matter and light scattering by particles (Ogura and Hanya, 1966). The UV spectrum thus consists of three components: bromide, nitrate, and a background due to organics and particles. The background also includes thermal effects on the instrument and slow drift. All of these latter effects (organics, particles, thermal effects, and drift) tend to be smooth spectra that combine to form an absorption spectrum that is linear in wavelength over relatively short wavelength spans. If the light absorption spectrum is measured in the wavelength range around 217 to 240 nm (the exact range is somewhat at the operator's discretion), then the nitrate concentration can be determined. Two different instruments based on the same optical principles are in use for this purpose. The In Situ Ultraviolet Spectrophotometer (ISUS), built at MBARI or at Satlantic, has been mounted inside the pressure hull of Teledyne/Webb Research APEX and NKE Provor profiling floats, with the optics penetrating through the upper end cap into the water. The Satlantic Submersible Ultraviolet Nitrate Analyzer (SUNA) is placed on the outside of APEX, Provor, and Navis profiling floats in its own pressure housing and is connected to the float through an underwater cable that provides power and communications. Power, communications between the float controller and the sensor, and data processing requirements are essentially the same for both ISUS and SUNA. Several possible algorithms can be used for the deconvolution of nitrate concentration from the observed UV absorption spectrum (Johnson and Coletti, 2002; Arai et al., 2008; Sakamoto et al., 2009; Zielinski et al., 2011). In addition, the default algorithm available in Satlantic sensors is a proprietary approach, but it is not generally used on profiling floats. There are tradeoffs in every approach. To date, almost all nitrate sensors on profiling floats have used the Temperature Compensated Salinity Subtracted (TCSS) algorithm developed by Sakamoto et al. (2009), and this document focuses on that method. Further algorithm development is likely, so it is necessary that the data systems clearly identify the algorithm that is used. It is also desirable that the data system allow for recalculation of prior data sets using new algorithms. To accomplish this, the float must report not just the computed nitrate but also the observed light intensities. The rule to obtain a single NITRATE parameter is therefore: if the spectrum is present, NITRATE should be recalculated from the spectrum; this recomputation of nitrate concentration can also generate useful diagnostics of data quality.
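To make the deconvolution idea concrete, here is a minimal least-squares sketch in the spirit of the TCSS approach: the measured absorbance over roughly 217 to 240 nm is modeled as a salinity-scaled bromide contribution plus nitrate absorption plus a linear baseline. The extinction spectra and coefficients are placeholders, not Sakamoto et al.'s calibration values, and the temperature compensation of the bromide term is omitted for brevity.

```python
# Minimal sketch of the spectral deconvolution: model observed absorbance
# as bromide (scaled by salinity) + nitrate + a linear baseline, then
# solve for nitrate by linear least squares. Extinction shapes are
# invented placeholders; TCSS temperature compensation is omitted.
import numpy as np

wl = np.arange(217.0, 240.5, 0.5)            # wavelength grid, nm
eps_no3 = np.exp(-(wl - 210.0) / 8.0)        # assumed nitrate extinction shape
eps_br = np.exp(-(wl - 200.0) / 6.0)         # assumed bromide extinction shape

def fit_nitrate(absorbance, salinity):
    # Subtract the seawater (bromide) contribution predicted from salinity,
    # then fit nitrate extinction plus a linear baseline to the residual.
    residual = absorbance - salinity * eps_br
    A = np.column_stack([eps_no3, np.ones_like(wl), wl])
    coef, *_ = np.linalg.lstsq(A, residual, rcond=None)
    nitrate, intercept, slope = coef
    return nitrate

# Synthetic check: 25 uM nitrate, salinity 35, small sloping background
obs = 35 * eps_br + 25 * eps_no3 + 0.01 + 1e-4 * wl
print(fit_nitrate(obs, salinity=35.0))       # recovers ~25
```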
Abstract:
The CATARINA Leg 1 cruise was carried out from June 22 to July 24, 2012 on board the B/O Sarmiento de Gamboa, under the scientific supervision of Aida Rios (CSIC-IIM). It included the reoccupation of the OVIDE hydrological section, previously performed in June 2002, 2004, 2006, 2008, and 2010 as part of the CLIVAR program (section A25), under the supervision of Herlé Mercier (CNRS-LPO). This section begins near Lisbon (Portugal), runs through the West European Basin and the Iceland Basin, crosses the Reykjanes Ridge 300 miles north of the Charlie-Gibbs Fracture Zone, and ends at Cape Hoppe (southeast tip of Greenland). The objective of this repeated hydrological section is to monitor the variability of water mass properties and main current transports in the basin, complementing the international observation array relevant for climate studies. In addition, the Labrador Sea was partly sampled (stations 101-108) between Greenland and Newfoundland, but heavy weather prevented completion of the section south of 53°40'N. The quality of the CTD data is essential to the first objective of the CATARINA project, i.e. to quantify the Meridional Overturning Circulation and water mass ventilation changes and their effect on the changes in the anthropogenic carbon ocean uptake and storage capacity. The CATARINA project was mainly funded by the Spanish Ministry of Science and Innovation and co-funded by the Fondo Europeo de Desarrollo Regional. The hydrological OVIDE section includes 95 surface-to-bottom stations from coast to coast, collecting profiles of temperature, salinity, oxygen, and currents, spaced by 2 to 25 nautical miles depending on the steepness of the topography. The positions of the stations closely follow those of OVIDE 2002. In addition, 8 stations were carried out in the Labrador Sea. From the 24 bottles closed at various depths at each station, seawater samples are used for salinity and oxygen calibration, and for measurements of biogeochemical components that are not reported here. The data were acquired with a Seabird CTD (SBE911+) and an SBE43 sensor for dissolved oxygen, belonging to the Spanish UTM group. The SBE Data Processing software was used after decoding and cleaning the raw data. The LPO Matlab toolbox was then used to calibrate and bin the data as was done for the previous OVIDE cruises, using on the one hand pre- and post-cruise calibration results for the pressure and temperature sensors (done at Ifremer) and on the other hand the water samples from the 24 rosette bottles at each station for the salinity and dissolved oxygen data. A final accuracy of 0.002°C, 0.002 psu, and 0.04 ml/l (2.3 µmol/kg) was obtained on the final profiles of temperature, salinity, and dissolved oxygen, compatible with the international requirements issued from the WOCE program.
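The bottle-based calibration step described above is, at its core, a regression of sensor readings against reference sample values. The sketch below shows that idea for salinity with a simple linear fit; the numbers are invented, not the cruise's calibration coefficients.

```python
# Sketch of bottle calibration: fit a linear correction mapping raw CTD
# salinity onto bottle (reference) salinity, then apply it to full
# profiles. Sample values are invented, not CATARINA/OVIDE data.
import numpy as np

ctd_sal = np.array([34.902, 35.013, 35.124, 34.781, 35.240])     # sensor
bottle_sal = np.array([34.905, 35.017, 35.127, 34.785, 35.243])  # reference

slope, intercept = np.polyfit(ctd_sal, bottle_sal, 1)

def calibrate(raw_salinity):
    return slope * raw_salinity + intercept

residuals = bottle_sal - calibrate(ctd_sal)
print("rms residual:", np.sqrt(np.mean(residuals**2)))  # accuracy estimate
```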
Abstract:
Master's dissertation, Universidade de Brasília, Instituto de Química, 2016.
Abstract:
…vulnerability to landslides on Cerro Tamuga in the Paute canton, Azuay province. The methodology consists of using the DGPS (Differential Global Positioning System) technique, which involves the simultaneous use of two or more receivers; the measurement method used for the DGPS observations was rapid static, with a measurement time of ten minutes per benchmark. The results were compared with measurements made with a total station, for which the triangulation measurement and computation method was applied: the same benchmark is observed from two different bases in order to triangulate and process the data. During the sampling stage, 20 measurement campaigns were carried out with DGPS techniques, monitoring a total of 14 benchmarks; with conventional (topographic) techniques, 7 campaigns were carried out and 14 benchmarks were monitored. From these data, the difference between the last and the first measurement is obtained for the X, Y, and Z values, and thus the variation in precision for the two measurement methods (DGPS and total station). With the results (∆X, ∆Y, ∆Z), the directionality of the displacement vectors is analyzed using the difference between the average of all measurements and the first measured point. The DGPS results show less data variability, so this technique is suggested for measuring displacement over large areas. Regarding the Cerro Tamuga case study, the DGPS measurements determined that it shows no movement, but the monitoring campaigns should continue in order to assess the long-term situation.
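The displacement analysis sketched in this abstract, differencing benchmark coordinates between campaigns and examining the magnitude and direction of the resulting vectors, can be illustrated as follows; the coordinates are fabricated for the example, not Cerro Tamuga data.

```python
# Sketch of the displacement computation described above: for each
# benchmark, take the coordinate difference between the last and first
# campaigns, then report magnitude and horizontal direction (azimuth).
# Coordinates are invented placeholders.
import numpy as np

# campaigns x benchmarks x (X, Y, Z); two benchmarks for brevity
campaigns = np.array([
    [[500.000, 9700.000, 2510.000], [620.000, 9650.000, 2498.000]],
    [[500.004, 9700.002, 2509.998], [620.003, 9649.998, 2497.997]],
    [[500.006, 9700.003, 2509.997], [620.005, 9649.997, 2497.996]],
])

delta = campaigns[-1] - campaigns[0]          # (dX, dY, dZ) per benchmark
magnitude = np.linalg.norm(delta, axis=1)
azimuth = np.degrees(np.arctan2(delta[:, 0], delta[:, 1])) % 360

for i, (m, az) in enumerate(zip(magnitude, azimuth), start=1):
    print(f"benchmark {i}: displacement {m*1000:.1f} mm, azimuth {az:.1f} deg")
```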
Abstract:
When it comes to information sets in real life, pieces of the whole set are often unavailable. This problem can have various origins and therefore exhibits different patterns. In the literature it is known as Missing Data. The issue can be handled in various ways: by discarding incomplete observations, by estimating what the missing values originally were, or simply by ignoring the fact that some values are missing. The methods used to estimate missing data are called Imputation Methods. The work presented in this thesis has two main goals. The first is to determine whether any interactions exist between missing data, imputation methods, and supervised classification algorithms when they are applied together. For this first problem we consider a scenario in which the databases used are discrete, where "discrete" means it is assumed that there is no relation between observations. These datasets underwent processes involving different combinations of the three components mentioned. The outcome showed that the missing-data pattern strongly influences the results produced by a classifier. Also, in some cases, the complex imputation techniques investigated in the thesis obtained better results than simple ones. The second goal of this work is to propose a new imputation strategy, this time constraining the specifications of the previous problem to a special kind of dataset: multivariate time series. We designed new imputation techniques for this particular domain and combined them with some of the contrasted strategies tested in the previous chapter of this thesis. The time series were likewise subjected to processes involving missing data and imputation, in order to finally propose an overall better imputation method. In the final chapter of this work, a real-world example is presented, describing a water-quality prediction problem. The databases that characterize this problem have their own original latent values, which provides a real-world benchmark to test the algorithms developed in this thesis.
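As a concrete example of the contrast this thesis draws between simple and time-aware imputation, the sketch below fills gaps in a small multivariate time series two ways: a column-mean fill and a linear interpolation along the time index. The data and the two methods are illustrative stand-ins, not the techniques developed in the thesis.

```python
# Sketch contrasting a simple imputation (column mean) with a time-aware
# one (linear interpolation) on a multivariate time series with gaps.
# Values are placeholders, not the thesis's datasets.
import numpy as np
import pandas as pd

idx = pd.date_range("2024-01-01", periods=6, freq="h")
df = pd.DataFrame(
    {"temp": [14.0, np.nan, 15.0, np.nan, 17.0, 18.0],
     "ph":   [7.1, 7.2, np.nan, 7.4, np.nan, 7.5]},
    index=idx,
)

mean_filled = df.fillna(df.mean())              # ignores temporal order
interp_filled = df.interpolate(method="time")   # respects temporal order

print(mean_filled)
print(interp_filled)
```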
Abstract:
In today's world, geospatial information is demanded in near real time, which requires the speed at which these data are processed and made available to the user to be at an all-time high. To keep up with this ever-increasing speed, analysts must find ways to increase their productivity. At the same time, the demand for new analysts is high, and current training methods are long and can be costly. Through the use of human-computer interaction and basic networking systems, this paper explores new ways to increase efficiency in data processing and analyst training.
Abstract:
The purpose of this research study was to demonstrate the practical linguistic study and evaluation of dissertations using two examples of the latest technology: the microcomputer and the optical scanner. That involved developing efficient methods for data entry and creating computer algorithms appropriate for personal linguistic studies. The goal was to develop a prototype investigation demonstrating practical solutions for maximizing the linguistic potential of the dissertation database. Text was entered with a Dest PC Scan 1000 Optical Scanner, whose function was to copy the complete stack of educational dissertations from the Florida Atlantic University Library into an I.B.M. XT microcomputer. The optical scanner demonstrated its practical value by copying 15,900 pages of dissertation text directly into the microcomputer. A total of 199 dissertations, or 72% of the entire stack of education dissertations (277), were successfully copied into the microcomputer's word processor, where each dissertation was analyzed for a variety of syntax frequencies. The results of the study demonstrated the practical use of the optical scanner for data entry, the microcomputer for data and statistical analysis, and the availability of the college library as a natural setting for text studies. A supplemental benefit was the establishment of a computerized dissertation corpus for future research and study. The final step was to build a linguistic model of the differences in dissertation writing styles by creating 7 factors from 55 dependent variables through principal components factor analysis. The 7 factors (textual components) were then named and described on a hypothetical construct defined as a continuum from a conversational, interactional style to a formal, academic writing style. The 7 factors were then grouped through discriminant analysis to create discriminant functions for each of the 7 independent variables. The results indicated that a conversational, interactional writing style was associated with more recent dissertations (1972-1987), greater author age, females, and the department of Curriculum and Instruction. A formal, academic writing style was associated with older dissertations (1972-1987), younger authors, males, and the department of Administration and Supervision. It was concluded that there were no significant differences in writing style due to subject matter (community college studies compared to other subject matter), nor due to the institution of dissertation origin (Florida Atlantic University, University of Central Florida, Florida International University).
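The final modeling step described above, reducing 55 style variables to 7 factors and then classifying groups with discriminant functions, corresponds to a standard factor-extraction-plus-discriminant pipeline. The sketch below uses plain PCA as the factor step and random stand-in data rather than the dissertation corpus.

```python
# Sketch of the modeling pipeline described above: principal components
# analysis to reduce 55 syntax-frequency variables to 7 factors, then
# discriminant analysis on a grouping variable. Data are random
# placeholders, not the dissertation corpus.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
X = rng.normal(size=(199, 55))          # 199 dissertations, 55 variables
gender = rng.integers(0, 2, size=199)   # example grouping variable

factors = PCA(n_components=7).fit_transform(X)   # 7 textual components
lda = LinearDiscriminantAnalysis().fit(factors, gender)
print("classification accuracy:", lda.score(factors, gender))
```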
Abstract:
This study aims to characterize the users of the National Long-Term Care Network (NL-TCN). The Portuguese National Health Service was restructured in 2006 with the creation of the National Long-Term Care Network, to respond to new health and social needs concerning continuity of care. Objectives: to analyse the sociodemographic profile of network users and to review hospital, local, and regional management procedures. Methods: various observational methods were used; data processing and presentation of results were done with the Statistical Package for the Social Sciences, version 20, using descriptive statistics (frequencies, crosstabs, and the chi-square test). A Pearson correlation test showed a positive correlation between the duration of procedures at the local and regional management levels and the hospital length of stay. Results: from a sample of 805 cases, 595 (74%) were admitted to the NL-TCN, a rate lower than the national average (86%). Almost half of the sample was admitted to Rehabilitation Units (46%), while nationally the highest number of admissions was in Home Care Teams (30%). The average time from hospital referral to network admission was 9.73 days, with a positive correlation between network referral management procedures and hospital length of stay. Conclusions: among specialized units, the maximum waiting times were for the Long-Term and Support Units (mean 30.27 days) and the minimum waiting times were for Home Care Teams (mean 5.57 days). The average time between the local and regional management levels was 3.59 days. Almost 90% of referrals came from orthopaedics, internal medicine, and neurology, and network users were mostly elderly (average 75 years old), female, and married. Most users were admitted to inpatient units (78%) and only 15% remained in their home town.
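The association tests named above (chi-square on crosstabs, Pearson correlation between referral time and length of stay) have direct SciPy equivalents; the sketch below uses placeholder numbers, not NL-TCN records.

```python
# Sketch of the tests named in the abstract: a chi-square test on a
# crosstab and a Pearson correlation between referral time and hospital
# length of stay. All numbers are invented placeholders.
import numpy as np
from scipy import stats

crosstab = np.array([[120, 80],     # e.g. admitted vs. not, by sex
                     [300, 305]])
chi2, p, dof, _ = stats.chi2_contingency(crosstab)
print(f"chi-square={chi2:.2f}, p={p:.3f}")

referral_days = np.array([4, 6, 9, 12, 15, 20])
length_of_stay = np.array([10, 12, 18, 22, 25, 33])
r, p = stats.pearsonr(referral_days, length_of_stay)
print(f"Pearson r={r:.2f}, p={p:.3f}")
```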
Abstract:
Background: Alterations in the intestinal microbiota have been correlated with a growing number of diseases. Investigating the faecal microbiota is widely used as a non-invasive and ethically simple proxy for intestinal biopsies. There is an urgent need for collection and transport media that would allow faecal sampling at a distance from the processing laboratory, obviating the need for the same-day DNA extraction recommended by previous studies of freezing and processing methods for stool. We compared the faecal bacterial DNA quality and apparent phylogenetic composition obtained using a commercial kit for stool storage and transport (DNA Genotek OMNIgene GUT) with those of freshly extracted samples, 22 from infants and 20 from older adults. Results: Use of the storage vials increased the quality of extracted bacterial DNA by reducing DNA shearing. When the infant and elderly datasets were examined separately, no differences in microbiota composition were observed due to storage. When the two datasets were combined, a Wilcoxon test showed a difference in the relative proportions of Faecalibacterium, Sporobacter, Clostridium XVIII, and Clostridium XlVa after 1 week's storage compared with immediately extracted samples. After 2 weeks' storage, Bacteroides abundance was also significantly different, showing an apparent increase from week 1 to week 2. The microbiota composition of infant samples was more affected by storage than that of elderly samples, with significantly higher Spearman distances between paired freshly extracted and stored samples (p
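A minimal version of the paired comparison used above, a Wilcoxon signed-rank test on the relative abundance of one genus in fresh versus stored aliquots of the same samples, might look like the following; the abundance values are invented.

```python
# Sketch of the paired comparison described above: Wilcoxon signed-rank
# test on the relative abundance of one genus, fresh vs. stored aliquots
# of the same samples. Abundances are invented placeholders.
import numpy as np
from scipy import stats

fresh = np.array([0.12, 0.08, 0.15, 0.10, 0.09, 0.11, 0.13, 0.07])
stored = np.array([0.10, 0.07, 0.12, 0.09, 0.08, 0.10, 0.11, 0.06])

stat, p = stats.wilcoxon(fresh, stored)
print(f"Wilcoxon statistic={stat}, p={p:.3f}")
```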
Abstract:
Liquid chromatography coupled with mass spectrometry is one of the most powerful tools in the toxicologist's arsenal for detecting a wide variety of compounds in many different matrices. However, the huge number of potentially abused substances, and of new substances especially designed as intoxicants, poses a problem in a forensic toxicology setting. Most methods are targeted and designed to cover a very specific drug or group of drugs, while many other substances remain undetected. High-resolution mass spectrometry, more specifically time-of-flight mass spectrometry, is an extremely powerful tool for analysing a multitude of compounds not only simultaneously but also retroactively. The data obtained with a time-of-flight instrument contain all compounds made available by sample extraction and chromatography, and can be reprocessed at a later time with an improved library to detect previously unrecognised compounds without having to analyse the respective sample again. The aim of this project was to determine the utility and limitations of time-of-flight mass spectrometry as a general and easily expandable screening method. The resolution of time-of-flight mass spectrometry allows the separation of compounds with the same nominal mass but distinct exact masses without the need to separate them chromatographically. To simulate the wide variety of potentially encountered drugs in such a general screening method, seven drugs (morphine, cocaine, zolpidem, diazepam, amphetamine, MDEA and THC) were chosen to represent this variety in terms of mass, properties, and functional groups. Several liquid-liquid and solid-phase extractions were then applied to urine samples to determine the most generally suitable and unspecific extraction. Chromatography was optimised by investigating the pH, concentration, organic solvent, and gradient of the mobile phase to improve the data obtained by the time-of-flight instrument. The resulting method was validated as a qualitative confirmation/identification method. Data processing was automated using the software TargetAnalysis, which provides excellent analyte recognition based on retention time, exact mass, and isotope pattern. The recognition of isotope patterns allows analytes to be identified even in interference-rich mass spectra and proved to be a good positive indicator. Finally, the validated method was applied to samples received from the A&E Department of Glasgow Royal Infirmary in suspected drug abuse cases, and to samples from the Scottish Prison Service's own prevalence study targeting drugs of abuse in the prison population. The data obtained were processed with a library established in the course of this work.
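The automated library matching described above keys on retention time and exact mass, with the isotope pattern as a further check. A bare-bones version of the mass/retention-time matching step is sketched below, with a three-entry toy library standing in for the one built in this work; the tolerances are assumptions.

```python
# Sketch of library matching by exact mass (ppm tolerance) and retention
# time window, the core of the screening step described above. Library
# entries, tolerances, and the detected feature are illustrative only.
library = {
    "morphine": {"mz": 286.1438, "rt": 2.1},   # [M+H]+ exact masses
    "cocaine":  {"mz": 304.1543, "rt": 6.8},
    "diazepam": {"mz": 285.0789, "rt": 9.4},
}

def match(mz, rt, ppm_tol=5.0, rt_tol=0.3):
    hits = []
    for name, ref in library.items():
        ppm = abs(mz - ref["mz"]) / ref["mz"] * 1e6
        if ppm <= ppm_tol and abs(rt - ref["rt"]) <= rt_tol:
            hits.append((name, ppm))
    return hits

print(match(304.1540, 6.75))   # flags cocaine within ~1 ppm
```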
Abstract:
Regular physical activity plays a fundamental role in the prevention and control of musculoskeletal disorders within the occupational activity of physical education teachers. Objective: The purpose of the study was to determine the relationship between physical activity levels and the prevalence of musculoskeletal disorders in physical education teachers from 42 public schools in Bogotá, Colombia. Methods: This was a cross-sectional study of 262 physical education teachers from 42 public schools in Bogotá, Colombia. The Nordic Musculoskeletal Questionnaire and the short version of the IPAQ were self-administered, the latter to identify physical activity levels. Measures of central tendency and dispersion were obtained for quantitative variables, and relative frequencies for qualitative variables. Lifetime prevalence and the percentage of job reassignment were calculated for teachers who had suffered different types of pain. A simple binary logistic regression model was used to estimate the relationship between pain and the teachers' sociodemographic variables. Analyses were performed in SPSS version 20; p < 0.05 was considered significant for hypothesis testing, with a 95% confidence level for parameter estimation. Results: The response rate was 83.9%, and 262 records were considered valid; 22.5% were female, and most teachers were between 25 and 35 years old (43.9%). Regarding musculoskeletal disorders, 16.9% of the teachers reported having ever experienced neck discomfort, 17.2% shoulder, 27.9% back, 7.93% arm, and 8.4% hand. Teachers with higher physical activity levels reported a lower prevalence of musculoskeletal disorders (16.9% for neck; 27.7% for dorsal/lumbar) than subjects with low physical activity levels. The presence of disorders was associated with years of experience (OR 3.39, 95% CI 1.41-7.65), female sex (OR 4.94, 95% CI 1.94-12.59), age (OR 5.06, 95% CI 1.25-20.59), and being responsible for more than 400 students during the workday (OR 4.50, 95% CI 1.74-11.62). Conclusions: No statistically significant relationship was found between self-reported physical activity levels and musculoskeletal disorders in physical education teachers.
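The odds ratios with 95% confidence intervals reported above come from logistic-type modeling; for a single binary exposure they reduce to the classic 2x2-table computation sketched below, with made-up counts rather than the study's data.

```python
# Sketch of an odds ratio with a 95% Wald confidence interval from a
# 2x2 table (exposure x outcome). Counts are invented, not study data.
import math

a, b = 30, 20   # exposed: with pain, without pain
c, d = 40, 172  # unexposed: with pain, without pain

or_ = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(or_) - 1.96 * se_log_or)
hi = math.exp(math.log(or_) + 1.96 * se_log_or)
print(f"OR {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```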
Abstract:
This paper presents the study and experimental tests for a viability analysis of using multiple wireless technologies in urban traffic-light controllers in a Smart City environment. Communication drivers, different types of antennas, data acquisition methods, and data processing for network monitoring are presented. The sensor and actuator modules are connected in a local area network through two distinct low-power wireless networks using the 868 MHz and 2.4 GHz frequency bands. All data communications at 868 MHz go through a Moteino. Various tests were made to assess the most advantageous features of each communication type. The experimental results show better range for the 868 MHz solution, whereas 2.4 GHz offers a self-regenerating mesh network. The pros and cons of both communication methods are presented.
Abstract:
This article presents the application of a method for calculating the rate of rent and the price of land. The application covers the methods, the techniques, the information survey, the construction of a database, the classification systems, the data processing, and the practical application of the income method in order to calculate land prices and the rate at which capital is valorized in the form of rent.
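The income method referenced above capitalizes rent into a land price; a minimal worked version of that computation, with illustrative numbers rather than values from the article, is shown below.

```python
# Sketch of the income (rent capitalization) method: the price of land is
# the annual rent divided by the capitalization rate. Numbers are
# illustrative assumptions, not values from the article.
annual_rent = 12_000.0        # currency units per hectare per year
capitalization_rate = 0.06    # assumed rate of return on capital

land_price = annual_rent / capitalization_rate
print(f"estimated land price: {land_price:,.0f} per hectare")
```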