19 results for Mean square error methods

at Universidade Federal do Rio Grande do Norte (UFRN)


Relevance:

100.00%

Publisher:

Abstract:

This work combines the potential of near-infrared (NIR) spectroscopy with chemometrics in order to determine the diclofenac content of tablets without destroying the samples, using ultraviolet spectroscopy, one of the official methods, as the reference method. In building the multivariate calibration models, several types of pre-processing of the NIR spectral data were studied, such as scatter correction and first derivative. The regression method used to build the calibration models was partial least squares (PLS), applied to the NIR spectra of a set of 90 tablets divided into two subsets (calibration and prediction): 54 samples were used for calibration and 36 for prediction, since full cross-validation was adopted, eliminating the need for a separate validation set. The models were evaluated by the correlation coefficient (R²), the root mean square error of calibration (RMSEC) and the root mean square error of prediction (RMSEP). The values predicted for the remaining 36 samples were consistent with those obtained by UV spectroscopy.
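
As an illustration of the kind of PLS calibration and error metrics described above, the following sketch (not the authors' code; the arrays are random placeholders standing in for NIR spectra and UV reference values) builds a PLS model with scikit-learn and computes RMSEC, a full cross-validation error and RMSEP:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict, LeaveOneOut

rng = np.random.default_rng(0)
X_cal, y_cal = rng.normal(size=(54, 200)), rng.normal(size=54)    # 54 "calibration tablets"
X_pred, y_pred = rng.normal(size=(36, 200)), rng.normal(size=36)  # 36 "prediction tablets"

pls = PLSRegression(n_components=5)
pls.fit(X_cal, y_cal)

# Full (leave-one-out) cross-validation on the calibration set
y_cv = cross_val_predict(pls, X_cal, y_cal, cv=LeaveOneOut()).ravel()

rmsec = np.sqrt(np.mean((pls.predict(X_cal).ravel() - y_cal) ** 2))    # calibration error
rmsecv = np.sqrt(np.mean((y_cv - y_cal) ** 2))                         # cross-validation error
rmsep = np.sqrt(np.mean((pls.predict(X_pred).ravel() - y_pred) ** 2))  # prediction error
r2 = np.corrcoef(pls.predict(X_pred).ravel(), y_pred)[0, 1] ** 2
print(rmsec, rmsecv, rmsep, r2)
```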

Relevance:

100.00%

Publisher:

Abstract:

In this work, the quantitative analysis of glucose, triglycerides and cholesterol (total and HDL) in both rat and human blood plasma was performed without any sample pretreatment, using near-infrared (NIR) spectroscopy combined with multivariate methods. For this purpose, different techniques and algorithms for pre-processing data, selecting variables and building multivariate regression models were compared, such as partial least squares regression (PLS), nonlinear regression by artificial neural networks (ANN), interval partial least squares regression (iPLS), the genetic algorithm (GA) and the successive projections algorithm (SPA), among others. For the rat blood plasma samples, the variable selection algorithms gave satisfactory results both for the correlation coefficients (R²) and for the root mean square error of prediction (RMSEP) for the three analytes, especially for triglycerides and cholesterol-HDL. The RMSEP values for glucose, triglycerides and cholesterol-HDL obtained with the best PLS model were 6.08, 16.07 and 2.03 mg dL-1, respectively. For the determinations in human blood plasma, in contrast, the predictions obtained by the PLS models were unsatisfactory, showing a nonlinear tendency and bias. ANN regression was therefore applied as an alternative to PLS, given its ability to model data from nonlinear systems. The root mean square errors of monitoring (RMSEM) for glucose, triglycerides and total cholesterol, for the best ANN models, were 13.20, 10.31 and 12.35 mg dL-1, respectively. Statistical tests (F and t) suggest that NIR spectroscopy combined with multivariate regression methods (PLS and ANN) can quantify these analytes even when they are present in highly complex biological fluids such as blood plasma.
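
A minimal sketch of the PLS-versus-ANN comparison by RMSEP, assuming placeholder data rather than the plasma spectra used in the study:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def rmsep(model, X_train, y_train, X_test, y_test):
    """Fit a model and return its root mean square error of prediction."""
    model.fit(X_train, y_train)
    return np.sqrt(np.mean((model.predict(X_test).ravel() - y_test) ** 2))

rng = np.random.default_rng(1)
X_tr, y_tr = rng.normal(size=(60, 150)), rng.normal(size=60)   # placeholder spectra / analyte values
X_te, y_te = rng.normal(size=(30, 150)), rng.normal(size=30)

pls = PLSRegression(n_components=6)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
print("RMSEP PLS:", rmsep(pls, X_tr, y_tr, X_te, y_te))
print("RMSEP ANN:", rmsep(ann, X_tr, y_tr, X_te, y_te))
```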

Relevance:

100.00%

Publisher:

Abstract:

In recent decades, neural networks have become established as a major tool for the identification of nonlinear systems. Among the various types of networks used in identification, one that stands out is the wavelet neural network (WNN). This network combines wavelet multiresolution theory with the learning and generalization abilities of neural networks, usually providing more accurate models than those obtained with traditional networks. An extension of the WNN is to combine the neuro-fuzzy ANFIS (Adaptive Network Based Fuzzy Inference System) structure with wavelets, giving rise to the Fuzzy Wavelet Neural Network (FWNN) structure. This network is very similar to ANFIS, with the difference that the traditional polynomials in the consequents are replaced by WNNs. This work proposes the identification of nonlinear dynamical systems using a modified FWNN. In the proposed structure, only wavelet functions are used in the consequents, which simplifies the structure and reduces the number of adjustable parameters of the network. To evaluate the performance of the FWNN with this modification, an analysis of the network's performance is carried out, examining its advantages, disadvantages and cost-effectiveness compared with other FWNN structures in the literature. The evaluations are carried out through the identification of two simulated systems traditionally found in the literature and of a real nonlinear system consisting of a nonlinear multi-section tank. Finally, the network is used to infer temperature and humidity values inside a neonatal incubator. These analyses are based on various criteria, such as the mean squared error, the number of training epochs, the number of adjustable parameters and the variation of the mean squared error, among others. The results show the generalization ability of the modified structure, despite the simplification performed.
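
The following toy sketch illustrates the underlying idea of wavelet functions in the consequent: a fixed grid of Mexican-hat wavelets whose linear output weights are fitted by least squares and evaluated by the mean squared error. It is only an illustration of the concept, not the proposed FWNN structure:

```python
import numpy as np

def mexican_hat(u):
    """Mexican hat (Ricker) mother wavelet."""
    return (1.0 - u ** 2) * np.exp(-0.5 * u ** 2)

# Toy nonlinear mapping to identify: y = f(x) + noise
rng = np.random.default_rng(2)
x = np.linspace(-3, 3, 200)
y = np.sin(2 * x) * np.exp(-0.1 * x ** 2) + 0.05 * rng.normal(size=x.size)

# Wavelet "neurons": a grid of translations with a common dilation
translations = np.linspace(-3, 3, 15)
dilation = 0.5
Phi = mexican_hat((x[:, None] - translations[None, :]) / dilation)  # design matrix

weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear output weights
y_hat = Phi @ weights
mse = np.mean((y - y_hat) ** 2)
print(f"MSE = {mse:.4f}, adjustable parameters = {weights.size}")
```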

Relevance:

100.00%

Publisher:

Abstract:

The aim of this study was to evaluate the potential of near-infrared reflectance spectroscopy (NIRS) as a rapid and non-destructive method to determine the soluble solids content (SSC), pH and titratable acidity of intact plums. Plum samples with soluble solids content ranging from 5.7 to 15%, pH from 2.72 to 3.84 and titratable acidity from 0.88 to 3.6% were collected from supermarkets in Natal, Brazil, and NIR spectra were acquired in the 714-2500 nm range. Several multivariate calibration techniques were compared with respect to different pre-processing methods and variable selection algorithms, such as interval partial least squares (iPLS), the genetic algorithm (GA), the successive projections algorithm (SPA) and ordered predictors selection (OPS). The validation models for SSC, pH and titratable acidity had correlation coefficients (R) of 0.95, 0.90 and 0.80, and root mean square errors of prediction (RMSEP) of 0.45 °Brix, 0.07 and 0.40%, respectively. From these results, it can be concluded that NIR spectroscopy can be used as a non-destructive alternative for measuring the SSC, pH and titratable acidity of plums.
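
A hedged sketch of interval-based variable selection in the spirit of iPLS, with placeholder spectra and reference values rather than the plum dataset:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
X, y = rng.normal(size=(80, 256)), rng.normal(size=80)  # placeholder NIR spectra / SSC values

n_intervals, best = 8, None
width = X.shape[1] // n_intervals
for i in range(n_intervals):
    cols = slice(i * width, (i + 1) * width)
    # PLS model restricted to one spectral interval, scored by cross-validated RMSE
    y_cv = cross_val_predict(PLSRegression(n_components=3), X[:, cols], y, cv=5).ravel()
    rmsecv = np.sqrt(np.mean((y_cv - y) ** 2))
    if best is None or rmsecv < best[1]:
        best = (i, rmsecv)
print(f"Best interval: {best[0]} (RMSECV = {best[1]:.3f})")
```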

Relevance:

100.00%

Publisher:

Abstract:

The increase in ultraviolet (UV) radiation at the surface, the high incidence of non-melanoma skin cancer (NMSC) on the coast of Northeast Brazil (NEB) and the reduction of total ozone motivated the present study. The overall objective was to identify and understand the variability of the UV Index in the capitals of the east coast of the NEB and to fit stochastic models to the UV Index time series in order to make predictions (interpolations) and forecasts/projections (extrapolations), followed by trend analysis. The methodology consisted of applying multivariate analysis (principal component analysis and cluster analysis), the Predictive Mean Matching method for filling gaps in the data, autoregressive distributed lag (ADL) models and the Mann-Kendall test. Modeling via ADL consisted of parameter estimation, diagnostics, residual analysis and evaluation of the quality of the predictions and forecasts through the mean squared error and the Pearson correlation coefficient. The results indicated that the annual variability of UV in the capital of Rio Grande do Norte (Natal) shows a feature in September and October consisting of a stabilization/reduction of the UV Index because of the larger annual concentration of total ozone; the increased amount of aerosol during this period contributes to this event with lesser intensity. The application of cluster analysis to the east coast of the NEB showed that this event also occurs in the capitals of Paraíba (João Pessoa) and Pernambuco (Recife). Extreme UV events in the NEB were analyzed for the city of Natal and were associated with the absence of cloud cover and with total ozone levels below the annual average; they do not occur over the entire region because of the uneven spatial distribution of these variables. The ADL(4,1) model, adjusted with UV Index and total ozone data for the period 2001-2012, produced a projection/extrapolation for the next 30 years (2013-2043) indicating, at the end of that period, an increase of approximately one unit in the UV Index if total ozone maintains the downward trend observed in the study period.
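
A rough illustration of fitting an ADL(p, q) model by ordinary least squares and scoring its one-step predictions with the mean squared error and the Pearson correlation coefficient; the series below are synthetic stand-ins for the UV Index and total ozone:

```python
import numpy as np

def adl_design(y, x, p, q):
    """Regression matrix with a constant, p lags of y, and the current plus q lagged values of x."""
    start = max(p, q)
    rows = []
    for t in range(start, len(y)):
        row = [1.0]
        row += [y[t - i] for i in range(1, p + 1)]
        row += [x[t - j] for j in range(0, q + 1)]
        rows.append(row)
    return np.array(rows), y[start:]

rng = np.random.default_rng(4)
x = rng.normal(size=144)                              # stand-in for monthly total ozone
y = 5 + 0.5 * x + rng.normal(scale=0.3, size=144)     # stand-in for the monthly UV Index

A, target = adl_design(y, x, p=4, q=1)                # ADL(4, 1)
coef, *_ = np.linalg.lstsq(A, target, rcond=None)
pred = A @ coef
mse = np.mean((target - pred) ** 2)
r = np.corrcoef(target, pred)[0, 1]
print(f"MSE = {mse:.3f}, Pearson r = {r:.3f}")
```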

Relevance:

100.00%

Publisher:

Abstract:

Muscle fatigue is a phenomenon that promotes physiological and biomechanical disorders. Its changes in healthy subjects have been widely studied and are important for preventing injuries, but little information is available about its effects in patients after ACL reconstruction. Thus, this study analyzes the effects of fatigue on the neuromuscular behavior of the quadriceps after ACL reconstruction. Forty men participated: twenty healthy (26.90 ± 6.29 years) and twenty after ACL reconstruction (29.75 ± 7.01 years) with a semitendinosus and gracilis tendon graft, between four and six months after surgery. First, joint position sense (JPS) was assessed on an isokinetic dynamometer at a speed of 5°/s and a target angle of 45°, analyzing the absolute error of the JPS. Next, a muscle fatigue protocol was applied, consisting of 100 repetitions of isokinetic knee flexion-extension at 90°/s. Concurrently with this protocol, muscle performance was assessed through the peak torque (PT) and the fatigue index, together with the electromyographic activity (RMS and median frequency). Finally, the JPS assessment was repeated. The statistical analysis showed that patients after ACL reconstruction have, even under normal conditions, an altered JPS compared with healthy subjects and that, after fatigue, both groups show disturbances in the JPS, but this alteration is significantly exacerbated in patients after ACL reconstruction. Regarding muscle performance, these patients have a lower PT, although there are no differences between the dynamometric and EMG fatigue indices. These findings highlight the need for care in patients with ACL reconstruction with respect to the risks of joint instability and overload of the ligament graft.
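
For reference, the two EMG descriptors mentioned above (RMS amplitude and median frequency) can be computed as in the sketch below, here on a synthetic signal rather than the recorded quadriceps EMG; the sampling rate is assumed:

```python
import numpy as np
from scipy.signal import welch

fs = 2000.0                                   # sampling rate (Hz), assumed
rng = np.random.default_rng(5)
emg = rng.normal(scale=0.2, size=4000)        # placeholder EMG window (2 s)

rms = np.sqrt(np.mean(emg ** 2))              # root mean square amplitude

freqs, psd = welch(emg, fs=fs, nperseg=512)   # power spectral density
cum = np.cumsum(psd)
median_freq = freqs[np.searchsorted(cum, cum[-1] / 2.0)]  # frequency splitting the power in half
print(f"RMS = {rms:.3f}, median frequency = {median_freq:.1f} Hz")
```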

Relevance:

100.00%

Publisher:

Abstract:

This work analyzes the behavior of some algorithms commonly found in the stereo correspondence literature on full HD images (1920x1080 pixels), in order to establish, within the precision versus runtime trade-off, the applications in which each method is best used. The images are obtained by a system composed of a stereo camera coupled to a computer via a capture board. The OpenCV library is used for the computer vision and image processing operations involved. The algorithms discussed are a block matching method based on the Sum of Absolute Differences (SAD), a global technique based on energy minimization by graph cuts, and a so-called semi-global matching technique. The analysis criteria are processing time, heap memory consumption and the mean absolute error of the disparity maps generated.
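
A sketch of how the SAD-based block matching and the semi-global method can be run with OpenCV and compared by processing time and mean absolute disparity error; the file names and ground-truth map are hypothetical, and the graph-cut method is not shown:

```python
import time
import cv2
import numpy as np

left = cv2.imread("left_1920x1080.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file names
right = cv2.imread("right_1920x1080.png", cv2.IMREAD_GRAYSCALE)
gt = np.load("ground_truth_disparity.npy")                       # hypothetical ground truth

matchers = {
    "BM (SAD blocks)": cv2.StereoBM_create(numDisparities=128, blockSize=15),
    "SGBM (semi-global)": cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5),
}

for name, matcher in matchers.items():
    t0 = time.perf_counter()
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # OpenCV returns fixed-point disparities
    elapsed = time.perf_counter() - t0
    mae = np.mean(np.abs(disparity - gt))
    print(f"{name}: {elapsed:.2f} s, mean absolute error = {mae:.2f} px")
```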

Relevance:

100.00%

Publisher:

Abstract:

In the last decades the study of integer-valued time series has gained notoriety due to its broad applicability (modeling the number of car accidents on a given highway, or the number of people infected by a virus, are two examples). One of the main interests of this area of study is to make forecasts, so it is very important to propose methods that produce forecasts consisting of nonnegative integer values, given the discrete nature of the data. In this work, we focus on the study and proposal of one-, two- and h-step-ahead forecasts for integer-valued second-order autoregressive conditional heteroskedasticity processes [INARCH(2)], and on determining some theoretical properties of this model, such as the ordinary moments of its marginal distribution and the asymptotic distribution of its conditional least squares estimators. In addition, we study, via Monte Carlo simulation, the behavior of the estimators of the parameters of INARCH(2) processes obtained by three different methods (Yule-Walker, conditional least squares, and conditional maximum likelihood), in terms of mean squared error, mean absolute error and bias. We present some forecast proposals for INARCH(2) processes, which are also compared via Monte Carlo simulation. As an application of the proposed theory, we model a dataset on the number of live male births to mothers living in the city of Riachuelo, in the state of Rio Grande do Norte, Brazil.
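
A small sketch of the INARCH(2) setting: simulating the process and estimating its parameters by conditional least squares, which here reduces to an ordinary least squares fit because the conditional mean is linear in the parameters. The parameter values and series length are illustrative only:

```python
import numpy as np

def simulate_inarch2(alpha0, alpha1, alpha2, n, rng):
    """X_t | past ~ Poisson(alpha0 + alpha1*X_{t-1} + alpha2*X_{t-2})."""
    x = np.zeros(n, dtype=int)
    for t in range(2, n):
        lam = alpha0 + alpha1 * x[t - 1] + alpha2 * x[t - 2]
        x[t] = rng.poisson(lam)
    return x

rng = np.random.default_rng(6)
x = simulate_inarch2(2.0, 0.3, 0.2, 1000, rng)

# Conditional least squares: regress X_t on (1, X_{t-1}, X_{t-2})
A = np.column_stack([np.ones(len(x) - 2), x[1:-1], x[:-2]])
cls, *_ = np.linalg.lstsq(A, x[2:], rcond=None)
lam_hat = A @ cls
print("CLS estimates:", cls)
print("one-step-ahead forecast:", cls[0] + cls[1] * x[-1] + cls[2] * x[-2])
print("in-sample mean squared error:", np.mean((x[2:] - lam_hat) ** 2))
```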

Relevance:

100.00%

Publisher:

Abstract:

Background: Inspiratory muscle training (IMT) has been considered an option for reversing or preventing decreases in respiratory muscle strength; however, little is known about the adaptations of these muscles arising from training with load. Objectives: To investigate the effect of IMT on diaphragmatic muscle strength and on the neural and structural adaptation of the diaphragm in sedentary young people, to compare the effects of low-intensity and moderate-intensity IMT on the thickness, mobility and electrical activity of the diaphragm and on inspiratory muscle strength, and to establish a protocol for a systematic review evaluating the effects of respiratory muscle training in children and adults with neuromuscular diseases. Materials and Methods: A randomized, double-blind, parallel-group, controlled trial with a sample of 28 healthy, sedentary young people of both sexes, divided into two groups: 14 in the low-load training group (G10%) and 14 in the moderate-load training group (G55%). The volunteers performed a 9-week home IMT protocol with a POWERbreathe® device. The G55% group trained with 55% of the maximal inspiratory pressure (MIP) and the G10% group used a load of 10% of MIP. Training was performed in sessions of 30 repetitions, twice a day, six days per week. MIP was re-evaluated and the load adjusted every two weeks. Volunteers underwent ultrasound, surface electromyography, spirometry and manometry before and after IMT. Data were analyzed with SPSS 20.0. Student's t-test for paired samples was used to compare diaphragmatic thickness, MIP and MEP before and after the IMT protocol, and the Wilcoxon test to compare the RMS (root mean square) and median frequency (MedF) values, also before and after the training protocol. Student's t-test for independent samples was then used to compare diaphragm mobility and thickness, MIP and MEP between the two groups, and the Mann-Whitney test to compare the RMS and MedF values between the two groups. In parallel with the experimental study, we developed a protocol, with support from the Cochrane Collaboration, on IMT in people with neuromuscular diseases. Results: There was, in both groups, an increase in inspiratory muscle strength (P < 0.05), an increase in expiratory strength in G10% (P = 0.009), an increase in RMS and in relaxed muscle thickness in G55% (P = 0.005; P = 0.026), and no change in MedF (P > 0.05). The comparison between the two groups showed a difference in RMS (P = 0.04) and no difference in diaphragm thickness, diaphragm mobility or respiratory muscle strength. Conclusions: Increased neural activity and diaphragmatic structure were identified, with a consequent increase in respiratory muscle strength, after IMT with moderate load. IMT with a load of 10% of MIP cannot be considered a placebo dose, since it increases inspiratory muscle strength, and IMT with moderate intensity is able to enhance the recruitment of diaphragm muscle fibers and promote their hypertrophy. The protocol for the systematic review was published in The Cochrane Library.
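
For illustration only, the paired tests mentioned above (performed with SPSS in the study) could be run as follows on hypothetical pre/post MIP values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
mip_before = rng.normal(100, 15, size=14)            # hypothetical cmH2O values, n = 14
mip_after = mip_before + rng.normal(12, 8, size=14)  # hypothetical post-training values

t_stat, t_p = stats.ttest_rel(mip_after, mip_before)   # Student's t-test for paired samples
w_stat, w_p = stats.wilcoxon(mip_after, mip_before)    # Wilcoxon signed-rank test
print(f"paired t-test: t = {t_stat:.2f}, p = {t_p:.3f}")
print(f"Wilcoxon: W = {w_stat:.1f}, p = {w_p:.3f}")
```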

Relevance:

100.00%

Publisher:

Abstract:

In industrial informatics, several attempts have been made to develop notations and semantics for classifying and describing different kinds of system behavior, particularly in the modeling phase. Such attempts provide the infrastructure to solve real engineering problems and to build practical systems that aim mainly to increase the productivity, quality and safety of the process. Despite the many studies that have attempted to develop friendly methods for programming industrial controllers, they are still programmed by conventional trial-and-error methods and, in practice, there is little written documentation on these systems. The ideal solution would be a computational environment that allows industrial engineers to implement the system using a high-level language that follows international standards. Accordingly, this work proposes a methodology for plant and control modeling of discrete event systems that include sequential, parallel and timed operations, using a formalism based on Statecharts, called Basic Statechart (BSC). The methodology also allows automatic procedures to validate and implement these systems. To validate the methodology, two case studies with typical examples from the manufacturing sector are presented. The first example shows the sequential control of a tagging machine and is used to illustrate dependences between the devices of the plant. In the second example, more than one strategy for controlling a manufacturing cell is discussed. The model with no control has 72 states (distinct configurations); the model with sequential control generated 20 different states, but acting on only 8 distinct configurations; and the model with parallel control generated 210 different states, acting on only 26 distinct configurations, therefore a control strategy less restrictive than the previous one. Lastly, one example is presented to highlight the modular nature of the methodology, which is very important for application maintenance. In this example, the sensors for identifying pieces in the plant were removed, so changes in the control model are needed to transmit the information from the input buffer sensor to the other positions of the cell.
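
As a very loose illustration of the kind of state-transition behavior a Basic Statechart describes, the sketch below encodes a sequential control as a plain Python state machine; it does not reproduce the BSC formalism, its validation or its implementation procedures, and the device and events are hypothetical:

```python
class StateMachine:
    def __init__(self, initial, transitions):
        # transitions: {(state, event): next_state}
        self.state = initial
        self.transitions = transitions

    def fire(self, event):
        """Move to the next state if a transition is defined for (state, event)."""
        key = (self.state, event)
        if key in self.transitions:
            self.state = self.transitions[key]
        return self.state

# Hypothetical sequential control of a tagging machine
machine = StateMachine("idle", {
    ("idle", "piece_detected"): "positioning",
    ("positioning", "in_position"): "tagging",
    ("tagging", "tag_done"): "releasing",
    ("releasing", "piece_out"): "idle",
})

for event in ["piece_detected", "in_position", "tag_done", "piece_out"]:
    print(event, "->", machine.fire(event))
```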

Relevance:

100.00%

Publisher:

Abstract:

Image compression consists in representing an image with a small amount of data without loss of visual quality. Compression is important when large images are used, for example satellite images. Full-color digital images typically use 24 bits to specify the color of each pixel, with 8 bits for each of the primary components red, green and blue (RGB). Compressing an image with three or more bands (multispectral) is fundamental to reduce transmission, processing and storage time, and many applications depend on compressed image data: medical images, satellite images, sensors, etc. In this work a new method for compressing color images is proposed. The method is based on a measure of the information in each band. The technique, called Self-Adaptive Compression (SAC), compresses each band of the image with a different threshold in order to better preserve information. SAC applies strong compression to highly redundant bands, that is, those with less information, and mild compression to bands with a larger amount of information. Two image transforms are used: the Discrete Cosine Transform (DCT) and Principal Component Analysis (PCA). The first step is to convert the data into decorrelated bands with PCA; the DCT is then applied to each band. Loss occurs when a threshold discards coefficients. This threshold is computed from two elements: the PCA result and a user parameter that defines the compression rate. The system produces three different thresholds, one for each band of the image, proportional to its amount of information. For image reconstruction, the inverse DCT and PCA are applied. SAC was compared with the JPEG (Joint Photographic Experts Group) standard and with YIQ compression, and better results were obtained in terms of MSE (mean square error). Tests showed that SAC gives better quality under strong compression, with two advantages: (a) being adaptive, it is sensitive to the image type, presenting good results for diverse kinds of images (synthetic, landscapes, people, etc.), and (b) it needs only one user parameter, so little human intervention is required.
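
A toy sketch of the PCA-plus-DCT pipeline described above: decorrelate the RGB bands with PCA, apply a 2-D DCT to each principal-component band, discard coefficients below a per-band threshold and measure the reconstruction MSE. The thresholds here are arbitrary placeholders, not the SAC rule:

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(8)
img = rng.random((64, 64, 3))                      # placeholder RGB image in [0, 1]
pixels = img.reshape(-1, 3)

# PCA: project the bands onto the eigenvectors of their covariance matrix
mean = pixels.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(pixels - mean, rowvar=False))
pc = ((pixels - mean) @ eigvecs).reshape(img.shape)   # bands ordered by increasing variance

# Per-band DCT, thresholding (strongest on the lowest-information band) and inverse DCT
thresholds = [0.5, 0.2, 0.05]
rec_pc = np.empty_like(pc)
for b, thr in enumerate(thresholds):
    coeffs = dctn(pc[:, :, b], norm="ortho")
    coeffs[np.abs(coeffs) < thr] = 0.0               # lossy step
    rec_pc[:, :, b] = idctn(coeffs, norm="ortho")

# Inverse PCA and quality measure
rec = (rec_pc.reshape(-1, 3) @ eigvecs.T + mean).reshape(img.shape)
print("MSE:", np.mean((img - rec) ** 2))
```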

Relevance:

100.00%

Publisher:

Abstract:

The aim of this study is to create an artificial neural network (ANN) capable of modeling the transverse elasticity modulus (E2) of unidirectional composites. To that end, we used a dataset divided into two parts, one for training and the other for testing the ANN. Three types of architecture from different networks were developed: one with only two inputs, one with three inputs, and a third with a mixed architecture combining an ANN with the model developed by Halpin-Tsai. After training, the results demonstrate that the use of ANNs is quite promising: when compared with the Halpin-Tsai mathematical model, higher correlation coefficients and lower root mean square error values were observed.
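
For reference, the Halpin-Tsai expression for the transverse modulus E2 and a small network fitted to values generated from it are sketched below; the fiber and matrix moduli and the parameter xi are assumed, not taken from the study:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def halpin_tsai_e2(Ef, Em, Vf, xi=2.0):
    """Transverse modulus of a unidirectional composite by the Halpin-Tsai equation."""
    eta = (Ef / Em - 1.0) / (Ef / Em + xi)
    return Em * (1.0 + xi * eta * Vf) / (1.0 - eta * Vf)

Ef, Em = 74.0, 3.5                          # GPa, assumed glass fiber / epoxy values
Vf = np.linspace(0.1, 0.7, 200)             # fiber volume fractions
E2 = halpin_tsai_e2(Ef, Em, Vf)

# Two-input ANN (Vf and the Ef/Em ratio) fitted to the Halpin-Tsai values
X = np.column_stack([Vf, np.full_like(Vf, Ef / Em)])
ann = MLPRegressor(hidden_layer_sizes=(8, 8), solver="lbfgs", max_iter=10000,
                   random_state=0).fit(X, E2)
rmse = np.sqrt(np.mean((ann.predict(X) - E2) ** 2))
print(f"RMSE of the ANN against Halpin-Tsai: {rmse:.3f} GPa")
```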

Relevance:

100.00%

Publisher:

Abstract:

One of the current major concerns in engineering is the development of aircraft that have low fuel consumption and high performance. Airfoils with a high lift coefficient and a low drag coefficient, i.e., high-efficiency airfoils, are therefore studied and designed: when the efficiency increases, the aircraft's fuel consumption decreases, improving its performance. This work aims to develop a tool for designing airfoils from desired characteristics, such as the lift and drag coefficients and the maximum efficiency, using an algorithm based on an artificial neural network (ANN). For this, an aerodynamic characteristics database with a total of 300 airfoils was first collected with the software XFoil. Then, using MATLAB, several network architectures, both modular and hierarchical, were trained with the back-propagation algorithm and the momentum rule. Cross-validation was used for data analysis, selecting the network with the lowest root mean square (RMS) error. The best result was obtained for a hierarchical architecture with two modules and one layer of hidden neurons. The airfoils generated by that network, in the regions of lowest RMS error, were compared with the same airfoils imported into XFoil.
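
A brief sketch of choosing a network by cross-validation on the lowest RMS error, as described above, using a placeholder dataset instead of the 300 XFoil airfoils and plain feed-forward networks instead of the modular/hierarchical topologies:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
X = rng.random((300, 3))                   # desired Cl, Cd, efficiency (placeholders)
y = rng.random((300, 10))                  # airfoil shape parameters (placeholders)

best = None
for hidden in [(10,), (20,), (10, 10)]:
    net = MLPRegressor(hidden_layer_sizes=hidden, max_iter=3000, random_state=0)
    scores = cross_val_score(net, X, y, cv=5, scoring="neg_mean_squared_error")
    rms = np.sqrt(-scores.mean())
    if best is None or rms < best[1]:
        best = (hidden, rms)
print("best architecture:", best[0], "RMS:", round(best[1], 4))
```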

Relevance:

100.00%

Publisher:

Abstract:

The aim of the present study was to extract vegetable oil from brown linseed (Linum usitatissimum L.), determine its fatty acid levels and the antioxidant capacity of the extracted oil, and perform a rapid economic assessment of the supercritical fluid extraction (SFE) process for manufacturing the oil. The experiments were conducted in a bench-scale extractor capable of operating with carbon dioxide and co-solvents, following a 2³ factorial design with a central point in triplicate, with process yield as the response variable and pressure, temperature and percentage of co-solvent as independent variables. The yield (mass of extracted oil/mass of raw material used) ranged from 2.2% to 28.8%, with the best results obtained at 250 bar and 50 °C, using 5% (v/v) ethanol as co-solvent. The influence of the variables on the extraction kinetics and on the composition of the linseed oil obtained was investigated. The extraction kinetic curves were fitted with different mathematical models available in the literature; the Martínez et al. (2003) model and the Simple Single Plate (SSP) model discussed by Gaspar et al. (2003) represented the experimental data with the lowest mean square errors (MSE). A manufacturing cost of US$ 17.85/kg of oil was estimated for the production of linseed oil using the TECANALYSIS software and the method of Rosa and Meireles (2005). To establish comparisons with SFE, conventional extraction tests were conducted in a Soxhlet apparatus using petroleum ether; these tests gave mean yields of 35.2% for an extraction time of 5 h. All the oil samples were esterified and characterized in terms of their fatty acid (FA) composition by gas chromatography. The main fatty acids detected were palmitic (C16:0), stearic (C18:0), oleic (C18:1), linoleic (C18:2n-6) and α-linolenic (C18:3n-3). The FA contents obtained with Soxhlet extraction differed from those obtained with SFE, with higher percentages of saturated and monounsaturated FA in the Soxhlet extraction with petroleum ether. With respect to the α-linolenic content (the main component of linseed oil), SFE performed better than Soxhlet extraction, with percentages between 51.18% and 52.71%, against 47.84% for Soxhlet extraction. The antioxidant activity of the oil was assessed in the β-carotene/linoleic acid system. The inhibition of the oxidative process reached 22.11% for the SFE oil, but only 6.09% for commercial (cold-pressed) oil, suggesting that the SFE technique better preserves the phenolic compounds present in the seed, which are likely responsible for the antioxidant character of the oil. In vitro tests with the sample displaying the best antioxidant response were conducted in rat liver homogenate to investigate the inhibition of spontaneous lipid peroxidation, or auto-oxidation, of biological tissue. Linseed oil proved to be more efficient than fish oil (used as the standard) in decreasing lipid peroxidation in the liver tissue of Wistar rats, yielding results similar to those obtained with BHT (a synthetic antioxidant). This inhibitory capacity may be explained by the presence of phenolic compounds with antioxidant activity in the linseed oil. The results indicate the need for more detailed studies, given the importance of linseed oil as one of the greatest sources of ω-3 among vegetable oils.
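
As a generic illustration of fitting an extraction kinetic curve by minimizing the squared error, the sketch below fits a simple exponential model (a stand-in, not the Martínez et al. or SSP formulations) to invented yield data:

```python
import numpy as np
from scipy.optimize import curve_fit

def kinetic_model(t, y_inf, k):
    """Cumulative extraction yield approaching y_inf with rate constant k."""
    return y_inf * (1.0 - np.exp(-k * t))

t = np.array([0, 15, 30, 60, 90, 120, 180, 240], dtype=float)        # minutes (illustrative)
yield_obs = np.array([0.0, 6.0, 11.0, 18.0, 22.5, 25.0, 27.5, 28.5])  # % (illustrative)

params, _ = curve_fit(kinetic_model, t, yield_obs, p0=[30.0, 0.01])   # least-squares fit
mse = np.mean((yield_obs - kinetic_model(t, *params)) ** 2)
print(f"y_inf = {params[0]:.1f} %, k = {params[1]:.4f} min^-1, MSE = {mse:.3f}")
```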