48 results for Prediction error method

at Universidad Politécnica de Madrid


Relevance:

100.00%

Abstract:

The aims of this Thesis were: 1) to obtain and validate prediction equations to determine in vivo whole body and carcass composition using the Bioelectrical Impedance Analysis (BIA) method in growing rabbits from 25 to 77 days of age, and 2) to study its application to determine differences in whole body and carcass chemical composition, and in nutrient retention, of animals fed different fat levels and sources. The first study was conducted to determine and later validate, using independent data, the prediction equations obtained to assess in vivo the whole body composition of growing rabbits. One hundred and fifty rabbits grouped at 5 different ages (25, 35, 49, 63 and 77 days) and weighing from 231 to 3138 g were used. A four-terminal body composition analyser (Model BIA-101, RJL Systems, Detroit, MI, USA) was used to obtain resistance (Rs, Ω) and reactance (Xc, Ω) values. The distance between internal electrodes (D, cm), body length (L, cm) and live body weight (BW, g) of each animal were also recorded. At each selected age, animals were slaughtered, ground and frozen (-20 ºC) for later chemical analyses (DM, fat, CP, ash and GE). Body fat and energy content increased with age, while protein, ash and water content decreased. Mean values of Rs, Xc, impedance (Z), L and D were 83.5 ± 23.1 Ω, 18.2 ± 3.8 Ω, 85.6 ± 22.9 Ω, 30.6 ± 6.9 cm and 10.8 ± 3.1 cm, respectively. A multiple linear regression analysis was used to determine the prediction equations, using BW, L and Z as independent variables. The equations obtained to estimate water (g), CP (g), fat (g), ash (g) and GE (MJ) content had coefficients of determination (R2) of 0.99, 0.99, 0.97, 0.98 and 0.99, and relative mean prediction errors (RMPE) of 2.79, 6.15, 24.3, 15.2 and 10.6%, respectively. When water content was expressed as a percentage, R2 and RMPE were 0.85 and 2.30%, respectively. For the prediction of protein (%DM), fat (%DM), ash (%DM) and energy (kJ/100 g DM) content, R2 values of 0.79, 0.83, 0.71 and 0.86, and RMPE values of 5.04, 18.9, 12.0 and 3.19%, respectively, were obtained. Reactance was negatively correlated with water, ash and CP content (r = -0.32, P < 0.0001; r = -0.20, P < 0.05; r = -0.26, P < 0.01) and positively correlated with fat and GE (r = 0.23 and r = 0.24; P < 0.01). Conversely, resistance was positively correlated with water, ash and CP (r = 0.31, P < 0.001; r = 0.28, P < 0.001; r = 0.37, P < 0.0001) and negatively correlated with fat and energy (r = -0.36 and r = -0.35; P < 0.0001). Moreover, age was negatively correlated with water, ash and CP content (r = -0.79, r = -0.68 and r = -0.80; P < 0.0001) and positively correlated with fat and energy (r = 0.78 and r = 0.81; P < 0.0001). It can be concluded that BIA is a good, non-invasive method to estimate in vivo the whole body composition of growing rabbits from 25 to 77 days of age.
The aim of the second study was to determine and validate, with independent data, the prediction equations obtained to estimate in vivo the carcass composition of growing rabbits, using the carcass chemical composition and BIA values of a group of rabbits from 25 to 77 days of age. Its potential application to predict nutrient retention and overall energy and nitrogen retention efficiencies was also analysed. Seventy-five rabbits grouped at 5 different ages (25, 35, 49, 63 and 77 days) and weighing from 196 to 3260 g were used. A four-terminal body composition analyser (Model BIA-101, RJL Systems, Detroit, MI, USA) was used to obtain resistance (Rs, Ω) and reactance (Xc, Ω) values. The distance between internal electrodes (D, cm), body length (L, cm) and live weight (BW, g) were also recorded. At each selected age, all the animals were stunned and bled. The skin, organs and digestive content were removed, and the chilled carcasses were weighed and processed for chemical analyses (DM, fat, CP, ash and GE). Energy and fat content increased with age, while CP, ash and water content decreased. Mean values of Rs, Xc, impedance (Z), L and D were 95.9 ± 23.9 Ω, 19.5 ± 4.7 Ω, 98.0 ± 23.8 Ω, 20.6 ± 6.3 cm and 13.7 ± 3.1 cm, respectively. A multiple linear regression analysis was performed to determine the equations, using BW, L and Z as independent variables. The coefficients of determination (R2) of the equations obtained to estimate water (g), CP (g), fat (g), ash (g) and GE (MJ) content were 0.99, 0.99, 0.95, 0.96 and 0.98, and the relative mean prediction errors (RMPE) were 4.20, 5.48, 21.9, 9.10 and 6.77%, respectively. When water content was expressed as a percentage, R2 and RMPE were 0.79 and 1.62%, respectively. For the prediction of protein (%DM), fat (%DM), ash (%DM) and energy (kJ/100 g DM) content, R2 values were 0.68, 0.76, 0.66 and 0.82, and RMPE values were 3.22, 10.5, 5.82 and 2.54%, respectively. Reactance was positively correlated with fat content (r = 0.24, P < 0.05), while resistance was positively correlated with water, ash and protein carcass content (r = 0.55, P < 0.001; r = 0.54, P < 0.001; r = 0.40, P < 0.005) and negatively correlated with fat and energy (r = -0.44 and r = -0.55; P < 0.001). Moreover, age was negatively correlated with water, ash and CP content (r = -0.97, r = -0.95 and r = -0.89; P < 0.0001) and positively correlated with fat and GE (r = 0.95 and r = 0.97; P < 0.0001). Overall energy retention efficiency (ERE) and nitrogen retention efficiency (NRE) were studied over the whole growing period (35-63 d). The ERE values were 20.4 ± 7.29%, 21.0 ± 4.18% and 20.8 ± 2.79% from 35 to 49, 49 to 63 and 35 to 63 d, respectively. NRE was 46.9 ± 11.7%, 34.5 ± 7.32% and 39.1 ± 3.23% for the same periods. Energy was retained in body tissues for growth with an efficiency of approximately 52.5%, and the efficiency of energy retention as protein and as fat was 33.3 and 69.9%, respectively. The efficiency of nitrogen utilization for growth was close to 77%. This work shows that BIA is a good, non-invasive method to estimate in vivo the carcass composition and nutrient retention of growing rabbits from 25 to 77 days of age. In the third study, two experiments were conducted to investigate the effects of fat inclusion level and source on performance, mortality, nutrient retention, and the whole body and carcass chemical composition of growing rabbits from 34 to 63 d of age. In Exp.
1, the diets were arranged in a 3 x 2 factorial structure with the source of fat (soybean oil, SBO; soya lecithin oil, SLO; and lard, L) and the dietary fat inclusion level (1.5 and 4%) as the main factors. Exp. 2 was also arranged as a 3 x 2 factorial design, but using SBO, fish oil (FO) and palm kernel oil (PKO) as fat sources, included at the same levels as in Exp. 1. In both experiments, 180 animals were allocated to individual cages (n = 30) and 600 to collective cages in groups of 5 animals (n = 20). Animals fed the 4% dietary fat level showed a lower daily feed intake (DFI) and feed conversion ratio (FCR) than those fed the 1.5% diets. In the collectively housed animals of Exp. 1, DFI was 4.8% higher in animals fed the diets containing lard than in those fed SBO (P = 0.036), being intermediate for the SLO diet. The inclusion of lard also tended to reduce mortality (P = 0.067) by around 60% and 25% with respect to the SBO and SLO diets, respectively. Mortality increased with the highest level of soya lecithin (14% vs. 1%, P < 0.01). In Exp. 2, a decrease in DFI (11%), BW at 63 d (4.8%) and daily weight gain (DWG, 7.8%) was observed with the inclusion of fish oil with respect to the other two diets (P < 0.01). The last two traits were further impaired at the highest level of fish oil (5.6 and 9.5%, respectively; P < 0.01). Animals housed individually showed similar performance results. The inclusion of fish oil also tended to increase mortality (13.2%; P = 0.078) with respect to palm kernel oil (6.45%), with the mortality of SBO being intermediate (8.10%). Fat source and level did not affect the whole body or carcass chemical composition. An increase in fat inclusion led to a decrease in digestible nitrogen intake (DNi) (1.83 vs. 1.92 g/d, P = 0.068 in Exp. 1, and 1.79 vs. 1.95 g/d, P = 0.014 in Exp. 2). As the nitrogen retained (NR) in the carcass was similar for both fat levels (0.68 g/d in Exp. 1 and 0.71 g/d in Exp. 2), the overall efficiency of N retention (NRE) increased with the highest level of fat, but only reached significance in Exp. 1 (34.9 vs. 37.8%, P < 0.0001), while in Exp. 2 a tendency was found (36.2 vs. 38.0%, P = 0.064). Consequently, nitrogen excretion in faeces was lower in animals fed the highest level of fat (0.782 vs. 0.868 g/d, P = 0.0001 in Exp. 1, and 0.745 vs. 0.865 g/d, P < 0.0001 in Exp. 2). The same effect was observed for the nitrogen excreted in urine (0.702 vs. 0.822 g/d, P < 0.0001 in Exp. 1, and 0.694 vs. 0.7999 g/d, P = 0.014 in Exp. 2). Although there were no differences in ERE, the energy excreted in faeces decreased as the fat level increased (142 vs. 156 kcal/d, P = 0.0004 in Exp. 1, and 144 vs. 154 kcal/d, P = 0.050 in Exp. 2). In Exp. 1, the energy excreted as urine and heat production was significantly higher when animals were fed the highest level of dietary fat (216 vs. 204 kcal/d, P = 0.017). It can be concluded that lard and palm kernel oil can be considered alternative sources to soybean oil owing to the reduction in mortality, without negative effects on performance or nutrient retention. The inclusion of fish oil impaired performance and mortality during the growing period. An increase in the dietary fat level improved FCR and the overall nitrogen retention efficiency.
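
The prediction equations above come from ordinary multiple linear regression on BW, L and Z; a minimal sketch with synthetic placeholder data (the RMPE definition used here, RMSE as a percentage of the mean observed value, is an assumption, and the numbers are not the thesis data):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Sketch: predict a body component (here water, g) from live weight (BW, g),
# body length (L, cm) and impedance (Z, ohm). All values are synthetic.
rng = np.random.default_rng(0)
n = 150
BW = rng.uniform(231, 3138, n)                 # live weight, g
L = rng.uniform(18, 42, n)                     # body length, cm
Z = rng.uniform(40, 130, n)                    # impedance, ohm
water = 0.6 * BW + 5.0 * L - 1.2 * Z + rng.normal(0, 20, n)  # synthetic target

X = np.column_stack([BW, L, Z])
model = LinearRegression().fit(X, water)
pred = model.predict(X)

r2 = model.score(X, water)
# RMPE taken here as RMSE relative to the mean observed value (assumed definition)
rmpe = 100 * np.sqrt(np.mean((water - pred) ** 2)) / water.mean()
print(f"R2 = {r2:.2f}, RMPE = {rmpe:.2f}%")
```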

Relevance:

90.00%

Abstract:

Although most of the research on Cognitive Radio is focused on communication bands above the HF upper limit (30 MHz), Cognitive Radio principles can also be applied to HF communications to use the extremely scarce spectrum more efficiently. In this work we consider legacy users as primary users, since these users transmit without resorting to any smart procedure, and our stations using the HFDVL (HF Data+Voice Link) architecture as secondary users. Our goal is to enhance the efficient use of the HF band by detecting the presence of uncoordinated primary users and avoiding collisions with them while transmitting in different HF channels using our broad-band HF transceiver. A model of the primary user activity dynamics in the HF band is developed in this work to make short-term predictions of the sojourn time of a primary user in the band and avoid collisions. It is based on Hidden Markov Models (HMM), which are a powerful tool for modelling stochastic random processes, and is trained with real measurements of the 14 MHz band. Using the proposed HMM-based model, the prediction achieves an average 10.3% error rate with one minute of channel knowledge, and this can be reduced when the knowledge is extended: with knowledge of the previous 8 min, an average 5.8% prediction error rate is achieved. These results suggest that the resulting activity model for the HF band could be used to predict primary user activity and be included in a future HF cognitive-radio-based station.
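
The short-term occupancy prediction can be sketched with a discrete HMM and the forward algorithm; the transition and emission matrices below are illustrative assumptions, not the model trained on the 14 MHz measurements:

```python
import numpy as np

# Minimal next-slot occupancy prediction with a two-state discrete HMM.
A = np.array([[0.9, 0.1],    # hidden-state transition matrix (assumed)
              [0.2, 0.8]])
B = np.array([[0.95, 0.05],  # emission probs: rows = state, cols = obs (0=idle, 1=busy)
              [0.10, 0.90]])
pi = np.array([0.5, 0.5])    # initial state distribution

def predict_next(obs):
    """Forward algorithm: filter the hidden state, then predict the next observation."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()          # normalise to avoid underflow
    next_state = alpha @ A            # one-step hidden-state prediction
    return next_state @ B             # distribution over the next observation

obs = [0, 0, 1, 1, 1, 0]              # one minute of sensed channel activity (assumed)
p_idle, p_busy = predict_next(obs)
print(f"P(next slot busy) = {p_busy:.2f}")
```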

Relevance:

90.00%

Abstract:

One of the most common processes used to develop an architectural project is the trial and error method. This process of selecting tests is usually approached in two ways: either it is carried out in order to refine towards a more optimal solution, or it serves to explore new lines of research. To investigate this, the article presents an analysis of two different design processes for houses developed by trial and error, both reference works in the history of architecture: the Villa Stonborough by Wittgenstein and the Villa Moller by Adolf Loos. Although both belong to the same historical period, they were developed in very different, almost opposed, ways. Through this analysis we attempt to identify the concepts that drove their different modes of production, in order to extrapolate them to other similar cases.

Relevance:

90.00%

Abstract:

We apply diffusion strategies to propose a cooperative reinforcement learning algorithm in which agents in a network communicate with their neighbors to improve predictions about their environment. The algorithm is suitable for learning off-policy even in large state spaces. We provide a mean-square-error performance analysis under constant step-sizes. The gain of cooperation, in the form of more stability and less bias and variance in the prediction error, is illustrated in the context of a classical model. We show that the improvement in performance is especially significant when the behavior policy of the agents is different from the target policy under evaluation.
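
A minimal sketch of the diffusion idea, adapt locally and then combine the neighbours' estimates, applied to TD(0) policy evaluation with linear features; the environment, the combination matrix and the on-policy sampling are simplifying assumptions, not the paper's off-policy algorithm:

```python
import numpy as np

# Diffusion-based cooperative policy evaluation: each agent takes a local TD(0)
# step on a linear value estimate, then the network averages the estimates.
n_states, n_feat, n_agents = 5, 3, 4
rng = np.random.default_rng(1)
phi = rng.normal(size=(n_states, n_feat))        # state features (assumed)
P = np.full((n_states, n_states), 1 / n_states)  # transition matrix (assumed)
r = rng.normal(size=n_states)                    # expected rewards (assumed)
gamma, mu = 0.9, 0.05                            # discount factor, step-size

C = np.full((n_agents, n_agents), 1 / n_agents)  # combination (diffusion) matrix
w = np.zeros((n_agents, n_feat))                 # per-agent weight vectors

for t in range(5000):
    psi = np.empty_like(w)
    for k in range(n_agents):                    # adapt: local TD(0) step
        s = rng.integers(n_states)
        s_next = rng.choice(n_states, p=P[s])
        delta = r[s] + gamma * phi[s_next] @ w[k] - phi[s] @ w[k]
        psi[k] = w[k] + mu * delta * phi[s]
    w = C @ psi                                  # combine: diffuse over the network

print("consensus estimate of value weights:", w.mean(axis=0))
```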

Relevance:

80.00%

Abstract:

Many macroscopic properties (hardness, corrosion, catalytic activity, etc.) are directly related to the surface structure, that is, to the position and chemical identity of the outermost atoms of the material. Current experimental techniques for its determination produce a "signature" from which the structure must be inferred by solving an inverse problem: a solution is proposed, its corresponding signature computed and then compared to the experiment. This is a challenging optimization problem where the search space and the number of local minima grow exponentially with the number of atoms, hence its solution cannot be achieved for arbitrarily large structures. Nowadays, it is solved by using a mixture of human knowledge and local search techniques: an expert proposes a solution that is refined using a local minimizer. If the outcome does not fit the experiment, a new solution must be proposed again. Solving a small surface can take from days to weeks with this trial and error method. Here we describe our ongoing work towards its solution. We use a hybrid algorithm that mixes evolutionary techniques with trust-region methods and reuses knowledge gained during the execution to avoid repeated searches of structures. Its parallelization produces good results even without requiring the gathering of the full population, hence it can be used in loosely coupled environments such as grids. With this algorithm, test cases that previously took weeks of expert time can be solved automatically in a day or two of uniprocessor time.
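
A toy sketch of the hybrid loop described above, assuming scipy: evolutionary variation, trust-region local refinement, and a cache that reuses knowledge from earlier refinements; the Rastrigin function stands in for the far more expensive signature-mismatch objective:

```python
import numpy as np
from scipy.optimize import minimize

# Memetic loop: evolve a population, refine offspring with a trust-region
# minimizer, and cache refinements so visited structures are not re-searched.
def objective(x):
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x)) + 10 * x.size  # Rastrigin

rng = np.random.default_rng(2)
dim, pop_size = 4, 12
pop = rng.uniform(-5, 5, size=(pop_size, dim))
cache = {}                                   # knowledge reuse between searches

def refine(x):
    key = tuple(np.round(x, 2))              # coarse key: nearby starts hit the cache
    if key not in cache:
        res = minimize(objective, x, method="trust-constr")
        cache[key] = (res.x, res.fun)
    return cache[key]

for gen in range(20):
    parents = pop[rng.permutation(pop_size)]
    children = 0.5 * (pop + parents) + rng.normal(0, 0.3, pop.shape)  # crossover + mutation
    refined = [refine(c) for c in children]
    all_x = np.vstack([pop] + [[x] for x, _ in refined])
    all_f = np.array([objective(x) for x in all_x])
    pop = all_x[np.argsort(all_f)[:pop_size]]          # survivor selection

print("best structure:", np.round(pop[0], 3), "fitness:", round(objective(pop[0]), 4))
```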

Relevance:

80.00%

Abstract:

This project is part of a line of work whose final goal is to optimize the energy consumed by a handheld multimedia device through feedback control techniques, based on dynamic modification of the processor's operating frequency and supply voltage. The frequency and voltage modification is driven by feedback information about the device's power consumption. This is a problem because it is usually not possible to monitor power consumption on this kind of device, which is why a power consumption estimate obtained from a prediction model is used instead. From the number of times certain events occur in the device's processor, the prediction model is able to produce an estimate of the power consumed by the device. The work carried out in this project focuses on the implementation of a power estimation model in the Linux kernel. The estimation is implemented in the operating system, first, to gain direct access to the processor counters and, second, to facilitate the frequency and voltage modification once the power estimate has been obtained, since that modification is also performed from the operating system. A further reason is that the estimation must be independent of user applications. Moreover, the estimation process runs periodically, which would be difficult to achieve outside the operating system. Periodic estimation is essential because the intended frequency and voltage modification is dynamic, so the device's power consumption must be known at all times; the control algorithms must also be designed around a periodic actuation pattern. The power estimation model is specific to the consumption profile generated by a single application, in this case a video decoder. Nevertheless, it must work as accurately as possible for each of the processor's operating frequencies and for as many video sequences as possible, because the successive power estimates are to be used to modify the frequency dynamically, so the model must keep producing estimates regardless of the frequency at which the device is working. To assess the precision of the estimation model, measurements of the power consumed by the device at the different operating frequencies are taken during execution of the video decoder. These measurements are compared with the power estimates obtained during the same executions, yielding the prediction error committed by the model and allowing the appropriate modifications and adjustments to be made.
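
The estimator concept, a model mapping processor event counts to power, can be sketched as a least-squares fit; the counter set, frequency levels and coefficients below are illustrative assumptions, and the real implementation reads hardware performance counters from inside the Linux kernel rather than user-space Python:

```python
import numpy as np

# Hypothetical linear power model: P ~ w . (event counts) + baseline(frequency).
rng = np.random.default_rng(3)
n = 200
counters = rng.uniform(0, 1e6, size=(n, 3))   # e.g. instructions, cache misses, bus accesses
freq = rng.choice([300e6, 600e6, 1.0e9], n)   # operating frequency, Hz (assumed levels)
true_w = np.array([2e-7, 8e-7, 5e-7])
power = counters @ true_w + 0.4e-9 * freq + rng.normal(0, 0.02, n)  # measured power, W

X = np.column_stack([counters, freq, np.ones(n)])   # design matrix with intercept
w, *_ = np.linalg.lstsq(X, power, rcond=None)       # fit the model offline

err = 100 * np.abs(X @ w - power) / power
print(f"mean relative prediction error: {err.mean():.2f}%")
```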

Relevance:

50.00%

Abstract:

A finite element model was used to simulate timber beams with defects and predict their maximum load in bending. Taking into account the elastoplastic constitutive law of timber, the prediction of the fracture load gives information about the mechanisms of timber failure, particularly with regard to the influence of knots, and their local grain deviation, on the fracture. A finite element model was constructed using the ANSYS element Plane42 in a plane-stress 2D analysis, which equates thickness to the width of the section to create a mesh which is as uniform as possible. Three sub-models reproduced the bending test according to UNE EN 408: i) timber with holes caused by knots; ii) timber with adherent knots which have structural continuity with the rest of the beam material; iii) timber with knots but with only partial contact between knot and beam, which was artificially simulated by means of contact springs between the two materials. The model was validated using ten 45 × 145 × 3000 mm beams of Pinus sylvestris L. which presented knots and grain deviation. The fracture stress data obtained were compared with the results of the numerical simulations, resulting in an adjustment error of less than 9.7%.

Relevance:

50.00%

Abstract:

Nowadays, Computational Fluid Dynamics (CFD) solvers are widely used within industry to model fluid flow phenomena. Several fluid flow model equations have been employed in the last decades to simulate and predict the forces acting, for example, on different aircraft configurations. Computational time and accuracy are strongly dependent on the fluid flow model equation and the spatial dimension of the problem considered. While simple models based on perfect flows, like panel methods or potential flow models, can be very fast to solve, they usually suffer from poor accuracy when simulating real flows (transonic, viscous). On the other hand, more complex models such as the full Navier-Stokes equations provide high-fidelity predictions but at a much higher computational cost. Thus, a good compromise between accuracy and computational time has to be fixed for engineering applications. A discretisation technique widely used within industry is the so-called Finite Volume approach on unstructured meshes. This technique spatially discretises the flow motion equations onto a set of elements which form a mesh, a discrete representation of the continuous domain. Using this approach, for a given flow model equation, the accuracy and computational time mainly depend on the distribution of the nodes forming the mesh. Therefore, a good compromise between accuracy and computational time might be obtained by carefully defining the mesh. However, defining an optimal mesh for complex flows and geometries requires a very high level of expertise in fluid mechanics and numerical analysis, and in most cases it is impossible to simply guess which regions of the computational domain affect the accuracy the most. Thus, it is desirable to have an automated remeshing tool, which is more flexible with unstructured meshes than its structured counterpart. However, the adaptive methods currently in use still leave an open question: how to drive the adaptation efficiently? Pioneering sensors based on flow features generally suffer from a lack of reliability, so in the last decade more effort has been put into developing numerical error-based sensors, such as adjoint-based adaptation sensors. While very efficient at adapting meshes for a given functional output, the latter method is very expensive, as it requires solving a dual set of equations and computing the sensor on an embedded mesh. Therefore, it would be desirable to develop a more affordable numerical error estimation method. The current work aims at estimating the truncation error, which arises when discretising a partial differential equation: these are the higher-order terms neglected in the construction of the numerical scheme. The truncation error provides very useful information, as it is strongly related to the flow model equation and its discretisation. On the one hand, it is a very reliable measure of the quality of the mesh, and therefore very useful for driving a mesh adaptation procedure. On the other hand, it is strongly linked to the flow model equation, so that a careful estimation actually gives information on how well a given equation is solved, which may be useful in the context of τ-extrapolation or zonal modelling. The following work is organized as follows: Chap. 1 contains a short review of mesh adaptation techniques as well as numerical error prediction. In the first section, Sec. 1.1, the basic refinement strategies are reviewed and the main contributions to structured and unstructured mesh adaptation are presented. Sec.
1.2 introduces the definitions of the errors encountered when solving Computational Fluid Dynamics problems and reviews the most common approaches to predict them. Chap. 2 is devoted to the mathematical formulation of truncation error estimation in the context of the finite volume methodology, as well as a complete verification procedure. Several features are studied, such as the influence of grid non-uniformities, non-linearity, boundary conditions and non-converged numerical solutions. This verification part has been submitted and accepted for publication in the Journal of Computational Physics. Chap. 3 presents a mesh adaptation algorithm based on truncation error estimates and compares the results to a feature-based and an adjoint-based sensor (in collaboration with Jorge Ponsín, INTA). Two- and three-dimensional cases relevant for validation in the aeronautical industry are considered. This part has been submitted and accepted in the AIAA Journal. An extension to the Reynolds-Averaged Navier-Stokes equations is also included, where τ-estimation-based mesh adaptation and τ-extrapolation are applied to viscous wing profiles. The latter has been submitted to the Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering. Keywords: mesh adaptation, numerical error prediction, finite volume
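
The notion of truncation error can be made concrete with a one-dimensional finite-difference analogue (the thesis itself works with finite volumes): applying the discrete operator to a smooth manufactured solution exposes the neglected higher-order terms, here the classical leading term of the central second difference:

```python
import numpy as np

# Compare the discrete operator with the exact second derivative: the
# difference is the truncation error, whose leading term for the central
# scheme is (h**2 / 12) * u''''(x).
u, d2u, d4u = np.sin, lambda x: -np.sin(x), np.sin   # u and its derivatives

h = 0.05
x = np.arange(h, np.pi, h)
d2u_h = (u(x + h) - 2 * u(x) + u(x - h)) / h**2      # discrete operator

tau = d2u_h - d2u(x)                                 # truncation error
tau_leading = (h**2 / 12) * d4u(x)                   # analytic leading term
print(f"max |tau| = {np.abs(tau).max():.2e}, "
      f"max |tau - leading| = {np.abs(tau - tau_leading).max():.2e}")
```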

Relevance:

40.00%

Abstract:

This work evaluates a spline-based smoothing method applied to the output of a glucose predictor. Methods: Our on-line prediction algorithm is based on a neural network model (NNM). We trained/validated the NNM with a prediction horizon of 30 minutes using 39/54 profiles of patients monitored with the Guardian® Real-Time continuous glucose monitoring system. The NNM output is smoothed by fitting a causal cubic spline. The assessment parameters are the error (RMSE), the mean delay (MD) and the high-frequency noise (HFCrms). The HFCrms is the root-mean-square value of the high-frequency components isolated with a zero-delay non-causal filter; HFCrms is 2.90±1.37 mg/dl for the original profiles.
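
The smoothing step can be sketched with a cubic smoothing spline from scipy; unlike the paper's causal spline, this fits the whole series at once (non-causal), and the data are synthetic glucose-like values rather than Guardian profiles:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Smooth a noisy predictor output with a cubic spline and measure the RMSE
# against the underlying trend.
t = np.arange(0, 300, 5)                               # minutes, 5-min sampling
clean = 120 + 40 * np.sin(2 * np.pi * t / 200)         # smooth glucose trend, mg/dl
rng = np.random.default_rng(4)
noisy_pred = clean + rng.normal(0, 6, t.size)          # predictor output with HF noise

spline = UnivariateSpline(t, noisy_pred, k=3, s=t.size * 6**2)  # cubic, smoothing factor
smoothed = spline(t)

rmse = np.sqrt(np.mean((smoothed - clean) ** 2))
print(f"RMSE vs. underlying trend: {rmse:.1f} mg/dl")
```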

Relevance:

40.00%

Abstract:

In this thesis, the τ-estimation method for estimating the truncation error is extended from low-order to spectral methods. While most works in the literature rely on fully time-converged solutions on grids with different spacings to perform the estimation, only one grid with different polynomial orders is used in this work. Furthermore, a non-time-converged solution is used, resulting in the quasi-a priori τ-estimation method, which estimates the error while the residual of the time-iterative method is not yet negligible. It is shown in this work that some of the fundamental assumptions about the error behaviour, well established for low-order methods, are no longer valid in high-order schemes, making a complete revision of the error behaviour necessary before redefining the algorithm. To facilitate this task, the Chebyshev collocation method is considered as a first step, limiting the application to simple geometries. The extension to the Discontinuous Galerkin Spectral Element Method introduces additional difficulties for the accurate definition and estimation of the error, due to the weak formulation, the multidomain discretization and the discontinuous formulation. First, the analysis focuses on scalar conservation laws to examine the accuracy of the estimation of the truncation error. Then, the validity of the analysis is shown for the incompressible and compressible Euler and Navier-Stokes equations. The developed quasi-a priori τ-estimation method makes it possible to decouple the interfacial and interior contributions of the truncation error in the Discontinuous Galerkin Spectral Element Method, and provides information about the anisotropy of the solution as well as its rate of convergence in polynomial order. It is demonstrated here that this quasi-a priori approach yields a spectrally accurate estimate of the truncation error.
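
The order-based (rather than grid-based) estimation idea can be sketched in one dimension: the truncation error of a degree-N Chebyshev derivative is estimated by comparing against a higher degree M, with no second grid; the test function and orders are illustrative, and this is only a loose analogue of the DGSEM setting:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Estimate the error of a low-order spectral derivative using a higher order
# on the same domain, instead of a refined grid.
u, du_exact = np.exp, np.exp          # test function and its exact derivative

def cheb_derivative(f, order, x):
    """Evaluate at x the derivative of the degree-`order` Chebyshev interpolant of f."""
    nodes = np.cos(np.pi * np.arange(order + 1) / order)   # Gauss-Lobatto nodes
    coeffs = C.chebfit(nodes, f(nodes), order)
    return C.chebval(x, C.chebder(coeffs))

x = np.linspace(-1, 1, 50)
N, M = 6, 12
tau_true = cheb_derivative(u, N, x) - du_exact(x)              # exact error
tau_est = cheb_derivative(u, N, x) - cheb_derivative(u, M, x)  # order-based estimate

print(f"max |tau_true| = {np.abs(tau_true).max():.2e}")
print(f"max |tau_est|  = {np.abs(tau_est).max():.2e}")  # nearly identical
```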

Relevance:

40.00%

Abstract:

Predicting failures in a distributed system from previous events through logistic regression is a standard approach in the literature. This technique is not reliable, though, in two situations: in the prediction of rare events, which do not appear in a large enough proportion for the algorithm to capture them, and in environments with too many variables, as logistic regression tends to overfit in these situations, while manually selecting a subset of variables to create the model is error-prone. In this paper, we solve an industrial research case that presented this situation with a combination of elastic net logistic regression, a method that allows us to automatically select useful variables, a process of cross-validation on top of it, and the application of a rare-events prediction technique to reduce computation time. This process provides two layers of cross-validation that automatically obtain the optimal model complexity and the optimal model parameter values, while ensuring that even rare events will be correctly predicted with a low number of training instances. We tested this method against real industrial data, obtaining a total of 60 out of 80 possible models with a 90% average model accuracy.
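
A sketch of the core ingredient, assuming scikit-learn (the paper does not name its toolkit): cross-validated elastic net logistic regression, with class weighting as a simple stand-in for the rare-event handling:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV

# Elastic net logistic regression with built-in cross-validation over the
# regularisation strength and the l1 ratio; synthetic imbalanced data.
X, y = make_classification(n_samples=2000, n_features=100, n_informative=8,
                           weights=[0.97, 0.03], random_state=0)  # 3% "failures"

clf = LogisticRegressionCV(
    penalty="elasticnet", solver="saga",   # saga is the solver supporting elasticnet
    l1_ratios=[0.2, 0.5, 0.8], Cs=5,       # cross-validated mixing and strength
    class_weight="balanced",               # compensate for the rare positive class
    max_iter=5000, cv=5, scoring="balanced_accuracy",
)
clf.fit(X, y)

n_selected = np.sum(clf.coef_ != 0)
print(f"variables kept by the l1 part: {n_selected} of {X.shape[1]}")
```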

Relevance:

30.00%

Abstract:

Three methodologies to assess As bioaccessibility were evaluated using playground soil collected from 16 playgrounds in Madrid, Spain: two (the Simplified Bioaccessibility Extraction Test, SBET, and hydrochloric acid extraction, HCl) assess gastric-only bioaccessibility, and the third (the Physiologically Based Extraction Test, PBET) evaluates mouth-gastric-intestinal bioaccessibility. Aqua regia-extractable (pseudo-total) As contents, which are routinely employed in risk assessments, were used as the reference to establish the following percentages of bioaccessibility: SBET, 63.1; HCl, 51.8; PBET, 41.6, the highest values being associated with the gastric-only extractions. For Madrid playground soils, characterised by a very uniform, weakly alkaline pH and low Fe oxide and organic matter contents, the statistical analysis of the results indicates that, in contrast with other studies, the highest percentage of As in the samples was bound to carbonates and/or present as calcium arsenate. As opposed to the As bound to Fe oxides, this As is readily released in the gastric environment as the carbonate matrix is decomposed and calcium arsenate is dissolved, but some of it is subsequently sequestered in unavailable forms as the pH is raised to 5.5 to mimic intestinal conditions. The HCl extraction can be used as a simple and reliable (i.e. low residual standard error) proxy for the more expensive, time-consuming and error-prone PBET methodology. The HCl method would essentially halve the estimate of carcinogenic risk for children playing in Madrid playground soils, providing a more representative value of the associated risk than the pseudo-total concentrations used at present.

Relevance:

30.00%

Abstract:

Determining the isotopic content of spent nuclear fuel as accurately as possible is gaining importance due to its safety and economic implications. Since higher burn-ups are nowadays achievable through increased initial enrichments, more efficient burn-up strategies within the reactor cores and the extension of irradiation periods, establishing and improving computational methodologies is mandatory in order to carry out reliable criticality and isotopic prediction calculations. Several codes (WIMSD5, SERPENT 1.1.7, SCALE 6.0, MONTEBURNS 2.0 and MCNP-ACAB) and methodologies are tested here and compared against consolidated benchmarks (an OECD/NEA pin cell moderated with light water) with the purpose of validating them and reviewing the state of isotopic prediction capabilities. These preliminary comparisons suggest what can generally be expected of these codes when applied to real problems. In the present paper, SCALE 6.0 and MONTEBURNS 2.0 are used to model the same reported geometries, material compositions and burn-up history of cycles 7-11 of the Spanish Vandellós II reactor, and to reproduce the measured isotopic compositions after irradiation and decay times. We analyse comparisons between measurements and each code's results for several degrees of geometrical modelling detail, using different libraries and cross-section treatment methodologies. The power and flux normalization method implemented in MONTEBURNS 2.0 is discussed, and a new normalization strategy is developed to deal with the selected and similar problems; further options are included to reproduce the temperature distributions of the materials within the fuel assemblies, and a new code is introduced to automate series of simulations and manage material information between them. In order to have a realistic confidence level in the prediction of spent fuel isotopic content, we have estimated uncertainties using our MCNP-ACAB system. This depletion code, which combines the neutron transport code MCNP and the inventory code ACAB, propagates the uncertainties into the nuclide inventory, assessing the potential impact of uncertainties in the basic nuclear data: cross-sections, decay data and fission yields.
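
The uncertainty-propagation idea can be illustrated with a toy Monte Carlo over a one-step capture chain; MCNP-ACAB does this for the full transport/depletion problem, and every number below is made up:

```python
import numpy as np

# Propagate cross-section uncertainty to an isotopic inventory: nuclide A is
# transmuted to B under a constant flux, and the capture cross section is
# sampled from its assumed uncertainty. Analytic solution of
# dN_A/dt = -sigma * phi * N_A, hence N_B = N0 * (1 - exp(-sigma * phi * t)).
rng = np.random.default_rng(5)
phi = 3e13               # neutron flux, n/cm2/s (assumed constant)
t = 3600 * 24 * 30       # 30 days of irradiation, s
sigma = 5.0e-24          # nominal capture cross section of A, cm2 (made up)
rel_unc = 0.05           # assumed 5% (1-sigma) relative uncertainty
N0 = 1.0e24              # initial atoms of A

samples = rng.normal(sigma, rel_unc * sigma, 10_000)
N_B = N0 * (1.0 - np.exp(-samples * phi * t))    # atoms of B at shutdown

print(f"N_B = {N_B.mean():.3e} +/- {N_B.std():.1e} atoms "
      f"({100 * N_B.std() / N_B.mean():.1f}% relative uncertainty)")
```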

Relevance:

30.00%

Abstract:

A new and effective method for the reduction of truncation errors in partial spherical near-field (SNF) measurements is proposed. The method is useful when measuring electrically large antennas, where the measurement time with the classical SNF technique is prohibitively long and an acquisition over the whole spherical surface is not practical. Therefore, to reduce the data acquisition time, a partial sphere measurement is usually made, taking samples over a portion of the spherical surface in the direction of the main beam. In this case, however, the radiation pattern is not known outside the measured angular sector, and a truncation error is present in the calculated far-field pattern within this sector. The method is based on the Gerchberg-Papoulis algorithm used to extrapolate functions, and it is able to extend the valid region of the calculated far-field pattern up to the whole forward hemisphere. To verify the effectiveness of the method, several examples are presented using both simulated and measured truncated near-field data.
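
The Gerchberg-Papoulis iteration is simple to state in one dimension: alternately enforce the band-limit in the transform domain and re-impose the measured samples in the signal domain; a sketch with a synthetic band-limited signal standing in for near-field data:

```python
import numpy as np

# Papoulis-Gerchberg extrapolation of a band-limited signal known only on a
# central window. Sizes and signal are illustrative.
n, band = 256, 12
rng = np.random.default_rng(6)

inband = np.zeros(n, bool)
inband[np.r_[0:band + 1, n - band:n]] = True     # low-frequency support
spec = np.where(inband, rng.normal(size=n) + 1j * rng.normal(size=n), 0)
signal = np.fft.ifft(spec).real                  # band-limited "pattern"

known = np.zeros(n, bool)
known[n // 4: 3 * n // 4] = True                 # measured angular sector

est = np.where(known, signal, 0.0)
for _ in range(500):
    s = np.fft.fft(est)
    s[~inband] = 0                               # enforce the band limit
    est = np.fft.ifft(s).real
    est[known] = signal[known]                   # enforce the measured samples

err = np.linalg.norm(est[~known] - signal[~known]) / np.linalg.norm(signal[~known])
print(f"relative extrapolation error outside the measured sector: {err:.2e}")
```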

Relevance:

30.00%

Abstract:

Salamanca has been considered among the most polluted cities in Mexico. The vehicle fleet, industry and emissions produced by agriculture, together with the orography and climatic characteristics, have led to increased concentrations of particulate matter less than 10 μm in diameter (PM10). In this work, a Multilayer Perceptron Neural Network has been used to predict the pollutant concentration one hour ahead. The database used to train the Neural Network corresponds to historical time series of meteorological variables (wind speed, wind direction, temperature and relative humidity) and air pollutant concentrations of PM10. Before the prediction, the Fuzzy c-Means clustering algorithm was applied in order to find relationships among the pollutant and meteorological variables. These relationships provide additional information that is used for prediction. Our experiments with the proposed system show the importance of this set of meteorological variables for the prediction of PM10 concentrations, as well as the efficiency of the neural network. Performance is estimated using the Root Mean Square Error (RMSE) and the Mean Absolute Error (MAE). The results show that the information obtained in the clustering step allows a prediction one hour ahead, with data from the past 2 hours.
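
A minimal sketch of one-hour-ahead prediction from the past two hours, assuming scikit-learn and synthetic PM10 data (the real system also feeds in the meteorological variables and the fuzzy c-means cluster information):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error

# Predict PM10 at hour t from the readings at t-1 and t-2 with a small MLP.
rng = np.random.default_rng(7)
hours = np.arange(2000)
pm10 = 40 + 15 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 4, hours.size)

X = np.column_stack([pm10[1:-1], pm10[:-2]])   # lagged design matrix
y = pm10[2:]
split = 1500
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
rmse = np.sqrt(mean_squared_error(y[split:], pred))
mae = mean_absolute_error(y[split:], pred)
print(f"RMSE = {rmse:.2f} ug/m3, MAE = {mae:.2f} ug/m3")
```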