24 results for Minimum Mean Square Error of Intensity Distribution

at Universidad Politécnica de Madrid


Abstract:

Cover crop characterization may allow comparing the suitability of different species to provide ecological services such as erosion control, nutrient recycling or fodder production. Different techniques to characterize the plant canopy were studied under field conditions in order to establish a methodology for measuring and comparing the canopies of the most common cover crops. A field trial was established in Madrid (central Spain) to determine the relationship between leaf area index (LAI) and ground cover (GC) in a grass, a legume and a crucifer crop. Twelve plots were sown with either barley (Hordeum vulgare L.), vetch (Vicia sativa L.) or rape (Brassica napus L.). On 10 sampling dates the LAI (both direct and LAI-2000 estimations), the fraction of intercepted photosynthetically active radiation (FIPAR) and the GC were measured. A two-year field experiment (October-April) was established at the same location to evaluate different species (Hordeum vulgare L., Secale cereale L., ×Triticosecale Wittm., Sinapis alba L., Vicia sativa L.) and cultivars (20) according to their suitability as cover crops. GC was monitored through digital image analysis with 21 and 22 samplings, and biomass was measured 8 and 10 times, respectively, for each season. A Gompertz model characterized ground cover until the decay observed after frosts, while biomass was fitted to Gompertz, logistic and linear-exponential equations. At the end of the experiment the C, N and fiber (neutral detergent, acid detergent and lignin) contents, as well as the N fixed by the legumes, were determined. Multicriteria decision analysis (MCDA) was applied in order to rank the species and cultivars according to their suitability to perform as cover crops in four different modalities: cover crop, catch crop, green manure and fodder. Intercropping legumes with non-legumes may affect the root growth and N uptake of both components of the mixture. Knowledge of how specific root systems affect the growth of the individual species is useful for understanding the interactions in intercrops as well as for planning cover cropping strategies. In a third trial, rhizotron studies were combined with root extraction and species identification by microscopy, and with studies of growth, N uptake and 15N uptake from deeper soil layers. Root interactions in growth and N foraging were studied for two of the best-ranked cultivars of the previous study: a barley (Hordeum vulgare L. cv. Hispanic) and a vetch (Vicia sativa L. cv. Aitana). N was added at 0 (N0), 50 (N1) and 150 (N2) kg N ha-1.

In the first study, linear and quadratic models fitted the relationship between GC and LAI well for all of the crops, but in the grass they reached a plateau when the LAI > 4. Before reaching full cover, the slope of the linear relationship between both variables was within the range of 0.025 to 0.030. The LAI-2000 readings were linearly correlated with the LAI but tended to overestimate it. Corrections based on the clumping effect reduced the root mean square error of the LAI estimated from the LAI-2000 readings from 1.2 to less than 0.5 for the crucifer and the legume, but were not effective for barley. Consequently, only the GC and the biomass were measured in the following studies. In the second experiment, the grasses reached the highest ground cover (83-99%) and biomass (1226-1928 g m-2) at the end of the experiment, with the highest C/N ratio (27-39) and digestible fiber content (53-60%) and the lowest residue quality (~68%). The mustard presented high GC, biomass and N uptake in the warmer year, similar to the grasses, but low fodder capability in both years. The vetch presented the lowest N uptake (2.4-0.7 g N m-2), owing to N fixation (9.8-1.6 g N m-2), and low biomass accumulation. The thermal time until reaching 30% ground cover was a good indicator of fast-covering species. Quantification of these variables revealed variability among the species and provided information for further decisions involving cover crop selection and management.

Aggregating these variables through utility functions allowed ranking species and cultivars for each use. Grasses were the most suitable for the cover crop, catch crop and fodder uses, while the vetches were the best as green manures. The mustard attained high ranks as cover and catch crop in the first season, but decayed in the second owing to its poor performance in cold winters. Hispanic was the most suitable barley cultivar as cover and catch crop, and Albacete as fodder. The triticale Titania attained the highest rank as cover crop, catch crop and fodder. The vetches Aitana and BGE014897 showed good aptitude as green manures and catch crops. MCDA allowed comparison among species and cultivars and provided relevant information for cover crop selection and management. In the rhizotron study, both the intercrop and the barley attained higher root intensity (RI) and root depth (RD) than the vetch, with values around 150 crosses m-1 and 1.4 m respectively, compared to 50 crosses m-1 and 0.9 m for the vetch. In the deeper soil layers, the intercrop showed slightly larger RI values than the sole-cropped barley. The barley and the intercrop had larger root length density (RLD) values (200-600 m m-3) than the vetch (25-130 m m-3) at 0.8-1.2 m depth. The topsoil N supply showed no clear effect on RI, RD or RLD; however, increasing topsoil N favored the proliferation of vetch roots in the intercrop at deep soil layers, with the barley/vetch root ratio ranging from 25 at N0 to 5 at N2. The N uptake of the barley was enhanced in the intercrop at the expense of the vetch (from ~100 to 200 mg plant-1). The intercropped barley roots also took up more labeled nitrogen from the deep layers (0.6 mg 15N plant-1) than the sole-cropped barley roots (0.3 mg 15N plant-1).
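The ground-cover dynamics described above lend themselves to a worked example. Below is a minimal sketch, in Python, of fitting a Gompertz curve to ground-cover data and reading off the thermal time to 30% cover; the data points, initial parameter guesses and degree-day units are illustrative assumptions, not the study's measurements.

```python
# Sketch of a Gompertz ground-cover fit of the kind described above.
# All sample data and starting values are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    """Ground cover (%) vs. thermal time t: a * exp(-b * exp(-c * t))."""
    return a * np.exp(-b * np.exp(-c * t))

# Hypothetical thermal time (degree-days) and ground cover (%) samples.
t = np.array([100, 200, 300, 400, 500, 600, 800, 1000], dtype=float)
gc = np.array([2, 8, 22, 45, 65, 78, 88, 92], dtype=float)

popt, _ = curve_fit(gompertz, t, gc, p0=(95.0, 8.0, 0.005))
a, b, c = popt
rmse = np.sqrt(np.mean((gc - gompertz(t, *popt)) ** 2))
print(f"asymptote = {a:.1f}%  RMSE = {rmse:.2f}")

# Thermal time to 30% ground cover (the fast-cover indicator above):
# invert the fitted curve at gc = 30.
t30 = -np.log(np.log(a / 30.0) / b) / c
print(f"thermal time to 30% cover ~ {t30:.0f} degree-days")
```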

Abstract:

The study of the reliability of components and systems is of great importance in several fields of engineering, and quite particularly in computer science. When analyzing the lifetimes of the sampled items, one must take into account the items that do not fail during the experiment, as well as those that fail from causes other than the one under study. New types of sampling arise to cover these cases. The most general of them, censored sampling, is the one considered in our work. In this sampling scheme both the time until the component fails and the censoring time are random variables. Under the hypothesis that both times are exponentially distributed, Professor Hurt studied the asymptotic behavior of the maximum likelihood estimator of the reliability function. Bayesian methods seem attractive for reliability studies because they incorporate into the analysis the prior information that is normally available in real problems. We have therefore considered two Bayes estimators of the reliability of an exponential distribution: the mean and the mode of the posterior distribution. We have calculated the asymptotic expansion of the mean, variance and mean square error of both estimators when the censoring distribution is exponential. We have also obtained the asymptotic distribution of the estimators for the more general case in which the censoring distribution is Weibull. Two types of large-sample confidence intervals have been proposed for each estimator. The results have been compared with those of the maximum likelihood estimator, and with those of two nonparametric estimators, the product-limit and the Bayesian, with one of our estimators showing superior behavior. Finally, we have verified by simulation that our estimators are robust against the assumed censoring distribution, and that one of the proposed confidence intervals is valid with small samples. This study has also served to confirm the better behavior of one of our estimators.
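As a hedged illustration of the setting described above, the sketch below simulates randomly censored exponential lifetimes and compares the maximum likelihood estimate of the reliability R(t) = exp(-λt) with one Bayes estimate, here the posterior mean under a gamma prior on the failure rate; the prior, sample size and rates are assumptions, and the thesis' exact estimators may differ in detail.

```python
# Minimal sketch: Bayes vs. maximum-likelihood estimation of the
# reliability R(t) = exp(-lambda*t) from randomly censored data.
# Prior, sample size and rates are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
lam_true, lam_cens, n = 0.5, 0.3, 50

life = rng.exponential(1 / lam_true, n)   # true failure times
cens = rng.exponential(1 / lam_cens, n)   # random censoring times
t_obs = np.minimum(life, cens)            # observed times
failed = life <= cens                     # failure indicators

d, T = failed.sum(), t_obs.sum()          # failures, total time on test

t = 1.0                                   # evaluate R at time t
r_mle = np.exp(-(d / T) * t)              # plug-in MLE of R(t)

# Gamma(a0, b0) prior on lambda -> Gamma(a0 + d, b0 + T) posterior.
a0, b0 = 1.0, 1.0
# The posterior mean of exp(-lambda*t) has a closed form (Gamma MGF):
r_bayes = ((b0 + T) / (b0 + T + t)) ** (a0 + d)

print(f"R_true={np.exp(-lam_true*t):.3f}  "
      f"MLE={r_mle:.3f}  Bayes={r_bayes:.3f}")
```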

Abstract:

We analyze the effect of packet losses in video sequences and propose a lightweight Unequal Error Protection strategy which, by choosing which packet is discarded, strongly reduces the Mean Square Error of the received sequence.
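A toy sketch of the selection rule implied above: when one packet must be discarded, discard the one whose loss contributes the least to the MSE. The packet labels and per-packet distortion values are invented for illustration.

```python
# Hypothetical per-packet MSE contributions if that packet were lost.
mse_if_lost = {"I-frame": 410.0, "P-frame": 95.0, "B-frame": 12.5}

# Discard the packet with the smallest impact on reconstruction error.
victim = min(mse_if_lost, key=mse_if_lost.get)
print(f"discard {victim}: adds only {mse_if_lost[victim]} to the MSE")
```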

Abstract:

Canopy characterization is essential for describing the interaction of a crop with its environment. The goal of this work was to determine the relationship between leaf area index (LAI) and ground cover (GC) in a grass, a legume and a crucifer crop, and to assess the feasibility of using these relationships as well as LAI-2000 readings to estimate LAI. Twelve plots were sown with either barley (Hordeum vulgare L.), vetch (Vicia sativa L.), or rape (Brassica napus L.). On 10 sampling dates the LAI (both direct and LAI-2000 estimations), the fraction of intercepted photosynthetically active radiation (FIPAR) and GC were measured. Linear and quadratic models fitted the relationship between the GC and LAI for all of the crops, but they reached a plateau in the grass when the LAI > 4. Before reaching full cover, the slope of the linear relationship between both variables was within the range of 0.025 to 0.030. The LAI-2000 readings were linearly correlated with the LAI but tended to overestimate it. Corrections based on the clumping effect reduced the root mean square error of the LAI estimated from the LAI-2000 readings from 1.2 to less than 0.50 for the crucifer and the legume, but were not effective for barley.
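A minimal sketch of the curve fitting described above, with invented sample points standing in for the trial's measurements: linear and quadratic polynomials are fitted to GC versus LAI and compared by RMSE.

```python
# Illustrative linear vs. quadratic fit of the GC-LAI relationship.
# The sample points below are invented, not the trial's measurements.
import numpy as np

lai = np.array([0.3, 0.8, 1.5, 2.2, 3.0, 3.6, 4.2, 5.0])
gc = np.array([8.0, 21.0, 38.0, 55.0, 70.0, 80.0, 88.0, 91.0])  # percent

def rmse(pred):
    return np.sqrt(np.mean((gc - pred) ** 2))

lin = np.polyfit(lai, gc, 1)
quad = np.polyfit(lai, gc, 2)
print("linear    RMSE:", round(rmse(np.polyval(lin, lai)), 2))
print("quadratic RMSE:", round(rmse(np.polyval(quad, lai)), 2))
# Points with LAI > 4 sit on the plateau, which is why the quadratic
# model tracks the saturating data more closely than the straight line.
```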

Abstract:

Illumination uniformity of a spherical capsule directly driven by laser beams has been assessed numerically. Laser facilities characterized by ND = 12, 20, 24, 32, 48 and 60 directions of irradiation, each associated with a single laser beam or a bundle of NB laser beams, have been considered. The laser beam intensity profile is assumed super-Gaussian, and the calculations take into account beam imperfections such as power imbalance and pointing errors. The optimum laser intensity profile, which minimizes the root-mean-square deviation of the capsule illumination, depends on the values of the beam imperfections. Assuming that the NB beams are statistically independent, it is found that they provide a stochastic homogenization of the laser intensity associated with the whole bundle, reducing the errors associated with the whole bundle by a factor 1/√NB, which in turn improves the illumination uniformity of the capsule. Moreover, it is found that the uniformity of the irradiation is almost the same for all facilities and only depends on the total number of laser beams Ntot = ND × NB.
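The 1/√NB reduction can be checked numerically. The sketch below averages NB statistically independent beam errors and compares the resulting RMS with the predicted scaling; the 5% per-beam error magnitude is an arbitrary illustrative value.

```python
# Numerical check of the stochastic homogenization effect above:
# averaging NB independent beam errors shrinks the bundle's RMS error
# by about 1/sqrt(NB). Error magnitude is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(1)
sigma, trials = 0.05, 100_000        # 5% per-beam power imbalance (rms)

for nb in (1, 4, 16):
    bundle = rng.normal(0.0, sigma, size=(trials, nb)).mean(axis=1)
    print(f"NB={nb:2d}: rms={bundle.std():.4f}  "
          f"expected={sigma / np.sqrt(nb):.4f}")
```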

Abstract:

Salamanca has been considered among the most polluted cities in Mexico. The vehicle fleet, the industry and the emissions produced by agriculture, as well as the orography and climatic characteristics, have led to increased concentrations of particulate matter less than 10 μm in diameter (PM10). In this work, a Multilayer Perceptron Neural Network is used to predict the pollutant concentration one hour ahead. The database used to train the Neural Network corresponds to historical time series of meteorological variables (wind speed, wind direction, temperature and relative humidity) and PM10 air pollutant concentrations. Before the prediction, the Fuzzy C-Means clustering algorithm is applied in order to find relationships between the pollutant and the meteorological variables. These relationships provide additional information that is used for the prediction. Our experiments with the proposed system show the importance of this set of meteorological variables for the prediction of PM10 pollutant concentrations, and the efficiency of the neural network. Performance is estimated using the Root Mean Square Error (RMSE) and the Mean Absolute Error (MAE). The results show that the information obtained in the clustering step allows a prediction one hour ahead, using data from the past 2 hours.
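A minimal sketch of the evaluation step described above: RMSE and MAE for an hour-ahead prediction built from the past two hours of data. The synthetic PM10 series and the linear stand-in model (in place of the trained Multilayer Perceptron) are assumptions for illustration.

```python
# Sketch of RMSE/MAE evaluation for an hour-ahead PM10 prediction.
# Synthetic data and a hypothetical linear model stand in for the MLP.
import numpy as np

rng = np.random.default_rng(2)
pm10 = 60 + 20 * np.sin(np.arange(200) / 12.0) + rng.normal(0, 5, 200)

# Features: PM10 at t-1 and t-2 (the "past 2 hours"); target: PM10 at t.
X = np.column_stack([pm10[1:-1], pm10[:-2]])
y = pm10[2:]
pred = X @ np.array([1.6, -0.6])   # hypothetical trend-extrapolation model

rmse = np.sqrt(np.mean((y - pred) ** 2))
mae = np.mean(np.abs(y - pred))
print(f"RMSE = {rmse:.2f}  MAE = {mae:.2f}")
```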

Abstract:

The CERES-Maize model is the most widely used maize (Zea mays L.) model and is a recognized reference for comparing new developments in maize growth, development, and yield simulation. The objective of this study was to present and evaluate CSM-IXIM, a new maize simulation model for DSSAT version 4.5. Code from CSM-CERES-Maize, the modular version of the model, was modified to include a number of model improvements. Model enhancements included the simulation of leaf area, C assimilation and partitioning, ear growth, kernel number, grain yield, and plant N acquisition and distribution. The addition of two genetic coefficients to simulate per-leaf foliar surface produced 32% smaller root mean square error (RMSE) values estimating leaf area index than did CSM-CERES. Grain yield and total shoot biomass were correctly simulated by both models. Carbon partitioning, however, showed differences. The CSM-IXIM model simulated leaf mass more accurately, reducing the CSM-CERES error by 44%, but overestimated stem mass, especially after stress, resulting in similar average RMSE values as CSM-CERES. Excessive N uptake after fertilization events as simulated by CSM-CERES was also corrected, reducing the error by 16%. The accuracy of N distribution to stems was improved by 68%. These improvements in CSM-IXIM provided a stable basis for more precise simulation of maize canopy growth and yield and a framework for continuing future model developments.

Abstract:

This work evaluates a spline-based smoothing method applied to the output of a glucose predictor. Methods: Our on-line prediction algorithm is based on a neural network model (NNM). We trained/validated the NNM with a prediction horizon of 30 minutes using 39/54 profiles of patients monitored with the Guardian® Real-Time continuous glucose monitoring system. The NNM output is smoothed by fitting a causal cubic spline. The assessment parameters are the root mean square error (RMSE), the mean delay (MD) and the high-frequency noise (HFCrms). The HFCrms is the root-mean-square value of the high-frequency components isolated with a zero-delay non-causal filter. HFCrms is 2.90±1.37 mg/dl for the original profiles.
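A hedged sketch of the smoothing step: a cubic smoothing spline (non-causal here, standing in for the paper's causal variant) is fitted to a noisy predictor output and compared by RMSE against the clean profile. The glucose profile, noise level, 5-minute sampling and smoothing factor are illustrative assumptions.

```python
# Sketch: cubic smoothing spline applied to a noisy predictor output.
# A non-causal spline stands in for the paper's causal variant; all
# data and the smoothing factor are illustrative assumptions.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(3)
t = np.arange(0, 300, 5.0)                      # minutes (5-min CGM rate)
glucose = 120 + 40 * np.sin(t / 60.0)           # "true" profile (mg/dl)
predicted = glucose + rng.normal(0, 8, t.size)  # noisy predictor output

# s ~ n * sigma^2 is a standard heuristic for the smoothing factor.
spline = UnivariateSpline(t, predicted, k=3, s=t.size * 8.0 ** 2)
smoothed = spline(t)

for name, series in (("raw", predicted), ("smoothed", smoothed)):
    rmse = np.sqrt(np.mean((series - glucose) ** 2))
    print(f"{name:8s} RMSE = {rmse:.1f} mg/dl")
```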

Abstract:

INTRODUCTION: Objective assessment of motor skills has become an important challenge in minimally invasive surgery (MIS) training. Currently, there is no gold standard defining and determining residents' surgical competence. To aid in the decision process, we analyze the validity of a supervised classifier to determine the degree of MIS competence based on the assessment of psychomotor skills. METHODOLOGY: An ANFIS is trained to classify performance in a box trainer peg transfer task performed by two groups (expert/non-expert). There were 42 participants included in the study: the non-expert group consisted of 16 medical students and 8 residents (< 10 MIS procedures performed), whereas the expert group consisted of 14 residents (> 10 MIS procedures performed) and 4 experienced surgeons. Instrument movements were captured by means of the Endoscopic Video Analysis (EVA) tracking system. Nine motion analysis parameters (MAPs) were analyzed, including time, path length, depth, average speed, average acceleration, economy of area, economy of volume, idle time and motion smoothness. Data reduction was performed by means of principal component analysis, and the result was then used to train the ANFIS net. Performance was measured by leave-one-out cross validation. RESULTS: The ANFIS presented an accuracy of 80.95%, with 13 experts and 21 non-experts correctly classified. The total root mean square error was 0.88, while the area under the classifier's ROC curve (AUC) was measured at 0.81. DISCUSSION: We have shown the usefulness of the ANFIS for classification of MIS competence in a simple box trainer exercise. The main advantage of using the ANFIS resides in its continuous output, which allows fine discrimination of surgical competence. There are, however, challenges that must be taken into account when considering the use of the ANFIS (e.g. training time, architecture modeling). Despite this, we have shown the discriminative power of the ANFIS for a low-difficulty box trainer task, regardless of the individual significance of each MAP. Future studies are required to confirm these findings and to include new tasks, conditions and sample populations.
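The evaluation pipeline above (PCA-reduced MAPs, leave-one-out cross validation, accuracy and AUC) can be sketched as follows; synthetic data replaces the 42 recorded trials, and a logistic regression stands in for the ANFIS, which is not available in scikit-learn.

```python
# Sketch of the evaluation pipeline: PCA on 9 motion analysis
# parameters, a classifier scored by leave-one-out cross validation.
# Synthetic data and a logistic-regression stand-in replace the
# study's 42 trials and its ANFIS classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
X = rng.normal(size=(42, 9))        # 9 MAPs per participant (synthetic)
y = (X[:, :3].mean(axis=1) + rng.normal(0, 0.5, 42) > 0).astype(int)

model = make_pipeline(PCA(n_components=3), LogisticRegression())
proba = cross_val_predict(model, X, y, cv=LeaveOneOut(),
                          method="predict_proba")[:, 1]
print("LOO accuracy:", round(accuracy_score(y, proba > 0.5), 3))
print("AUC:", round(roc_auc_score(y, proba), 3))
```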

Abstract:

We apply diffusion strategies to propose a cooperative reinforcement learning algorithm, in which agents in a network communicate with their neighbors to improve predictions about their environment. The algorithm is suitable for learning off-policy even in large state spaces. We provide a mean-square-error performance analysis under constant step-sizes. The gain of cooperation, in the form of more stability and less bias and variance in the prediction error, is illustrated in the context of a classical model. We show that the improvement in performance is especially significant when the behavior policy of the agents is different from the target policy under evaluation.
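A minimal sketch of a diffusion strategy of the adapt-then-combine type applied to TD(0) value prediction: each agent takes a local temporal-difference step and then averages its parameters with its neighbors. The toy cyclic MDP, one-hot features and combination weights are illustrative assumptions, not the paper's setup.

```python
# Sketch: adapt-then-combine diffusion TD(0) for linear value
# estimation over a network of agents. The toy MDP, features and
# combination matrix are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
n_states, n_agents, gamma, mu = 5, 3, 0.9, 0.05
A = np.array([[.6, .2, .2],          # doubly stochastic combiner:
              [.2, .6, .2],          # row k holds agent k's neighbor
              [.2, .2, .6]])         # weights (including itself)
w = np.zeros((n_agents, n_states))   # one-hot features: w[k, s] = V_k(s)

for _ in range(5000):
    psi = np.empty_like(w)
    for k in range(n_agents):        # adapt: local TD(0) step
        s = rng.integers(n_states)
        s_next = (s + 1) % n_states  # toy cyclic transitions
        r = 1.0 if s_next == 0 else 0.0
        delta = r + gamma * w[k, s_next] - w[k, s]
        psi[k] = w[k]
        psi[k, s] += mu * delta
    w = A @ psi                      # combine with neighbors

print(np.round(w[0], 2))             # agents converge to one estimate
```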

Abstract:

Two different methods of analysis of plate bending, the FEM and the BM, are discussed in this paper. The plate behaviour is assumed to be represented by linear thin plate theory, where the Poisson-Kirchhoff assumption holds. The BM, based on a weighted mean square error technique, produced good results for the plate bending problem. The computational effort demanded by the BM is smaller than that needed in a FEM analysis for the same level of accuracy. The general applicability of the FEM cannot be matched by the BM; in particular, different types of geometry (plates of arbitrary geometry) need a similar but not identical treatment in the BM. However, this loss of generality is counterbalanced by the computational efficiency gained with the BM in reaching the solution.

Abstract:

The requirements for a good stand in a no-till field are the same as those for conventional planting, plus additional field and machinery management. Among the various factors that contribute towards producing a successful maize crop, seed depth placement is a key determinant. Although most no-till planters on the market work well under good soil and residue conditions, adjustments and even modifications are frequently needed when working with compacted or wet soils or with heavy residues. The main objective of this study, carried out in 2010, 2011 and 2012, was to evaluate the vertical distribution and spatial variability of seed depth placement in a maize crop under no-till conditions, using precision farming technologies and conventional no-till seeders. The results obtained indicate that seed depth placement was affected by soil moisture content and forward speed. Seed depth placement was negatively correlated with soil resistance, and seeding depth had a significant impact on mean emergence time and the percentage of emerged plants. Shallow average depth values and high coefficients of variation suggest a need for improvements in controlling the seeders' sowing depth mechanism, or for more accurate calibration by operators in the field.

Abstract:

Reliability is becoming the main concern in integrated circuits as the technology goes below 22nm. Small imperfections in device manufacturing now result in significant random differences in the electrical characteristics of the devices, which must be dealt with during design. The new processes and materials required to fabricate such extremely short devices give rise to new effects that ultimately result in increased static power consumption or higher vulnerability to radiation. SRAMs have become the most vulnerable part of electronic systems, not only because they account for more than half of the chip area of today's SoCs and microprocessors, but also because process variations affect them critically, with the failure of a single cell making the whole memory fail. This thesis addresses the different challenges of SRAM design in the smallest technologies. In a common scenario of increasing variability, issues such as energy consumption, design aware of low-level technology effects and radiation hardening are considered.

First, given the increasing magnitude of device variability in the smallest nodes, as well as the new sources of variability that appear as a consequence of new devices and shortened dimensions, accurate modeling of that variability is crucial. We propose to extend the injectors method, which models variability at circuit level while abstracting its physical sources, with two new injectors that better model the sub-threshold slope and drain-induced barrier lowering (DIBL), both of growing importance in FinFET technology. The two proposed injectors increase the accuracy of figures of merit at different abstraction levels of electronic design: transistor, gate and circuit. The mean square error when estimating performance and stability metrics of SRAM cells is reduced by a factor of at least 1.5 and up to 7.5, while the yield estimation is improved by orders of magnitude.

Low-power design is a major constraint given the fast-growing market of battery-powered mobile devices. It is equally relevant because of the high power densities of today's systems, in order to reduce thermal dissipation and its impact on aging. The traditional approach of reducing the supply voltage to lower the energy consumption is challenging in the case of SRAMs, given the increased impact of process variations at low supply voltages. We propose a cell design that uses a negative bit-line write assist to overcome write failures as the main supply voltage is lowered. Despite using a second power source for the negative bit-line, the design achieves an energy reduction of up to 20% compared to a conventional cell. A new metric, the hold trip point, has been introduced to deal with the new failure modes of cells using a negative bit-line voltage, together with an alternative method to estimate cell speed that requires fewer simulations.

As device sizes continue to shrink, new mechanisms are introduced to ease the fabrication process and to meet the performance targets of each successive node. An example is the compressive or tensile strain applied to the fins in FinFET technology, which alters the mobility of the transistors built from those fins. The effects of these mechanisms are strongly layout dependent: transistors are affected by their neighbors, and different types of transistors are affected in different ways. We propose complementary SRAM cells with pMOS pass-gates, which shorten the fins of the nMOS devices and yield long uncut fins for the pMOS devices, extending into the neighboring cells and up to the limits of the array. Once shallow trench isolation (STI) and SiGe stressors are considered, the proposed design improves both kinds of transistor, boosting the performance of complementary SRAM cells by more than 10% for the same failure probability and static power consumption, with no area overhead.

Finally, while radiation has been a traditional concern in space electronics, the small currents and voltages of the latest nodes are making them vulnerable to radiation-induced transient noise even at ground level. Even though SOI and FinFET technologies reduce the amount of energy transferred from a striking particle to the circuit, the large process variations of the smallest nodes will affect their radiation hardening capabilities. We demonstrate that process variations can increase the radiation-induced error rate by up to 40% in the 7nm node compared to the nominal case. This increase is larger than the improvement achieved by radiation-hardened memory cell designs, suggesting that reducing process variations would bring a greater improvement.
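As a loose illustration of the variability-aware yield analysis the abstract refers to, the sketch below runs a Monte Carlo estimate of a cell failure probability under random threshold-voltage shifts; the surrogate fail criterion and every number in it are invented for the example.

```python
# Hedged illustration only: Monte Carlo estimate of an SRAM cell
# failure probability under random threshold-voltage (Vth) shifts.
# The linear SNM surrogate and all numbers are invented for the sketch
# and do not reproduce the thesis' injector models.
import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000
dvth = rng.normal(0.0, 30e-3, size=(n, 6))   # 30 mV sigma, 6 transistors

# Toy surrogate: static noise margin degrades linearly with the
# mismatch between the two halves of the cell.
snm = 180e-3 - 1.2 * np.abs(dvth[:, :3].sum(1) - dvth[:, 3:].sum(1))
p_fail = np.mean(snm < 50e-3)                # fail if SNM below 50 mV
print(f"estimated cell failure probability: {p_fail:.2e}")
```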

Abstract:

One of the biggest challenges that software developers face is making an accurate estimate of the project effort. In this work, radial basis function neural networks have been applied to software effort estimation using a NASA dataset. This paper evaluates and compares a radial basis function network against a regression model. The results show that the radial basis function neural network obtained a lower Mean Square Error than the regression method.
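A sketch of the comparison, with synthetic size/effort pairs standing in for the NASA dataset and kernel ridge regression with an RBF kernel standing in for the radial basis function network; both models are scored by cross-validated mean square error.

```python
# Sketch: RBF model vs. linear regression for effort estimation,
# scored by mean square error. Synthetic size/effort data stands in
# for the NASA dataset, and kernel ridge regression with an RBF
# kernel stands in for the RBF neural network.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
kloc = rng.uniform(2, 100, 60)[:, None]              # project size (KLOC)
effort = 3.0 * kloc[:, 0] ** 1.3 + rng.normal(0, 20, 60)

for name, model in [("linear", LinearRegression()),
                    ("RBF", KernelRidge(kernel="rbf", gamma=1e-3))]:
    mse = -cross_val_score(model, kloc, effort, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(f"{name:6s} MSE = {mse:.1f}")
```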

Abstract:

Air pollution is a serious threat and a phenomenon with a specific impact on human health; in addition, changes in the chemical composition of the atmosphere can alter the weather and cause acid rain or ozone destruction, phenomena of global importance. The World Health Organization (WHO) considers air pollution one of the most important global priorities. Salamanca, Gto., Mexico has been ranked among the most polluted cities in the country. The industry of the area led to major economic development and rapid population growth in the second half of the twentieth century, with a significant impact on air quality, and significant efforts have been made to measure pollutant concentrations. The main pollution sources are locally based plants in the chemical and power generation sectors. The pollutants registered as concerning are sulphur dioxide (SO2) and particulate matter of about 10 micrometers or less in diameter (PM10). Predicting the concentrations of these pollutants can be a powerful tool for taking preventive measures, such as reducing emissions and alerting the affected population.

In this PhD thesis we propose a model to predict the concentrations of the pollutants SO2 and PM10 for each monitoring booth of the Atmospheric Monitoring Network of Salamanca (REDMAS, for its Spanish acronym). The proposed models use meteorological variables as factors influencing pollutant concentration, and the information used throughout this work consists of real data from the REDMAS. The proposed model combines Artificial Neural Networks (ANN) with clustering algorithms. The ANN used is the Multilayer Perceptron with one hidden layer, with separate structures for the prediction of each pollutant. The meteorological variables used for prediction were wind direction (WD), wind speed (WS), temperature (T) and relative humidity (RH). The clustering algorithms K-means and Fuzzy C-means are used to find relationships between the air pollutants and the meteorological variables under consideration, which are added as inputs of the ANN; those relationships provide the ANN with information for predicting the pollutants. The results of the proposed model are compared with those of a multivariate linear regression and a multilayer perceptron neural network. The predictions are evaluated with the mean absolute error, the root mean square error, the correlation coefficient and the index of agreement.

The results show the importance of the meteorological variables in predicting the concentrations of the pollutants SO2 and PM10 in the city of Salamanca, Gto., Mexico, and that the proposed model performs better than the multivariate linear regression and the multilayer perceptron neural network. The models implemented for each monitoring booth are able to make air quality predictions that can be used in a system for real-time forecasting and human health impact analysis. Among the main results of this thesis: a model based on an artificial neural network combined with clustering algorithms is proposed for predicting, one hour ahead, the concentration of each pollutant (SO2 and PM10), with a different model designed for each pollutant and for each of the three monitoring booths of the REDMAS; and a model is proposed for predicting the average concentration of SO2 and PM10 over the next 24 hours, likewise based on an artificial neural network combined with clustering algorithms and designed separately for each booth of the REDMAS and each pollutant.
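As a rough sketch of the proposed model's structure, the code below augments the meteorological inputs of an MLP with cluster memberships found by K-means (the thesis also uses Fuzzy C-means); the data, network size and cluster count are illustrative assumptions.

```python
# Sketch: augment an MLP's meteorological inputs with K-means cluster
# memberships, as in the clustering-plus-ANN scheme described above.
# Data, cluster count and network settings are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)
n = 500
met = rng.normal(size=(n, 4))                # WD, WS, T, RH (scaled)
so2 = 2.0 * met[:, 1] - 1.5 * met[:, 3] + rng.normal(0, 0.3, n)

cluster_id = KMeans(n_clusters=3, n_init=10).fit_predict(met)
onehot = np.eye(3)[cluster_id]               # cluster membership inputs
X = np.hstack([met, onehot])                 # meteorology + clusters

scaler = StandardScaler().fit(X)
mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                   random_state=0)
mlp.fit(scaler.transform(X), so2)
print("training R^2:", round(mlp.score(scaler.transform(X), so2), 3))
```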