10 results for Associative Algebras With Polynomial Identities
at Universidad Politécnica de Madrid
Abstract:
Grain legumes have a nutritional profile of great interest for pig feeding, mainly due to their high protein content. However, the presence of antinutritional factors (ANF), which differ in quality and quantity depending on the genus, limits the absorption of protein, the most valued nutrient. The aim of this Doctoral Thesis was to study the effect of the main ANF of pea and narbon vetch on productive performance and on carcass and main lean cut yields when they replace soybean, partially or totally, during the starter phase and the fattening period of heavy pigs. To this end, four trials were carried out with barrows of the same genetic line: Duroc x (Landrace x Large White) hybrid. In trial 1, the influence of different dietary levels of protease inhibitors (PI) on the productivity of piglets during the starter phase (40 to 61 days of age) was studied. Three varieties of winter peas were used that contained amounts of PI, of both trypsin (TI) and chymotrypsin (CI) [trypsin inhibited units/mg (TIU), chymotrypsin inhibited units/mg (CIU): 9.87-10.16, 5.75-8.62 and 12.55-15.75 for the Cartouche, Iceberg and Luna peas, respectively], higher than those of soybean meal 47 (SBM) and extruded soybean (SBE) (TIU/mg - CIU/mg: 0.61-3.56 and 2.36-4.65 for SBM and SBE, respectively). The experimental design was randomized, with four dietary treatments that differed in protein source and amount of PI, setting a soybean control feed against three feeds in which winter peas of the indicated varieties partially replaced soybean. Each treatment was replicated four times, the pen of 6 piglets being the experimental unit. Animals fed the Cartouche pea diet had a higher average daily gain (ADG) than the rest (P < 0.001), with the same average daily feed intake (ADFI) and feed conversion ratio (FCR). There were no significant differences between animals on the control feed and those fed the Iceberg and Luna pea feeds. In trial 2, the legume under study was the narbon vetch and its ANF the dipeptide γ-Glutamyl-S-Ethenyl-Cysteine (GEC). The design and experimental period were the same as in trial 1, with four diets varying in the percentage of narbon vetch, 0%, 5%, 15% and 25%, and hence of GEC (1.54% of the grain). Piglets fed the 5% diet had higher ADFI and ADG (P < 0.001), with the same FCR as the animals in the 0% treatment. Productive indices worsened significantly and progressively as the percentage of narbon vetch increased (15 and 25%). Regression equations with polynomial structure were obtained that were significant both for the level of narbon vetch and for the amount of GEC present in the feed. Trial 3 was conducted during the fattening period, completely replacing soybean from 84 days of age with the three varieties of winter peas and observing the effect on productive performance and on carcass and main lean cut yields. The design, in randomized complete blocks, had four treatments according to the pea present in the feed and, therefore, the level of PI: Control-soy, Cartouche, Iceberg and Luna, with 12 replicates of 4 pigs per treatment.
From 84 to 108 days of age, animals fed the Control-soy and Iceberg feeds had the same ADFI and ADG, which worsened in pigs fed Luna and Cartouche (P < 0.05). FCR was equal in the Control-soy and Iceberg treatments, intermediate in Cartouche and worst in pigs on the Luna feed (P < 0.001). From 109 to 127 days of age, ADG and FCR were equal, with a higher ADFI in Control-soy and Iceberg than in pigs fed Cartouche and Luna (P < 0.05). There were no significant differences during finishing (128 to 167 days of age). Overall, ADFI and ADG were higher in pigs fed the Iceberg and Control-soy feeds, worsening equally in those fed Cartouche and Luna (P < 0.05); FCR was the same in all treatments. No differences were observed in the weight and yield of the carcass and main lean cuts (ham, shoulder and loin), in the intramuscular fat content of the loin, or in the proportion of the major fatty acids (C16:0, C18:0, C18:1n-9) in subcutaneous fat. In trial 4, conducted during the fattening period (60 to 171 days of age), the effect of diets with different levels of narbon vetch, and consequently of its antinutritional factor the dipeptide GEC, on productive performance and on carcass and main lean cut quality was assessed. The design was in four randomized complete blocks, with four treatments according to the percentage of narbon vetch included in the feed, 0%, 5%, 15% and 25%, with 12 replicates per treatment and four pigs in each. The 5% treatment improved ADG at the end of the fattening phase (152 days of age) and, together with 0%, showed the most favourable weight and FCR results at the end of the trial (171 days of age). Likewise, carcass weight and yield were higher in pigs fed the 0% and 5% treatments (P < 0.001). Feeds with 15 and 25% narbon vetch worsened the productive results as well as carcass yield and weight. The same occurred with the weight of the main lean cuts (ham, shoulder and loin), significantly higher at 0% and 5% than at 15% and 25%, pigs fed the latter feed being the worst. By contrast, ham and loin yields were higher in pigs of the 25% and 15% treatments than in those fed the 5% and 0% feeds (P < 0.001); shoulder yields were reversed, being higher in the animals of the 0% and 5% treatments (P < 0.001). Polynomial regression equations were obtained to estimate the most productively favourable inclusion levels of narbon vetch and GEC, together with orthogonal contrasts between the treatments.

ABSTRACT

Grain legumes have a nutritional profile of great interest for pig feeding, mainly due to their high protein content. However, the presence of antinutritional factors (ANF), which differ in quality and quantity according to the genus, hinders the absorption of protein, the most valuable nutrient. The aim of this thesis was to study the effect of the main ANF of pea and narbon vetch (NV) on productive performance and on carcass and main lean cut yields when they replace soybean, partially or totally, during the starter phase and the fattening period of heavy pigs. To this end, four trials were carried out with barrows of the same genetic line: Duroc x (Landrace x Large White) hybrid.
In trial 1, the influence of different levels of protease inhibitors (PI) in the diet on the productivity of piglets during the starter phase (40-61 days of age) was studied. Three varieties of winter peas were used, containing amounts of PI, both trypsin (TI) and chymotrypsin (CI) inhibitors [trypsin inhibited units/mg (TIU), chymotrypsin inhibited units/mg (CIU): 9.87-10.16, 5.75-8.62 and 12.55-15.75 for the Cartouche, Iceberg and Luna peas, respectively], higher than in soybean meal 47 (SBM) and extruded soybeans (SBE) (TIU/mg - CIU/mg: 0.61-3.56 and 2.36-4.65 for SBM and SBE, respectively). The design was randomized, with four dietary treatments differing in protein source and amount of PI: a soybean control diet and three diets with different varieties of winter peas (Cartouche, Iceberg and Luna) partially replacing soybean. Each treatment was replicated four times, the pen of 6 piglets being the experimental unit. Pigs fed the Cartouche pea diet had a higher average daily gain (ADG) than the rest (P < 0.001), with the same average daily feed intake (ADFI) and feed conversion ratio (FCR). There were no significant differences between piglets fed the control diet and those fed the Iceberg and Luna diets. In trial 2 the legume under study was NV and its ANF the dipeptide γ-Glutamyl-S-Ethenyl-Cysteine (GEC). The experimental period and the design were the same as in trial 1, with four diets with different percentages of NV, 0%, 5%, 15% and 25%, and hence of GEC (1.52% of the grain). The piglets fed the 5% diet had higher ADG and ADFI (P < 0.05), with the same FCR as pigs in the 0% treatment. Production indices worsened progressively with increasing percentage of NV (15 and 25%). Regression equations with polynomial structure were obtained that were significant for both the NV percentage and the amount of GEC present in the feed. Trial 3 was carried out during the fattening period, completely replacing soy from 84 days of age with the three varieties of winter peas and observing the effect on productive performance and on carcass and main lean cut yields. The design, in randomized complete blocks, had four treatments with different levels of PI: Control-soy, Cartouche, Iceberg and Luna, with 12 replicates of 4 pigs per treatment. From 84 to 108 days of age, pigs fed the Control-soy and Iceberg feeds had the same ADFI and ADG, which worsened in pigs fed Luna and Cartouche (P < 0.05). FCR was similar in the Control-soy and Iceberg diets, intermediate in Cartouche and worst in pigs fed Luna (P < 0.001). From 109 to 127 days of age, ADG and FCR were equal, with a higher ADFI in pigs fed Control-soy and Iceberg than in pigs fed Cartouche and Luna (P < 0.05). There were no differences in the finishing phase (128-167 days of age). Over the whole period, ADFI and ADG were higher in pigs that ate Control-soy and Iceberg, and worse in those that ate Cartouche and Luna; FCR was the same in all treatments. No significant differences were observed in the weight and yield of the carcass and main lean cuts (ham, shoulder and loin), in the intramuscular fat content of the loin, or in the proportion of the major fatty acids (C16:0, C18:0, C18:1n-9) in subcutaneous fat. In trial 4, carried out during the fattening period (60-171 days of age), the effect of diets with different levels of NV, and consequently of GEC, on performance and on carcass and main lean cut quality was assessed.
The design was completely randomized, with four dietary treatments differing in the percentage of NV, 0%, 5%, 15% and 25%, with 12 replicates per treatment and four pigs each. The 5% treatment improved ADG at the end of the fattening phase (152 days of age) and, together with 0%, showed the most favourable body weight and FCR at the end of the trial (171 days of age). Similarly, carcass weight and yield were higher for pigs fed the 0% and 5% diets (P < 0.05). Diets with 15 and 25% NV worsened the productive and carcass results. The weight of the main lean cuts (ham, shoulder and loin) was significantly higher at 0% and 5% than at 15% and 25%, the 25% diet being the worst of all. By contrast, ham and loin yields were higher in pigs fed the 25% and 15% diets than in those fed the 5% and 0% diets (P < 0.001); shoulder yields were reversed, being greater in pigs fed the 0% and 5% diets (P < 0.001). Polynomial regression equations were obtained to estimate the most favourable percentages of NV and GEC from the productive point of view, together with orthogonal contrasts between the treatments.
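The polynomial response curves mentioned above lend themselves to a short script. The following sketch is purely illustrative (the inclusion levels are those of the trials, but the ADG figures are invented placeholders, not the thesis data): it fits a quadratic to average daily gain against narbon vetch inclusion and reads off the vertex as the estimated most favourable level.

```python
import numpy as np

# Narbon vetch inclusion levels used in the trials (% of the diet).
nv_level = np.array([0.0, 5.0, 15.0, 25.0])

# Hypothetical ADG responses in g/day (placeholders, NOT the thesis data).
adg = np.array([620.0, 655.0, 570.0, 480.0])

# Second-degree polynomial, mirroring the regression structure reported
# in trials 2 and 4: ADG = a*NV^2 + b*NV + c.
a, b, c = np.polyfit(nv_level, adg, deg=2)

# The vertex of the parabola estimates the most favourable inclusion level
# (meaningful only while the fitted curve is concave, i.e. a < 0).
optimum = -b / (2.0 * a)
print(f"ADG = {a:.3f}*NV^2 + {b:.3f}*NV + {c:.1f}")
print(f"Estimated optimum inclusion: {optimum:.1f}% narbon vetch")
```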
Abstract:
This paper presents some ideas about a new neural network architecture that can be compared to a Taylor analysis when dealing with patterns. The architecture is based on linear activation functions and axo-axonic connections. A biological axo-axonic connection between two neurons is one in which the weight of the connection is given by the output of a third neuron. This idea can be implemented in the so-called Enhanced Neural Networks, in which two Multilayer Perceptrons are used: the first one outputs the weights that the second MLP uses to compute the desired output. This kind of neural network has universal approximation properties even with linear activation functions. There is a clear difference between cooperative and competitive strategies. The former are based on swarm colonies, in which all individuals share their knowledge about the goal and pass this information to other individuals in order to reach the optimum solution. The latter are based on genetic models, that is, individuals can die and new individuals are created by combining information from living ones, or on molecular/cellular behaviour, passing information from one structure to another. A swarm-based model is applied to obtain the neural network, training the net with a Particle Swarm algorithm.
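As a minimal sketch of the axo-axonic idea (a hypothetical NumPy illustration, not the paper's implementation): a linear "weight network" maps the input to the weight matrix that a second, linear network then applies to the same input, so the effective mapping depends on the input even though every activation is linear.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (arbitrary choices for the illustration).
n_in, n_out = 3, 2

# First MLP ("weight network"): a linear layer mapping the input x to the
# flattened weight matrix used by the second MLP.
W1 = rng.normal(size=(n_in, n_in * n_out))
b1 = rng.normal(size=n_in * n_out)

def enhanced_net(x):
    """Axo-axonic forward pass: the weights of net 2 are the output of net 1."""
    w_flat = x @ W1 + b1              # linear activation throughout
    W2 = w_flat.reshape(n_in, n_out)  # weights of the second (output) MLP
    return x @ W2                     # overall output is quadratic in x

print(enhanced_net(np.array([0.5, -1.0, 2.0])))
```

Because the second network's weights are themselves linear in x, the composite output contains products of input components; stacking such layers yields higher-order polynomial terms, which is what makes the Taylor-series comparison possible despite the linear activations.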
Abstract:
Social behavior is mainly based on swarm colonies, in which each individual shares its knowledge about the environment with the other individuals in order to reach optimal solutions. Such a cooperative model differs from competitive models, in which individuals die and are born by combining information from living ones. This paper presents a particle swarm optimization with differential evolution algorithm used to train a neural network instead of the classic backpropagation algorithm. The performance of a neural network on a particular problem is critically dependent on the choice of processing elements, the net architecture and the learning algorithm. This work is focused on the development of methods for the evolutionary design of artificial neural networks; in particular, it focuses on optimizing the topology and connectivity structure of these networks.
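A minimal sketch of the training idea (a plain PSO over the weight vector of a tiny network; the paper's hybrid adds differential evolution on top, which is omitted here, and all sizes and coefficients are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression task: approximate y = sin(x) with a 1-4-1 tanh network.
X = np.linspace(-3, 3, 40).reshape(-1, 1)
Y = np.sin(X)

def loss(theta):
    W1, b1 = theta[:4].reshape(1, 4), theta[4:8]
    W2, b2 = theta[8:12].reshape(4, 1), theta[12:13]
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return float(np.mean((pred - Y) ** 2))

# Standard PSO over the 13-dimensional weight vector; no gradients needed.
n, dim, w, c1, c2 = 30, 13, 0.7, 1.5, 1.5
pos = rng.normal(size=(n, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(300):
    r1, r2 = rng.random((2, n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([loss(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("final MSE:", loss(gbest))
```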
Abstract:
Let D be a link diagram with n crossings, sA and sB its extreme states, and |sAD| (respectively, |sBD|) the number of simple closed curves that appear when smoothing D according to sA (respectively, sB). We give a general formula for the sum |sAD| + |sBD| for a k-almost alternating diagram D, for any k, characterizing this sum as the number of faces in an appropriate triangulation of an appropriate surface with boundary. When D is dealternator connected, the triangulation is especially simple, yielding |sAD| + |sBD| = n + 2 - 2k. This gives a simple geometric proof of the upper bound of the span of the Jones polynomial for dealternator connected diagrams, a result first obtained by Zhu [On Kauffman brackets, J. Knot Theory Ramifications 6(1) (1997) 125-148]. Another upper bound of the span of the Jones polynomial for dealternator connected and dealternator reduced diagrams, discovered historically first by Adams et al. [Almost alternating links, Topology Appl. 46(2) (1992) 151-165], is obtained as a corollary. As a new application, we prove that the Turaev genus equals the number k of dealternator crossings for any dealternator connected diagram.
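For context, the face count plugs directly into Kauffman's classical span estimate (standard facts, sketched here independently of the paper's own proof):

```latex
% For a connected diagram D with n crossings, Kauffman's estimate gives
%   span <D> <= 2n + 2(|s_A D| + |s_B D|) - 4,
% and span V_L = (span <D>)/4 because of the substitution t = A^{-4}.
% Substituting the dealternator-connected count |s_A D| + |s_B D| = n + 2 - 2k:
\[
  \operatorname{span} V_L \;\le\; \frac{2n + 2\,(n + 2 - 2k) - 4}{4} \;=\; n - k,
\]
% which recovers Zhu's upper bound for dealternator connected
% k-almost alternating diagrams.
```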
Abstract:
A new formalism, called Hiord, for defining type-free higher-order logic programming languages with predicate abstraction is introduced. A model theory, based on partial combinatory algebras, is presented, with respect to which the formalism is shown to be sound. A programming language built on a subset of Hiord and its implementation are discussed. A new proposal for defining modules in this framework is considered, along with several examples.
Abstract:
The set agreement problem states that, of n proposed values, at most n-1 can be decided. Traditionally, this problem is solved using a failure detector in asynchronous systems where processes may crash but do not recover, where processes have distinct identities, and where all processes initially know the membership. In this paper we study the set agreement problem and the weakest failure detector L used to solve it in asynchronous message-passing systems where processes may crash and recover, with homonyms (i.e., processes may have equal identities) and without complete initial knowledge of the membership.
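For reference, the properties an (n-1)-set agreement protocol must satisfy are the standard textbook ones (not specific to this paper):

```latex
% k-set agreement with k = n-1: every process proposes a value and may
% decide a value, subject to
\begin{description}
  \item[Termination:] every correct process eventually decides some value.
  \item[Validity:] every decided value was proposed by some process.
  \item[$k$-Agreement:] at most $k = n-1$ distinct values are decided.
\end{description}
% Consensus is the special case k = 1; set agreement relaxes it just
% enough to become solvable with weaker failure detectors such as L.
```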
Abstract:
The first-level data cache in modern processors has become a major consumer of energy due to its increasing size and high-frequency access rate. In order to reduce this high energy consumption, we propose in this paper a straightforward filtering technique based on a highly accurate forwarding predictor. Specifically, a simple structure predicts whether a load instruction will obtain its corresponding data via forwarding from the load-store structure, thus avoiding the data cache access, or whether it will be provided by the data cache. This mechanism reduces the data cache energy consumption by an average of 21.5% with a negligible performance penalty of less than 0.1%. Furthermore, in this paper we also address the static energy consumption of the cache by disabling a portion of the sets of the L2 associative cache. Overall, when merging both proposals, the combined L1 and L2 total energy consumption is reduced by an average of 29.2% with a performance penalty of just 0.25%. Keywords: energy consumption; filtering; forwarding predictor; cache hierarchy.
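The abstract does not spell out the predictor's organization; the sketch below is one plausible minimal rendering (table size and update policy are assumptions): a table of 2-bit saturating counters indexed by the load's PC predicts whether the value will come from the store queue, and the L1 access is gated on that prediction.

```python
# Hypothetical forwarding predictor: 2-bit saturating counters indexed by
# the low bits of the load's PC. A "forwarded" prediction gates off the
# L1 data-cache access, which is where the energy saving comes from.
TABLE_BITS = 10
MASK = (1 << TABLE_BITS) - 1
counters = [1] * (1 << TABLE_BITS)  # initialized weakly "not forwarded"

def predict_forwarded(pc: int) -> bool:
    return counters[pc & MASK] >= 2

def update(pc: int, was_forwarded: bool) -> None:
    i = pc & MASK
    counters[i] = min(3, counters[i] + 1) if was_forwarded else max(0, counters[i] - 1)

def execute_load(pc: int, store_queue_hit: bool) -> str:
    """Return the structure that supplies the load's data. A misprediction
    simply falls back to the data cache, losing the saved access."""
    if predict_forwarded(pc) and store_queue_hit:
        source = "store queue (L1 access filtered)"
    else:
        source = "L1 data cache"
    update(pc, store_queue_hit)  # train on the actual forwarding outcome
    return source
```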
Abstract:
Tissue P systems generalize the membrane structure tree, usual in the original models of P systems, to an arbitrary graph. The basic operations in these systems are communication rules, enriched in some variants with cell division or cell separation. Several variants of tissue P systems were recently studied, together with the concept of uniform families of these systems. Their computational power was shown to range between P and NP ∪ co-NP, thus characterizing some interesting borderlines between tractability and intractability. In this paper we show that the computational power of these uniform families in polynomial time is limited by the class PSPACE. This class characterizes the power of many classical parallel computing models.
Abstract:
Purpose: Concentrating Solar Power (CSP) plants based on parabolic troughs utilize auxiliary fuels (usually natural gas) to facilitate start-up operations, avoid freezing of the HTF and increase power output. This practice has a significant effect on the environmental performance of the technology. The aim of this paper is to quantify the sustainability of CSP and to analyse how it is affected by hybridisation with different natural gas (NG) inputs.
Methods: A complete Life Cycle (LC) inventory was gathered for a commercial wet-cooled 50 MWe CSP plant based on parabolic troughs. A sensitivity analysis was conducted to evaluate the environmental performance of the plant operating with different NG inputs (between 0 and 35% of gross electricity generation). ReCiPe Europe (H) was used as the LCA methodology. CML 2 baseline 2000 World and ReCiPe Europe E were used for comparative purposes. Cumulative Energy Demand (CED) and Energy Payback Time (EPT) were also determined for each scenario.
Results and discussion: Operation of CSP using solar energy only produced the following environmental profile: climate change 26.6 kg CO2 eq/MWh, human toxicity 13.1 kg 1,4-DB eq/MWh, marine ecotoxicity 276 g 1,4-DB eq/MWh, natural land transformation 0.005 m2/MWh, eutrophication 10.1 g P eq/MWh, acidification 166 g SO2 eq/MWh. Most of these impacts are associated with the extraction of raw materials and the manufacturing of plant components. The utilization of NG transformed the environmental profile of the technology, placing increasing weight on impacts related to its operation and maintenance. Significantly higher impacts were observed in categories like climate change (311 kg CO2 eq/MWh when using 35% NG), natural land transformation, terrestrial acidification and fossil depletion. Despite its fossil nature, the use of NG had a beneficial effect on other impact categories (human and marine toxicity, freshwater eutrophication and natural land transformation) due to the higher electricity output achieved. The overall environmental performance of CSP deteriorated significantly with the use of NG (single score 3.52 pt in solar-only operation compared to 36.1 pt when using 35% NG). Other sustainability parameters like EPT and CED also increased substantially as a result of higher NG inputs. Quasilinear second-degree polynomial relationships were calculated between various environmental performance parameters and NG contributions.
Conclusions: The energy input from auxiliary NG determines the environmental profile of the CSP plant. Aggregated analysis shows a deleterious effect on the overall environmental performance of the technology as a result of NG utilization. This is due primarily to higher impacts in environmental categories like climate change, natural land transformation, fossil fuel depletion and terrestrial acidification. NG may be used in a more sustainable and cost-effective manner in combined-cycle power plants, which achieve higher energy conversion efficiencies.
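A sketch of the kind of sensitivity fit reported above (the 0% and 35% climate-change figures come from the abstract; the intermediate point is an invented placeholder needed only to make the quadratic fit runnable):

```python
import numpy as np

# NG share of gross generation vs. climate change impact (kg CO2 eq/MWh).
# Endpoints from the abstract; the midpoint is a made-up placeholder.
ng_share = np.array([0.0, 0.175, 0.35])
co2 = np.array([26.6, 165.0, 311.0])

# Second-degree polynomial, as used in the paper's sensitivity analysis.
a, b, c = np.polyfit(ng_share, co2, deg=2)
print(f"CO2(s) = {a:.1f}*s^2 + {b:.1f}*s + {c:.1f}  [kg CO2 eq/MWh]")

# Interpolate an intermediate hybridisation scenario, e.g. 10% NG input.
s = 0.10
print(f"Estimated impact at 10% NG: {a*s**2 + b*s + c:.0f} kg CO2 eq/MWh")
```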
Abstract:
The use of fixed-point arithmetic is a widespread design choice in systems with tight area, power or performance constraints. To produce implementations where costs are minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths must be carried out. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which designers devote between 25 and 50% of the design cycle. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, since it compensates for the lower clock frequencies and less efficient hardware utilization of these platforms with respect to ASICs. As FPGAs become popular for scientific computing, designs grow in size and complexity to the point where they can no longer be handled efficiently by current signal and quantization noise modelling and word-length optimization techniques. In this Doctoral Thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on interval extensions have made it possible to obtain very accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach one step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a modern technique based on statistical Modified Affine Arithmetic (MAA) in order to model systems containing control-flow structures. Our methodology generates the different execution paths automatically, determines the regions of the input domain that will exercise each of them, and extracts the statistical moments of the system from these partial solutions. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in certain use cases with non-linear operators deviates by as little as 0.04% from the reference values obtained by simulation. A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the systems under study grows, which leads to scalability problems. To address this problem we present a clustered noise injection technique that groups the signals of the system, introduces the noise sources for each group separately, and finally combines the results of each of them. In this way, the number of noise sources is kept under control at all times and, as a result, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, with the goal of keeping the results as accurate as possible. This Doctoral Thesis also addresses the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times.
To this end we present two new techniques that explore the reduction of execution time from different angles. First, the interpolative method applies a simple but accurate interpolator to estimate the sensitivity of each signal, which is then used during the optimization stage. Second, the incremental method revolves around the fact that, although it is strictly necessary to maintain a given confidence interval for the final results of our search, we can employ more relaxed confidence levels, which translates into fewer samples per simulation, in the initial stages of the search, when we are still far from the optimized solutions. With these two approaches we show that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small/medium-sized problems. Finally, this book presents HOPLITE, an automated, flexible and modular quantization framework that includes the implementation of the above techniques and is publicly available. Its goal is to offer developers and researchers a common environment to easily prototype and verify new quantization methodologies. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its operation. We also show, through a simple example, how to connect new extensions to the tool using the existing interfaces in order to expand and improve the capabilities of HOPLITE.

ABSTRACT

Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of the hardware platform with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and present new methodologies for each of them. The techniques based on extensions of intervals have allowed obtaining accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system's statistical moments from the partial results.
We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures. We show the good accuracy of our approach, which in some case studies with non-linear operators shows a deviation of only 0.04% with respect to the simulation-based reference values. A known drawback of the techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted systems grows, which leads to scalability problems. To address this issue we present a clustered noise injection technique that groups the signals in the system, introduces the noise terms in each group independently and then combines the results at the end. In this way, the number of noise sources in the system at a given time is kept under control and, because of this, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. This Ph.D. Thesis also covers the development of methodologies for word-length optimization based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that approach the reduction of execution time in two different ways. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a certain confidence level in the simulations for the final results of the optimization process, we can use more relaxed levels, which in turn implies a considerably smaller number of samples, in the initial stages of the process, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy techniques can be accelerated by factors of up to ×240 for small/medium-sized problems. Finally, this book introduces HOPLITE, an automated, flexible and modular framework for quantization that includes the implementation of the previous techniques and is provided for public access. The aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API and give a step-by-step demonstration of its execution. We also show, through an example, the way new extensions should be connected to the existing interfaces of the flow in order to expand and improve the capabilities of HOPLITE.
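As an illustration of the incremental idea (a toy sketch with invented parameters and a three-input datapath, not HOPLITE's implementation): a greedy word-length search that starts with a relaxed Monte-Carlo sample budget and tightens it as the candidate solution approaches the final one.

```python
import random

def quantize(value: float, frac_bits: int) -> float:
    """Round a real value to a fixed-point grid with frac_bits fractional bits."""
    step = 2.0 ** -frac_bits
    return round(value / step) * step

def mc_error(frac_bits: int, n_samples: int) -> float:
    """Monte-Carlo estimate of the mean quantization error of a toy
    datapath y = x1*x2 + x3, with inputs uniform in [-1, 1]."""
    total = 0.0
    for _ in range(n_samples):
        x = [random.uniform(-1.0, 1.0) for _ in range(3)]
        exact = x[0] * x[1] + x[2]
        approx = quantize(x[0], frac_bits) * quantize(x[1], frac_bits) \
                 + quantize(x[2], frac_bits)
        total += abs(exact - approx)
    return total / n_samples

MAX_ERROR = 1e-3                 # accuracy specification of the design
bits, samples = 24, 200          # start wide, with a loose sample budget
while bits > 1:
    if mc_error(bits - 1, samples) > MAX_ERROR:
        break                    # the next reduction would violate the spec
    bits -= 1
    samples = min(20000, samples * 2)  # tighten confidence as we converge

print("chosen fractional word-length:", bits, "bits")
print("verified error:", mc_error(bits, 20000))
```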