952 results for Gegenbauer's Polynomial


Relevance:

10.00%

Publisher:

Abstract:

Probabilistic graphical models are a major research field in artificial intelligence. The scope of this work is the study of directed graphical models for the representation of discrete distributions. Two of the main research topics in this area are performing inference over graphical models and learning graphical models from data. Traditionally, the inference process and the learning process have been treated separately, but since the structure of the learned model determines the inference complexity, such strategies sometimes produce very inefficient models. With the purpose of learning more compact models, in this master's thesis we propose a new model for the representation of network polynomials, which we call polynomial trees. Polynomial trees are a complementary representation for Bayesian networks that allows an efficient evaluation of the inference complexity and provides a framework for exact inference. We also propose a set of methods for the incremental compilation of polynomial trees and an algorithm for learning polynomial trees from data using a greedy score+search method that includes the inference complexity as a penalization in the scoring function.
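The greedy score+search idea with a complexity penalization can be sketched as follows. This is a minimal illustration, not the thesis's polynomial-tree algorithm: it learns parent sets for binary variables with a penalized log-likelihood score in which the penalty grows with the size of the conditional probability tables, a crude stand-in for inference complexity. All function names here are hypothetical.

```python
import math

def family_score(child, parents, data):
    # Penalized log-likelihood of one binary node given its parents.
    counts, totals = {}, {}
    for row in data:
        cfg = tuple(row[p] for p in parents)
        counts[(cfg, row[child])] = counts.get((cfg, row[child]), 0) + 1
        totals[cfg] = totals.get(cfg, 0) + 1
    ll = sum(n * math.log(n / totals[cfg]) for (cfg, _), n in counts.items())
    # Penalty grows with the conditional table size (2**|parents| rows),
    # a crude proxy for the inference-complexity penalization.
    return ll - 0.5 * (2 ** len(parents)) * math.log(len(data))

def _reachable(src, dst, parents):
    # True if dst can be reached from src along parent -> child edges.
    children = {v: [c for c, ps in parents.items() if v in ps] for v in parents}
    stack, seen = [src], set()
    while stack:
        v = stack.pop()
        if v == dst:
            return True
        if v not in seen:
            seen.add(v)
            stack.extend(children[v])
    return False

def greedy_structure(n_vars, data):
    # Greedy score+search: repeatedly add the acyclic edge with the
    # largest positive score gain until no addition improves the score.
    parents = {v: set() for v in range(n_vars)}
    while True:
        best = None
        for child in range(n_vars):
            base = family_score(child, sorted(parents[child]), data)
            for cand in range(n_vars):
                if cand == child or cand in parents[child]:
                    continue
                if _reachable(child, cand, parents):
                    continue  # edge cand -> child would close a cycle
                gain = family_score(child, sorted(parents[child] | {cand}), data) - base
                if gain > 0 and (best is None or gain > best[0]):
                    best = (gain, child, cand)
        if best is None:
            return parents
        parents[best[1]].add(best[2])
```

With two perfectly correlated binary variables, the search adds exactly one edge between them, since a second edge would either close a cycle or fail to improve the penalized score.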


The study examined the effect of xylanase supplementation on apparent metabolizable energy (AME) and hepatic vitamin E and carotenoids in broiler chickens fed wheat-based diets. A total of 144 male Ross 308 chickens were used. Birds were randomly assigned to 3 dietary treatments (8 cages per treatment, 6 male broilers each) for 14 days, from 7 to 21 days of age. The control diet was based on wheat-soyabean meal and was either unsupplemented or supplemented with 1000 or 2000 xylanase units per kg of diet. Orthogonal polynomial contrasts were used to test the linear response to dietary xylanase activity. There was a positive linear relationship (P < 0.05) between dietary AME and the dose of supplementary xylanase. A linear relationship (P < 0.05) was also observed between xylanase dose and hepatic vitamin E concentration and retention. In conclusion, xylanase supplementation improved dietary AME and increased hepatic vitamin E concentration, which may have positive effects on the antioxidative status of the birds.
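Orthogonal polynomial contrasts for three equally spaced, equally replicated dose groups (here 0, 1000, and 2000 xylanase units/kg) use standard coefficient sets. A minimal sketch; the coefficients are the textbook values for k = 3 groups, not taken from the paper:

```python
# Textbook orthogonal polynomial contrast coefficients for k = 3
# equally spaced, equally replicated dose groups.
LINEAR = (-1, 0, 1)
QUADRATIC = (1, -2, 1)

def contrast_estimate(coeffs, group_means):
    # The contrast value: zero when the trend it tests is absent.
    return sum(c * m for c, m in zip(coeffs, group_means))

def contrast_ss(coeffs, group_means, n_per_group):
    # Single-degree-of-freedom sum of squares for the contrast; compared
    # against the error mean square, it gives the F test for the trend.
    est = contrast_estimate(coeffs, group_means)
    return n_per_group * est ** 2 / sum(c * c for c in coeffs)
```

For group means lying exactly on a line, the linear contrast is nonzero and the quadratic contrast vanishes, which is the partition of the dose effect the abstract's trend test relies on.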


Theoretical computer science is a fundamental discipline, since most advances in computing rest on solid results from that field. In recent years, owing both to the increase in computing power and to the approaching physical limits of miniaturization of electronic components, interest has revived in formal models of computation that are alternatives to the classical von Neumann architecture. Many of these models are inspired by the way nature efficiently solves very complex problems. Most are computationally complete and intrinsically parallel, and for this reason they are coming to be regarded as new paradigms of computation (natural computing). We therefore have a range of abstract architectures as powerful as conventional computers and sometimes more efficient: some of them improve the performance, at least in time, on NP-complete problems by providing non-exponential costs. The formal representation of networks of evolutionary processors requires both context-free and context-dependent constructions; in other words, a complete formal representation of a NEP in general involves both syntactic and semantic restrictions, meaning that many apparently (syntactically) correct representations of particular instances of these devices would make no sense because they might violate other semantic restrictions. Applying semantic grammatical evolution to NEPs involves choosing a subset of them within which to search for those that solve a specific problem. This work studies a model inspired by cell biology called networks of evolutionary processors [55, 53], that is, networks whose nodes are very simple processors capable of performing only one kind of point mutation (insertion, deletion, or substitution of a symbol).
These nodes are associated with a filter defined by some random-context or membership condition. Networks with at most six nodes, with filters defined by membership in regular languages, are able to generate all recursively enumerable languages regardless of the underlying graph. This result is not surprising, since similar results have been documented in the literature. If one considers networks with nodes and filters defined by random contexts (which seem closer to biological implementations), then more complex languages, such as non-context-free languages, can be generated. Nevertheless, these very simple mechanisms are able to solve complex problems in polynomial time; a linear solution has been presented for an NP-complete problem, the 3-colorability problem. As a first significant contribution, we have proposed a new dynamics for networks of evolutionary processors with non-deterministic, massively parallel behavior [55], so that all the research work in the area of networks of processors can be transferred to massively parallel networks. For example, massively parallel networks can be modified according to certain rules so as to move the filters onto the connections. Each connection is viewed as a bidirectional channel in which the input and output filters coincide. Even so, these networks are computationally complete. Other kinds of rules can also be implemented to extend this computational model: the point mutations associated with each node are replaced by the splicing operation. This new type of processor is called a splicing processor. This computational model, the accepting network of splicing processors (ANSP), is in some ways similar to test-tube distributed systems based on splicing.
In addition, a new model has been defined [56], networks of evolutionary processors with filters on the connections, in which the processors have only rules and the filters have been moved to the connections. Under certain conditions this model is equivalent to classical networks of evolutionary processors; without those restrictions, the proposed model is a superset of classical NEPs. The main advantage of moving the filters to the connections lies in the simplicity of the modelling. Another contribution of this work has been the design of a Java simulator [54, 52] for the networks of evolutionary processors proposed in this thesis. Regarding the term "evolutionary processor" used in this thesis, the computational process described here is not exactly an evolutionary process in the Darwinian sense; however, the rewriting operations considered can be interpreted as mutations, and the filtering processes can be seen as selection processes. Furthermore, this work does not cover the possible biological implementation of these networks, despite its great importance. Throughout this thesis, the complexity measure adopted for ANSPs is one we denote size (the number of nodes of the underlying graph). It has been shown that any recursively enumerable language L can be accepted by an ANSP in which the number of processors is linearly bounded by the cardinality of the tape alphabet of a Turing machine recognizing L. Following the concept of universal ANSPs introduced by Manea [65], it has been proved that an ANSP with a fixed graph structure can accept any recursively enumerable language.
An ANSP can be regarded as a problem-solving device with another property that is relevant from a practical point of view: a universal ANSP can be defined as a subnetwork in which only a limited number of parameters depends on the language. This feature can be interpreted as a method for solving any NP problem in polynomial time using an ANSP of constant size, namely thirty-one. This means that the solution of any NP problem is uniform in the sense that the network, apart from the universal subnetwork, can be viewed as a program; to adapt it to the problem instance to be solved, one chooses the filters and rules that do not belong to the universal subnetwork. An interesting open problem, from our point of view, is how to choose the optimal size of this network.
---ABSTRACT---
This thesis deals with recent research in the area of natural computing (bio-inspired models), more precisely networks of evolutionary processors, first developed by Victor Mitrana and based on the P systems of Gheorghe Păun. These models consist of a set of processors connected in an underlying undirected graph; each processor holds a multiset of objects (strings) and a set of rules, named evolution rules, that transform the objects inside the processor [55, 53]. These objects can be sent and received over the graph connections provided they satisfy the constraints defined by the input and output filters the processors have. This symbolic model, which is non-deterministic (processors are not synchronized) and massively parallel [55] (all rules can be applied in one computational step), has some important properties regarding the solution of NP problems in linear time and, of course, with linear resources. There are a great number of variants, such as hybrid networks and splicing processors, that give the model a computational power equivalent to Turing machines.
The origin of networks of evolutionary processors (NEPs for short) is a basic architecture for parallel and distributed symbolic processing, related to the Connection Machine as well as to the Logic Flow paradigm. It consists of several processors, each placed in a node of a virtual complete graph, which are able to handle data associated with the respective node. All the nodes send their data simultaneously, and the receiving nodes likewise handle all the arriving messages simultaneously, according to some strategy. In a series of papers, each node is viewed as a cell with genetic information encoded in DNA sequences, which may evolve by local evolutionary events, that is, point mutations. Each node is specialized for just one of these evolutionary operations. Furthermore, the data in each node are organized as multisets of words (each word appears in an arbitrarily large number of copies), and all the copies are processed in parallel such that all the possible events that can take place do actually take place. Obviously, the computational process just described is not exactly an evolutionary process in the Darwinian sense, but the rewriting operations we have considered might be interpreted as mutations and the filtering process might be viewed as selection. Recombination is missing, but it has been asserted that evolutionary and functional relationships between genes can be captured by taking only local mutations into consideration. The filters associated with each node allow strong control of the computation: every node has an input and an output filter, and two nodes can exchange data only if the data pass the output filter of the sender and the input filter of the receiver. Moreover, if some data are sent out by a node and cannot enter any node, they are lost. In this work we simplify the ANSP model by moving the filters from the nodes to the edges.
Each edge is viewed as a two-way channel, so the input and output filters coincide. Clearly, the possibility of controlling the computation in such networks seems diminished; for instance, there is no possibility of losing data during the communication steps. In spite of this, and of the fact that splicing is not a powerful operation by itself (recall that splicing systems generate only regular languages), we prove here that these devices are computationally complete. As a consequence, we propose characterizations of two complexity classes, namely NP and PSPACE, in terms of accepting networks of restricted splicing processors with filtered connections. We propose a uniform linear-time solution to SAT based on ANSPFCs with linearly bounded resources. This solution should be understood correctly: we do not solve SAT in linear time and space. Since any word and auxiliary word appears in an arbitrarily large number of copies, one can generate in linear time, by parallelism and communication, an exponential number of words, each of them having an exponential number of copies. However, this does not seem to be a major drawback, since by PCR (polymerase chain reaction) one can generate an exponential number of identical DNA molecules in a linear number of reactions. It is worth mentioning that the ANSPFC constructed above remains unchanged for any instance with the same number of variables. Therefore, the solution is uniform in the sense that the network, excepting the input and output nodes, may be viewed as a program: according to the number of variables, we choose the filters, the splicing words, and the rules; then we assign all possible values to the variables and compute the formula. We proved that ANSPs are computationally complete. Do ANSPFCs remain computationally complete? If not, what other problems can be efficiently solved by these ANSPFCs?
Moreover, the complexity class NP is exactly the class of all languages decided by ANSPs in polynomial time. Can NP be characterized in a similar way with ANSPFCs?
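The point-mutation processors and filters described above can be illustrated with a toy sketch (not the thesis's Java simulator). A node applies one kind of point mutation to every copy of every word in parallel, and communication filters are modelled here as membership in a regular language via Python's `re`:

```python
import re

def mutate(word, op, a, b=None):
    # One evolutionary step: the set of all results of applying a single
    # point mutation (substitution, insertion, or deletion) to the word.
    results = set()
    if op == "sub":        # substitute one occurrence of a by b
        for i, ch in enumerate(word):
            if ch == a:
                results.add(word[:i] + b + word[i + 1:])
    elif op == "ins":      # insert symbol a at any position
        for i in range(len(word) + 1):
            results.add(word[:i] + a + word[i:])
    elif op == "del":      # delete one occurrence of a
        for i, ch in enumerate(word):
            if ch == a:
                results.add(word[:i] + word[i + 1:])
    return results or {word}

def communicate(words, out_filter, in_filter):
    # A word migrates only if it passes the sender's output filter and
    # the receiver's input filter (membership in regular languages).
    return {w for w in words
            if re.fullmatch(out_filter, w) and re.fullmatch(in_filter, w)}
```

Iterating `mutate` over a multiset of words and interleaving `communicate` steps mimics the alternation of evolutionary and communication steps of a NEP; in the filters-on-connections variant, the two filter arguments collapse into one per edge.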


We present ground-penetrating radar (GPR)-based volume calculations, with associated error estimates, for eight glaciers on Wedel Jarlsberg Land, southwestern Spitsbergen, Svalbard, and compare them with those obtained from volume-area scaling relationships. The volume estimates are based upon GPR ice-thickness data collected during the period 2004-2013. The total area and volume of the ensemble are 502.91 ± 18.60 km2 and 91.91 ± 3.12 km3, respectively. The individual areas, volumes, and average ice thickness lie within 0.37-140.99 km2, 0.01-31.98 km3, and 28-227 m, respectively, with a maximum recorded ice thickness of 619 ± 13 m on Austre Torellbreen. To estimate the ice volume of unsurveyed tributary glaciers, we combine polynomial cross-sections with a function providing the best fit to the measured ice thickness along the center line of a collection of 22 surveyed tributaries. For the time-to-depth conversion of GPR data, we test the use of a glacierwide constant radio-wave velocity chosen on the basis of local or regional common midpoint measurements, versus the use of distinct velocities for the firn, cold ice, and temperate ice layers, concluding that the corresponding volume calculations agree with each other within their error bounds.
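A volume-area scaling relationship of the kind the GPR-derived volumes are compared against has the form V = c·A^γ, applied glacier by glacier. A small sketch, where the constants c and γ are commonly quoted literature values for mountain glaciers and not the ones used in the paper:

```python
def volume_area_scaling(area_km2, c=0.0433, gamma=1.36):
    # V = c * A**gamma, with A in km2 and V in km3. The default constants
    # are commonly quoted literature values, not fitted in this study.
    return c * area_km2 ** gamma

def ensemble_volume(areas_km2, c=0.0433, gamma=1.36):
    # Scaling must be applied glacier by glacier, not to the summed area,
    # because the relation is nonlinear (gamma > 1): splitting one glacier
    # into two of half the area yields a smaller total volume.
    return sum(volume_area_scaling(a, c, gamma) for a in areas_km2)
```

The per-glacier application is the reason the paper can meaningfully compare scaling estimates against individually surveyed GPR volumes for each of the eight glaciers.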


Many computer vision and human-computer interaction applications developed in recent years require the evaluation of complex and continuous mathematical functions as an essential step toward proper operation. However, rigorous evaluation of such functions often implies a very high computational cost, unacceptable in real-time applications. To alleviate this problem, functions are commonly approximated by simpler piecewise-polynomial representations. Following this idea, we propose a novel, efficient, and practical technique to evaluate complex and continuous functions using a nearly optimal design of two types of piecewise linear approximations in the case of a large budget of evaluation subintervals. To this end, we develop a thorough error analysis that yields asymptotically tight bounds to accurately quantify the approximation performance of both representations. It improves upon previous error estimates and allows the user to control the trade-off between the approximation error and the number of evaluation subintervals. To guarantee real-time operation, the method is suitable for, but not limited to, an efficient implementation in modern Graphics Processing Units (GPUs), where it outperforms previous alternative approaches by exploiting the fixed-function interpolation routines present in their texture units. The proposed technique is a good match for any application requiring the evaluation of continuous functions. We have measured its quality and efficiency in detail on several functions, in particular the Gaussian, because it is extensively used in many areas of computer vision and cybernetics and is expensive to evaluate.
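The core idea, replacing an expensive function by a cheap piecewise-linear representation on uniform subintervals, can be sketched as follows. This is a plain interpolating variant for illustration, not necessarily either of the paper's two near-optimal designs nor its GPU implementation; for smooth functions such as the Gaussian the worst-case error decays roughly as the square of the subinterval width:

```python
import math

def gaussian(x):
    return math.exp(-0.5 * x * x)

def make_pwl(f, lo, hi, n):
    # Interpolating piecewise-linear approximation of f on n uniform
    # subintervals of [lo, hi]; the returned callable is cheap to evaluate
    # (one table lookup and one linear blend, as a GPU texture unit does).
    xs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    ys = [f(x) for x in xs]
    def approx(x):
        t = (x - lo) / (hi - lo) * n
        i = min(int(t), n - 1)
        w = t - i
        return (1 - w) * ys[i] + w * ys[i + 1]
    return approx

def max_error(f, approx, lo, hi, samples=10001):
    # Dense sampling of the worst-case (sup-norm) approximation error.
    pts = (lo + (hi - lo) * i / (samples - 1) for i in range(samples))
    return max(abs(f(x) - approx(x)) for x in pts)
```

Doubling the number of subintervals roughly quarters the maximum error, which is the trade-off between error and subinterval budget that the paper's analysis quantifies with tight bounds.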


Purpose Concentrating Solar Power (CSP) plants based on parabolic troughs use auxiliary fuels (usually natural gas) to facilitate start-up operations, avoid freezing of the heat transfer fluid (HTF), and increase power output. This practice has a significant effect on the environmental performance of the technology. The aim of this paper is to quantify the sustainability of CSP and to analyse how it is affected by hybridisation with different natural gas (NG) inputs. Methods A complete Life Cycle (LC) inventory was gathered for a commercial wet-cooled 50 MWe CSP plant based on parabolic troughs. A sensitivity analysis was conducted to evaluate the environmental performance of the plant operating with different NG inputs (between 0 and 35% of gross electricity generation). ReCiPe Europe (H) was used as the LCA methodology; CML 2 baseline 2000 World and ReCiPe Europe E were used for comparative purposes. Cumulative Energy Demand (CED) and Energy Payback Time (EPT) were also determined for each scenario. Results and discussion Operation of CSP using solar energy only produced the following environmental profile: climate change 26.6 kg CO2 eq/MWh, human toxicity 13.1 kg 1,4-DB eq/MWh, marine ecotoxicity 276 g 1,4-DB eq/MWh, natural land transformation 0.005 m2/MWh, eutrophication 10.1 g P eq/MWh, acidification 166 g SO2 eq/MWh. Most of these impacts are associated with the extraction of raw materials and the manufacturing of plant components. The utilization of NG transformed the environmental profile of the technology, placing increasing weight on impacts related to operation and maintenance. Significantly higher impacts were observed in categories like climate change (311 kg CO2 eq/MWh when using 35% NG), natural land transformation, terrestrial acidification and fossil depletion.
Despite its fossil nature, the use of NG had a beneficial effect on other impact categories (human and marine toxicity, freshwater eutrophication and natural land transformation) due to the higher electricity output achieved. The overall environmental performance of CSP deteriorated significantly with the use of NG (single score 3.52 pt in solar-only operation compared to 36.1 pt when using 35% NG). Other sustainability parameters like EPT and CED also increased substantially as a result of higher NG inputs. Quasi-linear second-degree polynomial relationships were calculated between various environmental performance parameters and NG contributions. Conclusions The energy input from auxiliary NG determines the environmental profile of the CSP plant. Aggregated analysis shows a deleterious effect on the overall environmental performance of the technology as a result of NG utilization, due primarily to higher impacts in categories like climate change, natural land transformation, fossil fuel depletion and terrestrial acidification. NG may be used in a more sustainable and cost-effective manner in combined cycle power plants, which achieve higher energy conversion efficiencies.
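Second-degree polynomial relationships between an environmental parameter and the NG contribution can in principle be reproduced with an ordinary least-squares quadratic fit. A self-contained sketch using the 3×3 normal equations; the fitted data in the usage below are synthetic, not the paper's:

```python
def quadratic_fit(xs, ys):
    # Least-squares fit of y = a + b*x + c*x**2 via the 3x3 normal
    # equations, solved with Gaussian elimination (no pivoting; fine
    # for well-conditioned dose/share grids like 0..35%).
    S = [sum(x ** k for x in xs) for k in range(5)]
    T = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[S[0], S[1], S[2], T[0]],
         [S[1], S[2], S[3], T[1]],
         [S[2], S[3], S[4], T[2]]]
    for i in range(3):                      # forward elimination
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            for k in range(i, 4):
                A[j][k] -= f * A[i][k]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                     # back substitution
        coef[i] = (A[i][3] - sum(A[i][k] * coef[k]
                                 for k in range(i + 1, 3))) / A[i][i]
    return coef  # a, b, c
```

Given scenario results for several NG shares, the returned coefficients summarize the impact-versus-NG relationship in one equation, which is what makes the "quasi-linear second-degree" characterization in the abstract compact to report.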


Two experiments were conducted to estimate the standardized ileal digestible (SID) Trp:Lys ratio requirement for growth performance of nursery pigs. Experimental diets were formulated to ensure that lysine was the second limiting AA throughout the experiments. In Exp. 1 (6 to 10 kg BW), 255 nursery pigs (PIC 327 × 1050, initially 6.3 ± 0.15 kg BW, mean ± SD) arranged in pens of 6 or 7 pigs were blocked by pen weight and assigned to experimental diets (7 pens/diet) consisting of SID Trp:Lys ratios of 14.7%, 16.5%, 18.4%, 20.3%, 22.1%, and 24.0% for 14 d with 1.30% SID Lys. In Exp. 2 (11 to 20 kg BW), 1,088 pigs (PIC 337 × 1050, initially 11.2 ± 1.35 kg BW, mean ± SD) arranged in pens of 24 to 27 pigs were blocked by average pig weight and assigned to experimental diets (6 pens/diet) consisting of SID Trp:Lys ratios of 14.5%, 16.5%, 18.0%, 19.5%, 21.0%, 22.5%, and 24.5% for 21 d with 30% dried distillers grains with solubles and 0.97% SID Lys. Each experiment was analyzed using general linear mixed models with heterogeneous residual variances. Competing heteroskedastic models included broken-line linear (BLL), broken-line quadratic (BLQ), and quadratic polynomial (QP). For each response, the best-fitting model was selected using the Bayesian information criterion. In Exp. 1 (6 to 10 kg BW), increasing the SID Trp:Lys ratio linearly increased (P < 0.05) ADG and G:F. For ADG, the best-fitting model was a QP in which maximum ADG was estimated at 23.9% (95% confidence interval [CI]: [<14.7%, >24.0%]) SID Trp:Lys ratio. For G:F, the best-fitting model was a BLL in which maximum G:F was estimated at 20.4% (95% CI: [14.3%, 26.5%]) SID Trp:Lys. In Exp. 2 (11 to 20 kg BW), increasing the SID Trp:Lys ratio increased (P < 0.05) ADG and G:F in a quadratic manner. For ADG, the best-fitting model was a QP in which maximum ADG was estimated at 21.2% (95% CI: [20.5%, 21.9%]) SID Trp:Lys.
For G:F, BLL and BLQ models had comparable fit and estimated SID Trp:Lys requirements at 16.6% (95% CI: [16.0%, 17.3%]) and 17.1% (95% CI: [16.6%, 17.7%]), respectively. In conclusion, the estimated SID Trp:Lys requirement in Exp. 1 ranged from 20.4% for maximum G:F to 23.9% for maximum ADG, whereas in Exp. 2 it ranged from 16.6% for maximum G:F to 21.2% for maximum ADG. These results suggest that standard NRC (2012) recommendations may underestimate the SID Trp:Lys requirement for nursery pigs from 11 to 20 kg BW.
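The broken-line linear (BLL) model used for the requirement estimates can be sketched as a grid search over the breakpoint, with ordinary least squares for the plateau and slope at each candidate breakpoint. This is a minimal illustration with made-up data, not the paper's heteroskedastic mixed-model fit:

```python
def fit_bll(xs, ys, breakpoints):
    # Broken-line linear model: y = plateau - slope * max(0, b - x),
    # i.e. a linear rise in the response up to the breakpoint b (the
    # estimated requirement), then a plateau. For each candidate b the
    # model is linear in z = max(0, b - x), so plateau and slope come
    # from ordinary least squares on (z, y); keep the b with least SSE.
    best = None
    for b in breakpoints:
        zs = [max(0.0, b - x) for x in xs]
        n = len(xs)
        zbar, ybar = sum(zs) / n, sum(ys) / n
        szz = sum((z - zbar) ** 2 for z in zs)
        if szz == 0.0:
            continue  # breakpoint below all doses: slope not identifiable
        szy = sum((z - zbar) * (y - ybar) for z, y in zip(zs, ys))
        beta = szy / szz                    # coefficient on z (= -slope)
        plateau = ybar - beta * zbar
        sse = sum((y - (plateau + beta * z)) ** 2 for z, y in zip(zs, ys))
        if best is None or sse < best[0]:
            best = (sse, b, plateau, -beta)
    _, b, plateau, slope = best
    return b, plateau, slope
```

With G:F responses generated exactly from a breakpoint at a 16.5% Trp:Lys ratio, the grid search recovers the breakpoint, plateau, and slope, mirroring how the BLL requirement estimate is read off the fitted model.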


Grain legumes have a nutritional profile of great interest for feeding pigs, mainly because of their high protein content. However, the presence of antinutritional factors (ANF), which differ in kind and amount depending on the genus, limits the absorption of protein, the most valued nutrient. The aim of this doctoral thesis was to study the effect of the main ANFs of pea and narbon vetch on growth performance and on carcass and prime-cut yields when these legumes replace soybean, partially or totally, during the starter phase and the fattening period of heavy pigs. To this end, four trials were carried out with barrows of the same genetic line: Duroc × (Landrace × Large White) hybrids. Trial 1 studied the influence of different levels of protease inhibitors (PI) in the feed on the productivity of piglets during the starter phase (40 to 61 days of age). Three varieties of winter pea were used, containing amounts of trypsin (TI) and chymotrypsin (CI) inhibitors [trypsin inhibited units/mg (TIU), chymotrypsin inhibited units/mg (CIU): 9.87-10.16, 5.75-8.62 and 12.55-15.75 for the Cartouche, Iceberg and Luna peas, respectively] higher than those of soybean meal 47 (SBM) and extruded soybean (SBE) (TIU/mg - CIU/mg: 0.61-3.56 and 2.36-4.65 for SBM and SBE, respectively). The experimental design was randomized, with four dietary treatments differing in protein source and amount of PI: a soybean control feed versus three feeds in which winter peas of the indicated varieties partially replaced soybean. Each treatment was replicated four times, the pen of 6 piglets being the experimental unit.
The animals fed the Cartouche pea diet had higher average daily gain (ADG) than the rest (P < 0.001), with the same average daily feed intake (ADFI) and feed conversion ratio (FCR). There were no significant differences between the animals on the control feed and those fed the Iceberg and Luna pea diets. In trial 2 the legume under study was the narbon vetch and its ANF the dipeptide γ-Glutamyl-S-Ethenyl-Cysteine (GEC). The design and the experimental period were the same as in trial 1, with four diets varying in the percentage of narbon vetch (0%, 5%, 15% and 25%) and hence of GEC (1.54% of the grain). The piglets fed the 5% diet had higher ADFI and ADG (P < 0.001), with the same FCR as the animals on the 0% treatment. Performance worsened significantly and progressively as the percentage of narbon vetch increased (15 and 25%). Regression equations with polynomial structure were obtained that were significant both for the level of narbon vetch and for the amount of GEC in the feed. Trial 3 was carried out during the fattening period, completely replacing soybean from 84 days of age with the three varieties of winter pea, and observing the effect on growth performance and on carcass and prime-cut yields. The design, in randomized complete blocks, had four treatments according to the pea present in the feed, and therefore the PI level: soybean control, Cartouche, Iceberg and Luna, with 12 replicates of 4 pigs per treatment. From 84 to 108 days of age the animals fed the soybean control and Iceberg diets had the same ADFI and ADG, which worsened in the pigs fed Luna and Cartouche (P < 0.05). The FCR was the same for the soybean control and Iceberg treatments, intermediate for Cartouche, and worst for the pigs on the Luna diet (P < 0.001).
From 109 to 127 days of age, ADG and FCR were the same, with a higher ADFI in the soybean control and Iceberg treatments than in the pigs fed Cartouche and Luna (P < 0.05). There were no significant differences during finishing (128 to 167 days of age). Overall, ADFI and ADG were higher in the pigs fed the Iceberg and soybean control diets and equally worse in those fed Cartouche and Luna (P < 0.05); FCR was the same in all treatments. No differences were observed in carcass weight and yield, in the prime cuts (ham, shoulder and loin), in the intramuscular fat content of the loin, or in the proportion of the main fatty acids (C16:0, C18:0, C18:1n-9) in the subcutaneous fat. Trial 4, carried out during the fattening period (60 to 171 days of age), assessed the effect of diets with different levels of narbon vetch, and consequently of its antinutritional factor the dipeptide GEC, on growth performance and on carcass and prime-cut quality. The design was in four randomized complete blocks, with four treatments according to the percentage of narbon vetch in the feed (0%, 5%, 15% and 25%), with 12 replicates per treatment and four pigs per replicate. The 5% treatment improved ADG at the end of the fattening phase (152 days of age) and, together with the 0% treatment, gave the most favorable final weight and FCR at the end of the trial (171 days of age). Likewise, carcass weight and yield were higher in the pigs fed the 0% and 5% treatments (P < 0.001). Feeds with 15 and 25% narbon vetch worsened performance as well as carcass weight and yield. The same happened with the weight of the prime cuts (ham, shoulder and loin), significantly higher at 0% and 5% than at 15% and 25%, the pigs on the latter feed being the worst.
By contrast, ham and loin yields were higher in the pigs on the 25% and 15% treatments than in those fed the 5% and 0% diets (P < 0.001); for shoulder yield the results were reversed, being higher in the animals on the 0% and 5% treatments (P < 0.001). Polynomial regression equations were obtained to estimate the most favorable inclusion levels of narbon vetch and GEC from a productive standpoint, as well as orthogonal contrasts between treatments.
ABSTRACT
Grain legumes have a nutritional profile of great interest for pig feeding, mainly due to their high protein content. However, the presence of antinutritional factors (ANF), which differ in quality and quantity according to the genus, hinders the absorption of the protein, the most valuable nutrient. The aim of this thesis was to study the effect of the main ANFs of pea and narbon vetch (NV) on productive performance and on carcass and main lean cuts when they replace soybean, partially or totally, during the starter phase and the fattening period of heavy pigs. For this purpose, four trials were carried out with barrows of the same genetic line: Duroc × (Landrace × Large White) hybrids. In trial 1, the influence of different levels of protease inhibitors (PI) in the diet on the productivity of piglets during the starter phase (40-61 days of age) was studied. Three varieties of winter peas were used, containing amounts of trypsin (TI) and chymotrypsin (CI) inhibitors [trypsin inhibited units/mg (TIU), chymotrypsin inhibited units/mg (CIU): 9.87-10.16, 5.75-8.62 and 12.55-15.75, for peas Cartouche, Iceberg and Luna, respectively] higher than in soybean meal 47 (SBM) and extruded soybeans (SBE) (TIU/mg - CIU/mg: 0.61-3.56 and 2.36-4.65 for SBM and SBE, respectively).
The design was randomized with four dietary treatments differing in protein sources and the amount of PI, with a control diet of soybean and three with different varieties of winter peas: Cartouche, Iceberg and Luna, which partially replace soybean. Each treatment was replicated four times, being the pen with 6 piglets the experimental unit. Pigs that ate the feed with pea Cartouche had better growth (ADG) than the rest (P < 0.001), with the same average daily feed intake (ADFI) and feed conversion ratio (FCR). There were no significant differences between piglets fed with control diet and those fed Iceberg and Luna diets. In trial 2 the legume under study was the NV and your ANF the dipeptide _Glutamyl FAN-S-Ethenyl-Cysteine (GEC). The experimental period and the design were the same as in trial 1, with four diets with different percentage of NV: 0%, 5%, 15% and 25%, and from GEC (1.52% of the grain). The piglets that consumed the feed containing 5% had higher ADG and ADFI (P < 0.05), with the same FCR that pigs belonging to the 0% treatment. Production rates worsened progressively with increasing percentage of NV (15 and 25%). Were obtained regression equations with polynomial structure that were significant for NV percentage and amount of GEC present in the feed. The test 3 was carried out during the fattening period, completely replace soy from 84 days of age with three varieties of winter peas, observing the effect on the yield, carcass and main lean cuts. The design, randomized complete blocks, had four treatments with different levels of PI: Control-soy, Cartouche, Iceberg and Luna, with 12 replicates of 4 pigs per treatment. From 84 to 108 days of age the pigs fed with Control-soy and Iceberg feed, had the same ADFI and ADG, worsening in pigs fed with Luna and Cartouche (P < 0.05). The FCR was similar in diets Control-soy and Iceberg, occupying an intermediate position in Cartouche and worse in pigs fed with Luna (P < 0.001). 
From 109 to 127 days of age, ADG and FCR were equal, with higher ADFI in pigs fed Control-soy and Iceberg than in pigs fed Cartouche and Luna (P < 0.05). There were no differences in the finishing phase (128-167 days of age). Over the global period, ADFI and ADG were higher in pigs fed Control-soy and Iceberg and lower in those fed Cartouche and Luna; the FCR was the same in all treatments. No significant differences were observed in weight and carcass yield, main lean cuts (ham, shoulder and loin chop), loin intramuscular fat content or the proportions of the major fatty acids (C16:0, C18:0, C18:1n-9) of subcutaneous fat. Experiment 4, carried out during the fattening period (60-171 days of age), assessed the effect of diets with different levels of NV, and consequently of GEC, on performance and on carcass and main lean-cut quality. A completely randomized design was used with four dietary treatments differing in the percentage of NV (0%, 5%, 15% and 25%), with 12 replicates of four pigs per treatment. The 5% treatment improved ADG at the end of the fattening phase (152 days of age) and, together with 0%, showed the most favorable body weight and FCR at the end of the trial (171 days of age). Similarly, carcass weight and yield were higher for pigs fed the 0% and 5% diets (P < 0.05). The 15% and 25% diets worsened the productive and carcass results. The weight of the main lean cuts (ham, shoulder and loin chop) was significantly higher for 0% and 5% than for 15% and 25%, the 25% diet being the worst of all. By contrast, ham and loin-chop yields were higher in pigs fed the 25% and 15% diets than in those fed 5% and 0% (P < 0.001); shoulder yield was reversed, being greater in pigs fed the 0% and 5% diets (P < 0.001). 
Polynomial regression equations were obtained to estimate the most favorable inclusion levels of NV and GEC from a production standpoint, together with orthogonal contrasts between treatments.
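The abstract's use of polynomial regression to locate the most favorable inclusion level can be sketched as follows. The treatment means below are hypothetical (the abstract does not report the raw data); fitting a quadratic dose-response curve and taking its vertex is one standard way to estimate an optimum from inclusion-level trials:

```python
import numpy as np

# Hypothetical treatment means: % narbon vetch inclusion vs. ADG (g/day).
levels = np.array([0.0, 5.0, 15.0, 25.0])
adg = np.array([620.0, 660.0, 600.0, 520.0])

# Fit a quadratic dose-response curve ADG = a*x^2 + b*x + c.
a, b, c = np.polyfit(levels, adg, 2)

# With a < 0 the parabola opens downward, so its vertex estimates
# the inclusion level that maximizes ADG.
optimum = -b / (2.0 * a)
print(f"estimated optimum inclusion: {optimum:.1f}%")
```

With these illustrative numbers the estimated optimum falls near the 5% treatment, consistent with the abstract's finding that 5% NV performed best.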

Relevância:

10.00% 10.00%

Publicador:

Resumo:

In this paper three p-adaptation strategies based on the minimization of the truncation error are presented for high order discontinuous Galerkin methods. The truncation error is approximated by means of a τ-estimation procedure and enables the identification of mesh regions that require adaptation. Three adaptation strategies are developed and termed a posteriori, quasi-a priori and quasi-a priori corrected. All strategies require fine solutions, which are obtained by enriching the polynomial order, but while the former needs time-converged solutions, the last two rely on non-converged solutions, which leads to faster computations. In addition, the high order method permits the spatial decoupling of the estimated errors and enables anisotropic p-adaptation. These strategies are verified and compared in terms of accuracy and computational cost for the Euler and the compressible Navier–Stokes equations. It is shown that the two quasi-a priori methods achieve a significant reduction in computational cost when compared to a uniform polynomial enrichment. Namely, for a viscous boundary layer flow, we obtain a speedup of 6.6 and 7.6 for the quasi-a priori and quasi-a priori corrected approaches, respectively.
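The idea behind τ-estimation (approximating the truncation error by applying an enriched, higher-order operator to the solution of the lower-order discretization) can be illustrated on a toy problem. This is a simplified finite-difference analogue of the p-enrichment used in the paper; the model problem and operators are chosen purely for illustration:

```python
import numpy as np

def tau_estimate(N):
    """Estimate the truncation error of a 2nd-order scheme for u'' = f
    by applying a 4th-order ('enriched') operator to the 2nd-order solution."""
    h = np.pi / N
    x = np.linspace(0.0, np.pi, N + 1)
    f = -np.sin(x)                      # exact solution is u = sin(x)

    # Solve the 2nd-order discretization (Dirichlet BCs u(0) = u(pi) = 0).
    A = np.zeros((N - 1, N - 1))
    for i in range(N - 1):
        A[i, i] = -2.0 / h**2
        if i > 0:
            A[i, i - 1] = 1.0 / h**2
        if i < N - 2:
            A[i, i + 1] = 1.0 / h**2
    u = np.zeros(N + 1)
    u[1:-1] = np.linalg.solve(A, f[1:-1])

    # tau ~ A_high(u_low) - f, evaluated on the interior 4th-order stencil.
    i = np.arange(2, N - 1)
    lap4 = (-u[i-2] + 16*u[i-1] - 30*u[i] + 16*u[i+1] - u[i+2]) / (12 * h**2)
    return np.max(np.abs(lap4 - f[i]))

# Halving h should reduce the estimated truncation error ~4x (2nd order).
print(tau_estimate(16) / tau_estimate(32))
```

The estimate singles out where the low-order discretization is inaccurate without ever computing the exact solution, which is what makes it usable as an adaptation sensor.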

Relevância:

10.00% 10.00%

Publicador:

Resumo:

A numerical method to analyse the stability of transverse galloping based on experimental measurements, as an alternative method to polynomial fitting of the transverse force coefficient Cz, is proposed in this paper. The Glauert–Den Hartog criterion is used to determine the region of angles of attack (pitch angles) prone to present galloping. An analytic solution (based on a polynomial curve of Cz) is used to validate the method and to evaluate the discretization errors. Several bodies (of biconvex, D-shape and rhomboidal cross sections) have been tested in a wind tunnel and the stability of the galloping region has been analysed with the new method. An algorithm to determine the pitch angle of the body that allows the maximum value of the kinetic energy of the flow to be extracted is presented.
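A minimal numerical version of the stability check can be sketched as below. The force data are synthetic (the paper works from wind-tunnel measurements of Cz), and the criterion is written in its classical Den Hartog form, flagging pitch angles where dCz/dα + Cx < 0 as prone to galloping:

```python
import numpy as np

# Synthetic transverse-force data Cz(alpha) for a hypothetical cross section.
alpha = np.deg2rad(np.arange(0, 31, 2))
cz = 0.8 * alpha - 4.0 * alpha**3
cx = 0.10                                # drag coefficient, assumed constant

# Polynomial fit of Cz(alpha) and its analytic derivative.
p = np.polyfit(alpha, cz, 3)
dcz = np.polyval(np.polyder(p), alpha)

# Glauert-Den Hartog criterion: galloping possible where dCz/dalpha + Cx < 0.
H = dcz + cx
unstable = alpha[H < 0.0]
print(np.rad2deg(unstable.min()))        # onset pitch angle, degrees
```

For this illustrative Cz curve the instability region starts at the 16-degree sample, the first angle at which the fitted slope is negative enough to overcome the drag term.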

Relevância:

10.00% 10.00%

Publicador:

Resumo:

The results presented in this doctoral thesis fall within so-called membrane computing, a new research branch within natural computing created by Gh. Paun in 1998, which is why these devices are usually called P systems. This new distributed computing model is inspired by the structure and functioning of the cell. The aim of this thesis has been to analyze the computational power and efficiency of these cellular computing systems. Specifically, two types of P systems have been analyzed: on the one hand, spiking neural P systems, and on the other, P systems with proteins on membranes. For the first type, the results obtained show that these systems can remain universal even when many of their features are limited or removed. For the second type, computational efficiency is analyzed, and it is shown that they can solve problems in the complexity class PSPACE in polynomial time. Analysis of computational power: Spiking neural P systems (SN P systems) are inspired by neuronal functioning and by the way spikes propagate through synaptic networks. These bio-inspired SN P systems possess a wide range of features that make them universal and therefore equivalent, in computational power, to a Turing machine. Such systems are computationally powerful, but as defined they incorporate numerous features, perhaps too many. In (Ibarra et al. 2007) it was shown that their functionality could be limited without compromising universality. The results presented here continue the line of work of (Ibarra et al. 2007) and contribute new normal forms. 
That is, new simplified variants of SN P systems with a minimal set of features that nevertheless retain universal computational power. Analysis of computational efficiency: This thesis has studied the computational efficiency of the so-called P systems with proteins on membranes. It is shown that this computing model is equivalent to parallel random access machines (PRAM) or to alternating Turing machines, since a P system with proteins is able to solve a PSPACE-complete problem, QSAT (the quantified propositional satisfiability problem), in polynomial time. This variant of P systems with proteins is very efficient thanks to the power of proteins to catalyze intercellular communication processes. ABSTRACT The results presented in this thesis belong to membrane computing, a new research branch within natural computing. This branch was created by Gh. Paun in 1998, hence these systems usually receive the name of P systems. This new distributed computing model is inspired by the structure and functioning of the cell. The aim of this thesis is to analyze the efficiency and computational power of these cellular computing systems. Specifically, two different classes of P systems have been analyzed: on the one hand, spiking neural P systems, and on the other, P systems with proteins on membranes. For the first class, it is shown that the characteristics of these systems can be reduced or restricted without loss of computational power. For the second class, computational efficiency is analyzed, showing that PSPACE problems can be solved in polynomial time. Computational power analysis: Spiking neural P systems (SN P for short) are systems inspired by the way neural cells operate, sending spikes through synaptic networks. 
Bio-inspired SN P systems possess a large range of features that make them universal and therefore equivalent in computational power to a Turing machine. Such systems are computationally powerful, but by definition they incorporate many features, perhaps too many. In (Ibarra et al. 2007) it was shown that their functionality may be limited without compromising universality. The results presented herein continue the (Ibarra et al. 2007) line of work, providing new normal forms: new simplified SN P variants with a minimum set of features that keep universal computational power. Computational efficiency analysis: In this thesis we study the computational efficiency of P systems with proteins on membranes. We show that this computational model is equivalent to the parallel random access machine (PRAM) or the alternating Turing machine, since P systems with proteins can solve a PSPACE-complete problem, QSAT (the quantified propositional satisfiability problem), in polynomial time. This variant of P systems with proteins is very efficient thanks to the power of proteins to catalyze intercellular communication processes.
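To make the model concrete, here is a toy simulator for a deterministic spiking neural P system: neurons hold spike counts, a firing neuron consumes spikes and emits one spike along each outgoing synapse, and all applicable neurons fire synchronously. The three-neuron example (a two-neuron loop feeding an output neuron) is illustrative only and is not taken from the thesis:

```python
# Toy SN P system step: all neurons whose threshold is met fire at once;
# each firing neuron consumes spikes and sends one spike per outgoing synapse.
def step(spikes, rules, synapses):
    firing = [n for n, (threshold, consume) in rules.items()
              if spikes[n] >= threshold]
    for n in firing:
        spikes[n] -= rules[n][1]            # consume spikes
    for n in firing:
        for target in synapses.get(n, []):  # deliver one spike per synapse
            spikes[target] += 1
    return spikes

# Two neurons spiking in a loop; 'out' collects one spike every two steps.
spikes = {"n1": 1, "n2": 0, "out": 0}
rules = {"n1": (1, 1), "n2": (1, 1)}        # fire when >= 1 spike, consume 1
synapses = {"n1": ["n2", "out"], "n2": ["n1"]}

for _ in range(6):
    step(spikes, rules, synapses)
print(spikes["out"])   # -> 3
```

Real SN P systems allow regular-expression firing conditions, delays and nondeterminism; this sketch keeps only the synchronous spike-passing core that the abstract describes.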

Relevância:

10.00% 10.00%

Publicador:

Resumo:

We study the Morton-Franks-Williams inequality for closures of simple braids (also known as positive permutation braids). This allows us to prove, in a simple way, that the set of simple braids is an orthonormal basis for the inner product of the Hecke algebra of the braid group defined by Kálmán, who first obtained this result by using an interesting connection with contact topology. We also introduce a new technique to study the Homflypt polynomial for closures of positive braids, namely resolution trees whose leaves are simple braids. In terms of these simple resolution trees, we characterize closed positive braids for which the Morton-Franks-Williams inequality is strict. In particular, we determine explicitly the positive braid words on three strands whose closures have braid index three.
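For context, the inequality in question can be stated as follows (in one common normalization of the Homflypt polynomial; conventions vary between authors). If $\beta$ is a braid on $n$ strands with exponent sum (writhe) $w$, then the $v$-degrees of the Homflypt polynomial $P_{\hat\beta}(v,z)$ of its closure satisfy

```latex
w - n + 1 \;\le\; \operatorname{mindeg}_v P_{\hat\beta}
          \;\le\; \operatorname{maxdeg}_v P_{\hat\beta}
          \;\le\; w + n - 1 .
```

Consequently the braid index of $\hat\beta$ is bounded below by $\tfrac{1}{2}\operatorname{breadth}_v P_{\hat\beta} + 1$, and the inequality is said to be sharp when these degree bounds are attained; the abstract's "strict" case is when they are not.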

Relevância:

10.00% 10.00%

Publicador:

Resumo:

In this work a p-adaptation (modification of the polynomial order) strategy based on the minimization of the truncation error is developed for high order discontinuous Galerkin methods. The truncation error is approximated by means of a truncation error estimation procedure and enables the identification of mesh regions that require adaptation. Three truncation error estimation approaches are developed and termed a posteriori, quasi-a priori and quasi-a priori corrected. Fine solutions, which are obtained by enriching the polynomial order, are required to solve the numerical problem with adequate accuracy. Of the three truncation error estimation methods, the first needs time-converged solutions, while the last two rely on non-converged solutions, which leads to faster computations. Based on these truncation error estimation methods, algorithms for mesh adaptation were designed and tested. Firstly, an isotropic adaptation approach is presented, which leads to equally distributed polynomial orders in different coordinate directions. This first implementation is improved by incorporating a method to extrapolate the truncation error. This results in a significant reduction of computational cost. Secondly, the employed high order method permits the spatial decoupling of the estimated errors and enables anisotropic p-adaptation. The incorporation of anisotropic features leads to meshes with different polynomial orders in the different coordinate directions, such that flow features related to the geometry are resolved in a better manner. These adaptations result in a significant reduction of degrees of freedom and computational cost, while the amount of improvement depends on the test case. Finally, this anisotropic approach is extended by using error extrapolation, which leads to an even higher reduction in computational cost. These strategies are verified and compared in terms of accuracy and computational cost for the Euler and the compressible Navier-Stokes equations. 
The main result is that the two quasi-a priori methods achieve a significant reduction in computational cost when compared to a uniform polynomial enrichment. Namely, for a viscous boundary layer flow, we obtain a speedup of a factor of 6.6 and 7.6 for the quasi-a priori and quasi-a priori corrected approaches, respectively. RESUMEN In this work a p-adaptation (modification of the polynomial order) strategy for high order discontinuous Galerkin methods, based on the minimization of the truncation error, has been developed. The truncation error is estimated using the tau-estimation method. The estimator allows the identification of mesh regions that require adaptation. Three estimation techniques are distinguished: a posteriori, quasi-a priori and quasi-a priori with correction. All strategies require a solution obtained on a fine mesh, produced by uniformly increasing the polynomial order. However, while the first requires this solution to be time-converged, the other two use non-converged solutions, which translates into a lower computational cost. In this work, mesh adaptation algorithms based on tau-estimation methods have been designed and tested. First, an isotropic adaptation algorithm is presented, which leads to discretizations with the same polynomial order in all spatial directions. This first implementation is improved by including a method to extrapolate the truncation error, resulting in a significant reduction of the computational cost. Second, the high order method permits the spatial decoupling of the estimated errors, allowing anisotropic adaptation. The meshes obtained with this technique have different polynomial orders in each of the spatial directions. 
The final mesh has an optimal distribution of polynomial orders, which are related to the features of the flow, which in turn depend on the geometry. These adaptation techniques significantly reduce the degrees of freedom and the computational cost. Finally, this anisotropic approach is extended using extrapolation of the truncation error, which leads to an even lower computational cost. The strategies are verified and compared in terms of accuracy and computational cost using the Euler and Navier-Stokes equations. The two quasi-a priori methods achieve a significant reduction of the computational cost compared with a uniform increase of the polynomial order. Specifically, for a viscous boundary layer, we obtain improvements in computation time of 6.6 and 7.6, respectively, for the quasi-a priori and quasi-a priori corrected approaches.
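The truncation-error extrapolation mentioned in the abstract can be sketched as follows, under the standard assumption (not spelled out in the abstract) of exponential (spectral) decay of the truncation error with polynomial order, τ(p) ≈ C·e^(−σp). Two τ-estimates at consecutive orders then predict the order needed to meet a tolerance without computing further fine solutions:

```python
import math

def extrapolate_order(p1, tau1, p2, tau2, tol):
    """Fit tau(p) = C * exp(-sigma * p) through two estimates and return
    the smallest integer order predicted to reach tolerance `tol`."""
    sigma = math.log(tau1 / tau2) / (p2 - p1)     # decay rate
    C = tau1 * math.exp(sigma * p1)
    p = p1
    while C * math.exp(-sigma * p) > tol:
        p += 1
    return p

# Two hypothetical tau-estimates, decaying one decade per order.
print(extrapolate_order(3, 1e-2, 4, 1e-3, tol=5e-7))   # -> 8
```

The same two-point fit can be applied independently in each coordinate direction, which is what makes the extrapolation compatible with the anisotropic adaptation the abstract describes.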

Relevância:

10.00% 10.00%

Publicador:

Resumo:

The use of fixed-point arithmetic is a widespread design choice in systems with strong area, power or performance constraints. To produce implementations where costs are minimized without negatively impacting the accuracy of the results, we must carry out a careful assignment of word-lengths. Finding the optimal combination of fixed-point word-lengths for a given system is an NP-hard combinatorial problem to which designers devote between 25 and 50% of the design cycle. Reconfigurable hardware platforms, such as FPGAs, also benefit from the advantages of fixed-point arithmetic, since it compensates for the lower clock frequencies and less efficient hardware usage of these platforms with respect to ASICs. As FPGAs become popular for scientific computing, designs grow in size and complexity to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization techniques. In this doctoral thesis we explore different aspects of the quantization problem and present new methodologies for each of them: Techniques based on interval extensions have made it possible to obtain very accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a modern technique based on statistical Modified Affine Arithmetic (MAA) in order to model systems that contain control-flow structures. Our methodology generates the different execution paths automatically, determines the regions of the input domain that will exercise each of them, and extracts the statistical moments of the system from these partial solutions. 
We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in certain use cases with non-linear operators deviates by only 0.04% with respect to reference values obtained by simulation. A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the systems under study grows, which leads to scalability problems. To face this problem we present a clustered noise injection technique that groups the signals of the system, introduces the noise sources for each group separately, and finally combines the results of all of them. In this way, the number of noise sources is controlled at all times and, because of this, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, with the goal of keeping the results as accurate as possible. This doctoral thesis also addresses the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. To this end we present two new techniques that explore the reduction of execution time from different angles: First, the interpolative method applies a simple but accurate interpolator to estimate the sensitivity of each signal, which is then used during the optimization stage. 
Second, the incremental method revolves around the fact that, although it is strictly necessary to maintain a given confidence interval for the final results of our search, we can employ more relaxed confidence levels, which results in a smaller number of trials per simulation, in the initial stages of the search, when we are still far from the optimized solutions. With these two approaches we show that the execution time of classical greedy search algorithms can be accelerated by factors of up to x240 for small/medium-sized problems. Finally, this book presents HOPLITE, an automated, flexible and modular quantization infrastructure that includes the implementation of the above techniques and is publicly available. Its goal is to offer developers and researchers a common environment to easily prototype and verify new quantization methodologies. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its operation. We also show, through a simple example, how to connect new extensions to the tool through the existing interfaces in order to expand and improve the capabilities of HOPLITE. ABSTRACT Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. 
Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of the hardware platform with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and we present new methodologies for each of them: The techniques based on extensions of intervals have allowed to obtain accurate models of the signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art Statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures. We show the good accuracy of our approach, which in some case studies with non-linear operators shows a 0.04% deviation with respect to the simulation-based reference values. A known drawback of the techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted systems grows, which leads to scalability problems. 
To address this issue we present a clustered noise injection technique that groups the signals in the system, introduces the noise terms in each group independently and then combines the results at the end. In this way, the number of noise sources in the system at a given time is controlled and, because of this, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. This Ph.D. Thesis also covers the development of methodologies for word-length optimization based on Monte-Carlo simulations in reasonable times. We do so by presenting two novel techniques that explore the reduction of the execution times, approaching the problem in two different ways: First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a certain confidence level in the simulations for the final results of the optimization process, we can do it with more relaxed levels, which in turn implies using a considerably smaller amount of samples, in the initial stages of the process, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy techniques can be accelerated by factors of up to ×240 for small/medium sized problems. Finally, this book introduces HOPLITE, an automated, flexible and modular framework for quantization that includes the implementation of the previous techniques and is provided for public access. The aim is to offer a common ground for developers and researchers for prototyping and verifying new techniques for system modelling and word-length optimization easily. 
We describe its workflow, justify the design decisions taken, explain its public API and give a step-by-step demonstration of its execution. We also show, through an example, how new extensions to the flow should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
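The greedy word-length search that these techniques accelerate can be sketched in miniature. The toy datapath, error metric and budget below are hypothetical; the point is the classical greedy loop: start from generous fractional word-lengths and repeatedly trim any signal whose reduction keeps the Monte-Carlo error estimate within budget:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 1.0, size=(1000, 3))      # inputs a, b, c

def quantize(x, frac_bits):
    """Round x to a fixed-point grid with `frac_bits` fractional bits."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

def max_error(bits):
    """Monte-Carlo estimate of the worst-case error of y = a*b + c."""
    a, b, c = (quantize(samples[:, i], bits[i]) for i in range(3))
    y_ref = samples[:, 0] * samples[:, 1] + samples[:, 2]
    return np.max(np.abs(a * b + c - y_ref))

budget = 1e-3
bits = [16, 16, 16]                                   # generous starting point
improved = True
while improved:
    improved = False
    for i in range(3):                                # try trimming each signal
        trial = bits.copy()
        trial[i] -= 1
        if trial[i] >= 1 and max_error(trial) <= budget:
            bits = trial                              # keep the feasible trim
            improved = True
print(bits, sum(bits))
```

Every accepted state satisfies the error budget by construction, so the loop ends at a locally minimal total word-length; the interpolative and incremental methods described above attack the cost of the repeated `max_error` evaluations inside this loop.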

Relevância:

10.00% 10.00%

Publicador:

Resumo:

In order to determine the most frequent forms of the lower dental arch in natural normal occlusion, a mathematical method based on a polynomial function was applied to 63 lower-arch casts selected from 6118 adolescents. All subjects had permanent dentition, including the second molars, and natural normal occlusion. A glass sphere, which served to simulate the orthodontic bracket, was attached to each tooth and used to measure the distances from the center of the sphere's image to the x and y axes. After the plaster casts were digitized, the images were plotted in a computer program in order to obtain the sixth-degree polynomial function and its graph for the 126 curve segments produced by sectioning the images into right and left sides. These segments were then organized, according to the characteristics of the anterior curvature of the dental arches, into eight groups of forms, designated Form A, Form B, Form C, Form D, Form E, Form F, Form G and Form H. Each group was further divided into three subgroups according to size: small, medium and large. The results indicated 23 representative forms of the lower dental arch and one mean form for natural normal occlusion.
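The curve-fitting step can be sketched as follows. The bracket coordinates below are hypothetical (the thesis used digitized sphere positions from 63 casts); fitting a sixth-degree polynomial to the (x, y) points of one arch yields the analytic curve whose shape and size are then used to classify the arch:

```python
import numpy as np

# Hypothetical bracket centers (mm) for one lower arch, molar to molar.
x = np.array([-27, -22, -16, -11, -7, -3, 3, 7, 11, 16, 22, 27], dtype=float)
y = 30.0 - 0.03 * x**2 - 8e-6 * x**4          # synthetic arch-shaped data

# Sixth-degree polynomial fit, as used for each curve segment in the study.
coeffs = np.polyfit(x, y, 6)
arch = np.poly1d(coeffs)

residual = np.max(np.abs(arch(x) - y))        # goodness of fit at the teeth
depth = arch(0.0) - arch(27.0)                # anterior depth vs. molar level
print(f"max residual: {residual:.2e}, arch depth: {depth:.1f} mm")
```

Descriptors read off the fitted function, such as the anterior depth computed here, are the kind of quantity by which the 126 segments could be sorted into the eight form groups and three size subgroups.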