23 results for Productive specialization
at Universidad Politécnica de Madrid
Abstract:
The effects of inclusion of pea hulls (PH) in the diet on growth performance, development of the gastrointestinal tract and nutrient retention were studied in broilers from 1 to 18d of age. The treatments consisted of a control diet based on low fibre ingredients (69.3g total dietary fibre/kg (16.1g crude fibre/kg)) and three additional diets that resulted from the dilution of the basal diet with 25, 50 and 75g PH/kg (81.2, 93.2, and 105.1g total dietary fibre/kg diet, respectively). Each treatment was replicated six times and the experimental unit was a cage with 12 chicks. Growth performance, development of the gastrointestinal tract and the coefficients of total tract apparent retention (CTTAR) of nutrients were recorded at 6, 12 and 18d of age. In addition, jejunal morphology was measured at 12 and 18d and the coefficients of apparent ileal digestibility (CAID) of nutrients at 18d of age. Pea hull inclusion affected all the parameters studied. The inclusion of 25 and 50g PH/kg diet improved growth performance as compared to the control diet. The relative weight (g/kg body weight) of the proventriculus (P≤0.01), gizzard (P≤0.001) and ceca (P≤0.05) increased linearly as the level of PH in the diet increased. The inclusion of PH affected quadratically (P≤0.01) the villus height:crypt depth ratio, with the highest value observed at 25g PH/kg. In general, the CTTAR and CAID of nutrients increased linearly and quadratically (P≤0.05) with increasing levels of PH, showing maximum values at PH levels between 25 and 50g/kg diet. We conclude that the size of the digestive organs increases with increasing levels of PH in the diet. In general, the best performance and nutrient digestibility values were observed with levels of PH within the range of 25 to 50g/kg. Therefore, young broilers have a requirement for a minimum amount of dietary fibre. When pea hulls are used as a source of fibre, the level of total dietary fibre required for optimal performance is within the range of 81.2–93.2g/kg diet (25.6–35.0g crude fibre/kg diet). An excess of total dietary fibre (above 93.2g/kg diet) might reduce nutrient digestibility and growth performance to values similar to those observed with the control diet.
Abstract:
A total of 200 (Landrace × Large White dam × Pietrain × Large White sire) gilts of 50 ± 3 days of age (23.3 ± 1.47 kg BW) were used to investigate the effects of castration (intact gilt, IG v. castrated gilt, CG) and slaughter weight (SW; 106 v. 122 kg BW) on productive performance, carcass and meat quality. Four treatments were arranged factorially, with five replicates of 10 pigs each per treatment. Half of the gilts were ovariectomized at 58 days of age (8 days after the beginning of the trial, at 29.8 ± 1.64 kg BW), whereas the other half remained intact. The pigs were slaughtered at 106 or 122 kg BW. Meat samples were taken from the Musculus longissimus thoracis at the level of the last rib and subcutaneous fat samples were taken at the tail insertion. For the entire experimental period, CG had higher (P < 0.05) BW gain and greater (P < 0.001) backfat and Musculus gluteus medius fat thickness than IG. However, IG had higher (P < 0.05) loin and trimmed primal cut yields than CG. Meat quality was similar for IG and CG, but the proportion of linoleic acid in subcutaneous fat was higher (P < 0.001) for IG. Pigs slaughtered at 122 kg BW had higher (P < 0.001) feed intake and poorer feed efficiency than pigs slaughtered at 106 kg BW. An increase in SW improved (P < 0.001) carcass yield but decreased (P < 0.05) trimmed primal cut yield. Meat from pigs slaughtered at the heavier BW was redder (a*; P < 0.001) and had more (P < 0.01) intramuscular fat and less thawing (P < 0.05) and cooking (P < 0.10) loss than meat from pigs slaughtered at the lighter BW. In addition, pigs slaughtered at 122 kg BW had a lower (P < 0.01) linoleic acid content in subcutaneous fat than pigs slaughtered at 106 kg BW. Castration of gilts and slaughtering at heavier BW are useful practices for the production of heavy pigs destined for the dry-cured industry, in which a certain amount of fat in the carcass is required. In contrast, when the carcasses are destined for fresh meat production, IG slaughtered at 106 kg BW is a more efficient alternative.
Abstract:
A 12-wk experiment was conducted to investigate the effect of feeding program, dietary fiber, and CP content of the diet on the productive performance of Ross broiler breeder hens (41 wk of age). There were 12 treatments arranged factorially with 2 levels of CP (14.5 vs. 17.4%), 3 fiber sources (none vs. 3% inulin vs. 3% cellulose), and 2 levels of feed intake (160 vs. 208 g/d) that corresponded to restricted (R) or ad libitum (AL) feeding systems. The experimental diets contained 2,800 kcal ME/kg with either 0.65% Lys (14.5% CP) or 0.78% Lys (17.4% CP).
Abstract:
The influence of the main cereal and supplemental fat of the diet on productive performance and egg quality was studied in 756 brown-egg laying hens from 22 to 54 weeks of age. The experiment was conducted as a completely randomized design with 9 treatments arranged factorially with 3 cereals (dented corn, soft wheat, and barley) and 3 types of fat [soy oil (SBO), acidulated soapstocks (AOS), and lard].
Abstract:
A total of 504 Lohmann Brown hens were used to study the influence of the initial BW of the birds and the crude protein (CP) and fat content of the diet on performance and egg quality traits from 22 to 49 weeks of age. The experiment was completely randomized with 8 treatments arranged factorially with 2 initial BW (1,726 vs. 1,987 g) and 4 diets with similar AMEn (2,750 kcal AMEn/kg) and indispensable (Lys, Met+Cys, Thr, and Trp) amino acid contents.
Abstract:
Program specialization optimizes programs for known values of the input. It is often the case that the set of possible input values is unknown, or this set is infinite. However, a form of specialization can still be performed in such cases by means of abstract interpretation, specialization then being with respect to abstract values (substitutions), rather than concrete ones. We study the multiple specialization of logic programs based on abstract interpretation. This involves, in principle, and based on information from global analysis, generating several versions of a program predicate for different uses of such predicate, optimizing these versions, and, finally, producing a new, "multiply specialized" program. While multiple specialization has received theoretical attention, little previous evidence exists on its practicality. In this paper we report on the incorporation of multiple specialization in a parallelizing compiler and quantify its effects. A novel approach to the design and implementation of the specialization system is proposed. The proposed implementation techniques yield specializations identical to those of the best previously proposed techniques but require little or no modification of some existing abstract interpreters. Our results show that, using the proposed techniques, the resulting "abstract multiple specialization" is indeed a relevant technique in practice. In particular, in the parallelizing compiler application, a good number of run-time tests are eliminated and invariants extracted automatically from loops, resulting generally in lower overheads and in several cases in increased speedups.
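As a rough illustration of the idea described in this abstract (a sketch in Python rather than the paper's logic-programming setting, with invented names such as sum_generic, sum_ground_ints and pick_version): multiple specialization keeps a generic version of a procedure alongside versions optimized for specific abstract call patterns, so that call sites where global analysis has proved the relevant property no longer need the run-time test.

```python
# Hypothetical sketch, not the paper's system: multiple specialization guided
# by analysis information. The generic version keeps its run-time test; the
# specialized version assumes the property proved by global analysis.

def sum_generic(xs):
    # Generic version: the argument must be tested before it is used.
    if not all(isinstance(x, int) for x in xs):   # run-time test
        raise TypeError("expected a list of integers")
    return sum(xs)

def sum_ground_ints(xs):
    # Specialized version: analysis proved "list of integers", so the
    # test has been reduced to true and removed from the residual code.
    return sum(xs)

def pick_version(abstract_call):
    """Select the version matching the abstract call pattern at a call site."""
    return sum_ground_ints if abstract_call == "list_of_int" else sum_generic

print(pick_version("list_of_int")([1, 2, 3]))   # 6, no run-time test executed
print(pick_version("unknown")([1, 2, 3]))       # 6, test still performed
```

In the paper the selection happens at compile time (each call site is rewritten to call the appropriate version), not through a run-time dispatch as in this toy sketch.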
Abstract:
Polyvariant specialization allows generating multiple versions of a procedure, which can then be separately optimized for different uses. Since allowing a high degree of polyvariance often results in more optimized code, polyvariant specializers, such as most partial evaluators, can generate a large number of versions. This can produce unnecessarily large residual programs. Also, large programs can be slower due to cache miss effects. A possible solution to this problem is to introduce a minimization step which identifies sets of equivalent versions and replaces all occurrences of such versions by a single one. In this work we present a unifying view of the problem of superfluous polyvariance. It includes both partial deduction and abstract multiple specialization. As regards partial deduction, we extend existing approaches in several ways. First, previous work has dealt with pure logic programs and a very limited class of builtins. Herein we propose an extension to traditional characteristic trees which can be used in the presence of calls to external predicates. This includes all builtins, libraries, other user modules, etc. Second, we propose the possibility of collapsing versions which are not strictly equivalent. This allows trading time for space and can be useful in the context of embedded and pervasive systems. This is done by residualizing certain computations for external predicates which would otherwise be performed at specialization time. Third, we provide an experimental evaluation of the potential gains achievable using minimization, which leads to interesting conclusions.
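A minimal sketch of the minimization step described above, again in Python with made-up names (minimize, versions, calls) and not CiaoPP's actual representation: specialized versions with identical residual code are collapsed into a single representative, and all call sites are redirected to it.

```python
# Hypothetical sketch: collapse equivalent specialized versions and redirect
# the calls that used them. Equivalence here is plain textual equality of the
# residual code; the paper uses more refined (and weaker) criteria.

def minimize(versions, calls):
    """versions: version name -> residual code; calls: call site -> version name."""
    canonical = {}   # residual code -> representative version name
    rename = {}      # version name -> representative version name
    for name, code in versions.items():
        rename[name] = canonical.setdefault(code, name)
    pruned = {name: versions[name] for name in set(rename.values())}
    redirected = {site: rename[name] for site, name in calls.items()}
    return pruned, redirected

versions = {"p_v1": "p(X) :- q(X).", "p_v2": "p(X) :- q(X).", "p_v3": "p(0)."}
calls = {"site1": "p_v1", "site2": "p_v2", "site3": "p_v3"}
print(minimize(versions, calls))
# p_v1 and p_v2 are merged; site2 now calls the same version as site1.
```

Collapsing versions that are only "close enough", as the abstract mentions for external predicates, would amount to relaxing the equality used as the dictionary key here, trading some specialization-time optimization for a smaller residual program.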
Abstract:
The relationship between abstract interpretation [2] and partial evaluation [5] has received considerable attention and (partial) integrations have been proposed starting from both the partial deduction (see e.g. [6] and its references) and abstract interpretation perspectives. Abstract interpretation-based analyzers (such as the CiaoPP analyzer [9,4]) generally compute a program analysis graph [1] in order to propagate (abstract) call and success information by performing fixpoint computations when needed. On the other hand, partial deduction methods [7] incorporate powerful techniques for on-line specialization including (concrete) call propagation and unfolding.
Abstract:
Program specialization optimizes programs for known values of the input. It is often the case that the set of possible input values is unknown, or this set is infinite. However, a form of specialization can still be performed in such cases by means of abstract interpretation, specialization then being with respect to abstract values (substitutions), rather than concrete ones. This paper reports on the application of abstract multiple specialization to automatic program parallelization in the &-Prolog compiler. Abstract executability, the main concept underlying abstract specialization, is formalized, the design of the specialization system presented, and a non-trivial example of specialization in automatic parallelization is given.
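A toy illustration of abstract executability, the concept mentioned above (this is not the paper's formalization; the "sign" domain and the entailment table are invented for the example): a literal is reduced to true or false when the abstract description of the execution state entails or contradicts it, and is otherwise left in the residual program.

```python
# Hypothetical sketch of abstract executability over a tiny sign domain.
# A test is replaced by True/False when the abstract value decides it;
# otherwise it must remain as a run-time test in the residual program.

ENTAILMENT = {
    ("pos", "X > 0"): True,   ("pos", "X < 0"): False,
    ("neg", "X > 0"): False,  ("neg", "X < 0"): True,
    ("zero", "X > 0"): False, ("zero", "X < 0"): False,
}

def abstractly_execute(abstract_value, literal):
    """Return True/False if the literal is decided, else the literal itself."""
    return ENTAILMENT.get((abstract_value, literal), literal)

print(abstractly_execute("pos", "X > 0"))   # True  -> literal removed
print(abstractly_execute("neg", "X > 0"))   # False -> branch/clause pruned
print(abstractly_execute("top", "X > 0"))   # 'X > 0' -> kept at run time
```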
Abstract:
We study the multiple specialization of logic programs based on abstract interpretation. This involves in general generating several versions of a program predicate for different uses of such predicate, making use of information obtained from global analysis performed by an abstract interpreter, and finally producing a new, "multiply specialized" program. While the topic of multiple specialization of logic programs has received considerable theoretical attention, it has never been actually incorporated in a compiler and its effects quantified. We perform such a study in the context of a parallelizing compiler and show that it is indeed a relevant technique in practice. Also, we propose an implementation technique which has the same power as the strongest of the previously proposed techniques but requires little or no modification of an existing abstract interpreter.
Abstract:
This paper presents a technique for achieving a class of optimizations related to the reduction of checks within cycles. The technique uses both Program Transformation and Abstract Interpretation. After a first pass of an abstract interpreter which detects simple invariants, program transformation is used to build a hypothetical situation that simplifies some predicates that should be executed within the cycle. This transformation implements the heuristic hypothesis that once conditional tests hold they may continue doing so recursively. Specialized versions of predicates are generated to detect and exploit those cases in which the invariance may hold. Abstract interpretation is then used again to verify the truth of such hypotheses and confirm the proposed simplification. This allows optimizations that go beyond those possible with only one pass of the abstract interpreter over the original program, as is normally the case. It also allows selective program specialization using a standard abstract interpreter not specifically designed for this purpose, thus simplifying the design of this already complex module of the compiler. In the paper, a class of programs amenable to such optimization is presented, along with some examples and an evaluation of the proposed techniques in some application areas such as floundering detection and reducing run-time tests in automatic logic program parallelization. The analysis of the examples presented has been performed automatically by an implementation of the technique using existing abstract interpretation and program transformation tools.
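The following Python fragment (hypothetical names, not the paper's Prolog setting) mirrors the two-version structure that the transformation builds: a generic loop that keeps the check, and a specialized loop entered once the check has held, under the hypothesis that it stays true; in the paper that hypothesis is then verified by a second pass of the abstract interpreter rather than assumed.

```python
# Hypothetical sketch of check reduction within a cycle. The generic version
# repeats the test on every iteration; the specialized version omits it and is
# used once the test has succeeded (and the invariance has been verified).

def process_generic(items, handler):
    for x in items:
        if not callable(handler):          # test re-executed on every iteration
            raise TypeError("handler must be callable")
        handler(x)

def process(items, handler):
    if not callable(handler):              # test hoisted out of the cycle
        raise TypeError("handler must be callable")
    process_specialized(items, handler)

def process_specialized(items, handler):
    for x in items:                        # specialized cycle: no check inside
        handler(x)

process([1, 2, 3], print)                  # same behaviour, one check in total
```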
Abstract:
The aim of program specialization is to optimize programs by exploiting certain knowledge about the context in which the program will execute. There exist many program manipulation techniques which allow specializing the program in different ways. Among them, one of the best known techniques is partial evaluation, often referred to simply as program specialization, which optimizes programs by specializing them for (partially) known input data. In this work we describe abstract specialization, a technique whose main features are: (1) specialization is performed with respect to "abstract" values rather than "concrete" ones, and (2) abstract interpretation rather than standard interpretation of the program is used in order to propagate information about execution states. The concept of abstract specialization is at the heart of the specialization system in CiaoPP, the Ciao system preprocessor. In this paper we present a unifying view of the different specialization techniques used in CiaoPP and discuss their potential applications by means of examples. The applications discussed include program parallelization, optimization of dynamic scheduling (concurrency), and integration of partial evaluation techniques.
Abstract:
The relationship between abstract interpretation and partial deduction has received considerable attention and (partial) integrations have been proposed starting from both the partial deduction and abstract interpretation perspectives. In this work we present what we argue is the first fully described generic algorithm for efficient and precise integration of abstract interpretation and partial deduction. Taking as starting point state-of-the-art algorithms for context-sensitive, polyvariant abstract interpretation and (abstract) partial deduction, we present an algorithm which combines the best of both worlds. Key ingredients include the accurate success propagation inherent to abstract interpretation and the powerful program transformations achievable by partial deduction. In our algorithm, the calls which appear in the analysis graph are not analyzed w.r.t. the original definition of the procedure but w.r.t. specialized definitions of these procedures. Such specialized definitions are obtained by applying both unfolding and abstract executability. Our framework is parametric w.r.t. different control strategies and abstract domains. Different combinations of such parameters correspond to existing algorithms for program analysis and specialization. Simultaneously, our approach opens the door to the efficient computation of strictly more precise results than those achievable by each of the individual techniques. The algorithm is now one of the key components of the CiaoPP analysis and specialization system.
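As a very rough, much-simplified sketch of the combination described above (invented program representation and domain, not CiaoPP's): each (procedure, abstract call) pair gets its own node in the analysis graph, and the definition analyzed for that node is first specialized by unfolding internal calls and abstractly executing the tests decided by the abstract call value.

```python
# Hypothetical sketch: specialized definitions per abstract call pattern,
# obtained by unfolding plus abstract executability on a toy domain.

PROGRAM = {
    "p": ["X >= 0", "call:q"],   # p(X) :- X >= 0, q(X).
    "q": ["write(X)"],           # q(X) :- write(X).
}

def decide(literal, abs_call):
    # Abstract executability: a test decided by the abstract call value.
    return {("nonneg", "X >= 0"): True, ("neg", "X >= 0"): False}.get((abs_call, literal))

def specialize(proc, abs_call):
    residual = []
    for lit in PROGRAM[proc]:
        if lit.startswith("call:"):          # unfolding: inline the callee body
            residual.extend(PROGRAM[lit[len("call:"):]])
        elif decide(lit, abs_call) is True:  # test removed from residual code
            continue
        elif decide(lit, abs_call) is False: # this call pattern cannot succeed
            return None
        else:
            residual.append(lit)             # undecided: kept as a run-time test
    return residual

graph = {(p, a): specialize(p, a)
         for p, a in [("p", "nonneg"), ("p", "neg"), ("p", "top")]}
print(graph)
# {('p', 'nonneg'): ['write(X)'], ('p', 'neg'): None,
#  ('p', 'top'): ['X >= 0', 'write(X)']}
```

Success propagation, the other ingredient the abstract mentions, would additionally record and reuse the abstract success value computed for each node; it is omitted here for brevity.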
Abstract:
Separating programs into modules is a well-known technique which has proven very useful in program development and maintenance. Starting by introducing a number of possible scenarios, in this paper we study different issues which appear when developing analysis and specialization techniques for modular logic programming. We discuss a number of design alternatives and their consequences for the different scenarios considered and describe where applicable the decisions made in the Ciao system analyzer and specializer. In our discussion we use the module system of Ciao Prolog. This is both for concreteness and because Ciao Prolog is a second-generation Prolog system which has been designed with global analysis and specialization in mind, and which has a strict module system. The aim of this work is not to provide a theoretical basis on modular analysis and specialization, but rather to discuss some interesting practical issues.
Abstract:
Abstract is not available