876 results for split-step Fourier method
Abstract:
Graduate Program in Genetics and Animal Breeding - FCAV
Abstract:
The purpose of this article is to present a method based on the development of unit-cell numerical models for smart composite materials with piezoelectric fibers made of PZT embedded in a non-piezoelectric matrix (epoxy resin). The method evaluates a globally homogeneous medium equivalent to the original composite, using a representative volume element (RVE). Suitable boundary conditions allow the simulation of all modes of overall deformation arising from any arbitrary combination of mechanical and electrical loading. First, the unit cell is applied to predict the effective material coefficients of a transversely isotropic piezoelectric composite with circular cross-section fibers. The numerical results are compared with other methods reported in the literature and with previously published results in order to evaluate the proposed method. Second, the method is applied to calculate the equivalent properties of smart composite materials with square cross-section fibers. Comparisons between different combinations of circular and square fiber geometries are presented, highlighting the influence of the boundary conditions and fiber arrangements.
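A minimal sketch of the volume-averaging step behind such RVE homogenization (not the authors' code; the FE solver interface `solve_unit_case` and its return values are hypothetical): each load case imposes one unit macroscopic strain or electric-field component through periodic boundary conditions, and the volume averages of the resulting stress and electric displacement fill one column of the effective coefficient matrix.

```python
import numpy as np

def effective_coefficients(solve_unit_case, n_cases=9):
    """Assemble effective electroelastic coefficients from RVE load cases.

    solve_unit_case(k) is assumed to run one FE analysis with the k-th unit
    macroscopic field (6 strain components + 3 electric-field components)
    applied via periodic boundary conditions, returning per-element
    stresses (n, 6), electric displacements (n, 3) and volumes (n,).
    """
    C_eff = np.zeros((9, n_cases))  # rows: 6 stresses + 3 electric displacements
    for k in range(n_cases):
        stress, e_disp, vol = solve_unit_case(k)
        w = vol / vol.sum()               # volume-fraction weights
        avg_stress = stress.T @ w         # volume-averaged stress, shape (6,)
        avg_edisp = e_disp.T @ w          # volume-averaged D field, shape (3,)
        C_eff[:, k] = np.concatenate([avg_stress, avg_edisp])
    return C_eff  # column k = averaged response to unit field k
```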
Abstract:
In a risk assessment of hazards to groundwater as the resource to be protected, all relevant transport pathways along which contaminants migrate through the soil zone into the groundwater must be identified and quantified. The transport of contaminants bound to mobile particles in the seepage water is often neglected in this context. In this work, both experimental investigations of particle transport in the soil zone and scenario modelling of particle/contaminant interactions were carried out. The unsaturated column experiments were conducted under near-natural steady-state and transient hydraulic and hydrochemical conditions, investigating the influence of soil-matrix grain diameter, particle size, irrigation intensity, surface tension and hydrochemistry on the transport of natural and synthetic particles. In addition, the particle-bound transport of phenanthrene was studied. Numerical scenario modelling with the SMART model was used to examine under which boundary conditions particle transport also leads to significant particle-bound contaminant concentrations in the groundwater; the parameters varied were particle/soil lithology, contaminant hydrophobicity, particle concentration, particle diameter and soil-matrix grain-size distribution. The results of this work show that, in various scenarios, the particle-bound contaminant transport pathway in the unsaturated soil zone significantly increases the fraction of mobile contaminants reaching the groundwater with the seepage water. Based on the experimental and theoretical investigations, a two-stage assessment scheme was developed that serves as a decision aid, ahead of a risk assessment, on the relevance of mobilization, transport and retention of particle-bound contaminants in the unsaturated zone.
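As a hedged aside, the relevance of the particle-bound pathway is commonly judged by how much mobile particles enhance the total mobile concentration over the dissolved concentration; a standard textbook formulation (generic symbols, not taken from this abstract, and assuming linear equilibrium sorption to the particles) is:

```latex
% c_w  : dissolved contaminant concentration in seepage water
% C_p  : concentration of mobile particles (kg/L)
% K_pc : particle-water partition coefficient of the contaminant (L/kg)
c_{\mathrm{mob}} = c_w \left( 1 + K_{pc}\, C_p \right)
```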
Abstract:
The research interest of this study is to investigate surface immobilization strategies for proteins and other biomolecules by the surface plasmon field-enhanced fluorescence spectroscopy (SPFS) technique. The recrystallization features of S-layer proteins and the possibility of combining S-layer lattice arrays with other functional molecules make this protein a prime candidate for supramolecular architectures. The recrystallization behavior on gold or on the secondary cell wall polymer (SCWP) was recorded by SPR, and the optical thicknesses and surface densities of different protein layers were calculated. In DNA hybridization tests performed to discriminate different mismatches, recombinant S-layer-streptavidin fusion protein matrices showed their potential for new microarrays. Moreover, SCWP-coated gold chips, covered with a controlled and oriented assembly of S-layer fusion proteins, represent an even more sensitive fluorescence testing platform. Additionally, S-layer fusion proteins as the matrix for LHCII immobilization clearly outperformed routine approaches, demonstrating their potential as a new strategy for biomolecular coupling. In the study of the SPFS hCG immunoassay, the biophysical and immunological characteristics of this glycoprotein hormone were presented first. After investigating the effect of the biotin thiol dilution on the coupling efficiency, an interfacial binding model comprising an appropriate binary SAM structure and the versatile streptavidin-biotin interaction was chosen as the basic supramolecular architecture for the fabrication of an SPFS-based immunoassay. Next, the affinity characteristics between different antibodies and hCG were measured via an equilibrium binding analysis, the first example of the titration of such a high-affinity interaction by SPFS; the results agree very well with constants from the literature. Finally, a sandwich assay and a competitive assay were selected as templates for SPFS-based hCG detection, and an excellent LOD of 0.15 mIU/ml was attained via the “one-step” sandwich method. Such high sensitivity not only fulfills clinical requirements but is also better than that of most other biosensors. Fully understanding how LHCII complexes transfer sunlight energy directionally and efficiently to the reaction center is potentially useful for constructing biomimetic devices such as solar cells. After an introduction to the structural and spectroscopic features of LHCII, different surface immobilization strategies for LHCII were summarized. Among them, the strategy based on the His-tag and the immobilized metal ion affinity chromatography (IMAC) technique was of particular interest and resulted in different kinds of home-fabricated His-tag chelating chips. Their substantial protein coupling capacity, maintenance of high biological activity and remarkably repeatable binding on the same chip after regeneration were demonstrated. Moreover, different parameters related to the stability of surface-coupled reconstituted complexes, including sucrose, detergent, lipid, oligomerization, temperature and circulation rate, were evaluated in order to standardize the most effective immobilization conditions. In addition, partial lipid bilayers obtained by fusion of LHCII-containing proteo-liposomes on the surface were observed by the QCM technique.
Finally, the inter-complex energy transfer between neighboring LHCIIs on a gold-protected silver surface under excitation with a blue laser (λ = 473 nm) was recorded for the first time, and the factors influencing the energy transfer efficiency were evaluated.
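A sketch of the equilibrium binding analysis mentioned above (not the authors' code; the titration data are invented): an SPFS titration is commonly fitted with a 1:1 Langmuir isotherm, where the equilibrium signal S at analyte concentration c saturates at S_max and the affinity constant is K_A = 1/K_D.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, s_max, k_d):
    """1:1 Langmuir isotherm: equilibrium signal vs. analyte concentration."""
    return s_max * c / (k_d + c)

# Hypothetical titration: concentrations (M) and normalized SPFS signals.
conc = np.array([1e-11, 3e-11, 1e-10, 3e-10, 1e-9, 3e-9])
signal = np.array([0.08, 0.21, 0.47, 0.72, 0.90, 0.97])

(s_max, k_d), _ = curve_fit(langmuir, conc, signal, p0=(1.0, 1e-10))
print(f"K_D = {k_d:.2e} M, K_A = {1/k_d:.2e} 1/M")
```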
Abstract:
Therapeutic drug monitoring (TDM) comprises the measurement of drug concentrations in blood and relates the results to the patients' clinical presentation, on the assumption that blood concentrations correlate better with effect than the dose does. This also holds for antidepressants. Prerequisites for guiding therapy by TDM are the availability of valid analytical methods in the laboratory and the correct application of the procedure in the clinic. The aim of this work was to analyse and improve the use of TDM in the treatment of depression. In a first step, a high-performance liquid chromatography (HPLC) method with column switching and spectrophotometric detection was established for the newly approved antidepressant duloxetine and applied to patients for TDM. Analysis of 280 patient samples showed that duloxetine concentrations of 60 to 120 ng/ml were associated with good clinical response and a low risk of adverse effects. Regarding its interaction potential, duloxetine proved to be a weak inhibitor of the cytochrome P450 (CYP) isoenzyme 2D6 compared with other antidepressants, with no indication of clinical relevance. In a second step, a method was to be developed to measure as many different antidepressants as possible, including their metabolites. An HPLC method with ultraviolet (UV) detection was therefore developed that allowed the quantitative analysis of ten antidepressant and two antipsychotic compounds within 25 minutes with adequate precision and accuracy (both above 85%) and sensitivity. Column switching permitted automated analysis of blood plasma or serum; interfering matrix components were separated on a pre-column without prior sample preparation. This cost- and time-effective procedure was a clear improvement for handling samples in routine laboratory work and thus for the TDM of antidepressants. An analysis of the clinical use of TDM identified a number of application errors. An attempt was therefore made to improve the clinical application of TDM of antidepressants by switching from largely manual documentation to electronic processing, and the effect of this intervention was examined. A laboratory information system was introduced through which the workflow from sample receipt to reporting of results to the wards was handled electronically, and the use of TDM before and after the changeover was investigated. The changeover was well accepted by the treating physicians. The laboratory system allowed cumulative retrieval of results and display of the treatment course of each individual patient, including previous hospital stays. However, implementation of the system had only a minor influence on the quality of TDM use. Many requests were equally erroneous before and after the introduction of the system; for example, measurements were frequently requested before steady state had been reached. The speed of sample processing was unchanged compared with the previous manual workflow, as was the analytical quality in terms of accuracy and precision.
What did shorten was the mean latency with which a dose adjustment followed communication of the laboratory report. Overall, this work contributes to the improvement of therapeutic drug monitoring of antidepressants. In clinical application, however, interventions are needed to minimize application errors in the TDM of antidepressants.
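A minimal sketch of the steady-state plausibility check behind the application error noted above (the rule of thumb of about five elimination half-lives is standard pharmacokinetics; the function and its names are hypothetical, not part of the study's software):

```python
def at_steady_state(hours_since_last_dose_change: float,
                    half_life_hours: float,
                    n_half_lives: float = 5.0) -> bool:
    """Rule of thumb: after ~5 elimination half-lives of constant dosing,
    a drug has reached ~97% of its steady-state concentration."""
    return hours_since_last_dose_change >= n_half_lives * half_life_hours

# Example: duloxetine half-life ~12 h; sample drawn 3 days after dose change.
print(at_steady_state(72, 12))  # True -> request is plausible
```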
Abstract:
We present an automatic method to segment brain tissues from volumetric MRI brain tumor images. The method is based on non-rigid registration of an average atlas in combination with a biomechanically justified tumor growth model to simulate soft-tissue deformations caused by the tumor mass-effect. The tumor growth model, which is formulated as a mesh-free Markov Random Field energy minimization problem, ensures correspondence between the atlas and the patient image, prior to the registration step. The method is non-parametric, simple and fast compared to other approaches while maintaining similar accuracy. It has been evaluated qualitatively and quantitatively with promising results on eight datasets comprising simulated images and real patient data.
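For orientation, a generic mesh-free MRF formulation of such a displacement field (a textbook form, not necessarily the paper's exact energy) minimizes a sum of unary and pairwise terms over nodes i and neighboring pairs (i, j):

```latex
% d_i : displacement label of node i
% D_i : data/model term (here driven by the tumor mass-effect model)
% V_ij: smoothness penalty on neighboring displacements
% N   : neighborhood system; lambda: regularization weight
E(\mathbf{d}) = \sum_{i} D_i(d_i) \;+\; \lambda \sum_{(i,j) \in \mathcal{N}} V_{ij}(d_i, d_j)
```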
Abstract:
The amount and type of ground cover is an important characteristic to measure when collecting soil disturbance monitoring data after a timber harvest. Estimates of ground cover and bare soil can be used for tracking changes in invasive species, plant growth and regeneration, woody debris loadings, and the risk of surface water runoff and soil erosion. A new method of assessing ground cover and soil disturbance was recently published by the U.S. Forest Service, the Forest Soil Disturbance Monitoring Protocol (FSDMP). This protocol uses the frequency of cover types in small circular (15cm) plots to compare ground surface in pre- and post-harvest condition. While both frequency and percent cover are common methods of describing vegetation, frequency has rarely been used to measure ground surface cover. In this study, three methods for assessing ground cover percent (step-point, 15cm dia. circular and 1x5m visual plot estimates) were compared to the FSDMP frequency method. Results show that the FSDMP method provides significantly higher estimates of ground surface condition for most soil cover types, except coarse wood. The three cover methods had similar estimates for most cover values. The FSDMP method also produced the highest value when bare soil estimates were used to model erosion risk. In a person-hour analysis, estimating ground cover percent in 15cm dia. plots required the least sampling time, and provided standard errors similar to the other cover estimates even at low sampling intensities (n=18). If ground cover estimates are desired in soil monitoring, then a small plot size (15cm dia. circle), or a step-point method can provide a more accurate estimate in less time than the current FSDMP method.
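A small sketch of the two estimators being contrasted (hypothetical data; not the study's code): a frequency estimate counts the fraction of plots in which a cover type occurs at all, while a percent-cover estimate averages the cover fraction recorded per plot, so a type that is present in many plots at low cover scores far higher under frequency, consistent with the FSDMP results above.

```python
import numpy as np

# Hypothetical per-plot bare-soil cover fractions in 15 cm circular plots.
bare_soil = np.array([0.0, 0.05, 0.0, 0.10, 0.0, 0.02, 0.0, 0.15, 0.0, 0.08])

frequency = np.mean(bare_soil > 0)   # fraction of plots where the type occurs
percent_cover = np.mean(bare_soil)   # mean cover fraction across plots

print(f"frequency: {frequency:.0%}, percent cover: {percent_cover:.1%}")
# frequency: 50%, percent cover: 4.0%  -> frequency reads much higher
```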
Abstract:
Inverse fusion PCR cloning (IFPC) is an easy, PCR-based three-step cloning method that allows the seamless and directional insertion of PCR products into virtually all plasmids, with free choice of the insertion site. The PCR-derived inserts contain a vector-complementary 5'-end that allows fusion with the vector by an overlap-extension PCR, and the resulting amplified insert-vector fusions are then circularized by ligation prior to transformation. A minimal amount of starting material is needed and the number of experimental steps is reduced. Untreated circular plasmid, or alternatively bacteria containing the plasmid, can be used as templates for the insertion, and clean-up of the insert fragment is not strictly required. The whole cloning procedure can be performed with minimal hands-on time and results in the generation of hundreds to tens of thousands of positive colonies with minimal background.
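A toy sketch of the primer layout implied by the method (sequences and names invented for illustration; melting-temperature handling and primer quality checks omitted): each insert primer is the insert-specific annealing region prefixed with a 5' tail complementary to the chosen vector insertion site.

```python
def fusion_primer(vector_flank: str, insert_anneal: str) -> str:
    """Concatenate a vector-complementary 5' tail onto an insert-specific
    annealing region, enabling overlap-extension fusion of insert and vector
    (toy illustration; no Tm or secondary-structure checks)."""
    return vector_flank.upper() + insert_anneal.upper()

# Hypothetical 20 nt vector flank at the insertion site + 20 nt insert region.
fwd = fusion_primer("ATGGTGAGCAAGGGCGAGGA", "gctagcatgactggtggaca")
print(fwd)
```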
Abstract:
A three-level satellite-to-ground monitoring scheme for conservation easement monitoring has been implemented in which high-resolution imagery serves as an intermediate step for inspecting high-priority sites. A digital vertical aerial camera system was developed to fulfill the need for an economical source of imagery for this intermediate step. A method for attaching the camera system to small aircraft was designed, and the camera system was calibrated and tested. To ensure that the images obtained were of suitable quality for use in Level 2 inspections, rectified imagery was required to provide positional accuracy of 5 meters or less, comparable to current commercially available high-resolution satellite imagery. Focal length calibration was performed to determine the infinity focal length at two lens settings (24 mm and 35 mm) with a precision of 0.1 mm. A known focal length is required for the creation of navigation points representing locations to be photographed (waypoints). Photographing an object of known size at distances on a test range allowed estimates of focal lengths of 25.1 mm and 35.4 mm for the 24 mm and 35 mm lens settings, respectively. Constants required for distortion-removal procedures were obtained using analytical plumb-line calibration procedures for both lens settings, with mild distortion at the 24 mm setting and virtually no distortion found at the 35 mm setting. The system was designed to operate in a series of stages: mission planning, mission execution, and post-mission processing. During mission planning, waypoints were created using custom tools in geographic information system (GIS) software. During mission execution, the camera is connected to a laptop computer with a global positioning system (GPS) receiver attached. Customized mobile GIS software accepts position information from the GPS receiver, provides information for navigation, and automatically triggers the camera upon reaching the desired location. Post-mission processing (rectification) of imagery for removal of lens distortion effects, correction of imagery for horizontal displacement due to terrain variations (relief displacement), and relating the images to ground coordinates were performed with no more than a second-order polynomial warping function. Accuracy testing was performed to verify the positional accuracy capabilities of the system in an ideal-case scenario as well as a real-world case. Using many well-distributed and highly accurate control points on flat terrain, the rectified images yielded a median positional accuracy of 0.3 meters. Imagery captured over commercial forestland with varying terrain in eastern Maine, rectified to digital orthophoto quadrangles, yielded median positional accuracies of 2.3 meters, with accuracies of 3.1 meters or better in 75 percent of measurements made. These accuracies were well within performance requirements. The images from the digital camera system are of high quality, displaying significant detail at common flying heights. At common flying heights the ground resolution of the camera system ranges between 0.07 meters and 0.67 meters per pixel, satisfying the requirement that imagery be of comparable resolution to current high-resolution satellite imagery. Due to the high resolution of the imagery, the positional accuracy attainable, and the convenience with which it is operated, the digital aerial camera system developed is a potentially cost-effective solution for use in the intermediate step of a satellite-to-ground conservation easement monitoring scheme.
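A sketch of the similar-triangles estimate behind the test-range focal length calibration described above (standard pinhole model; function and values are illustrative): for an object of known size W at distance D producing an image of size w on the sensor, f ≈ w · D / W.

```python
def focal_length_mm(object_size_m: float, distance_m: float,
                    image_size_mm: float) -> float:
    """Pinhole-model focal length from an object of known size photographed
    at a known distance: f = image_size * distance / object_size."""
    return image_size_mm * distance_m / object_size_m

# Hypothetical test-range shot: 2.0 m target at 50 m maps to 1.0 mm on sensor.
print(f"{focal_length_mm(2.0, 50.0, 1.0):.1f} mm")  # 25.0 mm
```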
Abstract:
Submarine basalts are difficult to date accurately by the potassium-argon method. Dalrymple and Moore (1968) and Dymond (1970), for example, showed that, when the conventional K-Ar method is used, pillow lavas may contain excess 40Ar. Use of the 40Ar/39Ar step-heating method has not overcome the problem, as had been hoped, and has produced some conflicting results. Ozima and Saito (1973) concluded that the excess 40Ar is retained only in high temperature sites, but Seidemann (1978) found that it could be released at all temperatures. Furthermore, addition of potassium, from seawater, to the rock after it has solidified can result in low ages (Seidemann, 1977), the opposite effect to that of excess 40Ar. Thus, apparent ages may be either greater or less than the age of extrusion. Because of this discouraging record, the present study was approached pragmatically, to investigate whether self-consistent results can be obtained by the 40Ar/39Ar step-heating method.
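A hedged sketch of the self-consistency test usually applied to 40Ar/39Ar step-heating results (a common plateau criterion, not necessarily the one adopted in this study; the data are invented): successive steps form a plateau when their ages agree within uncertainties and together release a substantial fraction of the total 39Ar.

```python
import numpy as np

def plateau_steps(ages, errs, frac_39ar, tol_sigma=2.0, min_frac=0.5):
    """Return (found, (i, j), mean_age) for the first run of consecutive
    steps whose ages agree within tol_sigma of their weighted mean and
    which release >= min_frac of the total 39Ar. Simplified criterion."""
    ages, errs, frac_39ar = map(np.asarray, (ages, errs, frac_39ar))
    for i in range(len(ages)):
        for j in range(i + 1, len(ages) + 1):
            a, e, f = ages[i:j], errs[i:j], frac_39ar[i:j]
            if f.sum() < min_frac:
                continue
            mean = np.average(a, weights=1 / e**2)
            if np.all(np.abs(a - mean) <= tol_sigma * e):
                return True, (i, j), mean
    return False, None, None

ok, span, age = plateau_steps(
    ages=[9.1, 10.2, 10.0, 10.1, 12.5],    # Ma, hypothetical step ages
    errs=[0.4, 0.3, 0.3, 0.3, 0.6],
    frac_39ar=[0.10, 0.25, 0.30, 0.25, 0.10])
print(ok, span, age)
```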
Abstract:
In his state-of-the-art report on deep excavation and tunnelling in soft ground presented at the 7th Conference on Soil Mechanics and Foundation Engineering, Peck (1969) introduced the three issues to be taken into account in the design of tunnels in soft ground: stability of the cavity during construction, with particular attention to the stability of the tunnel face; evaluation of the ground movements induced by tunnelling and of the effect of shallow underground works on surface settlements; and design of the tunnel support system to be installed to ensure the short- and long-term stability of the structure. This thesis focuses on the second of these issues, analysing different solutions commonly designed to reduce the movements induced by tunnel excavation. The aim of the thesis is to analyse the influence of different designs of micropile forepole umbrellas, micropile walls, jet-grouting umbrellas and jet-grouting walls on surface settlements during shallow tunnelling, in order to find the design that makes the most efficient use of the means employed for a given settlement reduction. To this end, premises are established so that designers can know a priori which of the treatments proposed in the thesis are most efficient at reducing surface settlements when a tunnel is to be designed, providing qualitative and some quantitative data on the optimal designs. A state-of-the-art finite element program is used that simulates the stress-strain behaviour of the ground with the Hardening Soil Small model, an elastoplastic variant of the hyperbolic model similar to the Hardening Soil model; in addition, this model incorporates a relationship between strain and stiffness modulus, simulating the different behaviour of the soil at small strains (e.g. vibrations with strains below 10^-5) and at large strains (strains > 10^-3). Five tunnel sections were chosen for the thesis: two corresponding to TBM-driven tunnels and three excavated by conventional means (two using the Belgian method and one using the NATM). To achieve the stated objectives, a correlation between three-dimensional and two-dimensional models was first used to analyse the stress-relaxation value employed in the latter, and to see how it varies with parameters such as the tunnel cross-section, the cover depth, the construction procedure, the advance length (conventional methods) or face pressure (TBM), and the geotechnical characteristics of the ground in which the tunnel is excavated.
Subsequently, the protective-wall design with the greatest efficiency in reducing settlements was analysed, varying parameters such as the toe embedment, the type of micropile or pile, the influence of bracing the protective walls at their heads, the inclination of the wall, the separation between the wall and the tunnel axis, and a double-row wall arrangement. To complete the study of the effectiveness of protective walls in reducing settlements, the influence of nearby surcharges (simulating buildings) on the effectiveness of the designed wall (from the point of view of reducing surface movements) is studied. In order to compare the effectiveness of micropile walls against the installation of a micropile forepole umbrella, different umbrella designs were analysed, comparing the movements obtained with those obtained for micropile walls, and comparing both sets of results with measurements from completed projects. In a further section, a similar comparison between treatments was carried out, in this case against a jet-grouting umbrella and jet-grouting walls. The results obtained were compared with settlement values measured in various completed projects whose sections correspond to those used in the numerical models.
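For reference, the 2D stress-relaxation device being calibrated against the 3D models is usually written as in the convergence-confinement method (standard textbook form; generic symbols, not the thesis' notation): the fictitious support pressure applied at the tunnel boundary is the initial ground stress scaled down by a relaxation factor before the lining is activated.

```latex
% p       : fictitious support pressure in the 2D model
% p_0     : initial (in situ) ground stress at the tunnel boundary
% \lambda : relaxation (stress-release) factor, 0 <= \lambda <= 1
p = (1 - \lambda)\, p_0
```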
Abstract:
CaCu3Ti4O12 (CCTO) was prepared by a conventional synthesis (CS) and by reactive sintering, in which synthesis and sintering of the material take place in a single step. The microstructure and the dielectric properties of CCTO were studied by XRD, FE-SEM, EDS, AFM, and impedance spectroscopy to correlate structure, microstructure, and electrical properties. Samples prepared by reactive sintering show dielectric behavior very similar to that of samples prepared by CS. It is therefore possible to prepare CCTO by a single-step processing method.
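A sketch of the equivalent-circuit analysis commonly used to separate grain and grain-boundary responses in such impedance spectra (a generic brick-layer model, not necessarily the authors' fit; all component values hypothetical): the spectrum is modeled as two parallel RC elements in series, the usual picture behind the giant apparent permittivity of CCTO ceramics.

```python
import numpy as np

def z_parallel_rc(omega, r, c):
    """Impedance of a parallel RC element."""
    return r / (1 + 1j * omega * r * c)

def z_brick_layer(omega, r_g, c_g, r_gb, c_gb):
    """Two parallel RC elements in series: grain + grain boundary."""
    return z_parallel_rc(omega, r_g, c_g) + z_parallel_rc(omega, r_gb, c_gb)

# Hypothetical values: small grain resistance, large grain-boundary resistance.
omega = 2 * np.pi * np.logspace(0, 7, 200)          # 1 Hz .. 10 MHz
z = z_brick_layer(omega, r_g=1e2, c_g=1e-11, r_gb=1e6, c_gb=1e-9)
print(z[0])  # low-frequency limit ~ r_g + r_gb
```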
Abstract:
The correlations between chemical composition and the coefficient of standardized ileal digestibility (CSID) of crude protein (CP) and amino acids (AA) were determined in 22 soybean meal (SBM) samples originating from the USA (n = 8), Brazil (BRA; n = 7) and Argentina (ARG; n = 7) in 21-day-old broilers. Birds were fed a commercial maize-SBM diet from 1 to 17 days of age, followed by experimental diets in which the SBM tested was the only source of protein (205 g CP/kg) for three days. An in vitro nitrogen (N) digestion study was also conducted with these samples using the two-step enzymatic method. The coefficient of apparent ileal digestibility (CAID) of the SBM, independent of origin, varied from 0.820 to 0.880 for CP, 0.850 to 0.905 for lysine (Lys), 0.859 to 0.907 for methionine (Met) and 0.664 to 0.750 for cysteine (Cys). The corresponding CSID values varied from 0.850 to 0.966 for CP, 0.891 to 0.940 for Lys, 0.931 to 0.970 for Met and 0.786 to 0.855 for Cys. The CSID of CP and Lys of the SBM were positively correlated with CP (r = 0.514, P < 0.05 and r = 0.370, P = 0.09, respectively), KOH solubility (KOH sol.) (r = 0.696, P < 0.001 and r = 0.619, P < 0.01, respectively), trypsin inhibitor activity (TIA) (r = 0.541, P < 0.01 and r = 0.416, P = 0.05, respectively) and reactive Lys (r = 0.563, P < 0.01 and r = 0.486, P < 0.05) values, but no relation was observed with neutral detergent fiber or oligosaccharide content. No relation was found between the CSID of CP determined in vivo and N digestibility determined in vitro. The CSID of most key AA were higher for the USA and BRA meals than for the ARG meals. For Lys, the CSID was 0.921, 0.919 and 0.908 (P < 0.05), and for Cys 0.828, 0.833 and 0.800 (P < 0.01), for the USA, BRA and ARG meals, respectively. It is concluded that, under the conditions of this experiment, the CSID of CP and Lys increased with the CP content, KOH sol., TIA and reactive Lys values of the SBM. The CSID of most limiting AA, including Lys and Cys, were higher for USA and BRA meals than for ARG meals.
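For orientation, the step from apparent to standardized ileal digestibility follows the standard correction for basal endogenous losses (a textbook relation, not specific to this study; the example values are hypothetical):

```python
def csid(caid: float, basal_endogenous_loss: float, aa_intake: float) -> float:
    """Standardized ileal digestibility coefficient from the apparent one:
    CSID = CAID + basal endogenous AA losses / AA intake
    (losses and intake in the same units, e.g. g/kg dry matter intake)."""
    return caid + basal_endogenous_loss / aa_intake

# Hypothetical: CAID of Lys 0.880, basal losses 0.40 g/kg DMI, intake 11.0 g/kg DMI.
print(f"{csid(0.880, 0.40, 11.0):.3f}")  # 0.916
```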
Abstract:
This paper presents solutions of the NURISP VVER lattice benchmark using APOLLO2, TRIPOLI4 and COBAYA3 pin-by-pin. The main objective is to validate MOC-based calculation schemes for pin-by-pin cross-section generation with APOLLO2 against TRIPOLI4 reference results. A specific objective is to test the APOLLO2-generated cross-sections and interface discontinuity factors in COBAYA3 pin-by-pin calculations with unstructured mesh. The VVER-1000 core consists of large hexagonal assemblies with 2 mm inter-assembly water gaps, which require the use of unstructured meshes in the pin-by-pin core simulators. The 2D benchmark problems considered include 19-pin clusters, fuel assemblies and 7-assembly clusters. APOLLO2 calculation schemes with the step-characteristic method (MOC) and the higher-order Linear Surface MOC have been tested. The comparison of APOLLO2 vs. TRIPOLI4 results shows very close agreement. The 3D lattice solver in COBAYA3 uses a transport-corrected multi-group diffusion approximation with interface discontinuity factors of GET or Black-Box Homogenization type. The COBAYA3 pin-by-pin results in 2, 4 and 8 energy groups are close to the reference solutions when side-dependent interface discontinuity factors are used.
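For reference, the side-dependent interface discontinuity factors mentioned above are defined, in the usual generalized equivalence theory (GET) form (generic notation, not necessarily the paper's), as the ratio of the heterogeneous transport surface flux to the surface flux reconstructed from the homogenized diffusion solution, per face s and energy group g:

```latex
% f_{g,s}          : interface discontinuity factor, group g, face s
% \bar\phi^{het}   : surface-averaged flux, heterogeneous transport solution
% \bar\phi^{hom}   : surface flux from the homogenized diffusion solution
f_{g,s} = \frac{\bar{\phi}^{\,\mathrm{het}}_{g,s}}{\bar{\phi}^{\,\mathrm{hom}}_{g,s}}
```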