962 results for step-down method


Relevância:

80.00%

Publicador:

Resumo:

Therapeutic drug monitoring (TDM) comprises the measurement of drug concentrations in blood and relates the results to the patient's clinical presentation. The underlying assumption is that blood concentrations correlate better with drug effect than the dose does; this also holds for antidepressants. Prerequisites for guiding therapy by TDM are the availability of valid analytical methods in the laboratory and the correct application of the procedure in the clinic. The aim of this work was to analyze and improve the use of TDM in the treatment of depression. In a first step, a high-performance liquid chromatography (HPLC) method with column switching and spectrophotometric detection was established for the newly approved antidepressant duloxetine and applied to patients for TDM. Analysis of 280 patient samples showed that duloxetine concentrations of 60 to 120 ng/ml were associated with good clinical response and a low risk of adverse effects. With regard to its interaction potential, duloxetine proved to be a weak inhibitor of the cytochrome P450 (CYP) isoenzyme 2D6 compared with other antidepressants, with no indication of clinical relevance. In a second step, a method was to be developed with which as many different antidepressants as possible, including their metabolites, can be measured. To this end, an HPLC method with ultraviolet (UV) detection was developed that allowed the quantitative analysis of ten antidepressant and, in addition, two antipsychotic substances within 25 minutes with sufficient precision and accuracy (both above 85%) and sensitivity. Column switching enabled the automated analysis of blood plasma or serum; interfering matrix components were separated on a pre-column without prior sample preparation.
The cost- and time-effective procedure was a clear improvement for coping with the sample load in routine laboratory work, and thus for the TDM of antidepressants. Analysis of the clinical use of TDM identified a number of application errors. An attempt was therefore made to improve the clinical application of TDM of antidepressants by switching from largely manual documentation to electronic processing, and this work examined what effect this intervention achieved. A laboratory information system was introduced with which the process from sample receipt to the reporting of results to the wards was handled electronically, and the use of TDM was examined before and after the changeover. The changeover was well accepted by the treating physicians. The laboratory system allowed cumulative retrieval of results and a display of each patient's course of treatment, including previous hospital stays. However, implementation of the system had only a minor influence on the quality of TDM use. Many requests were erroneous both before and after the introduction of the system; for example, measurements were frequently requested before steady state had been reached. The speed of sample processing was unchanged compared with the previous manual workflow, as was the analytical quality in terms of accuracy and precision. Recommendations issued on the dosing strategy of the requested substances were frequently disregarded. However, the mean latency with which a dose adjustment followed the reporting of a laboratory result was shortened. Overall, this work has contributed to improving the therapeutic drug monitoring of antidepressants. In clinical practice, however, interventions are necessary to minimize application errors in the TDM of antidepressants.
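The reported therapeutic reference window for duloxetine (60-120 ng/ml) lends itself to a simple decision helper. The following is a minimal illustrative sketch, not part of the original work; the function name and the advice strings are assumptions:

```python
def interpret_duloxetine_level(conc_ng_ml):
    """Classify a trough serum level against the 60-120 ng/ml
    reference range reported in the abstract (illustrative only)."""
    if conc_ng_ml < 60:
        return "below range: consider dose increase if non-response"
    if conc_ng_ml > 120:
        return "above range: increased risk of adverse effects"
    return "within range"
```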


We present an automatic method to segment brain tissues from volumetric MRI brain tumor images. The method is based on non-rigid registration of an average atlas in combination with a biomechanically justified tumor growth model to simulate soft-tissue deformations caused by the tumor mass-effect. The tumor growth model, which is formulated as a mesh-free Markov Random Field energy minimization problem, ensures correspondence between the atlas and the patient image, prior to the registration step. The method is non-parametric, simple and fast compared to other approaches while maintaining similar accuracy. It has been evaluated qualitatively and quantitatively with promising results on eight datasets comprising simulated images and real patient data.


Recently, our study group demonstrated the usefulness of ultrasonographic guidance in ilioinguinal/iliohypogastric nerve blocks in children. As a consequence, we designed a follow-up study to evaluate the optimal volume of local anesthetic for this regional anesthetic technique. Using a modified step-up-step-down approach, with 10 children in each study group, a starting dose of 0.2 mL/kg of 0.25% levobupivacaine was administered to perform an ilioinguinal/iliohypogastric nerve block under ultrasonographic guidance. After each group of 10 patients, the results were analyzed, and if all blocks were successful, the volume of local anesthetic was decreased by 50%, and a further 10 patients were enrolled into the study. Failure to achieve a 100% success rate within a group subjected patients to an automatic increase of half the previous volume reduction to be used in the subsequent group. Using 0.2 and 0.1 mL/kg of 0.25% levobupivacaine, the success rate was 100%. With a volume of 0.05 mL/kg of 0.25% levobupivacaine, 4 of 10 children received additional analgesia because of an inadequate block. Therefore, according to the protocol, the amount was increased to 0.075 mL/kg of 0.25% levobupivacaine, where the success rate was again 100%. We conclude that ultrasonographic guidance for ilioinguinal/iliohypogastric nerve blocks in children allowed a reduction of the volume of local anesthetic to 0.075 mL/kg.
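The modified step-up-step-down rule described above can be expressed compactly. This is an illustrative sketch of the dose-adjustment logic only (function and variable names are assumptions), not code from the study:

```python
def next_volume(current_ml_kg, last_reduction, all_blocks_successful):
    """One iteration of the modified step-up-step-down rule."""
    if all_blocks_successful:
        # After a fully successful group of 10 blocks, halve the volume.
        reduction = current_ml_kg / 2
        return current_ml_kg - reduction, reduction
    # On any failure, step back up by half the previous reduction.
    return current_ml_kg + last_reduction / 2, last_reduction / 2
```

Running this rule on the reported group outcomes reproduces the sequence 0.2 → 0.1 → 0.05 → 0.075 mL/kg.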


PURPOSE: To prospectively assess the depiction rate and morphologic features of myocardial bridging (MB) of coronary arteries with 64-section computed tomographic (CT) coronary angiography in comparison with conventional coronary angiography. MATERIALS AND METHODS: Patients were simultaneously enrolled in a prospective study comparing CT and conventional coronary angiography, for which ethics committee approval and informed consent were obtained. One hundred patients (38 women, 62 men; mean age, 63.8 years +/- 11.6 [standard deviation]) underwent 64-section CT and conventional coronary angiography. Fifty additional patients (19 women, 31 men; mean age, 59.2 years +/- 13.2) who underwent CT only were also included. CT images were analyzed for the direct signs (length, depth, and degree of systolic compression), while conventional angiograms were analyzed for the indirect signs (step-down/step-up phenomenon, milking effect, and systolic compression of the tunneled segment). Statistical analysis was performed with Pearson correlation analysis, the Wilcoxon two-sample test, and the Fisher exact test. RESULTS: MB was detected with CT in 26 (26%) of 100 patients and with conventional angiography in 12 patients (12%). Mean tunneled segment length and depth at CT (n = 150) were 24.3 mm +/- 10.0 and 2.6 mm +/- 0.8, respectively. Systolic compression in the 12 patients was 31.3% +/- 11.0 at CT and 28.2% +/- 10.5 at conventional angiography (r = 0.72, P < .001). With CT, systolic compression did not correlate significantly with the length (r = 0.16, P = .25, n = 150) but did correlate with the depth (r = 0.65, P < .01, n = 150) of the tunneled segment. In 14 patients in whom MB was found at CT but not at conventional angiography, length, depth, and systolic compression were significantly lower than in patients in whom both modalities depicted the anomaly (P < .001, P < .01, and P < .001, respectively). 
CONCLUSION: The depiction rate of MB is greater with 64-section CT coronary angiography than with conventional coronary angiography. The degree of systolic compression of MB significantly correlates with tunneled segment depth but not length.
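The correlation statistics quoted above (e.g. r = 0.72 between CT and angiographic systolic compression) are plain Pearson coefficients, which can be computed without a statistics package. A minimal sketch:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length
    samples, as used to compare systolic compression measured by
    CT and by conventional angiography."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```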


The amount and type of ground cover is an important characteristic to measure when collecting soil disturbance monitoring data after a timber harvest. Estimates of ground cover and bare soil can be used for tracking changes in invasive species, plant growth and regeneration, woody debris loadings, and the risk of surface water runoff and soil erosion. A new method of assessing ground cover and soil disturbance was recently published by the U.S. Forest Service, the Forest Soil Disturbance Monitoring Protocol (FSDMP). This protocol uses the frequency of cover types in small circular (15cm) plots to compare ground surface in pre- and post-harvest condition. While both frequency and percent cover are common methods of describing vegetation, frequency has rarely been used to measure ground surface cover. In this study, three methods for assessing ground cover percent (step-point, 15cm dia. circular and 1x5m visual plot estimates) were compared to the FSDMP frequency method. Results show that the FSDMP method provides significantly higher estimates of ground surface condition for most soil cover types, except coarse wood. The three cover methods had similar estimates for most cover values. The FSDMP method also produced the highest value when bare soil estimates were used to model erosion risk. In a person-hour analysis, estimating ground cover percent in 15cm dia. plots required the least sampling time, and provided standard errors similar to the other cover estimates even at low sampling intensities (n=18). If ground cover estimates are desired in soil monitoring, then a small plot size (15cm dia. circle), or a step-point method can provide a more accurate estimate in less time than the current FSDMP method.
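The difference between the FSDMP frequency statistic and a percent-cover estimate is easy to state in code. In this illustrative sketch (the data layout and names are assumptions), each plot is a mapping from cover type to the fraction of the plot it occupies:

```python
def frequency_estimate(plots, cover_type):
    # FSDMP-style frequency: fraction of small plots in which the
    # cover type occurs at all, regardless of how much it covers.
    return sum(p.get(cover_type, 0.0) > 0 for p in plots) / len(plots)

def percent_cover_estimate(plots, cover_type):
    # Percent-cover style: mean fraction of plot area occupied.
    return sum(p.get(cover_type, 0.0) for p in plots) / len(plots)
```

A cover type present as a trace in many plots scores a high frequency but a low percent cover, which is consistent with the higher FSDMP estimates reported above.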


Transformers are very important elements of any power system. Unfortunately, they are subjected to through-faults and abnormal operating conditions which can affect not only the transformer itself but also other equipment connected to the transformer. Thus, it is essential to provide sufficient protection for transformers, as well as the best possible selectivity and sensitivity of the protection. Nowadays microprocessor-based relays are widely used to protect power equipment. Current differential and voltage protection strategies are used in transformer protection applications and provide fast and sensitive multi-level protection and monitoring. The elements responsible for detecting turn-to-turn and turn-to-ground faults are the negative-sequence percentage differential element and the restricted earth-fault (REF) element, respectively. During severe internal faults, current transformers can saturate and slow down relay operation, which affects the degree of equipment damage. The scope of this work is to develop a modeling methodology to perform simulations and laboratory tests for internal faults such as turn-to-turn and turn-to-ground faults in two step-down power transformers with capacity ratings of 11.2 MVA and 290 MVA. The simulated current waveforms are injected into a microprocessor relay to check its sensitivity to these internal faults. Saturation of current transformers is also studied in this work. All simulations are performed with the Alternative Transients Program (ATP) utilizing the internal fault model for three-phase two-winding transformers. The tested microprocessor relay is the SEL-487E current differential and voltage protection relay. The results showed that the ATP internal fault model can be used for testing microprocessor relays for any percentage of turns involved in an internal fault. 
An interesting observation from the experiments was that the SEL-487E relay is more sensitive to turn-to-turn faults than advertised for the transformers studied. The sensitivity of the restricted earth-fault element was confirmed. CT saturation cases showed that low-accuracy CTs can be saturated by faults involving a high percentage of turns, where the CT burden will affect the extent of saturation. Recommendations for future work include more accurate simulation of internal faults, transformer energization inrush, and other scenarios involving core saturation, using the newest version of the internal fault model. The SEL-487E relay or other microprocessor relays should again be tested for performance. Also, application of a grounding bank to the delta-connected side of a transformer will increase the zone of protection, and relay performance can be tested for internal ground faults on both sides of a transformer.
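For readers unfamiliar with percentage-differential protection, the core trip decision can be sketched in a few lines. This is a generic textbook formulation with assumed slope and pickup settings, not the SEL-487E's actual multi-slope, harmonically restrained logic:

```python
def percentage_differential_trips(i1, i2, slope=0.25, pickup=0.1):
    """Generic percentage-differential check (illustrative).
    i1, i2: winding currents in per unit, both measured into the
    protected zone, so they cancel for load and through-faults.
    Operate quantity: |I1 + I2|; restraint: max(|I1|, |I2|)."""
    operate = abs(i1 + i2)
    restraint = max(abs(i1), abs(i2))
    return operate > pickup and operate > slope * restraint
```

For a through-fault the currents cancel (no trip); for an internal fault they add, so the operate quantity exceeds the slope-scaled restraint.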


Inverse fusion PCR cloning (IFPC) is an easy, PCR-based three-step cloning method that allows the seamless and directional insertion of PCR products into virtually all plasmids, with a free choice of the insertion site. The PCR-derived inserts contain a vector-complementary 5'-end that allows fusion with the vector by overlap-extension PCR, and the resulting amplified insert-vector fusions are then circularized by ligation prior to transformation. A minimal amount of starting material is needed and the number of experimental steps is reduced. Untreated circular plasmid, or alternatively bacteria containing the plasmid, can be used as template for the insertion, and clean-up of the insert fragment is not strictly required. The whole cloning procedure can be performed with minimal hands-on time and results in the generation of hundreds to tens of thousands of positive colonies, with minimal background.


A three-level satellite to ground monitoring scheme for conservation easement monitoring has been implemented in which high-resolution imagery serves as an intermediate step for inspecting high priority sites. A digital vertical aerial camera system was developed to fulfill the need for an economical source of imagery for this intermediate step. A method for attaching the camera system to small aircraft was designed, and the camera system was calibrated and tested. To ensure that the images obtained were of suitable quality for use in Level 2 inspections, rectified imagery was required to provide positional accuracy of 5 meters or less to be comparable to current commercially available high-resolution satellite imagery. Focal length calibration was performed to discover the infinity focal length at two lens settings (24mm and 35mm) with a precision of 0.1mm. Known focal length is required for creation of navigation points representing locations to be photographed (waypoints). Photographing an object of known size at distances on a test range allowed estimates of focal lengths of 25.1mm and 35.4mm for the 24mm and 35mm lens settings, respectively. Constants required for distortion removal procedures were obtained using analytical plumb-line calibration procedures for both lens settings, with mild distortion at the 24mm setting and virtually no distortion found at the 35mm setting. The system was designed to operate in a series of stages: mission planning, mission execution, and post-mission processing. During mission planning, waypoints were created using custom tools in geographic information system (GIS) software. During mission execution, the camera is connected to a laptop computer with a global positioning system (GPS) receiver attached. Customized mobile GIS software accepts position information from the GPS receiver, provides information for navigation, and automatically triggers the camera upon reaching the desired location. 
Post-mission processing (rectification) of imagery for removal of lens distortion effects, correction of imagery for horizontal displacement due to terrain variations (relief displacement), and relating the images to ground coordinates were performed with no more than a second-order polynomial warping function. Accuracy testing was performed to verify the positional accuracy capabilities of the system in an ideal-case scenario as well as a real-world case. Using many well-distributed and highly accurate control points on flat terrain, the rectified images yielded median positional accuracy of 0.3 meters. Imagery captured over commercial forestland with varying terrain in eastern Maine, rectified to digital orthophoto quadrangles, yielded median positional accuracies of 2.3 meters with accuracies of 3.1 meters or better in 75 percent of measurements made. These accuracies were well within performance requirements. The images from the digital camera system are of high quality, displaying significant detail at common flying heights. At common flying heights the ground resolution of the camera system ranges between 0.07 meters and 0.67 meters per pixel, satisfying the requirement that imagery be of comparable resolution to current high-resolution satellite imagery. Due to the high resolution of the imagery, the positional accuracy attainable, and the convenience with which it is operated, the digital aerial camera system developed is a potentially cost-effective solution for use in the intermediate step of a satellite to ground conservation easement monitoring scheme.
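The rectification step uses at most a second-order polynomial warp fitted to control points. A minimal least-squares sketch of such a warp (assuming NumPy; function names are illustrative, not from the original system):

```python
import numpy as np

def fit_poly2(src, dst):
    """Least-squares fit of a second-order polynomial warp mapping
    image coordinates (x, y) to ground coordinates, the highest
    order used in the rectification described above."""
    x, y = src[:, 0], src[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    cx, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return cx, cy

def apply_poly2(coeffs, pt):
    """Evaluate the fitted warp at one image point."""
    x, y = pt
    basis = np.array([1.0, x, y, x * y, x**2, y**2])
    return basis @ coeffs[0], basis @ coeffs[1]
```

At least six well-distributed control points are needed to determine the six coefficients per axis; more points give a least-squares fit whose residuals indicate rectification quality.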


The mechanical behavior of the plate boundary fault zone is of paramount importance in subduction zones, because it controls megathrust earthquake nucleation and propagation as well as the structural style of the forearc. In the Nankai area along the NanTroSEIZE (Kumano) drilling transect offshore SW Japan, a heterogeneous sedimentary sequence overlying the oceanic crust enters the subduction zone. In order to predict how variations in lithology, and thus mechanical properties, affect the formation and evolution of the plate boundary fault, we conducted laboratory tests measuring the shear strengths of sediments approaching the trench covering each major lithological sedimentary unit. We observe that shear strength increases nonlinearly with depth, such that the (apparent) coefficient of friction decreases. In combination with a critical taper analysis, the results imply that the plate boundary position is located on the main frontal thrust. Further landward, the plate boundary is expected to step down into progressively lower stratigraphic units, assisted by moderately elevated pore pressures. As seismogenic depths are approached, the décollement may further step down to lower volcaniclastic or pelagic strata but this requires specific overpressure conditions. High-taper angle and elevated strengths in the toe region may be local features restricted to the Kumano transect.
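The key observation above is that shear strength grows nonlinearly with depth, so the apparent coefficient of friction, the ratio of shear strength to effective normal stress, decreases downsection. As a trivial worked illustration (the numbers below are invented, not measurements from the study):

```python
def apparent_friction(shear_strength_kpa, effective_normal_stress_kpa):
    """Apparent coefficient of friction mu = tau / sigma_n'.
    If tau grows more slowly than sigma_n' with depth, this ratio
    decreases, as observed for the Nankai input sediments."""
    return shear_strength_kpa / effective_normal_stress_kpa
```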


Submarine basalts are difficult to date accurately by the potassium-argon method. Dalrymple and Moore (1968) and Dymond (1970), for example, showed that, when the conventional K-Ar method is used, pillow lavas may contain excess 40Ar. Use of the 40Ar/39Ar step-heating method has not overcome the problem, as had been hoped, and has produced some conflicting results. Ozima and Saito (1973) concluded that the excess 40Ar is retained only in high temperature sites, but Seidemann (1978) found that it could be released at all temperatures. Furthermore, addition of potassium, from seawater, to the rock after it has solidified can result in low ages (Seidemann, 1977), the opposite effect to that of excess 40Ar. Thus, apparent ages may be either greater or less than the age of extrusion. Because of this discouraging record, the present study was approached pragmatically, to investigate whether self-consistent results can be obtained by the 40Ar/39Ar step-heating method.
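For context, a step-heating experiment yields one apparent age per temperature step through the standard 40Ar/39Ar age equation, t = (1/λ) ln(1 + J·R), where R is the radiogenic 40Ar*/39Ar ratio of the step and J the neutron-fluence parameter determined from a co-irradiated standard. A minimal sketch (the decay constant is the conventional Steiger and Jäger 1977 value for total 40K decay):

```python
from math import log

LAMBDA_40K = 5.543e-10  # total decay constant of 40K, per year

def ar_ar_age(r, j):
    """Apparent 40Ar/39Ar age (in years) of one heating step.
    r: radiogenic 40Ar*/39Ar ratio; j: neutron-fluence parameter.
    Excess 40Ar inflates r and hence the apparent age, which is
    why a flat age spectrum across steps is the self-consistency
    criterion sought in the study above."""
    return log(1.0 + j * r) / LAMBDA_40K
```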


In the state-of-the-art report on deep excavation and tunnelling in hard ground presented at the 7th Conference on Soil Mechanics and Foundation Engineering, Peck (1969) introduced the three issues to be taken into account in the design of tunnels in soft ground: o Stability of the cavity during construction, with particular attention to the stability of the tunnel face; o Evaluation of the ground movements induced by tunnelling and of the effect of shallow underground workings on surface settlement; o Design of the tunnel support system to be installed to ensure the short- and long-term stability of the structure. This thesis focuses on the issues identified in the second point, analysing different solutions usually designed to reduce the movements induced by tunnelling. The aim of the thesis is to analyse the influence of different designs of micropile forepole umbrellas, micropile walls, jet-grouting umbrellas and jet-grouting walls on surface settlements during shallow tunnelling, in order to find the design that makes the most efficient use of the means employed for a given settlement reduction. This establishes some criteria for designers to know a priori which treatments are most effective (of those proposed in the thesis) for reducing surface settlements in tunnel design, so that qualitative and some quantitative data on the optimal designs are available. The analyses use a state-of-the-art finite element program that simulates the stress-strain behaviour of the ground with the Hardening Soil Small model, an elasto-plastic variant of the hyperbolic model similar to the Hardening Soil model. In addition, this model incorporates a relationship between strain and stiffness modulus, simulating the different behaviour of the soil at small strains (e.g. vibrations with strains below 10^-5) and at large strains (strains > 10^-3). 
For the purpose of this thesis five tunnel sections have been chosen: two corresponding to TBM tunnels and three constructed by conventional means (two using the Belgian method and one using the NATM). To achieve the objectives outlined, a correlation analysis between three-dimensional and two-dimensional models was first undertaken to determine the relaxation value used in the latter, and to see how it varies with parameters such as the tunnel cross-section, the depth of cover, the construction method, the advance length (conventional methods) or face pressure (TBM), and the geotechnical characteristics of the ground in which the tunnel is constructed. Following this, the protective-wall design with the greatest efficiency in reducing settlement was analysed, varying parameters such as the toe embedment, the type of micropiles or piles, the influence of bracing the protective walls at their heads, the inclination of the wall, the separation between the wall and the tunnel axis, and a double-row wall arrangement. To complete the study of the effectiveness of protective walls in reducing settlements, the influence of nearby surcharges (simulating buildings) on the effectiveness of the designed wall (from the point of view of reducing surface movements) is studied. In order to compare the effectiveness of micropile walls with the installation of micropile forepole umbrellas, different umbrella designs have been analysed, comparing the movements obtained with those obtained for micropile walls, and comparing both results with measurements from completed projects. In another section, a similar comparison between treatments has been carried out, in this case with a jet-grouting umbrella and jet-grouting walls. 
The results obtained were compared with settlement values measured in various completed projects whose sections correspond to those used in the numerical models.
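The surface settlements discussed throughout the thesis are conventionally described by the Gaussian trough that goes back to Peck (1969): S(x) = S_max · exp(-x²/(2i²)), with i the distance from the tunnel centreline to the inflection point of the trough. A minimal sketch (variable names illustrative):

```python
from math import exp

def settlement(x, s_max, i):
    """Transverse surface settlement above a tunnel, Gaussian
    trough form: x is the horizontal offset from the tunnel axis,
    s_max the settlement above the axis, i the trough-width
    parameter (offset of the inflection point)."""
    return s_max * exp(-x**2 / (2 * i**2))
```

Protective treatments such as micropile walls aim to reduce s_max and/or flatten the trough on the protected side; the fitted parameters give a compact way to compare designs against measured settlements.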


CaCu3Ti4O12 (CCTO) was prepared by a conventional synthesis (CS) and through reaction sintering, in which synthesis and sintering of the material take place in one single step. The microstructure and the dielectric properties of CCTO have been studied by XRD, FE-SEM, EDS, AFM, and impedance spectroscopy to correlate structure, microstructure, and electrical properties. Samples prepared by reactive sintering show very similar dielectric behavior to those prepared by CS. Therefore, it is possible to prepare CCTO by means of a single-step processing method.


The correlations between chemical composition and the coefficient of standardized ileal digestibility (CSID) of crude protein (CP) and amino acids (AA) were determined in 22 soybean meal (SBM) samples originating from the USA (n = 8), Brazil (BRA; n = 7) and Argentina (ARG; n = 7) in 21-day-old broilers. Birds were fed a commercial maize-SBM diet from 1 to 17 days of age, followed by the experimental diets in which the tested SBM was the only source of protein (205 g CP/kg) for three days. An in vitro nitrogen (N) digestion study was also conducted with these samples using the two-step enzymatic method. The coefficient of apparent ileal digestibility (CAID) of the SBM, independent of origin, varied from 0.820 to 0.880 for CP, 0.850 to 0.905 for lysine (Lys), 0.859 to 0.907 for methionine (Met) and 0.664 to 0.750 for cysteine (Cys). The corresponding CSID values varied from 0.850 to 0.966 for CP, 0.891 to 0.940 for Lys, 0.931 to 0.970 for Met and 0.786 to 0.855 for Cys. The CSID of CP and Lys of the SBM were positively correlated with CP (r = 0.514, P < 0.05 and r = 0.370, P = 0.09, respectively), KOH solubility (KOH sol.) (r = 0.696, P < 0.001 and r = 0.619, P < 0.01, respectively), trypsin inhibitor activity (TIA) (r = 0.541, P < 0.01 and r = 0.416, P = 0.05, respectively) and reactive Lys (r = 0.563, P < 0.01 and r = 0.486, P < 0.05) values, but no relation was observed with neutral detergent fiber or oligosaccharide content. No relation was found between the CSID of CP determined in vivo and N digestibility determined in vitro. The CSID of most key AA were higher for the USA and BRA meals than for the ARG meals. For Lys, the CSID was 0.921, 0.919 and 0.908 (P < 0.05) and for Cys 0.828, 0.833 and 0.800 (P < 0.01) for the USA, BRA and ARG meals, respectively. 
It is concluded that, under the conditions of this experiment, the CSID of CP and Lys increased with the CP content, KOH sol., TIA and reactive Lys values of the SBM. The CSID of most limiting AA, including Lys and Cys, were higher for the USA and BRA meals than for the ARG meals.
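The standardization step that turns CAID into CSID corrects apparent digestibility for basal endogenous amino-acid losses. A minimal illustrative sketch of that correction (the units and the endogenous-loss figure below are assumptions, not values from the study):

```python
def csid(caid, basal_endogenous_loss, aa_intake):
    """Standardized ileal digestibility coefficient from the
    apparent coefficient: CSID = CAID + (basal endogenous amino
    acid flow / amino acid intake), both in the same units
    (e.g. g per kg dry matter intake). Illustrative only."""
    return caid + basal_endogenous_loss / aa_intake
```

This is why CSID values in the abstract always exceed the corresponding CAID values: the same endogenous flow is no longer counted against the feedstuff.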


This paper presents solutions of the NURISP VVER lattice benchmark using APOLLO2, TRIPOLI4 and COBAYA3 pin-by-pin. The main objective is to validate MOC based calculation schemes for pin-by-pin cross-section generation with APOLLO2 against TRIPOLI4 reference results. A specific objective is to test the APOLLO2 generated cross-sections and interface discontinuity factors in COBAYA3 pin-by-pin calculations with unstructured mesh. The VVER-1000 core consists of large hexagonal assemblies with 2mm inter-assembly water gaps which require the use of unstructured meshes in the pin-by-pin core simulators. The considered 2D benchmark problems include 19-pin clusters, fuel assemblies and 7-assembly clusters. APOLLO2 calculation schemes with the step characteristic method (MOC) and the higher-order Linear Surface MOC have been tested. The comparison of APOLLO2 vs. TRIPOLI4 results shows a very close agreement. The 3D lattice solver in COBAYA3 uses transport corrected multi-group diffusion approximation with interface discontinuity factors of GET or Black Box Homogenization type. The COBAYA3 pin-by-pin results in 2, 4 and 8 energy groups are close to the reference solutions when using side-dependent interface discontinuity factors.


Shading reduces the power output of a photovoltaic (PV) system. The design engineering of PV systems requires modeling and evaluating shading losses. Some PV systems are affected by complex shading scenes whose resulting PV energy losses are very difficult to evaluate with current modeling tools. Several specialized PV design and simulation software packages include the possibility to evaluate shading losses. They generally possess a Graphical User Interface (GUI) through which the user can draw a 3D shading scene and then evaluate its corresponding PV energy losses. The complexity of the objects that these tools can handle is relatively limited. We have created a software solution, 3DPV, which allows evaluating the energy losses induced by complex 3D scenes on PV generators. The 3D objects can be imported from specialized 3D modeling software or from a 3D object library. The shadows cast by this 3D scene on the PV generator are then evaluated directly on the Graphics Processing Unit (GPU). Thanks to the recent development of GPUs for the video game industry, the shadows can be evaluated with a very high spatial resolution that reaches well beyond the PV cell level, in very short calculation times. A PV simulation model then translates the geometrical shading into PV energy output losses. 3DPV has been implemented using WebGL, which allows it to run directly from a Web browser, without requiring any local installation. This also allows taking full advantage of information already available on the Internet, such as 3D object libraries. This contribution describes, step by step, the method that allows 3DPV to evaluate the PV energy losses caused by complex shading. We then illustrate this methodology with several application cases encountered in the world of PV systems design. Keywords: 3D, modeling, simulation, GPU, shading, losses, shadow mapping, solar, photovoltaic, PV, WebGL
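The geometric part of the shading evaluation reduces to counting shaded samples in the shadow mask rendered over the PV generator. A deliberately simplified CPU-side sketch (3DPV's actual WebGL shadow-mapping pipeline and its electrical model are far richer; names and the data layout are illustrative):

```python
def shading_loss_fraction(shadow_mask):
    """Geometric shading factor for one time step: the fraction of
    PV-generator samples flagged as shaded in a shadow mask, here
    a plain nested list of truthy/falsy values standing in for a
    GPU-rendered mask. A PV electrical model would then translate
    this geometric factor into energy loss; series-connected cells
    can lose more power than the geometric fraction suggests."""
    flat = [bool(v) for row in shadow_mask for v in row]
    return sum(flat) / len(flat)
```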