430 results for Product Model
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
Background: Previous work showed that daily ingestion of an aqueous soy extract fermented with Enterococcus faecium CRL 183 and Lactobacillus helveticus 416, supplemented or not with isoflavones, reduced total cholesterol and non-HDL-cholesterol levels, increased the high-density lipoprotein (HDL) concentration and inhibited both the rise of autoantibodies against oxidized low-density lipoprotein (ox-LDL Ab) and the development of atherosclerotic lesions. Objective: The aim of this study was to characterize the fecal microbiota in order to investigate the possible correlation between fecal microbiota, serum lipid parameters and atherosclerotic lesion development in rabbits with induced hypercholesterolemia that ingested the aqueous soy extract fermented with Enterococcus faecium CRL 183 and Lactobacillus helveticus 416. Methods: The rabbits were randomly allocated to five experimental groups (n = 6): control (C), hypercholesterolemic (H), hypercholesterolemic plus unfermented soy product (HUF), hypercholesterolemic plus fermented soy product (HF) and hypercholesterolemic plus isoflavone-supplemented fermented soy product (HIF). Lipid parameters and microbiota composition were analyzed on days 0 and 60 of the treatment, and the atherosclerotic lesions were quantified at the end of the experiment. The fecal microbiota was characterized by enumerating the Lactobacillus spp., Bifidobacterium spp., Enterococcus spp., Enterobacteria and Clostridium spp. populations. Results: After 60 days of the experiment, intake of the probiotic soy product was correlated with significant increases (P < 0.05) in the Lactobacillus spp., Bifidobacterium spp. and Enterococcus spp. populations and a decrease in the Enterobacteria population. A strong correlation was observed between microbiota composition and lipid profile. Populations of Enterococcus spp., Lactobacillus spp. and Bifidobacterium spp. were negatively correlated with total cholesterol, non-HDL-cholesterol, ox-LDL Ab and lesion size. HDL-C levels were positively correlated with the Lactobacillus spp., Bifidobacterium spp. and Enterococcus spp. populations. Conclusion: Daily ingestion of the probiotic soy product, supplemented or not with isoflavones, may contribute to a beneficial balance of the fecal microbiota, and this modulation is associated with an improved cholesterol profile and inhibition of atherosclerotic lesion development.
Abstract:
Baccharis dracunculifolia DC (Asteraceae) is a Brazilian medicinal plant popularly used for its antiulcer and anti-inflammatory properties. This plant is the main botanical source of Brazilian green propolis, a natural product incorporated into food and beverages to improve health. The present study aimed to investigate the chemical profile and intestinal anti-inflammatory activity of B. dracunculifolia extract in experimental ulcerative colitis induced by trinitrobenzenesulfonic acid (TNBS). Colonic damage was evaluated macroscopically and biochemically, through the glutathione content and the myeloperoxidase (MPO) and alkaline phosphatase activities. Additional in vitro experiments were performed to test the antioxidant activity by inhibition of induced lipid peroxidation in rat brain membranes. Phytochemical analysis was performed by HPLC using authentic standards. Administration of the plant extract (5 and 50 mg kg(-1)) significantly attenuated the colonic damage induced by TNBS, as evidenced both macroscopically and biochemically. This beneficial effect can be associated with an improvement in the colonic oxidative status, since the plant extract prevented glutathione depletion, inhibited lipid peroxidation and reduced MPO activity. Caffeic acid, p-coumaric acid, aromadendrin-4-O-methyl ether, 3-prenyl-p-coumaric acid, 3,5-diprenyl-p-coumaric acid and baccharin were detected in the plant extract.
Abstract:
We describe an estimation technique for biomass burning emissions in South America based on a combination of remote-sensing fire products and field observations, the Brazilian Biomass Burning Emission Model (3BEM). For each fire pixel detected by remote sensing, the mass of the emitted tracer is calculated based on field observations of fire properties related to the type of vegetation burning. The burnt area is estimated from the instantaneous fire size retrieved by remote sensing, when available, or from statistical properties of the burn scars. The sources are then spatially and temporally distributed and assimilated daily by the Coupled Aerosol and Tracer Transport model to the Brazilian developments on the Regional Atmospheric Modeling System (CATT-BRAMS) in order to perform the prognosis of related tracer concentrations. Three other biomass burning inventories, including GFEDv2 and EDGAR, are simultaneously used to compare the emission strength in terms of the resultant tracer distribution. We also assess the effect of using the daily time resolution of fire emissions by including runs with monthly-averaged emissions. We evaluate the performance of the model using the different emission estimation techniques by comparing the model results with direct measurements of carbon monoxide both near-surface and airborne, as well as remote sensing derived products. The model results obtained using the 3BEM methodology of estimation introduced in this paper show relatively good agreement with the direct measurements and MOPITT data product, suggesting the reliability of the model at local to regional scales.
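The per-pixel emission step described above follows the general mass-balance form (burnt area × fuel load × combustion factor × emission factor). A minimal sketch of that calculation, using illustrative vegetation parameters that are placeholders, not the values adopted by 3BEM:

```python
# Sketch of a per-fire-pixel emission estimate in the spirit of 3BEM.
# The parameters below are illustrative placeholders, not the paper's values.

VEG_PARAMS = {
    # vegetation type: (fuel load [kg/m^2], combustion factor [-],
    #                   CO emission factor [g CO per kg dry matter burned])
    "forest":  (16.0, 0.45, 104.0),
    "savanna": (4.0, 0.85, 65.0),
}

def co_emission_kg(burnt_area_m2: float, vegetation: str) -> float:
    """Mass of CO emitted by one fire pixel (kg)."""
    fuel_load, comb_factor, ef_g_per_kg = VEG_PARAMS[vegetation]
    dry_matter_burned_kg = burnt_area_m2 * fuel_load * comb_factor
    return dry_matter_burned_kg * ef_g_per_kg / 1000.0

# Example: a 1 km^2 savanna fire
print(co_emission_kg(1.0e6, "savanna"))
```

In the model itself, such per-pixel masses are then gridded in space and time before being assimilated by the transport model.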
Abstract:
We investigate the quantum integrability of the Landau-Lifshitz (LL) model and solve the long-standing problem of finding the local quantum Hamiltonian for the arbitrary n-particle sector. The particular difficulty of the LL model quantization, which arises due to the ill-defined operator product, is dealt with by simultaneously regularizing the operator product and constructing self-adjoint extensions of a very particular structure. The diagonalizability difficulties of the Hamiltonian of the LL model, due to the highly singular nature of the quantum-mechanical Hamiltonian, are also resolved in our method for the arbitrary n-particle sector. We explicitly demonstrate the consistency of our construction with the quantum inverse scattering method due to Sklyanin [Lett. Math. Phys. 15, 357 (1988)] and give a prescription to systematically construct the general solution, which explains and generalizes the puzzling results of Sklyanin for the particular two-particle sector case. Moreover, we demonstrate the S-matrix factorization and show that it is a consequence of the discontinuity conditions on the functions involved in the construction of the self-adjoint extensions.
Abstract:
We propose a model for D⁺ → π⁺π⁻π⁺ decays following experimental results which indicate that the two-pion interaction in the S wave is dominated by the scalar resonances f₀(600)/σ and f₀(980). The weak decay amplitude for D⁺ → Rπ⁺, where R is a resonance that subsequently decays into π⁺π⁻, is constructed in a factorization approach. In the S wave, we implement the strong decay R → π⁺π⁻ by means of a scalar form factor. This provides a unitary description of the pion-pion interaction in the entire kinematically allowed mass range m²(ππ) from threshold to about 3 GeV². In order to reproduce the experimental Dalitz plot for D⁺ → π⁺π⁻π⁺, we include contributions beyond the S wave. For the P wave, dominated by the ρ(770)⁰, we use a Breit-Wigner description. Higher waves are accounted for by using the usual isobar prescription for the f₂(1270) and ρ(1450)⁰. The major achievement is a good reproduction of the experimental m²(ππ) distribution, and of the partial as well as the total D⁺ → π⁺π⁻π⁺ branching ratios. Our values are generally smaller than the experimental ones. We discuss this shortcoming and, as a by-product, we predict a value for the poorly known D → σ transition form factor at q² = m²(π).
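For reference, the relativistic Breit-Wigner line shape commonly used for a P-wave ρ(770)⁰ contribution has the form below; this is a standard parametrization, not necessarily the exact one adopted in the paper:

```latex
% Standard relativistic Breit-Wigner propagator for the rho(770)
% (illustrative; the paper's exact parametrization may differ)
BW_\rho(s) \;=\; \frac{1}{m_\rho^2 - s - i\, m_\rho\, \Gamma_\rho(s)},
\qquad s = m_{\pi\pi}^2 ,
```

where \(\Gamma_\rho(s)\) is an energy-dependent width that reduces to the nominal width at \(s = m_\rho^2\).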
Abstract:
We study the dynamics of the adoption of new products by agents with continuous opinions and discrete actions (CODA). The model is such that refusal to adopt a new idea or product is increasingly weighted by neighboring agents as evidence against the product. Under these rules, we study the distribution of adoption times and the final proportion of adopters in the population. We compare the case where initial adopters are clustered with the case where they are randomly scattered around the social network, and investigate small-world effects on the final proportion of adopters. The model predicts a fat-tailed distribution for late adopters, which is verified by empirical data.
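A minimal sketch of a CODA-style dynamic on a ring network, where agents hold a continuous opinion (a log-odds score) but only their discrete adopt/refuse action is visible to neighbors. The update rule, network, and parameters here are illustrative, not the exact rules of the paper:

```python
# CODA-style adoption sketch: continuous opinions, discrete observed actions.
# All numeric choices are illustrative placeholders.
import random

def simulate(n=200, steps=5000, step_size=1.0, n_seeds=10, seed=42):
    rng = random.Random(seed)
    # opinion[i] is a log-odds in favor of the product; the agent's
    # visible action is "adopt" when opinion[i] > 0
    opinion = [-1.0] * n
    for i in rng.sample(range(n), n_seeds):  # randomly scattered initial adopters
        opinion[i] = 1.0
    for _ in range(steps):
        i = rng.randrange(n)
        j = rng.choice((i - 1, i + 1)) % n  # a ring-lattice neighbor
        # observing a neighbor's discrete action nudges the continuous opinion:
        # an adopting neighbor is evidence for, a refusing neighbor against
        opinion[i] += step_size if opinion[j] > 0 else -step_size
    return sum(o > 0 for o in opinion) / n  # final proportion of adopters

print(simulate())
```

Swapping the ring for a small-world or clustered seeding layout is what the comparison in the abstract varies.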
Abstract:
The productivity of commonly available disassembly methods today seldom makes disassembly the preferred end-of-life solution for massive take-back product streams. Systematic reuse of parts or components, or recycling of pure material fractions, is often not achievable in an economically sustainable way. In this paper a case-based review of current disassembly practices is used to analyse the factors influencing disassembly feasibility. Data mining techniques were used to identify the major factors influencing the profitability of disassembly operations. Case characteristics such as involvement of the product manufacturer in the end-of-life treatment and continuous ownership are among the important dimensions. Economic models demonstrate that the efficiency of disassembly operations should be increased by an order of magnitude to assure the competitiveness of ecologically preferred, disassembly-oriented end-of-life scenarios for large waste electrical and electronic equipment (WEEE) streams. Technological means available to increase the productivity of disassembly operations are summarized. Automated disassembly techniques can contribute to the robustness of the process, but cannot close the efficiency gap unless combined with appropriate product design measures. Innovative, reversible joints, collectively activated by external trigger signals, form a promising approach to low-cost mass disassembly in this context. A short overview of the state of the art in the development of such self-disassembling joints is included.
Abstract:
Chloride attack in marine environments, or in structures where deicing salts are used, does not always produce profiles with concentrations that decrease from the external surface to the interior of the concrete. Some profiles show chloride concentrations that increase with depth up to a peak before decreasing. This type of profile must be analyzed differently from the traditional model based on Fick's second law in order to generate more precise service-life models. A model was previously proposed for forecasting the penetration of chloride ions as a function of time for profiles that have formed a peak. To confirm the efficiency of this model, it is necessary to observe the behavior of a chloride profile with a peak in a specific structure over a period of time. To achieve this, two chloride profiles with different ages (22 and 27 years) were extracted from the same structure. The profile obtained from the 22-year sample was used to estimate the chloride profile at 27 years using three models: a) the traditional model using Fick's second law and extrapolating the value of C_s, the external surface chloride concentration; b) the traditional model using Fick's second law and shifting the x-axis to the peak depth; c) the previously proposed model. The results from these models were compared with the actual profile measured in the 27-year sample and analyzed. The proposed model showed good precision in this case study, but still needs to be tested on other structures in service.
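The traditional model referred to above fits the error-function solution of Fick's second law with a constant surface concentration C_s and apparent diffusion coefficient D. A minimal sketch, with illustrative values of C_s and D that are not taken from the structure studied:

```python
# Error-function solution of Fick's second law for chloride ingress:
#   C(x, t) = C_s * (1 - erf(x / (2 * sqrt(D * t))))
# C_s and D below are illustrative, not fitted values from the paper.
from math import erf, sqrt

def chloride(x_cm: float, t_years: float, c_s: float, d_cm2_per_year: float) -> float:
    """Chloride concentration at depth x_cm after t_years (same units as c_s)."""
    return c_s * (1.0 - erf(x_cm / (2.0 * sqrt(d_cm2_per_year * t_years))))

# e.g. concentration (% of concrete mass) at 3 cm depth after 22 years
print(chloride(3.0, 22.0, c_s=0.5, d_cm2_per_year=0.1))
```

Model (b) in the abstract applies this same expression after shifting the depth axis so that x = 0 sits at the peak rather than at the external surface, since the solution is monotonically decreasing and cannot represent the rising branch of a peaked profile.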
Abstract:
A large percentage of pile caps support only one column, and the pile caps in turn are supported by only a few piles. These are typically short and deep members with overall span-depth ratios of less than 1.5. Codes of practice do not provide uniform treatment for the design of these types of pile caps. These members have traditionally been designed as beams spanning between piles, with the depth selected to avoid shear failures and the amount of longitudinal reinforcement selected to provide sufficient flexural capacity as calculated by engineering beam theory. More recently, the strut-and-tie method has been used for the design of pile caps (disturbed or D-regions), in which the load path is envisaged as a three-dimensional truss, with compressive forces supported by concrete compressive struts between the column and piles and tensile forces carried by reinforcing steel located between piles. Neither of these models has provided uniform factors of safety against failure or been able to predict whether failure will occur by flexure (ductile mode) or shear (brittle mode). In this paper, an analytical model based on the strut-and-tie approach is presented. The proposed model has been calibrated using an extensive experimental database of pile caps subjected to compression and evaluated analytically for more complex loading conditions. It has proven to be applicable across a broad range of test data and can predict the failure modes, cracking, yielding, and failure loads of four-pile caps with reasonable accuracy.
Abstract:
This article presents a tool for the allocation analysis of complex water resource systems, called AcquaNetXL, developed in the form of a spreadsheet into which one linear and one nonlinear optimization model were incorporated. AcquaNetXL keeps the concepts and attributes of a decision support system: it streamlines communication between the user and the computer, facilitates the understanding and formulation of the problem and the interpretation of results, and supports decision making, turning it into a clear and organized process. The performance of the algorithms used for solving the water allocation problems was satisfactory, especially for the linear model.
Abstract:
Reconciliation can be divided into stages, each stage representing the performance of a mining operation, such as long-term estimation, short-term estimation, planning, mining and mineral processing. The gold industry includes another stage, the budget, in which the company informs the financial market of its annual production forecast. Dividing reconciliation into stages increases the reliability of the annual budget reported by mining companies, while also allowing the critical steps responsible for the overall estimation error to be detected and corrected through the optimization of sampling protocols and equipment. This paper develops and validates a new reconciliation model for the gold industry, based on correct sampling practices and the subdivision of reconciliation into stages, aiming for better grade estimates and more efficient control of the mining industry's processes, from resource estimation to final production.
Abstract:
A multiphase deterministic mathematical model was implemented to predict the formation of the grain macrostructure during unidirectional solidification. The model consists of macroscopic equations of energy, mass, and species conservation coupled with dendritic growth models. A grain nucleation model based on a Gaussian distribution of nucleation undercoolings was also adopted. At some solidification conditions, the cooling curves calculated with the model showed oscillations ("wiggles"), which prevented the correct prediction of the average grain size along the structure. Numerous simulations were carried out at nucleation conditions where the oscillations are absent, enabling an assessment of the effect of the heat transfer coefficient on the average grain size and the columnar-to-equiaxed transition.
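A Gaussian nucleation-undercooling law of the kind mentioned above gives the density of activated grains as the cumulative of a normal distribution of nucleation undercoolings. A minimal sketch with illustrative parameters (maximum site density, mean and standard deviation of the undercooling), not the values used in the paper:

```python
# Gaussian nucleation law: density of grains activated once the local
# undercooling reaches dT. Parameter values are illustrative placeholders.
from math import erf, sqrt

def nucleated_density(dT, n_max=1.0e9, dT_mean=5.0, dT_sigma=1.0):
    """Grains per m^3 activated at undercooling dT (K): cumulative Gaussian."""
    return n_max * 0.5 * (1.0 + erf((dT - dT_mean) / (dT_sigma * sqrt(2.0))))

print(nucleated_density(5.0))  # at the mean undercooling, half the sites are active
```

In the full model this density, evaluated along the computed cooling curves, sets the number of grains and hence the predicted average grain size, which is why spurious oscillations in those curves corrupt the grain-size prediction.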
Abstract:
This work deals with a procedure for model re-identification of a process in closed loop with an already existing commercial MPC. The controller considered here has a two-layer structure in which the upper layer performs a target calculation based on a simplified steady-state optimization of the process. A methodology is proposed in which a test signal is introduced into a tuning parameter of the target calculation layer. When the outputs are controlled by zones instead of at fixed set points, the approach allows continuous operation of the process without excessive disruption of the operating objectives, as process constraints and product specifications remain satisfied during the identification test. The application of the method is illustrated through the simulation of two processes from the oil refining industry.
Abstract:
Using a dynamic systems model developed specifically for the Piracicaba, Capivari and Jundiaí river basins (BH-PCJ) as a tool to help policy and decision makers analyze water resources management alternatives, five simulations over a 50-year timeframe were performed. The model estimates water supply and demand, as well as wastewater generation by the consumers in the BH-PCJ. One run was performed keeping the mean precipitation value constant and holding the current water supply and demand rates, the business-as-usual scenario. Under these assumptions, water demand is expected to increase by about 76%, about 39% of the available water volume will have to come from wastewater reuse, and the waste load will increase by about 91%. The Falkenmark Index will change from 1,403 m³ person⁻¹ year⁻¹ in 2004 to 734 m³ person⁻¹ year⁻¹ by 2054, and the Sustainability Index from 0.44 to 0.20. Another four simulations were performed by scaling the annual precipitation to 90% and 110% of the mean; considering an ecological flow equal to 30% of the mean daily flow; and keeping the same rates for all other factors except ecological flow and household water consumption. All of them showed a tendency toward a water crisis in the near future in the BH-PCJ.
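The Falkenmark Index quoted above is simply the annual renewable water volume available per person. A minimal sketch; the inference that population roughly doubles is illustrative and assumes the basin's available volume is held fixed between the two dates:

```python
# Falkenmark water-stress index: annual renewable water volume per capita.
# The "population roughly doubles" reading assumes constant availability,
# which is an illustrative simplification.

def falkenmark(volume_m3_per_year: float, population: float) -> float:
    """Cubic metres of renewable water available per person per year."""
    return volume_m3_per_year / population

# With availability fixed, the drop from 1403 to 734 m3/person/yr between
# 2004 and 2054 implies the population grows by this factor:
print(round(1403 / 734, 2))
```

Values below 1,700 m³ person⁻¹ year⁻¹ are conventionally read as water stress, and below 1,000 as water scarcity, which is why the projected 2054 figure signals a crisis.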