958 results for MODEL TESTS


Relevance:

30.00%

Publisher:

Abstract:

Heparin is the most frequently used drug for the prevention and treatment of thrombosis. Its use, however, is restricted by its side effects. To study the efficacy of other glycosaminoglycans that could substitute for heparin in the management of arterial thrombosis, 60 guinea-pigs were randomly allocated into 6 groups: G1 = control, G2 = heparin (150 IU/kg), G3 = heparan sulfate from beef pancreas (2.5 mg/kg), G4 = heparan sulfate from beef lung (2.5 mg/kg), G5 = N-acetylated heparan from beef pancreas, G6 = dermatan sulfate from beef intestine (2.5 mg/kg). Ten minutes after intravenous injection of the drugs, thrombosis was induced by injecting a 50% glucose solution for 10 minutes into a segment of the right carotid artery isolated between 2 thread loops. Three hours later the artery was re-exposed and, if a thrombus was present, it was measured, withdrawn and weighed. Thrombin time and activated partial thromboplastin time were measured in all animals. Thrombus developed in 90% of the animals in the control group, 0% in G2 and G3, 62.5% in G4, and 87.5% in G5 and G6. The coagulation tests were prolonged only in the animals treated with heparin. In conclusion, at the dose used, only the heparan sulfate from beef pancreas presented an antithrombotic effect similar to that of heparin in this experimental model.

Relevance:

30.00%

Publisher:

Abstract:

The paper presents a constructive heuristic algorithm (CHA) for directly solving the long-term transmission-network-expansion-planning (LTTNEP) problem using the DC model. The LTTNEP problem is a very complex mixed-integer nonlinear-programming problem and exhibits combinatorial growth of the search space. The CHA is used to find a good-quality solution to the LTTNEP problem. At each step of the CHA, a sensitivity index is used to add circuits to the system. This sensitivity index is obtained by solving the relaxed version of the LTTNEP problem, i.e. treating the number of circuits to be added as a continuous variable. The relaxed problem is a large and complex nonlinear-programming problem and was solved through the interior-point method (IPM). Tests were performed using Garver's system, the modified IEEE 24-bus system and the Southern Brazilian reduced system. The results presented show the good performance of the IPM inside the CHA.
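
The per-step selection the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's method: the "relaxed solver" below is a hypothetical stand-in returning continuous circuit additions per candidate branch (the sensitivity index), whereas the paper solves the relaxed LTTNEP with an interior-point method.

```python
# Illustrative constructive heuristic driven by a sensitivity index.
# relaxed_solver and is_feasible are hypothetical callbacks; a real
# implementation would solve the relaxed LTTNEP (continuous circuit
# counts) with an interior-point method and check DC-model feasibility.

def constructive_heuristic(candidates, relaxed_solver, is_feasible):
    """Add one circuit per step on the branch with the largest
    continuous (relaxed) addition until the plan is feasible."""
    plan = {branch: 0 for branch in candidates}
    while not is_feasible(plan):
        n_continuous = relaxed_solver(plan)           # sensitivity index
        branch = max(n_continuous, key=n_continuous.get)
        plan[branch] += 1                             # commit one circuit
    return plan

# Toy stand-ins: each branch "needs" a fixed number of circuits, and the
# mock relaxed solution always points at the remaining need per branch.
need = {"1-2": 2, "2-3": 1}

def mock_relaxed(plan):
    return {b: need[b] - plan[b] for b in need}

def mock_feasible(plan):
    return all(plan[b] >= need[b] for b in need)

plan = constructive_heuristic(need.keys(), mock_relaxed, mock_feasible)
print(plan)
```

The loop structure (relax, rank candidates, commit one addition, repeat) is the essence of the CHA; everything problem-specific lives in the two callbacks.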

Relevance:

30.00%

Publisher:

Abstract:

Aim To evaluate the reactivity of different endodontic materials and sealers with glucose and to assess the reliability of the glucose leakage model in measuring penetration of glucose through these materials. Methodology Ten uniform discs (radius 5 mm, thickness 2 mm) were made of each of the following materials: Portland cement, MTA (grey and white), Sealer 26, calcium sulphate, calcium hydroxide [Ca(OH)(2)], AH26, Epiphany, Resilon, gutta-percha and dentine. After storing the discs for 1 week at 37 degrees C under humid conditions, they were immersed in a 0.2 mg mL(-1) glucose solution in a test tube. The concentration of glucose was evaluated using an enzymatic reaction after 1 week. Statistical analysis was performed with the ANOVA and Dunnett tests at a significance level of P < 0.05. Results Portland cement, MTA, Ca(OH)(2) and Sealer 26 significantly reduced the glucose concentration in the test tube after 1 week (P < 0.05). Calcium sulphate reduced the concentration of glucose, but the difference in concentrations was not significant (P = 0.054). Conclusions Portland cement, MTA, Ca(OH)(2) and Sealer 26 react with a 0.2 mg mL(-1) glucose solution. Therefore, these materials should not be evaluated for sealing ability with the glucose leakage model.

Relevance:

30.00%

Publisher:

Abstract:

Two fundamental processes usually arise in the production planning of many industries. The first consists of deciding how many final products of each type have to be produced in each period of a planning horizon, the well-known lot sizing problem. The other consists of cutting raw materials in stock in order to produce smaller parts used in the assembly of final products, the well-studied cutting stock problem. In this paper the decision variables of these two problems are made dependent on each other in order to obtain a globally optimal solution. Setups that are typically present in lot sizing problems are relaxed together with the integer frequencies of cutting patterns in the cutting problem. Therefore, a large-scale linear optimization problem arises, which is solved exactly by a column generation technique. It is worth noting that this new combined problem still takes into account the trade-off between storage costs (for final products and the parts) and trim losses (in the cutting process). We present some sets of computational tests, analyzed over three different scenarios. These results show that, by combining the problems and using an exact method, it is possible to obtain significant gains compared to the usual industrial practice, which solves them in sequence. (C) 2010 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
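
The key step of the column generation technique mentioned above is the pricing subproblem on the cutting side: given the dual value of each part type from the restricted master LP, find the cutting pattern of maximum total dual value that fits the stock length; the pattern enters the master problem as a new column when its value exceeds 1. A minimal sketch, assuming integer part lengths (the duals and lengths below are made up, not from the paper):

```python
# Pricing subproblem for cutting-stock column generation, solved as an
# unbounded knapsack by dynamic programming over the stock length.

def price_pattern(stock_len, lengths, duals):
    """best[c] = maximum total dual value achievable with capacity c."""
    best = [0.0] * (stock_len + 1)
    choice = [None] * (stock_len + 1)
    for c in range(1, stock_len + 1):
        for i, L in enumerate(lengths):
            if L <= c and best[c - L] + duals[i] > best[c]:
                best[c] = best[c - L] + duals[i]
                choice[c] = i
    # Recover the pattern (number of copies of each part type).
    pattern, c = [0] * len(lengths), stock_len
    while choice[c] is not None:
        i = choice[c]
        pattern[i] += 1
        c -= lengths[i]
    return best[stock_len], pattern

# Hypothetical duals for three part types on a length-10 stock roll.
value, pattern = price_pattern(10, [3, 4, 5], [0.35, 0.45, 0.55])
print(value, pattern)
```

Here the best pattern has reduced-cost value above 1, so it would be added to the restricted master problem and the LP re-solved, repeating until no pattern prices out.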

Relevance:

30.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

30.00%

Publisher:

Abstract:

Simulations of overshooting, tropical deep convection using a Cloud Resolving Model with bulk microphysics are presented in order to examine the effect on the water content of the TTL (Tropical Tropopause Layer) and lower stratosphere. This case study is a subproject of the HIBISCUS (Impact of tropical convection on the upper troposphere and lower stratosphere at global scale) campaign, which took place in Bauru, Brazil (22° S, 49° W), from the end of January to early March 2004. Comparisons between 2-D and 3-D simulations suggest that the use of 3-D dynamics is vital in order to capture the mixing between the overshoot and the stratospheric air, which caused evaporation of ice and resulted in an overall moistening of the lower stratosphere. In contrast, a dehydrating effect was predicted by the 2-D simulation due to the extra time, allowed by the lack of mixing, for the ice transported to the region to precipitate out of the overshoot air. Three different strengths of convection are simulated in 3-D by applying successively lower heating rates (used to initiate the convection) in the boundary layer. Moistening is produced in all cases, indicating that convective vigour is not a factor in whether moistening or dehydration is produced by clouds that penetrate the tropopause, since the weakest case only just did so. An estimate of the moistening effect of these clouds on an air parcel traversing a convective region is made based on the domain-mean simulated moistening and the frequency of convective events observed by the IPMet (Instituto de Pesquisas Meteorológicas, Universidade Estadual Paulista) radar (S-band type at 2.8 GHz) to have the same 10 dBZ echo top height as those simulated. These suggest a fairly significant mean moistening of 0.26, 0.13 and 0.05 ppmv in the strongest, medium and weakest cases, respectively, for heights between 16 and 17 km.
Since the cold point and WMO (World Meteorological Organization) tropopause in this region lie at ∼ 15.9 km, this is likely to represent direct stratospheric moistening. Much more moistening is predicted for the 15-16 km height range, with increases of 0.85-2.8 ppmv. However, this air would need to be lofted through the tropopause via the Brewer-Dobson circulation for it to have a stratospheric effect. Whether this is likely is uncertain; in addition, the dehydration of air as it passes through the cold trap and the number of times that trajectories sample convective regions need to be taken into account to gauge the overall stratospheric effect. Nevertheless, the results suggest a potentially significant role for convection in determining the stratospheric water content. Sensitivity tests exploring the impact of increased aerosol numbers in the boundary layer suggest that a corresponding rise in cloud droplet numbers at cloud base would increase the number concentrations of the ice crystals transported to the TTL, which had the effect of reducing the fall speeds of the ice and causing a ∼13% rise in the mean vapour increase in both the 15-16 and 16-17 km height ranges when compared to the control case. Increases in the total water were much larger, being 34% and 132% higher for the same height ranges, respectively, but it is unclear whether the extra ice would be able to evaporate before precipitating from the region. These results suggest a possible impact of natural and anthropogenic aerosols on how convective clouds affect stratospheric moisture levels.

Relevance:

30.00%

Publisher:

Abstract:

The GPS observables are subject to several errors. Among them, the systematic errors have great impact, because they degrade the accuracy of the positioning obtained. These errors are mainly related to GPS satellite orbits, multipath and atmospheric effects. Recently, a method has been suggested to mitigate these errors: the semiparametric model with the penalised least squares technique (PLS). In this method, the errors are modeled as functions varying smoothly in time. This is equivalent to changing the stochastic model, into which the error functions are incorporated; the results obtained are similar to those obtained by changing the functional model. As a result, the ambiguities and the station coordinates are estimated with better reliability and accuracy than with the conventional least squares method (CLS). In general, the solution requires a shorter data interval, minimizing costs. The performance of the method was analyzed in two experiments using data from single-frequency receivers. The first was carried out with a short baseline, where the main error was multipath. In the second experiment, a baseline of 102 km was used; in this case, the predominant errors were due to ionospheric and tropospheric refraction. In the first experiment, using 5 minutes of data collection, the largest coordinate discrepancies in relation to the ground truth reached 1.6 cm and 3.3 cm in the h coordinate for the PLS and the CLS, respectively. In the second, also using 5 minutes of data, the discrepancies were 27 cm in h for the PLS and 175 cm in h for the CLS. In these tests, it was also possible to verify a considerable improvement in ambiguity resolution using the PLS in relation to the CLS, with a reduced data collection time interval. © Springer-Verlag Berlin Heidelberg 2007.
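
The penalised-least-squares idea of modeling an error as a function varying smoothly in time can be illustrated on a stripped-down problem. This sketch is not the paper's GPS estimator: it minimises ||y - f||² + λ||D₂f||², where D₂ takes second differences, and solves the normal equations (I + λD₂ᵀD₂)f = y with a small hand-written Gaussian elimination; the data are synthetic.

```python
# Penalised least squares as a roughness-penalised smoother.

def second_diff_matrix(n):
    D = [[0.0] * n for _ in range(n - 2)]
    for i in range(n - 2):
        D[i][i], D[i][i + 1], D[i][i + 2] = 1.0, -2.0, 1.0
    return D

def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def pls_smooth(y, lam):
    """Minimise ||y - f||^2 + lam * ||D2 f||^2 via the normal equations."""
    n = len(y)
    D = second_diff_matrix(n)
    A = [[(1.0 if i == j else 0.0) +
          lam * sum(D[r][i] * D[r][j] for r in range(n - 2))
          for j in range(n)] for i in range(n)]
    return solve(A, y)

y = [0.0, 1.2, 1.8, 3.4, 3.9, 5.3, 5.8, 7.1]   # noisy ramp (synthetic)
f = pls_smooth(y, lam=5.0)

def roughness(v):
    return sum((v[i] - 2 * v[i + 1] + v[i + 2]) ** 2 for i in range(len(v) - 2))

print(roughness(f) < roughness(y))  # True: estimate is smoother than the data
```

In the GPS application the smooth function is estimated jointly with the parametric part (ambiguities, coordinates), but the penalty term plays the same role as here.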

Relevance:

30.00%

Publisher:

Abstract:

This paper presents an algorithm to solve the network transmission system expansion planning problem using the DC model, which is a mixed non-linear integer programming problem. The major feature of this work is the use of a Branch-and-Bound (B&B) algorithm to solve mixed non-linear integer problems directly. An efficient interior point method is used to solve the non-linear programming problem at each node of the B&B tree. Tests with several known systems are presented to illustrate the performance of the proposed method. ©2007 IEEE.
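
The node logic of such a B&B scheme, solving a continuous relaxation at each node to bound the integer problem, can be sketched on a much simpler problem than network expansion. The toy below runs B&B on a tiny 0-1 knapsack, using the fractional (continuous) relaxation as the bound; the data are made up and this is only an illustration of the bounding/pruning mechanics, not the paper's algorithm.

```python
# Branch-and-bound on a 0-1 knapsack with a fractional-relaxation bound.

def bb_knapsack(items, capacity):
    # items: (value, weight) pairs; sort by value/weight for the bound.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    best = [0]

    def bound(k, cap, val):
        """Optimistic value from the continuous relaxation at this node."""
        for v, w in items[k:]:
            if w <= cap:
                cap -= w
                val += v
            else:
                return val + v * cap / w   # fractional last item
        return val

    def branch(k, cap, val):
        if val > best[0]:
            best[0] = val                  # incumbent integer solution
        if k == len(items) or bound(k, cap, val) <= best[0]:
            return                         # node pruned by the bound
        v, w = items[k]
        if w <= cap:
            branch(k + 1, cap - w, val + v)  # include item k
        branch(k + 1, cap, val)              # exclude item k

    branch(0, capacity, 0)
    return best[0]

print(bb_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 220
```

In the expansion-planning setting, the relaxation at each node is a non-linear program solved by an interior point method instead of this greedy fractional fill, but the tree search and pruning work the same way.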

Relevance:

30.00%

Publisher:

Abstract:

This paper presents new contributions to the analysis and development of harmonic distortion mitigating devices. Among passive filtering solutions that exploit the sequence distribution of harmonic currents, the electromagnetic blocking device stands out, having received particular attention due to its robustness and low installation cost. In this context, aiming to evaluate the reliability of the results obtained through mathematical modeling, experimental tests are carried out using a low-power prototype, highlighting particular aspects related to its function as a zero-sequence harmonic blocker. © 2011 IEEE.
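
A quick way to see what the device targets is the symmetrical-component definition of the zero-sequence current, I0 = (Ia + Ib + Ic)/3: triplen harmonics (3rd, 9th, ...) are co-phasal in the three phases and therefore appear entirely as zero sequence. A minimal numeric sketch (the magnitudes below are arbitrary):

```python
import cmath

def zero_sequence(ia, ib, ic):
    """Zero-sequence symmetrical component of three phase currents."""
    return (ia + ib + ic) / 3

def phasor(mag, deg):
    return cmath.rect(mag, cmath.pi * deg / 180)

# Balanced fundamental currents: the zero-sequence component vanishes.
i0_fund = zero_sequence(phasor(10, 0), phasor(10, -120), phasor(10, 120))

# Third-harmonic currents are in phase in all three conductors, so they
# are pure zero sequence -- exactly what the blocking device impedes.
i0_third = zero_sequence(phasor(2, 0), phasor(2, 0), phasor(2, 0))

print(abs(i0_fund), abs(i0_third))
```

The blocker presents a high impedance only to the zero-sequence path, so the fundamental (which has no zero-sequence component when balanced) passes essentially unaffected.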

Relevance:

30.00%

Publisher:

Abstract:

Purpose: The aim of this study was to verify whether there is an association between anaerobic running capacity (ARC) values, estimated from two-parameter models, and the maximal accumulated oxygen deficit (MAOD) in army runners. Methods: Eleven trained, middle-distance runners who are members of the armed forces were recruited for the study (20 ± 1 years). They performed a critical velocity (CV) test for ARC estimation using three mathematical models, as well as an MAOD test; both tests were performed on a motorized treadmill. Results: The MAOD was 61.6 ± 5.2 mL/kg (4.1 ± 0.3 L). The ARC values were 240.4 ± 18.6 m from the linear velocity-inverse time model, 254.0 ± 13.0 m from the linear distance-time model, and 275.2 ± 9.1 m from the hyperbolic time-velocity relationship (nonlinear 2-parameter model), whereas the critical velocity values were 3.91 ± 0.07 m/s, 3.86 ± 0.08 m/s and 3.80 ± 0.09 m/s, respectively. There were differences (P < 0.05) in both the ARC and the CV values between the linear velocity-inverse time and the nonlinear 2-parameter models. The different ARC values did not correlate significantly with the MAOD. Conclusion: Estimated ARC did not correlate with MAOD and should not be considered a measure of anaerobic capacity for treadmill running. © 2013 Elsevier Masson SAS. All rights reserved.
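
Of the three models above, the linear distance-time model is the simplest: it writes the distance covered in an exhaustive run as d = ARC + CV·t, so an ordinary least-squares line through (t, d) pairs yields CV as the slope and ARC as the intercept. A sketch with synthetic trial data (not the study's):

```python
# Fit the linear distance-time critical velocity model d = ARC + CV * t.

def fit_cv_linear(times, dists):
    n = len(times)
    mt = sum(times) / n
    md = sum(dists) / n
    cv = (sum((t - mt) * (d - md) for t, d in zip(times, dists))
          / sum((t - mt) ** 2 for t in times))   # slope = critical velocity
    arc = md - cv * mt                           # intercept = anaerobic capacity
    return cv, arc

# Exhaustion trials generated from CV = 3.9 m/s and ARC = 250 m.
times = [120.0, 240.0, 480.0]
dists = [3.9 * t + 250.0 for t in times]

cv, arc = fit_cv_linear(times, dists)
print(round(cv, 3), round(arc, 1))
```

The velocity-inverse time model (v = CV + ARC/t) and the hyperbolic model are fits of the same two parameters under different transformations, which is why the study can compare the ARC estimates across models.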

Relevance:

30.00%

Publisher:

Abstract:

This article presents a transmission line model developed directly in the phase domain. The proposed model is based on the relationships between the phase currents and voltages at both the sending and receiving ends of a single-phase line. These relationships, established using an ABCD matrix, were extended to multi-phase lines. The proposed model was validated by using it to represent a transmission line during short- and open-circuit tests. The results obtained with the proposed model were compared with results obtained with a classical model based on modal decomposition. These comparisons show that the proposed model was correctly developed. © 2013 Taylor and Francis Group, LLC.
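
For a single-phase line, the ABCD relationship the model builds on is [Vs, Is] = [[A, B], [C, D]]·[Vr, Ir], with A = D = cosh(γl), B = Zc·sinh(γl) and C = sinh(γl)/Zc for a uniform line of length l, propagation constant γ and characteristic impedance Zc. A minimal sketch with made-up per-km parameters (the open-circuit case, Ir = 0, mirrors one of the validation tests):

```python
import cmath

def abcd_line(z, y, length):
    """ABCD constants from per-km series impedance z and shunt admittance y."""
    g = cmath.sqrt(z * y)        # propagation constant (per km)
    zc = cmath.sqrt(z / y)       # characteristic impedance
    gl = g * length
    A = cmath.cosh(gl)
    B = zc * cmath.sinh(gl)
    C = cmath.sinh(gl) / zc
    return A, B, C, A            # D = A for a uniform line

# Hypothetical 100 km line parameters.
A, B, C, D = abcd_line(z=0.03 + 0.4j, y=3e-6j, length=100.0)

Vr, Ir = 500e3 / 3 ** 0.5, 0.0   # open-circuit receiving end
Vs = A * Vr + B * Ir
Is = C * Vr + D * Ir

print(abs(A * D - B * C))        # reciprocity check: AD - BC = 1
```

The reciprocity identity AD − BC = 1 (here cosh² − sinh² = 1) is a handy sanity check on any ABCD construction before extending it to the multi-phase case.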

Relevance:

30.00%

Publisher:

Abstract:

This paper develops a novel, fully analytic model for vibration analysis of solid-state electronic components. The model is as accurate as finite element models and numerically light enough to permit quick design trade-offs and statistical analysis. The paper shows the development of the model, a comparison to finite elements and an application to a common engineering problem. A gull-wing flat pack component was selected as the benchmark test case, although the presented methodology is applicable to a wide range of component packages. Results showed very good agreement between the presented method and finite elements and demonstrated how standard test data can be used with the method in a general application. © 2013 Elsevier Ltd.
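
The paper's closed-form component model is not reproduced here; as a generic example of the analytic route it argues for (a formula evaluated directly instead of a meshed FE model), the sketch below evaluates the classical first natural frequency of a clamped-free beam, f1 = (λ1²/(2πL²))·√(EI/(ρA)), with made-up lead-like dimensions and material properties.

```python
import math

def cantilever_f1(E, I, rho, A, L):
    """First natural frequency (Hz) of a clamped-free Euler-Bernoulli beam."""
    lam1 = 1.8751                 # first clamped-free eigenvalue
    return lam1 ** 2 / (2 * math.pi * L ** 2) * math.sqrt(E * I / (rho * A))

# Hypothetical copper-alloy lead: 0.8 mm x 0.25 mm section, 3 mm long.
E = 110e9                         # Young's modulus, Pa
b, h, L = 0.8e-3, 0.25e-3, 3e-3   # width, thickness, length, m
I = b * h ** 3 / 12               # second moment of area
A = b * h
rho = 8900.0                      # density, kg/m^3

f1 = cantilever_f1(E, I, rho, A, L)
print(f1)
```

Because the whole design space is a one-line formula, parameter sweeps and Monte Carlo statistical analysis cost microseconds per sample, which is exactly the advantage the abstract claims over finite elements.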

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Many models for unsaturated soil have been developed in recent years, accompanying the development of experimental techniques to deal with such soils. The benchmark among models for unsaturated soil can be assigned to the Barcelona Basic Model (BBM), now incorporated in codes such as CODE_BRIGHT. Most of those models were validated against limited laboratory test results, and little validation is available for real field problems. This paper presents modeling results of field plate load tests performed under known suction on a lateritic unsaturated soil. The required input data were taken from laboratory tests performed under suction control. The modeling reproduces the field tests well, allowing the influence of soil suction on the stress-settlement curve to be appreciated. In addition, wetting-induced (collapse) settlements were calculated from the field tests and were closely reproduced by the numerical analysis performed.