885 results for combined stage sintering model
Abstract:
BACKGROUND: Drugs are routinely combined in anesthesia and pain management to obtain an enhancement of the desired effects. However, a parallel enhancement of the undesired effects may take place as well, limiting therapeutic usefulness. Therefore, when addressing the question of optimal drug combinations, side effects must be taken into account. METHODS: By extending a previously published interaction model, the authors propose a method to study drug interactions that also considers their side effects. A general outcome parameter, identified as the patient's well-being, is defined by superposition of positive and negative effects. Well-being response surfaces are computed and analyzed for varying drug pharmacodynamics and interaction types. In particular, the existence of multiple maxima and of optimal drug combinations is investigated for the combination of two drugs. RESULTS: Both drug pharmacodynamics and interaction type affect the well-being surface and the resulting optimal combinations. The effect of the interaction parameters can be explained in terms of synergy and antagonism and remains unchanged for varying pharmacodynamics. In all simulations performed for the combination of two drugs, more than one maximum was never observed. CONCLUSIONS: The model is consistent with clinical knowledge and supports previously published experimental results on optimal drug combinations. This new framework improves understanding of the characteristics of drug combinations used in clinical practice and can be used in clinical research to identify optimal drug dosing.
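The superposition idea in this abstract lends itself to a small numerical sketch. The following is an illustration only, not the authors' published model: it assumes Greco-style Hill interaction surfaces for the desired effect and the side effect, with invented parameters, and scans the dose plane for the optimal combination.

```python
import numpy as np

def combined_effect(ca, cb, ec50a, ec50b, alpha):
    """Greco-style interaction surface for two drugs A and B:
    alpha > 0 models synergy, alpha < 0 antagonism, alpha = 0 additivity."""
    u = ca / ec50a + cb / ec50b + alpha * (ca / ec50a) * (cb / ec50b)
    return u / (1.0 + u)

def well_being(ca, cb):
    """Superpose a desired effect and a side effect (all parameters invented):
    a synergistic benefit surface minus an additive toxicity surface."""
    desired = combined_effect(ca, cb, ec50a=1.0, ec50b=1.0, alpha=2.0)
    side = combined_effect(ca, cb, ec50a=3.0, ec50b=3.0, alpha=0.0)
    return desired - side

# Scan the dose plane for the optimal combination.
ca, cb = np.meshgrid(np.linspace(0, 5, 201), np.linspace(0, 5, 201))
w = well_being(ca, cb)
i, j = np.unravel_index(np.argmax(w), w.shape)
print(f"optimal doses: A={ca[i, j]:.2f}, B={cb[i, j]:.2f}, well-being={w[i, j]:.3f}")
```

For this parameterization the scan finds a single interior optimum, in line with the abstract's observation that multiple maxima were never seen for two-drug combinations.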
Abstract:
In Malani and Neilsen (1992) we have proposed alternative estimates of survival function (for time to disease) using a simple marker that describes time to some intermediate stage in a disease process. In this paper we derive the asymptotic variance of one such proposed estimator using two different methods and compare terms of order 1/n when there is no censoring. In the absence of censoring the asymptotic variance obtained using the Greenwood type approach converges to exact variance up to terms involving 1/n. But the asymptotic variance obtained using the theory of the counting process and results from Voelkel and Crowley (1984) on semi-Markov processes has a different term of order 1/n. It is not clear to us at this point why the variance formulae using the latter approach give different results.
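The Greenwood-type approach mentioned above is easy to state concretely. A minimal sketch of a Kaplan-Meier estimate with Greenwood variance follows (the standard estimator, not the marker-based estimator of the paper); with no censoring, Greenwood's formula reduces exactly to the binomial variance S(t)(1 - S(t))/n, which mirrors the convergence noted in the abstract.

```python
import numpy as np

def kaplan_meier_greenwood(times, events):
    """Kaplan-Meier survival curve with Greenwood variance:
    Var[S(t)] ~= S(t)^2 * sum_{t_i <= t} d_i / (n_i * (n_i - d_i)).
    times: observation times; events: 1 = event, 0 = censored."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    s, gsum, out = 1.0, 0.0, []
    for t in np.unique(times[events == 1]):
        n_i = int(np.sum(times >= t))                     # at risk just before t
        d_i = int(np.sum((times == t) & (events == 1)))   # events at t
        s *= 1.0 - d_i / n_i
        if n_i > d_i:               # Greenwood term is undefined once S(t) hits 0
            gsum += d_i / (n_i * (n_i - d_i))
        out.append((t, s, s * s * gsum))
    return out

# No censoring: Greenwood equals the exact binomial variance S(1 - S)/n.
for t, s, v in kaplan_meier_greenwood([1, 2, 3, 4, 5], [1, 1, 1, 1, 1]):
    print(f"t={t:.0f}  S={s:.2f}  Greenwood={v:.4f}  binomial={s * (1 - s) / 5:.4f}")
```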
Abstract:
Stem cell regeneration of damaged tissue has recently been reported in many different organs. Since the loss of retinal pigment epithelium (RPE) in the eye is associated with a major cause of visual loss - specifically, age-related macular degeneration - we investigated whether hematopoietic stem cells (HSC) given systemically can home to the damaged subretinal space and express markers of RPE lineage. Green fluorescent protein (GFP)-labeled cells of bone marrow origin were used in a sodium iodate (NaIO3) model of RPE damage in the mouse. The optimal time for adoptive transfer of bone marrow-derived stem cells relative to the time of injury, and the optimal cell type [whole bone marrow, mobilized peripheral blood, HSC, facilitating cells (FC)], were determined by counting the number of GFP+ cells in whole eye flat mounts. Immunocytochemistry was performed to identify the bone marrow origin of the cells in the RPE using antibodies for CD45, Sca-1, and c-kit, as well as the expression of the RPE-specific marker, RPE-65. The time at which bone marrow-derived cells were adoptively transferred relative to the time of NaIO3 injection did not significantly influence the number of cells that homed to the subretinal space. At both one and two weeks after intravenous (i.v.) injection, GFP+ cells of bone marrow origin were observed in the damaged subretinal space, at sites of RPE loss, but not in the normal subretinal space. The combined transplantation of HSC+FC cells appeared to favor the survival of the homed stem cells at two weeks, and RPE-65 was expressed by adoptively transferred HSC by four weeks. We have shown that systemically injected HSC homed to the subretinal space in the presence of RPE damage and that FC promoted survival of these cells. Furthermore, the RPE-specific marker RPE-65 was expressed on adoptively transferred HSC in the denuded areas.
Abstract:
Multi-site time series studies of air pollution and mortality and morbidity have figured prominently in the literature as comprehensive approaches for estimating acute effects of air pollution on health. Hierarchical models are generally used to combine site-specific information and to estimate pooled air pollution effects, taking into account both within-site statistical uncertainty and across-site heterogeneity. Within a site, characteristics of time series data of air pollution and health (small pollution effects, missing data, highly correlated predictors, nonlinear confounding, etc.) make modelling all sources of uncertainty challenging. One potential consequence is underestimation of the statistical variance of the site-specific effects to be combined. In this paper we investigate the impact of variance underestimation on the pooled relative rate estimate. We focus on two-stage normal-normal hierarchical models and on underestimation of the statistical variance at the first stage. By mathematical considerations and simulation studies, we found that variance underestimation does not affect the pooled estimate substantially. However, some sensitivity of the pooled estimate to variance underestimation is observed when the number of sites is small and the underestimation is severe. These simulation results are applicable to any two-stage normal-normal hierarchical model for combining information from site-specific results, and they can easily be extended to more general hierarchical formulations. We also examined the impact of variance underestimation on the national average relative rate estimate from the National Morbidity Mortality Air Pollution Study, and we found that variance underestimation of as much as 40% has little effect on the national average.
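The two-stage normal-normal pooling described above can be sketched in a few lines. The following uses the method-of-moments (DerSimonian-Laird) estimator of between-site variance and simulated site effects (all numbers invented); it is an illustration of the pooling mechanism, not the paper's analysis. Shrinking the first-stage variances by 40% mimics the underestimation scenario.

```python
import numpy as np

def pooled_estimate(beta, v):
    """Two-stage normal-normal pooling: method-of-moments (DerSimonian-Laird)
    estimate of the between-site variance tau^2, then an inverse-total-variance
    weighted mean of the site-specific effects."""
    beta, v = np.asarray(beta, float), np.asarray(v, float)
    w = 1.0 / v
    mu_fixed = np.sum(w * beta) / np.sum(w)
    q = np.sum(w * (beta - mu_fixed) ** 2)                 # Cochran's Q
    tau2 = max(0.0, (q - (len(beta) - 1))
               / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                              # total-variance weights
    return np.sum(w_star * beta) / np.sum(w_star)

rng = np.random.default_rng(0)
true_v = rng.uniform(0.5, 2.0, 50)                # first-stage variances, 50 sites
beta = rng.normal(1.0, np.sqrt(true_v + 0.3))     # true pooled effect 1.0, tau^2 = 0.3
full = pooled_estimate(beta, true_v)
under = pooled_estimate(beta, 0.6 * true_v)       # 40% variance underestimation
print(f"pooled with correct variances:       {full:.3f}")
print(f"pooled with 40% underestimated vars: {under:.3f}")
```

Because the estimated tau^2 absorbs much of the missing first-stage variance, the two pooled estimates stay close, consistent with the abstract's finding.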
Abstract:
Numerous time series studies have provided strong evidence of an association between increased levels of ambient air pollution and increased levels of hospital admissions, typically at 0, 1, or 2 days after an air pollution episode. An important research aim is to extend existing statistical models so that a more detailed understanding of the time course of hospitalization after exposure to air pollution can be obtained. Information about this time course, combined with prior knowledge about biological mechanisms, could provide the basis for hypotheses concerning the mechanism by which air pollution causes disease. Previous studies have identified two important methodological questions: (1) How can we estimate the shape of the distributed lag between increased air pollution exposure and increased mortality or morbidity? and (2) How should we estimate the cumulative population health risk from short-term exposure to air pollution? Distributed lag models are appropriate tools for estimating air pollution health effects that may be spread over several days. However, estimation for distributed lag models in air pollution and health applications is hampered by the substantial noise in the data and the inherently weak signal that is the target of investigation. We introduce an hierarchical Bayesian distributed lag model that incorporates prior information about the time course of pollution effects and combines information across multiple locations. The model has a connection to penalized spline smoothing using a special type of penalty matrix. We apply the model to estimating the distributed lag between exposure to particulate matter air pollution and hospitalization for cardiovascular and respiratory disease using data from a large United States air pollution and hospitalization database of Medicare enrollees in 94 counties covering the years 1999-2002.
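A distributed lag regression of the kind described can be sketched directly. The example below is an illustration with simulated data, not the paper's hierarchical Bayesian model: it fits an unconstrained distributed lag by ordinary least squares, then an Almon (polynomial) lag constraint as a simple stand-in for the penalized-spline smoothing prior; the cumulative effect is the sum of the lag coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)
n, max_lag = 1000, 6
pm = rng.gamma(4.0, 5.0, n)                  # simulated daily PM levels (invented units)
true_lag = np.array([0.4, 0.3, 0.15, 0.1, 0.05, 0.0, 0.0])   # effect over days 0-6

# Lagged design matrix X[t, l] = pm[t - l]; drop the first max_lag wrapped rows.
X = np.column_stack([np.roll(pm, l) for l in range(max_lag + 1)])[max_lag:]
y = X @ true_lag + rng.normal(0.0, 5.0, len(X))   # simulated admissions signal

# Unconstrained distributed lag: OLS on all lags.
theta_ols, *_ = np.linalg.lstsq(np.c_[np.ones(len(X)), X], y, rcond=None)

# Almon constraint: lag curve restricted to a degree-2 polynomial in lag number.
L = np.arange(max_lag + 1)
P = np.vander(L, 3, increasing=True)          # basis columns: [1, l, l^2]
theta_poly, *_ = np.linalg.lstsq(np.c_[np.ones(len(X)), X @ P], y, rcond=None)
lag_curve = P @ theta_poly[1:]

print("OLS lag estimates:   ", np.round(theta_ols[1:], 2))
print("Almon lag estimates: ", np.round(lag_curve, 2))
print("cumulative effect:   ", round(lag_curve.sum(), 2))   # total risk over days 0-6
```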
Abstract:
OBJECTIVE: To investigate the effects of tyrosine-kinase inhibitors of vascular endothelial growth factor (VEGF) and platelet-derived growth factor (PDGF) receptors on non-malignant tissue, and whether these effects depend upon the stage of vascular maturation. MATERIALS AND METHODS: PTK787/ZK222584 and CGP53716 (VEGF- and PDGF-receptor inhibitors, respectively), both alone and combined, were applied to the chicken chorioallantoic membrane (CAM). RESULTS: On embryonic day of CAM development (E)8, only immature microvessels, which lack coverage by pericytes, are present, whereas the microvessels on E12 have pericytic coverage. This development was reflected in the expression levels of pericytic markers (alpha-smooth muscle actin, PDGF-receptor beta, and desmin), which were found by immunoblotting to increase progressively between E8 and E12. Monotherapy with 2 microg of PTK787/ZK222584 induced significant vasodegeneration on E8, but not on E12. Monotherapy with CGP53716 affected only pericytes. When CGP53716 was applied prior to treatment with 2 microg of PTK787/ZK222584, vasodegeneration occurred also on E12. The combined treatment increased the apoptotic rate, as evidenced by the cDNA levels of caspase-9 and the TUNEL assay. CONCLUSION: Anti-angiogenic treatment strategies for non-neoplastic disorders should aim to interfere with the maturation stage of the target vessels: monotherapy with a VEGF-receptor inhibitor for immature vessels, and combined anti-angiogenic treatment for well-developed mature vasculature.
Abstract:
The vitamin D(3) and nicotine (VDN) model is a model of isolated systolic hypertension (ISH) due to arterial calcification raising arterial stiffness and vascular impedance similar to an aged and stiffened arterial tree. We therefore analyzed the impact of this aging model on normal and diseased hearts with myocardial infarction (MI). Wistar rats were treated with VDN (n = 9), subjected to MI by coronary ligation (n = 10), or subjected to a combination of both MI and VDN treatment (VDN/MI, n = 14). A sham-treated group served as control (Ctrl, n = 10). Transthoracic echocardiography was performed every 2 wk, whereas invasive indexes were obtained at week 8 before death. Calcium, collagen, and protein contents were measured in the heart and the aorta. Systolic blood pressure, pulse pressure, thoracic aortic calcium, and end-systolic elastance as an index of myocardial contractility were highest in the aging model group compared with MI and Ctrl groups (P(VDN) < 0.05, 2-way ANOVA). Left ventricular wall stress and brain natriuretic peptide (P(VDNxMI) = not significant) were highest, while ejection fraction, stroke volume, and cardiac output were lowest in the combined group versus all other groups (P(VDNxMI) < 0.05). The combination of ISH due to this aging model and MI demonstrates significant alterations in cardiac function. This model mimics several clinical phenomena of cardiovascular aging and may thus serve to further study novel therapies.
Abstract:
A major barrier to widespread clinical implementation of Monte Carlo dose calculation is the difficulty of characterizing the radiation source within a generalized source model. This work aims to develop a generalized three-component source model (target, primary collimator, flattening filter) for 6- and 18-MV photon beams that matches full phase-space data (PSD). Subsource-by-subsource comparison of dose distributions, using either the source PSD or the source model as input, allows accurate source characterization and has the potential to ease the commissioning procedure, since it is possible to obtain information about which subsource needs to be tuned. This source model is unique in that, compared with previous source models, it retains additional correlations among PS variables, which improves accuracy at nonstandard source-to-surface distances (SSDs). In our study, three-dimensional (3D) dose calculations were performed for SSDs ranging from 50 to 200 cm and for field sizes from 1 x 1 to 30 x 30 cm2, as well as a 10 x 10 cm2 field 5 cm off axis in each direction. The 3D dose distributions, using either the full PSD or the source model as input, were compared in terms of dose difference and distance to agreement. With this model, over 99% of the voxels agreed within +/-1% or 1 mm for the target, within +/-2% or 2 mm for the primary collimator, and within +/-2.5% or 2 mm for the flattening filter in all cases studied. For the total dose distributions, 99% of the dose voxels agreed within 1% or 1 mm when the combined source model (including a charged particle source) and the full PSD were used as input. The accurate and general characterization of each photon source and knowledge of the subsource dose distributions should facilitate source model commissioning procedures by allowing the histogram distributions representing the subsources to be scaled and tuned.
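The dose-difference / distance-to-agreement (DTA) comparison used above can be illustrated in one dimension. The sketch below is an assumption-laden toy (an invented penumbra-like profile, no interpolation in the DTA search), not the commissioning software: a voxel passes if either the dose-difference or the DTA criterion is met.

```python
import numpy as np

def dd_dta_pass(ref, test, spacing_mm, dd_pct=1.0, dta_mm=1.0):
    """Per-voxel dose-difference / distance-to-agreement composite test (1-D).
    A voxel passes if its dose difference is within dd_pct of the reference
    maximum, OR some reference point within dta_mm carries the test dose."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    tol = dd_pct / 100.0 * ref.max()
    x = np.arange(len(ref)) * spacing_mm
    passed = np.empty(len(ref), bool)
    for i in range(len(ref)):
        if abs(test[i] - ref[i]) <= tol:          # dose-difference criterion
            passed[i] = True
            continue
        close = np.abs(ref - test[i]) <= tol      # reference points matching test dose
        passed[i] = close.any() and np.min(np.abs(x[close] - x[i])) <= dta_mm
    return passed

# Example: a small spatial shift fails dose-difference in the gradient region
# but can pass via DTA.
x = np.linspace(0, 40, 81)                        # 0.5 mm grid
ref = 100.0 / (1.0 + np.exp(-(x - 20.0)))         # penumbra-like profile
test = 100.0 / (1.0 + np.exp(-(x - 20.6)))        # 0.6 mm spatial shift
ok = dd_dta_pass(ref, test, spacing_mm=0.5, dd_pct=1.0, dta_mm=1.0)
print(f"{100.0 * ok.mean():.1f}% of voxels pass 1%/1 mm")
```

Production tools interpolate the reference when searching for agreement; this discrete nearest-grid-point version understates the pass rate in steep gradients.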
Abstract:
Constructing a 3D surface model from sparse-point data is a nontrivial task. Here, we report an accurate and robust approach for reconstructing a surface model of the proximal femur from sparse-point data and a dense-point distribution model (DPDM). The problem is formulated as a three-stage optimal estimation process. The first stage, affine registration, iteratively estimates a scale and a rigid transformation between the mean surface model of the DPDM and the sparse input points. The estimation results of the first stage are used to establish point correspondences for the second stage, statistical instantiation, which stably instantiates a surface model from the DPDM using a statistical approach. This surface model is then fed to the third stage, kernel-based deformation, which further refines the surface model. Outliers are handled by consistently employing the least trimmed squares (LTS) approach with a roughly estimated outlier rate in all three stages. If an optimal value of the outlier rate is preferred, we propose a hypothesis testing procedure to estimate it automatically. We present our validations using four experiments: (1) a leave-one-out experiment; (2) an experiment evaluating the present approach for handling pathology; (3) an experiment evaluating the present approach for handling outliers; and (4) an experiment reconstructing surface models of seven dry cadaver femurs using clinically relevant data without noise and with noise added. Our validation results demonstrate the robust performance of the present approach in handling outliers, pathology, and noise. An average 95-percentile error of 1.7-2.3 mm was found when the present approach was used to reconstruct surface models of the cadaver femurs from sparse-point data with noise added.
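The least trimmed squares idea used in all three stages minimizes the sum of only the h smallest squared residuals, discarding the rest as suspected outliers. A minimal sketch on a 1-D line fit (random elemental starts with concentration steps, in the spirit of FAST-LTS; the outlier rate and all data are invented, and this is not the paper's registration code):

```python
import numpy as np

def lts_line(x, y, outlier_rate=0.2, n_starts=50, rng=None):
    """Least trimmed squares fit of y = a + b*x: minimize the sum of the
    h smallest squared residuals, h = ceil((1 - outlier_rate) * n)."""
    rng = rng or np.random.default_rng(0)
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    h = int(np.ceil((1.0 - outlier_rate) * n))
    best, best_obj = None, np.inf
    for _ in range(n_starts):
        idx = rng.choice(n, 2, replace=False)       # elemental start: 2 points
        for _ in range(10):                         # concentration (C-) steps
            A = np.c_[np.ones(len(idx)), x[idx]]
            coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
            r2 = (y - (coef[0] + coef[1] * x)) ** 2
            idx = np.argsort(r2)[:h]                # keep the h best-fitting points
        obj = np.sort(r2)[:h].sum()
        if obj < best_obj:
            best, best_obj = coef, obj
    return best

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 100)
y = 2.0 + 0.5 * x + rng.normal(0, 0.1, 100)
y[:15] += 8.0                                       # 15% gross outliers
a, b = lts_line(x, y, outlier_rate=0.2, rng=rng)
print(f"LTS fit: intercept={a:.2f}, slope={b:.2f}  (true 2.0, 0.5)")
```

The roughly estimated outlier rate plays the same role here as in the paper: h only needs to exceed the number of inliers' complement for the trimmed fit to ignore the contaminated points.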
Abstract:
A 'two coat' model of the life cycle of Trypanosoma brucei has prevailed for more than 15 years. Metacyclic forms transmitted by infected tsetse flies and mammalian bloodstream forms are covered by variant surface glycoproteins. All other life cycle stages were believed to have a procyclin coat, until it was shown recently that epimastigote forms in tsetse salivary glands express procyclin mRNAs without translating them. As epimastigote forms cannot be cultured, a procedure was devised to compare the transcriptomes of parasites in different fly tissues. Transcripts encoding a family of glycosylphosphatidyl inositol-anchored proteins, BARPs (previously called bloodstream alanine-rich proteins), were 20-fold more abundant in salivary gland than midgut (procyclic) trypanosomes. Anti-BARP antisera reacted strongly and exclusively with salivary gland parasites and a BARP 3' flanking region directed epimastigote-specific expression of reporter genes in the fly, but inhibited expression in bloodstream and procyclic forms. In contrast to an earlier report, we could not detect BARPs in bloodstream forms. We propose that BARPs form a stage-specific coat for epimastigote forms and suggest renaming them brucei alanine-rich proteins.
Abstract:
In experimental meningitis, a single dose of gentamicin (10 mg/kg of body weight) led to gentamicin levels in cerebrospinal fluid (CSF) of around 4 mg/liter for 4 h, decreasing slowly to 2 mg/liter 4 h later. The CSF penetration of gentamicin was around 27%, calculated by comparison of the areas under the curve (AUC in CSF/AUC in serum). Gentamicin monotherapy (-1.24 log10 CFU/ml) was inferior to vancomycin monotherapy (-2.54 log10 CFU/ml) over 8 h against penicillin-resistant pneumococci. However, the combination of vancomycin with gentamicin (-4.48 log10 CFU/ml) was significantly superior to either monotherapy. The synergistic activity of vancomycin combined with gentamicin was also demonstrated in vitro in time-kill assays.
Abstract:
Linezolid, a new oxazolidinone antibiotic, showed good penetration (38 +/- 4%) into the meninges of rabbits, with CSF levels declining from 9.5 to 1.8 mg/L after two i.v. injections (20 mg/kg). Linezolid was clearly less effective than ceftriaxone against a penicillin-sensitive pneumococcal strain. Against a penicillin-resistant strain, linezolid had slightly inferior killing rates compared with the standard regimen (ceftriaxone combined with vancomycin). In vitro, linezolid was only marginally bactericidal at concentrations above the MIC (5x and 10x MIC).
Abstract:
OBJECTIVE: Resonance frequency analysis (RFA) is a method of measuring implant stability. However, little is known about RFA of implants with long loading periods. The objective of the present study was to determine standard implant stability quotients (ISQs) for clinically successful, osseointegrated 1-stage implants in the edentulous mandible. MATERIALS AND METHODS: Stability measurements by means of RFA were performed in regularly followed patients who had received 1-stage implants for overdenture support. The time interval between implant placement and measurement ranged from 1 year up to 10 years. The short-term group comprised patients who were followed for up to 5 years, while the long-term group included patients with an observation time of more than 5 years and up to 10 years. For further comparison, RFA measurements were performed in a matching group with unloaded implants at the end of the surgical procedure. For statistical analysis, various parameters that might influence the ISQs of loaded implants were included, and a mixed-effects model was applied (regression analysis, P < .0125). RESULTS: Ninety-four patients were available with a total of 205 loaded implants, plus 16 patients with 36 implants measured immediately after the surgical procedure. The mean ISQ of all measured implants was 64.5 +/- 7.9 (range, 58 to 72). Statistical analysis did not reveal significant differences in the mean ISQ related to observation time. The parameters with overall statistical significance were the diameter of the implants and changes in the attachment level. In the short-term group, gender and the clinically measured attachment level had a significant effect; implant diameter had a significant effect in the long-term group. CONCLUSIONS: A mean ISQ of 64.5 +/- 7.9 was found to be representative of stable, asymptomatic interforaminal implants measured by the RFA instrument at any given time point. No significant differences in ISQ values were found between implants with different postsurgical time intervals. Implant diameter appears to influence the ISQ of interforaminal implants.
Abstract:
Metals price risk management is a key issue in metal markets: uncertainty in commodity prices, exchange rates, and interest rates creates substantial price risk for both metals producers and consumers. It is therefore a concern for all participants in metal markets, including producers, consumers, merchants, banks, investment funds, speculators, and traders. Managing price risk provides stable income for both metals producers and consumers, and so increases the chance that a firm will invest in attractive projects. The purpose of this research is to evaluate risk management strategies in the copper market. The main tools and strategies of price risk management are hedging and other derivatives such as futures contracts, swaps, and options contracts. Hedging is a transaction designed to reduce or eliminate price risk. Derivatives are financial instruments whose returns are derived from other financial instruments, and they are commonly used for managing financial risks. Although derivatives have been around in some form for centuries, their growth has accelerated rapidly during the last 20 years, and they are now widely used by financial institutions, corporations, professional investors, and individuals. This project focuses on the over-the-counter (OTC) market and its products, such as exotic options, particularly Asian options. The first part of the project is a description of basic derivatives and risk management strategies; it also discusses basic concepts of spot and futures (forward) markets, the benefits and costs of risk management, and the risks and rewards of positions in the derivative markets. The second part considers the valuation of commodity derivatives.
In this part, the options pricing model DerivaGem is applied to Asian call and put options on London Metal Exchange (LME) copper, because it is important to understand how Asian options are valued and to compare theoretical values of the options with their observed market values. Predicting future trends of copper prices is important and essential to managing market price risk successfully. Therefore, the third part is a discussion of econometric commodity models. Based on this literature review, the fourth part of the project reports the construction and testing of an econometric model designed to forecast the monthly average price of copper on the LME. More specifically, this part aims at showing how LME copper prices can be explained by means of a simultaneous-equation structural model (two-stage least squares regression) connecting supply and demand variables. A simultaneous econometric model for the copper industry is built:

\[
\begin{cases}
Q_t^D = e^{-5.0485}\, P_{t-1}^{-0.1868}\, \mathrm{GDP}_t^{1.7151}\, e^{0.0158\,\mathrm{IP}_t} \\
Q_t^S = e^{-3.0785}\, P_{t-1}^{0.5960}\, T_t^{0.1408}\, P_{\mathrm{OIL}(t)}^{-0.1559}\, \mathrm{USDI}_t^{1.2432}\, \mathrm{LIBOR}_{t-6}^{-0.0561} \\
Q_t^D = Q_t^S
\end{cases}
\]

with the reduced-form price equation

\[
P_{t-1}^{\mathrm{CU}} = e^{-2.5165}\, \mathrm{GDP}_t^{2.1910}\, e^{0.0202\,\mathrm{IP}_t}\, T_t^{-0.1799}\, P_{\mathrm{OIL}(t)}^{0.1991}\, \mathrm{USDI}_t^{-1.5881}\, \mathrm{LIBOR}_{t-6}^{0.0717}
\]

where Q_t^D and Q_t^S are world demand for and supply of copper at time t, respectively. P_{t-1} is the lagged price of copper, which is the focus of the analysis in this part. GDP_t is world gross domestic product at time t, which represents aggregate economic activity; industrial production is also considered, so global industrial production growth, denoted IP_t, is included in the model. T_t is a time variable, a useful proxy for technological change. The price of oil at time t, denoted P_OIL(t), serves as a proxy for the cost of energy in producing copper. USDI_t is the U.S. dollar index at time t, an important variable for explaining copper supply and copper prices. Finally, LIBOR_{t-6} is the 6-month-lagged 1-year London Interbank Offered Rate. Although the model can be applied to other base metals' industries, omitted exogenous variables, such as the price of a substitute or a combined substitute-price variable, have not been considered in this study. Based on this econometric model and a Monte Carlo simulation analysis, the probabilities that the monthly average copper prices in 2006 and 2007 will be greater than a specific strike price of an option are estimated. The final part evaluates risk management strategies, including options strategies, metal swaps, and simple options, in relation to the simulation results. Basic options strategies such as bull spreads, bear spreads, and butterfly spreads, created using both call and put options for 2006 and 2007, are evaluated. Each risk management strategy for 2006 and 2007 is then analyzed based on the day of data and the price prediction model. As a result, applications stemming from this project include valuing Asian options, developing a copper price prediction model, forecasting and planning, and decision making for price risk management in the copper market.
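The Monte Carlo step described above (probability that the monthly average price exceeds an option strike, and the payoff of an Asian average-price option) can be sketched under simplifying assumptions. The example below assumes lognormal (geometric Brownian motion) monthly prices with invented drift, volatility, and strike; it is not the report's fitted econometric model.

```python
import numpy as np

def simulate_monthly_prices(s0, mu, sigma, n_months=12, n_paths=100_000, seed=42):
    """GBM monthly price paths:
    S_{t+1} = S_t * exp((mu - sigma^2/2)*dt + sigma*sqrt(dt)*Z), dt = 1/12."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / 12.0
    z = rng.standard_normal((n_paths, n_months))
    log_ret = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    return s0 * np.exp(np.cumsum(log_ret, axis=1))

# Illustrative inputs: copper spot 6000 USD/t, 10% drift, 25% volatility.
paths = simulate_monthly_prices(s0=6000.0, mu=0.10, sigma=0.25)
avg = paths.mean(axis=1)                        # monthly average over the year

strike = 6500.0
p_exceed = np.mean(avg > strike)                # P(monthly average > strike)
r = 0.05
asian_call = np.exp(-r) * np.mean(np.maximum(avg - strike, 0.0))
print(f"P(avg > {strike:.0f}) = {p_exceed:.3f}")
print(f"Asian (average-price) call value ~ {asian_call:.1f} USD/t")
```

For a risk-neutral valuation comparable to DerivaGem's, the drift mu would be set to the risk-free rate; the real-world drift is used here only for the exceedance probability.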
Abstract:
This report presents the development of a Stochastic Knock Detection (SKD) method for combustion knock detection in a spark-ignition engine using a model-based design approach. A Knock Signal Simulator (KSS) was developed as the plant model for the engine. The KSS generates cycle-to-cycle accelerometer knock intensities following a stochastic approach: intensities are generated using a Monte Carlo method from a lognormal distribution whose parameters have been predetermined from engine tests and depend upon spark timing, engine speed, and load. The lognormal distribution has been shown in previous studies to be a good approximation to the distribution of measured knock intensities over a range of engine conditions and spark timings for multiple engines. The SKD method is implemented in a Knock Detection Module (KDM), which processes the knock intensities generated by the KSS with a stochastic distribution estimation algorithm and outputs estimates of high and low knock intensity levels, which characterize knock and the reference level, respectively. These estimates are then used to determine a knock factor, which provides a quantitative measure of knock level and can be used as a feedback signal to control engine knock. The knock factor is analyzed and compared with a traditional knock detection method under various engine operating conditions. To verify the effectiveness of the SKD method, a knock controller was also developed and tested in a model-in-the-loop (MIL) system. The objective of the knock controller is to allow the engine to operate as close as possible to its borderline spark timing without significant engine knock. The controller parameters were tuned to minimize the cycle-to-cycle variation in spark timing and the settling time of the controller in responding to a step increase in spark advance resulting in the onset of engine knock.
The simulation results showed that the combined system can adequately model engine knock and evaluate knock control strategies for a wide range of engine operating conditions.
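The KSS/KDM pipeline described above can be sketched in miniature: draw cycle-to-cycle intensities from a lognormal distribution, estimate high and low intensity levels from the observed distribution, and form their ratio as a knock factor. All parameters below are invented, and the quantile-based level estimator stands in for the report's stochastic distribution estimation algorithm.

```python
import numpy as np

def knock_cycle_intensities(mu, sigma, n_cycles, rng):
    """KSS-style plant model: cycle-to-cycle knock intensities drawn from a
    lognormal whose parameters depend on the operating condition (assumed)."""
    return rng.lognormal(mean=mu, sigma=sigma, size=n_cycles)

def knock_factor(intensities, lo_q=0.50, hi_q=0.95):
    """KDM-style estimate: low (reference) and high (knock) intensity levels
    taken as quantiles of the running distribution; their ratio is a
    dimensionless knock factor."""
    lo = np.quantile(intensities, lo_q)    # background reference level
    hi = np.quantile(intensities, hi_q)    # knocking-cycle level
    return hi / lo

rng = np.random.default_rng(7)
quiet = knock_cycle_intensities(mu=0.0, sigma=0.3, n_cycles=2000, rng=rng)
knocking = knock_cycle_intensities(mu=0.5, sigma=0.9, n_cycles=2000, rng=rng)
print(f"knock factor, border-line spark timing: {knock_factor(quiet):.2f}")
print(f"knock factor, over-advanced timing:     {knock_factor(knocking):.2f}")
```

A controller of the kind described would advance spark while the knock factor stays below a threshold and retard it when the factor rises, keeping operation near the border-line timing.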