986 results for PREDICTION SERVER
Abstract:
Venous thromboembolism (VTE) remains the leading cause of maternal mortality. Reports have identified that further research is required in obese women and in women post caesarean section (CS). Risk factors for VTE during pregnancy are often absent, indicating the need for a simple and effective screening tool for pregnancy. Perturbation of uteroplacental haemostasis has been implicated in placenta-mediated pregnancy complications. This thesis had four main aims: 1) to investigate anticoagulant effects following a fixed thromboprophylaxis dose in healthy women post elective CS; 2) to evaluate the calibrated automated thrombogram (CAT) assay as a potential predictive tool for thrombosis in pregnancy; 3) to compare the anticoagulant effects of fixed versus weight-adjusted thromboprophylaxis doses in morbidly obese pregnant women; and 4) to investigate the effects of LMWH on human haemostatic gene and antigen expression in placentae and in plasma from the uteroplacental, maternal and fetal circulations. Tissue factor pathway inhibitor (TFPI), thrombin-antithrombin (TAT), CAT and anti-Xa levels were analysed. Real-time PCR and ELISA were used to quantify mRNA and protein expression of TFPI and TF in placental tissue. In women post CS, anti-Xa levels do not reflect the full anticoagulant effects of LMWH. LMWH thromboprophylaxis in this healthy cohort of patients appears to have a sustained effect in reducing excess thrombin production post elective CS. The results of this study suggest that predicting VTE in pregnant women using the CAT assay is not possible at the present time. The prothrombotic state in morbidly obese pregnant women was substantially attenuated by weight-adjusted, but not by fixed, LMWH doses. LMWH may be effective in reducing in vivo thrombin production in the uteroplacental circulation of thrombophilic women.
Collectively, these results suggest that, at an appropriate dosage, LMWH is effective in attenuating excess thrombin generation in low-risk pregnant women post caesarean section and in moderate-to-high-risk pregnant women who are morbidly obese or have tested positive for thrombophilia. The results of these studies provide data to inform evidence-based practice and improve outcomes for pregnant women at risk of thrombosis.
Abstract:
This paper considers forecasting the conditional mean and variance from a single-equation dynamic model with autocorrelated disturbances following an ARMA process, and innovations with time-dependent conditional heteroskedasticity as represented by a linear GARCH process. Expressions for the minimum MSE predictor and the conditional MSE are presented. We also derive the formula for all the theoretical moments of the prediction error distribution from a general dynamic model with GARCH(1, 1) innovations. These results are then used in the construction of ex ante prediction confidence intervals by means of the Cornish-Fisher asymptotic expansion. An empirical example relating to the uncertainty of the expected depreciation of foreign exchange rates illustrates the usefulness of the results. © 1992.
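The multi-step variance forecasts underlying such prediction intervals follow the standard GARCH(1,1) recursion; a minimal sketch (the function name and parameters are ours, not the paper's notation):

```python
def garch_variance_forecast(omega, alpha, beta, sigma2_next, h):
    """h-step-ahead conditional variance for a GARCH(1,1) process.

    sigma2_next is the one-step-ahead variance sigma^2_{t+1|t};
    requires alpha + beta < 1 for a finite long-run variance.
    """
    long_run = omega / (1.0 - alpha - beta)  # unconditional variance
    # Forecasts revert geometrically to the long-run level at rate alpha + beta.
    return long_run + (alpha + beta) ** (h - 1) * (sigma2_next - long_run)
```

As the horizon h grows, the forecast decays geometrically toward the unconditional variance, which is why prediction intervals widen at a diminishing rate.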
Abstract:
BACKGROUND: A major challenge in oncology is the selection of the most effective chemotherapeutic agents for individual patients; the administration of ineffective chemotherapy increases mortality and decreases quality of life in cancer patients. This emphasizes the need to evaluate every patient's probability of responding to each chemotherapeutic agent and to limit the agents used to those most likely to be effective. METHODS AND RESULTS: Using gene expression data on the NCI-60 panel and corresponding drug sensitivity data, mRNA and microRNA profiles were developed representing sensitivity to individual chemotherapeutic agents. The mRNA signatures were tested in an independent cohort of 133 breast cancer patients treated with the TFAC (paclitaxel, 5-fluorouracil, adriamycin, and cyclophosphamide) chemotherapy regimen. To further dissect the biology of resistance, we applied signatures of oncogenic pathway activation and performed hierarchical clustering. We then used mRNA signatures of chemotherapy sensitivity to identify alternative therapeutics for patients resistant to TFAC. Profiles from mRNA and microRNA expression data represent distinct biologic mechanisms of resistance to common cytotoxic agents. The individual mRNA signatures were validated in an independent dataset of breast tumors (P = 0.002, NPV = 82%). When the accuracy of the signatures was analyzed based on molecular variables, the predictive ability was found to be greater in basal-like than non-basal-like patients (P = 0.03 and P = 0.06). Samples from patients with co-activated Myc and E2F represented the cohort with the lowest percentage (8%) of responders. Using mRNA signatures of sensitivity to other cytotoxic agents, we predict that TFAC non-responders are more likely to be sensitive to docetaxel (P = 0.04), representing a viable alternative therapy.
CONCLUSIONS: Our results suggest that the optimal strategy for chemotherapy sensitivity prediction integrates molecular variables such as ER and HER2 status with corresponding microRNA and mRNA expression profiles. Importantly, we also present evidence to support the concept that analysis of molecular variables can present a rational strategy to identifying alternative therapeutic opportunities.
Abstract:
In this paper we present some early work concerned with the development of a simple solid fuel combustion model incorporated within a Computational Fluid Dynamics (CFD) framework. The model is intended for use in engineering applications of fire field modelling and represents an extension of this technique to situations involving the combustion of solid cellulosic fuels. A simple solid fuel combustion model, consisting of a thermal pyrolysis model, a six-flux radiation model and an eddy-dissipation model for gaseous combustion, has been developed and implemented within the CFD code CFDS-FLOW3D. The model is briefly described and demonstrated through two applications involving fire spread in a compartment with a plywood-lined ceiling. The two scenarios considered involve a fire in an open and in a closed compartment. The model is shown to be able to qualitatively predict behaviours similar to flashover, in the case of the open room, and backdraft, in the case of the initially closed room.
Abstract:
We continue the discussion of the decision points in the FUELCON metaarchitecture. Having discussed the relation of the original expert system to its sequel projects in terms of an AND/OR tree, we consider one further domain for a neural component: parameter prediction downstream of the core reload candidate pattern generator, thus, a replacement for the NOXER simulator currently in use in the project.
Abstract:
In this paper we present some work concerned with the development and testing of a simple solid fuel combustion model incorporated within a Computational Fluid Dynamics (CFD) framework. The model is intended for use in engineering applications of fire field modeling and represents an extension of this technique to situations involving the combustion of solid fuels. The CFD model is coupled with a simple thermal pyrolysis model for combustible solid noncharring fuels, a six-flux radiation model and an eddy-dissipation model for gaseous combustion. The model is then used to simulate a series of small-scale room fire experiments in which the target solid fuel is polymethylmethacrylate. The numerical predictions produced by this coupled model are found to be in very good agreement with experimental data. Furthermore, numerical predictions of the relationship between the air entrained into the fire compartment and the ventilation factor produce a characteristic linear correlation with constant of proportionality 0.38 kg/(s·m^(5/2)). The simulation results also suggest that the model is capable of predicting the onset of "flashover" type behavior within the fire compartment.
Abstract:
This paper examines scheduling problems in which the setup phase of each operation needs to be attended by a single server, common for all jobs and different from the processing machines. The objective in each situation is to minimize the makespan. For the processing system consisting of two parallel dedicated machines we prove that the problem of finding an optimal schedule is NP-hard in the strong sense even if all setup times are equal or if all processing times are equal. For the case of m parallel dedicated machines, a simple greedy algorithm is shown to create a schedule with a makespan that is at most twice the optimal value. For the two-machine case, an improved heuristic guarantees a tight worst-case ratio of 3/2. We also describe several polynomially solvable cases of the latter problem. The two-machine flow shop and the open shop problems with a single server are also shown to be NP-hard in the strong sense. However, we reduce the two-machine flow shop no-wait problem with a single server to the Gilmore-Gomory traveling salesman problem and solve it in polynomial time. (c) 2000 John Wiley & Sons, Inc.
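The greedy idea for parallel dedicated machines with a single setup server can be sketched as a toy simulation (an illustration of the setting, not the paper's algorithm or its analysis; the job data below are hypothetical):

```python
def greedy_makespan(jobs, num_machines):
    """jobs: list of (machine, setup_time, processing_time) tuples.

    The single server performs setups one at a time, in list order;
    once set up, a job runs unattended on its dedicated machine.
    """
    server_free = 0
    machine_free = [0] * num_machines
    for machine, setup, proc in jobs:
        start = max(server_free, machine_free[machine])  # setup needs both free
        server_free = start + setup            # server is occupied by the setup
        machine_free[machine] = start + setup + proc
    return max(machine_free)
```

Because the server can attend only one setup at a time, setups on different machines serialize, which is the source of the worst-case ratio of 2 cited above.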
Abstract:
High-integrity castings require sophisticated design and manufacturing procedures to ensure they are essentially macrodefect free. Unfortunately, an important class of such defects (macroporosity, misruns, and pipe shrinkage) are all functions of the interactions of free surface flow, heat transfer, and solidification in complex geometries. Because these defects arise as an interaction of the preceding continuum phenomena, genuinely predictive models of these defects must represent these interactions explicitly. This work describes an attempt to model the formation of macrodefects explicitly as a function of the interacting continuum phenomena in arbitrarily complex three-dimensional geometries. The computational approach exploits a compatible set of finite volume procedures extended to unstructured meshes. The implementation of the model is described together with its testing and a measure of validation. The model demonstrates the potential to predict reliably shrinkage macroporosity, misruns, and pipe shrinkage directly as a result of interactions among free-surface fluid flow, heat transfer, and solidification.
Abstract:
The overall objective of this work is to develop a computational model of particle degradation during dilute-phase pneumatic conveying. A key feature of such a model is the prediction of particle breakage due to particle–wall collisions in pipeline bends. This paper presents a method for calculating particle impact degradation propensity under a range of particle velocities and particle sizes. It is based on interpolation on impact data obtained in a new laboratory-scale degradation tester. The method is tested and validated against experimental results for degradation at 90° impact angle of a full-size distribution sample of granulated sugar. In a subsequent work, the calculation of degradation propensity is coupled with a flow model of the solids and gas phases in the pipeline.
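Interpolating degradation propensity between measured grid points can be sketched with plain bilinear interpolation (the grids and table values here are made up for illustration, not the tester's data):

```python
import bisect

def bilinear(vgrid, sgrid, table, v, s):
    """Interpolate degradation propensity at velocity v and particle size s.

    vgrid, sgrid: ascending grid coordinates from the degradation tester;
    table[i][j] is the measured propensity at vgrid[i], sgrid[j].
    """
    # Locate the grid cell containing (v, s), clamping to the edge cells.
    i = min(max(bisect.bisect_right(vgrid, v) - 1, 0), len(vgrid) - 2)
    j = min(max(bisect.bisect_right(sgrid, s) - 1, 0), len(sgrid) - 2)
    tv = (v - vgrid[i]) / (vgrid[i + 1] - vgrid[i])
    ts = (s - sgrid[j]) / (sgrid[j + 1] - sgrid[j])
    return ((1 - tv) * (1 - ts) * table[i][j] + tv * (1 - ts) * table[i + 1][j]
            + (1 - tv) * ts * table[i][j + 1] + tv * ts * table[i + 1][j + 1])
```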
Abstract:
A complete model of particle impact degradation during dilute-phase pneumatic conveying is developed, which combines a degradation model, based on the experimental determination of breakage matrices, and a physical model of solids and gas flow in the pipeline. The solids flow in a straight pipe element is represented by a model consisting of two zones: a strand-type flow zone immediately downstream of a bend, followed by a fully suspended flow region after dispersion of the strand. The breakage matrices constructed from data on 90° angle single-impact tests are shown to give a good representation of the degradation occurring in a pipe bend of 90° angle. Numerical results are presented for degradation of granulated sugar in a large scale pneumatic conveyor.
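The action of a breakage matrix on a particle size distribution is a matrix–vector product; a minimal sketch (the matrix entries and size fractions below are invented for illustration):

```python
def apply_breakage(B, x):
    """B[i][j]: mass fraction of input size class j ending up in size class i
    after one impact; x: mass fractions per size class before the impact.
    Each column of B sums to 1, so total mass is conserved.
    """
    return [sum(B[i][j] * x[j] for j in range(len(x))) for i in range(len(B))]
```

Successive bends in the pipeline correspond to repeated application of the matrix to the evolving size distribution.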
Abstract:
This paper describes recent developments made to the stress analysis module within FLOTHERM, extending its capability to handle viscoplastic behavior. It also presents the validation of this approach and results obtained for an SMT resistor as an illustrative example. Lifetime predictions are made using the creep strain energy based models of Darveaux. Comment is made about the applicability of the damage model to the geometry of the joint under study.
Abstract:
Software metrics are the key tool in software quality management. In this paper, we propose to use support vector machines for regression applied to software metrics to predict software quality. In experiments we compare this method with other regression techniques such as Multivariate Linear Regression, Conjunctive Rule and Locally Weighted Regression. Results on the benchmark dataset MIS, using mean absolute error and correlation coefficient as regression performance measures, indicate that support vector machine regression is a promising technique for software quality prediction. In addition, our investigation of PCA-based metrics extraction shows that using the first few Principal Components (PC) we can still obtain relatively good performance.
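Extracting the first few principal components from a metrics matrix can be sketched with an SVD (a generic illustration of PCA, not the paper's pipeline; variable names are ours):

```python
import numpy as np

def pc_scores(X, k):
    """Project rows of X onto the first k principal components.

    X: (samples x metrics) matrix; returns (samples x k) scores,
    ordered by decreasing explained variance.
    """
    Xc = X - X.mean(axis=0)                 # centre each metric column
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                    # scores on the first k PCs
```

The reduced score matrix would then feed the regression model in place of the raw metrics.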
Abstract:
A numerical modeling method for the prediction of the lifetime of solder joints with relatively large solder areas under cyclic thermal-mechanical loading conditions has been developed. The method is based on Miner's linear damage accumulation rule and the properties of the accumulated plastic strain in front of the crack in large-area solder joints. The nonlinear distribution of the damage indicator in the solder joints has been taken into account. The method has been used to calculate the lifetime of the solder interconnect in a power module under mixed cyclic loading conditions found in railway traction control applications. The results show that the solder thickness is a parameter that has a strong influence on the damage, and therefore the lifetime, of the solder joint, while the substrate width and the thickness of the baseplate are much less important for the lifetime.
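Miner's linear rule accumulates damage as the sum of applied-to-allowable cycle ratios; a minimal sketch (the cycle counts below are illustrative, not data from the power-module study):

```python
def miner_damage(applied, allowable):
    """applied[i]: cycles applied at load condition i;
    allowable[i]: cycles to failure under that condition alone.
    Failure is predicted when the returned damage sum reaches 1.
    """
    return sum(n / N for n, N in zip(applied, allowable))
```

Under repeated mixed loading blocks, the predicted lifetime is roughly the number of blocks needed for the per-block damage to sum to 1.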