892 results for agent-based modelling


Relevance:

30.00%

Publisher:

Abstract:

Field and laboratory observations have shown that a relatively low beach groundwater table enhances beach accretion. These observations have led to the beach dewatering technique (artificially lowering the beach water table) for combating beach erosion. Here we present a process-based numerical model that simulates the interacting wave motion on the beach, coastal groundwater flow, swash sediment transport and beach profile changes. Results of model simulations demonstrate that the model replicates the accretionary effects of a low beach water table on beach profile changes and has the potential to become a tool for assessing the effectiveness of beach dewatering systems. (C) 2002 Elsevier Science Ltd. All rights reserved.
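The coastal groundwater component of a process-based model like this can be illustrated with a toy sketch. This is not the paper's model: it is a single explicit finite-difference step of the 1-D Boussinesq equation for an unconfined water table, dh/dt = (K/n) d/dx(h dh/dx), with hypothetical hydraulic conductivity, porosity and grid values.

```python
def boussinesq_step(h, dx, dt, K=1e-3, n=0.3):
    """Advance water-table heights h (m) by one explicit time step dt (s).

    K: hydraulic conductivity (m/s), n: porosity -- illustrative values only.
    Boundary values are held fixed.
    """
    new_h = h[:]
    for i in range(1, len(h) - 1):
        # flux form of d/dx (h dh/dx) with centred differences
        flux_right = 0.5 * (h[i + 1] + h[i]) * (h[i + 1] - h[i]) / dx
        flux_left = 0.5 * (h[i] + h[i - 1]) * (h[i] - h[i - 1]) / dx
        new_h[i] = h[i] + dt * (K / n) * (flux_right - flux_left) / dx
    return new_h

# initial table with a drawn-down (dewatered) zone near the beach face
h = [1.0, 1.0, 0.5, 0.2, 0.2]
h1 = boussinesq_step(h, dx=1.0, dt=10.0)
```

After one step the water table at the drawdown point rises slightly as groundwater flows toward it; a real model couples this to wave run-up and swash sediment transport.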

Relevance:

30.00%

Publisher:

Abstract:

The principle of using induction rules based on spatial environmental data to model a soil map has previously been demonstrated. Whilst the general pattern of classes of large spatial extent, and those with close association with geology, were delineated, small classes and the detailed spatial pattern of the map were less well rendered. Here we examine several strategies to improve the quality of the soil map models generated by rule induction. Terrain attributes that are better suited to landscape description at a resolution of 250 m are introduced as predictors of soil type. A map sampling strategy is developed. Classification error is reduced by using boosting rather than cross-validation to improve the model. Further, the benefit of incorporating the local spatial context for each environmental variable into the rule induction is examined. The best model was achieved by sampling in proportion to the spatial extent of the mapped classes, boosting the decision trees, and using spatial contextual information extracted from the environmental variables.
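The sampling strategy the abstract reports as best can be sketched as follows: allocate training cells across soil classes in proportion to each class's mapped area. The class names, extents and sample budget below are invented for illustration, not taken from the study.

```python
def proportional_sample_sizes(class_extents, n_samples):
    """Allocate n_samples across classes in proportion to mapped spatial extent."""
    total = sum(class_extents.values())
    return {c: round(n_samples * area / total) for c, area in class_extents.items()}

# hypothetical mapped extents in 250 m grid cells
extents = {"alluvial": 5000, "red_earth": 3000, "skeletal": 1500, "peat": 500}
alloc = proportional_sample_sizes(extents, n_samples=200)
```

Large classes dominate the training set under this scheme, which the abstract found preferable to equal-sized samples; boosting the induced decision trees then recovers accuracy on the small classes.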

Relevance:

30.00%

Publisher:

Abstract:

Computational models complement laboratory experimentation for efficient identification of MHC-binding peptides and T-cell epitopes. Methods for prediction of MHC-binding peptides include binding motifs, quantitative matrices, artificial neural networks, hidden Markov models, and molecular modelling. Models derived by these methods have been successfully used for prediction of T-cell epitopes in cancer, autoimmunity, infectious disease, and allergy. For maximum benefit, the use of computer models must be treated as experiments analogous to standard laboratory procedures and performed according to strict standards. This requires careful selection of data for model building, and adequate testing and validation. A range of web-based databases and MHC-binding prediction programs are available. Although some available prediction programs for particular MHC alleles have reasonable accuracy, there is no guarantee that all models produce good quality predictions. In this article, we present and discuss a framework for modelling, testing, and applications of computational methods used in predictions of T-cell epitopes. (C) 2004 Elsevier Inc. All rights reserved.
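One of the prediction methods the abstract lists, the quantitative (position-weight) matrix, can be sketched in a few lines: a peptide's predicted binding score is the sum of per-position residue weights. The tiny three-position matrix and its weights below are invented for illustration; real MHC matrices cover full peptide lengths and all 20 residues.

```python
# position -> residue -> weight (hypothetical values, not a real MHC matrix)
MATRIX = [
    {"A": 1.2, "G": -0.5, "L": 0.3},
    {"A": -0.2, "G": 0.8, "L": 1.5},
    {"A": 0.4, "G": 0.1, "L": -1.0},
]

def matrix_score(peptide):
    """Sum position-specific weights; higher means stronger predicted binding."""
    return sum(MATRIX[i].get(res, 0.0) for i, res in enumerate(peptide))

score = matrix_score("ALG")  # 1.2 + 1.5 + 0.1
```

A threshold on such scores yields candidate binders for experimental confirmation, which is why the abstract stresses proper testing and validation of the model before use.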

Relevance:

30.00%

Publisher:

Abstract:

Background: Enoxaparin was superior to unfractionated heparin (UFH), regardless of fibrinolytic agent, in ST-elevation myocardial infarction (STEMI) patients receiving fibrinolytic therapy in the ExTRACT-TIMI 25 (Enoxaparin and Thrombolysis Reperfusion for Acute Myocardial Infarction Treatment Thrombolysis in Myocardial Infarction 25) trial. Objective: This post hoc analysis compared outcomes with streptokinase plus enoxaparin to the standard regimen of a fibrin-specific lytic (FSL) plus UFH and to the newer combination of an FSL plus enoxaparin. Methods: In ExTRACT-TIMI 25, STEMI patients received either streptokinase or an FSL (alteplase, reteplase or tenecteplase) at the physician's discretion and were randomized to enoxaparin or UFH, stratified by fibrinolytic type. Thirty-day outcomes were adjusted for baseline characteristics, region, in-hospital percutaneous coronary intervention (PCI) and a propensity score for the choice of lytic. Results: The primary trial endpoint of 30-day death/myocardial infarction (MI) occurred in fewer patients in the streptokinase-enoxaparin cohort (n = 2083) than in the FSL-UFH cohort (n = 8141) (10.2% vs 12.0%; adjusted odds ratio [OR(adj)] 0.76; 95% CI 0.62, 0.93; p = 0.008). Major bleeding was significantly increased with streptokinase-enoxaparin compared with FSL-UFH (OR(adj) 2.74; 95% CI 1.81, 4.14; p < 0.001), but intracranial haemorrhage (ICH) was similar (OR(adj) 0.90; 95% CI 0.40, 2.01; p = 0.79). Net clinical outcomes, defined as either death/MI/major bleeding or as death/MI/ICH, favoured or tended to favour streptokinase-enoxaparin compared with FSL-UFH (OR(adj) 0.88; 95% CI 0.73, 1.06; p = 0.17; and OR(adj) 0.77; 95% CI 0.63, 0.93; p = 0.008, respectively). Patients receiving FSL-enoxaparin (n = 8142) and streptokinase-enoxaparin therapies experienced similar adjusted rates of the primary endpoint (OR(adj) 1.08; 95% CI 0.87, 1.32; p = 0.49) and net clinical outcomes.
Conclusions: Our results suggest that fibrinolytic therapy with the combination of streptokinase and the potent anticoagulant agent enoxaparin resulted in similar adjusted outcomes compared with more costly regimens utilizing a FSL.
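The odds-ratio arithmetic behind figures like those reported can be sketched as below. Note this computes an unadjusted OR (Woolf's log method), so it will not reproduce the trial's adjusted values; the event counts are reconstructed from the stated rates and group sizes purely for illustration.

```python
import math

def odds_ratio_ci(a, n1, b, n2, z=1.96):
    """Unadjusted odds ratio and 95% CI for events a/n1 vs b/n2."""
    c, d = n1 - a, n2 - b
    or_ = (a / c) / (b / d)
    # standard error of log(OR), Woolf's method
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# ~10.2% of 2083 vs ~12.0% of 8141 (streptokinase-enoxaparin vs FSL-UFH)
or_, lo, hi = odds_ratio_ci(212, 2083, 977, 8141)
```

The unadjusted OR comes out near 0.83, close to but not equal to the adjusted 0.76, illustrating why the analysis adjusted for baseline characteristics and the propensity score.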

Relevance:

30.00%

Publisher:

Abstract:

The present work reports the characterization of superparamagnetic iron oxide nanoparticles coated with silicone, used as a contrast agent in magnetic resonance imaging of the gastrointestinal tract. The hydrodynamic size of the contrast agent is 281.2 nm, as determined by transmission electron microscopy, and a Fe(3)O(4) crystalline structure was identified by X-ray diffraction and confirmed by Mössbauer spectroscopy. The blocking temperature of 190 K was determined from magnetic measurements based on the zero-field-cooled and field-cooled methods. The hysteresis loops were measured at different temperatures below and above the blocking temperature. Ferromagnetic resonance analysis indicated the superparamagnetic nature of the nanoparticles, and a strong temperature dependence of the peak-to-peak linewidth Delta H(pp), gyromagnetic factor g, number of spins N(S) and relaxation time T(2) was observed. This behavior can be attributed to an increase in the superexchange interaction.
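The arithmetic behind a blocking temperature can be sketched from Néel relaxation: tau = tau0 * exp(K*V / (kB*T)), so a particle appears "blocked" once tau exceeds the measurement time, giving T_B = K*V / (kB * ln(tau_m / tau0)). The anisotropy constant, particle diameter and time constants below are assumed, magnetite-like values for illustration; they are not fitted to the paper's 190 K result.

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def blocking_temperature(K, d_nm, tau_m=100.0, tau0=1e-9):
    """T_B (K) for anisotropy K (J/m^3) and particle diameter d_nm (nm).

    tau_m: measurement time (s), tau0: attempt time (s) -- assumed values.
    """
    r = d_nm * 1e-9 / 2
    V = (4.0 / 3.0) * math.pi * r**3  # particle volume, m^3
    return K * V / (KB * math.log(tau_m / tau0))

t_b = blocking_temperature(K=1.1e4, d_nm=12.0)
```

Because T_B scales with particle volume, larger cores (or interparticle interactions, as the abstract's superexchange remark suggests) push the blocking temperature up.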

Relevance:

30.00%

Publisher:

Abstract:

Fogo selvagem (FS) is mediated by pathogenic, predominantly IgG4, anti-desmoglein 1 (Dsg1) autoantibodies and is endemic in Limao Verde, Brazil. IgG and IgG subclass autoantibodies were tested in a sample of 214 FS patients and 261 healthy controls by Dsg1 ELISA. For model selection, the sample was randomly divided into training (50%), validation (25%), and test (25%) sets. Using the training and validation sets, IgG4 was chosen as the best predictor of FS, with index values above 6.43 classified as FS. On the test set, IgG4 had a sensitivity of 92% (95% confidence interval (95% CI): 82-95%), a specificity of 97% (95% CI: 89-100%), and an area under the curve of 0.97 (95% CI: 0.94-1.00). The IgG4 positive predictive value (PPV) in Limao Verde (3% FS prevalence) was 49%. The sensitivity, specificity, and PPV of IgG anti-Dsg1 were 87%, 91%, and 23%, respectively. The IgG4-based classifier was validated by testing 11 FS patients before and after clinical disease and 60 Japanese pemphigus foliaceus patients. It classified 21 of 96 normal individuals from a Limao Verde cohort as having FS serology. On the basis of its PPV, half of the 21 individuals may currently have preclinical FS and could develop clinical disease in the future. Identifying individuals during preclinical FS will enhance our ability to identify the etiological agent(s) triggering FS.
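The PPV figures in the abstract follow directly from sensitivity, specificity and prevalence by Bayes' rule; this short sketch reproduces that arithmetic with the numbers reported above.

```python
def ppv(sens, spec, prev):
    """Positive predictive value at a given disease prevalence."""
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    return true_pos / (true_pos + false_pos)

ppv_igg4 = ppv(0.92, 0.97, 0.03)  # IgG4 at 3% prevalence
ppv_igg = ppv(0.87, 0.91, 0.03)   # total IgG at 3% prevalence
```

At 3% prevalence the IgG4 classifier's PPV rounds to 49% and total IgG's to 23%, matching the values quoted; the low prevalence is why even a 97%-specific test yields so many false positives.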

Relevance:

30.00%

Publisher:

Abstract:

Background. Renal failure is the most important comorbidity in patients with heart transplantation and is associated with increased mortality. The major cause of renal dysfunction is the toxic effect of calcineurin inhibitors (CNI). Sirolimus, a proliferation signal inhibitor, is an immunosuppressant recently introduced in cardiac transplantation. Its nonnephrotoxic properties make it an attractive immunosuppressive agent for patients with renal dysfunction. In this study, we evaluated the improvement in renal function after switching from a CNI to sirolimus among patients with new-onset kidney dysfunction after heart transplantation. Methods. The study included orthotopic cardiac transplant (OHT) patients who required discontinuation of CNI due to worsening renal function (creatinine clearance <50 mL/min). We excluded subjects who had another indication for initiation of sirolimus, that is, rejection, malignancy, or allograft vasculopathy. The patients were followed for 6 months. The creatinine clearance (CrCl) was estimated according to the Cockcroft-Gault equation using the baseline weight and the serum creatinine at the time of introduction of sirolimus and 6 months thereafter. Results. Nine patients were included; 7 (78%) were male, the overall mean age was 60.1 +/- 12.3 years, and the mean time since transplantation was 8.7 +/- 6.1 years. The allograft was beyond 1 year in all patients. There was a significant improvement in the serum creatinine (2.98 +/- 0.9 to 1.69 +/- 0.5 mg/dL, P = .01) and CrCl (24.9 +/- 6.5 to 45.7 +/- 17.2 mL/min, P = .005) at 6 months of follow-up. Conclusion. The replacement of CNI by sirolimus as immunosuppressive therapy for patients with renal failure after OHT was associated with a significant improvement in renal function after 6 months.
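The Cockcroft-Gault estimate used in the study is simple arithmetic: CrCl (mL/min) = (140 - age) * weight / (72 * serum creatinine), multiplied by 0.85 for women. The sketch below applies it to a hypothetical patient; the age and weight are illustrative, not taken from the study cohort.

```python
def cockcroft_gault(age, weight_kg, scr_mg_dl, female=False):
    """Estimated creatinine clearance (mL/min) by the Cockcroft-Gault equation."""
    crcl = (140 - age) * weight_kg / (72 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

# e.g. a hypothetical 60-year-old, 70 kg man with serum creatinine 2.98 mg/dL
crcl = cockcroft_gault(60, 70, 2.98)
```

With a serum creatinine near the cohort's baseline mean of 2.98 mg/dL, such a patient lands around 26 mL/min, well under the study's 50 mL/min threshold for switching to sirolimus.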

Relevance:

30.00%

Publisher:

Abstract:

We suggest a new notion of behaviour-preserving refinement based on partial order semantics, called transition refinement. We introduced transition refinement for elementary (low-level) Petri nets earlier. For modelling and verifying complex distributed algorithms, high-level (Algebraic) Petri nets are usually used. In this paper, we define transition refinement for Algebraic Petri nets. This notion is more powerful than transition refinement for elementary Petri nets because it corresponds to the simultaneous refinement of several transitions in an elementary Petri net. Transition refinement is particularly suitable for refinement steps that increase the degree of distribution of an algorithm, e.g. when synchronous communication is replaced by asynchronous message passing. We study how to prove that a replacement of a transition is a transition refinement.

Relevance:

30.00%

Publisher:

Abstract:

This paper describes a practical application of MDA and reverse engineering based on a domain-specific modelling language. A well-defined metamodel of a domain-specific language is useful for verification and validation of associated tools. We apply this approach to SIFA, a security analysis tool. SIFA has evolved as requirements have changed, and it has no metamodel; hence, testing SIFA's correctness is difficult. We introduce a formal metamodelling approach to develop a well-defined metamodel of the domain. Initially, we develop a domain model in EMF by reverse engineering the SIFA implementation. Then we transform the EMF model to Object-Z using model transformation. Finally, we complete the Object-Z model by specifying system behaviour. The outcome is a well-defined metamodel that precisely describes the domain and the security properties that it analyses. It also provides a reliable basis for testing the current SIFA implementation and forward engineering its successor.

Relevance:

30.00%

Publisher:

Abstract:

The anisotropic norm of a linear discrete-time-invariant system measures system output sensitivity to stationary Gaussian input disturbances of bounded mean anisotropy. Mean anisotropy characterizes the degree of predictability (or colouredness) and spatial non-roundness of the noise. The anisotropic norm falls between the H-2 and H-infinity norms and accommodates their loss of performance when the probability structure of input disturbances is not exactly known. This paper develops a method for the numerical computation of the anisotropic norm which involves linked Riccati and Lyapunov equations and an associated equation of a special type.
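For orientation, the H-2 end of the bracket the anisotropic norm sits in can be computed from a single Lyapunov equation. The sketch below does this for a scalar discrete-time system x+ = a*x + b*w, y = c*x, iterating the fixed point P = a*P*a + b*b so that the squared H-2 norm is c*P*c. The full anisotropic norm requires the paper's linked Riccati/Lyapunov system, which is not reproduced here; the system coefficients are illustrative.

```python
def h2_norm_scalar(a, b, c, iters=200):
    """H2 norm of a stable scalar discrete-time system via Lyapunov iteration."""
    assert abs(a) < 1, "system must be stable"
    P = 0.0
    for _ in range(iters):
        # discrete Lyapunov fixed point: P = a P a^T + b b^T (scalar case)
        P = a * P * a + b * b
    return (c * P * c) ** 0.5

norm = h2_norm_scalar(a=0.5, b=1.0, c=1.0)  # P converges to 1/(1 - a^2)
```

As the mean-anisotropy bound grows, the anisotropic norm moves from this H-2 value toward the (larger) H-infinity norm of the same system.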

Relevance:

30.00%

Publisher:

Abstract:

A combination of modelling and analysis techniques was used to design a six component force balance. The balance was designed specifically for the measurement of impulsive aerodynamic forces and moments characteristic of hypervelocity shock tunnel testing using the stress wave force measurement technique. Aerodynamic modelling was used to estimate the magnitude and distribution of forces and finite element modelling to determine the mechanical response of proposed balance designs. Simulation of balance performance was based on aerodynamic loads and mechanical responses using convolution techniques. Deconvolution was then used to assess balance performance and to guide further design modifications leading to the final balance design. (C) 2001 Elsevier Science Ltd. All rights reserved.
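The convolution step this design relies on can be sketched directly: the measured balance signal y is the applied load history u convolved with the balance's impulse response g, y[n] = sum_k u[k] g[n-k]. The signals below are invented placeholders, not the paper's aerodynamic loads or finite-element responses.

```python
def convolve(u, g):
    """Discrete convolution of input u with impulse response g."""
    y = [0.0] * (len(u) + len(g) - 1)
    for n in range(len(y)):
        for k in range(len(u)):
            if 0 <= n - k < len(g):
                y[n] += u[k] * g[n - k]
    return y

u = [0.0, 1.0, 1.0, 0.0]  # step-like aerodynamic load (illustrative)
g = [0.5, 0.3, 0.1]       # balance impulse response (illustrative)
y = convolve(u, g)        # simulated measured signal
```

Deconvolution, as used to assess balance performance, inverts this relation to recover u from the measured y and the known g.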

Relevance:

30.00%

Publisher:

Abstract:

Bond's method for ball mill scale-up only gives the mill power draw for a given duty. This method is incompatible with computer modelling and simulation techniques. It might not be applicable for the design of fine grinding ball mills and ball mills preceded by autogenous and semi-autogenous grinding mills. Model-based ball mill scale-up methods have not been validated using a wide range of full-scale circuit data, so their accuracy is questionable. Some of these methods also require expensive pilot testing. A new ball mill scale-up procedure is developed which does not have these limitations. This procedure uses data from two laboratory tests to determine the parameters of a ball mill model. A set of scale-up criteria then scales up these parameters. The procedure uses the scaled-up parameters to simulate the steady state performance of full-scale mill circuits. At the end of the simulation, the scale-up procedure gives the size distribution, the volumetric flowrate and the mass flowrate of all the streams in the circuit, and the mill power draw.
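For contrast with the model-based procedure, Bond's method reduces to a specific-energy formula, W (kWh/t) = 10 * Wi * (1/sqrt(P80) - 1/sqrt(F80)) with sizes in micrometres, from which the power draw follows for a given throughput. The work index and sizes below are illustrative values, not from the paper.

```python
import math

def bond_energy(wi, f80_um, p80_um):
    """Bond specific grinding energy (kWh/t); Wi is the work index (kWh/t)."""
    return 10 * wi * (1 / math.sqrt(p80_um) - 1 / math.sqrt(f80_um))

def bond_power(wi, f80_um, p80_um, tph):
    """Mill power draw (kW) for a throughput in t/h -- the one output Bond's method gives."""
    return bond_energy(wi, f80_um, p80_um) * tph

w = bond_energy(wi=14.0, f80_um=2000.0, p80_um=80.0)  # kWh/t, illustrative duty
```

Note the single scalar output: unlike the simulation-based procedure, Bond's method says nothing about stream size distributions or flowrates.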

Relevance:

30.00%

Publisher:

Abstract:

A new ball mill scale-up procedure is developed which uses laboratory data to predict the performance of full-scale ball mill circuits. This procedure contains two laboratory tests. These laboratory tests give the data for the determination of the parameters of a ball mill model. A set of scale-up criteria then scales up these parameters. The procedure uses the scaled-up parameters to simulate the steady state performance of the full-scale mill circuit. At the end of the simulation, the scale-up procedure gives the size distribution, the volumetric flowrate and the mass flowrate of all the streams in the circuit, and the mill power draw. A worked example shows how the new ball mill scale-up procedure is executed. This worked example uses laboratory data to predict the performance of a full-scale re-grind mill circuit. This circuit consists of a ball mill in closed circuit with hydrocyclones. The full-scale ball mill has a diameter (inside liners) of 1.85 m. The scale-up procedure shows that the full-scale circuit produces a product (hydrocyclone overflow) that has an 80% passing size of 80 μm. The circuit has a recirculating load of 173%. The calculated power draw of the full-scale mill is 92 kW. (C) 2001 Elsevier Science Ltd. All rights reserved.
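The recirculating-load bookkeeping in a closed ball-mill/hydrocyclone circuit like the worked example is simple: with new feed F and cyclone underflow U returned to the mill, the recirculating load is U/F expressed as a percentage. The flowrates below are invented, chosen only to land on the 173% figure reported.

```python
def recirculating_load(new_feed_tph, underflow_tph):
    """Recirculating load as a percentage of new feed (t/h basis)."""
    return 100.0 * underflow_tph / new_feed_tph

# hypothetical steady-state flowrates consistent with a 173% load
load = recirculating_load(new_feed_tph=20.0, underflow_tph=34.6)
```

At steady state the mill therefore processes new feed plus underflow, 2.73 times the new feed rate here, which is what the simulated stream flowrates must balance.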

Relevance:

30.00%

Publisher:

Abstract:

A new ball mill scale-up procedure is developed. This procedure has been validated using seven sets of full-scale ball mill data. The largest ball mills in these data have diameters (inside liners) of 6.58 m. The procedure can predict the 80% passing size of the circuit product to within +/-6% of the measured value, with a precision of +/-11% (one standard deviation); the re-circulating load to within +/-33% of the mass-balanced value (this error margin is within the uncertainty associated with the determination of the re-circulating load); and the mill power to within +/-5% of the measured value. This procedure is applicable for the design of ball mills which are preceded by autogenous (AG) mills, semi-autogenous (SAG) mills, crushers and flotation circuits. The new procedure is more precise and more accurate than Bond's method for ball mill scale-up. This procedure contains no efficiency correction relating to the mill diameter, which suggests that, within the range of mill diameters studied, milling efficiency does not vary with mill diameter. This is in contrast with Bond's equation: Bond claimed that milling efficiency increases with mill diameter. (C) 2001 Elsevier Science Ltd. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

The present paper addresses two major concerns that were identified when developing neural network based prediction models and which can limit their wider applicability in the industry. The first problem is that neural network models are not readily available to a corrosion engineer. Therefore the first part of this paper describes a neural network model of CO2 corrosion which was created using a standard commercial software package and simple modelling strategies. It was found that such a model was able to capture practically all of the trends noticed in the experimental data with acceptable accuracy. This exercise proved that a corrosion engineer could readily develop a neural network model such as the one described here for any problem at hand, given that sufficient experimental data exist. This applies even in cases where the understanding of the underlying processes is poor. The second problem arises when not all the required inputs for a model are known, or when they can be estimated only with a limited degree of accuracy. In such cases it is advantageous to have models that can take as input a range rather than a single value. One such model, based on the so-called Monte Carlo approach, is presented. A number of comparisons are shown which illustrate how a corrosion engineer might use this approach to rapidly test the sensitivity of a model to the uncertainties associated with the input parameters. (C) 2001 Elsevier Science Ltd. All rights reserved.
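The Monte Carlo idea in the second part of the abstract can be sketched as: when an input is only known as a range, sample it repeatedly, run the point-prediction model each time, and report the spread of the predictions. The "corrosion model" here is a hypothetical stand-in function, not the paper's neural network, and the input ranges are invented.

```python
import random
import statistics

def toy_corrosion_rate(temp_c, ph):
    """Hypothetical stand-in for a trained point-prediction model (mm/y)."""
    return 0.1 * temp_c - 0.5 * ph + 4.0

def monte_carlo(model, temp_range, ph_range, n=5000, seed=42):
    """Propagate uniform input ranges through the model; return mean and spread."""
    rng = random.Random(seed)
    preds = [model(rng.uniform(*temp_range), rng.uniform(*ph_range))
             for _ in range(n)]
    return statistics.mean(preds), statistics.stdev(preds)

mean, spread = monte_carlo(toy_corrosion_rate,
                           temp_range=(40, 60), ph_range=(4, 6))
```

The spread of the output is exactly the sensitivity information the abstract describes: wide input ranges that barely move the prediction are uncritical, while narrow ranges that swing it demand better measurement.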