994 results for parallel modeling
Abstract:
Chagas disease (American trypanosomiasis) is one of the most important parasitic diseases, with serious social and economic impacts mainly in Latin America. This work reports the synthesis, in vitro trypanocidal evaluation, cytotoxicity assays, and molecular modeling and SAR/QSAR studies of a new series of N-phenylpyrazole benzylidene-carbohydrazides. The results pointed to 6k (X = H, Y = p-NO2, pIC50 = 4.55 M) and 6l (X = F, Y = p-CN, pIC50 = 4.27 M) as the most potent derivatives compared to crystal violet (pIC50 = 3.77 M). The halogen-benzylidene-carbohydrazide presented the lowest potency, whereas 6l showed the most promising profile with low toxicity (0% cell death). The best equation from the 4D-QSAR analysis (Model 1) was able to explain 85% of the activity variability. The QSAR graphical representation revealed that bulky X-substituents decreased the potency whereas hydrophobic and hydrogen bond acceptor Y-substituents increased it. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
Discrete element method (DEM) modeling is used in parallel with a model for coalescence of deformable surface wet granules. This produces a method capable of predicting both collision rates and coalescence efficiencies for use in derivation of an overall coalescence kernel. These coalescence kernels can then be used in computationally efficient meso-scale models such as population balance equation (PBE) models. A soft-sphere DEM model using periodic boundary conditions and a unique boxing scheme was utilized to simulate particle flow inside a high-shear mixer. Analysis of the simulation results provided collision frequency, aggregation frequency, kinetic energy, coalescence efficiency and compaction rates for the granulation process. This information can be used to bridge the gap in multi-scale modeling of granulation processes between the micro-scale DEM/coalescence modeling approach and a meso-scale PBE modeling approach.
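The soft-sphere DEM approach mentioned above resolves collisions by letting particles overlap slightly and computing a repulsive contact force from that overlap. A minimal sketch of one common normal-force law, a linear spring-dashpot (the function name and the stiffness/damping values are illustrative, not taken from the paper):

```python
def spring_dashpot_force(overlap, rel_normal_velocity, k=1.0e4, c=5.0):
    """Normal contact force for a soft-sphere DEM collision.

    overlap: penetration depth between two particles (m), positive when in contact
    rel_normal_velocity: approach speed along the contact normal (m/s)
    k: spring stiffness (N/m), c: damping coefficient (N s/m) -- illustrative values
    """
    if overlap <= 0.0:
        return 0.0  # particles are not in contact
    # Linear spring resists penetration; dashpot dissipates kinetic energy
    return k * overlap + c * rel_normal_velocity

# A head-on approach produces a repulsive (positive) force
f = spring_dashpot_force(1e-4, 0.1)
```

In a full DEM loop this force, plus a tangential counterpart, feeds Newton's second law for every contacting pair at each time step; the collision and coalescence statistics reported above come from aggregating those pair events.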
Abstract:
Business process design is primarily driven by process improvement objectives. However, the role of control objectives stemming from regulations and standards is becoming increasingly important for businesses in light of recent events that led to some of the largest scandals in corporate history. As organizations strive to meet compliance agendas, there is an evident need for systematic approaches that assist in understanding the interplay between (often conflicting) business and control objectives during business process design. In this paper, our objective is twofold. We first present a research agenda in the space of business process compliance, identifying major technical and organizational challenges. We then tackle a part of the overall problem space, which deals with the effective modeling of control objectives and subsequently their propagation onto business process models. Control objective modeling is proposed through a specialized modal logic based on normative systems theory, and the visualization of control objectives on business process models is achieved procedurally. The proposed approach is demonstrated in the context of a purchase-to-pay scenario.
Abstract:
This paper presents the recent finding by Muhlhaus et al. [1] that bifurcation of crack growth patterns exists for arrays of two-dimensional cracks. This bifurcation is a result of the nonlinear effect due to crack interaction, which is, in the present analysis, approximated by the dipole asymptotic or pseudo-traction method. The nonlinear parameter for the problem is the crack length/spacing ratio lambda = a/h. For parallel and edge crack arrays under far-field tension, uniform crack growth patterns (all cracks having the same size) yield to nonuniform crack growth patterns (i.e. bifurcation) if lambda is larger than a critical value lambda(cr) (note that such bifurcation is not found for collinear crack arrays). For parallel and edge crack arrays respectively, the value of lambda(cr) decreases monotonically from (2/9)^(1/2) and (2/15.096)^(1/2) for arrays of 2 cracks, to (2/3)^(1/2)/pi and (2/5.032)^(1/2)/pi for infinite arrays of cracks. The critical parameter lambda(cr) is calculated numerically for arrays of up to 100 cracks, whilst the discrete Fourier transform is used to obtain the exact solution of lambda(cr) for infinite crack arrays. For geomaterials, bifurcation can also occur when arrays of sliding cracks are under compression.
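The closed-form limits quoted above are easy to evaluate numerically, which makes the monotonic decrease of lambda(cr) with array size concrete:

```python
import math

# Critical crack length/spacing ratio lambda_cr for parallel crack arrays
lam_par_2 = math.sqrt(2.0 / 9.0)             # array of 2 cracks,  ~0.471
lam_par_inf = math.sqrt(2.0 / 3.0) / math.pi  # infinite array,     ~0.260

# Corresponding limits for edge crack arrays
lam_edge_2 = math.sqrt(2.0 / 15.096)            # array of 2 cracks, ~0.364
lam_edge_inf = math.sqrt(2.0 / 5.032) / math.pi  # infinite array,    ~0.201
```

Larger arrays thus bifurcate at smaller crack length/spacing ratios, i.e. crack interaction destabilizes uniform growth sooner as the number of cracks grows.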
Abstract:
The cost of spatial join processing can be very high because of the large sizes of spatial objects and the computation-intensive spatial operations. While parallel processing seems a natural solution to this problem, it is not clear how spatial data can be partitioned for this purpose. Various spatial data partitioning methods are examined in this paper. A framework combining the data-partitioning techniques used by most parallel join algorithms in relational databases and the filter-and-refine strategy for spatial operation processing is proposed for parallel spatial join processing. Object duplication caused by multi-assignment in spatial data partitioning can result in extra CPU cost as well as extra communication cost. We find that the key to overcoming this problem is to preserve spatial locality in task decomposition. We show in this paper that a near-optimal speedup can be achieved for parallel spatial join processing using our new algorithms.
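The multi-assignment duplication discussed above arises when an object's extent overlaps more than one partition, so a copy must be shipped to every overlapped partition. A minimal sketch of uniform-grid partitioning with bounding rectangles (the grid size, extent, and rectangle format are illustrative assumptions, not the paper's algorithm):

```python
def assign_to_cells(rect, grid=2, extent=100.0):
    """Assign a rectangle (xmin, ymin, xmax, ymax) to every grid cell it overlaps.

    Returns the set of (row, col) cell ids. An object straddling a cell
    boundary is duplicated into each overlapped cell (multi-assignment).
    """
    xmin, ymin, xmax, ymax = rect
    cell = extent / grid
    cells = set()
    for row in range(grid):
        for col in range(grid):
            cx0, cy0 = col * cell, row * cell
            # Open-interval overlap test between rect and this cell
            if xmin < cx0 + cell and xmax > cx0 and ymin < cy0 + cell and ymax > cy0:
                cells.add((row, col))
    return cells

# A rectangle straddling the vertical midline (x = 50) lands in two cells
cells = assign_to_cells((45.0, 10.0, 55.0, 20.0))
```

The duplicated copies are precisely what inflates CPU and communication cost, which is why decompositions that preserve spatial locality reduce both.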
Abstract:
Ex vivo hematopoiesis is increasingly used for clinical applications. Models of ex vivo hematopoiesis are required to better understand the complex dynamics and to optimize hematopoietic culture processes. A general mathematical modeling framework is developed which uses traditional chemical engineering metaphors to describe the complex hematopoietic dynamics. Tanks and tubular reactors are used to describe the (pseudo-)stochastic and deterministic elements of hematopoiesis, respectively. Cells at any point in the differentiation process can belong to either an immobilized, inert phase (quiescent cells) or a mobile, active phase (cycling cells). The model describes five processes: (1) flow (differentiation), (2) autocatalytic formation (growth), (3) degradation (death), (4) phase transition from immobilized to mobile phase (quiescent to cycling transition), and (5) phase transition from mobile to immobilized phase (cycling to quiescent transition). The modeling framework is illustrated with an example concerning the effect of TGF-beta 1 on erythropoiesis. (C) 1998 Published by Elsevier Science Ltd. All rights reserved.
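The five processes above can be written as two coupled rate equations for the quiescent (immobilized) and cycling (mobile) cells at a single differentiation stage. A sketch with an explicit-Euler step (all rate-constant names and values are illustrative assumptions, not the paper's):

```python
def step(Q, C, dt, k_grow=0.05, k_death=0.01, k_act=0.02, k_deact=0.01, k_diff=0.03):
    """One explicit-Euler step for a two-phase cell compartment.

    Q: quiescent (immobilized, inert) cells; C: cycling (mobile, active) cells.
    Growth, death, and differentiation outflow act on cycling cells only;
    the two phase transitions exchange cells between Q and C.
    """
    dQ = k_deact * C - k_act * Q
    dC = (k_grow - k_death - k_diff) * C + k_act * Q - k_deact * C
    return Q + dt * dQ, C + dt * dC

# One small time step from an initial population
q1, c1 = step(100.0, 10.0, dt=0.1)
```

Chaining such stages, with the differentiation outflow of one stage feeding the next, reproduces the tanks-in-series picture the abstract describes.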
Abstract:
Coset enumeration is one of the most important procedures for investigating finitely presented groups. We present a practical parallel procedure for coset enumeration on shared-memory processors. The shared-memory architecture is particularly interesting because such parallel computation is both faster and cheaper: the lower cost comes when the program requires large amounts of memory, and additional CPUs allow us to reduce the time that the expensive memory is in use. Rather than report on a suite of test cases, we take a single, typical case and analyze the performance factors in depth. The parallelization is achieved through a master-slave architecture. This results in an interesting phenomenon, whereby the CPU time is divided into a sequential and a parallel portion, and the parallel part demonstrates a speedup that is linear in the number of processors. We describe an early version for which only 40% of the program was parallelized, and we describe how this was modified to achieve 90% parallelization while using 15 slave processors and a master. In the latter case, a sequential time of 158 seconds was reduced to 29 seconds using 15 slaves.
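The reported figures are consistent with an Amdahl's-law estimate; a quick check, assuming the 90% figure is the parallelizable fraction of total work:

```python
def amdahl_time(t_total, parallel_fraction, workers):
    """Predicted wall time when only parallel_fraction of the work scales
    with the number of workers (Amdahl's law)."""
    return t_total * ((1.0 - parallel_fraction) + parallel_fraction / workers)

predicted = amdahl_time(158.0, 0.90, 15)  # ideal time on 15 slaves, ~25.3 s
speedup = 158.0 / 29.0                    # observed speedup, ~5.4x
```

The predicted ~25 s sits slightly below the measured 29 s, the gap being plausibly master-slave coordination overhead; the 10% sequential portion alone caps the achievable speedup at 10x regardless of slave count.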
Abstract:
A new model proposed for the gasification of chars and carbons incorporates features of the turbostratic nanoscale structure that exists in such materials. The model also considers the effect of initial surface chemistry and the different reactivities perpendicular to the edges and to the faces of the underlying crystallite planes comprising the turbostratic structure. It may be more realistic than earlier models based on pore or grain structure idealizations when the carbon contains large amounts of crystallite matter. Shrinkage of the carbon particles in the chemically controlled regime is also possible, due to the random complete gasification of crystallite planes. This mechanism can explain observations in the literature of particle size reduction. Based on the model predictions, both initial surface chemistry and the number of stacked planes in the crystallites strongly influence the reactivity and particle shrinkage. Tests of the model agree well with literature data on the air-oxidation of Spherocarb and show that it accurately predicts the variation of particle size with conversion. Model parameters are determined entirely from rate measurements.
Abstract:
An extension of the Adachi model with the adjustable broadening function, instead of the Lorentzian one, is employed to model the optical constants of GaP, InP, and InAs. Adjustable broadening is modeled by replacing the damping constant with the frequency-dependent expression. The improved flexibility of the model enables achieving an excellent agreement with the experimental data. The relative rms errors obtained for the refractive index equal 1.2% for GaP, 1.0% for InP, and 1.6% for InAs. (C) 1999 American Institute of Physics. [S0021-8979(99)05807-7].
Abstract:
An analytical approach to the stress development in the coherent dendritic network during solidification is proposed. Under the assumption that stresses are developed in the network as a result of the friction resisting shrinkage-induced interdendritic fluid flow, the model predicts the stresses in the solid. The calculations reflect the expected effects of postponed dendrite coherency, slower solidification conditions, and variations of eutectic volume fraction and shrinkage. Comparing the calculated stresses to the measured shear strength of equiaxed mushy zones shows that it is possible for the stresses to exceed the strength, thereby resulting in reorientation or collapse of the dendritic network.
Abstract:
The extension of Adachi's model with a Gaussian-like broadening function, in place of Lorentzian, is used to model the optical dielectric function of the alloy AlxGa1-xAs. Gaussian-like broadening is accomplished by replacing the damping constant in the Lorentzian line shape with a frequency dependent expression. In this way, the comparative simplicity of the analytic formulas of the model is preserved, while the accuracy becomes comparable to that of more intricate models, and/or models with significantly more parameters. The employed model accurately describes the optical dielectric function in the spectral range from 1.5 to 6.0 eV within the entire alloy composition range. The relative rms error obtained for the refractive index is below 2.2% for all compositions. (C) 1999 American Institute of Physics. [S0021-8979(99)00512-5].
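A common way to implement such frequency-dependent damping in extensions of Adachi's model is to let the damping constant decay exponentially with distance from the critical-point energy, which turns the Lorentzian tails Gaussian-like; the exact functional form and parameter names below are our assumption for illustration, not a quotation from the paper:

```python
import math

def adjustable_damping(E, E0, gamma, alpha):
    """Frequency-dependent damping: equals gamma at the critical point E0,
    reduces to the Lorentzian case for alpha = 0, and suppresses the
    line-shape tails (Gaussian-like broadening) as alpha grows."""
    return gamma * math.exp(-alpha * ((E - E0) / gamma) ** 2)

def oscillator_im(E, E0, gamma, alpha, strength=1.0):
    """Imaginary part of a damped-harmonic-oscillator dielectric term with
    the adjustable damping substituted for the constant gamma."""
    g = adjustable_damping(E, E0, gamma, alpha)
    denom = (E0**2 - E**2) ** 2 + (g * E) ** 2
    return strength * g * E / denom
```

Because the substitution only touches the damping term, the analytic simplicity of the original Lorentzian formulas is preserved, which is the point the abstract makes.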
Abstract:
Optical constants of AlSb, GaSb, and InSb are modeled in the 1-6 eV spectral range. We employ an extension of Adachi's model of the optical constants of semiconductors. The model takes into account transitions at the E-0, E-0 + Delta(0), E-1, and E-1 + Delta(1) critical points, as well as higher-lying transitions, which are modeled with three damped harmonic oscillators. We do not consider the contribution of indirect transitions, since it represents a second-order perturbation and its strength should be low. Also, we do not take into account excitonic effects at the E-1 and E-1 + Delta(1) critical points, since we model room-temperature data. In spite of fewer contributions to the dielectric function compared to previous calculations involving Adachi's model, our calculations show significantly improved agreement with the experimental data. This is due to the two main distinguishing features of the calculations presented here: use of adjustable line broadening instead of the conventional Lorentzian one, and employment of a global optimization routine for model parameter determination.
Abstract:
The conventional convection-dispersion (also called axial dispersion) model is widely used to interrelate hepatic availability (F) and clearance (Cl) with the morphology and physiology of the liver and to predict effects such as changes in liver blood flow on F and Cl. An extended form of the convection-dispersion model has been developed to adequately describe the outflow concentration-time profiles for vascular markers at both short and long times after bolus injections into perfused livers. The model, based on flux concentration and a convolution of catheters and large vessels, assumes that solute elimination in hepatocytes follows either fast distribution into or radial diffusion in hepatocytes. The model includes a secondary vascular compartment, postulated to be interconnecting sinusoids. Analysis of the mean hepatic transit time (MTT) and normalized variance (CV2) of solutes with extraction showed that the predictions of MTT and CV2 for the extended and conventional models are essentially identical irrespective of the magnitude of rate constants representing permeability, volume, and clearance parameters, provided that there is significant hepatic extraction. In conclusion, the application of a newly developed extended convection-dispersion model has shown that the unweighted conventional convection-dispersion model can be used to describe the disposition of extracted solutes and, in particular, to estimate hepatic availability and clearance in both experimental and clinical situations.
Abstract:
In this and a preceding paper, we provide an introduction to the Fujitsu VPP range of vector-parallel supercomputers and to some of the computational chemistry software available for the VPP. Here, we consider the implementation and performance of seven popular chemistry application packages. The codes discussed range from classical molecular dynamics to semiempirical and ab initio quantum chemistry. All have evolved from sequential codes, and have typically been parallelised using a replicated data approach. As such they are well suited to the large-memory/fast-processor architecture of the VPP. For one code, CASTEP, a distributed-memory data-driven parallelisation scheme is presented. (C) 2000 Published by Elsevier Science B.V. All rights reserved.
Abstract:
An extensive research program focused on the characterization of various metallurgical complex smelting and coal combustion slags is being undertaken. The research combines both experimental and thermodynamic modeling studies. The approach is illustrated by work on the PbO-ZnO-Al2O3-FeO-Fe2O3-CaO-SiO2 system. Experimental measurements of the liquidus and solidus have been undertaken under oxidizing and reducing conditions using equilibration, quenching, and electron probe X-ray microanalysis. The experimental program has been planned so as to obtain data for thermodynamic model development as well as for pseudo-ternary liquidus diagrams that can be used directly by process operators. Thermodynamic modeling has been carried out using the computer system FACT, which contains thermodynamic databases with over 5000 compounds and evaluated solution models. The FACT package is used for the calculation of multiphase equilibria in multicomponent systems of industrial interest. A modified quasi-chemical solution model is used for the liquid slag phase. New optimizations have been carried out, which significantly improve the accuracy of the thermodynamic models for lead/zinc smelting and coal combustion processes. Examples of experimentally determined and calculated liquidus diagrams are presented. These examples provide information of direct relevance to various metallurgical smelting and coal combustion processes.