982 results for Co-simulation
Abstract:
In this paper, dynamic modeling and simulation of the hydropurification reactor in a purified terephthalic acid production plant is investigated with a gray-box technique to evaluate the catalytic activity of a palladium-on-carbon (0.5 wt.% Pd/C) catalyst. The reaction kinetics and catalyst deactivation trend are modeled with an artificial neural network (ANN), whose output is incorporated into the reactor first-principles model (FPM). The simulation results reveal that the gray-box model (FPM plus ANN) is about 32 percent more accurate than the FPM alone. The model shows that the catalyst is deactivated after eleven months, and that catalyst lifetime decreases by about two and a half months for a 7 percent increase in reactor feed flowrate. A 10 percent increase in hydrogen flowrate is predicted to extend catalyst lifetime by about one month. Additionally, a higher 4-carboxybenzaldehyde concentration in the reactor feed promotes CO and benzoic acid synthesis; CO poisons the catalyst, and benzoic acid might affect product quality. The model can be applied in working plants to analyze the efficient functioning of the Pd/C catalyst and the performance of the catalytic reactor.
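The gray-box structure described above, a first-principles backbone corrected by a neural network trained on its residuals, can be sketched as follows; the FPM, the "plant" data and the one-hidden-layer network are toy stand-ins, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "first-principles model": first-order conversion vs. temperature
def fpm(T):
    return 1.0 - np.exp(-0.002 * (T - 400.0))

# Synthetic plant data: FPM plus an unmodeled deactivation-like bias and noise
T = np.linspace(420.0, 520.0, 40)
plant = fpm(T) - 0.05 * np.sin((T - 420.0) / 30.0) + rng.normal(0, 0.002, T.size)

# Tiny one-hidden-layer ANN trained on the FPM residuals (the gray-box correction)
x = ((T - T.mean()) / T.std()).reshape(-1, 1)
y = (plant - fpm(T)).reshape(-1, 1)
W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    h = np.tanh(x @ W1 + b1)
    err = (h @ W2 + b2) - y
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    gh = (err @ W2.T) * (1 - h**2)          # back-propagate through tanh
    gW1 = x.T @ gh / len(x); gb1 = gh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

gray = fpm(T) + (np.tanh(x @ W1 + b1) @ W2 + b2).ravel()
rmse_fpm = np.sqrt(np.mean((fpm(T) - plant) ** 2))
rmse_gray = np.sqrt(np.mean((gray - plant) ** 2))
print(rmse_gray < rmse_fpm)   # the hybrid should track the data more closely
```

The key design point is that the ANN never replaces the physics; it only learns the residual the FPM cannot capture, which is what makes the combined model more accurate.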
Abstract:
Two experimental studies were conducted to examine whether the stress-buffering effects of behavioral control on work task responses varied as a function of procedural information. Study 1 manipulated low and high levels of task demands, behavioral control, and procedural information for 128 introductory psychology students completing an in-basket activity. ANOVA procedures revealed a significant three-way interaction among these variables in the prediction of subjective task performance and task satisfaction. It was found that procedural information buffered the negative effects of task demands on ratings of performance and satisfaction only under conditions of low behavioral control. This pattern of results suggests that procedural information may have a compensatory effect when the work environment is characterized by a combination of high task demands and low behavioral control. Study 2 (N=256) utilized simple and complex versions of the in-basket activity to examine the extent to which the interactive relationship among task demands, behavioral control, and procedural information varied as a function of task complexity. There was further support for the stress-buffering role of procedural information on work task responses under conditions of low behavioral control. This effect was, however, only present when the in-basket activity was characterized by high task complexity, suggesting that the interactive relationship among these variables may depend on the type of tasks performed at work.
Abstract:
The co-curing process for advanced grid-stiffened (AGS) composite structures is a promising manufacturing route that can reduce manufacturing cost and improve the performance of AGS composite structures. An improved method, a soft-mold aided co-curing process that replaces the expansion molds with a single rubber mold, is adopted in this paper. This process can co-cure a typical AGS composite structure with the manufacturer's recommended cure cycle (MRCC). Numerical models are developed to evaluate the variation of temperature and degree of cure in the AGS composite structure during the soft-mold aided co-curing process, and the simulation results were validated against experimental data from embedded temperature sensors. Based on the validated modeling framework, the cure cycle can be optimized to less than half the MRCC time while still achieving a reliable degree of cure. The effects of the shape and size of the AGS composite structure on the distributions of temperature and degree of cure are also investigated to provide insights for optimizing the soft-mold aided co-curing process.
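The cure-kinetics side of such simulations is often based on an nth-order Arrhenius rate for the degree of cure; a minimal sketch, with A, E and n as illustrative values rather than the resin's fitted parameters:

```python
import math

# Hypothetical nth-order cure kinetics: d(alpha)/dt = A*exp(-E/RT)*(1-alpha)^n
# A, E and n are illustrative values, not fitted resin parameters.
A, E, n, R = 1.5e5, 6.0e4, 1.2, 8.314   # 1/s, J/mol, -, J/(mol K)

def cure_profile(T_hold_C, hours, dt=1.0):
    """Euler-integrate the degree of cure at an isothermal hold temperature."""
    T = T_hold_C + 273.15
    k = A * math.exp(-E / (R * T))      # Arrhenius rate constant at the hold
    alpha = 0.0
    for _ in range(int(hours * 3600 / dt)):
        alpha += dt * k * (1.0 - alpha) ** n
        alpha = min(alpha, 1.0)
    return alpha

print(cure_profile(120.0, 2.0))   # degree of cure after a 2 h hold at 120 C
```

A cure-cycle optimization of the kind described above amounts to searching over hold temperatures and durations for the shortest schedule whose final degree of cure stays above a reliability threshold.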
Abstract:
The rupture of atherosclerotic plaques is known to be associated with the stresses that act on or within the arterial wall. Extreme wall tensile stress (WTS) is usually recognized as a primary trigger for the rupture of vulnerable plaque. The present study used in-vivo high-resolution multi-spectral magnetic resonance imaging (MRI) for carotid arterial plaque morphology reconstruction. Image segmentation of the different plaque components was based on the multi-spectral MRI and co-registered across the different sequences for each patient. Stress analysis was performed on a total of four subjects with different plaque burdens using fluid-structure interaction (FSI) simulations. Wall shear stress distributions are highly related to the degree of stenosis, while their magnitude is much lower than that of the WTS in the fibrous cap. WTS is higher at the luminal wall and lower at the outer wall, with the lowest stress in the lipid region. Local stress concentrations are well confined to the thinner fibrous cap region, usually at the plaque shoulder; the relative stress variation in the fibrous cap over a cardiac cycle is introduced as a potential indicator of the fatigue process in a thin fibrous cap. From the stress analysis of the four subjects, a risk assessment in terms of mechanical factors could be made, which may be helpful in clinical practice. However, more subjects with patient-specific analysis are desirable for plaque-stability studies.
Abstract:
The widespread and increasing resistance of internal parasites to anthelmintic control is a serious problem for the Australian sheep and wool industry. As part of control programmes, laboratories use the Faecal Egg Count Reduction Test (FECRT) to determine resistance to anthelmintics. It is important to have confidence in the measure of resistance, not only for the producer planning a drenching programme but also for companies investigating the efficacy of their products. The determination of resistance and corresponding confidence limits as given in anthelmintic efficacy guidelines of the Standing Committee on Agriculture (SCA) is based on a number of assumptions. This study evaluated the appropriateness of these assumptions for typical data and compared the effectiveness of the standard FECRT procedure with the effectiveness of alternative procedures. Several sets of historical experimental data from sheep and goats were analysed to determine that a negative binomial distribution was a more appropriate distribution to describe pre-treatment helminth egg counts in faeces than a normal distribution. Simulated egg counts for control animals were generated stochastically from negative binomial distributions and those for treated animals from negative binomial and binomial distributions. Three methods for determining resistance when percent reduction is based on arithmetic means were applied. The first was that advocated in the SCA guidelines, the second similar to the first but basing the variance estimates on negative binomial distributions, and the third using Wadley’s method with the distribution of the response variate assumed negative binomial and a logit link transformation. These were also compared with a fourth method recommended by the International Co-operation on Harmonisation of Technical Requirements for Registration of Veterinary Medicinal Products (VICH) programme, in which percent reduction is based on the geometric means. 
A wide selection of parameters was investigated, and 1000 simulations were run for each set. Percent reduction and confidence limits were then calculated for each method, together with the number of times in each set of 1000 simulations that the theoretical percent reduction fell within the estimated confidence limits and the number of times resistance would have been declared. These simulations provide the basis for setting conditions under which the methods can be recommended. The authors show that, given the distribution of helminth egg counts found in Queensland flocks, the method based on arithmetic rather than geometric means should be used, and suggest that resistance be redefined as occurring when the upper confidence limit of percent reduction is less than 95%. At least ten animals per group are required in most circumstances, though even 20 may be insufficient where the effectiveness of the product is close to the cut-off point for defining resistance.
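A minimal FECRT simulation in the spirit described above: egg counts are sampled from a negative binomial (via its gamma-Poisson construction), percent reduction is based on arithmetic means, and the suggested upper-limit-below-95% criterion is applied. Group sizes, means and dispersion are illustrative, and a simple bootstrap stands in for the guideline variance formulas:

```python
import numpy as np

rng = np.random.default_rng(42)

def nb_counts(n_animals, mean, k):
    """Negative binomial egg counts with mean `mean` and dispersion `k`,
    drawn via the gamma-Poisson mixture (illustrative parameter values)."""
    lam = rng.gamma(shape=k, scale=mean / k, size=n_animals)
    return rng.poisson(lam)

def fecrt(control, treated, n_boot=2000):
    """Arithmetic-mean percent reduction with a bootstrap confidence interval."""
    pr = 100.0 * (1.0 - treated.mean() / control.mean())
    boots = []
    for _ in range(n_boot):
        c = rng.choice(control, control.size)
        t = rng.choice(treated, treated.size)
        if c.mean() > 0:
            boots.append(100.0 * (1.0 - t.mean() / c.mean()))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return pr, lo, hi

control = nb_counts(10, mean=500.0, k=1.0)          # 10 animals per group
treated = nb_counts(10, mean=500.0 * 0.10, k=1.0)   # drug removes ~90% of worms
pr, lo, hi = fecrt(control, treated)
# Redefinition suggested by the authors: resistance when the *upper* limit < 95%
print(f"reduction {pr:.1f}% (95% CI {lo:.1f}-{hi:.1f}); resistant: {hi < 95.0}")
```

With highly dispersed counts (small k) the interval is wide even at 10 animals per group, which is exactly why the authors caution that 20 animals may still be insufficient near the resistance cut-off.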
Abstract:
Thermonuclear fusion is a sustainable energy solution in which energy is produced by processes similar to those in the sun. In this technology hydrogen isotopes are fused to release energy and consequently to produce electricity. In a fusion reactor the hydrogen isotopes are confined by magnetic fields as an ionized gas, the plasma. Since the core plasma is millions of degrees hot, the plasma-facing materials must meet special requirements. Moreover, the fusion of hydrogen isotopes in the plasma produces highly energetic neutrons, which places demanding requirements on the structural materials of the reactor. This thesis investigates the irradiation response of materials to be used in future fusion reactors. Interactions of the plasma with the reactor wall lead to the removal and migration of surface atoms and to the formation of co-deposited layers such as tungsten carbide. Sputtering of tungsten carbide and deuterium trapping in tungsten carbide were investigated in this thesis. As the second topic, the primary interaction of neutrons in the structural material steel was examined, with iron chromium and iron nickel used as model materials for steel. The study was performed theoretically by means of atomic-level computer simulations. In contrast to previous studies in the field, in which simulations were limited to pure elements, this work used more complex, multi-elemental materials containing two or more atom species. The results of this thesis are on the microscale. One result is a catalogue of the atom species removed from tungsten carbide by the plasma; another is the atomic distribution of defects caused in iron chromium by the energetic neutrons. These microscopic results feed databases for multiscale modelling of fusion reactor materials, which aims to explain the macroscopic degradation of the materials.
This thesis is therefore a relevant contribution to connecting microscopic and macroscopic radiation effects, which is one objective of fusion reactor materials research.
Abstract:
This paper presents a numerical simulation of the well-documented, fluid-controlled Kabbal- and Ponmudi-type gneiss-charnockite transformations in southern India using a free energy minimization method. The computations consider all the major solid phases and important fluid species in the rock - C-O-H and rock - C-O-H-N systems. Appropriate activity-composition relations for the solid solutions and equations of state for the fluids are included in order to evaluate the mineral-fluid equilibria attending the incipient charnockite development in the gneisses. The C-O-H fluid speciation pattern in both the Kabbal- and Ponmudi-type systems indicates that CO2 and H2O make up the bulk of the fluid phase, with CO, CH4, H2 and O2 as minor constituents. In the graphite-buffered Ponmudi system, the abundances of CO, CH4 and H2 are orders of magnitude higher than in the graphite-free Kabbal system. Simulation with C-O-H-N fluids of varying composition demonstrates the complementary role of CO2 and N2 as rather inert diluents of H2O in the fluid phase. The simulation, carried out on available whole-rock data, demonstrates the dependence of the transformation X(H2O) on P, T, and the phase and chemical composition of the precursor gneiss.
Abstract:
The authors present the simulation of tropical Pacific surface wind variability by a low-resolution (R15 horizontal resolution and 18 vertical levels) version of the Center for Ocean-Land-Atmosphere Interactions, Maryland, general circulation model (GCM) forced by observed global sea surface temperature. The authors examined the monthly mean surface winds and precipitation simulated by the model, which was integrated from January 1979 to March 1992. Analyses of the climatological annual cycle and interannual variability over the Pacific are presented. The annual means of the simulated zonal and meridional winds agree well with observations. The only appreciable difference is in the region of strong trade winds, where the simulated zonal winds are about 15%-20% weaker than observed. The amplitudes of the annual harmonics are weaker than observed over the intertropical convergence zone and South Pacific convergence zone regions, while the amplitudes of the interannual variation of the simulated zonal and meridional winds are close to those of the observed variation. The first few dominant empirical orthogonal functions (EOFs) of the simulated, as well as the observed, monthly mean winds are found to contain a large amount of high-frequency intraseasonal variation. While the statistical properties of the high-frequency modes, such as their amplitudes and geographical locations, agree with observations, their detailed time evolution does not. When the data are subjected to a 5-month running-mean filter, the first two dominant EOFs of the simulated winds, representing the low-frequency El Nino-Southern Oscillation fluctuations, compare quite well with observations. However, the center of the westerly anomalies associated with warm episodes is simulated about 15 degrees west of the observed location. The model simulates well the progress of the westerly anomalies toward the eastern Pacific during the evolution of a warm event.
The simulated equatorial wind anomalies are comparable in magnitude to the observed anomalies. An intercomparison of the simulation of the interannual variability by a few other GCMs with comparable resolution is also presented. The success in simulation of the large-scale low-frequency part of the tropical surface winds by the atmospheric GCM seems to be related to the model's ability to simulate the large-scale low-frequency part of the precipitation. Good correspondence between the simulated precipitation and the highly reflective cloud anomalies is seen in the first two EOFs of the 5-month running means. Moreover, the strong correlation found between the simulated precipitation and the simulated winds in the first two principal components indicates the primary role of model precipitation in driving the surface winds. The surface winds simulated by a linear model forced by the GCM-simulated precipitation show good resemblance to the GCM-simulated winds in the equatorial region. This result supports the recent findings that the large-scale part of the tropical surface winds is primarily linear.
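The EOF-plus-running-mean analysis used above can be sketched on synthetic data; here EOFs are computed as singular vectors of the anomaly matrix, and the slow oscillation, grid and noise level are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic monthly anomalies on a small grid: one slow ENSO-like mode plus
# intraseasonal noise (purely illustrative data, not model output).
months, points = 159, 40                         # Jan 1979 - Mar 1992 is 159 months
t = np.arange(months)
slow = np.sin(2 * np.pi * t / 48.0)              # ~4-year oscillation
pattern = np.sin(np.linspace(0, np.pi, points))  # a single spatial structure
field = np.outer(slow, pattern) + 0.8 * rng.normal(size=(months, points))

def running_mean(x, w=5):
    """Centred w-month running mean along the time axis."""
    kernel = np.ones(w) / w
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, x)

def eofs(x):
    """EOFs as right singular vectors of the time-anomaly matrix."""
    anom = x - x.mean(axis=0)
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    frac = s**2 / np.sum(s**2)                   # variance fraction per mode
    return vt, frac

_, frac_raw = eofs(field)
_, frac_smooth = eofs(running_mean(field))
# Low-pass filtering concentrates variance in the leading (ENSO-like) mode
print(frac_smooth[0] > frac_raw[0])
```

This mirrors the finding above: the raw leading EOFs are contaminated by intraseasonal noise, while after the 5-month filter the slow mode dominates the leading EOF.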
Abstract:
In this paper, a knowledge-based approach using Support Vector Machines (SVMs) is used for estimating the coordinated zonal settings of a distance relay. The approach depends on detailed simulation studies of the apparent impedance loci seen by the distance relay during disturbances, considering various operating conditions including fault resistance. The impedance loci at the relay location are obtained from extensive transient stability studies, and SVMs are used as pattern classifiers to obtain distance relay coordination. The scheme uses the apparent impedance values observed during a fault as inputs. The improved performance obtained with SVMs, which maintain the relay reach under different fault conditions as well as system power flow changes, is illustrated on an equivalent 265-bus system of the practical Indian Western Grid.
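As a sketch of the classification step, a linear SVM trained by hinge-loss sub-gradient descent can separate synthetic (R, X) apparent-impedance samples into trip and no-trip classes; the zone boundary and data here are hypothetical, whereas the paper's SVMs are trained on transient-stability study outputs:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic apparent-impedance samples (R, X in ohms) around a hypothetical
# linear zone boundary; label +1 means "inside the zone" (trip).
n = 200
RX = rng.uniform(low=[0.0, 0.0], high=[40.0, 40.0], size=(n, 2))
labels = np.where(RX.sum(axis=1) < 40.0, 1.0, -1.0)

def train_linear_svm(X, y, c=1.0, lr=0.01, epochs=1000):
    """Linear SVM via sub-gradient descent on the regularised hinge loss."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1.0                      # points violating the margin
        gw = w - c * (y[mask, None] * X[mask]).sum(axis=0) / len(y)
        gb = -c * y[mask].sum() / len(y)
        w -= lr * gw; b -= lr * gb
    return w, b

Xs = (RX - RX.mean(0)) / RX.std(0)                # standardise the features
w, b = train_linear_svm(Xs, labels)
acc = (np.sign(Xs @ w + b) == labels).mean()
print(acc)
```

In practice a kernel SVM would be used when the zone boundary in the impedance plane is not linear, but the training input (apparent R and X during a fault) is the same.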
Abstract:
As System-on-Chip (SoC) designs migrate to the 28nm process node and beyond, the electromagnetic (EM) co-interactions of the chip, package, and printed circuit board (PCB) become critical and require accurate and efficient characterization and verification. In this paper a fast, scalable, parallelized boundary-element-based integral EM solution to Maxwell's equations is presented. The accuracy of the full-wave formulation, for complete EM characterization, has been validated on both canonical structures and a real-world 3-D system (chip + package + PCB), with good correlation between numerical simulation and measurement. A few examples of the applicability of the formulation to high-speed digital and analog serial interfaces on a 45nm SoC are also presented.
Abstract:
The sensitivity of combustion phasing and combustion descriptors to ignition timing, load and mixture quality is addressed for a multi-cylinder natural gas engine fuelled with bio-derived H2- and CO-rich syngas. While the descriptors for conventional fuels are well established and in use for closed-loop engine control, the presence of H2 in syngas potentially alters the mixture properties and hence the combustion phasing, necessitating the current study. The ability of the descriptors to predict abnormal combustion, hitherto missing in the literature, is also addressed. Results from experiments using multi-cylinder engines and numerical studies using zero-dimensional Wiebe-function-based simulation models are reported. For syngas with 20% H2 and CO and 2% CH4 (producer gas), an ignition retard of 5 +/- 1 degrees relative to the natural gas ignition timing was required to achieve the peak load of 72.8 kWe. For syngas, whose flammability limits are 0.42-1.93, the optimal engine operation was found at an equivalence ratio of 1.12. The same methodology is extended to a two-cylinder engine to address the influence of syngas composition, especially the H2 fraction (varying from 13% to 37%), on combustion phasing. The study confirms the utility of pressure-trace-derived combustion descriptors, except for the first derivative of the pressure trace, in describing the MBT operating condition of the engine when fuelled with an alternative fuel. Both experiments and analysis suggest that most of the combustion descriptors are independent of engine load and mixture quality, with a near-linear relationship to ignition angle. The general trends of the combustion descriptors for syngas-fuelled operation are similar to those for conventional fuels; the differences in descriptor sensitivity for syngas-fuelled operation require re-calibration of the control logic for MBT conditions. Copyright (C) 2014, Hydrogen Energy Publications, LLC. Published by Elsevier Ltd. All rights reserved.
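The zero-dimensional Wiebe description mentioned above, and a typical phasing descriptor derived from it (CA50, the crank angle of 50% mass burned), can be sketched as follows; the efficiency factor a, form factor m and burn duration are generic textbook-style values, not the engine's fitted ones:

```python
import math

# Single-zone Wiebe mass-fraction-burned curve; a = 5, m = 2 are generic values.
def wiebe(theta, theta0, dtheta, a=5.0, m=2.0):
    """Mass fraction burned at crank angle theta (deg), ignition at theta0."""
    if theta < theta0:
        return 0.0
    return 1.0 - math.exp(-a * ((theta - theta0) / dtheta) ** (m + 1))

def ca50(theta0, dtheta, a=5.0, m=2.0):
    """Crank angle of 50% mass burned - a common combustion-phasing descriptor,
    obtained by inverting 0.5 = 1 - exp(-a*x^(m+1)) analytically."""
    x = (math.log(2.0) / a) ** (1.0 / (m + 1))
    return theta0 + x * dtheta

# For a fixed burn duration, retarding ignition by 5 deg shifts CA50 by 5 deg,
# illustrating the near-linear descriptor-vs-ignition-angle relationship.
print(ca50(-10.0, 60.0), ca50(-5.0, 60.0))
```

This linear shift of CA50 with ignition angle is the kind of descriptor behaviour the study reports for both natural gas and syngas fuelling.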
Abstract:
Coarse-Grained Reconfigurable Architectures (CGRAs) are emerging as embedded application processing units in computing platforms for Exascale computing. Such CGRAs are distributed-memory multi-core compute elements on a chip that communicate over a Network-on-Chip (NoC). Numerical Linear Algebra (NLA) kernels are key to several high performance computing applications. In this paper we propose a systematic methodology to obtain the specification of Compute Elements (CEs) for such CGRAs. We analyze block Matrix Multiplication and block LU Decomposition algorithms in the context of a CGRA, and obtain theoretical bounds on the communication requirements and memory sizes for a CE. Support for the high performance custom computations common to NLA kernels is provided through custom function units (CFUs) in the CEs. We present results to justify the merits of such CFUs.
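Block matrix multiplication, the first kernel analyzed above, can be sketched with an explicit tile loop and a simple count of tile fetches standing in for NoC traffic; the loop order and counting model are illustrative, not the paper's CE specification:

```python
import numpy as np

rng = np.random.default_rng(3)

# Block matmul as a CE might schedule it: each (i, j) output tile accumulates
# products of row-i and column-j input tiles. The tile size b is the knob that
# trades local memory footprint against communication volume.
def block_matmul(A, B, b):
    n = A.shape[0]
    C = np.zeros((n, n))
    loads = 0                          # tiles fetched over the (hypothetical) NoC
    for i in range(0, n, b):
        for j in range(0, n, b):
            acc = np.zeros((b, b))
            for k in range(0, n, b):
                acc += A[i:i+b, k:k+b] @ B[k:k+b, j:j+b]
                loads += 2             # one tile of A, one tile of B
            C[i:i+b, j:j+b] = acc
    return C, loads

n = 64
A, B = rng.normal(size=(n, n)), rng.normal(size=(n, n))
C8, loads8 = block_matmul(A, B, 8)
C16, loads16 = block_matmul(A, B, 16)
print(np.allclose(C8, A @ B), loads16 < loads8)
```

The fetch count scales as 2*(n/b)^3 tiles, so doubling the tile size cuts traffic eightfold at the cost of 3*b^2 words of local CE memory, which is exactly the communication-vs-memory trade-off the bounds in the paper formalize.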
Abstract:
This paper presents a lower bound limit analysis approach for solving an axisymmetric stability problem using the Drucker-Prager (D-P) yield cone in conjunction with finite elements and nonlinear optimization. In principal stress space, the tip of the yield cone is smoothed by applying a hyperbolic approximation. The nonlinear optimization is performed with an interior point method based on the logarithmic barrier function. A new proposal is also given to simulate the D-P yield cone with the Mohr-Coulomb hexagonal yield pyramid. For illustration, the bearing capacity factors N-c, N-q and N-gamma are computed, as functions of phi, for both smooth and rough circular foundations. The results of the analysis compare quite well with solutions reported in the literature.
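The hyperbolic smoothing of the Drucker-Prager tip can be sketched in invariant form: replacing sqrt(J2) by sqrt(a^2 + J2) rounds the apex while leaving the cone essentially unchanged away from it. The fit of alpha and k from c and phi below is one common D-P choice, and the smoothing parameter a is illustrative:

```python
import math

# Drucker-Prager parameters from cohesion c and friction angle phi
# (one common fit to the Mohr-Coulomb criterion; values are illustrative).
c, phi = 10.0, math.radians(30.0)
alpha = 2.0 * math.sin(phi) / (math.sqrt(3.0) * (3.0 - math.sin(phi)))
k = 6.0 * c * math.cos(phi) / (math.sqrt(3.0) * (3.0 - math.sin(phi)))

def f_sharp(I1, J2):
    """Sharp D-P cone: non-differentiable at the apex where J2 = 0."""
    return alpha * I1 + math.sqrt(J2) - k

def f_smooth(I1, J2, a=0.05 * k):
    """Hyperbolic approximation: sqrt(a^2 + J2) rounds the tip and is smooth
    everywhere, which is what the nonlinear optimizer needs."""
    return alpha * I1 + math.sqrt(a * a + J2) - k

# Away from the apex the two yield surfaces agree closely
print(abs(f_smooth(-50.0, 500.0) - f_sharp(-50.0, 500.0)))
```

The approximation error decays like a^2 / (2*sqrt(J2)), so the smoothed cone is indistinguishable from the sharp one except in a small neighbourhood of the tip.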
Abstract:
An algebraic unified second-order moment (AUSM) turbulence-chemistry model of char combustion is introduced in this paper to calculate the effect of particle temperature fluctuations on char combustion. The AUSM model is used to simulate gas-particle flows in pulverized-coal combustion, together with a full two-fluid model for reacting gas-particle flows and coal combustion, including sub-models such as the k-epsilon-k(p) two-phase turbulence model, the EBU-Arrhenius volatile and CO combustion model, and the six-flux radiation model. A new method for calculating the particle mass flow rate is also used to correct the particle outflow rate and the mass flow rate of interior sections; it enforces mass conservation for the particle phase and effectively speeds up the convergence of the iterative computation. The simulation results indicate that the AUSM char combustion model is preferable to the old char combustion model, since the latter entirely eliminates the influence of particle temperature fluctuations on the char combustion rate.
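The EBU-Arrhenius closure named above takes the effective reaction rate as the minimum of a kinetic (Arrhenius) rate and a turbulent mixing (eddy-break-up) rate; a minimal sketch with illustrative constants:

```python
import math

# EBU-Arrhenius closure sketch. A, E and the EBU constant are illustrative
# values only, not the model's calibrated parameters.
A, E, R = 2.2e8, 1.2e5, 8.314    # 1/s, J/mol, J/(mol K)
C_ebu = 4.0

def rate_arrhenius(T, fuel):
    """Kinetically limited rate from Arrhenius chemistry."""
    return A * math.exp(-E / (R * T)) * fuel

def rate_ebu(k, eps, fuel):
    """Mixing-limited rate: scales with the eddy turnover frequency eps/k."""
    return C_ebu * (eps / k) * fuel

def rate_effective(T, k, eps, fuel):
    """EBU-Arrhenius: the slower of chemistry and mixing controls the rate."""
    return min(rate_arrhenius(T, fuel), rate_ebu(k, eps, fuel))

# Cold region: chemistry limits; hot but poorly mixed region: mixing limits
print(rate_effective(900.0, 1.0, 50.0, 0.05),
      rate_effective(1800.0, 1.0, 50.0, 0.05))
```

The AUSM refinement discussed above goes one step further for char: it retains the correlation between particle temperature fluctuations and the reaction rate, which a mean-temperature Arrhenius evaluation like the one here discards.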
Abstract:
A full two-fluid model of reacting gas-particle flows with an algebraic unified second-order moment (AUSM) turbulence-chemistry model is used to simulate Beijing coal combustion and NOx formation. The sub-models are the k-epsilon-kp two-phase turbulence model, the EBU-Arrhenius volatile and CO combustion model, the six-flux radiation model, a coal devolatilization model and a char combustion model. The blocking effect on NOx formation is discussed. In addition, chemical equilibrium analysis is used to predict NOx concentrations at different temperatures. Results of the CFD simulation and the chemical equilibrium analysis show that optimizing the air dynamic parameters can delay NOx formation and decrease NOx emission, but only within a restricted range. To reduce NOx emissions to near zero, reburning or other chemical methods must be used.