909 results for Computational complexity.
Abstract:
Variability is observed at all levels of cardiac electrophysiology. Yet, the underlying causes and importance of this variability are generally unknown, and difficult to investigate with current experimental techniques. The aim of the present study was to generate populations of computational ventricular action potential models that reproduce experimentally observed intercellular variability of repolarisation (represented by action potential duration) and to identify its potential causes. A systematic exploration of the effects of simultaneously varying the magnitude of six transmembrane current conductances (transient outward, rapid and slow delayed rectifier K(+), inward rectifying K(+), L-type Ca(2+), and Na(+)/K(+) pump currents) in two rabbit-specific ventricular action potential models (Shannon et al. and Mahajan et al.) at multiple cycle lengths (400, 600, 1,000 ms) was performed. This was accomplished with distributed computing software specialised for multi-dimensional parameter sweeps and grid execution. An initial population of 15,625 parameter sets was generated for both models at each cycle length. Action potential durations of these populations were compared to experimentally derived ranges for rabbit ventricular myocytes. 1,352 parameter sets for the Shannon model and 779 parameter sets for the Mahajan model yielded action potential duration within the experimental range, demonstrating that a wide array of ionic conductance values can be used to simulate a physiological rabbit ventricular action potential. Furthermore, by using clutter-based dimension reordering, a technique that allows visualisation of multi-dimensional spaces in two dimensions, the interaction of current conductances and their relative importance to the ventricular action potential at different cycle lengths were revealed. 
Overall, this work represents an important step towards a better understanding of the role that variability in current conductances may play in experimentally observed intercellular variability of rabbit ventricular action potential repolarisation.
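The sweep-and-filter construction described above can be sketched as follows. The action potential duration function here is a hypothetical algebraic surrogate, not the Shannon or Mahajan model, and the acceptance range is illustrative; a real study would run the full ionic model for each parameter set.

```python
import itertools

def apd_surrogate(scales):
    # Hypothetical surrogate for action potential duration (ms): depolarising
    # Ca(2+) current lengthens the AP, repolarising currents shorten it.
    # A real study would simulate the Shannon or Mahajan model here.
    gto, gkr, gks, gk1, gcal, gnak = scales
    return 200.0 * gcal / (0.25 * gto + 0.3 * gkr + 0.2 * gks
                           + 0.15 * gk1 + 0.1 * gnak)

# 5 scaling levels per conductance -> 5**6 = 15,625 parameter sets,
# matching the initial population size reported in the abstract.
levels = [0.5, 0.75, 1.0, 1.25, 1.5]
population = list(itertools.product(levels, repeat=6))

# Keep only parameter sets whose APD falls in an assumed experimental range.
apd_lo, apd_hi = 150.0, 250.0   # ms, illustrative bounds
accepted = [p for p in population if apd_lo <= apd_surrogate(p) <= apd_hi]
print(f"{len(accepted)} of {len(population)} parameter sets accepted")
```

In the actual study this filtering was repeated per model and per cycle length, which is why the accepted counts (1,352 and 779) differ between the two models.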
Abstract:
Computational fluid dynamics (CFD) and particle image velocimetry (PIV) are commonly used techniques to evaluate the flow characteristics in the development stage of blood pumps. The CFD technique allows rapid changes to pump parameters to optimize pump performance without having to construct a costly prototype model. These techniques were used in the construction of a bi-ventricular assist device (BVAD) which combines the functions of an LVAD and an RVAD in a compact unit. The BVAD construction consists of two separate chambers with similar impellers, volutes, inlet and outlet sections. To achieve the required flow characteristics of an average flow rate of 5 l/min and different pressure heads (left: 100 mmHg; right: 20 mmHg), the impellers were set at different rotating speeds. From the CFD results, a six-blade impeller design was adopted for the development of the BVAD. It was also observed that the fluid can flow smoothly through the pump with minimum shear stress and minimal areas of stagnation, which are related to haemolysis and thrombosis. Based on the compatible Reynolds number, the flow through the model was calculated for the left and the right pumps. As it was not possible to have both the left and right chambers in the experimental model, the left and right pumps were tested separately.
Abstract:
Nondeclarative memory and novelty processing in the brain is an actively studied field of neuroscience, and reduced neural activity with repetition of a stimulus (repetition suppression) is a commonly observed phenomenon. Recent findings of an opposite trend (specifically, rising activity for unfamiliar stimuli) question the generality of repetition suppression and stir debate over the underlying neural mechanisms. This letter introduces a theory and computational model that extend existing theories and suggest that both trends are, in principle, the rising and falling parts of an inverted U-shaped dependence of activity on stimulus novelty that may naturally emerge in a neural network with Hebbian learning and lateral inhibition. We further demonstrate that the proposed model is sufficient for the simulation of dissociable forms of repetition priming using real-world stimuli. The results of our simulation also suggest that the novelty of stimuli used in neuroscientific research must be assessed in a particularly cautious way. The potential importance of the inverted U in stimulus processing and its relationship to the acquisition of knowledge and competencies in humans is also discussed.
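A minimal sketch of the two network ingredients the letter names, Hebbian learning and lateral inhibition. The architecture, rate dynamics and learning rule below are illustrative choices of mine, not the authors' model; the sketch only shows how repeated presentation of a stimulus shapes the weights that familiarity effects would operate on.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 20, 5
W = rng.uniform(0.0, 0.1, size=(n_out, n_in))    # feedforward weights

def respond(x, W, inhibition=0.5, steps=20, dt=0.2):
    # Rate units with uniform lateral inhibition, relaxed to steady state.
    r = np.zeros(n_out)
    drive = W @ x
    for _ in range(steps):
        lateral = inhibition * (r.sum() - r)     # each unit inhibited by the others
        r += dt * (-r + np.maximum(drive - lateral, 0.0))
    return r

def hebbian_update(W, x, r, lr=0.05):
    # Plain Hebbian rule (pre * post) with row normalisation to keep
    # the weights bounded.
    W = W + lr * np.outer(r, x)
    return W / np.linalg.norm(W, axis=1, keepdims=True).clip(min=1e-12)

# Repeated presentation of one stimulus: the weights align with it, which is
# the substrate for the familiarity-dependent responses discussed above.
x = rng.uniform(0.0, 1.0, size=n_in)
for _ in range(30):
    r = respond(x, W)
    W = hebbian_update(W, x, r)
```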
Abstract:
The focus of this paper is two-dimensional computational modelling of water flow in unsaturated soils consisting of weakly conductive disconnected inclusions embedded in a highly conductive connected matrix. When the inclusions are small, a two-scale Richards’ equation-based model has been proposed in the literature taking the form of an equation with effective parameters governing the macroscopic flow coupled with a microscopic equation, defined at each point in the macroscopic domain, governing the flow in the inclusions. This paper is devoted to a number of advances in the numerical implementation of this model. Namely, by treating the micro-scale as a two-dimensional problem, our solution approach based on a control volume finite element method can be applied to irregular inclusion geometries, and, if necessary, modified to account for additional phenomena (e.g. imposing the macroscopic gradient on the micro-scale via a linear approximation of the macroscopic variable along the microscopic boundary). This is achieved with the help of an exponential integrator for advancing the solution in time. This time integration method completely avoids generation of the Jacobian matrix of the system and hence eases the computation when solving the two-scale model in a completely coupled manner. Numerical simulations are presented for a two-dimensional infiltration problem.
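The Jacobian-avoidance property highlighted above rests on the fact that integrators only ever need Jacobian-vector products, which can be approximated matrix-free by a directional finite difference. The sketch below illustrates just that building block on a toy nonlinear system; it is not the authors' control volume code.

```python
import numpy as np

def jacvec(f, y, v, eps=1e-7):
    # Matrix-free approximation of J(y) @ v, where J is the Jacobian of f.
    # This directional finite difference is what lets exponential (and
    # Newton-Krylov) integrators avoid assembling J explicitly.
    return (f(y + eps * v) - f(y)) / eps

# Toy test problem: a small nonlinear system with a known Jacobian.
def f(y):
    return np.array([y[0] * y[1], np.sin(y[0]) + y[1] ** 2])

def jac(y):
    return np.array([[y[1], y[0]],
                     [np.cos(y[0]), 2.0 * y[1]]])

y = np.array([0.3, -1.2])
v = np.array([1.0, 0.5])
print(jacvec(f, y, v))      # close to jac(y) @ v
```

For a large discretised Richards' equation, `f` would be the spatially discretised right-hand side, and only `jacvec` calls (inside a Krylov approximation of the matrix exponential) are needed per time step.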
Abstract:
RNase S is a complex consisting of two proteolytic fragments of RNase A: the S peptide (residues 1-20) and the S protein (residues 21-124). RNase S and RNase A have very similar X-ray structures and enzymatic activities. Previous experiments have shown increased rates of hydrogen exchange and greater sensitivity to tryptic cleavage for RNase S relative to RNase A. It has therefore been asserted that the RNase S complex is considerably more dynamically flexible than RNase A. In the present study we examine the differences in the dynamics of RNase S and RNase A computationally, by MD simulations, and experimentally, using trypsin cleavage as a probe of dynamics. The fluctuations around the average solution structure during the simulation were analyzed by measuring the RMS deviation in coordinates. No significant differences between RNase S and RNase A dynamics were observed in the simulations. We were able to account for the apparent discrepancy between simulation and experiment by a simple model. According to this model, the experimentally observed differences in dynamics can be quantitatively explained by the small amounts of free S peptide and S protein that are present in equilibrium with the RNase S complex. Thus, folded RNase A and the RNase S complex have identical dynamic behavior, despite the presence of a break in the polypeptide chain between residues 20 and 21 in the latter molecule. This is in contrast to what has been widely believed for over 30 years about this important fragment complementation system.
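The fluctuation analysis mentioned above can be sketched as follows: compute the average structure over the trajectory and then the per-atom RMS deviation from it. The toy trajectory is synthetic and assumed to be already superposed (aligned) on a reference frame, as would be done before such an analysis.

```python
import numpy as np

def rms_fluctuation(traj):
    # traj: (n_frames, n_atoms, 3) coordinates, assumed already aligned.
    mean = traj.mean(axis=0)                              # average structure
    diff = traj - mean                                    # per-frame deviations
    return np.sqrt((diff ** 2).sum(axis=2).mean(axis=0))  # per-atom RMS deviation

# Toy trajectory: 100 frames, 5 atoms jittering around fixed positions.
rng = np.random.default_rng(1)
base = rng.normal(size=(5, 3))
traj = base + 0.1 * rng.normal(size=(100, 5, 3))
print(rms_fluctuation(traj))   # each entry near 0.1 * sqrt(3)
```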
Abstract:
We consider the problem of deciding whether the output of a boolean circuit is determined by a partial assignment to its inputs. This problem is easily shown to be hard, i.e., co-NP-complete. However, many of the consequences of a partial input assignment may be determined in linear time, by iterating the following step: if we know the values of some inputs to a gate, we can deduce the values of some outputs of that gate. This process of iteratively deducing some of the consequences of a partial assignment is called propagation. This paper explores the parallel complexity of propagation, i.e., the complexity of determining whether the output of a given boolean circuit is determined by propagating a given partial input assignment. We give a complete classification of the problem into those cases that are P-complete and those that are unlikely to be P-complete.
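The iterated deduction step described above can be sketched sequentially as a worklist procedure (the gate encoding is my own, and gates here have a single output): a gate's value is set as soon as its known inputs force it, e.g. one 0 input to an AND gate.

```python
def propagate(gates, assignment):
    # gates: list of (output, op, inputs); assignment: dict of known values.
    values = dict(assignment)
    changed = True
    while changed:
        changed = False
        for out, op, ins in gates:
            if out in values:
                continue
            known = [values[i] for i in ins if i in values]
            val = None
            if op == 'AND':
                if False in known:
                    val = False                  # one 0 input forces output 0
                elif len(known) == len(ins):
                    val = all(known)
            elif op == 'OR':
                if True in known:
                    val = True                   # one 1 input forces output 1
                elif len(known) == len(ins):
                    val = any(known)
            elif op == 'NOT':
                if known:
                    val = not known[0]
            if val is not None:
                values[out] = val
                changed = True
    return values

# Circuit computing x AND (NOT y): knowing x = 0 alone already
# determines the circuit output by propagation.
circuit = [('n', 'NOT', ['y']), ('out', 'AND', ['x', 'n'])]
print(propagate(circuit, {'x': False}))   # {'x': False, 'out': False}
```

The paper's question is how well this inherently iterative process parallelises, which is what the P-completeness classification addresses.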
Abstract:
Cardiovascular disease is the leading cause of death in the developed world. Wall shear stress (WSS) is associated with the initiation and progression of atherogenesis. This study combined recent advances in MR imaging and computational fluid dynamics (CFD) to evaluate a patient-specific carotid bifurcation. The patient was followed up for 3 years. The geometric changes (tortuosity, curvature, ICA/CCA area ratio, central cross-sectional curvature, maximum stenosis) and the CFD-derived factors (velocity distribution, wall shear stress (WSS) and oscillatory shear index (OSI)) were compared at the different time points. The carotid stenosis showed a slight increase in central cross-sectional curvature, while the curvature changes along the carotid centerline were minor and variable. The OSI distribution presented high values in the border region between the carotid stenosis and the normal wall, indicating complex flow and recirculation. The significant geometric changes observed during the follow-up may also cause significant changes in bifurcation hemodynamics.
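The OSI referred to above is commonly defined as OSI = 0.5 * (1 - |mean WSS vector| / mean |WSS|) over a cardiac cycle. A minimal sketch with synthetic WSS histories (uniform time sampling assumed; not the study's patient data):

```python
import numpy as np

def oscillatory_shear_index(tau):
    # tau: (n_steps, 3) samples of the wall shear stress vector over one
    # cardiac cycle. OSI = 0.5 * (1 - |<tau>| / <|tau|>): 0 for purely
    # unidirectional WSS, approaching 0.5 for fully reversing WSS.
    mean_vec = tau.mean(axis=0)
    mean_mag = np.linalg.norm(tau, axis=1).mean()
    return 0.5 * (1.0 - np.linalg.norm(mean_vec) / mean_mag)

t = np.linspace(0.0, 1.0, 200)
zeros = np.zeros_like(t)
steady = np.stack([np.ones_like(t), zeros, zeros], axis=1)
reversing = np.stack([np.sin(2 * np.pi * t), zeros, zeros], axis=1)
print(oscillatory_shear_index(steady))      # ~0.0 (unidirectional)
print(oscillatory_shear_index(reversing))   # ~0.5 (fully reversing)
```

High OSI at the stenosis/normal-wall border, as reported above, signals WSS that changes direction over the cycle, i.e. flow recirculation.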
Abstract:
To help with the clinical screening and diagnosis of abdominal aortic aneurysm (AAA), we evaluated the effect of inflow angle (IA) and outflow bifurcation angle (BA) on the distribution of blood flow and wall shear stress (WSS) in an idealized AAA model. A 2D incompressible Newtonian flow is assumed and the computational simulation is performed using the finite volume method. The results showed that the largest WSS was often located at the proximal and distal ends of the AAA. An increase in IA resulted in an increase in maximum WSS. We also found that WSS was maximal when BA was 90°. IA and BA are two important geometric factors; they may help with AAA risk assessment along with the commonly used AAA diameter.
Abstract:
Objective: To compare the differences in the hemodynamic parameters of abdominal aortic aneurysm (AAA) between a fluid-structure interaction model (FSIM) and a fluid-only model (FM), so as to discuss their application in AAA research. Methods: An idealized AAA model was created based on patient-specific AAA data. In the FM, the flow, pressure and wall shear stress (WSS) were computed using the finite volume method. In the FSIM, an Arbitrary Lagrangian-Eulerian algorithm was used to solve the flow in a continuously deforming geometry. The hemodynamic parameters of both models were obtained for discussion. Results: Under the same inlet velocity, there were only two symmetrical vortexes in the AAA dilation area for the FSIM. In contrast, four recirculation areas existed in the FM; two were main vortexes and the other two were secondary flows located between the main recirculation area and the arterial wall. Six local pressure concentrations occurred in the distal end of the AAA and the recirculation area for the FM, whereas there were only two local pressure concentrations in the FSIM. The vortex center of the recirculation area in the FSIM was much closer to the distal end of the AAA and the area was much larger because of AAA expansion. Four extreme values of WSS existed at the proximal end of the AAA, the point of boundary layer separation, the point of flow reattachment and the distal end of the AAA, respectively, in both the FM and the FSIM. The maximum wall stress and the largest wall deformation were both located at the proximal and distal ends of the AAA. Conclusions: The number and center of the recirculation areas differ between the two models, while the change of vortex is closely associated with AAA growth. The largest WSS of the FSIM is 36% smaller than that of the FM. Both the maximum wall stress and the largest wall displacement increase as the outlet pressure increases. The FSIM needs to be considered for studying the relationship between AAA growth and shear stress.
Abstract:
Over the past four decades, the histories of art and design have moved away from the canonical study of objects, artists/designers and styles towards more interdisciplinary research. We argue, nevertheless, that design historians must continue to push their use of approaches drawing on material culture and criticality in order to fill gaps in design history and to develop methods and approaches relevant to its study. Drawing on our experience teaching the "millennial" generation, who are drawn to "activist design", we offer pedagogical examples that have helped our students assimilate responsible, engaged and reflexive design histories and understand the complexity and criticality of design.
Abstract:
The research in software science has so far concentrated on three measures of program complexity: (a) software effort; (b) cyclomatic complexity; and (c) program knots. In this paper we propose a measure of the logical complexity of programs in terms of the variable dependency of sequences of computations, the inductive effort in writing loops and the complexity of data structures. The proposed complexity measure is described with the aid of a graph which exhibits diagrammatically the dependence of a computation at a node upon the computations at other (earlier) nodes. Complexity measures of several example programs have been computed and the related issues discussed. The paper also describes the role played by data structures in deciding program complexity.
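The dependency-graph idea can be sketched as follows. This toy score, combining each node's direct fan-in with the depth of its dependency chain, is my own illustration of the graph construction; the paper's actual measure also weighs loop induction and data structures.

```python
def chain_depth(node, deps, memo=None):
    # Longest dependency chain ending at `node` (assumes an acyclic graph).
    memo = {} if memo is None else memo
    if node not in memo:
        memo[node] = 1 + max((chain_depth(d, deps, memo) for d in deps[node]),
                             default=0)
    return memo[node]

def complexity(deps):
    # Toy score: sum over nodes of (direct dependencies) + (chain depth),
    # so computations that depend on long chains of earlier computations
    # contribute more to the total.
    return sum(len(deps[n]) + chain_depth(n, deps) for n in deps)

# a = input; b = f(a); c = g(a, b); d = h(c)
deps = {'a': [], 'b': ['a'], 'c': ['a', 'b'], 'd': ['c']}
print(complexity(deps))   # → 14
```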
Abstract:
To remain competitive, many agricultural systems are now being run along business lines. Systems methodologies are being incorporated, and here evolutionary computation is a valuable tool for identifying more profitable or sustainable solutions. However, agricultural models typically pose some of the more challenging problems for optimisation. This chapter outlines these problems, and then presents a series of three case studies demonstrating how they can be overcome in practice. Firstly, increasingly complex models of Australian livestock enterprises show that evolutionary computation is the only viable optimisation method for these large and difficult problems. On-going research is taking a notably efficient and robust variant, differential evolution, out into real-world systems. Next, models of cropping systems in Australia demonstrate the challenge of dealing with competing objectives, namely maximising farm profit whilst minimising resource degradation. Pareto methods are used to illustrate this trade-off, and these results have proved to be most useful for farm managers in this industry. Finally, land-use planning in the Netherlands demonstrates the size and spatial complexity of real-world problems. Here, GIS-based optimisation techniques are integrated with Pareto methods, producing better solutions which were acceptable to the competing organizations. These three studies all show that evolutionary computation remains the only feasible method for the optimisation of large, complex agricultural problems. An extra benefit is that the resultant population of candidate solutions illustrates trade-offs, and this leads to more informed discussions and better education of the industry decision-makers.
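The differential evolution variant highlighted above can be sketched as the classic DE/rand/1/bin scheme. The sphere function stands in for a farm-model objective; the control parameters are common textbook defaults, not values from the chapter's case studies.

```python
import random

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9,
                           gens=200, seed=0):
    # Minimal DE/rand/1/bin: mutate with a scaled difference of two random
    # members, cross over component-wise, keep the trial if it is no worse.
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:            # binomial crossover
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])  # mutation
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)                     # clamp to bounds
                else:
                    v = pop[i][j]
                trial.append(v)
            ft = f(trial)
            if ft <= fit[i]:                                    # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# Minimise the sphere function as a stand-in for a farm-model objective.
sphere = lambda x: sum(v * v for v in x)
x_best, f_best = differential_evolution(sphere, [(-5.0, 5.0)] * 5)
print(f_best)   # near 0
```

For the multi-objective cropping-system studies, each candidate would instead be scored on both profit and resource degradation and retained via Pareto dominance rather than this single-objective greedy selection.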
Abstract:
A computational algorithm (based on Smullyan's analytic tableau method) that verifies whether a given well-formed formula in propositional calculus is a tautology or not has been implemented on a DEC system 10. The stepwise refinement approach of program development used for this implementation forms the subject matter of this paper. The top-down design has resulted in a modular and reliable program package. This computational algorithm compares favourably with the algorithm based on the well-known resolution principle used in theorem provers.
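A minimal analytic-tableau tautology check in the spirit of the abstract (the formula encoding and restricted connective set are my own, not the DEC-10 package's). A formula is a tautology iff the tableau for its negation closes on every branch.

```python
def is_literal(f):
    return f[0] == 'var' or (f[0] == 'not' and f[1][0] == 'var')

def closes(branch):
    # Expand the first non-literal formula using Smullyan-style rules;
    # a fully expanded branch closes iff it holds a complementary literal pair.
    for f in branch:
        if is_literal(f):
            continue
        rest = [g for g in branch if g is not f]
        if f[0] == 'not':
            g = f[1]
            if g[0] == 'not':                       # double negation
                return closes(rest + [g[1]])
            if g[0] == 'and':                       # not(A and B): branch
                return (closes(rest + [('not', g[1])])
                        and closes(rest + [('not', g[2])]))
            if g[0] == 'or':                        # not(A or B): add both
                return closes(rest + [('not', g[1]), ('not', g[2])])
        elif f[0] == 'and':                         # A and B: add both
            return closes(rest + [f[1], f[2]])
        elif f[0] == 'or':                          # A or B: branch
            return closes(rest + [f[1]]) and closes(rest + [f[2]])
    pos = {f[1] for f in branch if f[0] == 'var'}
    neg = {f[1][1] for f in branch if f[0] == 'not'}
    return bool(pos & neg)

def tautology(f):
    return closes([('not', f)])

p = ('var', 'p')
print(tautology(('or', p, ('not', p))))    # True  (p or not-p)
print(tautology(('and', p, ('not', p))))   # False
```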
Abstract:
The test based on comparison of the characteristic coefficients of the adjacency matrices of the corresponding graphs for detection of isomorphism in kinematic chains has been shown to fail in the case of two pairs of ten-link, simple-jointed chains, one pair corresponding to single-freedom chains and the other pair corresponding to three-freedom chains. An assessment of the merits and demerits of available methods for detection of isomorphism in graphs and kinematic chains is presented, keeping in view the suitability of the methods for use in computerized structural synthesis of kinematic chains. A new test based on the characteristic coefficients of the “degree” matrix of the corresponding graph is proposed for detection of isomorphism in kinematic chains. The new test is found to be successful in a number of examples of graphs where the test based on the characteristic coefficients of the adjacency matrix fails. It has also been found to be successful in distinguishing the structures of all known simple-jointed kinematic chains in the categories of (a) single-freedom chains with up to 10 links, (b) two-freedom chains with up to 9 links and (c) three-freedom chains with up to 10 links.
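The characteristic-coefficient screen and its known failure mode can be sketched as follows. Matching coefficients are necessary but not sufficient for isomorphism, which is exactly the kind of failure the paper reports for the adjacency matrix; the adjacency version shown here is illustrative, and the paper's “degree” matrix variant is not reproduced.

```python
import numpy as np

def char_coeffs(adj):
    # Coefficients of the characteristic polynomial det(xI - A).
    return np.poly(np.asarray(adj, dtype=float))

# A 4-cycle and a relabelled copy (isomorphic): coefficients must match.
c4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
c4_relabelled = [[0, 0, 1, 1],
                 [0, 0, 1, 1],
                 [1, 1, 0, 0],
                 [1, 1, 0, 0]]
print(np.allclose(char_coeffs(c4), char_coeffs(c4_relabelled)))   # True

# The classic counterexample: the star K(1,4) and a 4-cycle plus an isolated
# vertex are non-isomorphic yet share the adjacency spectrum {2, 0, 0, 0, -2},
# so a coefficient-based test cannot tell them apart.
star = [[0, 1, 1, 1, 1],
        [1, 0, 0, 0, 0],
        [1, 0, 0, 0, 0],
        [1, 0, 0, 0, 0],
        [1, 0, 0, 0, 0]]
c4_plus_isolated = [[0, 1, 0, 1, 0],
                    [1, 0, 1, 0, 0],
                    [0, 1, 0, 1, 0],
                    [1, 0, 1, 0, 0],
                    [0, 0, 0, 0, 0]]
print(np.allclose(char_coeffs(star), char_coeffs(c4_plus_isolated)))  # True
```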