231 results for Computational Complexity
Abstract:
Computational fluid dynamics (CFD) and particle image velocimetry (PIV) are commonly used techniques to evaluate the flow characteristics in the development stage of blood pumps. The CFD technique allows rapid changes to pump parameters to optimize pump performance without having to construct a costly prototype model. These techniques are used in the construction of a bi-ventricular assist device (BVAD) which combines the functions of an LVAD and an RVAD in a compact unit. The BVAD construction consists of two separate chambers with similar impellers, volutes, and inlet and outlet sections. To achieve the required flow characteristics of an average flow rate of 5 l/min and different pressure heads (left: 100 mmHg; right: 20 mmHg), the impellers were set at different rotating speeds. From the CFD results, a six-blade impeller design was adopted for the development of the BVAD. It was also observed that the fluid can flow smoothly through the pump with minimal shear stress and area of stagnation, which are related to haemolysis and thrombosis. Based on the compatible Reynolds number, the flow through the model was calculated for the left and the right pumps. As it was not possible to have both the left and right chambers in the experimental model, the left and right pumps were tested separately.
Abstract:
Nondeclarative memory and novelty processing in the brain are actively studied fields of neuroscience, and reduced neural activity with repetition of a stimulus (repetition suppression) is a commonly observed phenomenon. Recent findings of an opposite trend, specifically rising activity for unfamiliar stimuli, question the generality of repetition suppression and stir debate over the underlying neural mechanisms. This letter introduces a theory and computational model that extend existing theories and suggest that both trends are, in principle, the rising and falling parts of an inverted U-shaped dependence of activity on stimulus novelty that may naturally emerge in a neural network with Hebbian learning and lateral inhibition. We further demonstrate that the proposed model is sufficient for the simulation of dissociable forms of repetition priming using real-world stimuli. The results of our simulation also suggest that the novelty of stimuli used in neuroscientific research must be assessed in a particularly cautious way. The potential importance of the inverted U in stimulus processing and its relationship to the acquisition of knowledge and competencies in humans is also discussed.
Abstract:
The focus of this paper is two-dimensional computational modelling of water flow in unsaturated soils consisting of weakly conductive disconnected inclusions embedded in a highly conductive connected matrix. When the inclusions are small, a two-scale Richards’ equation-based model has been proposed in the literature taking the form of an equation with effective parameters governing the macroscopic flow coupled with a microscopic equation, defined at each point in the macroscopic domain, governing the flow in the inclusions. This paper is devoted to a number of advances in the numerical implementation of this model. Namely, by treating the micro-scale as a two-dimensional problem, our solution approach based on a control volume finite element method can be applied to irregular inclusion geometries, and, if necessary, modified to account for additional phenomena (e.g. imposing the macroscopic gradient on the micro-scale via a linear approximation of the macroscopic variable along the microscopic boundary). This is achieved with the help of an exponential integrator for advancing the solution in time. This time integration method completely avoids generation of the Jacobian matrix of the system and hence eases the computation when solving the two-scale model in a completely coupled manner. Numerical simulations are presented for a two-dimensional infiltration problem.
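The exponential integrator described above advances the solution without ever forming the Jacobian matrix. As a hedged sketch (not the authors' two-scale solver: the 1D diffusion test problem, step size and Krylov dimension are illustrative assumptions), a Krylov/Arnoldi approximation of exp(hA)v needs only matrix-vector products, so the system matrix is never assembled:

```python
import numpy as np
from scipy.linalg import expm

def expmv_arnoldi(matvec, v, h, m=20):
    """Approximate exp(h*A) @ v with an m-step Arnoldi (Krylov) projection.
    Only matrix-vector products with A are needed, so the Jacobian matrix
    of the discretised system is never assembled."""
    beta = np.linalg.norm(v)
    V = np.zeros((v.size, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = matvec(V[:, j])
        for i in range(j + 1):          # modified Gram-Schmidt orthogonalisation
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:         # happy breakdown: subspace is invariant
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    # small dense exponential of the projected (m x m) matrix
    return beta * V[:, :m] @ (expm(h * H[:m, :m]) @ e1)

# 1D diffusion test problem u_t = u_xx with homogeneous Dirichlet BCs
N = 100
dx = 1.0 / (N + 1)

def lap(u):                             # matrix-free discrete Laplacian
    up = np.concatenate(([0.0], u, [0.0]))
    return (up[2:] - 2.0 * up[1:-1] + up[:-2]) / dx**2

x = dx * np.arange(1, N + 1)
u0 = np.sin(np.pi * x)                  # exact eigenvector of the discrete operator
lam = -4.0 * np.sin(np.pi * dx / 2.0)**2 / dx**2
h = 0.01
u1 = expmv_arnoldi(lap, u0, h)          # one Jacobian-free exponential step
```

Because u0 is an eigenvector of the discrete Laplacian, the Krylov step reproduces the exact decay exp(lam*h)*u0, which gives a convenient correctness check for the matrix-free approach.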
Abstract:
Cardiovascular disease is the leading cause of death in the developed world. Wall shear stress (WSS) is associated with the initiation and progression of atherogenesis. This study combined recent advances in MR imaging and computational fluid dynamics (CFD) to evaluate a patient-specific carotid bifurcation. The patient was followed up for 3 years. The geometric changes (tortuosity, curvature, ICA/CCA area ratios, central cross-sectional curvature, maximum stenosis) and the CFD factors (velocity distribution, wall shear stress (WSS) and oscillatory shear index (OSI)) were compared at different time points. The carotid stenosis showed a slight increase in the central cross-sectional curvature, while the curvature changes of the carotid centerline were minor and variable. The OSI distribution presented high values in the region bordering the carotid stenosis and the normal vessel, indicating complex flow and recirculation. The significant geometric changes observed during the follow-up may also cause significant changes in bifurcation hemodynamics.
Abstract:
To help with the clinical screening and diagnosis of abdominal aortic aneurysm (AAA), we evaluated the effect of the inflow angle (IA) and outflow bifurcation angle (BA) on the distribution of blood flow and wall shear stress (WSS) in an idealized AAA model. A 2D incompressible Newtonian flow is assumed and the computational simulation is performed using the finite volume method. The results showed that the largest WSS was often located at the proximal and distal ends of the AAA. An increase in IA resulted in an increase in maximum WSS. We also found that WSS was maximal when BA was 90°. IA and BA are two important geometrical factors; they may help with AAA risk assessment along with the commonly used AAA diameter.
Abstract:
Objective: To compare the differences in the hemodynamic parameters of abdominal aortic aneurysm (AAA) between a fluid-structure interaction model (FSIM) and a fluid-only model (FM), so as to discuss their application in AAA research. Methods: An idealized AAA model was created based on patient-specific AAA data. In the FM, the flow, pressure and wall shear stress (WSS) were computed using the finite volume method. In the FSIM, an Arbitrary Lagrangian-Eulerian algorithm was used to solve the flow in a continuously deforming geometry. The hemodynamic parameters of both models were obtained for discussion. Results: Under the same inlet velocity, there were only two symmetrical vortices in the AAA dilation area for the FSIM. In contrast, four recirculation areas existed in the FM; two were main vortices and the other two were secondary flows located between the main recirculation area and the arterial wall. Six local pressure concentrations occurred at the distal end of the AAA and in the recirculation area for the FM, whereas there were only two local pressure concentrations in the FSIM. The vortex center of the recirculation area in the FSIM was much closer to the distal end of the AAA, and the area was much larger because of AAA expansion. Four extreme values of WSS existed at the proximal end of the AAA, the point of boundary-layer separation, the point of flow reattachment and the distal end of the AAA, respectively, in both the FM and the FSIM. The maximum wall stress and the largest wall deformation were both located at the proximal and distal ends of the AAA. Conclusions: The number and centers of the recirculation areas differ between the two models, and the change of vortex is closely associated with AAA growth. The largest WSS of the FSIM is 36% smaller than that of the FM. Both the maximum wall stress and the largest wall displacement increase as the outlet pressure increases. The FSIM needs to be considered when studying the relationship between AAA growth and shear stress.
Abstract:
Over the past four decades, the histories of art and design have moved away from the canonical study of objects, artists/designers and styles and have turned toward more interdisciplinary research. We argue, nevertheless, that design historians must continue to push their use of approaches drawing on material culture and criticality in order to fill gaps in design history and to develop methods and approaches relevant to its study. Drawing on our experience teaching the "millennial" generation, who are drawn to "activist design", we offer pedagogical examples that have helped our students assimilate responsible, engaged and reflexive histories of design and understand the complexity and criticality of design.
Abstract:
Layered graphitic materials exhibit intriguing new electronic structure, and the search for new types of two-dimensional (2D) monolayers is important for the fabrication of next-generation miniature electronic and optoelectronic devices. By means of density functional theory (DFT) computations, we investigated in detail the structural, electronic, mechanical and optical properties of the single-layer bismuth iodide (BiI3) nanosheet. Monolayer BiI3 is dynamically stable, as confirmed by the computed phonon spectrum. The cleavage energy (Ecl) and interlayer coupling strength of bulk BiI3 are comparable to the experimental values of graphite, which indicates that the exfoliation of BiI3 is highly feasible. The obtained stress-strain curve shows that the BiI3 nanosheet is a brittle material with a breaking strain of 13%. The BiI3 monolayer has an indirect band gap of 1.57 eV with spin-orbit coupling (SOC), indicating its potential application for solar cells. Furthermore, the band gap of the BiI3 monolayer can be modulated by biaxial strain. Most interestingly, interfacing electrically active graphene with the monolayer BiI3 nanosheet leads to enhanced light absorption compared to that in the pure monolayer BiI3 nanosheet, highlighting its great potential for applications in photonics and photovoltaic solar cells.
Abstract:
Large integration of solar photovoltaic (PV) generation in distribution networks has resulted in over-voltage problems. Several control techniques have been developed to address the over-voltage problem using Deterministic Load Flow (DLF). However, the intermittent characteristics of PV generation require Probabilistic Load Flow (PLF) to introduce into the analysis the variability that DLF ignores. Traditional PLF techniques are not well suited to distribution systems and suffer from several drawbacks, such as computational burden (Monte Carlo, conventional convolution), accuracy that is sensitive to system complexity (point estimation method), the need for linearization (multi-linear simulation) and convergence problems (Gram-Charlier expansion, Cornish-Fisher expansion). In this research, Latin Hypercube Sampling with Cholesky Decomposition (LHS-CD) is used to quantify the over-voltage issues with and without the voltage control algorithm in a distribution network with active generation. The LHS technique is verified on a test network and a real system from an Australian distribution network service provider. The accuracy and computational burden of the simulated results are also compared with Monte Carlo simulations.
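The LHS-CD sampling step can be sketched as follows; the two-variable setup and the 0.6 target correlation are illustrative assumptions, not values from the study. Stratified uniform scores are mapped to standard-normal scores, and a Cholesky factor of the target correlation matrix imposes the dependence between the uncertain inputs:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def lhs(n, d, rng):
    """Latin Hypercube Sampling: for each variable, one uniform draw per
    equal-probability stratum of [0, 1), in shuffled order."""
    u = np.empty((n, d))
    for j in range(d):
        u[:, j] = (rng.permutation(n) + rng.uniform(size=n)) / n
    return u

# Illustrative target correlation between two uncertain inputs
# (e.g. PV output and load; the value 0.6 is an assumption)
C = np.array([[1.0, 0.6],
              [0.6, 1.0]])
L = np.linalg.cholesky(C)

u = lhs(1000, 2, rng)     # stratified uniform scores, one point per stratum
z = norm.ppf(u)           # map to standard-normal scores
z_corr = z @ L.T          # Cholesky step imposes the target correlation
```

Compared with plain Monte Carlo, the stratification covers the whole distribution of each input with far fewer samples, which is the source of the reduced computational burden reported above.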
Abstract:
This paper presents the design, implementation and evaluation of a collaborative learning activity designed to replace traditional face-to-face lectures in a large classroom. The activity aims to better engage students with their learning and improve their experience and outcomes. The project is implemented in the Fluid Mechanics unit of the Mechanical Engineering degree at the Queensland University of Technology to introduce students to the concepts, terminology and process of Computational Fluid Dynamics (CFD). The approach integrates a constructive collaborative assignment, which is a key element in the overall quality of teaching and learning and an integral component of the students' experience. A detailed survey given to the students showed an overall high level of satisfaction. However, the results also highlighted the gap between student and teacher expectations for both content and assignment. Discussions addressing this issue, based on a critical reflection, are presented in the paper.
Abstract:
The concept of focus on opportunities describes how many new goals, options, and possibilities employees believe they have in their personal future at work. This study investigated the specific and shared effects of age, job complexity, and the use of successful aging strategies called selection, optimization, and compensation (SOC) in predicting focus on opportunities. Results of data collected from 133 employees of one company (mean age = 38 years, SD = 13, range 16-65 years) showed that age was negatively, and job complexity and use of SOC strategies were positively, related to focus on opportunities. In addition, older employees in high-complexity jobs and older employees in low-complexity jobs with high use of SOC strategies were better able to maintain a focus on opportunities than older employees in low-complexity jobs with low use of SOC strategies.
Abstract:
Focus on opportunities is a cognitive-motivational facet of occupational future time perspective that describes how many new goals, options, and possibilities individuals expect to have in their personal work-related futures. This study examined focus on opportunities as a mediator of the relationships between age and work performance and between job complexity and work performance. In addition, it was expected that job complexity buffers the negative relationship between age and focus on opportunities and weakens the negative indirect effect of age on work performance. Results of mediation, moderation, and moderated mediation analyses with data collected from 168 employees in 41 organizations (mean age = 40.22 years, SD = 10.43, range = 19-64 years) as well as 168 peers providing work performance ratings supported the assumptions. The findings suggest that future studies on the role of age for work design and performance should take employees' focus on opportunities into account.
Abstract:
In this note, we briefly survey some recent approaches to the approximation of the Bayes factor used in Bayesian hypothesis testing and in Bayesian model choice. In particular, we reassess importance sampling, harmonic mean sampling, and nested sampling from a unified perspective.
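Two of the surveyed estimators can be sketched on a toy conjugate model (the model, prior scale and sample size are assumptions for illustration) where the exact evidence is analytic, so both approximations can be checked against it:

```python
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

rng = np.random.default_rng(0)

# Toy conjugate model (an assumption for illustration): a single observation
# y ~ N(mu, 1) with prior mu ~ N(0, tau^2); the evidence m(y) is analytic.
y, tau = 0.5, 0.5
log_m_exact = norm.logpdf(y, loc=0.0, scale=np.sqrt(1.0 + tau**2))

S = 200_000

# Importance sampling with the prior as proposal: m = E_prior[ L(mu) ]
mu_prior = rng.normal(0.0, tau, size=S)
log_m_is = logsumexp(norm.logpdf(y, loc=mu_prior, scale=1.0)) - np.log(S)

# Harmonic mean: 1/m = E_posterior[ 1/L(mu) ], with the exact conjugate
# posterior N(post_var * y, post_var) used to draw the samples
post_var = 1.0 / (1.0 / tau**2 + 1.0)
mu_post = rng.normal(post_var * y, np.sqrt(post_var), size=S)
log_m_hm = -(logsumexp(-norm.logpdf(y, loc=mu_post, scale=1.0)) - np.log(S))
```

On this deliberately well-behaved example both estimators land close to the analytic value; in general the harmonic mean estimator can have infinite variance, which is one of the issues such a reassessment examines.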
Abstract:
Having the ability to work with complex models can be highly beneficial, but the computational cost of doing so is often large. Complex models often have intractable likelihoods, so methods that directly use the likelihood function are infeasible. In these situations, the benefits of working with likelihood-free methods become apparent. Likelihood-free methods, such as parametric Bayesian indirect likelihood that uses the likelihood of an alternative parametric auxiliary model, have been explored throughout the literature as a good alternative when the model of interest is complex. One of these methods is called the synthetic likelihood (SL), which assumes a multivariate normal approximation to the likelihood of a summary statistic of interest. This paper explores the accuracy and computational efficiency of the Bayesian version of the synthetic likelihood (BSL) approach in comparison to a competitor known as approximate Bayesian computation (ABC) and its sensitivity to its tuning parameters and assumptions. We relate BSL to pseudo-marginal methods and propose to use an alternative SL that uses an unbiased estimator of the exact working normal likelihood when the summary statistic has a multivariate normal distribution. Several applications of varying complexity are considered to illustrate the findings of this paper.
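The standard synthetic likelihood can be sketched on a toy simulator (the simulator, summary statistics and parameter values are assumptions for illustration; this is the plain SL with a sample mean and covariance, not the unbiased variant proposed in the paper):

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)

def simulate_summary(theta, rng, n_obs=50):
    """Toy simulator (assumed for illustration): data ~ N(theta, 1);
    the summary statistic is (sample mean, log sample variance)."""
    x = rng.normal(theta, 1.0, size=n_obs)
    return np.array([x.mean(), np.log(x.var(ddof=1))])

def synthetic_loglik(theta, s_obs, rng, m=500):
    """Standard synthetic likelihood: fit a multivariate normal to m
    simulated summaries at theta and evaluate the observed summary."""
    S = np.array([simulate_summary(theta, rng) for _ in range(m)])
    mu = S.mean(axis=0)
    Sigma = np.cov(S, rowvar=False)
    return multivariate_normal.logpdf(s_obs, mean=mu, cov=Sigma)

# "Observed" summary generated at an assumed true theta of 2.0
s_obs = simulate_summary(2.0, rng)
ll_true = synthetic_loglik(2.0, s_obs, rng)   # near the true parameter
ll_far = synthetic_loglik(5.0, s_obs, rng)    # far from the true parameter
```

Plugging `synthetic_loglik` into a standard MCMC sampler in place of the exact log-likelihood gives the BSL approach compared against ABC in the paper; the number of simulations m per evaluation is one of the tuning parameters whose sensitivity the paper explores.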