Abstract:
The erosion processes resulting from the flow of fluids (gas-solid or liquid-solid) are encountered in nature and in many industrial processes. The common feature of these erosion processes is the interaction of the fluid (particle) with its boundary, resulting in the loss of material from the surface. This type of erosion is detrimental to the equipment used in pneumatic conveying systems. The puncture of pneumatic conveyor bends in industry causes several problems: (1) escape of the conveyed product, creating health and dust hazards; (2) repairing and cleaning up after punctures necessitates shutting down conveyors, which affects the operation of the plant and thus reduces profitability. The most common process failure in pneumatic conveying systems occurs when pipe sections at the bends wear away and puncture. The reason is that particles of varying speed, shape, size and material properties strike the bend wall with greater intensity than in straight sections of the pipe. Currently available models for predicting the lifetime of bends are inaccurate (over-predicting by 80%). An accurate predictive method would improve the structure of planned maintenance programmes, thus reducing unplanned shutdowns and ultimately the downtime costs associated with them. This is the main motivation behind the current research. The paper reports on two aspects of the first phase of the study undertaken for the current project: (1) development and implementation, and (2) testing of the modelling environment. The model framework encompasses Computational Fluid Dynamics (CFD) related engineering tools, based on Eulerian (gas) and Lagrangian (particle) approaches to represent the two distinct conveyed phases, to predict the lifetime of conveyor bends.
The method attempts to account for the effect of erosion on the pipe wall via particle impacts, taking into account the angle of attack, impact velocity, shape/size and the material properties of the wall and conveyed material, within a CFD framework. Only a handful of researchers use CFD as the basis for predicting particle motion; see for example [1-4]. It is hoped that this will lead to more realistic predictions of the wear profile. Results for two three-dimensional test cases using the commercially available CFD code PHOENICS are presented. These are reported in relation to the impact intensity and the sensitivity to the inlet particle distributions.
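The abstract does not give the erosion correlation used, but wear-per-impact models of this family typically combine an impact-velocity power law with an impact-angle function that peaks at a shallow angle for ductile walls. The sketch below is a hypothetical Finnie-type correlation, not the paper's model; the constants `k`, `n` and `angle_max` are purely illustrative.

```python
import math

def erosion_per_impact(v, angle, k=2.0e-9, n=2.4, angle_max=math.radians(25)):
    """Hypothetical erosion correlation: mass removed per particle impact.

    E = k * v**n * f(angle), where f peaks near a characteristic angle,
    as in Finnie-type models for ductile wall materials. All constants
    are illustrative, not values fitted in the paper.
    """
    # Simple angle function: rises linearly to 1 at angle_max, then decays.
    if angle <= angle_max:
        f = angle / angle_max
    else:
        f = math.cos(angle - angle_max) ** 2
    return k * v ** n * f

# A bend sees many shallow-angle impacts; in a CFD-coupled wear model the
# wall loss is accumulated over the stream of Lagrangian particle impacts.
impacts = [(25.0, math.radians(20)), (18.0, math.radians(40)), (30.0, math.radians(10))]
total_loss = sum(erosion_per_impact(v, a) for v, a in impacts)
```

In a full simulation, the (velocity, angle) pairs would come from the Lagrangian particle tracks where they intersect the bend wall, and `total_loss` per wall cell would be integrated in time to predict the puncture location and lifetime.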
Abstract:
The modelling of diffusive terms in particle methods is a delicate matter, and several models have been proposed in the literature to take such terms into account. The diffusion velocity method (DVM), originally designed for the diffusion of passive scalars, turns diffusive terms into convective ones by expressing them as a divergence involving a so-called diffusion velocity. In this paper, DVM is extended to the diffusion of vectorial quantities in the three-dimensional Navier–Stokes equations, in their incompressible velocity–vorticity formulation. The integration of a large eddy simulation (LES) turbulence model is investigated, and a general DVM formulation is proposed. Either with or without LES, a novel expression of the diffusion velocity is derived, which makes it easier to approximate and which highlights the analogy with the original formulation for scalar transport. From this result, DVM is then analysed in one dimension, both analytically and numerically on test cases, to demonstrate its good behaviour.
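The original scalar-transport form of the idea can be sketched numerically: the diffusive term nu * d²θ/dx² is rewritten as advection with the diffusion velocity u_d = -nu (dθ/dx)/θ, so moving the particles with u_d alone reproduces diffusion. The sketch below is a minimal 1D illustration with a Gaussian kernel estimate of θ, not the paper's vectorial scheme; all parameter values are illustrative.

```python
import numpy as np

# Diffusion velocity method (DVM) for a 1D passive scalar: the diffusive
# term nu * d2(theta)/dx2 is recast as pure convection with velocity
# u_d = -nu * (d theta/dx) / theta. theta is kernel-estimated from the
# particles themselves; parameter values are illustrative only.
rng = np.random.default_rng(0)
nu, dt, steps, h = 0.1, 0.01, 100, 0.2
x = rng.normal(0.0, 0.5, size=1500)      # particles sampling theta(x, 0)
var0 = x.var()

for _ in range(steps):
    d = x[:, None] - x[None, :]
    K = np.exp(-d**2 / (2 * h**2))       # Gaussian kernel (normalisation cancels)
    theta = K.sum(axis=1)                # kernel density estimate of theta
    dtheta = (-d / h**2 * K).sum(axis=1) # its spatial derivative
    u_d = -nu * dtheta / theta           # diffusion velocity
    x = x + u_d * dt                     # convect the particles only

# Exact diffusion would grow the variance by 2*nu*t = 0.2 over t = 1;
# the kernel-estimated DVM recovers most of this growth.
growth = x.var() - var0
```

The kernel width `h` introduces a small bias (the estimated θ is the true field convolved with the kernel), which is one reason approximability of the diffusion velocity matters in practice.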
Abstract:
Adult anchovies in the Bay of Biscay perform a north-to-south migration from late winter to early summer for spawning. However, what triggers and drives the geographic shift of the population remains poorly understood. An individual-based fish model has been implemented to explore the potential mechanisms that control the anchovy's movement routes toward its spawning habitats. To achieve this goal, two fish movement behaviors, gradient detection through restricted-area search and kinesis, simulated fish response to a dynamic environment. A bioenergetics model was used to represent individual growth and reproduction along the fish trajectory. The environmental forcing (food, temperature) of the model was provided by a coupled physical–biogeochemical model. We followed a hypothesis-testing strategy to run a series of simulations using different cues and computational assumptions. The gradient-detection behavior was found to be the most suitable mechanism to recreate the observed shift of anchovy distribution under the combined effect of sea-surface temperature and zooplankton. In addition, our results suggested that southward movement occurred more actively from early April to mid May, closely following the spatio-temporal evolution of zooplankton and temperature. In terms of fish bioenergetics, individuals who ended up in the southern part of the bay were in better condition based on energy content, suggesting the resulting energy gain as an ecological explanation for this migration. The kinesis approach resulted in moderate performance, producing the distribution pattern with the highest spread. Finally, model performance was not significantly affected by changes in the starting date, initial fish distribution or number of particles used in the simulations, whereas it was strongly influenced by the adopted cues.
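The gradient-detection rule can be illustrated with a toy restricted-area search on a gridded habitat-suitability field: at each step the simulated fish inspects cells within a small detection radius and moves to the most suitable one. This is a deliberately simplified stand-in for the study's combined temperature/zooplankton cue, with a synthetic field and invented grid coordinates.

```python
import numpy as np

# Toy restricted-area-search (gradient detection) movement rule: the fish
# samples a small neighbourhood of a habitat-suitability field and moves
# to the best cell. The field is a synthetic stand-in whose peak plays the
# role of the southern spawning habitat.
ny, nx = 40, 40
yy, xx = np.mgrid[0:ny, 0:nx]
suitability = -((yy - 30.0) ** 2 + (xx - 10.0) ** 2)   # peak at (30, 10)

def step(pos, field, radius=2):
    y, x = pos
    best, best_val = pos, field[y, x]
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            j, i = y + dy, x + dx
            if 0 <= j < field.shape[0] and 0 <= i < field.shape[1]:
                if field[j, i] > best_val:
                    best, best_val = (j, i), field[j, i]
    return best

pos = (5, 35)                      # start in the "north"
for _ in range(60):
    pos = step(pos, suitability)   # fish climbs the suitability gradient
```

In the actual model the field evolves in time (it is supplied by the physical–biogeochemical model), which is what produces the gradual April–May southward displacement rather than a direct march to a fixed optimum.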
Abstract:
Steam turbines play a significant role in global power generation. Research on low-pressure (LP) steam turbine stages is of special importance for steam turbine manufacturers, vendors, power plant owners and the scientific community, because their efficiency is lower than that of the high-pressure stages. Because of condensation, the last stages of an LP turbine experience irreversible thermodynamic losses, aerodynamic losses and erosion of the turbine blades. Additionally, an LP steam turbine requires maintenance due to moisture generation, which also affects turbine reliability. Therefore, the design of energy-efficient LP steam turbines requires a comprehensive analysis, by experiment or by numerical simulation, of the condensation phenomena and the corresponding losses occurring in the steam turbine. The aim of the present work is to apply computational fluid dynamics (CFD) to enhance the existing knowledge and understanding of condensing steam flows and of the loss mechanisms that arise from the irreversible heat and mass transfer during the condensation process in an LP steam turbine. Throughout this work, two commercial CFD codes were used to model non-equilibrium condensing steam flows. An Eulerian-Eulerian approach was used in which the mixture of vapour and liquid phases was solved by the Reynolds-averaged Navier-Stokes equations. The nucleation process was modelled with classical nucleation theory, and two different droplet growth models were used to predict the droplet growth rate. The flow turbulence was solved with the standard k-ε and the shear stress transport k-ω turbulence models; both models were further modified and implemented in the CFD codes. The thermodynamic properties of the vapour and liquid phases were evaluated with real gas models.
In this thesis, various topics, namely the influence of real gas properties, turbulence modelling, unsteadiness and the blade trailing-edge shape on wet-steam flows, are studied with different convergent-divergent nozzles, a turbine stator cascade and a 3D turbine stator-rotor stage. The simulated results were evaluated and discussed together with the experimental data available in the literature. The grid-independence study revealed that an adequate grid size is required to capture the correct trends of condensation phenomena in LP turbine flows. The study shows that accurate real gas properties are important for the precise modelling of non-equilibrium condensing steam flows. The flow expansion, and subsequently the rate of formation of liquid droplet nuclei and their growth process, were affected by the choice of turbulence model, and the losses were rather sensitive to turbulence modelling as well. Based on the presented results, the correct computational prediction of wet-steam flows in the LP turbine requires the turbulence to be modelled accurately. The trailing-edge shape of the LP turbine blades influenced the liquid droplet formation, distribution and sizes, and the loss generation: the semicircular trailing-edge shape predicted the smallest droplet sizes, while the square trailing-edge shape predicted greater losses. The analysis of steady and unsteady calculations of wet-steam flow showed that, in unsteady simulations, the interaction of wakes in the rotor blade row affected the flow field. The flow unsteadiness influenced the nucleation and droplet growth processes through the fluctuation of the Wilson point.
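For reference, the classical nucleation rate used in wet-steam CFD is typically written in a form similar to the following (a textbook expression; the exact form implemented in the two codes may differ, e.g. by non-isothermal corrections):

```latex
J = q_c \,\frac{\rho_v^{2}}{\rho_l}\,\sqrt{\frac{2\sigma}{\pi m^{3}}}\,
    \exp\!\left(-\frac{4\pi r_c^{2}\,\sigma}{3 k_B T}\right),
\qquad
r_c = \frac{2\sigma}{\rho_l\, R\, T \ln S},
```

where J is the nucleation rate per unit volume, q_c a condensation coefficient, ρ_v and ρ_l the vapour and liquid densities, σ the surface tension, m the mass of one molecule, k_B the Boltzmann constant, r_c the critical droplet radius, R the specific gas constant and S the supersaturation ratio. The strong exponential dependence on σ and T is why accurate real gas properties matter for locating the Wilson point.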
Abstract:
One challenge in data assimilation (DA) methods is how the error covariance for the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model. Variational methods, on the other hand, use concepts from control theory, whereby the state estimate is optimized from both the background and the measurements; numerical optimization schemes are applied, which avoid the memory storage and huge matrix inversions needed by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble and variational methods. It avoids the filter inbreeding problems that emerge when the ensemble spread underestimates the true error covariance; in VEnKF this is tackled by resampling the ensemble every time measurements become available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate into the 30 171-dimensional model state vector, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating a wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach to coupling the model and a DA scheme: an external program is used to send and receive information between the model and the DA procedure using files.
The advantage of this method is that the model code changes needed are minimal: only a few lines that handle input and output. Apart from being simple to couple, the approach can be employed even if the two codes are written in different programming languages, because the communication is not through code. The non-intrusive approach accommodates parallel computing by simply telling the control program to wait until all processes have ended before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to a multi-purpose hydrodynamic model, COHERENS, to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km² and an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images were available for 7 days between May 16 and July 6, 2009, and the effect of organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose a 1 km grid resolution. The results of the VEnKF were compared with the measurements recorded at an automatic station located in the north-western part of the lake; however, due to TSM data sparsity in both time and space, the results could not be well matched. The use of multiple automatic stations with real-time data is important to avoid the time-sparsity problem; combined with DA, this will help, for instance, in better understanding environmental hazard variables. We found that using a very large ensemble does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance.
The successful implementation of the non-intrusive VEnKF, and the ensemble-size limit on performance, lead to the emerging area of Reduced Order Modelling (ROM). To save computational resources, ROM avoids running the full-blown model. When ROM is combined with the non-intrusive DA approach, it may yield a cheaper algorithm that relaxes the computational challenges existing in the field of modelling and DA.
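The file-based coupling idea can be sketched in a few lines: the model and the DA routine never call each other directly; a control script passes state through files. In the sketch below the "model" is a trivial persistence forecast and the analysis is a scalar Kalman-style update; file names, variances and observation values are all illustrative stand-ins, not the thesis's COHERENS/VEnKF setup.

```python
import json
import os
import tempfile

# Non-intrusive coupling: model and DA communicate only via files, so they
# can live in different codes and even different languages.

def model_forecast(state_file, bg_file):
    with open(state_file) as f:
        x = json.load(f)["x"]
    with open(bg_file, "w") as f:
        json.dump({"x": x, "var": 0.25}, f)     # background + assumed variance

def da_analysis(bg_file, obs_file, ana_file):
    with open(bg_file) as f:
        bg = json.load(f)
    with open(obs_file) as f:
        obs = json.load(f)
    gain = bg["var"] / (bg["var"] + obs["r"])   # scalar Kalman gain
    xa = bg["x"] + gain * (obs["y"] - bg["x"])
    with open(ana_file, "w") as f:
        json.dump({"x": xa}, f)

d = tempfile.mkdtemp()
paths = {k: os.path.join(d, k + ".json") for k in ("state", "bg", "obs", "ana")}
with open(paths["state"], "w") as f:
    json.dump({"x": 1.0}, f)                    # first guess, far from truth (0)
with open(paths["obs"], "w") as f:
    json.dump({"y": 0.05, "r": 0.01}, f)        # accurate observation

model_forecast(paths["state"], paths["bg"])           # step 1: model writes a file
da_analysis(paths["bg"], paths["obs"], paths["ana"])  # step 2: DA reads the files
with open(paths["ana"]) as f:
    xa = json.load(f)["x"]                      # analysis picked up by the control script
```

A real assimilation cycle would loop these two steps, with the control script restarting the model from the analysis file each cycle, which is exactly where the per-cycle initialization overhead mentioned above comes from.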
Abstract:
Thin-film adhesion often determines microelectronic device reliability, and it is therefore essential to have experimental techniques that characterize it accurately and efficiently. Laser-induced delamination is a novel technique that uses laser-generated stress waves to load thin films at high strain rates and extract the fracture toughness of the film/substrate interface. The effectiveness of the technique in measuring the interface properties of metallic films has been documented in previous studies. The objective of the current effort is to model the effect of residual stresses on the dynamic delamination of thin films. Residual stresses can be high enough to affect the crack advance and the mode mixity of the delamination event, and must therefore be adequately modelled to make accurate and repeatable predictions of fracture toughness. The equivalent axial force and bending moment generated by the residual stresses are included in a dynamic, nonlinear finite element model of the delaminating film, and the impact of residual stresses on the final extent of the interfacial crack, the relative contribution of shear failure, and the deformed shape of the delaminated film is studied in detail. Another objective of the study is to develop techniques to address issues related to the testing of polymeric films. These films adhere well to silicon, and the resulting crack advance is often much smaller than for metallic films, making the extraction of the interface fracture toughness more difficult. The use of an inertial layer, which increases the amount of kinetic energy trapped in the film and thus the crack advance, is examined. It is determined that the inertial layer does improve the crack advance, although in a relatively limited fashion. The high interface toughness of polymer films often causes the film to fail cohesively when the crack front leaves the weakly bonded region and enters the strong interface.
The use of a tapered pre-crack region that provides a more gradual transition to the strong interface is examined. The tapered triangular pre-crack geometry is found to be effective in reducing the induced stresses, making it an attractive option. We conclude by studying the impact of modifying the pre-crack geometry to enable the testing of multiple polymer films.
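For orientation on why residual stresses matter here, the steady-state energy release rate contributed by a uniform residual stress \(\sigma_r\) in a film of thickness \(h\) follows the standard thin-film result (plane strain; this is background, not a formula taken from the study itself):

```latex
G_{ss} = \frac{\sigma_r^{2}\, h}{2\,\bar{E}},
\qquad
\bar{E} = \frac{E}{1-\nu^{2}},
```

where E and ν are the film's Young's modulus and Poisson's ratio. Because this contribution adds to the stress-wave loading, neglecting it biases both the inferred crack driving force and the mode mixity of the delamination.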
Abstract:
The central motif of this work is prediction and optimization in the presence of multiple interacting intelligent agents. We use the phrase `intelligent agents' to imply, in some sense, a `bounded rationality', the exact meaning of which varies depending on the setting. Our agents may not be `rational' in the classical game-theoretic sense, in that they do not always optimize a global objective; rather, they rely on heuristics, as is natural for human agents or even software agents operating in the real world. Within this broad framework we study the problem of influence maximization in social networks, where the behavior of agents is myopic but complication stems from the structure of the interaction networks. In this setting, we generalize two well-known models and give new algorithms and hardness results for our models. We then move on to models where the agents reason strategically but are faced with considerable uncertainty. For such games, we give a new solution concept and analyze a real-world game using our techniques. Finally, the richest model we consider is that of Network Cournot Competition, which deals with strategic resource allocation in hypergraphs, where agents reason strategically and their interaction is specified indirectly via the players' utility functions. For this model, we give the first equilibrium computability results. In all of the above problems, we assume that the payoffs for the agents are known. However, for real-world games, obtaining the payoffs can be quite challenging. To this end, we also study the inverse problem of inferring payoffs given the game history. We propose and evaluate a data-analytic framework, and we show that it is fast and performant.
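The baseline that influence-maximization models of this kind generalize is greedy hill-climbing under the independent cascade model (Kempe, Kleinberg and Tardos): repeatedly add the node with the largest Monte Carlo-estimated marginal gain in expected spread. The sketch below is that standard baseline on a toy graph, not the thesis's generalized models; the graph, propagation probability and trial count are illustrative.

```python
import random

# Greedy influence maximization under the independent cascade (IC) model.

def ic_spread(graph, seeds, p=0.2, trials=200, rng=None):
    """Monte Carlo estimate of expected cascade size from a seed set."""
    rng = rng or random.Random(0)     # fixed seed: common random numbers
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:   # edge "fires"
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / trials

def greedy(graph, k, **kw):
    """Hill-climbing: add the node with the largest marginal spread gain."""
    seeds = set()
    nodes = set(graph) | {v for nbrs in graph.values() for v in nbrs}
    for _ in range(k):
        best = max(nodes - seeds, key=lambda u: ic_spread(graph, seeds | {u}, **kw))
        seeds.add(best)
    return seeds

# Toy directed graph; node 0 is the natural first seed.
graph = {0: [1, 2, 3], 1: [4], 2: [4, 5], 3: [6], 4: [7], 5: [7], 6: [7]}
seeds = greedy(graph, 2)
```

Because the IC spread function is monotone and submodular, this greedy procedure carries the classical (1 - 1/e) approximation guarantee; the hardness results in the thesis concern what happens when the models are generalized beyond this setting.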
Abstract:
In this thesis, we address the motor control of elbow movement through two experimental approaches: a first psychophysical study conducted in human subjects, and a second involving neurophysiological recordings in the monkey. We identified several aspects of motor learning that remain unresolved, particularly concerning the interference that arises during adaptation to two or more anti-correlated force fields. We designed a paradigm in which colour stimuli help subjects predict the nature of the current external force field before they physically experience it during reaching movements. This contextual knowledge should facilitate adaptation to force fields by reducing interference. According to the MOSAIC computational model of motor learning (MOdular Selection And Identification model for Control), the colour stimuli help subjects form an "internal model" of each force field, recall it, and switch between two different force fields without interference. In the psychophysical experiment, four groups of human subjects performed elbow flexion/extension movements against two force fields. Each viscous force was associated with a colour of the computer screen, and the two forces were anti-correlated: a resistive force (Vr) was associated with a red screen and the other, assistive force (Va) with a green screen. The first two groups of subjects were control groups: the screen colour changed with each block of 4 trials, while the force field did not change. Subjects in the Va control group encountered only the assistive force Va, and subjects in the Vr control group performed their movements only against the resistive force Vr.
Thus, in these two control groups, the colour stimuli were not relevant for adapting the movement, and subjects adapted to only one force (Va or Vr). In the two experimental groups, however, subjects experienced two different force fields in different trial blocks (4 trials per block), associated with these colours. In the first experimental group (the "certain cue" group, IC), the relationship between the force field and the stimulus (screen colour) was constant: red always signalled the force Vr, while Va was signalled by green. Adaptation to the two anti-correlated forces in the IC group proved significant over the 10 days of training, and their movements were almost as well adjusted as those of the two control groups, which had experienced only one of the two forces. Moreover, the IC subjects rapidly showed predictive adaptive changes in their motor output at each change of screen colour, even during their first day of training. This demonstrates that they could use the colour stimuli to recall the appropriate motor command. In the second experimental group, the screen colour changed regularly from green to red at each transition between trial blocks, but the change of force field was randomized with respect to the colour changes (the "uncertain cue" group, II). These subjects took longer to adapt to the force fields than the other 3 groups and could not use the colour stimuli, which were unreliable because they were not systematically related to the force fields, to make predictive changes in their motor output.
Nevertheless, all subjects in this group developed an ingenious strategy allowing them to emit a "default" motor response in order to probe, or feel, the type of force they were about to encounter on the first trial of each block, at each colour change. In effect, they used the proprioceptive feedback about the nature of the force field to predict the appropriate motor output for the following trials, until the next change of screen colour signalled a possible change of force. This strategy was effective because the force remained the same within each block, during which the screen colour remained unchanged. This study demonstrated that the subjects of the II group were able to use the colour stimuli to extract the implicit and explicit information needed to perform the movements, and that they could use this information to reduce interference during adaptation to the anti-correlated forces. The results of this first study encouraged us to investigate the mechanisms that allow subjects to recall multiple motor skills paired with contextual colour stimuli. In our second study, the experiments were carried out at the neuronal level in the monkey. Our goal was to elucidate the extent to which neurons of the primary motor cortex (M1) can contribute to the compensation of a wide range of different external forces during elbow flexion/extension movements. With this study, we tested the MOSAIC-model hypothesis that there are multiple controller modules in the cerebellum that can predict each context and produce a motor output signal appropriate for a restricted set of conditions.
According to this model, M1 neurons would receive inputs from several specialized cerebellar controllers and would then show appropriate response modulation across a wide variety of conditions. We trained two monkeys to adapt their elbow flexion/extension movements to 5 different force fields: a null field with no perturbation; two anti-correlated viscous forces (assistive and resistive) that depended on movement velocity and resembled those used in our human psychophysical study; a resistive elastic force that depended on elbow-joint position; and, finally, a viscoelastic field comprising a linear summation of the elastic and viscous forces. Each force field was paired with a computer-screen colour, giving a total of 5 different colours, each associated with one force field (fixed relationship). The monkeys adapted well to the 5 force-field conditions and used the contextual colour stimuli to recall the motor output appropriate to the force context associated with each colour, thereby predicting their motor output before feeling the effects of the force field. EMG recordings ruled out co-contraction as the basis of these adaptations, since the EMG pattern was appropriate to compensate each force-field condition. In parallel, M1 neurons showed systematic changes in their activity, at the single-unit and population levels, in each force-field condition, signalling the required changes in the direction, amplitude and time course of the muscular force output needed to compensate the 5 force-field conditions.
The changes in response pattern for each force field were fairly consistent across the various M1 neurons, suggesting that most M1 neurons contribute to the compensation of all force-field conditions, in line with the predictions of the MOSAIC model. Moreover, this modulation of neuronal activity does not support the hypothesis of a strongly modular organization of M1.
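The MOSAIC selection mechanism tested here can be sketched compactly: each module pairs a forward model, which predicts the external force from the movement state, with a controller; responsibility weights are a softmax over the forward models' prediction errors, and the motor command blends the controllers accordingly. The sketch below is a schematic illustration with invented gains and noise scale, not the study's fitted model.

```python
import math

# MOSAIC-style module selection: responsibilities are a normalized Gaussian
# likelihood (softmax over negative squared prediction errors) of each
# forward model's force prediction. All numbers are illustrative.

def responsibilities(predictions, observed_force, sigma=0.5):
    errs = [(p - observed_force) ** 2 for p in predictions]
    w = [math.exp(-e / (2 * sigma ** 2)) for e in errs]
    s = sum(w)
    return [wi / s for wi in w]

velocity = 1.0
# Forward models for: assistive viscous (-b*v), resistive viscous (+b*v),
# resistive elastic (k*x, at x = 0.5), and the null field.
predictions = [-2.0 * velocity, 2.0 * velocity, 1.5 * 0.5, 0.0]
observed = 2.0 * velocity          # the limb actually feels the resistive field
lam = responsibilities(predictions, observed)
# lam[1] dominates, so the resistive controller's command is selected.
```

Under this scheme a colour cue can set the prior over modules before movement onset, which is the computational counterpart of the predictive switching the monkeys (and the IC group of humans) displayed.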
Abstract:
Findings on the role that emotion plays in human behavior have transformed Artificial Intelligence computation. Modern research explores how to simulate more intelligent and flexible systems. Several studies focus on the role emotion plays in establishing values for decision alternatives and decision outcomes. For instance, Busemeyer et al. (2007) argued that emotional state affects the subjective value of alternative choices. However, the emotional concepts in these theories are generally not defined formally, and it is difficult to describe in systematic detail how the processes work; as a result, the structures and processes cannot be explicitly implemented. Some attempts have been incorporated into larger computational systems that try to model how emotion affects human mental processes and behavior (Becker-Asano & Wachsmuth, 2008; Marinier, Laird & Lewis, 2009; Marsella & Gratch, 2009; Parkinson, 2009; Sander, Grandjean & Scherer, 2005). As we will see, some tutoring systems have explored this potential to inform user models. Likewise, dialogue systems, mixed-initiative planning systems, and systems that learn from observation could also benefit from such an approach (Dickinson, Brew & Meurers, 2013; Jurafsky & Martin, 2009). That is, considering emotion as interaction can be relevant for explaining the dynamic role it plays in action and cognition (see Boehner et al., 2007).
Abstract:
Understanding the mode-locked response of excitable systems to periodic forcing has important applications in neuroscience. For example, it is known that spatially extended place cells in the hippocampus are driven by the theta rhythm to generate a code conveying information about spatial location. Thus it is important to explore the role of neuronal dendrites in generating the response to periodic current injection. In this paper we pursue this using a compartmental model, with linear dynamics for each compartment, coupled to an active soma model that generates action potentials. By working with the piecewise-linear McKean model for the soma, we show how the response of the whole neuron model (soma and dendrites) can be written in closed form. We exploit this to construct a stroboscopic map describing the response of the spatially extended model to periodic forcing. A linear stability analysis of this map, together with a careful treatment of the non-differentiability of the soma model, allows us to construct the Arnol'd tongue structure for 1:q states (one action potential for q cycles of forcing). Importantly, we show how the presence of quasi-active membrane in the dendrites can influence the shape of the tongues. Direct numerical simulations confirm our theory and further indicate that resonant dendritic membrane can enlarge the windows in parameter space for chaotic behavior. These simulations also show that the spatially extended neuron model responds differently to global as opposed to point forcing: in the former case the spatio-temporal patterns of activity within an Arnol'd tongue are standing waves, whilst in the latter they are traveling waves.
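The stroboscopic-map construction can be illustrated on the simplest ingredient of such a model, a single passive (linear) compartment driven periodically, where the map is explicit. This is only a schematic illustration with invented parameter values, not the paper's soma-plus-dendrite analysis; there the same sampling idea is applied to the full piecewise-linear system.

```python
import math

# Stroboscopic map for one passive compartment v' = -v/tau + I0*cos(w*t):
# sampling the trajectory once per forcing period T gives an affine map
# v_{n+1} = a*v_n + c with a = exp(-T/tau) < 1, so iterates contract onto
# the unique 1:1 locked (periodic) solution. Parameters are illustrative.
tau, I0, T = 1.0, 1.0, 2.0
w = 2 * math.pi / T
a = math.exp(-T / tau)                   # multiplier governing map stability

def flow_one_period(v0):
    # Exact solution over one period via variation of constants; the
    # particular (periodic) solution is v_p(t) = A*cos(w*t) + B*sin(w*t).
    A = I0 * (1 / tau) / ((1 / tau) ** 2 + w ** 2)
    # v(T) = (v0 - v_p(0)) * exp(-T/tau) + v_p(T), with v_p(T) = v_p(0) = A.
    return (v0 - A) * a + A

v, samples = 3.0, []
for _ in range(20):
    v = flow_one_period(v)
    samples.append(v)
# Successive stroboscopic samples contract geometrically with ratio a.
```

In the paper the analogous map multipliers determine the linear stability of 1:q states, and the borders where they lose stability trace out the Arnol'd tongues; the extra subtlety there is handling the non-differentiability of the McKean soma at its switching thresholds.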
Abstract:
With ‘GS Strategy 2025’, BASF Business Services GmbH was formed to centrally steer all IT-related topics of the BASF group. Thus, a global charging system has to be designed that complies with international transfer-price regulations and the strategy of BASF SE. This work project develops a charging system and then evaluates it. The direct charging system benefits from its cost transparency but comes with a higher administrative effort due to volume-based charging. In contrast, the indirect charging system is attractive for its easy handling, which results from the application of suitable allocation keys. Given the complex group structure of BASF SE, with more than 300 legal entities in 80 countries, the lower administrative effort of the indirect charging system outweighs the benefits of the direct charging model, and the indirect system should be used by the BASF group.