934 results for static computer simulation
Abstract:
Electrical Impedance Tomography (EIT) is an imaging method which enables a volume conductivity map of a subject to be produced from multiple impedance measurements. It has the potential to become a portable non-invasive imaging technique of particular use in imaging brain function. Accurate numerical forward models may be used to improve image reconstruction but, until now, have employed an assumption of isotropic tissue conductivity. This may be expected to introduce inaccuracy, as body tissues, especially those such as white matter and the skull in head imaging, are highly anisotropic. The purpose of this study was, for the first time, to develop a method for incorporating anisotropy in a forward numerical model for EIT of the head and assess the resulting improvement in image quality in the case of linear reconstruction of one example of the human head. A realistic Finite Element Model (FEM) of an adult human head with segments for the scalp, skull, CSF, and brain was produced from a structural MRI. Anisotropy of the brain was estimated from a diffusion tensor-MRI of the same subject and anisotropy of the skull was approximated from the structural information. A method for incorporation of anisotropy in the forward model and its use in image reconstruction was produced. The improvement in reconstructed image quality was assessed in computer simulation by producing forward data and then performing linear reconstruction using a sensitivity matrix approach. The mean boundary data difference between anisotropic and isotropic forward models for a reference conductivity was 50%. Use of the correct anisotropic FEM in image reconstruction, as opposed to an isotropic one, corrected an error of 24 mm in imaging a 10% conductivity decrease located in the hippocampus, improved localisation for conductivity changes deep in the brain and due to epilepsy by 4-17 mm, and, overall, led to a substantial improvement in image quality. This suggests that incorporation of anisotropy in numerical models used for image reconstruction is likely to improve EIT image quality.
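As a rough illustration of the linear, sensitivity-matrix reconstruction step mentioned above, the following sketch solves a Tikhonov-regularized least-squares problem; the random Jacobian, the regularization parameter and the toy perturbation are illustrative assumptions, not the study's actual head model.

```python
import numpy as np

def reconstruct_conductivity_change(J, dv, lam=1e-3):
    """Linear EIT reconstruction: solve (J^T J + lam*I) ds = J^T dv.

    J   : (n_measurements, n_elements) sensitivity matrix from the forward model
    dv  : (n_measurements,) boundary voltage difference data
    lam : Tikhonov regularization parameter (illustrative value)
    """
    n = J.shape[1]
    lhs = J.T @ J + lam * np.eye(n)
    rhs = J.T @ dv
    return np.linalg.solve(lhs, rhs)

# Toy example with a random sensitivity matrix (purely illustrative).
rng = np.random.default_rng(0)
J = rng.normal(size=(200, 50))
true_ds = np.zeros(50)
true_ds[10] = -0.1          # a 10% conductivity decrease in one element
dv = J @ true_ds + 1e-4 * rng.normal(size=200)
ds_hat = reconstruct_conductivity_change(J, dv)
print(ds_hat[10])
```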
Abstract:
It is generally known that welded structures subjected to fatigue loading fail precisely at the welded joints. Expert design of structures containing full-penetration welded joints, together with modern manufacturing methods, has almost eliminated fatigue failures in welded structures. Improving fatigue strength through a strict full-penetration requirement is, however, an uneconomical solution. The quality requirements placed on full-penetration welded joints must define clear inspection instructions and rejection criteria. The purpose of this Master's thesis was to study the effect of geometric variables on the fatigue strength of load-carrying welded joints. Attention was focused mainly on design variables that affect the initiation of fatigue failures on the weld root side. Current regulations and standards, which are based on experimental results, give rather general guidelines for the fatigue design of welded joints. Therefore, entirely new parametric equations were formulated for calculating the allowable nominal threshold stress range, Δσth, so that root-side fatigue failures of welded joints can be avoided. In addition, weld-root fatigue classes (FAT) were calculated for each joint type and compared with the results obtained using existing design guidelines. Several three-dimensional (3D) analyses were performed as complementary references. Published data based on experimental results were used to help understand the fatigue behaviour of welded joints and to determine the material constants. The fatigue strength of load-carrying partial-penetration welded joints was determined using the finite element method. The maximum principal stress criterion was used to predict the fracture behaviour. For the selected weld material and test conditions, the fracture behaviour was modelled with the crack growth rate da/dN and the stress intensity factor range ΔK. Numerical integration of the Paris equation was carried out with the FRANC2D/L computer program. Based on the results obtained, the FAT class can be calculated for the case under investigation. Δσth was calculated from the stress intensity factor range of the initial crack and the threshold stress intensity factor, ΔKth. For ranges smaller than ΔKth, the crack does not grow. The analyses assumed an as-welded joint, without post-weld treatment, containing a pre-existing initial crack at the weld root. The results of the analyses are useful to designers making decisions about geometric parameters that affect the fatigue strength of welded joints.
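A minimal sketch of the kind of crack-growth calculation described above, assuming the standard Paris law da/dN = C·(ΔK)^m with ΔK ≈ Y·Δσ·√(πa) and a no-growth condition below ΔKth; the constants, crack sizes and geometry factor are illustrative placeholders, not values from the thesis or from FRANC2D/L.

```python
import math

def cycles_to_failure(a_init, a_final, d_sigma, C, m, Y=1.0, dK_th=2.0, da=1e-5):
    """Numerically integrate the Paris law da/dN = C * (dK)**m.

    a_init, a_final : initial and final crack lengths [m]
    d_sigma         : nominal stress range [MPa]
    C, m            : Paris constants (illustrative values)
    Y               : geometry correction factor (assumed constant here)
    dK_th           : threshold stress intensity factor range; below it no growth
    """
    a, N = a_init, 0.0
    while a < a_final:
        dK = Y * d_sigma * math.sqrt(math.pi * a)
        if dK <= dK_th:
            return float('inf')   # crack does not grow below the threshold
        N += da / (C * dK ** m)   # cycles needed to grow the crack by da
        a += da
    return N

# Illustrative numbers only.
print(cycles_to_failure(a_init=0.2e-3, a_final=5e-3, d_sigma=100.0,
                        C=3e-13, m=3.0))
```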
Abstract:
The flexibility of different regions of HIV-1 protease was examined by using a database consisting of 73 X-ray structures that differ in terms of sequence, ligands or both. The root-mean-square differences of the backbone for the set of structures were shown to have the same variation with residue number as those obtained from molecular dynamics simulations, normal mode analyses and X-ray B-factors. This supports the idea that observed structural changes provide a measure of the inherent flexibility of the protein, although specific interactions between the protease and the ligand play a secondary role. The results suggest that the potential energy surface of the HIV-1 protease is characterized by many local minima with small energetic differences, some of which are sampled by the different X-ray structures of the HIV-1 protease complexes. Interdomain correlated motions were calculated from the structural fluctuations and the results were also in agreement with molecular dynamics simulations and normal mode analyses. Implications of the results for the drug-resistance engendered by mutations are discussed briefly.
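The per-residue backbone variability described above can be summarized, for pre-superposed structures, by a root-mean-square fluctuation profile; this short sketch assumes the coordinates are already aligned and uses random toy data rather than the actual X-ray structures.

```python
import numpy as np

def per_residue_rms(coords):
    """Per-residue RMS deviation across a set of pre-aligned structures.

    coords : array of shape (n_structures, n_residues, 3) with, e.g.,
             C-alpha coordinates after superposition.
    Returns an (n_residues,) array of RMS fluctuations about the mean.
    """
    mean = coords.mean(axis=0)                       # average structure
    dev2 = ((coords - mean) ** 2).sum(axis=2)        # squared displacement
    return np.sqrt(dev2.mean(axis=0))

# Toy data: 73 "structures" of 99 residues (random, purely illustrative).
rng = np.random.default_rng(1)
coords = rng.normal(scale=0.5, size=(73, 99, 3))
print(per_residue_rms(coords)[:5])
```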
Abstract:
Self-categorization theory is a social psychology theory dealing with the relation between the individual and the group. It explains group behaviour through the conception of oneself and of others as members of social categories, and through the attribution of the categories' prototypical characteristics to individuals. Hence, it is a theory of the individual that is intended to explain collective phenomena. Situations involving a large number of non-trivially interacting individuals typically generate complex collective behaviours that are difficult to anticipate on the basis of individual behaviour. Computer simulation of such systems is a reliable way of systematically exploring the dynamics of the collective behaviour as a function of individual specifications.
In this thesis, we present a formal model of a part of self-categorization theory known as the metacontrast principle. Given the distribution of a set of individuals on one or several comparison dimensions, the model generates categories and their associated prototypes. We show that the model behaves coherently with respect to the theory and is able to replicate experimental data concerning various group phenomena, for example polarization. Moreover, it makes it possible to describe systematically the predictions of the theory from which it is derived, especially in new situations. At the collective level, several dynamics can be observed, among them convergence towards consensus, towards fragmentation, or towards the emergence of extreme attitudes. We also study the effect of the social network on the dynamics and show that, except for the convergence speed, which increases as the mean distances on the network decrease, the observed convergence types depend little on the chosen network. We further note that individuals located at the border of the groups (whether in the social network or spatially) have a decisive influence on the outcome of the dynamics. In addition, the model can be used as an automatic classification algorithm. It identifies prototypes around which groups are built. Prototypes are positioned so as to accentuate the groups' typical characteristics and are not necessarily central. Finally, if we consider the set of pixels of an image as individuals in a three-dimensional color space, the model provides a filter that can attenuate noise, help detect objects, and simulate perception biases such as chromatic induction.
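As a hedged illustration of the metacontrast principle the model formalizes, the sketch below computes, for each member of a category, the ratio of mean distances to out-group versus in-group members on a single comparison dimension; the data and the exact ratio definition are simplified assumptions, not the thesis' full model.

```python
import numpy as np

def metacontrast_ratio(positions, labels, target):
    """Mean distance to out-group members divided by mean distance to
    in-group members, computed for each member of the `target` category."""
    positions, labels = np.asarray(positions, float), np.asarray(labels)
    inside = positions[labels == target]
    outside = positions[labels != target]
    ratios = []
    for x in inside:
        intra = np.abs(inside - x)
        intra = intra[intra > 0]          # exclude self-distance
        inter = np.abs(outside - x)
        if len(intra) and len(inter):
            ratios.append(inter.mean() / intra.mean())
    return ratios

# Two clusters of attitudes on one comparison dimension.
pos = [0.1, 0.2, 0.25, 0.8, 0.85, 0.9]
lab = ['A', 'A', 'A', 'B', 'B', 'B']
print(metacontrast_ratio(pos, lab, 'A'))   # highest ratio -> most prototypical
```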
Abstract:
The dynamical properties of shaken granular materials are important in many industrial applications where shaking is used to mix, segregate and transport them. In this work a systematic, large-scale simulation study has been performed to investigate the rheology of dense granular media, in the presence of gas, in a three-dimensional vertical cylinder filled with glass balls. The base wall of the cylinder is subjected to sinusoidal oscillation in the vertical direction. The viscoelastic behavior of glass balls during a collision has been studied experimentally using a modified Newton's Cradle device. By analyzing the results of the measurements with a numerical model based on the finite element method, the viscous damping coefficient was determined for the glass balls. To obtain detailed information about the interparticle interactions in a shaker, a simplified model for collisions between particles of a granular material was proposed. In order to simulate the flow of the surrounding gas, a formulation of the equations for fluid flow in a porous medium including particle forces was proposed. These equations are solved with the Large Eddy Simulation (LES) technique using a subgrid model originally proposed for compressible turbulent flows. For a pentagonal prism-shaped container under vertical vibrations, the results show that oscillon-type structures were formed. Oscillons are highly localized particle-like excitations of the granular layer. This self-sustaining state was named by analogy with its closest large-scale analogue, the soliton, which was first documented by J.S. Russell in 1834. The results reported by Bordbar and Zamankhan (2005b) also show that a slightly revised fluctuation-dissipation theorem might apply to shaken sand, which appears to be a system far from equilibrium and can exhibit strong spatial and temporal variations in quantities such as density and local particle velocity. In this light, hydrodynamic-type continuum equations were presented for describing the deformation and flow of dense gas-particle mixtures. The constitutive equation used for the stress tensor provides an effective viscosity with a liquid-like character at low shear rates and a gas-like behavior at high shear rates. Numerical solutions of the aforementioned hydrodynamic equations were obtained for predicting the flow dynamics of a dense mixture of gas and particles in vertical cylindrical containers. For a heptagonal prism-shaped container under vertical vibrations, the model results were found to predict bubbling behavior analogous to that observed experimentally. This bubbling behavior may be explained by the unusual gas pressure distribution found in the bed. In addition, oscillon-type structures were found to form in a vertically vibrated, pentagonal prism-shaped container, in agreement with computer simulation results. These observations suggest that the pressure distribution plays a key role in the deformation and flow of dense mixtures of gas and particles under vertical vibrations. The present models provide greater insight toward the explanation of poorly understood hydrodynamic phenomena in the field of granular flows and dense gas-particle mixtures. The models can be generalized to investigate granular material-container wall interactions, which would be an issue of high interest in industrial applications. By following this approach, ideal processing conditions and powder transport can be created in industrial systems.
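The abstract does not specify the collision model; as a hedged placeholder, the sketch below shows a generic linear spring-dashpot normal contact force of the kind commonly used in discrete-element simulations, with a damping coefficient that would, in the spirit of this study, be fitted from a Newton's Cradle type experiment. All values are illustrative assumptions.

```python
def normal_contact_force(overlap, overlap_rate, k=1e5, c=50.0):
    """Linear spring-dashpot normal force for a particle-particle contact.

    overlap      : geometric overlap of the two spheres [m] (>0 when in contact)
    overlap_rate : rate of change of the overlap [m/s] (positive while approaching)
    k            : contact stiffness (illustrative value)
    c            : viscous damping coefficient, e.g. fitted from a restitution
                   experiment (illustrative value)
    """
    if overlap <= 0.0:
        return 0.0
    # Clamp to zero so the contact never becomes attractive while separating.
    return max(0.0, k * overlap + c * overlap_rate)

print(normal_contact_force(1e-4, 0.05))
```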
Abstract:
Static process simulation has traditionally been used to model complex processes for various purposes. However, using static process simulators to prepare holistic examinations aimed at improving profit-making capability requires a lot of work, because producing results requires assessing the applicability of detailed data that may be irrelevant to the objective. The data relevant to the total assessment gets buried by irrelevant data. Furthermore, the models do not include an examination of maintenance or risk management, and economic examination is often an extra property added to them which can be performed with a spreadsheet program. A process model applicable to holistic economic examinations has been developed in this work. The model is based on the life cycle profit philosophy developed by Hagberg and Henriksson in 1996. The construction of the model has utilized life cycle assessment and life cycle costing methodologies with a view to developing, above all, a model which would be applicable to the economic examination of complete wholes and which would focus the need for information on aspects essential to the objectives. Life cycle assessment and costing differ from each other in their modeling principles, but the features of both methodologies can be used in the development of economic process modeling. Methods applicable to the modeling of complex processes can be examined from the viewpoint of life cycle methodologies, because they involve the collection and management of large bodies of information as well as the production of information for the needs of decision-makers. The results of the study show that, on the basis of the principles of life cycle modeling, a process model can be created which may be used to produce holistic efficiency examinations of the profit-making capability of a production line with fewer resources than with traditional methods. The calculations of the model are based to the maximum extent on the information system of the factory, which means that the accuracy of the results can be improved by developing the information systems so that they provide the best possible information for this kind of examination.
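A very reduced sketch of a life-cycle-profit style calculation of the kind the model automates: yearly net cash flows discounted over the examination period. The figures and discount rate are invented placeholders, not parameters or results from the study.

```python
def life_cycle_profit(revenue, operating_cost, maintenance_cost,
                      investment, years, rate=0.08):
    """Discounted life cycle profit of a production line (NPV-style).

    All yearly figures are assumed constant; `rate` is an illustrative
    discount rate, not a value from the study.
    """
    profit = -investment
    for t in range(1, years + 1):
        net = revenue - operating_cost - maintenance_cost
        profit += net / (1.0 + rate) ** t
    return profit

print(life_cycle_profit(revenue=12e6, operating_cost=8e6,
                        maintenance_cost=1e6, investment=15e6, years=10))
```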
Abstract:
The purpose of this study was to investigate some important features of granular flows and suspension flows by computational simulation methods. Granular materials have been considered an independent state of matter because of their complex behavior. They sometimes behave like a solid, sometimes like a fluid, and sometimes both phases can coexist in equilibrium. Computer simulation of dense shear granular flows of monodisperse, spherical particles shows that the collisional contact model yields the coexistence of solid and fluid phases, while the frictional model represents a uniform flow of the fluid phase. However, a comparison between the stress signals from the simulations and from experiments revealed that the collisional model gives a better match with the experimental evidence. Although the effect of gravity is found to be important in the sedimentation of the solid part, the stick-slip behavior associated with the collisional model looks more similar to that of the experiments. Mathematical formulations based on the kinetic theory have been derived for moderate solid volume fractions under the assumption of flow homogeneity. In order to produce simulations that provide such an ideal flow, simulations of unbounded granular shear flows were performed, so that homogeneous flow properties could be achieved at moderate solid volume fractions. A new algorithm, namely the nonequilibrium approach, was introduced to show the features of self-diffusion in granular flows. Using this algorithm, a one-way flow can be extracted from the entire flow, which not only provides a straightforward calculation of the self-diffusion coefficient but can also qualitatively determine the deviation of self-diffusion from the linear law in some regions near the wall in bounded flows. In any case, the average lateral self-diffusion coefficient calculated by the aforementioned method showed good agreement with the predictions of the kinetic theory formulation. In continuation of the computer simulation of shear granular flows, some numerical and theoretical investigations were carried out on mass transfer and particle interactions in particulate flows. In this context, the boundary element method and its combination with the spectral method, exploiting the special capabilities of wavelets, have been introduced as efficient numerical methods to solve the governing equations of mass transfer in particulate flows. A theoretical formulation of fluid dispersivity in suspension flows revealed that the fluid dispersivity depends upon the fluid properties and particle parameters as well as the fluid-particle and particle-particle interactions.
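The lateral self-diffusion coefficient mentioned above is conventionally obtained from the Einstein relation between mean squared displacement and time; the sketch below assumes that relation and uses a toy random-walk trajectory rather than data from the granular simulations.

```python
import numpy as np

def lateral_self_diffusion(positions, dt, dim=1):
    """Estimate a self-diffusion coefficient from the Einstein relation
    D = MSD / (2 * dim * t), using the final time of the trajectory.

    positions : (n_steps, n_particles) array of a lateral coordinate
    dt        : time interval between stored configurations
    """
    disp = positions - positions[0]          # displacement from the start
    msd = (disp ** 2).mean(axis=1)           # average over particles
    t = dt * (len(positions) - 1)
    return msd[-1] / (2.0 * dim * t)

# Toy random-walk trajectory (illustrative only).
rng = np.random.default_rng(2)
steps = rng.normal(scale=1e-3, size=(1000, 500))
traj = np.cumsum(steps, axis=0)
print(lateral_self_diffusion(traj, dt=1e-4))
```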
Abstract:
Neuronal dynamics are fundamentally constrained by the underlying structural network architecture, yet much of the details of this synaptic connectivity are still unknown even in neuronal cultures in vitro. Here we extend a previous approach based on information theory, the Generalized Transfer Entropy, to the reconstruction of connectivity of simulated neuronal networks of both excitatory and inhibitory neurons. We show that, due to the model-free nature of the developed measure, both kinds of connections can be reliably inferred if the average firing rate between synchronous burst events exceeds a small minimum frequency. Furthermore, we suggest, based on systematic simulations, that even lower spontaneous inter-burst rates could be raised to meet the requirements of our reconstruction algorithm by applying a weak spatially homogeneous stimulation to the entire network. By combining multiple recordings of the same in silico network before and after pharmacologically blocking inhibitory synaptic transmission, we show then how it becomes possible to infer with high confidence the excitatory or inhibitory nature of each individual neuron.
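For orientation only, the sketch below implements a plain, order-1 transfer entropy estimator for discrete time series; the Generalized Transfer Entropy used in the study adds further conditioning (for example on the bursting dynamics), so this should be read as a simplified stand-in rather than the study's measure.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Order-1 transfer entropy TE(X -> Y) for two discrete time series,
    estimated from joint histogram counts (in bits)."""
    x, y = np.asarray(x), np.asarray(y)
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles_y = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]
        p_cond_self = pairs_yy[(y1, y0)] / singles_y[y0]
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te

# Toy example: y partially copies x with a one-step delay.
rng = np.random.default_rng(3)
x = rng.integers(0, 2, size=5000)
flip = rng.integers(0, 2, size=5000) & rng.integers(0, 2, size=5000)
y = np.roll(x, 1) ^ flip
print(transfer_entropy(x, y), transfer_entropy(y, x))  # first should be larger
```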
Abstract:
Graph theory has provided a key mathematical framework to analyse the architecture of human brain networks. This architecture embodies an inherently complex relationship between connection topology, the spatial arrangement of network elements, and the resulting network cost and functional performance. An exploration of these interacting factors and driving forces may reveal salient network features that are critically important for shaping and constraining the brain's topological organization and its evolvability. Several studies have pointed to an economic balance between network cost and network efficiency with networks organized in an 'economical' small-world favouring high communication efficiency at a low wiring cost. In this study, we define and explore a network morphospace in order to characterize different aspects of communication efficiency in human brain networks. Using a multi-objective evolutionary approach that approximates a Pareto-optimal set within the morphospace, we investigate the capacity of anatomical brain networks to evolve towards topologies that exhibit optimal information processing features while preserving network cost. This approach allows us to investigate network topologies that emerge under specific selection pressures, thus providing some insight into the selectional forces that may have shaped the network architecture of existing human brains.
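A Pareto-optimal set of the kind approximated by the multi-objective search can be illustrated with a brute-force non-dominated filter over candidate topologies scored by wiring cost and communication efficiency; the candidate values below are invented for illustration only.

```python
import numpy as np

def pareto_front(points):
    """Return the indices of non-dominated points when the first objective
    (e.g. wiring cost) is minimized and the second (e.g. efficiency) is
    maximized."""
    points = np.asarray(points, float)
    front = []
    for i, (cost_i, eff_i) in enumerate(points):
        dominated = any(
            (cost_j <= cost_i and eff_j >= eff_i) and
            (cost_j < cost_i or eff_j > eff_i)
            for j, (cost_j, eff_j) in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Candidate network topologies described by (wiring cost, efficiency).
candidates = [(10.0, 0.60), (12.0, 0.75), (15.0, 0.74), (20.0, 0.90), (9.0, 0.40)]
print(pareto_front(candidates))   # indices of Pareto-optimal topologies
```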
Abstract:
A sign of presence in virtual environments is that people respond to situations and events as if they were real, where response may be considered at many different levels, ranging from unconscious physiological responses through to overt behavior, emotions, and thoughts. In this paper we consider two responses that gave different indications of the onset of presence in a gradually forming environment. Two aspects of the response of people to an immersive virtual environment were recorded: their eye scanpath, and their skin conductance response (SCR). The scenario was formed over a period of 2 min, by introducing an increasing number of its polygons in random order in a head-tracked head-mounted display. For one group of experimental participants (n = 8) the environment formed into one in which they found themselves standing on top of a 3 m high column. For a second group of participants (n = 6) the environment was otherwise the same except that the column was only 1 cm high, so that they would be standing at normal ground level. For a third group of participants (n = 14) the polygons never formed into a meaningful environment. The participants who stood on top of the tall column exhibited a significant decrease in entropy of the eye scanpath and an increase in the number of SCR by 99 s into the scenario, at a time when only 65% of the polygons had been displayed. The ground level participants exhibited a similar decrease in scanpath entropy, but not the increase in SCR. The random scenario grouping did not exhibit this decrease in eye scanpath entropy. A drop in scanpath entropy indicates that the environment had cohered into a meaningful perception. An increase in the rate of SCR indicates the perception of an aversive stimulus. These results suggest that on these two dimensions (scanpath entropy and rate of SCR) participants were responding realistically to the scenario shown in the virtual environment. In addition, the response occurred well before the entire scenario had been displayed, suggesting that once a set of minimal cues exists within a scenario, it is enough to form a meaningful perception. Moreover, at the level of the sympathetic nervous system, the participants who were standing on top of the column exhibited arousal as if their experience might be real. This is an important practical aspect of the concept of presence.
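Scanpath entropy, as used above, can be estimated as the Shannon entropy of the fixation distribution over spatial bins; the sketch below assumes normalized gaze coordinates and an arbitrary 8x8 binning, which may differ from the paper's actual procedure.

```python
import numpy as np

def scanpath_entropy(fixations, n_bins=8):
    """Shannon entropy (bits) of the spatial distribution of gaze fixations.

    fixations : (n, 2) array of normalized gaze coordinates in [0, 1]
    n_bins    : number of bins per axis for the 2D histogram
    Lower entropy means gaze concentrated on few regions, i.e. a more
    structured scanpath over a meaningful scene.
    """
    hist, _, _ = np.histogram2d(fixations[:, 0], fixations[:, 1],
                                bins=n_bins, range=[[0, 1], [0, 1]])
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(4)
diffuse = rng.uniform(size=(300, 2))                 # gaze spread everywhere
focused = np.clip(0.5 + 0.05 * rng.normal(size=(300, 2)), 0, 1)  # one region
print(scanpath_entropy(diffuse), scanpath_entropy(focused))
```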
Abstract:
Noise pollution is one of the problems that has gained considerable relevance in today's society. This is because such pollution causes stress and health problems that can become serious. In addition, society demands that all its comforts be maintained while reducing the nuisance that noise pollution can cause. In large cities, according to many studies, 80% of noise pollution comes from road traffic, which makes it the most important source. The aim of this project is to develop a catalogue, for the most common ground types, of the most appropriate absorption coefficients, and to evaluate the error that arises in propagation calculations over long distances, which cannot occur in urban environments and therefore are not usually a problem. The scope of the project covers flat ground surfaces, both reflective and absorbing, of the kinds typically found around transport infrastructure, and propagation distances from 25 metres up to 150 metres.
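As a hedged illustration of how a ground absorption coefficient enters a long-range propagation estimate, the sketch below uses a simple two-ray (image source) model with a real reflection factor sqrt(1 - alpha); real ground impedance is frequency dependent and complex, so this is only a simplified stand-in for the calculations referred to above, and the heights, frequency and alpha are assumed values.

```python
import numpy as np

def excess_attenuation_db(h_src, h_rec, distance, freq, alpha, c=343.0):
    """Excess attenuation (dB) of a point source above flat ground,
    using a simple two-ray (image source) model.

    alpha : energy absorption coefficient of the ground (0 = rigid,
            1 = fully absorbing); the reflection factor is taken as
            sqrt(1 - alpha), a deliberate simplification.
    """
    k = 2.0 * np.pi * freq / c
    r_direct = np.hypot(distance, h_rec - h_src)
    r_reflect = np.hypot(distance, h_rec + h_src)
    q = np.sqrt(max(0.0, 1.0 - alpha))
    p_free = np.exp(1j * k * r_direct) / r_direct
    p_total = p_free + q * np.exp(1j * k * r_reflect) / r_reflect
    return 20.0 * np.log10(abs(p_total) / abs(p_free))

for d in (25.0, 50.0, 100.0, 150.0):
    print(d, round(excess_attenuation_db(1.5, 1.5, d, 500.0, alpha=0.3), 2))
```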
Abstract:
One of the classic research topics in adaptive behavior is the collective displacement of groups of organisms such as flocks of birds, schools of fish, herds of mammals and crowds of people. However, most agent-based simulations of group behavior do not provide a quantitative index for determining the point at which the flock emerges. We have developed an index of the aggregation of moving individuals in a flock and have provided an example of how it can be used to quantify the degree to which a group of moving individuals actually forms a flock.
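The abstract does not define the aggregation index itself; as one common example of such an order parameter, the sketch below computes the polarization of a group of moving agents (the length of the mean heading vector), which should not be taken as the authors' actual index.

```python
import numpy as np

def polarization(velocities):
    """Polarization order parameter: length of the mean unit velocity vector.
    1.0 = all agents move in the same direction; near 0 = uncorrelated
    directions, i.e. no flock."""
    v = np.asarray(velocities, float)
    speeds = np.linalg.norm(v, axis=1, keepdims=True)
    headings = v / np.where(speeds == 0, 1.0, speeds)
    return np.linalg.norm(headings.mean(axis=0))

rng = np.random.default_rng(5)
aligned = np.tile([1.0, 0.2], (50, 1)) + 0.1 * rng.normal(size=(50, 2))
random_dirs = rng.normal(size=(50, 2))
print(polarization(aligned), polarization(random_dirs))
```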
Abstract:
We present ACACIA, an agent-based program implemented in Java StarLogo 2.0 that simulates a two-dimensional microworld populated by agents, obstacles and goals. Our program simulates how agents can reach long-term goals by following sensorial-motor couplings (SMCs) that control how the agents interact with their environment and other agents through a process of local categorization. Thus, while acting in accordance with this set of SMCs, the agents reach their goals through the emergence of global behaviors. This agent-based simulation program would allow us to understand some psychological processes such as planning behavior from the point of view that the complexity of these processes is the result of agent-environment interaction.
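A minimal, hypothetical sketch of a sensorial-motor coupling of the kind described: a purely local rule that steers an agent toward a goal while veering away from nearby obstacles, so that goal-reaching emerges from repeating the rule. The rule, parameters and geometry are assumptions for illustration and are not taken from ACACIA.

```python
import numpy as np

def smc_step(position, goal, obstacles, speed=0.1, avoid_radius=1.0):
    """One sensorial-motor coupling step: move towards the goal, but veer
    away from any obstacle closer than `avoid_radius`."""
    heading = goal - position
    for obs in obstacles:
        offset = position - obs
        dist = np.linalg.norm(offset)
        if 0 < dist < avoid_radius:
            heading += offset / dist * (avoid_radius - dist) * 5.0
    norm = np.linalg.norm(heading)
    return position + speed * heading / norm if norm > 0 else position

pos = np.array([0.0, 0.0])
goal = np.array([5.0, 5.0])
obstacles = [np.array([2.5, 2.5])]
for _ in range(100):
    pos = smc_step(pos, goal, obstacles)
print(pos)   # ends up near the goal despite the obstacle on the direct path
```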
Abstract:
Our body schema gives the subjective impression of being highly stable. However, a number of easily-evoked illusions illustrate its remarkable malleability. In the rubber-hand illusion, illusory ownership of a rubber-hand is evoked by synchronous visual and tactile stimulation on a visible rubber arm and on the hidden real arm. Ownership is concurrent with a proprioceptive illusion of displacement of the arm position towards the fake arm. We have previously shown that this illusion of ownership plus the proprioceptive displacement also occurs towards a virtual 3D projection of an arm when the appropriate synchronous visuotactile stimulation is provided. Our objective here was to explore whether these illusions (ownership and proprioceptive displacement) can be induced by only synchronous visuomotor stimulation, in the absence of tactile stimulation.
Abstract:
Langevin equations of Ginzburg-Landau form, with multiplicative noise, are proposed to study the effects of fluctuations in domain growth. These equations are derived from a coarse-grained methodology. The Cahn-Hilliard-Cook linear stability analysis predicts some effects in the transient regime. We also derive numerical algorithms for the computer simulation of these equations. The numerical results corroborate the analytical predictions of the linear analysis. We also present simulation results for spinodal decomposition at large times.
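A generic sketch of how such a Langevin equation with multiplicative noise can be integrated numerically (Euler-Maruyama in one dimension); the specific potential, noise coupling and parameters below are illustrative assumptions, not the coarse-grained model derived in the paper.

```python
import numpy as np

def simulate_gl_langevin(n=128, steps=2000, dt=1e-3, dx=1.0,
                         r=-1.0, u=1.0, noise_amp=0.1, seed=0):
    """Euler-Maruyama integration of a 1D Ginzburg-Landau Langevin equation
    with multiplicative noise:
        d(phi)/dt = laplacian(phi) - r*phi - u*phi**3 + noise_amp*phi*eta(t)
    This is a generic example of this class of equation, not the paper's model.
    """
    rng = np.random.default_rng(seed)
    phi = 0.01 * rng.normal(size=n)                # small random initial field
    for _ in range(steps):
        lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2
        drift = lap - r * phi - u * phi**3
        noise = noise_amp * phi * rng.normal(size=n) * np.sqrt(dt)
        phi = phi + drift * dt + noise             # Euler-Maruyama update
    return phi

phi = simulate_gl_langevin()
print(phi.mean(), phi.std())   # domains of the two ordered phases form for r < 0
```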