932 results for Simulation and Modeling


Relevance:

90.00%

Publisher:

Abstract:

According to self-determination theory, autonomy is a basic universal need that, when supported, allows individuals to function better and to experience greater psychological well-being (e.g., Deci & Ryan, 2008). Autonomy-supportive parenting is characterized by support for the child's self-determined functioning. Its traditional definition includes practices such as offering explanations and choices when making requests, communicating empathy, and encouraging initiative while minimizing the use of controlling language (e.g., Soenens et al., 2007). The benefits of autonomy-supportive parenting have been well documented (e.g., Grolnick, Deci, & Ryan, 1997); however, few studies have focused on toddlers. This thesis therefore aimed to enrich the parenting literature by exploring the supportive practices used by parents of toddlers in a socialization context (Study 1) and by examining the factors that can hinder their use (Study 2). The first study examined a large number of socialization practices that parents who more strongly endorse autonomy support (AS) might use more frequently when making requests of their toddlers. This study allowed us to explore how parents express AS and whether AS in this type of context is associated with greater internalization of rules. Parents (N = 182) of toddlers (mean age = 27.08 months) were asked to report how frequently they use 26 potentially supportive practices when asking their toddlers to complete important but uninteresting tasks, and to report how much they value AS.
Eight practices were identified as supportive: four ways of communicating empathy, giving short explanations, explaining why the task is important, describing the problem in an informative and neutral way, and modeling the desired behavior oneself. Moreover, the set of eight practices correlated positively with toddlers' level of internalization, further suggesting that these practices adequately represent the AS construct. Future studies could attempt to replicate these results in potentially more emotionally charged or upsetting contexts (e.g., responding to misbehavior, or with children with developmental delays). The second study continued the exploration of parental AS by examining the factors that influence how frequently supportive strategies are used in socialization contexts. Since the literature suggests parental stress and difficult toddler temperament (i.e., higher negative affectivity, lower effortful control/self-regulation, lower surgency) as potential risk factors, we explored how these variables were associated with the frequency of use of supportive strategies. The goals of the study were: (1) to examine how toddler temperament and parental stress influenced parental AS, and (2) to verify whether parental stress mediated the possible relation between toddler temperament and parental AS. The same sample of parents was used. Parents answered questions about their child's temperament and their own stress level. Results showed that higher negative affectivity was associated with higher parental stress, which in turn predicted less parental AS.
In addition, parental stress mediated the positive relation between toddler self-regulation and parental AS. Future research could evaluate interventions aimed at helping parents preserve their supportive attitude during more difficult socialization contexts, despite demanding temperamental characteristics of toddlers and the stress they may experience in daily life.
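As an illustrative aside, the mediation question in the abstract above (does parental stress mediate the link between toddler temperament and autonomy support?) is typically tested with regressions: path a (predictor to mediator), path b (mediator to outcome, controlling for the predictor), and the indirect effect a*b. The sketch below uses synthetic data and illustrative variable names; it is not the thesis' actual analysis.

```python
# Regression-based mediation sketch: does parental stress (M) mediate the
# effect of toddler negative affectivity (X) on autonomy support (Y)?
# All data are synthetic and coefficients are illustrative only.

def ols(y, cols):
    """Ordinary least squares via normal equations (Gauss-Jordan solve)."""
    X = [[1.0] + [c[i] for c in cols] for i in range(len(y))]
    k = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(y))) for j in range(k)]
         for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(len(y))) for i in range(k)]
    for i in range(k):
        p = A[i][i]
        A[i] = [v / p for v in A[i]]
        b[i] /= p
        for r in range(k):
            if r != i:
                f = A[r][i]
                A[r] = [A[r][j] - f * A[i][j] for j in range(k)]
                b[r] -= f * b[i]
    return b  # [intercept, slope_1, slope_2, ...]

X = [0.2, 0.5, 0.9, 1.3, 1.8, 2.1, 2.6, 3.0]           # negative affectivity
noise = [0.05, -0.03, 0.04, -0.05, 0.02, -0.04, 0.03, -0.02]
M = [0.4 + 0.8 * x + e for x, e in zip(X, noise)]       # stress rises with X
Y = [5.0 - 0.9 * m + 0.1 * x for m, x in zip(M, X)]     # AS falls with stress

a = ols(M, [X])[1]           # path a: X -> M
b_path = ols(Y, [M, X])[1]   # path b: M -> Y, controlling for X
indirect = a * b_path        # indirect (mediated) effect
print(round(a, 3), round(b_path, 3), round(indirect, 3))
```

In practice the indirect effect would be tested with bootstrapped confidence intervals rather than read off point estimates.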

Relevance:

90.00%

Publisher:

Abstract:

The performance, energy-efficiency and cost improvements due to traditional technology scaling have begun to slow down and present diminishing returns. Underlying reasons for this trend include fundamental physical limits of transistor scaling, the growing significance of quantum effects as transistors shrink, and a growing mismatch between transistors and interconnects in size, speed and power. Continued Moore's Law scaling will not come from technology scaling alone; it must involve improvements to design tools and the development of new disruptive technologies such as 3D integration. 3D integration offers potential improvements to interconnect power and delay by moving the routing problem into the third dimension, and facilitates transistor density scaling independent of technology node. Furthermore, 3D IC technology opens up a new architectural design space of heterogeneously integrated high-bandwidth CPUs. Vertical integration promises to provide the CPU architectures of the future by integrating high-performance processors with on-chip high-bandwidth memory systems and highly connected network-on-chip structures. Such techniques can overcome the well-known CPU performance bottlenecks referred to as the memory and communication walls. However, the promising improvements to performance and energy efficiency offered by 3D CPUs do not come without cost, both in the financial investment required to develop the technology and in the increased complexity of design. Two main limitations of 3D IC technology have been heat removal and TSV reliability. Transistor stacking increases power density, current density and thermal resistance in air-cooled packages. Furthermore, the technology introduces vertical through-silicon vias (TSVs) that create new points of failure in the chip and require the development of new BEOL technologies.
Although these issues can be controlled to some extent using thermal- and reliability-aware physical and architectural 3D design techniques, high-performance embedded cooling schemes, such as micro-fluidic (MF) cooling, are fundamentally necessary to unlock the true potential of 3D ICs. A new paradigm is being put forth which integrates the computational, electrical, physical, thermal and reliability views of a system. The unification of these diverse aspects of integrated circuits is called Co-Design. Independent design and optimization of each aspect leads to sub-optimal designs due to a lack of understanding of cross-domain interactions and their impact on the feasible region of the architectural design space. Co-Design enables optimization across layers with a multi-domain view and thus unlocks new high-performance and energy-efficient configurations. Although the co-design paradigm is becoming increasingly necessary in all fields of IC design, it is even more critical in 3D ICs where, as we show, the inter-layer coupling and higher degree of connectivity between components exacerbate the interdependence between architectural parameters, physical design parameters, and the multitude of metrics of interest to the designer (i.e., power, performance, temperature and reliability). In this dissertation we present a framework for multi-domain co-simulation and co-optimization of 3D CPU architectures with both air and MF cooling solutions. Finally, we propose an approach for design space exploration and modeling within the new Co-Design paradigm, and discuss possible avenues for future improvement of this work.
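The core Co-Design argument above, that a thermal or power constraint can prune the feasible architectural design space, can be illustrated with a toy design-space sweep. Every model and constant below is invented for illustration (not calibrated to any real technology or to this dissertation's framework): performance and power grow with layer count and frequency, and micro-fluidic cooling lowers the thermal resistance.

```python
# Toy multi-domain design-space exploration in the spirit of Co-Design:
# performance, power and temperature are evaluated jointly so that
# non-performance domains prune architectural options. Illustrative only.

from itertools import product

def evaluate(layers, freq_ghz, mf_cooling):
    perf = layers * freq_ghz * 1.8                        # toy performance
    power = layers * freq_ghz ** 2 * 2.0                  # dynamic power ~ f^2
    r_th = 0.08 * layers if mf_cooling else 0.4 * layers  # K/W per stack
    temp = 45.0 + power * r_th                            # ambient + self-heat
    return perf, power, temp

P_MAX, T_MAX = 60.0, 95.0   # illustrative power budget and junction limit
best = None
for layers, freq, mf in product([1, 2, 4], [1.0, 2.0, 3.0], [False, True]):
    perf, power, temp = evaluate(layers, freq, mf)
    if power > P_MAX or temp > T_MAX:
        continue  # infeasible: pruned by the thermal or power domain
    if best is None or perf > best[0]:
        best = (perf, layers, freq, mf)

print(best)  # the best feasible point needs MF cooling to be reachable
```

In this toy sweep the highest-performance feasible configuration (4 layers at 2 GHz) is only thermally feasible with MF cooling, which is the qualitative point the dissertation makes about cross-domain coupling.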

Relevance:

90.00%

Publisher:

Abstract:

Part 6: Engineering and Implementation of Collaborative Networks

Relevance:

90.00%

Publisher:

Abstract:

This dissertation focuses on gaining understanding of cell migration and collective behavior through a combination of experiment, analysis, and modeling techniques. Cell migration is a ubiquitous process that plays an important role during embryonic development and wound healing as well as in diseases like cancer, which is a particular focus of this work. As cancer cells become increasingly malignant, they acquire the ability to migrate away from the primary tumor and spread throughout the body to form metastatic tumors. During this process, changes in gene expression and the surrounding tumor environment can lead to changes in cell migration characteristics. In this thesis, I analyze how cells are guided by the texture of their environment and how cells cooperate with their neighbors to move collectively. The emergent properties of collectively moving groups are a particular focus of this work as collective cell dynamics are known to change in diseases such as cancer. The internal machinery for cell migration involves polymerization of the actin cytoskeleton to create protrusions that---in coordination with retraction of the rear of the cell---lead to cell motion. This actin machinery has been previously shown to respond to the topography of the surrounding surface, leading to guided migration of amoeboid cells. Here we show that epithelial cells on nanoscale ridge structures also show changes in the morphology of their cytoskeletons; actin is found to align with the ridge structures. The migration of the cells is also guided preferentially along the ridge length. These ridge structures are on length scales similar to those found in tumor microenvironments and as such provide a system for studying the response of the cells' internal migration machinery to physiologically relevant topographical cues. In addition to sensing surface topography, individual cells can also be influenced by the pushing and pulling of neighboring cells. 
The emergent properties of collectively migrating cells show interesting dynamics and are relevant for cancer progression, but have been less studied than the motion of individual cells. We use Particle Image Velocimetry (PIV) to extract the motion of a collectively migrating cell sheet from time-lapse images. The resulting flow fields allow us to analyze collective behavior over multiple length and time scales. To analyze the connection between individual cell properties and collective migration behavior, we compare experimental flow fields with the migration of simulated cell groups. Our collective migration metrics allow for a quantitative comparison between experimental and simulated results. This comparison shows that tissue-scale decreases in collective behavior can result from changes in individual cell activity without the need to postulate the existence of subpopulations of leader cells or global gradients. In addition to tissue-scale trends in collective behavior, the migration of cell groups includes localized dynamic features such as cell rearrangements. An individual cell may smoothly follow the motion of its neighbors (affine motion) or move in a more individualistic manner (non-affine motion). By decomposing individual motion into both affine and non-affine components, we measure cell rearrangements within a collective sheet. Finally, finite-time Lyapunov exponent (FTLE) values capture the stretching of the flow field and reflect its chaotic character. Applying collective migration analysis techniques to experimental data on both malignant and non-malignant human breast epithelial cells reveals differences in collective behavior that are not found from analyzing migration speeds alone. Non-malignant cells show increased cooperative motion on long time scales, whereas malignant cells remain uncooperative as time progresses.
Combining multiple analysis techniques also shows that these two cell types differ in their response to a perturbation of cell-cell adhesion through the molecule E-cadherin. Non-malignant MCF10A cells use E-cadherin for short time coordination of collective motion, yet even with decreased E-cadherin expression, the cells remain coordinated over long time scales. In contrast, the migration behavior of malignant and invasive MCF10CA1a cells, which already shows decreased collective dynamics on both time scales, is insensitive to the change in E-cadherin expression.
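The affine/non-affine decomposition described above can be sketched concretely. In the spirit of the Falk-Langer D²_min measure (a standard choice for this decomposition, though the abstract does not say which variant the thesis uses), one fits the best affine transform mapping a cell's neighbor offsets at one time to their offsets a moment later; the least-squares residual is the non-affine component. The neighbor coordinates below are synthetic.

```python
# Affine / non-affine decomposition of neighbour motion (D^2_min style):
# fit F minimizing sum |r'_i - F r_i|^2 over a cell's neighbour offsets,
# via F = (sum r' r^T)(sum r r^T)^-1; the residual measures rearrangement.

def d2_min(offsets_t, offsets_dt):
    X = [[0.0, 0.0], [0.0, 0.0]]  # sum of r' r^T
    Y = [[0.0, 0.0], [0.0, 0.0]]  # sum of r  r^T
    for (x, y), (xp, yp) in zip(offsets_t, offsets_dt):
        for i, a in enumerate((xp, yp)):
            for j, b in enumerate((x, y)):
                X[i][j] += a * b
        for i, a in enumerate((x, y)):
            for j, b in enumerate((x, y)):
                Y[i][j] += a * b
    det = Y[0][0] * Y[1][1] - Y[0][1] * Y[1][0]
    Yinv = [[Y[1][1] / det, -Y[0][1] / det],
            [-Y[1][0] / det, Y[0][0] / det]]
    F = [[sum(X[i][k] * Yinv[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    resid = 0.0
    for (x, y), (xp, yp) in zip(offsets_t, offsets_dt):
        rx = xp - (F[0][0] * x + F[0][1] * y)
        ry = yp - (F[1][0] * x + F[1][1] * y)
        resid += rx * rx + ry * ry
    return F, resid

# Neighbours of one cell: a pure 10% stretch along x is perfectly affine...
neigh = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0), (0.7, 0.7)]
affine_later = [(1.1 * x, y) for x, y in neigh]
F, d2 = d2_min(neigh, affine_later)
print(d2)  # ~0: smooth, collective motion

# ...while one neighbour slipping out of line registers as non-affine.
slipped = list(affine_later)
slipped[4] = (0.4, 1.0)
F2, d2_slip = d2_min(neigh, slipped)
print(d2_slip)
```

A zero residual means the cell's neighborhood deforms smoothly as a group; a large residual flags a local rearrangement.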

Relevance:

90.00%

Publisher:

Abstract:

As unmanned autonomous vehicles (UAVs) are being widely utilized in military and civil applications, concerns are growing about mission safety and how to integrate different phases of mission design. One important barrier to a cost-effective and timely safety certification process for UAVs is the lack of a systematic approach for bridging the gap between understanding high-level commander/pilot intent and implementing that intent through low-level UAV behaviors. In this thesis we demonstrate an entire systems design process for a representative UAV mission, beginning from an operational concept and requirements and ending with a simulation framework for segments of the mission design, such as path planning and decision making in collision avoidance. We divided this complex system into sub-systems: path planning, collision detection, and collision avoidance. We then developed software modules for each sub-system.
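As a sketch of the kind of module the path-planning sub-system above might contain (the thesis' actual modules are not described here), the following implements A* search on a small occupancy grid with no-fly cells. The grid, step costs, and Manhattan heuristic are illustrative assumptions.

```python
# Minimal grid-based path planner: A* with a Manhattan-distance heuristic.
# 1-cells are no-fly / obstacle cells; moves are 4-connected, unit cost.

import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, None)]
    came, g_best = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came:
            continue          # already expanded with a better cost
        came[cur] = parent
        if cur == goal:       # reconstruct path by walking parents back
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g + 1
                if ng < g_best.get(nxt, float("inf")):
                    g_best[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, cur))
    return None  # no feasible path

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],   # 1 = no-fly / obstacle cell
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
path = astar(grid, (0, 0), (3, 3))
print(path)
```

With an admissible heuristic such as Manhattan distance, A* returns a minimum-cost path, which here threads the single gap in the obstacle row.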

Relevance:

90.00%

Publisher:

Abstract:

Steam injection is the most widely used enhanced-recovery method for the extraction of heavy oil. In this type of procedure, gravitational segregation commonly occurs, and since this phenomenon can affect oil production it should be considered in continuous steam injection projects. For many years, gravitational segregation was not adequately considered in Reservoir Engineering calculation procedures. The effect of gravity causes fluids to segregate inside the porous medium according to their densities. Reservoir simulation made it possible to account for gravity, and it became apparent that its effects could significantly affect reservoir performance. It is known that gravitational segregation can occur in almost every case where a light fluid is injected, especially steam, and that it occurs with greater intensity in viscous oil reservoirs. This work discusses the influence of some rock-reservoir parameters on segregation, such as viscosity, permeability, thickness, gas cap, and porosity. Starting from a model that exhibits the phenomenon most strongly, some operational parameters were optimized, such as the steam flow rate, the distance between the injector and producer wells, and the completion interval, which contributed to reducing gravity override and thus increasing oil recovery. The model with a well spacing of 100 m showed the greatest technical-economic viability. The analysis was performed using the CMG simulator (Computer Modelling Group, STARS 2007.11), iterating over the studied variables in heavy oil reservoirs with characteristics similar to those of the Brazilian Northeast.
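The competition the abstract describes, buoyancy pulling steam upward versus the imposed horizontal drive, is often screened with a dimensionless gravity number. The simplified form below (buoyancy-driven Darcy velocity over imposed frontal velocity) and all property values are illustrative assumptions, not the thesis' actual screening criterion.

```python
# Simplified "gravity number" screen for steam override: ratio of the
# buoyancy-driven vertical Darcy velocity to the imposed horizontal
# velocity. Functional form and property values are illustrative only.

G = 9.81  # m/s^2

def gravity_number(k_md, delta_rho, mu_oil_cp, u_ms):
    k = k_md * 9.869e-16   # millidarcy -> m^2
    mu = mu_oil_cp * 1e-3  # cP -> Pa.s
    return k * delta_rho * G / (mu * u_ms)

# Heavy-oil case: high permeability and a large steam/oil density contrast
# push toward segregation; high oil viscosity and injection rate resist it.
base = gravity_number(k_md=1000, delta_rho=900, mu_oil_cp=1000, u_ms=1e-6)
faster = gravity_number(k_md=1000, delta_rho=900, mu_oil_cp=1000, u_ms=5e-6)
print(base, faster)  # raising the injection rate lowers the ratio
```

This is consistent with the abstract's finding that operational parameters such as the steam rate and well spacing can be tuned to reduce gravity override.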

Relevance:

90.00%

Publisher:

Abstract:

One of the main activities in petroleum engineering is estimating the oil production of existing oil reserves. The calculation of these reserves is crucial to determining the economic feasibility of their exploitation. Currently, the petroleum industry faces problems in analyzing production due to the exponentially increasing amount of data provided by production facilities. Conventional reservoir modeling techniques, such as numerical reservoir simulation and visualization, are well developed and available. This work proposes intelligent methods, such as artificial neural networks, to predict oil production, and compares the results with those obtained by numerical simulation, a method widely used in practice for oil production prediction. Artificial neural networks are used because of their learning, adaptation, and interpolation capabilities.
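The "learn production behavior from history" idea above can be shown in miniature. Instead of a neural network, this sketch fits a linear autoregressive model (next month's rate from the two previous rates) by stochastic gradient descent on a synthetic exponential-decline series; the data, learning rate, and model are all illustrative assumptions, not the thesis' method.

```python
# Minimal data-driven production predictor: a linear autoregressive model
# trained by gradient descent on a synthetic monthly decline curve.

rates = [100 * 0.9 ** t for t in range(12)]  # synthetic oil rate, 10%/month

# training pairs: (q[t-2], q[t-1]) -> q[t]
X = [(rates[t - 2], rates[t - 1]) for t in range(2, len(rates))]
y = [rates[t] for t in range(2, len(rates))]

w1 = w2 = 0.0
lr = 1e-5
for _ in range(20000):
    for (a, b), target in zip(X, y):
        err = w1 * a + w2 * b - target
        w1 -= lr * err * a       # gradient step on squared error
        w2 -= lr * err * b

next_rate = w1 * rates[-2] + w2 * rates[-1]   # one-step-ahead forecast
print(round(next_rate, 2))  # close to 100 * 0.9**12 ~ 28.24
```

A neural network generalizes this by adding hidden nonlinear units, which is what gives it the interpolation capability the abstract cites.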

Relevance:

90.00%

Publisher:

Abstract:

The occurrence of heavy oil reservoirs has increased substantially and, due to the high viscosity characteristic of this type of oil, conventional recovery methods cannot be applied. Thermal methods have been studied for the recovery of this type of oil, with the main objective of reducing its viscosity by increasing the reservoir temperature, favoring the mobility of the oil and allowing an increase in the productivity of the fields. In situ combustion (ISC) is a thermal recovery method in which heat is produced inside the reservoir by the combustion of part of the oil with injected oxygen, in contrast with the injection of a fluid heated at the surface, which loses heat along its path to the reservoir. ISC is a favorable method for the recovery of heavy oil, but it is still difficult to implement in the field. The objective of this work was a parametric analysis of the ISC process applied to a semi-synthetic reservoir with characteristics of Brazilian Northeast reservoirs, using vertical production and injection wells and varying the air injection rate and the well completions. For the analysis, a commercial thermal reservoir simulator was used: the Steam, Thermal and Advanced Processes Reservoir Simulator (STARS) from the Computer Modelling Group (CMG). From the results it was possible to analyze the efficiency of the ISC process in heavy oil reservoirs: increasing the reservoir temperature provides a large decrease in oil viscosity, increasing its mobility inside the reservoir, as well as improving the quality of the oil and therefore significantly increasing its recovered fraction. Among the analyzed parameters, the air injection rate had the greatest influence on ISC: the higher the injection rate, the higher the recovery factor, owing to the greater amount of oxygen, which ensures maintenance of the combustion front.
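The heart of the argument above is the strong viscosity-temperature dependence of heavy oil. One common two-parameter correlation is Andrade's equation, mu = A * exp(B / T); the two anchor points used to fit A and B below are assumed values for illustration, not measurements from this work.

```python
# Andrade viscosity-temperature correlation, mu = A * exp(B / T),
# fitted to two assumed data points for a hypothetical heavy crude.

import math

# Assume 10,000 cP at 311 K (~38 C) and 200 cP at 373 K (~100 C).
T1, mu1 = 311.0, 10000.0
T2, mu2 = 373.0, 200.0
B = math.log(mu1 / mu2) / (1 / T1 - 1 / T2)
A = mu1 / math.exp(B / T1)

def viscosity_cp(T_kelvin):
    return A * math.exp(B / T_kelvin)

# Heating the zone ahead of the combustion front to ~450 K makes the oil
# orders of magnitude more mobile:
print(round(viscosity_cp(311.0)), round(viscosity_cp(450.0), 1))
```

The exponential form is why even a moderate temperature rise produces the "large decrease in oil viscosity" the abstract reports.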

Relevance:

90.00%

Publisher:

Abstract:

Many of the hydrocarbon reserves existing in the world are formed by heavy oils (API gravity between 10 and 20). Moreover, several heavy oil fields are mature and thus offer great challenges for the oil industry. Among the thermal methods used to recover these resources, steamflooding has been the main economically viable alternative. Latent heat carried by the steam heats the reservoir, reducing oil viscosity and facilitating production. This method has many variations and has been studied both theoretically and experimentally (in pilot projects and in full field applications). In order to increase oil recovery and reduce steam injection costs, the injection of an alternative fluid has been used in three main ways: alternated with steam, co-injected with steam, and after steam injection interruption. The main objective of these injection schemes is to reduce the amount of heat supplied to the reservoir, using cheaper fluids while maintaining the same oil production levels. This work discusses the use of carbon dioxide, nitrogen, methane, and water as alternative fluids to steam. The analyzed parameters were oil recovery and net cumulative oil production. The reservoir simulation model corresponds to an oil reservoir of 100 m x 100 m x 28 m, on a Cartesian coordinate system (x, y and z directions). It is a semi-synthetic model with some reservoir data similar to those found in the Brazilian Potiguar Basin. All studied cases were run using the STARS simulator from CMG (Computer Modelling Group, version 2009.10). It was found that waterflooding after steam injection interruption achieved the highest net cumulative oil compared with the other fluid injection schemes. Moreover, it was observed that steam and alternative fluids, whether co-injected or alternated, did not increase project profitability compared with steamflooding.

Relevance:

90.00%

Publisher:

Abstract:

Ephemeral Computation (Eph-C) is a newly created computation paradigm whose purpose is to take advantage of the ephemeral nature (limited lifetime) of computational resources. First we discuss this new paradigm in general terms, then more specifically in terms of videogame development. We present possible applications and benefits for the main research fields associated with videogame development. This is a preliminary work that aims to investigate the possibilities of applying ephemeral computation to the products of the videogame industry and, as such, attempts to serve as inspiration for other researchers and videogame developers.

Relevance:

90.00%

Publisher:

Abstract:

Planning, navigation, and search are fundamental human cognitive abilities central to spatial problem solving in search and rescue, law enforcement, and military operations. Despite a wealth of literature concerning naturalistic spatial problem solving in animals, the literature on naturalistic spatial problem solving in humans is comparatively lacking and generally conducted by separate camps among which there is little crosstalk. Addressing this deficiency will allow us to predict spatial decision making in operational environments and to understand the factors leading to those decisions. The present dissertation comprises two related efforts: (1) a set of empirical research studies intended to identify characteristics of planning, execution, and memory in naturalistic spatial problem solving tasks, and (2) a computational modeling effort to develop a model of naturalistic spatial problem solving. The results of the behavioral studies indicate that problem space hierarchical representations are linear in shape, and that human solutions are produced according to multiple optimization criteria. The Mixed Criteria Model presented in this dissertation accounts for global and local human performance in a traditional and naturalistic Traveling Salesman Problem. The results of the empirical and modeling efforts hold implications for basic and applied science in domains such as problem solving, operations research, human-computer interaction, and artificial intelligence.
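Human tours in Traveling Salesman Problem tasks are commonly benchmarked against simple construction heuristics. The sketch below implements the classic nearest-neighbour heuristic (a standard baseline, not the Mixed Criteria Model itself) on a small illustrative set of city coordinates.

```python
# Nearest-neighbour construction heuristic for the Traveling Salesman
# Problem: from the current city, always visit the closest unvisited city.
# City coordinates are illustrative.

import math

def tour_length(order, pts):
    return sum(math.dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbour(pts, start=0):
    unvisited = set(range(len(pts))) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(pts[last], pts[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (2, 0), (2, 2), (0, 2.1), (1, 3)]
tour = nearest_neighbour(cities)
print(tour, round(tour_length(tour, cities), 3))
```

Comparing human tour lengths against such greedy baselines (and against optimal tours) is one way studies quantify how close human solutions come to single-criterion optimality.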

Relevance:

90.00%

Publisher:

Abstract:

The lack of analytical models that can accurately describe large-scale networked systems makes empirical experimentation indispensable for understanding complex behaviors. Research on network testbeds for testing network protocols and distributed services, including physical, emulated, and federated testbeds, has made steady progress. Although the success of these testbeds is undeniable, they fail to provide: 1) scalability, for handling large-scale networks with hundreds or thousands of hosts and routers organized in different scenarios, 2) flexibility, for testing new protocols or applications in diverse settings, and 3) inter-operability, for combining simulated and real network entities in experiments. This dissertation tackles these issues in three different dimensions. First, we present SVEET, a system that enables inter-operability between real and simulated hosts. In order to increase the scalability of networks under study, SVEET enables time-dilated synchronization between real hosts and the discrete-event simulator. Realistic TCP congestion control algorithms are implemented in the simulator to allow seamless interactions between real and simulated hosts. SVEET is validated via extensive experiments and its capabilities are assessed through case studies involving real applications. Second, we present PrimoGENI, a system that allows a distributed discrete-event simulator, running in real-time, to interact with real network entities in a federated environment. PrimoGENI greatly enhances the flexibility of network experiments, through which a great variety of network conditions can be reproduced to examine what-if questions. Furthermore, PrimoGENI performs resource management functions, on behalf of the user, for instantiating network experiments on shared infrastructures. Finally, to further increase the scalability of network testbeds to handle large-scale high-capacity networks, we present a novel symbiotic simulation approach. 
We present SymbioSim, a testbed for large-scale network experimentation where a high-performance simulation system closely cooperates with an emulation system in a mutually beneficial way. On the one hand, the simulation system benefits from incorporating the traffic metadata from real applications in the emulation system to reproduce the realistic traffic conditions. On the other hand, the emulation system benefits from receiving the continuous updates from the simulation system to calibrate the traffic between real applications. Specific techniques that support the symbiotic approach include: 1) a model downscaling scheme that can significantly reduce the complexity of the large-scale simulation model, resulting in an efficient emulation system for modulating the high-capacity network traffic between real applications; 2) a queuing network model for the downscaled emulation system to accurately represent the network effects of the simulated traffic; and 3) techniques for reducing the synchronization overhead between the simulation and emulation systems.
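The queuing network model mentioned above lets a downscaled emulation reproduce the delay that simulated background traffic would impose. As a minimal stand-in for such a model (the dissertation's actual queuing model is surely richer), the M/M/1 mean sojourn time T = 1 / (mu - lambda) shows how background load at a downscaled link translates into delay for the real flows; the rates below are illustrative.

```python
# M/M/1 mean sojourn time as a toy link-delay model for a downscaled
# emulation: delay grows sharply as background (simulated) traffic
# pushes utilization toward 1. Rates are illustrative (packets/second).

def mm1_delay(arrival_rate, service_rate):
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: utilization >= 1")
    return 1.0 / (service_rate - arrival_rate)

light = mm1_delay(arrival_rate=100.0, service_rate=1000.0)
heavy = mm1_delay(arrival_rate=900.0, service_rate=1000.0)
print(light, heavy)  # heavy background load inflates the delay
```

In a symbiotic setup, the simulation side would continuously update the arrival rate so the emulated real applications experience delays consistent with the full-scale simulated traffic.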

Relevance:

90.00%

Publisher:

Abstract:

Developing Cyber-Physical Systems requires methods and tools to support simulation and verification of hybrid (both continuous and discrete) models. The Acumen modeling and simulation language is an open source testbed for exploring the design space of what rigorous-but-practical next-generation tools can deliver to developers of Cyber-Physical Systems. Like verification tools, a design goal for Acumen is to provide rigorous results. Like simulation tools, it aims to be intuitive, practical, and scalable. However, it is far from evident whether these two goals can be achieved simultaneously. This paper explains the primary design goals for Acumen, the core challenges that must be addressed in order to achieve these goals, the "agile research method" taken by the project, the steps taken to realize these goals, the key lessons learned, and the emerging language design.
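A canonical hybrid (continuous + discrete) model of the kind such tools target is the bouncing ball: continuous free-fall dynamics interrupted by a discrete impact event that reverses the velocity. The sketch below is a generic forward-Euler simulation of that model, not Acumen syntax or semantics.

```python
# Hybrid-system toy model: a bouncing ball. Continuous flow (free fall)
# is integrated with forward Euler; a discrete event at ground contact
# resets the state (velocity reversal with energy loss).

def simulate_ball(h0=10.0, v0=0.0, dt=0.001, t_end=5.0, restitution=0.8):
    g = 9.81
    h, v, t = h0, v0, 0.0
    bounces = 0
    while t < t_end:
        h += v * dt          # continuous flow: dh/dt = v
        v += -g * dt         #                  dv/dt = -g
        if h <= 0.0 and v < 0.0:
            h = 0.0              # discrete jump: impact event
            v = -restitution * v
            bounces += 1
        t += dt
    return h, bounces

h, bounces = simulate_ball()
print(round(h, 3), bounces)
```

Fixed-step event detection like this can miss or misplace impacts, which is exactly the kind of rigor gap that motivates combining simulation with verification-style guarantees.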

Relevance:

90.00%

Publisher:

Abstract:

Fluids are important because of their preponderance in our lives. Fluid mechanics touches almost every aspect of our daily lives and plays a central role in many branches of science and technology. It is therefore a challenging and exciting field of scientific activity, owing to the complexity of the subject and the breadth of its applications. The quest for advances in fluid mechanics, as in other scientific fields, emerges from analytical, computational (CFD) and experimental studies. The improvement in our ability to describe, predict and control these phenomena has played, and continues to play, a key role in technological breakthroughs. The present theme issue on "Fluid and Heat Flow: Simulation and Optimization" collects a selection of papers presented at the Special Session "Fluid Flow, Energy Transfer and Design".