994 results for Parallel mechanics
Abstract:
So far in this book, we have seen a large number of methods for generating content for existing games. So, if you have a game already, you could now generate many things for it: maps, levels, terrain, vegetation, weapons, dungeons, racing tracks. But what if you don’t already have a game, and want to generate the game itself? What would you generate, and how? At the heart of any game are its rules. This chapter will discuss representations for game rules of different kinds, along with methods to generate them, and evaluation functions and constraints that help us judge complete games rather than just isolated content artefacts. Our main focus here will be on methods for generating interesting, fun, and/or balanced game rules. However, an important perspective that will permeate the chapter is that game rule encodings and evaluation functions can encode game design expertise and style, and thus help us understand game design. By formalising aspects of the game rules, we define a space of possible rules more precisely than could be done through writing about rules in qualitative terms; and by choosing which aspects of the rules to formalise, we define what aspects of the game are interesting to explore and introduce variation in. In this way, each game generator can be thought of as an executable micro-theory of game design, though often a simplified, and sometimes even a caricatured, one.
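The core idea here, that a rule encoding plus an evaluation function defines a searchable design space, can be illustrated in a few lines. Below is a minimal sketch in Python; the genome layout, the toy evaluate function, and the hill climber are illustrative assumptions of this sketch, not anything taken from the chapter.

import random

# A game's rules reduced to a flat parameter vector, e.g.
# [player_speed, enemy_count, score_to_win, ...]. Purely illustrative.
RuleGenome = list[float]

def mutate(genome: RuleGenome, sigma: float = 0.1) -> RuleGenome:
    """Perturb one randomly chosen rule parameter."""
    child = genome[:]
    i = random.randrange(len(child))
    child[i] += random.gauss(0.0, sigma)
    return child

def evaluate(genome: RuleGenome) -> float:
    """Stand-in evaluation function: a real one would simulate play
    and score balance, challenge, or outcome uncertainty."""
    return -sum((g - 0.5) ** 2 for g in genome)  # toy target

def hill_climb(genome: RuleGenome, steps: int = 1000) -> RuleGenome:
    """Simple stochastic search over the rule space."""
    best, best_fit = genome, evaluate(genome)
    for _ in range(steps):
        candidate = mutate(best)
        fit = evaluate(candidate)
        if fit > best_fit:
            best, best_fit = candidate, fit
    return best

print(hill_climb([random.random() for _ in range(5)]))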
Abstract:
Biological systems are typically complex and adaptive, involving large numbers of entities, or organisms, and many-layered interactions between these. System behaviour evolves over time, and typically benefits from previous experience by retaining memory of previous events. Given the dynamic nature of these phenomena, it is non-trivial to provide a comprehensive description of complex adaptive systems and, in particular, to define the importance and contribution of low-level unsupervised interactions to the overall evolution process. In this chapter, the authors focus on the application of the agent-based paradigm in the context of the immune response to HIV. Explicit implementation of lymph nodes and the associated lymph network, including lymphatic chain structure, is a key objective, and requires parallelisation of the model. Steps taken towards an optimal communication strategy are detailed.
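As an illustration of the decomposition described here, the following sketch (assuming mpi4py; the data layout, the placeholder step logic, and the per-step alltoall exchange are this sketch's assumptions, not the authors' implementation) distributes lymph nodes across MPI ranks and exchanges migrating cells once per step.

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank owns lymph nodes rank, rank+size, rank+2*size, ...
NUM_NODES = 32
my_nodes = {n: [] for n in range(rank, NUM_NODES, size)}  # node -> agent list

def local_step(node, agents):
    """Advance one node's local immune dynamics; return agents leaving
    along the lymph network as (destination_node, agent) pairs.
    Placeholder logic -- the real model is far richer."""
    return []

for t in range(100):
    outgoing = [[] for _ in range(size)]
    for node, agents in my_nodes.items():
        for dest, agent in local_step(node, agents):
            outgoing[dest % size].append((dest, agent))
    # One collective exchange per step keeps communication predictable.
    for batch in comm.alltoall(outgoing):
        for dest, agent in batch:
            my_nodes[dest].append(agent)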
Abstract:
Understanding the dynamics of disease spread is essential in contexts such as estimating load on medical services, as well as risk assessment and intervention policies against large-scale epidemic outbreaks. However, most of the information is available after the outbreak itself, and preemptive assessment is far from trivial. Here, we report on an agent-based model developed to investigate such epidemic events in a stylised urban environment. For most diseases, infection of a new individual may occur from casual contact in crowds as well as from repeated interactions with social partners such as work colleagues or family members. Our model therefore accounts for these two phenomena. Given the scale of the system, efficient parallel computing is required. In this presentation, we focus on aspects related to the parallelisation of large-scale network generation and massively multi-agent simulation.
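A minimal sketch of the two infection routes the model accounts for, casual crowd contact plus repeated contact over a fixed social network, follows; all parameter values and the crude random-graph construction are illustrative assumptions.

import random

N = 10_000
P_CASUAL, P_SOCIAL = 0.002, 0.05   # per-contact infection probabilities
CASUAL_CONTACTS = 10               # random strangers met per step

state = ["S"] * N
state[0] = "I"
# Fixed partners (work colleagues, family): a crude random graph.
partners = [random.sample(range(N), 5) for _ in range(N)]

def step():
    infectious = [i for i, s in enumerate(state) if s == "I"]
    newly_infected = set()
    for i in infectious:
        # Route 1: casual contact in crowds.
        for j in random.sample(range(N), CASUAL_CONTACTS):
            if state[j] == "S" and random.random() < P_CASUAL:
                newly_infected.add(j)
        # Route 2: repeated interactions with fixed social partners.
        for j in partners[i]:
            if state[j] == "S" and random.random() < P_SOCIAL:
                newly_infected.add(j)
    for j in newly_infected:
        state[j] = "I"

for day in range(60):
    step()
print(sum(s == "I" for s in state), "infected after 60 steps")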
Abstract:
As computational models in fields such as medicine and engineering become more refined, their resource requirements increase. Initially, these needs have been met using parallel computing on HPC clusters. However, such systems are often costly and lack flexibility, so HPC users are tempted to move to elastic HPC using cloud services. One difficulty in making this transition is that HPC and cloud systems differ, and performance may vary. The purpose of this study is to evaluate cloud services as a means to minimise both cost and computation time for large-scale simulations, and to identify which system properties have the most significant impact on performance. Our simulation results show that, while virtual CPU (vCPU) performance is satisfactory, network throughput may lead to difficulties.
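A back-of-envelope model, illustrative only and not the study's methodology or numbers, of why network throughput rather than vCPU speed can dominate run time for communication-heavy simulations:

def run_time_hours(n_vcpus, serial_compute_hours, gb_per_step, steps, net_gbps):
    compute = serial_compute_hours / n_vcpus             # perfectly parallel part
    network = steps * gb_per_step * 8 / net_gbps / 3600  # data exchange, in hours
    return compute + network

def cost_usd(n_vcpus, hours, price_per_vcpu_hour=0.05):  # assumed price
    return n_vcpus * hours * price_per_vcpu_hour

# Holding compute fixed, vary only the network: the slow-network case
# adds hours (and cost) that faster vCPUs cannot recover.
for net_gbps in (1, 10, 100):
    t = run_time_hours(64, 512, 2.0, 1000, net_gbps)
    print(f"{net_gbps:>3} Gbit/s network: {t:6.1f} h, ${cost_usd(64, t):7.2f}")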
Abstract:
The research reported here addresses the problem of detecting and tracking independently moving objects from a moving observer in real-time, using corners as object tokens. Corners are detected using the Harris corner detector, and local image-plane constraints are employed to solve the correspondence problem. The approach relaxes the restrictive static-world assumption conventionally made, and is therefore capable of tracking independently moving and deformable objects. Tracking is performed without the use of any 3-dimensional motion model. The technique is novel in that, unlike traditional feature-tracking algorithms where feature detection and tracking are carried out over the entire image-plane, here they are restricted to those areas most likely to contain meaningful image structure. Two distinct types of instantiation regions are identified, these being the “focus-of-expansion” region and “border” regions of the image-plane. The size and location of these regions are defined from a combination of odometry information and a limited knowledge of the operating scenario. The algorithms developed have been tested on real image sequences taken from typical driving scenarios. Implementation of the algorithm using T800 Transputers has shown that near-linear speedups are achievable, and that real-time operation is possible (half-video rate has been achieved using 30 processing elements).
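The key idea, detecting corners only inside the instantiation regions rather than over the whole image plane, can be sketched with OpenCV's Harris detector. The region sizes and thresholds below are arbitrary placeholders; the paper derives them from odometry and scenario knowledge.

import cv2
import numpy as np

def instantiation_mask(h, w, foe=(0.5, 0.5), foe_frac=0.2, border=32):
    """Boolean mask covering the border regions and a window around the
    focus of expansion. Sizes here are placeholder values."""
    mask = np.zeros((h, w), dtype=bool)
    mask[:border, :] = mask[-border:, :] = True    # border regions
    mask[:, :border] = mask[:, -border:] = True
    cy, cx = int(foe[1] * h), int(foe[0] * w)      # FOE region
    r = int(foe_frac * min(h, w))
    mask[max(cy - r, 0):cy + r, max(cx - r, 0):cx + r] = True
    return mask

def corners_in_regions(gray, thresh_frac=0.01):
    """Harris response thresholded, then restricted to the regions."""
    response = cv2.cornerHarris(np.float32(gray), 2, 3, 0.04)
    keep = instantiation_mask(*gray.shape) & (
        response > thresh_frac * response.max())
    return np.argwhere(keep)   # (row, col) corner candidates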
Abstract:
Fast restoration of critical loads and non-black-start generators can significantly reduce the economic losses caused by power system blackouts. In a parallel power system restoration scenario, the sectionalization of restoration subsystems plays a very important role in determining the pickup of critical loads before synchronization. Most existing research focuses mainly on the startup of non-black-start generators. The restoration of critical loads, especially loads with cold load characteristics, has not yet been addressed in optimizing the subsystem divisions. As a result, sectionalized restoration subsystems cannot achieve the best coordination between the pickup of loads and the ramping of generators. In order to generate sectionalizing strategies that consider the pickup of critical loads in parallel power system restoration scenarios, this paper proposes an optimization model that accounts for power system constraints, the characteristics of cold load pickup and the features of generator startup. A bi-level programming approach is employed to solve the proposed sectionalizing model. In the upper level the optimal sectionalizing problem for the restoration subsystems is addressed, while in the lower level the objective is to minimize the outage durations of critical loads. The proposed sectionalizing model has been validated on the New England 39-bus system and the IEEE 118-bus system. Further comparisons with some existing methods are carried out as well.
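A schematic sketch of the bi-level structure on a toy system: the upper level enumerates candidate two-subsystem partitions of the buses, and the lower level scores each partition by critical-load outage duration. The feasibility check and the duration model are crude placeholders for the paper's actual power-system constraints and startup scheduling.

from itertools import combinations

BUSES = range(8)
BLACK_START = {0, 4}               # each island needs a black-start unit
CRITICAL_LOADS = {2: 5.0, 6: 3.0}  # bus -> load weight (placeholder)

def feasible(island_a, island_b):
    """Placeholder feasibility: both islands contain a black-start unit."""
    return bool(island_a & BLACK_START) and bool(island_b & BLACK_START)

def lower_level(island):
    """Stand-in for minimising critical-load outage duration inside one
    island; the real model schedules generator startup and load pickup.
    Crude proxy: larger islands restore their loads more slowly."""
    return sum(w * len(island)
               for bus, w in CRITICAL_LOADS.items() if bus in island)

# Upper level: exhaustive search over partitions (toy scale only).
best = None
for k in range(1, len(BUSES)):
    for subset in combinations(BUSES, k):
        island_a, island_b = set(subset), set(BUSES) - set(subset)
        if not feasible(island_a, island_b):
            continue
        score = lower_level(island_a) + lower_level(island_b)
        if best is None or score < best[0]:
            best = (score, island_a, island_b)
print(best)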
Abstract:
The proliferation of the web presents an unsolved problem of automatically analyzing billions of pages of natural language. We introduce a scalable algorithm that clusters hundreds of millions of web pages into hundreds of thousands of clusters. It does this on a single mid-range machine using efficient algorithms and compressed document representations. It is applied to two web-scale crawls covering tens of terabytes: ClueWeb09 and ClueWeb12 contain 500 and 733 million web pages, respectively, and were clustered into 500,000 to 700,000 clusters. To the best of our knowledge, such fine-grained clustering has not been previously demonstrated. Previous approaches clustered a sample, which limits the maximum number of discoverable clusters. The proposed EM-tree algorithm uses the entire collection in clustering and produces several orders of magnitude more clusters than existing algorithms. Fine-grained clustering is necessary for meaningful clustering in massive collections, where the number of distinct topics grows linearly with collection size. These fine-grained clusters show improved cluster quality when assessed with two novel evaluations using ad hoc search relevance judgments and spam classifications for external validation. These evaluations solve the problem of assessing the quality of clusters where categorical labeling is unavailable or infeasible.
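A much-simplified sketch of the tree-structured idea: recursively splitting the collection with small-k k-means reaches a very large leaf count without ever running flat k-means with k in the hundreds of thousands. This is an illustration only, not the EM-tree algorithm itself, which additionally operates over compressed bit-vector document representations.

import numpy as np
from sklearn.cluster import KMeans

def cluster_tree(X, branch=10, leaf_size=100, depth=0, max_depth=6):
    """Top-down m-way clustering; branch**max_depth leaves are possible,
    so modest parameters already yield very fine-grained clusterings."""
    if len(X) <= leaf_size or depth == max_depth:
        return {"leaf": True, "size": len(X)}
    labels = KMeans(n_clusters=branch, n_init=3).fit_predict(X)
    return {"leaf": False,
            "children": [cluster_tree(X[labels == c], branch, leaf_size,
                                      depth + 1, max_depth)
                         for c in range(branch)]}

tree = cluster_tree(np.random.rand(5000, 16))  # toy data stand-in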
Abstract:
The unsteady incompressible viscous fluid flow between two parallel infinite disks which are located at a distance h(t*) at time t* has been studied. The upper disk moves towards the lower disk with velocity h'(t*). The lower disk is porous and rotates with angular velocity Omega(t*). A magnetic field B(t*) is applied perpendicular to the two disks. It has been found that the governing Navier-Stokes equations reduce to a set of ordinary differential equations if h(t*), Omega(t*) and B(t*) vary with time t* in a particular manner, i.e. h(t*) = H(1 - alpha t*)^(1/2), Omega(t*) = Omega_0 (1 - alpha t*)^(-1), B(t*) = B_0 (1 - alpha t*)^(-1/2). These ordinary differential equations have been solved numerically using a shooting method. For small Reynolds numbers, analytical solutions have been obtained using a regular perturbation technique. The effects of the squeeze Reynolds number, the Hartmann number and the rotation of the disk on the flow pattern, normal force or load, and torque have been studied in detail.
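The shooting method mentioned here can be sketched on a generic two-point boundary value problem rather than the paper's actual similarity equations, using SciPy to integrate the initial value problem and root-find on the guessed initial slope.

from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(t, y):                      # y = [y, y']; toy equation y'' = -9y - t
    return [y[1], -9.0 * y[0] - t]

def residual(slope):
    """Integrate from t = 0 with guessed initial slope y'(0) = slope,
    starting from y(0) = 0; return the miss at the far boundary y(1)."""
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, slope])
    return sol.y[0, -1]

# Root-find on the initial slope so the far boundary condition y(1) = 0 holds.
slope = brentq(residual, -10.0, 10.0)
print("initial slope found by shooting:", slope)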
Diffraction Of Elastic Waves By Two Parallel Rigid Strips Embedded In An Infinite Orthotropic Medium
Abstract:
The elastodynamic response of a pair of parallel rigid strips embedded in an infinite orthotropic medium due to elastic waves incident normally on the strips has been investigated. The mixed boundary value problem has been solved by the Integral Equation method. The normal stress and the vertical displacement have been derived in closed form. Numerical values of stress intensity factors at inner and outer edges of the strips and vertical displacement at points in the plane of the strips for several orthotropic materials have been calculated and plotted graphically to show the effect of material orthotropy.
Abstract:
Due to its remarkable mechanical and biological properties, there is considerable interest in understanding, and replicating, spider silk's stress-processing mechanisms and structure-function relationships. Here, we investigate the role of water in the nanoscale mechanics of the different regions in the spider silk fibre, and their relative contributions to stress processing. We propose that the inner core region, rich in spidroin II, retains water due to its inherent disorder, thereby providing a mechanism to dissipate energy as it breaks a sacrificial amide-water bond and gains order under strain, forming a stronger amide-amide bond. The spidroin I-rich outer core is more ordered under ambient conditions and is inherently stiffer and stronger, yet does not on its own provide high toughness. The markedly different interactions of the two proteins with water, and their distribution across the fibre, produce a stiffness differential and provide a balance between stiffness, strength and toughness under ambient conditions. Under wet conditions, this balance is destroyed as the stiff outer core material reverts to the behaviour of the inner core.
Abstract:
We consider the problem of deciding whether the output of a boolean circuit is determined by a partial assignment to its inputs. This problem is easily shown to be hard, i.e., co-NP-complete. However, many of the consequences of a partial input assignment may be determined in linear time, by iterating the following step: if we know the values of some inputs to a gate, we can deduce the values of some outputs of that gate. This process of iteratively deducing some of the consequences of a partial assignment is called propagation. This paper explores the parallel complexity of propagation, i.e., the complexity of determining whether the output of a given boolean circuit is determined by propagating a given partial input assignment. We give a complete classification of the problem into those cases that are P-complete and those that are unlikely to be P-complete.
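The propagation step defined here is easy to sketch as a fixed-point loop over gates. The circuit encoding below is an illustrative assumption (gates as (kind, inputs) pairs keyed by output wire), and repeated sweeps are used for brevity where a proper worklist would give the linear-time behaviour the abstract cites.

def propagate(gates, assignment):
    """Deduce wire values implied by a partial input assignment by
    applying local gate rules (e.g. an AND with a 0 input outputs 0)."""
    values = dict(assignment)          # wire -> 0/1 for known wires
    changed = True
    while changed:
        changed = False
        for out, (kind, ins) in gates.items():
            if out in values:
                continue
            known = [values[i] for i in ins if i in values]
            if kind == "AND" and (0 in known or len(known) == len(ins)):
                values[out] = 0 if 0 in known else 1
            elif kind == "OR" and (1 in known or len(known) == len(ins)):
                values[out] = 1 if 1 in known else 0
            elif kind == "NOT" and known:
                values[out] = 1 - known[0]
            changed = changed or out in values
    return values

# Example: the output is determined without knowing input "b".
gates = {"w": ("AND", ["a", "b"]), "out": ("OR", ["w", "c"])}
print(propagate(gates, {"a": 0, "c": 0}))  # deduces w=0, out=0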
Abstract:
This paper presents a novel three-dimensional hybrid smoothed finite element method (H-SFEM) for solid mechanics problems. In 3D H-SFEM, the strain field is assumed to be the weighted average of the compatible strains from the finite element method (FEM) and the smoothed strains from the node-based smoothed FEM, with a weighting parameter α built into H-SFEM. By adjusting α, upper and lower bound solutions in the strain energy norm and eigenfrequencies can always be obtained. The optimized α value in 3D H-SFEM using a tetrahedral mesh gives a close-to-exact stiffness of the continuous system, and produces ultra-accurate solutions in terms of displacement, strain energy and eigenfrequencies in both linear and nonlinear problems. A novel domain-based selective scheme is also proposed, leading to a combined selective H-SFEM model that is immune from volumetric locking and hence works well for nearly incompressible materials. With these distinct features, the proposed 3D H-SFEM has great potential for application to solid mechanics problems.
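The strain blending at the heart of H-SFEM can be written down directly. Which of the two fields carries α is a convention assumed in this sketch, and the strain fields are placeholder arrays rather than an actual finite element assembly.

import numpy as np

def hybrid_strain(strain_fem, strain_nsfem, alpha):
    """alpha = 1 recovers standard FEM (overly stiff), alpha = 0 recovers
    the node-based smoothed FEM (overly soft); intermediate alpha trades
    the two off, which is how the upper/lower bounds arise."""
    assert 0.0 <= alpha <= 1.0
    eps_fem = np.asarray(strain_fem, dtype=float)
    eps_ns = np.asarray(strain_nsfem, dtype=float)
    return alpha * eps_fem + (1.0 - alpha) * eps_ns

# Example: blend 6-component (Voigt) strain vectors at one sample point.
print(hybrid_strain([1, 0, 0, 0, 0, 0], [0.8, 0.1, 0, 0, 0, 0], alpha=0.6))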
Abstract:
We propose a new scheme for the use of constraints in setting up classical, Hamiltonian, relativistic, interacting particle theories. We show that it possesses both Poincaré invariance and invariance of world lines. We discuss the transition to the physical phase space and the nonrelativistic limit.
Abstract:
The paper presents two new algorithms for the direct parallel solution of systems of linear equations. The algorithms employ a novel recursive doubling technique to obtain solutions to an nth-order system in n steps with no more than 2n(n − 1) processors. Comparing their performance with the Gaussian elimination algorithm (GE), we show that they are almost 100% faster than the latter. This speedup is achieved by dispensing with all the computation involved in the back-substitution phase of GE. It is also shown that the new algorithms exhibit error characteristics which are superior to GE. An n(n + 1) systolic array structure is proposed for the implementation of the new algorithms. We show that complete solutions can be obtained, through these single-phase solution methods, in 5n − log₂n − 4 computational steps, without the need for intermediate I/O operations.
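The authors' solver is not reproduced here, but the recursive doubling idea it employs can be illustrated on the simpler problem of a first-order linear recurrence, where log₂(n) sweeps of affine-map composition replace an n-step sequential evaluation.

import numpy as np

def recursive_doubling(a, b):
    """Return x with x[i] = a[i]*x[i-1] + b[i] and x[-1] = 0, computed by
    recursive doubling: each sweep composes every position's affine map
    with the one `step` positions back, doubling `step` each time. On a
    parallel machine every position is updated simultaneously per sweep."""
    a, b = np.array(a, float), np.array(b, float)
    step = 1
    while step < len(a):
        # Maps below index 0 are the identity (a=1, b=0).
        a_prev = np.concatenate([np.ones(step), a[:-step]])
        b_prev = np.concatenate([np.zeros(step), b[:-step]])
        a, b = a * a_prev, a * b_prev + b
        step *= 2
    return b   # after full doubling, x[i] equals the composed offset b[i]

a, b = [2.0, 3.0, 0.5, 1.0], [1.0, 1.0, 2.0, 0.0]
print(recursive_doubling(a, b))   # [1, 4, 4, 4], matching the recurrence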