922 results for multi-column process
Abstract:
Aluminium cells involve a range of complex physical processes which act simultaneously to provide a narrow satisfactory operating range. These processes involve electromagnetic fields coupled with heat transfer and phase change, two-phase fluid flow with a range of complexities, plus the development of stress in the cell structure. All of these phenomena are significantly coupled, and so a comprehensive model of the process must represent them simultaneously. Conventionally, aspects of the process have been modeled separately using uncoupled estimates of the effects of the other phenomena; this has enabled the use of standard commercial CFD and FEA tools. In this paper we will describe an approach to the modeling of aluminium cells which describes all the physics simultaneously. This approach uses a finite volume approximation for each of the phenomena and facilitates their interactions directly in the modeling; the complex geometries involved are addressed by using unstructured meshes. The very challenging issues to be overcome in this venture will be outlined and some preliminary results will be shown.
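A minimal sketch of the simultaneous-coupling idea this abstract outlines: all fields live on the same mesh cell set and are updated within one outer iteration, so each phenomenon sees the others' latest values rather than fixed, uncoupled estimates. The field names, toy update rules and constants below are illustrative assumptions, not the authors' model.

```python
# Hypothetical sketch: one outer loop updates every coupled field per iteration.
import numpy as np

n_cells = 1000                           # cells of the (unstructured) mesh
temperature = np.full(n_cells, 950.0)    # K
current_density = np.full(n_cells, 0.8)  # drives Joule heating (illustrative units)
liquid_fraction = np.ones(n_cells)       # phase-change indicator

def outer_iteration(T, j, fl, relax=0.5):
    """One coupled iteration: each field sees the others' latest values."""
    joule_heat = 0.05 * j**2                              # electromagnetic -> thermal source
    T_new = T + relax * (joule_heat - 0.01 * (T - 940.0)) # relaxed thermal update
    fl_new = np.clip((T_new - 933.0) / 20.0, 0.0, 1.0)    # phase change driven by T
    j_new = j * (0.9 + 0.1 * fl_new)                      # conductivity depends on phase
    return T_new, j_new, fl_new

for it in range(50):
    temperature, current_density, liquid_fraction = outer_iteration(
        temperature, current_density, liquid_fraction)

print(temperature.mean(), liquid_fraction.mean())
```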
Abstract:
As the complexity of parallel applications increases, the performance limitations resulting from computational load imbalance become dominant. Mapping the problem space to the processors in a parallel machine in a manner that balances the workload of each processor will typically reduce the run-time. In many cases the computation time required for a given calculation cannot be predetermined, even at run-time, and so a static partitioning of the problem gives poor performance. For problems in which the computational load across the discretisation is dynamic and inhomogeneous, for example multi-physics problems involving fluid and solid mechanics with phase changes, the workload for a static subdomain will change over the course of a computation and cannot be estimated beforehand. For such applications the mapping of loads to processors must change dynamically at run-time in order to maintain reasonable efficiency. The issues of dynamic load balancing are examined in the context of PHYSICA, a three-dimensional unstructured-mesh multi-physics continuum mechanics computational modelling code.
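As an illustration of the decision such a code has to make at run-time, the sketch below checks whether measured per-processor workloads have drifted far enough from balance to justify the cost of repartitioning and migrating data. The threshold, cost model and function names are assumptions, not PHYSICA's actual scheme.

```python
# Illustrative rebalancing trigger: repartition only when the expected saving
# from restoring balance outweighs the one-off migration cost.
def should_rebalance(workloads, migration_cost, tolerance=0.1):
    """workloads: seconds spent by each processor in the last time step."""
    avg = sum(workloads) / len(workloads)
    imbalance = max(workloads) / avg - 1.0     # 0.0 means perfectly balanced
    expected_saving = max(workloads) - avg     # saving per step if balance is restored
    return imbalance > tolerance and expected_saving > migration_cost

# example: one processor slowed down after a local phase change increased its work
print(should_rebalance([1.0, 1.05, 1.6, 0.98], migration_cost=0.2))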
Abstract:
This paper introduces systems of exchange values as tools for the organization of multi-agent systems. Systems of exchange values are defined on the basis of the theory of social exchanges developed by Piaget and Homans. A model of social organization is proposed, where social relations are construed as social exchanges and exchange values are used to support the continuity of the performance of social exchanges. The dynamics of social organizations is formulated in terms of the regulation of exchanges of values, so that social equilibrium is connected to the continuity of the interactions. The concept of a supervisor of social equilibrium is introduced as a centralized mechanism for solving the problem of the equilibrium of the organization. The equilibrium supervisor solves this problem by making use of a qualitative Markov Decision Process that uses numerical intervals for the representation of exchange values.
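A loose, hypothetical sketch of the interval representation mentioned in the abstract: a pairwise balance of exchange values is kept as a numerical interval, and a supervisor recommends a corrective exchange only when the whole interval lies clearly away from equilibrium. The interval arithmetic, thresholds and action labels are illustrative assumptions, not the paper's qualitative MDP.

```python
# Hypothetical interval-valued exchange balance and a qualitative supervisor rule.
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

def supervisor_action(balance: Interval, tolerance: float = 0.5) -> str:
    """Qualitative decision on one pairwise balance of exchange values."""
    if balance.hi < -tolerance:
        return "recommend that the partner reciprocate"
    if balance.lo > tolerance:
        return "recommend that the agent reciprocate"
    return "no intervention: exchange is near equilibrium"

balance = Interval(-1.2, -0.6) + Interval(0.1, 0.3)   # accumulated exchange values
print(supervisor_action(balance))
```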
Abstract:
LOPES, Jose Soares Batista et al. Application of multivariable control using artificial neural networks in a debutanizer distillation column. In: INTERNATIONAL CONGRESS OF MECHANICAL ENGINEERING - COBEM, 19, 5-9 Nov. 2007, Brasilia. Anais... Brasilia, 2007.
Abstract:
This chapter examines community media projects in Scotland as social processes that nurture knowledge through participation in production. A visual and media anthropology framework (Ginsburg, 2005) with an emphasis on the social context of media production informs the analysis of community media. Drawing on community media projects in the Govan area of Glasgow and the Isle of Bute, the techniques of production foreground “the relational aspects of filmmaking” (Grimshaw and Ravetz, 2005: 7) and act as a catalyst for knowledge and networks of relations embedded in time and place. Community media is defined here as a creative social process, characterised by an approach to production that is multi-authored, collaborative and informed by the lives of participants, and which recognises the relevance of networks of relations to that practice (Caines, 2007: 2). As a networked process, community media production is recognised as existing in collaboration between a director or producer, such as myself, and organisations, institutions and participants, who are connected through a range of identities, practices and place. These relations born of the production process reflect a complex area of practice and participation that brings together “parallel and overlapping public spheres” (Meadows et al., 2002: 3). This relates to broader concerns with networks (Carpentier, Servaes and Lie, 2003; Rodríguez, 2001), both revealed during the process of production and enhanced by it, and how they can be described with reference to the knowledge practice of community media.
Abstract:
The selection of a subset of requirements from among all the requirements previously defined by customers is an important process, repeated at the beginning of each development step when an incremental or agile software development approach is adopted. The set of selected requirements will be developed during the current iteration. This selection problem can be reformulated as a search problem, allowing its treatment with metaheuristic optimization techniques. This paper studies how to apply Ant Colony Optimization algorithms to select requirements. First, we describe the problem formally, extending an earlier version of the problem, and introduce a method based on Ant Colony System to find a variety of efficient solutions. The performance achieved by the Ant Colony System is compared with that of the Greedy Randomized Adaptive Search Procedure and the Non-dominated Sorting Genetic Algorithm, by means of computational experiments carried out on two instances of the problem constructed from data provided by the experts.
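A minimal Ant Colony System sketch in the spirit of this abstract: choose a subset of requirements that maximises customer satisfaction without exceeding a development-effort bound. The instance data, parameter values and single-objective simplification are illustrative assumptions; the paper itself searches for a set of efficient, multi-objective solutions.

```python
# Toy Ant Colony System for requirement selection under an effort limit.
import random

satisfaction = [9, 5, 7, 3, 8, 4]     # value of each requirement to customers (assumed)
effort       = [6, 3, 5, 2, 7, 3]     # estimated development cost (assumed)
effort_limit = 15
n = len(satisfaction)

tau = [1.0] * n                       # pheromone per requirement
beta, rho, q0 = 2.0, 0.1, 0.9         # heuristic weight, evaporation, exploitation prob.
heuristic = [s / e for s, e in zip(satisfaction, effort)]

def build_solution():
    chosen, spent = [], 0
    while True:
        feasible = [r for r in range(n) if r not in chosen
                    and spent + effort[r] <= effort_limit]
        if not feasible:
            return chosen
        scores = [tau[r] * heuristic[r] ** beta for r in feasible]
        if random.random() < q0:                       # exploit the best option
            r = feasible[scores.index(max(scores))]
        else:                                          # biased exploration
            r = random.choices(feasible, weights=scores)[0]
        chosen.append(r)
        spent += effort[r]
        tau[r] = (1 - rho) * tau[r] + rho * 1.0        # local pheromone update

best, best_value = [], 0
for _ in range(100):                                   # iterations
    for _ in range(10):                                # ants per iteration
        sol = build_solution()
        value = sum(satisfaction[r] for r in sol)
        if value > best_value:
            best, best_value = sol, value
    for r in best:                                     # global update on best-so-far
        tau[r] = (1 - rho) * tau[r] + rho * best_value / 10.0

print(sorted(best), best_value)
```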
Abstract:
This research examines the process of placemaking in LeDroit Park, a residential Washington, DC, neighborhood with a historic district at its core. Unpacking the entwined physical and social evolution of the small community within the context of the Nation’s Capital, this analysis provides insight into the role of urban design and development, as well as historic designation, in shaping collective identity. Initially planned and designed in 1873 as a gated suburb just beyond the formal L’Enfant-designed city boundary, LeDroit Park was intended as a retreat for middle- and upper-class European Americans from the growing density and social diversity of the city. With a mixture of large romantic revival mansions and smaller frame cottages set on grassy plots evocative of an idealized rural village, the physical design was intentionally inwardly focused. This feeling of refuge was underscored by a physical fence that surrounded the development, intended to prevent African Americans from nearby Howard University and the surrounding neighborhood from using the community’s private streets to access the City of Washington. Within two decades of its founding, LeDroit Park was incorporated into the District of Columbia, the surrounding fence was demolished, and the neighborhood was racially integrated. Due to increasingly stringent segregation laws and customs in the city, this period of integration lasted less than twenty years, and LeDroit Park developed into an elite African American enclave, using the urban design as a bulwark against the indignities of a segregated city. Throughout the 20th century, housing infill and construction increased density, yet the neighborhood never lost the feeling of security derived from the neighborhood plan. Highlighting the architecture and street design, neighbors successfully obtained historic district designation in 1974 in order to halt campus expansion. After a stalemate that lasted two decades, the neighborhood began another period of transformation, both racial and socio-economic, catalyzed by a multi-pronged investment program led by Howard University. Through interviews with long-term and new community members, this investigation asserts that the 140-year development history, including recent physical interventions, is integral to placemaking, shaping the material character as well as the social identity of residents.
Abstract:
Intelligent agents offer a new and exciting way of understanding the world of work. Agent-Based Simulation (ABS), one way of using intelligent agents, carries great potential for progressing our understanding of management practices and how they link to retail performance. We have developed simulation models based on research by a multi-disciplinary team of economists, work psychologists and computer scientists. We will discuss our experiences of implementing these concepts in work with a well-known retail department store. There is no doubt that management practices are linked to the performance of an organisation (Reynolds et al., 2005; Wall & Wood, 2005). Best practices have been developed, but when it comes down to the actual application of these guidelines, considerable ambiguity remains regarding their effectiveness within particular contexts (Siebers et al., forthcoming a). Most Operational Research (OR) methods can only be used as analysis tools once management practices have been implemented. Often they are not very useful for giving answers to speculative ‘what-if’ questions, particularly when one is interested in the development of the system over time rather than just the state of the system at a certain point in time. Simulation can be used to analyse the operation of dynamic and stochastic systems. ABS is particularly useful when complex interactions between system entities exist, such as autonomous decision making or negotiation. In an ABS model the researcher explicitly describes the decision process of simulated actors at the micro level. Structures emerge at the macro level as a result of the actions of the agents and their interactions with other agents and the environment. We will show how ABS experiments can deal with testing and optimising management practices such as training, empowerment or teamwork. Hence, questions such as “will staff setting their own break times improve performance?” can be investigated.
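A toy agent-based sketch in the spirit of the what-if question quoted above: staff agents serve arriving customers, and a "fixed break" policy is compared with a "self-set break" policy on a simple throughput measure. The behaviour rules, parameters and numbers are invented for illustration and are not the authors' retail model.

```python
# Toy ABS-style what-if experiment: fixed vs self-set break times.
import random

def simulate(self_set_breaks, n_staff=5, hours=8, seed=1):
    rng = random.Random(seed)
    served = 0
    # each agent picks a break hour; the fixed policy forces hour 4 for everyone
    breaks = [rng.randrange(2, 7) if self_set_breaks else 4 for _ in range(n_staff)]
    for hour in range(hours):
        arrivals = rng.randint(15, 25)                 # customers arriving this hour
        on_duty = sum(1 for b in breaks if b != hour)  # agents not on break
        capacity = on_duty * 4                         # each agent serves up to 4/hour
        served += min(arrivals, capacity)
    return served

print("fixed breaks:   ", simulate(self_set_breaks=False))
print("self-set breaks:", simulate(self_set_breaks=True))
```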
Abstract:
When designing systems that are complex, dynamic and stochastic in nature, simulation is generally recognised as one of the best design support technologies, and a valuable aid in the strategic and tactical decision making process. A simulation model consists of a set of rules that define how a system changes over time, given its current state. Unlike analytical models, a simulation model is not solved but is run, and the changes of system states can be observed at any point in time. This provides an insight into system dynamics rather than just predicting the output of a system based on specific inputs. Simulation is not a decision making tool but a decision support tool, allowing better informed decisions to be made. Due to the complexity of the real world, a simulation model can only be an approximation of the target system. The essence of the art of simulation modelling is abstraction and simplification. Only those characteristics that are important for the study and analysis of the target system should be included in the simulation model. The purpose of simulation is either to better understand the operation of a target system, or to make predictions about a target system’s performance. It can be viewed as an artificial white room which allows one to gain insight but also to test new theories and practices without disrupting the daily routine of the focal organisation. What you can expect to gain from a simulation study is very well summarised by FIRMA (2000). The idea is that if the theory that has been framed about the target system holds, and if this theory has been adequately translated into a computer model, this would allow you to answer some of the following questions:
· Which kind of behaviour can be expected under arbitrarily given parameter combinations and initial conditions?
· Which kind of behaviour will a given target system display in the future?
· Which state will the target system reach in the future?
The required accuracy of the simulation model very much depends on the type of question one is trying to answer. In order to be able to respond to the first question the simulation model needs to be an explanatory model. This requires less data accuracy. In comparison, the simulation model required to answer the latter two questions has to be predictive in nature and therefore needs highly accurate input data to achieve credible outputs. These predictions involve showing trends, rather than giving precise and absolute predictions of the target system’s performance. The numerical results of a simulation experiment on their own are most often not very useful and need to be rigorously analysed with statistical methods. These results then need to be considered in the context of the real system and interpreted in a qualitative way to make meaningful recommendations or compile best practice guidelines. One needs a good working knowledge of the behaviour of the real system to be able to fully exploit the understanding gained from simulation experiments. The goal of this chapter is to introduce the newcomer to what we think is a valuable asset in the toolset of analysts and decision makers. We will give you a summary of information we have gathered from the literature and of the first-hand experience we have gained during the last five years, whilst obtaining a better understanding of this exciting technology. We hope that this will help you to avoid some pitfalls that we have unwittingly encountered.
Section 2 is an introduction to the different types of simulation used in Operational Research and Management Science with a clear focus on agent-based simulation. In Section 3 we outline the theoretical background of multi-agent systems and their elements to prepare you for Section 4 where we discuss how to develop a multi-agent simulation model. Section 5 outlines a simple example of a multi-agent system. Section 6 provides a collection of resources for further studies and finally in Section 7 we will conclude the chapter with a short summary.
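A bare-bones illustration of the definition given in this abstract: a simulation model is a rule that maps the current system state to the next state, and the model is "run" by applying that rule repeatedly while observing the state trajectory. The queueing example and all numbers are invented for illustration.

```python
# Minimal "run and observe" simulation loop over a stochastic state-update rule.
import random

def step(state, rng):
    """One time step: stochastic arrivals, fixed service capacity of 3 per step."""
    arrivals = rng.randint(0, 5)
    queue = max(0, state["queue"] + arrivals - 3)
    return {"time": state["time"] + 1, "queue": queue}

rng = random.Random(0)
state = {"time": 0, "queue": 0}
for _ in range(10):                 # run the model and observe the state over time
    state = step(state, rng)
    print(state)
```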
Abstract:
Call Level Interfaces (CLI) play a key role in the business tiers of relational and some NoSQL database applications whenever fine-tuned control between application tiers and the host databases is a key requirement. Unfortunately, in spite of this significant advantage, CLI are low-level APIs and therefore do not address high-level architectural requirements. Among the examples we emphasize two situations: a) the need to decouple, or not, the development process of business tiers from the development process of application tiers, and b) the need to automatically adapt business tiers to new business and/or security needs at runtime. To tackle these CLI drawbacks, and simultaneously keep their advantages, this paper proposes an architecture relying on CLI from which multi-purpose business tier components are built, herein referred to as Adaptable Business Tier Components (ABTC). Beyond the reference architecture, this paper presents a proof of concept based on Java and Java Database Connectivity (an example of a CLI).
Abstract:
Call Level Interfaces (CLI) are low-level APIs that play a key role in database applications whenever fine-tuned control between application tiers and the host databases is a key requirement. Unfortunately, in spite of this significant advantage, CLI were not designed to address organizational requirements and contextual runtime requirements. Among the examples we emphasize the need to decouple, or not, the development process of business tiers from the development process of application tiers, and also the need to automatically adapt to new business and/or security needs at runtime. To tackle these CLI drawbacks, and simultaneously keep their advantages, this paper proposes an architecture relying on CLI from which multi-purpose business tier components are built, herein referred to as Adaptable Business Tier Components (ABTC). This paper presents the reference architecture for those components and a proof of concept based on Java and Java Database Connectivity (an example of a CLI).
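The two abstracts above build their proof of concept on Java and JDBC; as a language-neutral illustration of the same idea, the sketch below wraps Python's DB-API (an analogous call-level interface, here via sqlite3) in a small component whose allowed operations can be changed at runtime. The class name, policy mechanism and schema are illustrative assumptions, not the ABTC architecture itself.

```python
# Illustrative business-tier component over a low-level CLI, adaptable at runtime.
import sqlite3

class AdaptableBusinessTierComponent:
    """Exposes a narrow, policy-controlled surface over a low-level CLI."""
    def __init__(self, conn, allowed_ops=("select",)):
        self.conn = conn
        self.allowed_ops = set(allowed_ops)      # can be adapted at runtime

    def adapt(self, allowed_ops):                # e.g. react to new security needs
        self.allowed_ops = set(allowed_ops)

    def list_customers(self):
        if "select" not in self.allowed_ops:
            raise PermissionError("select disabled by current policy")
        return self.conn.execute("SELECT id, name FROM customer").fetchall()

    def add_customer(self, name):
        if "insert" not in self.allowed_ops:
            raise PermissionError("insert disabled by current policy")
        self.conn.execute("INSERT INTO customer(name) VALUES (?)", (name,))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer(id INTEGER PRIMARY KEY, name TEXT)")
abtc = AdaptableBusinessTierComponent(conn)
abtc.adapt({"select", "insert"})                 # runtime adaptation of the policy
abtc.add_customer("Alice")
print(abtc.list_customers())
```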
Abstract:
This work describes preliminary results of a two-modality imaging system aimed at the early detection of breast cancer. The first technique is based on compounding conventional echographic images taken at regular angular intervals around the imaged breast. The other modality obtains tomographic images of propagation velocity using the same circular geometry. For this study, a low-cost prototype has been built. It is based on a pair of opposed 128-element, 3.2 MHz array transducers that are mechanically moved around tissue-mimicking phantoms. Compounding images over 360 degrees provides improved resolution, clutter reduction and artifact suppression, and reinforces the visualization of internal structures. However, refraction at the skin interface must be corrected for an accurate image compounding process. This is achieved by estimating the interface geometry and then computing the internal ray paths. Sound-velocity tomographic images have also been obtained from time-of-flight projections. Two reconstruction methods, Filtered Back Projection (FBP) and 2D Ordered Subset Expectation Maximization (2D OSEM), were used as a first attempt at tomographic reconstruction. These methods yield usable images in short computational times that can serve as initial estimates for subsequent, more complex methods of ultrasound image reconstruction. These images may be effective in differentiating malignant from benign masses and are very promising for breast cancer screening.
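A minimal sketch of the Filtered Back Projection step mentioned in the abstract, applied to a time-of-flight sinogram under a parallel-ray approximation of the circular geometry. The array sizes, ramp filter and toy phantom are illustrative assumptions; the paper's actual processing chain (refraction correction, 2D OSEM, etc.) is not reproduced here.

```python
# Toy forward projection and Filtered Back Projection on a synthetic phantom.
import numpy as np

n_det, n_ang = 128, 180                          # detector elements, view angles
angles = np.deg2rad(np.arange(n_ang))
img_size = n_det
xs = np.arange(img_size) - img_size / 2 + 0.5
X, Y = np.meshgrid(xs, xs)

# toy "slowness" phantom: a small inclusion inside a background disc
phantom = np.where(X**2 + Y**2 < (0.4 * img_size) ** 2, 1.0, 0.0)
phantom[(X - 10) ** 2 + (Y + 5) ** 2 < 8 ** 2] += 0.5

def project(image):
    """Parallel-ray sums for each angle (a proxy for time-of-flight data)."""
    sino = np.zeros((n_ang, n_det))
    for i, a in enumerate(angles):
        t = X * np.cos(a) + Y * np.sin(a)                  # signed detector coordinate
        bins = np.clip((t + n_det / 2).astype(int), 0, n_det - 1)
        np.add.at(sino[i], bins.ravel(), image.ravel())
    return sino

def fbp(sino):
    """Ramp-filter each projection in the Fourier domain, then back-project."""
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * ramp, axis=1))
    recon = np.zeros_like(X)
    for i, a in enumerate(angles):
        t = X * np.cos(a) + Y * np.sin(a)
        bins = np.clip((t + n_det / 2).astype(int), 0, n_det - 1)
        recon += filtered[i][bins]                         # nearest-neighbour lookup
    return recon * np.pi / n_ang

reconstruction = fbp(project(phantom))
print(reconstruction.shape, reconstruction.max())
```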
Abstract:
Background: The Analytic Hierarchy Process (AHP), developed by Saaty in the late 1970s, is one of the methods for multi-criteria decision making. The AHP disaggregates a complex decision problem into different hierarchical levels. The weights for each criterion and alternative are judged in pairwise comparisons, and priorities are calculated by the eigenvector method. The slowly increasing application of the AHP was the motivation for this study to explore the current state of its methodology in the healthcare context. Methods: A systematic literature review was conducted by searching the Pubmed and Web of Science databases for articles with the following keywords in their titles or abstracts: "Analytic Hierarchy Process," "Analytical Hierarchy Process," "multi-criteria decision analysis," "multiple criteria decision," "stated preference," and "pairwise comparison." In addition, we developed reporting criteria to indicate whether the authors reported important aspects, and evaluated the resulting studies' reporting. Results: The systematic review resulted in 121 articles. The number of studies applying the AHP has increased since 2005. Most studies were from Asia (almost 30%), followed by the US (25.6%). On average, the studies used 19.64 criteria across their hierarchical levels. Furthermore, we restricted a detailed analysis to those articles published within the last 5 years (n = 69). The mean number of participants in these studies was 109, although we identified major differences in how the surveys were conducted. The evaluation of reporting showed that the mean number of reported elements was about 6.75 out of 10. Thus, 12 out of 69 studies reported less than half of the criteria. Conclusion: The AHP has been applied inconsistently in healthcare research. A minority of studies described all the relevant aspects. Thus, the statements in this review may be biased, as they are restricted to the information available in the papers. Hence, further research is required to discover who should be interviewed and how, how inconsistent answers should be dealt with, and how the outcome and stability of the results should be presented. In addition, new insights are needed to determine which target group can best handle the challenges of the AHP.
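A small worked sketch of the AHP priority calculation the abstract refers to: the principal eigenvector of a pairwise comparison matrix gives the criterion weights, and the consistency ratio checks the judgements. The 3x3 comparison matrix below is an invented example, not data from the review.

```python
# AHP priorities via the principal eigenvector, plus Saaty's consistency ratio.
import numpy as np

# pairwise comparisons on Saaty's 1-9 scale (criterion i vs criterion j)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                    # index of the principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                       # normalised priorities

n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)                # consistency index
ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90}[n]     # random index (Saaty)
cr = ci / ri                                   # consistency ratio, acceptable below 0.1

print("priorities:", np.round(weights, 3), "CR:", round(cr, 3))
```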
Abstract:
Part 18: Optimization in Collaborative Networks