922 results for multi-column process
Abstract:
Since its introduction in 1993, the Message Passing Interface (MPI) has become a de facto standard for writing High Performance Computing (HPC) applications on clusters and Massively Parallel Processors (MPPs). The recent emergence of multi-core processor systems presents a new challenge for established parallel programming paradigms, including those based on MPI. This paper presents a new Java messaging system called MPJ Express. Using this system, we exploit multiple levels of parallelism - messaging and threading - to improve application performance on multi-core processors. We refer to our approach as nested parallelism. This MPI-like Java library can support nested parallelism by using Java or Java OpenMP (JOMP) threads within an MPJ Express process. The practicality of this approach is assessed by porting to Java a massively parallel cosmological structure-formation code called Gadget-2. We introduce nested parallelism in the Java version of the simulation code and report good speed-ups. To the best of our knowledge, this is the first time this kind of hybrid parallelism has been demonstrated in a high performance Java application.
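The nested scheme described above (message passing between MPJ Express processes, with threads inside each process to occupy the cores) can be sketched roughly as follows. This is a minimal illustration assuming the standard MPJ Express entry points (MPI.Init, MPI.COMM_WORLD.Rank()/Size(), MPI.Finalize); the thread pool and loop body are placeholders, not the actual Gadget-2 port.

```java
import mpi.MPI;

public class NestedParallelismSketch {
    public static void main(String[] args) throws Exception {
        // One MPJ Express process per node (or per socket)...
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.Rank();
        int size = MPI.COMM_WORLD.Size();

        // ...with a pool of Java threads inside each process to
        // exploit the cores of a multi-core processor.
        int nThreads = Runtime.getRuntime().availableProcessors();
        Thread[] workers = new Thread[nThreads];
        for (int t = 0; t < nThreads; t++) {
            final int tid = t;
            workers[t] = new Thread(() -> {
                // Placeholder for this thread's share of the work
                // (e.g. a slice of a force-computation loop).
                System.out.println("process " + rank + "/" + size
                        + ", thread " + tid + " working");
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();

        MPI.Finalize();
    }
}
```

Inter-process communication (sends, receives, collectives) would sit between the thread phases, exactly as in a hybrid MPI+OpenMP code in C or Fortran.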
Abstract:
In industrial practice, constrained steady-state optimisation and predictive control are separate, albeit closely related, functions within the control hierarchy. This paper presents a method which integrates predictive control with on-line optimisation with economic objectives. A receding horizon optimal control problem is formulated using linear state-space models. This optimal control problem is very similar to the one presented in many predictive control formulations, but the main difference is that its formulation includes a general steady-state objective that depends on the magnitudes of the manipulated and measured output variables. This steady-state objective may include the standard quadratic regulatory objective together with economic objectives, which are often linear. Assuming that the system settles to a steady-state operating point under receding horizon control, conditions are given for the satisfaction of the necessary optimality conditions of the steady-state optimisation problem. The method is based on adaptive linear state-space models, which are obtained by using on-line identification techniques. The use of model adaptation is justified from a theoretical standpoint and its beneficial effects are shown in simulations. The method is tested with simulations of an industrial distillation column and a system of chemical reactors.
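In outline, the kind of formulation described (a receding horizon cost augmented with a steady-state objective) can be written generically as below. This is an illustrative sketch, not the paper's exact formulation; the weights Q, R and the function f_ss stand in for its specific regulatory and economic terms.

```latex
\min_{u_k,\ldots,u_{k+N-1}}\;
  \sum_{i=1}^{N}\|y_{k+i}-r\|_Q^2
  \;+\; \sum_{i=0}^{N-1}\|\Delta u_{k+i}\|_R^2
  \;+\; f_{ss}(u_{ss},\,y_{ss})
\qquad
\text{s.t.}\;\; x_{j+1}=Ax_j+Bu_j,\quad y_j=Cx_j,
```

where (u_ss, y_ss) is the steady state to which the closed-loop system settles and f_ss combines quadratic regulatory terms with the (often linear) economic objective evaluated at that settling point.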
Abstract:
The foundation construction process is a key element of successful construction engineering. Among the many deep-excavation construction methods, the diaphragm wall method is used more frequently in Taiwan than anywhere else in the world. Traditionally, the sequencing of diaphragm wall unit construction activities is established phase by phase using heuristics. However, this creates conflicts between the final phase of the engineering work and the unit construction, and adversely affects the planned construction time. To avoid this situation, we apply management science to diaphragm wall unit construction and formulate the sequencing task as a multi-objective combinatorial optimization problem. Because the mathematical model of the problem is multi-objective and combinatorially explosive (it belongs to the class of NP-complete problems), we propose a type-2 Self-Learning Neural Network (SLNN) to solve the sequencing problem for N = 12, 24 and 36 diaphragm wall units. To assess the reliability of the results, this study compares the SLNN against a random search method. The tests show that the SLNN is superior to random search in both solution quality and solving efficiency.
Abstract:
The following paper builds on ongoing discussions of the spatial and territorial turns in planning as they relate to the dynamics of evidence-based planning and knowledge production in the policy process. It brings this knowledge perspective to the organizational and institutional dynamics of the transformational challenges implicit in the recent enlargement of the EU. It thus explores the development of new spatial ideas and planning approaches, and their potential to shape or ‘frame’ spatial policy through the formulation of new institutional arrangements and the de-institutionalization of others; that is, it examines how knowledge is created, contested, mobilized and controlled across governance architectures or territorial knowledge channels. In so doing, the paper elaborates and discusses a theoretical framework through which the interplay of knowledge and policymaking can be conceptualized and analyzed.
Abstract:
High rates of nutrient loading from agricultural and urban development have resulted in surface water eutrophication and groundwater contamination in regions of Ontario. In Lake Simcoe (Ontario, Canada), anthropogenic nutrient inputs have contributed to increased algal growth, low hypolimnetic oxygen concentrations, and impaired fish reproduction. An ambitious programme has been initiated to reduce phosphorus loads to the lake, aiming to achieve at least a 40% reduction by 2045. Achieving this target necessitates effective remediation strategies, which will rely upon an improved understanding both of the controls on nutrient export from tributaries of Lake Simcoe and of the importance of phosphorus cycling within the lake. In this paper, we describe a new model structure for the integrated dynamic and process-based model INCA-P, which allows fully-distributed applications suited to branched river networks. We demonstrate the application of this model to the Black River, a tributary of Lake Simcoe, and use INCA-P to simulate the fluxes of P entering the lake system, apportion phosphorus among different sources in the catchment, and explore future scenarios of land-use change and nutrient management to identify high-priority sites for the implementation of watershed best management practices.
Abstract:
Literacy as a social practice is integrally linked with social, economic and political institutions and processes. As such, it has a material base which is fundamentally constituted in power relations. Literacy is therefore interwoven with the text and context of everyday living, in which multi-levelled meanings are organically produced at both the individual and the societal level. This paper argues that if language thus mediates social reality, then literacy defined as a social practice cannot really be addressed as a reified, neutral activity; rather, it should take account of the social, cultural and political processes in which literacy practices are embedded. Drawing on the work of key writers within the field, the paper foregrounds the primary role of the state in defining the forms and levels of literacy required and made available at particular moments within society. In a case study of the social construction of literacy meanings in pre-revolutionary Iran, it explores the view that the discourse about societal literacy levels has historically constituted a key terrain in which the struggle for control over meaning has taken place. This struggle, it is argued, sets the state's interest in maintaining ideological and political control over the production of knowledge within the culture and society against the needs identified by the individual for personal development, empowerment and liberation. Overall, the paper examines existing theoretical perspectives on societal literacy programmes in terms of the scope they provide for analyses that encompass the multi-levelled power relations shaping and influencing dominant discourses on the relative value of literacy for both the individual and society.
Abstract:
Cloud imagery is not currently used in numerical weather prediction (NWP) to extract the type of dynamical information that experienced forecasters have extracted subjectively for many years. For example, rapidly developing mid-latitude cyclones have characteristic signatures in cloud imagery that are most fully appreciated from a sequence of images rather than from a single image. The Met Office is currently developing a technique to extract dynamical development information from satellite imagery using its full incremental 4D-Var (four-dimensional variational data assimilation) system. We investigate a simplified form of this technique in a fully nonlinear framework. We convert information on the vertical wind field, w(z), and profiles of temperature, T(z, t), and total water content, qt(z, t), as functions of height, z, and time, t, to a single brightness temperature by defining a 2D (vertical and time) variational assimilation testbed. The profiles of w, T and qt are updated using a simple vertical advection scheme. We define a basic cloud scheme to obtain the fractional cloud amount and, combining this with the temperature field, convert the information into a brightness temperature via a simple radiative transfer scheme developed for the purpose. With the exception of some matrix inversion routines, all our code is developed from scratch. Throughout the development process we test all aspects of our 2D assimilation system, and then run identical twin experiments to try to recover information on the vertical velocity from a sequence of observations of brightness temperature. This thesis contains a comprehensive description of our nonlinear models and assimilation system, and the first experimental results.
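The variational testbed described minimises, over the assimilation window, the standard weighted least-squares cost of 4D-Var; in generic form (illustrative notation, not the thesis's own):

```latex
J(\mathbf{x}_0) =
  \tfrac{1}{2}\,(\mathbf{x}_0-\mathbf{x}_b)^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}_0-\mathbf{x}_b)
  + \tfrac{1}{2}\sum_{i=0}^{n}
    \big(H_i(\mathbf{x}_i)-\mathbf{y}_i\big)^{\mathrm{T}}\mathbf{R}_i^{-1}\big(H_i(\mathbf{x}_i)-\mathbf{y}_i\big),
```

where x_b is the background state, B and R_i are the background- and observation-error covariances, x_i is the state propagated to observation time i, and H_i is the observation operator: here the cloud scheme plus radiative transfer scheme mapping w, T and qt to a brightness temperature.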
Abstract:
Business-IT alignment is increasingly acknowledged as a key to organisational performance. However, alignment research lacks mechanisms that enable an ongoing alignment process with multi-level effects. Multi-level learning allows ongoing effectiveness through the development of the organisation and improved quality of business and IT strategies. In particular, exploration and exploitation enable an effective alignment process across dynamic, multi-level learning. Hence, this paper proposes a conceptual framework that links multi-level learning and business-IT strategy through the concepts of exploration and exploitation, considering short-term and long-term alignment together to address the challenges of strategic alignment faced in sustaining organisational performance.
Abstract:
In order to shed light on the collective behavior of social insects, we analyzed the behavior of ants from a single individual up to multiple individuals. In an experimental set-up, ants are placed in a hemispherical arena with no nest or food, and their trajectories are recorded. From this bottom-up approach, we found the following collective behaviors: 1. The activity of a single ant increases and decreases periodically. 2. A spontaneous meeting process is observed between two ants, and the meeting spot of the two ants is localized within the hemisphere. 3. A division of labor emerges between two ants.
Abstract:
A new model has been developed for assessing multiple sources of nitrogen in catchments. The model (INCA) is process based and uses reaction kinetic equations to simulate the principal mechanisms operating. The model allows for plant uptake and surface and sub-surface pathways, and can simulate up to six land uses simultaneously. It can be applied to a catchment as a semi-distributed simulation and has an inbuilt multi-reach structure for river systems. Sources of nitrogen can be atmospheric deposition, the terrestrial environment (e.g. agriculture, leakage from forest systems, etc.), urban areas, or direct discharges via sewage or intensive farm units. The model runs at a daily time step and can provide information as time series at key sites, as profiles down river systems, or as statistical distributions. The process model is described here; in a companion paper the model is applied to the River Tywi catchment in South Wales and the Great Ouse in Bedfordshire.
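As an illustration of the reaction-kinetic form such a model takes (a generic first-order sketch, not INCA's published equation set), the nitrate store N in one land-use compartment might be written as:

```latex
\frac{dN}{dt} = D_{atm} + k_{nit}\,A \;-\; k_{den}\,N \;-\; U_{plant}(t) \;-\; \frac{Q(t)}{V}\,N,
```

where D_atm is atmospheric deposition, A is the ammonium store feeding nitrification (rate k_nit), k_den is the denitrification rate, U_plant(t) is seasonal plant uptake, and Q(t)/V represents hydrological flushing through the soil store. One such balance per process, land use and reach, integrated daily, yields the time series and downstream profiles the abstract describes.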
Abstract:
A general flow process for the multi-step assembly of peptides has been developed. This procedure has been used to construct a series of Boc-, Cbz- and Fmoc-N-protected dipeptides in excellent yields and purities, and the method has been extended to enable the preparation of a tripeptide derivative.
Abstract:
Purpose – Multinationals have always needed an operating model that works – an effective plan for executing their most important activities at the right levels of their organization, whether globally, regionally or locally. The choices involved in these decisions have never been obvious, since international firms have consistently faced trade‐offs between tailoring approaches for diverse local markets and leveraging their global scale. This paper seeks a more in‐depth understanding of how successful firms manage the global‐local trade‐off in a multipolar world. Design/methodology/approach – This paper utilizes a case study approach based on in‐depth senior executive interviews at several telecommunications companies including Tata Communications. The interviews probed the operating models of the companies we studied, focusing on their approaches to organization structure, management processes, management technologies (including information technology (IT)) and people/talent. Findings – Successful companies balance global‐local trade‐offs by taking a flexible and tailored approach toward their operating‐model decisions. The paper finds that successful companies, including Tata Communications, which is profiled in‐depth, are breaking up the global‐local conundrum into a set of more manageable strategic problems – what the authors call “pressure points” – which they identify by assessing their most important activities and capabilities and determining the global and local challenges associated with them. They then design a different operating model solution for each pressure point, and repeat this process as new strategic developments emerge. By doing so they not only enhance their agility, but they also continually calibrate that crucial balance between global efficiency and local responsiveness. Originality/value – This paper takes a unique approach to operating model design, finding that an operating model is better viewed as several distinct solutions to specific “pressure points” rather than a single and inflexible model that addresses all challenges equally. Now more than ever, developing the right operating model is at the top of multinational executives' priorities, and an area of increasing concern; the international business arena has changed drastically, requiring thoughtfulness and flexibility instead of standard formulas for operating internationally. Old adages like “think global and act local” no longer provide the universal guidance they once seemed to.
A benchmark-driven modelling approach for evaluating deployment choices on a multi-core architecture
Abstract:
The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating point units (as in the AMD Bulldozer), which means that access times depend on the mapping of application tasks and on each core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and using non-standard task-to-core mappings can dramatically alter performance. Discovering this, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between results as necessary.
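The overall shape of such a benchmark-driven model (per-work-type timing tables populated by benchmark runs, with interpolation used to predict unmeasured problem sizes) might be sketched as follows. The class and method names are invented for illustration; the actual model in the work above is specific to shallow and the Cray XE6.

```java
import java.util.Map;
import java.util.TreeMap;

/**
 * Minimal sketch of a benchmark-driven performance model: measured
 * timings for the two work types (loop-based array updates and
 * nearest-neighbour halo exchanges) are stored per local problem size,
 * and the runtime of an unmeasured size is predicted by linear
 * interpolation between the nearest benchmarked sizes.
 */
public class BenchmarkModel {
    private final TreeMap<Integer, Double> computeTimes = new TreeMap<>();
    private final TreeMap<Integer, Double> haloTimes = new TreeMap<>();

    /** Record one benchmark result for a given local domain size. */
    public void addBenchmark(int size, double tCompute, double tHalo) {
        computeTimes.put(size, tCompute);
        haloTimes.put(size, tHalo);
    }

    private static double interpolate(TreeMap<Integer, Double> table, int size) {
        Map.Entry<Integer, Double> lo = table.floorEntry(size);
        Map.Entry<Integer, Double> hi = table.ceilingEntry(size);
        if (lo == null) return hi.getValue();          // below measured range
        if (hi == null) return lo.getValue();          // above measured range
        if (lo.getKey().equals(hi.getKey())) return lo.getValue(); // exact hit
        double f = (size - lo.getKey()) / (double) (hi.getKey() - lo.getKey());
        return lo.getValue() + f * (hi.getValue() - lo.getValue());
    }

    /** Predicted time per step: array updates plus halo exchange. */
    public double predict(int localDomainSize) {
        return interpolate(computeTimes, localDomainSize)
             + interpolate(haloTimes, localDomainSize);
    }
}
```

A separate table would be kept per execution scenario (population of the node, task-to-core mapping), so that comparing deployment choices reduces to comparing predictions.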
Abstract:
Following previous studies, the aim of this work is to further investigate the application of colloidal gas aphrons (CGA) to the recovery of polyphenols from a grape marc ethanolic extract, with a particular focus on exploring the use of a non-ionic, food-grade surfactant (Tween 20) as an alternative to the more toxic cationic surfactant CTAB. Different batch separation trials in a flotation column were carried out to evaluate the influence of surfactant type and concentration and of processing parameters (such as pH, drainage time, and CGA/extract volumetric and molar ratios) on the recovery of total and specific phenolic compounds. The possibility of achieving selective separation and concentration of different classes of phenolic and non-phenolic compounds was also assessed, together with the influence of the process on the antioxidant capacity of the recovered compounds. The process led to good recovery and limited loss of antioxidant capacity, but low selectivity under the tested conditions. Results showed the feasibility of using Tween 20, with a separation mechanism driven mainly by hydrophobic interactions. The volumetric ratio, rather than the molar ratio, was the key operating parameter in the recovery of polyphenols by CGA.
Abstract:
The optimal formulation for the preparation of amaranth flour films plasticized with glycerol or sorbitol was obtained by multi-response analysis. The optimization aimed to achieve films with greater resistance to breakage, moderate elongation and lower solubility in water. The influence of plasticizer concentration (Cg, glycerol, or Cs, sorbitol) and process temperature (Tp) on the mechanical properties and solubility of the amaranth flour films was first studied by response surface methodology (RSM). The optimized conditions were Cg = 20.02 g glycerol/100 g flour at Tp = 75 degrees C, and Cs = 29.6 g sorbitol/100 g flour at Tp = 75 degrees C. Characterization of the films prepared with these formulations confirmed that the optimization methodology employed in this work was satisfactory. Sorbitol was the more suitable plasticizer: it furnished amaranth flour films that were more resistant to breakage and less permeable to oxygen, owing to its greater miscibility with the biopolymers present in the flour and its lower affinity for water.
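Response surface methodology of the kind used here typically fits a second-order polynomial to each measured response; in generic form (an illustrative sketch, with coefficients standing in for the paper's fitted values), for plasticizer concentration C and process temperature Tp:

```latex
\hat{y} = \beta_0 + \beta_1 C + \beta_2 T_p
        + \beta_{11} C^2 + \beta_{22} T_p^2 + \beta_{12}\,C\,T_p,
```

where \hat{y} is one response (e.g. tensile strength, elongation or water solubility), and the multi-response optimization seeks the (C, T_p) pair that best satisfies all fitted surfaces simultaneously.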