989 results for Strategy execution


Relevance:

100.00%

Publisher:

Abstract:

Strategy execution has been a heated topic in the management world in recent years. Indeed, according to a survey by the Conference Board (2014), chief executives are so concerned about execution in their companies that they have rated it the No. 1 or No. 2 most challenging issue. Many of them choose to invest in training in order to get the most out of strategy execution. This research therefore seeks a model for designing training programs that contribute most to the success of strategy execution, drawing on three real-life training cases delivered by BTS Consulting Service. It was found that strategy execution can be greatly supported by training programs that take four factors into consideration, namely Alignment, Mindset to Change, Capability and Organization Support. The main implications of the findings are presented and discussed.

Relevance:

60.00%

Publisher:

Abstract:

Purpose – The purpose of this paper is to demonstrate how key strategic decisions are made in practice at successful FTSE 100 companies. Design/methodology/approach – The paper is based on a semi-structured interview with Ms Cynthia Carroll, Chief Executive of Anglo American plc. Findings – The interview outlines a number of important factors concerning: the evolution of strategy within Anglo American, strategy execution, leadership at board and executive levels, and capturing synergies within the company. Originality/value – The paper bridges the gap between theory and practice. It provides a practical view and demonstrates how corporate leaders think about key strategic issues.

Relevance:

60.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

60.00%

Publisher:

Abstract:

Whilst target costing and strategic management accounting (SMA) continue to be of considerable interest to academic accountants, both suffer from a relative dearth of empirically based research. Similarly, economic value added (EVA) has been the subject of little research at the level of the individual firm. The aim of this paper is to contribute to both the management accounting and value-based management literatures by analysing how one major European-based MNC introduced EVA into its target costing system. The case raises important questions about both the feasibility of cascading EVA down to product level and the compatibility of customer-facing versus shareholder-focused systems of performance management. We provide preliminary evidence that target costing can be used to align both of these perspectives, and that, when combined with other SMA techniques, it can serve as "the bridge connecting strategy formulation with strategy execution and profit generation" (Ansari et al., 2007, p. 512). © 2012 Elsevier Ltd.
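The cascading question at the heart of the case can be made concrete with the standard EVA formula, EVA = NOPAT minus WACC times invested capital. The figures and the per-product capital allocation below are illustrative assumptions, not data from the case company:

```python
def eva(nopat, capital_invested, wacc):
    """Economic Value Added: after-tax operating profit minus a
    charge for the capital tied up to earn it."""
    return nopat - wacc * capital_invested

def product_level_eva(product_margin, allocated_capital, wacc):
    # Cascading EVA to product level: charge each product for the
    # capital allocated to it (an illustrative allocation rule,
    # not the case company's actual method).
    return eva(product_margin, allocated_capital, wacc)

# A product earning 120 on 1,000 of allocated capital at a 10% WACC
# creates 20 of economic value.
print(product_level_eva(120.0, 1000.0, 0.10))  # -> 20.0
```

The target costing link is that the allocated capital charge becomes one more cost the product must recover at its market-driven target price.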

Relevance:

30.00%

Publisher:

Abstract:

The StreamIt programming model has been proposed to exploit parallelism in streaming applications on general-purpose multicore architectures. StreamIt graphs describe task, data and pipeline parallelism, which can be exploited on accelerators such as Graphics Processing Units (GPUs) or the Cell BE, which support abundant parallelism in hardware. In this paper, we describe a novel method to orchestrate the execution of a StreamIt program on a multicore platform equipped with an accelerator. The proposed approach identifies, using profiling, the relative benefits of executing a task on the superscalar CPU cores and the accelerator. We formulate the problem of partitioning the work between the CPU cores and the GPU, taking into account the latencies for data transfers and the buffer layout transformations required by the partitioning, as an integrated Integer Linear Program (ILP) which can then be solved by an ILP solver. We also propose an efficient heuristic algorithm for the work partitioning between the CPU and the GPU, which provides solutions that are within 9.05% of the optimal solution on average across the benchmark suite. The partitioned tasks are then software-pipelined to execute on the multiple CPU cores and the Streaming Multiprocessors (SMs) of the GPU. The software pipelining algorithm orchestrates the execution between the CPU cores and the GPU by emitting the code for the CPU and the GPU, and the code for the required data transfers. Our experiments on a platform with 8 CPU cores and a GeForce 8800 GTS 512 GPU show a geometric mean speedup of 6.94X, with a maximum of 51.96X, over single-threaded CPU execution across the StreamIt benchmarks. This is an 18.9% improvement over a partitioning strategy that maps only the filters that cannot be executed on the GPU - the filters with state that is persistent across firings - onto the CPU.
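As a toy illustration of the partitioning problem the ILP solves, the exhaustive search below minimises the pipeline bottleneck: the larger of total CPU work and total GPU work, plus a cost for every data edge that crosses the partition. The task names, profile numbers and cost model are invented for illustration, and the brute force is a stand-in for a real ILP solver:

```python
from itertools import product

def partition(tasks, cpu_time, gpu_time, edges, transfer_cost):
    """Try every CPU/GPU assignment (0 = CPU, 1 = GPU) and minimise
    max(total CPU work, total GPU work) plus a transfer cost for
    every data edge whose endpoints land on different devices."""
    best_cost, best_assign = float("inf"), None
    for bits in product((0, 1), repeat=len(tasks)):
        assign = dict(zip(tasks, bits))
        cpu = sum(cpu_time[t] for t in tasks if assign[t] == 0)
        gpu = sum(gpu_time[t] for t in tasks if assign[t] == 1)
        xfer = sum(transfer_cost[e] for e in edges
                   if assign[e[0]] != assign[e[1]])
        cost = max(cpu, gpu) + xfer
        if cost < best_cost:
            best_cost, best_assign = cost, assign
    return best_cost, best_assign

# Invented profile: task B is far faster on the GPU, A and C on the CPU.
cost, assign = partition(
    ["A", "B", "C"],
    {"A": 2, "B": 10, "C": 2},   # profiled CPU times
    {"A": 5, "B": 1, "C": 5},    # profiled GPU times
    [("A", "B"), ("B", "C")],    # data edges in the stream graph
    {("A", "B"): 1, ("B", "C"): 1})
print(cost, assign)  # bottleneck 6: only B is mapped to the GPU
```

Even with two crossing edges, moving only B to the GPU wins because it removes the dominant CPU term.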

Relevance:

30.00%

Publisher:

Abstract:

Although various strategies have been developed for scheduling parallel applications with independent tasks, very little work exists on scheduling tightly coupled parallel applications in cluster environments. In this paper, we compare four different strategies, based on performance models of tightly coupled parallel applications, for scheduling such applications on clusters. In addition to algorithms based on existing popular optimization techniques, we propose a new algorithm, called Box Elimination, that searches the space of performance-model parameters to determine the best schedule of machines. By means of real and simulated experiments, we evaluated the algorithms on single-cluster and multi-cluster setups. We show that our Box Elimination algorithm generates schedules up to 80% more efficient than those of the other algorithms. We also show that the execution times of the schedules produced by our algorithm are more robust against performance-modeling errors.
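A minimal sketch of model-driven schedule selection, assuming a simple illustrative performance model (serial fraction, perfectly divisible work, and communication that grows with machine count). The paper's actual model and its Box Elimination search of the parameter space are more sophisticated:

```python
def predicted_time(p, serial, work, comm):
    """Illustrative performance model for a tightly coupled job:
    a serial fraction, perfectly divisible parallel work, and a
    communication term that grows with the machine count."""
    return serial + work / p + comm * p

def best_schedule(max_p, serial, work, comm):
    # Exhaustive search over machine counts; a stand-in for Box
    # Elimination's pruning of the performance-model space.
    return min(range(1, max_p + 1),
               key=lambda p: predicted_time(p, serial, work, comm))

# With 1s serial, 100s of work and 1s/machine of communication,
# the model favours 10 machines (predicted time 21s).
print(best_schedule(32, 1.0, 100.0, 1.0))  # -> 10
```

The robustness claim in the abstract corresponds to how flat the model's minimum is: nearby machine counts here predict times within about half a second of the optimum, so modeling errors shift the chosen schedule little.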

Relevance:

30.00%

Publisher:

Abstract:

MATLAB is an array language, initially popular for rapid prototyping, but now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism. These programs also have control-flow-dominated scalar regions that have an impact on the program's execution time. Today's computer systems have tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control-flow-dominated regions to the CPU and the data-parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input for identifying data-parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data-parallel regions of the program and composes them into kernels. The problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics are presented to map identified kernels to either the CPU or GPU so that kernel execution on the CPU and the GPU happens synergistically and the amount of data transfer needed is minimized. To ensure the required data movement for dependencies across basic blocks, we propose a data-flow analysis and edge-splitting strategy. Thus our compiler automatically handles kernel composition, mapping of kernels to the CPU and GPU, scheduling, and insertion of the required data transfers. The proposed compiler was implemented, and experimental evaluation using a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X for data-parallel benchmarks over native execution of MATLAB.
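The kernel-composition step can be sketched as a greedy clustering: fuse dependent statements when both are data parallel, so the intermediate array never leaves the GPU. This union-find heuristic is an illustrative stand-in for MEGHA's constrained graph clustering formulation, and the statement names are invented:

```python
def compose_kernels(statements, deps, parallel):
    """Greedy kernel composition via union-find: fuse two dependent
    statements into one kernel when both are data parallel, so their
    intermediate stays on the GPU."""
    parent = {s: s for s in statements}

    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]  # path compression
            s = parent[s]
        return s

    for a, b in deps:
        if parallel[a] and parallel[b]:
            parent[find(a)] = find(b)

    kernels = {}
    for s in statements:
        kernels.setdefault(find(s), []).append(s)
    return sorted(kernels.values())

# s3 is a scalar (CPU) statement, so it splits the kernel chain.
print(compose_kernels(
    ["s1", "s2", "s3", "s4"],
    [("s1", "s2"), ("s2", "s3"), ("s3", "s4")],
    {"s1": True, "s2": True, "s3": False, "s4": True}))
# -> [['s1', 's2'], ['s3'], ['s4']]
```

The real formulation must also respect constraints this sketch ignores, such as register and memory limits per kernel and the transfer cost of edges left uncut.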

Relevance:

30.00%

Publisher:

Abstract:

Accurate and timely prediction of weather phenomena, such as hurricanes and flash floods, requires high-fidelity, compute-intensive simulations of multiple finer regions of interest within a coarse simulation domain. Current weather applications execute these nested simulations sequentially using all the available processors, which is sub-optimal due to their sub-linear scalability. In this work, we present a strategy for parallel execution of multiple nested-domain simulations based on partitioning the 2-D processor grid into disjoint rectangular regions associated with each domain. We propose a novel combination of performance prediction, processor allocation methods and topology-aware mapping of the regions on torus interconnects. Experiments with WRF on IBM Blue Gene systems show that the proposed strategies yield performance improvements of up to 33% with topology-oblivious mapping, and up to an additional 7% with topology-aware mapping, over the default sequential strategy.
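A simplified sketch of the rectangular partitioning idea: split the processor grid into vertical slabs whose widths are roughly proportional to each nest's predicted load. The allocation rule is an assumption for illustration, not the paper's method, which also handles general rectangles and topology-aware placement:

```python
def split_grid(rows, cols, loads):
    """Partition a rows x cols processor grid into disjoint vertical
    rectangles, one per nested domain, with widths roughly
    proportional to each domain's predicted load. Returns
    (row0, row1, col0, col1) tuples."""
    total = sum(loads)
    cuts, start = [], 0
    for i, load in enumerate(loads):
        remaining = len(loads) - i - 1
        width = max(1, round(cols * load / total))
        width = min(width, cols - start - remaining)  # leave >=1 column each
        cuts.append((start, start + width))
        start += width
    c0, _ = cuts[-1]
    cuts[-1] = (c0, cols)  # hand any leftover columns to the last domain
    return [(0, rows, a, b) for a, b in cuts]

# Three nests with predicted loads 1:1:2 on an 8 x 16 grid.
print(split_grid(8, 16, [1, 1, 2]))
# -> [(0, 8, 0, 4), (0, 8, 4, 8), (0, 8, 8, 16)]
```

Because the rectangles are disjoint, the nests can run simultaneously instead of sequentially, which is the source of the reported speedup.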

Relevance:

30.00%

Publisher:

Abstract:

Many meteorological phenomena occur at different locations simultaneously, and they vary both temporally and spatially. It is essential to track these multiple phenomena for accurate weather prediction. Efficient analysis requires high-resolution simulations, which can be conducted by introducing finer-resolution nested simulations (nests) at the locations of these phenomena. Tracking multiple weather phenomena simultaneously requires simultaneous execution of the nests on different subsets of the processors used by the main weather simulation. Dynamic variation in the number of these nests requires efficient processor-reallocation strategies. In this paper, we develop strategies for efficient partitioning and repartitioning of the nests among the processors. As a case study, we consider an application that tracks multiple organized cloud clusters in tropical weather systems. We first present a parallel data-analysis algorithm to detect such clouds. We then develop a tree-based hierarchical diffusion method that reallocates processors for the nests so as to keep the redistribution cost low, achieved through a novel tree-reorganization approach. We show that our approach exhibits up to 25% lower redistribution cost and 53% fewer hop-bytes than a processor-reallocation strategy that does not consider the existing processor allocation.
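The goal of the reallocation strategy, keeping redistribution cost low, can be illustrated with a naive diffusion that keeps each processor with its current nest wherever possible; the paper's tree-based hierarchical diffusion is considerably more refined, and the nest names and counts below are invented:

```python
def redistribution_cost(old_alloc, new_alloc):
    """Number of processors whose nest assignment changes; the
    quantity the reallocation strategy tries to keep small."""
    return sum(1 for p in old_alloc if new_alloc.get(p) != old_alloc[p])

def diffuse(old_alloc, targets):
    """Naive diffusion: a processor stays with its current nest while
    that nest still needs processors; only the surplus is reassigned."""
    counts = {n: 0 for n in targets}
    new_alloc, spare = {}, []
    for p, n in old_alloc.items():
        if n in counts and counts[n] < targets[n]:
            new_alloc[p] = n
            counts[n] += 1
        else:
            spare.append(p)  # surplus processor, free to move
    for n in targets:
        while counts[n] < targets[n] and spare:
            new_alloc[spare.pop()] = n
            counts[n] += 1
    return new_alloc

# Nest "a" shrinks from 3 processors to 2 and nest "b" grows to 2:
# only one processor has to move.
old = {0: "a", 1: "a", 2: "a", 3: "b"}
new = diffuse(old, {"a": 2, "b": 2})
print(redistribution_cost(old, new))  # -> 1
```

A from-scratch reallocation that ignores the existing assignment could move up to all four processors here, which is exactly the comparison the abstract's 25% figure is making.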

Relevance:

30.00%

Publisher:

Abstract:

Enhancing the handover process in broadband wireless communication deployments has traditionally motivated many research initiatives. In a high-speed railway domain, the challenge is even greater: owing to the long distances covered, the mobile node gets involved in a compulsory sequence of handover processes, and poor performance during these handovers significantly degrades global end-to-end performance. This article proposes a new handover strategy for the railway domain: the RMPA handover, a Reliable Mobility Pattern Aware IEEE 802.16 handover strategy "customized" for a high-speed mobility scenario. The stringent high-mobility requirement is balanced by three other positive features of this context: mobility-pattern awareness, several available sources for location-discovery techniques, and a previously known traffic-data profile. To the best of the authors' knowledge, there is no IEEE 802.16 handover scheme that simultaneously covers the optimization of the handover process itself and the efficient timing of the handover process. Our strategy covers both areas of research while providing a cost-effective and standards-based solution. To schedule the handover process efficiently, the RMPA strategy makes use of a context-aware handover policy, that is, a policy based on the mobile node's mobility pattern, the time required to perform the handover, the neighboring network conditions, the data traffic profile, the received signal power, and the current location and speed of the train. Our proposal merges all these variables in a cross-layer interaction in the handover policy engine. It also enhances the handover process itself by establishing the values for the set of handover configuration parameters and mechanisms of the handover process. RMPA is a cost-effective strategy because compatibility with standards-based equipment is guaranteed.
The major contributions of the RMPA handover are in areas that have been left open to the handover designer's discretion. Our simulation analysis validates the RMPA handover decision rules and design choices. Our results supporting a high-demand video application in the uplink stream show a significant improvement in the end-to-end quality of service parameters, including end-to-end delay (22%) and jitter (80%), when compared with a policy based on signal-to-noise-ratio information.
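The timing side of such a context-aware policy can be sketched as a simple trigger rule: hand over when the train, on its known route, will reach the serving cell's edge before a handover could complete, or when the signal degrades. All names and thresholds below are invented for illustration and are not the article's parameters:

```python
def should_handover(signal_dbm, speed_kmh, dist_to_cell_edge_m,
                    signal_threshold_dbm=-85.0, handover_time_s=2.0):
    """Illustrative RMPA-style trigger: hand over when the received
    signal degrades, or when the train's known position and speed
    imply it will reach the serving cell's edge before a handover
    of the assumed duration could complete."""
    speed_ms = speed_kmh / 3.6
    time_to_edge_s = (dist_to_cell_edge_m / speed_ms
                      if speed_ms > 0 else float("inf"))
    return signal_dbm < signal_threshold_dbm or time_to_edge_s < handover_time_s

# At 300 km/h and 150 m from the cell edge the trigger fires early,
# even though the signal (-70 dBm) is still good.
print(should_handover(-70.0, 300.0, 150.0))  # -> True
```

A policy based only on signal-to-noise ratio would wait for degradation and start the handover too late at this speed, which is the comparison the article's delay and jitter improvements are made against.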

Relevance:

30.00%

Publisher:

Abstract:

The Pax Americana and the grand strategy of hegemony (or "Primacy") that underpins it may be becoming unsustainable. Particularly in the wake of exhausting wars, the Global Financial Crisis, and the shift of wealth from West to East, it may no longer be possible or prudent for the United States to act as the unipolar sheriff or guardian of a world order. But how viable are the alternatives, and what difficulties will these alternatives entail in their design and execution? This monograph offers a sympathetic but critical analysis of the alternative U.S. national security strategies of "retrenchment" proposed by critics of American diplomacy. In these strategies, the United States would anticipate the coming of a more multipolar world and organize its behavior around the dual principles of "concert" and "balance," seeking a collaborative relationship with other great powers while being prepared to counterbalance any hostile aggressor that threatens world order. The proponents of such strategies argue that by scaling back its global military presence and its commitments, the United States can trade prestige for security, shift burdens, and attain a freer hand. To support this theory, they often look to the 19th-century Concert of Europe as a model of a successful security regime, and to general theories about the natural balancing behavior of states. This monograph examines this precedent and measures its usefulness for contemporary statecraft, identifying how great-power concerts are sustained and how they break down. The project also applies competing theories of how states might behave if world politics are in transition: will they balance, bandwagon, or hedge? This demonstrates the multiple possible futures that could shape, and be shaped by, a new strategy. A new strategy based on an acceptance of multipolarity and the limits of power is prudent, and there is scope for such a shift.
The convergence of several trends (transnational problems needing collaborative efforts, the military advantages of defenders, the reluctance of states to engage in unbridled competition, and hegemony fatigue among the American people) means that an opportunity exists, internationally and at home, for a shift to a new strategy. But a Concert-Balance strategy will still need to deal with several potential dilemmas. These include the difficulty of reconciling competitive balancing with cooperative concerts, the limits of balancing without a forward-reaching onshore military capability, possible unanticipated consequences such as a rise in regional power competition or the emergence of blocs (such as a Chinese East Asia or an Iranian Gulf), and the challenge of sustaining domestic political support for a strategy that voluntarily abdicates world leadership. These difficulties can be mitigated, but they must be met with pragmatic and gradual implementation as well as elegant theorizing, while avoiding the swap of one ironclad, doctrinaire grand strategy for another.

Relevance:

30.00%

Publisher:

Abstract:

One of the challenges presented by the current conjuncture in global companies is to recognize and understand that culture, and the levels of power distance within organizational structures in different countries, contribute significantly to the failure or success of their strategies. Aligning the implementation and execution of new strategies with projects intended for the success of the organization as a whole, rather than of any individual part of it, is an important step towards reducing the impact of power distance (PDI) on the success of business strategies. Companies at odds with this understanding create boundaries that widen organizational chasms, also taking into consideration relevant aspects such as FSAs (firm-specific advantages) and CSAs (country-specific advantages). It is also important that organizations based in countries or regions with low power distance (PDI) between individuals be more flexible and prepared to ask for, and listen to, suggestions from regional and local offices. Thus, the purpose of this study is to highlight the elements of effective strategy implementation, considering the relevant aspects at all levels of global corporate culture that explain the influence of power distance when implementing new strategies, and to minimize the impacts of this internal business relationship. This study also recognizes that other corporate and cultural aspects are relevant to the success of business strategies, such as the lack of alignment between global and regional/local organizations, the need for competent leadership resources, and the challenges posed by the distance between hierarchical levels (Headquarters and Regional Office), among the various causes that prevent the successful execution of global strategies.
Finally, we show that the execution of strategy cannot be treated as a construction created solely by the Headquarters or by a single Board; it needs to be understood as a system that interacts with its surroundings.

Relevance:

30.00%

Publisher:

Abstract:

Breakthrough advances in microprocessor technology and efficient power management have altered the course of processor development with the emergence of multi-core processor technology, in order to deliver higher levels of processing. The utilization of many-core technology has boosted the computing power provided by clusters of workstations or SMPs, delivering large computational power at an affordable cost using solely commodity components. Different implementations of message-passing libraries and system software (including operating systems) are installed in such cluster and multi-cluster computing systems. To guarantee correct execution of a message-passing parallel application in a computing environment other than the one for which it was originally developed, review of the application code is needed. In this paper, a hybrid communication interfacing strategy is proposed to execute a parallel application on a group of computing nodes belonging to different clusters or multi-clusters (computing systems that may be running different operating systems and MPI implementations), interconnected with public or private IP addresses, and responding interchangeably to user execution requests. Experimental results demonstrate the feasibility and effectiveness of the proposed strategy through the execution of parallel benchmark applications.