892 results for EXPLOITING MULTICOMMUTATION


Relevance: 10.00%

Abstract:

We introduce a technique for quantifying and then exploiting uncertainty in nonlinear stochastic control systems. The approach is suboptimal but robust, and relies upon approximating the forward and inverse plant models with neural networks which also estimate the intrinsic uncertainty. Sampling from the resulting Gaussian distributions of the inversion-based neurocontroller allows us to introduce a control law which is demonstrably more robust than traditional adaptive controllers.
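
As a rough illustration of the sampling step (a sketch, not the authors' implementation), the code below assumes an inverse plant model that returns a Gaussian mean and variance for the control; `inverse_model` is a hypothetical stand-in for the trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def inverse_model(target_state):
    """Hypothetical stand-in for the trained inverse-plant network:
    returns the mean control and its predictive variance for a desired
    next state. A real network would be fitted to plant data."""
    mean_u = 0.5 * target_state               # placeholder inverse mapping
    var_u = 0.01 + 0.05 * target_state ** 2   # uncertainty grows off-nominal
    return mean_u, var_u

def robust_control(target_state, n_samples=100):
    """Monte Carlo control law: sample candidate controls from the
    Gaussian predictive distribution and average them."""
    mean_u, var_u = inverse_model(target_state)
    samples = rng.normal(mean_u, np.sqrt(var_u), size=n_samples)
    return samples.mean()

print(robust_control(1.2))
```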

Relevance: 10.00%

Abstract:

We consider the direct adaptive inverse control of nonlinear multivariable systems with different delays between every input-output pair. In direct adaptive inverse control, the inverse mapping is learned from examples of input-output pairs. This makes the obtained controller suboptimal, since the network may have to learn the response of the plant over a larger operational range than necessary. Moreover, in certain applications the control problem can be redundant, implying that the inverse problem is ill-posed. In this paper we propose a new algorithm for estimating and exploiting uncertainty in nonlinear multivariable control systems. This approach allows us to model strongly non-Gaussian distributions of control signals as well as processes with hysteresis. The proposed algorithm circumvents the dynamic programming problem by using the predicted neural network uncertainty to localise the possible control solutions to consider.
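
To illustrate the idea of localising control solutions (a toy sketch, not the proposed algorithm), suppose the inverse network returns a mixture of Gaussian modes for a redundant control problem; the predicted per-mode variance can then prune the candidates, avoiding a search over the whole control space. All numbers are invented:

```python
# A redundant, ill-posed inverse problem yields a multimodal control
# distribution, so a single Gaussian is inadequate. Represent it as a
# mixture and use the per-mode uncertainty to localise the solutions.
modes = [
    {"mean": -0.8, "var": 0.04, "weight": 0.45},  # confident solution branch
    {"mean": 0.6, "var": 0.30, "weight": 0.55},   # uncertain branch
]

def localise_control(modes, var_threshold=0.1):
    """Discard modes whose predictive variance is high, then take the
    most probable remaining solution; this prunes the candidates rather
    than solving a full dynamic programme over the control space."""
    confident = [m for m in modes if m["var"] < var_threshold]
    if not confident:          # nothing confident: fall back to all modes
        confident = modes
    return max(confident, key=lambda m: m["weight"])["mean"]

print(localise_control(modes))  # -> -0.8
```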

Relevance: 10.00%

Abstract:

Diversity has become an important issue at all levels of the company from the boardroom to the back office. It is increasingly apparent that diversity is vital to productivity, with academic research indicating an important link between diverse top management team (TMT) composition and corporate performance. However, the nature of this link remains elusive, as there is little accessible research that can help top teams to evaluate how diversity impacts on their strategic capacity. This paper seeks to fill this gap by developing a conceptual framework, illustrated with case examples, to explain the relationships between TMT diversity and TMT collective action. As collective action is difficult to attain from top teams that are high in diversity, six practical processes are developed from this framework for establishing and exploiting top team strategic capacity. The paper concludes by outlining the theoretical implications of the framework. © Elsevier Ltd. All rights reserved.

Relevance: 10.00%

Abstract:

The traditional paradigm of foreign direct investment (FDI) suggests that FDI is undertaken principally to exploit some firm-specific advantage in a foreign country which provides a locational advantage to the investor. However, recent theoretical work suggests a model of FDI in which the motivation is not to exploit existing technological advantages in a foreign country, but to access such technology and transfer it from the host economy to the investing multinational corporation via spillover effects. This paper tests the technology sourcing versus technology exploiting hypotheses for a panel of sectoral FDI flows between the United States and major OECD nations over a 15-year period. The research makes use of Patel and Vega's (Research Policy, 28, 145-55, 1999) taxonomy of sectors which are likely a priori to exhibit technology sourcing and exploiting behaviour respectively. While there is evidence that FDI flows into the United States are attracted to R&D-intensive sectors, very little support is found for the technology sourcing hypothesis for either inward or outward FDI flows. The results suggest that, in aggregate, firm-specific 'ownership' effects remain powerful determinants of FDI flows.

Relevance: 10.00%

Abstract:

There is increasing empirical and theoretical evidence that foreign direct investment (FDI) may be motivated not by the desire to exploit some competitive advantage possessed by multinationals, but to access the technology of host economy firms. Using a panel of FDI flows across OECD countries and manufacturing sectors between 1984 and 1995, we test whether these contrasting motivations influence the effects that FDI has on domestic total factor productivity. The distinction between technology-exploiting FDI (TEFDI) and technology-sourcing FDI (TSFDI) is made using R&D intensity differentials between host and source sectors. The hypothesis that the motivation for FDI has an effect on total factor productivity spillovers is supported: TEFDI has a net positive effect, while TSFDI has a net negative effect. These net effects are explained in terms of the offsetting influences of productivity spillovers and market stealing effects induced by incoming multinationals.
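
The classification rule can be shown in miniature (a toy sketch with invented figures, not the paper's econometric specification): a flow is labelled technology-sourcing when the host sector is more R&D-intensive than the source sector, and technology-exploiting otherwise:

```python
# Toy illustration of the TEFDI/TSFDI split by the sign of the R&D
# intensity differential. All numbers below are invented.

def classify_fdi(rd_intensity_source, rd_intensity_host):
    """Label a flow by comparing sectoral R&D intensities."""
    if rd_intensity_host > rd_intensity_source:
        return "TSFDI"  # sourcing: investor seeks host-sector technology
    return "TEFDI"      # exploiting: investor deploys its own advantage

flows = [
    {"sector": "pharmaceuticals", "source_rd": 0.08, "host_rd": 0.12},
    {"sector": "food products",   "source_rd": 0.03, "host_rd": 0.01},
]
for f in flows:
    print(f["sector"], classify_fdi(f["source_rd"], f["host_rd"]))
```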

Relevance: 10.00%

Abstract:

We relate the technological and factor price determinants of inward and outward FDI to its potential productivity and labour market effects on both host and home economies. This allows us to distinguish clearly between technology sourcing and technology exploiting FDI, and to identify FDI which is linked to labour cost differentials. We then empirically examine the effects of different types of FDI into and out of the United Kingdom on domestic (i.e. UK) productivity and on the demand for skilled and unskilled labour at the industry level. Inward investment into the UK comes overwhelmingly from sectors and countries which have a technological advantage over the corresponding UK sector. Outward FDI shows a quite different pattern, dominated by investment into foreign sectors which have lower unit labour costs than the UK. We find that different types of FDI have markedly different productivity and labour demand effects, which may in part explain the lack of consensus in the empirical literature on the effects of FDI. Our results also highlight the difficulty for policy makers of simultaneously improving employment and domestic productivity through FDI.

Relevance: 10.00%

Abstract:

AIMS: To demonstrate the potential use of in vitro poly(lactic-co-glycolic acid) (PLGA) microparticles, in comparison with triamcinolone suspension, to aid visualisation of the vitreous during anterior and posterior vitrectomy. METHODS: PLGA microparticles (diameter 10-60 μm) were fabricated using single and/or double emulsion techniques and used either untreated or following the surface adsorption of a protein (transglutaminase). Particle size, shape, morphology and surface topography were assessed using scanning electron microscopy (SEM) and compared with a standard triamcinolone suspension. The efficacy of these microparticles in enhancing visualisation of the vitreous, relative to the triamcinolone suspension, was assessed using an in vitro set-up exploiting porcine vitreous. RESULTS: Unmodified PLGA microparticles failed to adhere adequately to porcine vitreous and were readily washed out by irrigation. In contrast, transglutaminase-coated PLGA microparticles demonstrated significantly improved adhesiveness and were comparable to a triamcinolone suspension in their ability to enhance visualisation of the vitreous. This adhesive behaviour was also selective, with no binding to the corneal endothelium. CONCLUSION: The use of transglutaminase-modified biodegradable PLGA microparticles represents a novel method of visualising the vitreous and aiding vitrectomy. It may provide a distinct alternative for visualisation of the vitreous whilst eliminating the pharmacological effects of triamcinolone acetonide suspension.

Relevance: 10.00%

Abstract:

Very large spatially-referenced datasets, for example those derived from satellite-based sensors which sample across the globe or from large monitoring networks of individual sensors, are becoming increasingly common and more widely available for use in environmental decision making. In large or dense sensor networks, huge quantities of data can be collected over small time periods. In many applications the generation of maps, or predictions at specific locations, from the data in (near) real-time is crucial. Geostatistical operations such as interpolation are vital in this map-generation process, and in emergency situations the resulting predictions need to be available almost instantly so that decision makers can make informed decisions and define risk and evacuation zones. It is also helpful in less time-critical applications, for example when interacting directly with the data for exploratory analysis, that the algorithms are responsive within a reasonable time frame. Performing geostatistical analysis on such large spatial datasets presents a number of problems, particularly when maximum likelihood methods are used: although the storage requirements for the data scale only linearly with the number of observations, the memory and time requirements of the likelihood computation scale quadratically and cubically, respectively. Most modern commodity hardware has at least two processor cores, and other mechanisms for parallel computation, such as Grid-based systems, are also becoming increasingly available. However, there currently seems to be little interest in exploiting this extra processing power within the context of geostatistics. In this paper we review the existing parallel approaches for geostatistics. By recognising that different natural parallelisms exist and can be exploited depending on whether the dataset is sparsely or densely sampled with respect to the range of variation, we introduce two contrasting novel implementations of parallel algorithms based on approximating the data likelihood, extending the methods of Vecchia [1988] and Tresp [2000]. Using parallel maximum likelihood variogram estimation and parallel prediction algorithms, we show that computational time can be significantly reduced. We demonstrate this with both sparsely and densely sampled data on a variety of architectures, ranging from the dual-core processors found in many modern desktop computers to large multi-node supercomputers. To highlight the strengths and weaknesses of the different methods we employ synthetic datasets, and go on to show how the methods allow maximum likelihood based inference on the exhaustive Walker Lake dataset.
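
As a minimal sketch of the Vecchia [1988] style of likelihood approximation (not the implementations developed in the paper), the code below approximates the joint Gaussian log-likelihood by a product of conditionals, each given only a few nearest neighbours; because every term depends only on local data, the loop parallelises trivially (e.g. with multiprocessing over chunks of the index). The exponential covariance and all parameter values are assumptions for illustration:

```python
import numpy as np
from scipy.spatial import cKDTree

def cov(d, sill=1.0, corr_range=0.3, nugget=1e-6):
    """Exponential covariance with a small nugget on the diagonal."""
    return sill * np.exp(-d / corr_range) + nugget * (d == 0)

def vecchia_loglik(coords, z, m=10):
    """Sum of Gaussian conditionals p(z_i | m nearest earlier points),
    replacing the O(n^3) full-likelihood factorisation."""
    ll = 0.0
    for i in range(1, len(z)):
        past = coords[:i]                  # earlier-ordered points only
        k = min(m, i)
        _, idx = cKDTree(past).query(coords[i], k=k)  # rebuilt per step: fine for a sketch
        idx = np.atleast_1d(idx)
        pts = np.vstack([past[idx], coords[i]])
        D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
        K = cov(D)
        w = np.linalg.solve(K[:-1, :-1], K[:-1, -1])
        mu = w @ z[idx]                    # conditional mean
        s2 = K[-1, -1] - K[:-1, -1] @ w    # conditional variance
        ll += -0.5 * (np.log(2 * np.pi * s2) + (z[i] - mu) ** 2 / s2)
    return ll

rng = np.random.default_rng(0)
coords = rng.random((200, 2))
z = rng.standard_normal(200)   # toy data; real use would order the points first
print(vecchia_loglik(coords, z))
```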

Relevance: 10.00%

Abstract:

Contrary to the long-received theory of FDI, interest rates or rates of return can motivate foreign direct investment (FDI) in concert with the benefits of direct ownership. Thus, access to investor capital and capital markets is a vital component of the multinational's competitive market structure. Moreover, multinationals can use their superior financial capacity as a competitive advantage in exploiting FDI opportunities in dynamic markets. They can also mitigate higher levels of foreign business risk under dynamic conditions by shifting more financial risk to creditors in the host economy. Furthermore, the investor's expectation of foreign business risk necessarily commands a risk premium for exposing their equity to foreign market risk. Multinationals can modify the profit maximization strategy of their foreign subsidiaries to maximize growth or profits to generate this risk premium. In this context, we investigate how foreign subsidiaries manage their capital funding, business risk and profit strategies with a diverse sample of 8,000 matched parent and foreign subsidiary accounts from multiple industries in 38 countries. We find that interest rates, asset prices and expectations in capital markets have a significant effect on the capital movements of foreign subsidiaries. We also find that foreign subsidiaries mitigate their exposure to foreign business risk by modifying their capital structure and debt maturity. Further, we show how the operating strategy of foreign subsidiaries affects their preference for growth or profit maximization. We further show that superior shareholder value, which is a vital link for access to capital for funding foreign expansion in open market economies, is achieved through maintaining stability in the rate of growth and good asset utilization.

Relevance: 10.00%

Abstract:

The role of technology management in achieving improved manufacturing performance has been receiving increased attention as enterprises become more exposed to competition from around the world. In the modern market for manufactured goods the demand is now for more product variety, better quality, shorter delivery and greater flexibility, while the financial and environmental cost of resources has become an urgent concern to manufacturing managers. This issue of the International Journal of Technology Management addresses the question of how the diffusion, implementation and management of technology can improve the performance of manufacturing industries. The authors come from a large number of different countries and their contributions cover a wide range of topics within this general theme. Some papers are conceptual; others report on research carried out in a range of industries including steel production, iron founding, electronics, robotics, machinery, precision engineering, metal working and motor manufacture. In some cases they describe situations in specific countries. Several are based on presentations made at the UK Operations Management Association's Sixth International Conference, held at Aston University, at which the conference theme was 'Achieving Competitive Edge: Getting Ahead Through Technology and People'. The first two papers deal with questions of advanced manufacturing technology implementation and management. Firstly, Beatty describes a three-year longitudinal field study carried out in ten Canadian manufacturing companies using CAD/CAM and CIM systems. Her findings relate to speed of implementation, choice of system type, the role of individuals in implementation, and organization and job design. This is followed by a paper by Bessant in which he argues that a more strategic approach should be taken towards the management of technology in the 1990s and beyond. Also considered in this paper are the capabilities necessary to deploy advanced manufacturing technology as a strategic resource and the way such capabilities might be developed within the firm. These two papers, which deal largely with the implementation of hardware, are supplemented by Samson and Sohal's contribution, in which they argue that a much wider perspective should be adopted, based on a new approach to manufacturing strategy formulation. Technology transfer is the topic of the following two papers. Pohlen again takes the case of advanced manufacturing technology and reports on his research into the factors contributing to successful realisation of AMT transfer. The paper by Lee then provides a more detailed account of technology transfer in the foundry industry. Using a case study based on a firm which has implemented a number of transferred innovations, a model is illustrated in which the 'performance gap' can be identified and closed. The diffusion of technology is addressed in the next two papers. In the first of these, by Lowe and Sim, the managerial technologies of 'Just in Time' and 'Manufacturing Resource Planning' (MRP II) are examined. A study is described from which a number of factors, including rate of diffusion and size, are found to influence the adoption process. Dahlin then considers the case of a specific item of hardware technology, the industrial robot. Her paper reviews the history of robot diffusion since the early 1960s and then tries to predict how the industry will develop in the future.

The following two papers deal with the future of manufacturing in a more general sense. The future implementation of advanced manufacturing technology is the subject explored by de Haan and Peters, who describe the results of their Dutch Delphi forecasting study conducted among a panel of experts including scientists, consultants, users and suppliers of AMT. Busby and Fan then consider a type of organisational model, the 'extended manufacturing enterprise', which would represent a distinct alternative to pure market-led and command structures by exploiting the shared knowledge of suppliers and customers. The three country-based papers consider strategic issues relating to manufacturing technology. In a paper based on investigations conducted in China, He, Liff and Steward report their findings from strategy analyses carried out in the steel and watch industries with a view to assessing technology needs and organizational change requirements. This is followed by Tang and Nam's paper, which examines the machinery industry in Korea and its emerging importance as a key sector in the Korean economy. In his paper which focuses on Venezuela, Ernst then considers the particular problem of how that country can address the problem of falling oil revenues. He sees manufacturing as an important contributor to Venezuela's future economy and proposes a means whereby government and private enterprise can co-operate in developing the manufacturing sector. The last six papers all deal with specific topics relating to the management of manufacturing. Firstly, Youssef looks at the question of manufacturing flexibility, introducing and testing a conceptual model that relates computer-based technologies to flexibility. Dangerfield's paper, which follows, is based on research conducted in the steel industry. He considers the question of scale and proposes a modelling approach for determining the plant configuration necessary to meet market demand. Engstrom presents the results of a detailed investigation into the need for reorganising material flow where group assembly of products has been adopted. Sherwood, Guerrier and Dale then report the findings of a study into the effectiveness of Quality Circle implementation. Stillwagon and Burns consider how manufacturing competitiveness can be improved in individual firms, describing how 'human performance engineering' can be used to motivate individual performance as well as to integrate organizational goals. Finally, Sohal, Lewis and Samson describe, using a case study example, how just-in-time control can be applied within the context of computer numerically controlled flexible machining lines. The papers in this issue of the International Journal of Technology Management cover a wide range of topics relating to the general question of improving manufacturing performance through the dissemination, implementation and management of technology. Although they differ markedly in content and approach, they share the collective aim of addressing the concepts, principles and practices which provide a better understanding of the technology of manufacturing and assist in achieving and maintaining a competitive edge.

Relevance: 10.00%

Abstract:

Financial prediction has attracted a lot of interest due to the financial implications that the accurate prediction of financial markets can have. A variety of data-driven modelling approaches have been applied, but their performance has produced mixed results. In this study we apply both parametric (neural networks with active neurons) and nonparametric (analog complexing) self-organising modelling methods for the daily prediction of the exchange rate market. We also propose a combined approach in which the parametric and nonparametric self-organising methods are applied sequentially, exploiting the advantages of the individual methods with the aim of improving their performance. The combined method is found to produce promising results and to outperform the individual methods when tested with two exchange rates: the American Dollar and the Deutsche Mark against the British Pound.
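
A hedged sketch of the sequential combination idea follows; it is not the paper's method. A trend fit stands in for the parametric self-organising network, a nearest-neighbour pattern average stands in for analog complexing, and the nonparametric step is applied to the residuals of the parametric one. The data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(500) * 0.01) + 1.5  # synthetic rate series
WIN = 10  # pattern window length

def linear_forecast(history, win=WIN):
    """Parametric step: trend fit on the last window (a crude stand-in
    for the self-organising network with active neurons)."""
    slope, intercept = np.polyfit(np.arange(win), history[-win:], 1)
    return intercept + slope * win

def analog_forecast(series, win=WIN, k=5):
    """Nonparametric step: average what followed the k past windows most
    similar to the latest one (a simplification of analog complexing)."""
    latest = series[-win:]
    past = np.array([series[i:i + win] for i in range(len(series) - win)])
    nearest = np.argsort(np.linalg.norm(past - latest, axis=1))[:k]
    return series[nearest + win].mean()

# Sequential combination: run the parametric model one step ahead over
# the history, then let the analog step forecast its residual.
residuals = np.array([x[i] - linear_forecast(x[:i])
                      for i in range(WIN, len(x))])
prediction = linear_forecast(x) + analog_forecast(residuals)
print(prediction)
```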

Relevance: 10.00%

Abstract:

The development of sensing devices is one of the instrumentation fields that has grown most rapidly in the last decade. Alongside the swift advance of microelectronic sensors, optical fibre sensors are widely investigated because of their advantageous properties over electronic sensors, such as their wavelength multiplexing capability and high sensitivity to temperature, pressure, strain, vibration and acoustic emission. Moreover, optical fibre sensors are more attractive than electronic sensors as they can perform distributed sensing, covering a reasonably large area using a single piece of fibre. Apart from being a responsive element in the sensing field, optical fibre possesses good assets for generating, distributing, processing and transmitting signals in the future broadband information network. These assets include wide bandwidth, high capacity and low loss, which grant mobility and flexibility for wireless access systems. Among these core technologies, fibre optic signal processing and the transmission of optical and radio frequency signals have been the subjects of study in this thesis. Based on the intrinsic properties of single-mode optical fibre, this thesis aims to exploit fibre characteristics such as thermal sensitivity, birefringence, dispersion and nonlinearity in the applications of temperature sensing and radio-over-fibre systems. By exploiting the fibre's thermal sensitivity, a fully distributed temperature sensing system consisting of an apodised chirped fibre Bragg grating has been implemented. The proposed system has proven to be efficient in characterising the grating and providing information on the temperature variation, location and width of the heat source applied to the area under test. To exploit the fibre birefringence, a fibre delay line filter using a single high-birefringence optical fibre structure has been presented. The proposed filter can be reconfigured and programmed by adjusting the input azimuth of the launched light, as well as the strength and direction of the applied coupling, to meet the signal processing requirements of different microwave photonic and optical filtering applications. To exploit the fibre dispersion and nonlinearity, experimental investigations have been carried out to study their joint effect in high-power double-sideband and single-sideband modulated links in the presence of fibre loss. The experimental results have been theoretically verified using an in-house implementation of the split-step Fourier method applied to the generalised nonlinear Schrödinger equation. A further simulation study of the inter-modulation distortion in two-tone signal transmission has also been presented, showing the effect of the nonlinearity of one channel on the other. In addition to the experimental work, numerical simulations have been carried out for all the proposed systems to ensure that all the aspects concerned are comprehensively investigated.
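
The split-step Fourier method mentioned above is standard enough to sketch. The following minimal implementation propagates a sech pulse under the scalar nonlinear Schrödinger equation with loss, alternating dispersive half-steps in the frequency domain with a nonlinear step in the time domain; all parameter values are illustrative rather than taken from the thesis:

```python
import numpy as np

# Symmetrised split-step Fourier for dA/dz = -i(beta2/2) d2A/dt2
#   + i*gamma*|A|^2*A - (alpha/2)*A. Illustrative fibre parameters.
beta2, gamma, alpha = -21.7e-27, 1.3e-3, 4.6e-5  # s^2/m, 1/(W*m), 1/m

T, N = 100e-12, 1024                     # time window, grid points
dt = T / N
t = (np.arange(N) - N // 2) * dt
w = 2 * np.pi * np.fft.fftfreq(N, dt)    # angular frequency grid

A = np.sqrt(0.5) / np.cosh(t / 5e-12)    # 0.5 W peak, 5 ps sech pulse
dz, steps = 10.0, 5000                   # 50 km fibre in 10 m steps

half_disp = np.exp(0.5j * beta2 * w**2 * dz / 2)  # dispersion over dz/2
for _ in range(steps):
    A = np.fft.ifft(half_disp * np.fft.fft(A))             # half dispersion
    A *= np.exp((1j * gamma * np.abs(A)**2 - alpha / 2) * dz)  # nonlinearity + loss
    A = np.fft.ifft(half_disp * np.fft.fft(A))             # half dispersion

print("output peak power (W):", np.abs(A).max() ** 2)
```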

Relevance: 10.00%

Abstract:

There is an increasing emphasis on the use of software to control safety critical plants for a wide area of applications. The importance of ensuring the correct operation of such potentially hazardous systems points to an emphasis on the verification of the system relative to a suitably secure specification. However, the process of verification is often made more complex by the concurrency and real-time considerations which are inherent in many applications. A response to this is the use of formal methods for the specification and verification of safety critical control systems. These provide a mathematical representation of a system which permits reasoning about its properties. This thesis investigates the use of the formal method Communicating Sequential Processes (CSP) for the verification of a safety critical control application. CSP is a discrete event based process algebra which has a compositional axiomatic semantics that supports verification by formal proof. The application is an industrial case study which concerns the concurrent control of a real-time high speed mechanism. It is seen from the case study that the axiomatic verification method employed is complex. It requires the user to have a relatively comprehensive understanding of the nature of the proof system and the application. By making a series of observations the thesis notes that CSP possesses the scope to support a more procedural approach to verification in the form of testing. This thesis investigates the technique of testing and proposes the method of Ideal Test Sets. By exploiting the underlying structure of the CSP semantic model it is shown that for certain processes and specifications the obligation of verification can be reduced to that of testing the specification over a finite subset of the behaviours of the process.
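
The reduction from proof obligation to finite testing can be illustrated with a toy sketch (in Python rather than CSP, and not the thesis' Ideal Test Sets construction): enumerate all traces of a process up to a bound and check each against a trace specification. The process, bound and specification below are all invented for illustration:

```python
# Toy verification-by-testing over a finite subset of behaviours. The
# real method derives the finite subset from the structure of the CSP
# semantic model; here we simply bound the trace length.

def traces(transitions, state, depth):
    """All traces of length <= depth of a process given as a labelled
    transition relation {state: [(event, next_state), ...]}."""
    result = [[]]
    if depth == 0:
        return result
    for event, nxt in transitions.get(state, []):
        result += [[event] + t for t in traces(transitions, nxt, depth - 1)]
    return result

# A small mutual-exclusion style process: acquire must precede release.
P = {0: [("acquire", 1)], 1: [("work", 1), ("release", 0)]}

def spec(trace):
    """Trace specification: no prefix has more releases than acquires."""
    held = 0
    for e in trace:
        held += (e == "acquire") - (e == "release")
        if held < 0:
            return False
    return True

assert all(spec(t) for t in traces(P, 0, 6))
print("all traces up to length 6 satisfy the specification")
```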

Relevance: 10.00%

Abstract:

The Fibre Distributed Data Interface (FDDI) represents the new generation of local area networks (LANs). These high speed LANs are capable of supporting up to 500 users over a 100 km distance. User traffic is expected to be as diverse as file transfers, packet voice and video. As the proliferation of FDDI LANs continues, the need to interconnect these LANs arises. FDDI LAN interconnection can be achieved in a variety of different ways. Some of the most commonly used today are public data networks, dial-up lines and private circuits. For applications that can potentially generate large quantities of traffic, such as an FDDI LAN, it is cost effective to use a private circuit leased from the public carrier. In order to send traffic from one LAN to another across the leased line, a routing algorithm is required. Much research has been done on the Bellman-Ford algorithm and many implementations of it exist in computer networks. However, due to its instability and problems with routing table loops, it is an unsatisfactory algorithm for interconnected FDDI LANs. A new algorithm, termed ISIS, which is being standardized by the ISO, provides a far better solution. ISIS will be implemented in many manufacturers' routing devices. In order to make the work as practical as possible, this algorithm is used as the basis for all the new algorithms presented. The ISIS algorithm can be improved by exploiting information that is dropped by that algorithm during the calculation process. A new algorithm, called Down Stream Path Splits (DSPS), uses this information and requires only minor modification to some of the ISIS routing procedures. DSPS provides higher network performance with very little additional processing and storage requirements. A second algorithm, also based on the ISIS algorithm, generates a massive increase in network performance. This is achieved by selecting alternative paths through the network in times of heavy congestion. This algorithm may select the alternative path at either the originating node or any node along the path. It requires more processing and memory storage than DSPS, but generates a higher network power. The final algorithm combines the DSPS algorithm with the alternative path algorithm. This is the most flexible and powerful of the algorithms developed. However, it is somewhat complex and requires a fairly large storage area at each node. The performance of the new routing algorithms is tested in a comprehensive model of interconnected LANs. This model incorporates the protocol layers from transport down to physical, and generates random topologies for routing algorithm performance comparisons. Using this model it is possible to determine which algorithm provides the best performance without introducing significant complexity and storage requirements.
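
The abstract does not spell out DSPS, but the general idea of exploiting information dropped during the route calculation can be sketched. Assuming a Dijkstra-style computation (as used in link-state protocols such as ISIS), the hypothetical sketch below records the equal-cost predecessors that a plain run would discard, yielding alternative downstream paths at no extra asymptotic cost; it is an illustrative reading, not the thesis' algorithm:

```python
import heapq
from collections import defaultdict

def shortest_paths_with_splits(graph, source):
    """Dijkstra variant: instead of keeping a single predecessor per
    node, retain every equal-cost predecessor (the 'split' information
    a plain shortest-path run would drop)."""
    dist = defaultdict(lambda: float("inf"))
    preds = defaultdict(set)
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v], preds[v] = nd, {u}
                heapq.heappush(heap, (nd, v))
            elif nd == dist[v]:
                preds[v].add(u)  # equal-cost alternative, kept not dropped
    return dist, preds

# Two equal-cost routes from A to D: A-B-D and A-C-D.
net = {"A": [("B", 1), ("C", 1)], "B": [("D", 1)],
       "C": [("D", 1)], "D": []}
dist, preds = shortest_paths_with_splits(net, "A")
print(dist["D"], preds["D"])  # 2 {'B', 'C'}
```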