18 results for Elementary shortest path with resource constraints
in Aston University Research Archive
Abstract:
In machine learning, the Gaussian process latent variable model (GP-LVM) has been extensively applied to unsupervised dimensionality reduction. When some supervised information, e.g., pairwise constraints or labels of the data, is available, the traditional GP-LVM cannot directly utilize such information to improve the performance of dimensionality reduction. In this case, it is necessary to modify the traditional GP-LVM to make it capable of handling supervised or semi-supervised learning tasks. For this purpose, we propose a new semi-supervised GP-LVM framework under pairwise constraints. By transferring the pairwise constraints from the observed space to the latent space, constrained prior information on the latent variables can be obtained. Under this constrained prior, the latent variables are optimized by the maximum a posteriori (MAP) algorithm. The effectiveness of the proposed algorithm is demonstrated with experiments on a variety of data sets. © 2010 Elsevier B.V.
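A minimal sketch of the MAP step described above, assuming an RBF-kernel GP-LVM and a simple quadratic/hinge penalty as the constraint-derived prior (the penalty form, kernel parameters, and toy data are illustrative assumptions, not the authors' exact formulation):

```python
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(X, gamma=1.0, noise=1e-3):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq) + noise * np.eye(len(X))

def neg_log_posterior(x_flat, Y, must, cannot, q=2, lam=1.0):
    N, D = Y.shape
    X = x_flat.reshape(N, q)
    K = rbf_kernel(X)
    Ki = np.linalg.inv(K)
    _, logdet = np.linalg.slogdet(K)
    # GP-LVM log-likelihood (up to constants): -D/2 log|K| - 1/2 tr(K^-1 Y Y^T)
    ll = -0.5 * D * logdet - 0.5 * np.trace(Ki @ Y @ Y.T)
    # Constraint-derived prior: pull must-link pairs together,
    # push cannot-link pairs apart (hinge with unit margin).
    pen = sum(((X[i] - X[j]) ** 2).sum() for i, j in must)
    pen += sum(max(0.0, 1.0 - ((X[i] - X[j]) ** 2).sum()) for i, j in cannot)
    return -(ll - lam * pen)

rng = np.random.default_rng(0)
Y = rng.normal(size=(20, 5))                  # toy observations
must, cannot = [(0, 1), (2, 3)], [(0, 4)]     # toy pairwise constraints
x0 = rng.normal(scale=0.1, size=20 * 2)
res = minimize(neg_log_posterior, x0, args=(Y, must, cannot), method="L-BFGS-B")
X_map = res.x.reshape(20, 2)                  # MAP latent coordinates
```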
Abstract:
As microblog services such as Twitter become a fast and convenient communication channel, the identification of trendy topics in microblog services has great academic and business value. However, detecting trendy topics is very challenging due to the huge number of users and short-text posts in microblog diffusion networks. In this paper we introduce a trendy-topic detection system that operates under computation and communication resource constraints. In stark contrast to retrieving and processing all microblog content, we develop the idea of selecting a small set of microblog users and processing only their posts, achieving acceptable overall trendy-topic coverage without exceeding the resource budget for detection. We formulate the selection of this subset of users as mixed-integer optimization problems and develop heuristic algorithms to compute approximate solutions. The proposed system is evaluated with real-time test data retrieved from Sina Weibo, the dominant microblog service provider in China. We show that by monitoring 500 out of 1.6 million microblog users and tracking their microposts (about 15,000 daily), our system detects nearly 65% of trendy topics, on average 5 hours before they appear in Sina Weibo's official trends.
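A toy greedy sketch of the subset-selection idea, assuming known user-to-topic coverage and a simple additive cost model (the data structures and the marginal-coverage-per-cost rule are illustrative stand-ins; the paper solves a mixed-integer formulation with its own heuristics):

```python
def select_users(user_topics, user_cost, budget):
    """user_topics: dict user -> set of topics their posts touch
       user_cost:   dict user -> processing/communication cost
       budget:      total resource budget for detection"""
    chosen, covered, spent = [], set(), 0.0
    candidates = set(user_topics)
    while candidates:
        # Pick the user with best marginal coverage per unit cost
        # (classic greedy rule for budgeted coverage problems).
        best = max(candidates,
                   key=lambda u: len(user_topics[u] - covered) / user_cost[u])
        if spent + user_cost[best] > budget or not (user_topics[best] - covered):
            break
        chosen.append(best)
        covered |= user_topics[best]
        spent += user_cost[best]
        candidates.remove(best)
    return chosen, covered

users = {"u1": {"t1", "t2"}, "u2": {"t2", "t3", "t4"}, "u3": {"t5"}}
costs = {"u1": 1.0, "u2": 1.5, "u3": 4.0}
print(select_users(users, costs, budget=3.0))
```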
Abstract:
The re-entrant flow shop scheduling problem (RFSP) is NP-hard and has attracted the attention of both researchers and industry. Current approaches attempt to minimize the makespan of the RFSP without considering the interdependency between resource constraints and re-entrant probability. This paper proposes a multi-level genetic algorithm (GA) that encodes the correlated re-entrant probability and production mode in a multi-level chromosome. A repair operator is incorporated into the GA to revise infeasible solutions by resolving resource conflicts. With the objective of minimizing the makespan, ANOVA is used to fine-tune the GA's parameter settings. Experiments show that the proposed approach finds near-optimal schedules more effectively than a simulated annealing algorithm for both small and large problem instances. © 2013 Published by Elsevier Ltd.
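A compact, mutation-only GA skeleton illustrating the multi-level encoding plus repair idea (a job sequence on one level, a re-entrant flag per job on another). The fitness function, capacity rule, and processing times are toy assumptions, not the paper's makespan model:

```python
import random

JOBS, MACHINES = 6, 2

def random_chrom():
    order = random.sample(range(JOBS), JOBS)               # level 1: job sequence
    reentry = [random.randint(0, 1) for _ in range(JOBS)]  # level 2: re-entrant pass?
    return order, reentry

def repair(chrom, capacity=3):
    order, reentry = chrom
    # Resolve resource conflicts: cap re-entrant passes so the shared
    # resource is never over-subscribed (a toy stand-in conflict rule).
    while sum(reentry) > capacity:
        reentry[reentry.index(1)] = 0
    return order, reentry

def makespan(chrom, proc=(3, 2, 4, 1, 5, 2)):
    order, reentry = chrom
    t = [0] * MACHINES
    for j in order:
        m = min(range(MACHINES), key=lambda i: t[i])  # earliest-free machine
        t[m] += proc[j] * (2 if reentry[j] else 1)    # re-entry doubles work
    return max(t)

pop = [repair(random_chrom()) for _ in range(30)]
for _ in range(200):
    pop.sort(key=makespan)
    order, reentry = random.choice(pop[:10])          # parent from elite
    order, reentry = order[:], reentry[:]
    i, j = random.sample(range(JOBS), 2)
    order[i], order[j] = order[j], order[i]           # swap mutation
    reentry[random.randrange(JOBS)] ^= 1              # flip one re-entry bit
    pop[-1] = repair((order, reentry))                # repair before insertion
print("best makespan:", makespan(min(pop, key=makespan)))
```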
Abstract:
Many local authorities (LAs) are currently working to reduce both greenhouse gas emissions and the amount of municipal solid waste (MSW) sent to landfill. The recovery of energy from waste (EfW) can assist in meeting both of these objectives. The choice of an EfW policy combines spatial and non-spatial decisions which may be handled using Multi-Criteria Analysis (MCA) and Geographic Information Systems (GIS). This paper addresses the impact of transporting MSW to EfW facilities, analysed as part of a larger decision support system designed to make an overall policy assessment of centralised (large-scale) and distributed (local-scale) approaches. Custom-written ArcMap extensions are used to compare centralised versus distributed approaches, using shortest-path routing based on expected road speed. Results are intersected with 1-kilometre grids and census geographies to produce meaningful maps of cumulative impact. Case studies are described for two counties in the United Kingdom (UK): Cornwall and Warwickshire. For both case study areas, centralised scenarios generate more traffic, fuel costs and emitted carbon per tonne of MSW processed.
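A minimal sketch of the routing step: shortest paths over a road graph weighted by expected travel time (length divided by expected speed). The node names, lengths, and speeds are illustrative assumptions; the paper implements this inside ArcMap rather than networkx:

```python
import networkx as nx

G = nx.Graph()
# (origin, destination, length_km, expected_speed_kmh)
roads = [("depot", "a", 5.0, 50.0), ("a", "efw_site", 12.0, 80.0),
         ("depot", "b", 4.0, 30.0), ("b", "efw_site", 6.0, 40.0)]
for u, v, length, speed in roads:
    G.add_edge(u, v, time_h=length / speed)   # expected travel time as weight

path = nx.shortest_path(G, "depot", "efw_site", weight="time_h")
cost = nx.shortest_path_length(G, "depot", "efw_site", weight="time_h")
print(path, f"{cost:.2f} h")  # route MSW haulage along minimum expected time
```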
Abstract:
This paper investigates a cross-layer design approach for minimizing energy consumption and maximizing the network lifetime (NL) of a multiple-source and single-sink (MSSS) WSN with energy constraints. The optimization problem can be formulated as a mixed-integer convex optimization problem with the adoption of time division multiple access (TDMA) in the medium access control (MAC) layer, and it becomes convex after relaxing the integer constraint on time slots. The impacts of data rate, link access and routing are jointly taken into account in the formulation. Both linear and planar network topologies are considered for NL maximization (NLM). For the linear MSSS and planar single-source and single-sink (SSSS) topologies, we use the Karush-Kuhn-Tucker (KKT) optimality conditions to derive analytical expressions for the optimal NL when all nodes are exhausted simultaneously. The planar MSSS topology is more complicated, and a decomposition and combination (D&C) approach is proposed to compute suboptimal solutions. An analytical expression for the suboptimal NL is derived for a small-scale planar network; to deal with larger-scale planar networks, an iterative algorithm is proposed for the D&C approach. Numerical results show that the upper bounds on the network lifetime obtained by our optimization models are tight. Important insights into the NL and the benefits of cross-layer design for WSN NLM are obtained.
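A small sketch of a lifetime-maximization program after relaxation, for a linear chain of sources forwarding to a sink. The energy figures, rates, and chain topology are illustrative assumptions; the paper's full model also optimizes TDMA slot allocation and data rates jointly:

```python
import cvxpy as cp
import numpy as np

n = 4                                 # source nodes 0..3; node 4 is the sink
r = np.array([1.0, 1.0, 1.0, 1.0])    # bits/s generated at each source
E = np.full(n, 100.0)                 # initial battery energy (J)
e_tx, e_rx = 2e-3, 1e-3               # J per bit transmitted / received

f = cp.Variable(n, nonneg=True)       # total bits node i forwards to node i+1
T = cp.Variable(nonneg=True)          # network lifetime (s)
cons = []
for i in range(n):
    inflow = f[i - 1] if i > 0 else 0
    cons.append(f[i] == inflow + r[i] * T)             # flow conservation
    cons.append(e_tx * f[i] + e_rx * inflow <= E[i])   # node energy budget
prob = cp.Problem(cp.Maximize(T), cons)
prob.solve()
print("optimal lifetime:", T.value)   # all-node-exhaustion bound for the chain
```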
Abstract:
In this paper, we address this policy issue using a stylised methodology that relies on estimates of the cash-flow sensitivity of firms' investment, as well as a relatively new methodology that enables us to generate a (0, 1) bounded measure of firms' investment efficiency, i.e., the efficiency with which firms convert their sales into investment, after controlling for unobserved year- and industry-specific effects. Higher investment efficiency is associated with a lower financing constraint. Our results indicate that there is considerable heterogeneity in investment efficiency across firms within a given year, the range being 0.57-0.82. However, the average investment efficiency measure is similar across years, regions and NACE 2-digit industries. We also do not find discernible patterns in the relationship between investment efficiency and firm size, either before or during the financial crisis. The results suggest that while some firms are clearly less efficient at translating their performance into investment, broad policies targeting firms of a certain size, or those within a particular industry or region, may not successfully address the problem of financing constraints in the United Kingdom. The targeting of firms with financing constraints may have to be considerably more refined, looking at factors that are not easily observable, such as firms' credit history/events and organisational capacity.
Abstract:
In this paper, we investigate the use of manifold learning techniques to enhance the separation properties of standard graph kernels. The idea stems from the observation that when we perform multidimensional scaling on the distance matrices extracted from the kernels, the resulting data tend to be clustered along a curve that wraps around the embedding space, a behavior that suggests that long-range distances are not estimated accurately, resulting in an increased curvature of the embedding space. Hence, we propose to use a number of manifold learning techniques to compute a low-dimensional embedding of the graphs in an attempt to unfold the embedding manifold and increase the class separation. We perform an extensive experimental evaluation on a number of standard graph datasets using the shortest-path (Borgwardt and Kriegel, 2005), graphlet (Shervashidze et al., 2009), random walk (Kashima et al., 2003) and Weisfeiler-Lehman (Shervashidze et al., 2011) kernels. We observe the most significant improvement in the case of the graphlet kernel, which fits with the observation that neglecting the locational information of the substructures leads to a stronger curvature of the embedding manifold. On the other hand, the Weisfeiler-Lehman kernel partially mitigates the locality problem by using the node label information, and thus does not clearly benefit from the manifold learning. Interestingly, our experiments also show that the unfolding of the space seems to reduce the performance gap between the examined kernels.
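A short sketch of the unfolding pipeline: convert a graph-kernel Gram matrix into pairwise distances, then re-embed with a manifold learner. Metric MDS on precomputed dissimilarities stands in here for the various manifold learning techniques the paper evaluates, and the random Gram matrix is a placeholder for a real graph kernel:

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
B = rng.normal(size=(30, 10))
K = B @ B.T                           # placeholder PSD Gram matrix (graph kernel)

# Kernel-induced distance: d_ij = sqrt(k_ii + k_jj - 2 k_ij)
d = np.sqrt(np.maximum(
    np.diag(K)[:, None] + np.diag(K)[None, :] - 2 * K, 0.0))
embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(d)
# `embedding` would then feed a standard classifier (e.g. an SVM)
# to measure whether class separation improved after unfolding.
```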
Abstract:
With the eye-catching advances in sensing technologies, smart water networks have been attracting immense research interest in recent years. One of the most important tasks in smart water network management is the reduction of water loss (such as leaks and bursts in a pipe network). In this paper, we propose an efficient scheme to localize water loss events based on water network topology. The state-of-the-art approach to this problem utilizes only limited topology information of the water network, namely a single shortest path between two sensor locations; consequently, the accuracy of localizing water loss events remains unsatisfactory. To resolve this problem, our scheme consists of two key ingredients. First, we design a novel graph topology-based measure which can recursively quantify the "average distances" for all pairs of sensor locations simultaneously in a water network. This measure substantially improves the accuracy of our positioning strategy by capturing the entire water network topology between every pair of sensor locations, without any sacrifice of computational efficiency. Second, we devise an efficient search algorithm that combines the "average distances" with the differences in the arrival times of the pressure variations detected at sensor locations. Experimental evaluations on a real-world test bed (WaterWiSe@SG) demonstrate that our proposed scheme localizes water loss events more accurately than the best-known competitor.
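A hedged sketch of the localization idea: score each candidate node by how well a topology-wide "average distance" explains the observed difference in pressure-wave arrival times at two sensors. Resistance distance is used below as a stand-in for the paper's recursive all-paths measure, and the grid network, wave speed, and measurement are toy assumptions:

```python
import networkx as nx

G = nx.grid_2d_graph(5, 5)            # toy pipe network
s1, s2 = (0, 0), (4, 4)               # sensor locations
wave_speed = 1.0                      # pipe lengths per unit time (assumed)
observed_dt = 1.8                     # measured arrival-time difference

best, best_err = None, float("inf")
for node in G.nodes:
    if node in (s1, s2):
        continue
    # "Average distance" stand-in: resistance distance accounts for all
    # paths between two nodes, not just the single shortest path.
    d1 = nx.resistance_distance(G, node, s1)
    d2 = nx.resistance_distance(G, node, s2)
    predicted_dt = (d1 - d2) / wave_speed
    err = abs(predicted_dt - observed_dt)
    if err < best_err:
        best, best_err = node, err
print("estimated leak location:", best)
```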
Abstract:
Models of visual motion processing that introduce priors for low speed through Bayesian computations are sometimes treated with scepticism by empirical researchers because of the convenient way in which the parameters of the Bayesian priors have been chosen. Using the effects of motion adaptation on motion perception as an illustration, we show that the Bayesian prior, far from being merely convenient, may be estimated on-line and therefore represents a useful tool by which visual motion processes may be optimized to extract the motion signals commonly encountered in everyday experience. The prescription for optimization, when combined with system constraints on the transmission of visual information, may lead to an exaggeration of perceptual bias through the process of adaptation. Our approach extends the Bayesian model of visual motion proposed by Weiss et al. [Weiss, Y., Simoncelli, E. P., & Adelson, E. H. (2002). Motion illusions as optimal percepts. Nature Neuroscience, 5, 598-604], suggesting that perceptual bias reflects a compromise taken by a rational system in the face of uncertain signals and system constraints. © 2007.
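A worked numerical sketch of the low-speed-prior computation: with a Gaussian likelihood around the measured speed and a zero-mean Gaussian prior, the posterior-mean speed estimate shrinks toward zero. Modeling adaptation as re-estimating the prior width from recent stimuli is an illustrative assumption standing in for the paper's on-line estimation:

```python
import numpy as np

def map_speed(v_measured, sigma_like, sigma_prior):
    # Posterior mean of zero-mean Gaussian prior x Gaussian likelihood:
    # shrinkage weight = sigma_prior^2 / (sigma_prior^2 + sigma_like^2)
    w = sigma_prior**2 / (sigma_prior**2 + sigma_like**2)
    return w * v_measured

sigma_like = 1.0
print(map_speed(5.0, sigma_like, sigma_prior=2.0))  # 4.0: slight slow bias

# On-line prior estimation: after exposure to fast motion the fitted prior
# broadens, so the same measurement is shrunk less; bias can then be
# exaggerated when system constraints also distort the likelihood.
recent_speeds = np.array([8.0, 9.5, 7.2, 10.1])
print(map_speed(5.0, sigma_like, sigma_prior=recent_speeds.std(ddof=1)))
```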
Abstract:
Correlations between absenteeism and work attitudes such as job satisfaction have often been found to be disappointingly weak. As prior work reveals, this might be due to ignoring interactive effects of attitudes with different attitude targets (e.g. job involvement and organizational commitment). Drawing on basic principles in personality research and insights about the situational variability of job satisfaction judgments, we proposed that similar interactions should also be present for attitudes with the same target. More specifically, it was predicted that job involvement affects absenteeism more when job satisfaction is low, as this indicates a situation with weak constraints. Both attitudes were assessed in a sample of 436 employees working in a large civil service organization, and two indexes of absence data (frequency and time lost) were drawn from personnel records covering a 12-month period following the survey. Whereas simple correlations were not significant, a moderated regression documented that the hypothesized interaction was significant for both indicators of absence behaviour. As a range of controls (e.g. age, gender, job level) were accounted for, these findings lend strong support to the importance of this new, specific form of attitude interaction. Thus, we encourage researchers to consider not only interactions of attitudes with a different focus (e.g. job vs. organization) but also interactions between job involvement and job satisfaction, as this will yield new insights into the complex function of attitudes in influencing absenteeism. © 2007 The British Psychological Society.
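A minimal sketch of the moderated regression: absence regressed on job involvement, job satisfaction, their interaction, and controls. The column names and simulated data are assumptions for illustration only:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 436
df = pd.DataFrame({
    "involvement": rng.normal(size=n),
    "satisfaction": rng.normal(size=n),
    "age": rng.integers(20, 60, size=n),
    "absence_freq": rng.poisson(2, size=n).astype(float),
})
# '*' expands to both main effects plus the involvement x satisfaction
# interaction; a significant interaction term is the hypothesized effect.
model = smf.ols("absence_freq ~ involvement * satisfaction + age", data=df).fit()
print(model.params)
```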
Abstract:
Based on a corpus of English, German, and Polish spoken academic discourse, this article analyzes the distribution and function of humor in academic research presentations. The corpus is the result of a European research cooperation project and consists of 300,000 tokens of spoken academic language, focusing on the genres of research presentation, student presentation, and oral examination. The article investigates differences between the German and English research cultures as expressed in the genre of specialist research presentations, and the role of humor as a pragmatic device in their respective contexts. The data are analyzed according to the paradigms of corpus-assisted discourse studies (CADS). The findings show that humor is used in research presentations as an expression of discourse reflexivity. They also reveal a considerable difference in the quantitative distribution of humor in research presentations depending on the educational, linguistic, and cultural background of the presenters, thus confirming the notion of different research cultures. Such research cultures nurture distinct attitudes towards the genres of academic language: whereas researchers in one of the cultures identified conform to the constraints and structures of the genre, those working in another attempt to subvert them, for example through the application of humor. © 2012 Elsevier B.V.
Abstract:
We investigate the impact of market-supporting institutions on business strategies by analyzing the entry strategies of foreign investors entering emerging economies. We apply and advance the institution-based view of strategy by integrating it with resource-based considerations. In particular, we show how resource-seeking strategies are pursued using different entry modes in different institutional contexts. Alternative modes of entry (greenfield, acquisition, and joint venture (JV)) allow firms to overcome different kinds of market inefficiencies related both to characteristics of the resources and to the institutional context. In a weaker institutional framework, JVs are used to access many resources, but in a stronger institutional framework, JVs become less important while acquisitions can play a more important role in accessing resources that are intangible and organizationally embedded. Combining survey and archival data from four emerging economies (India, Vietnam, South Africa, and Egypt), we provide empirical support for our hypotheses.
Abstract:
The deployment of bioenergy technologies is a key part of UK and European renewable energy policy. A key barrier to their deployment is the management of biomass supply chains, including the evaluation of suppliers and the contracting of biomass. In the undeveloped biomass-for-energy market, buyers of biomass face three major challenges during the development of new bioenergy projects: what characteristics a given supply of biomass will have; how to evaluate biomass suppliers; and which suppliers to contract with in order to provide a portfolio of suppliers that best satisfies the needs of the project and its stakeholder group whilst also satisfying crisp and non-crisp technological constraints. The problem description is taken from the situation faced by the industrial partner in this research, Express Energy Ltd. This research tackles these three areas separately and then combines them into a decision framework, BioSS, to assist biomass buyers with the strategic sourcing of biomass.

The BioSS framework consists of three modes which mirror the development stages of bioenergy projects: BioSS.2 for early-stage development, BioSS.3 for the financial-close stage, and BioSS.Op for the operational phase of the project. BioSS is formed of a fuels library, a supplier evaluation module and an order allocation module; a Monte-Carlo analysis module is also included to evaluate the accuracy of the recommended portfolios. In each mode, BioSS can recommend which suppliers should be contracted with and how much material should be purchased from each. The recommended blend should have chemical characteristics within the technological constraints of the conversion technology and also best satisfy the stakeholder group.

The fuels library is drawn from a wide variety of sources and contains around 100 unique descriptions of potential biomass sources that a developer may encounter. It takes a broad data-collection approach, with the aim of allowing biomass characteristics to be estimated without expensive and time-consuming testing. The supplier evaluation part of BioSS uses a QFD-AHP method to assign importance weightings to 27 evaluating criteria. The criteria have been compiled from interviews with stakeholders and from policy and position documents, and the weightings have been assigned using a mixture of workshops and expert interviews. The weighted importance scores allow potential suppliers to better tailor their business offering and provide a robust framework for decision makers to better understand the requirements of bioenergy project stakeholder groups.

The order allocation part of BioSS uses a chance-constrained programming approach to allocate orders of material among potential suppliers based on their chemical characteristics and preference scores. The optimisation program finds the portfolio of orders that performs best in the eyes of the stakeholder group whilst also complying with technological constraints. A technological constraint can be breached, if the decision maker permits, by setting it as a chance constraint. This allows a wider range of biomass sources to be procured and a greater overall performance to be realised than is possible with crisp constraints or deterministic programming approaches.

BioSS is demonstrated against two scenarios faced by UK bioenergy developers: the first a large-scale combustion power project, the second a small-scale gasification project. BioSS is applied in each mode for both scenarios and is shown to adapt the solution to the stakeholder group's priorities and the constraints of the different conversion technologies whilst finding a globally optimal portfolio for stakeholder satisfaction.
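A hedged sketch of chance-constrained order allocation: maximize a preference-weighted blend subject to a fuel-quality constraint that may be breached only with small probability. Under a normality assumption, the chance constraint has the standard deterministic equivalent mean + z_alpha * std <= limit. All supplier data, the single-quality model, and the weights are illustrative assumptions, not BioSS itself:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

pref = np.array([0.8, 0.6, 0.9])        # supplier preference scores (QFD-AHP)
mu = np.array([0.3, 0.5, 0.7])          # mean contaminant content (% by mass)
sigma = np.array([0.05, 0.10, 0.20])    # std dev of contaminant content
demand, limit, alpha = 100.0, 0.5, 0.05 # tonnes, % limit, breach probability
z = norm.ppf(1 - alpha)

def neg_score(x):
    return -pref @ x                    # maximize preference-weighted orders

cons = [
    {"type": "eq", "fun": lambda x: x.sum() - demand},
    # P(blend contaminant > limit) <= alpha, deterministic equivalent:
    {"type": "ineq", "fun": lambda x:
        limit * demand - (mu @ x + z * np.sqrt((sigma**2) @ (x**2)))},
]
res = minimize(neg_score, np.full(3, demand / 3), constraints=cons,
               bounds=[(0, demand)] * 3, method="SLSQP")
print("order allocation (t):", res.x.round(1))
```

Setting alpha to 0 recovers a crisp constraint; loosening it admits cheaper or better-preferred but more variable suppliers, which is the trade-off the abstract describes.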
Abstract:
Rework strategies involving different checking points and rework times can be applied to a reconfigurable manufacturing system (RMS) under certain constraints, and an effective rework strategy can significantly improve the mission reliability of the manufacturing process. The mission reliability of a process is a measure of the production capability of an RMS, serving as an integrated performance indicator of the production process under specified technical constraints, including time, cost and quality. To quantitatively characterize the mission reliability and basic reliability of an RMS under different rework strategies, a rework model of the RMS was established based on logistic regression. First, the functional relationship between the capability and workload of the manufacturing process was studied by statistically analyzing a large volume of historical data obtained from actual machining processes. Second, the output, mission reliability and unit cost on different rework paths were calculated and taken as decision variables, based on different input quantities and the rework model described above. Third, optimal rework strategies for different input quantities were determined by calculating weighted decision values and analyzing the advantages and disadvantages of each rework strategy. Finally, a case application is presented to demonstrate the effectiveness of the proposed method.
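A compact sketch of the two modeling steps: fit a logistic model of first-pass yield against workload from historical records, then score candidate rework strategies by a weighted combination of output, mission reliability, and unit cost. The simulated data, cost model, and weights are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Step 1: historical (workload, pass/fail) records from machining processes.
rng = np.random.default_rng(0)
workload = rng.uniform(10, 100, size=500).reshape(-1, 1)
p_true = 1 / (1 + np.exp(0.05 * (workload[:, 0] - 60)))   # yield drops with load
passed = (rng.random(500) < p_true).astype(int)
clf = LogisticRegression().fit(workload, passed)
p_pass = lambda w: clf.predict_proba([[w]])[0, 1]

# Step 2: evaluate rework strategies (number of allowed rework passes).
def evaluate(w, reworks, cost_per_attempt=1.0, weights=(0.5, 0.3, 0.2)):
    p = p_pass(w)
    reliability = 1 - (1 - p) ** (1 + reworks)        # pass within allowed tries
    expected_attempts = sum((1 - p) ** k for k in range(1 + reworks))
    unit_cost = cost_per_attempt * expected_attempts
    w_out, w_rel, w_cost = weights
    return w_out * reliability + w_rel * reliability - w_cost * unit_cost

best = max(range(4), key=lambda k: evaluate(70.0, k))
print("best number of rework passes:", best)
```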
Abstract:
The measurement and variation control of geometrical Key Characteristics (KCs), such as the flatness and gap of joint faces and the coaxiality of cabin sections, is a crucial issue in large-component assembly in the aerospace industry. Aiming to control geometrical KCs and attain the best-fit posture, an optimization algorithm based on KCs for large-component assembly is proposed. This approach treats posture best-fitting, a key activity in Measurement Aided Assembly (MAA), as a two-phase optimization problem. In the first phase, the global measurement coordinate systems of the digital model and the shop floor are unified with minimum error based on singular value decomposition, and the current posture of the components being assembled is optimally solved in terms of the minimum variation of all reference points. In the second phase, the best posture of the movable component is optimally determined by minimizing the variation of multiple KCs, subject to the constraint that every KC conforms to its product specification. The optimization models and process procedures for these two phases, based on Particle Swarm Optimization (PSO), are proposed. In each model, every posture to be calculated is modeled as a 6-dimensional particle (three translation and three rotation parameters). Finally, an example in which two cabin sections of a satellite mainframe structure are assembled is used to verify the effectiveness of the proposed approach, models and algorithms. The experimental results show that the approach is promising and will provide a foundation for further study and application. © 2013 The Authors.
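A minimal numpy sketch of the first phase: unify the digital-model and shop-floor frames with the SVD-based rigid fit (Kabsch) that minimizes reference-point variation. The point sets are toy data, and the second phase (PSO over 6-DOF postures under KC constraints) is omitted:

```python
import numpy as np

def best_fit_transform(P, Q):
    """Rotation R and translation t minimizing ||R P + t - Q||."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # forbid reflection
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

rng = np.random.default_rng(0)
P = rng.normal(size=(8, 3))                   # reference points, model frame
true_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
true_R *= np.sign(np.linalg.det(true_R))      # ensure a proper rotation
Q = P @ true_R.T + np.array([1.0, -2.0, 0.5]) \
    + rng.normal(scale=1e-3, size=(8, 3))     # shop-floor measurements + noise

R, t = best_fit_transform(P, Q)
residual = np.linalg.norm(P @ R.T + t - Q, axis=1).max()
print("max reference-point deviation:", residual)
```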