41 results for Semi-infinite linear programming


Relevance: 100.00%

Abstract:

Here, the mixed convection boundary-layer flow of a viscous fluid along a heated, vertical, semi-infinite plate is investigated in a non-absorbing medium. The relationship between convection and thermal radiation is established via a boundary condition of the second kind on the thermally radiating vertical surface. The governing boundary-layer equations are transformed into dimensionless parabolic partial differential equations with the help of appropriate transformations, and the resulting system is solved numerically by a straightforward finite difference method together with Gaussian elimination. It is worth noting that the Prandtl number, Pr, is taken to be small (<< 1), which is appropriate for liquid metals. Moreover, the numerical results are presented graphically, showing the effects of the important physical parameters, namely the modified Richardson number (or mixed convection parameter), Ri*, and the surface radiation parameter, R, on the local skin friction and local Nusselt number coefficients.
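
For readers unfamiliar with the numerical scheme mentioned above, the following is a minimal sketch of implicit finite-difference marching combined with Gaussian elimination (here, the Thomas algorithm for the resulting tridiagonal systems). A plain 1-D diffusion step stands in for the actual dimensionless parabolic system, whose equations the abstract does not give; the grid sizes and boundary values are illustrative assumptions.

```python
# Minimal sketch: implicit finite-difference marching with Gaussian elimination
# (Thomas algorithm) applied to a 1-D diffusion step, used as a stand-in for the
# dimensionless parabolic boundary-layer system described in the abstract.
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-, main and super-diagonals a, b, c."""
    n = len(b)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

ny, nx_steps = 101, 200
dy, dx = 0.05, 0.01
theta = np.zeros(ny)               # dimensionless temperature profile (assumed initial state)
theta[0] = 1.0                     # hypothetical wall condition
r = dx / dy**2
a = np.full(ny, -r)
b = np.full(ny, 1 + 2 * r)
c = np.full(ny, -r)
a[0] = c[0] = 0.0;  b[0] = 1.0     # Dirichlet condition at the wall
a[-1] = c[-1] = 0.0; b[-1] = 1.0   # far-field condition
for _ in range(nx_steps):          # march in the streamwise direction
    d = theta.copy()
    d[0], d[-1] = 1.0, 0.0
    theta = thomas(a, b, c, d)
```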

Relevance: 100.00%

Abstract:

The purpose of this paper is to describe a new decomposition construction for perfect secret sharing schemes with graph access structures. The previous decomposition construction proposed by Stinson is a recursive method that uses small secret sharing schemes as building blocks in the construction of larger schemes. When the Stinson method is applied to graph access structures, the number of such “small” schemes is typically exponential in the number of participants, resulting in an exponential algorithm. Our method has the same flavor as the Stinson decomposition construction; however, the linear programming problem involved in the construction is formulated in such a way that the number of “small” schemes is polynomial in the number of participants, which in turn gives rise to a polynomial-time construction. We also show that if we apply the Stinson construction to the “small” schemes arising from our new construction, both constructions achieve the same information rate.
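
The abstract does not reproduce the linear program itself, so the sketch below shows one LP in the same spirit: a fractional cover of a graph access structure by ideal star schemes, with a number of variables and constraints polynomial in the number of participants. It is an illustration of the idea, not the authors' exact formulation.

```python
# Illustrative star-cover LP for a graph access structure (not the authors' formulation):
# each vertex v carries a weight w_v for the ideal star scheme centred at v, every edge
# must be covered with total weight >= 1, and the maximum share load t is minimised.
# The resulting information rate is 1/t.  Solved with scipy.optimize.linprog.
import numpy as np
from scipy.optimize import linprog

edges = [(0, 1), (1, 2), (2, 3)]            # path on four participants as a toy example
n = 4
c = np.zeros(n + 1); c[-1] = 1.0            # variables: w_0..w_{n-1} and t; minimise t
A_ub, b_ub = [], []
for u, v in edges:                           # edge coverage: w_u + w_v >= 1
    row = np.zeros(n + 1); row[u] = row[v] = -1.0
    A_ub.append(row); b_ub.append(-1.0)
adj = {v: set() for v in range(n)}
for u, v in edges:
    adj[u].add(v); adj[v].add(u)
for v in range(n):                           # load(v) = w_v + sum of neighbours' weights <= t
    row = np.zeros(n + 1); row[v] = 1.0
    for u in adj[v]:
        row[u] = 1.0
    row[-1] = -1.0
    A_ub.append(row); b_ub.append(0.0)
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=[(0, None)] * (n + 1))
print("max load:", res.fun, "-> information rate:", 1.0 / res.fun)   # 1.5 -> 2/3 here
```

For this path graph the optimum load is 1.5, i.e. an information rate of 2/3, and the LP has only one variable per participant plus one per vertex/edge constraint.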

Relevance: 100.00%

Abstract:

Similarity solutions are obtained for the flow of a power-law non-Newtonian fluid film on an unsteady stretching surface subjected to constant heat flux. Free convection heat transfer induces a thermal boundary layer within a semi-infinite layer of Boussinesq fluid. The nonlinear coupled partial differential equations (PDEs) governing the flow and the boundary conditions are converted to a system of ordinary differential equations (ODEs) using two-parameter groups. This technique reduces the number of independent variables by two, and the resulting ordinary differential equations are then solved numerically for the temperature and velocity using the shooting method. The thermal and velocity boundary layers are examined in terms of the Prandtl number and the non-Newtonian power-law index, with the results presented graphically.
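
As a concrete illustration of the final numerical step, here is a minimal shooting-method sketch. Since the transformed ODE system is not given in the abstract, a Blasius-type similarity equation is used as a stand-in, and the bracketing interval for the unknown wall value is an assumption.

```python
# Minimal shooting-method sketch for a similarity ODE (Blasius-type equation used as a
# stand-in for the transformed system, which the abstract does not state explicitly).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]        # f''' + (1/2) f f'' = 0

def residual(fpp0, eta_max=10.0):
    sol = solve_ivp(rhs, [0.0, eta_max], [0.0, 0.0, fpp0], rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] - 1.0               # enforce the far-field condition f'(inf) = 1

fpp0 = brentq(residual, 0.1, 1.0)           # unknown wall value found by root bracketing
print("f''(0) ≈", round(fpp0, 4))           # ≈ 0.3321 for this stand-in problem
```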

Relevance: 100.00%

Abstract:

In the past few years, green initiatives related to data centers have attracted steadily increasing attention. While various energy-aware measures have been developed for data centers, the need to improve the performance efficiency of application assignment at the same time has yet to be met. For instance, many energy-aware measures applied to data centers maintain a trade-off between energy consumption and Quality of Service (QoS). To address this problem, this paper presents a novel concept of profiling to facilitate offline optimization of a deterministic assignment of applications to virtual machines. A profile-based model is then established for obtaining near-optimal allocations of applications to virtual machines with respect to three major objectives: energy cost, CPU utilization efficiency and application completion time. From this model, a profile-based and scalable matching algorithm is developed to solve the profile-based model. The assignment efficiency of our algorithm is then compared with that of the Hungarian algorithm, which yields the optimal solution but does not scale well.
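
The sketch below illustrates the optimal-assignment baseline mentioned above (the Hungarian algorithm via scipy's linear_sum_assignment) together with a simple greedy matching as a scalable point of comparison. The cost matrices and objective weights are invented placeholders, and the greedy rule is not the paper's profile-based algorithm.

```python
# Sketch of the optimal-assignment baseline: the Hungarian algorithm over a combined cost
# matrix built from hypothetical energy, CPU and completion-time profiles, plus a naive
# greedy matching for comparison.  All numbers and weights are illustrative assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_apps = n_vms = 50
energy   = rng.uniform(10, 100, (n_apps, n_vms))   # hypothetical per-assignment profiles
cpu_inef = rng.uniform(0, 1,   (n_apps, n_vms))
time_s   = rng.uniform(1, 60,  (n_apps, n_vms))
cost = 0.5 * energy / 100 + 0.3 * cpu_inef + 0.2 * time_s / 60   # assumed weighting

rows, cols = linear_sum_assignment(cost)            # optimal, but O(n^3): scales poorly
print("optimal total cost:", cost[rows, cols].sum())

# A simple greedy matching as a scalable (but sub-optimal) point of comparison.
remaining = set(range(n_vms))
greedy_total = 0.0
for app in np.argsort(cost.min(axis=1)):            # place easiest-to-serve apps first
    vm = min(remaining, key=lambda j: cost[app, j])
    remaining.remove(vm)
    greedy_total += cost[app, vm]
print("greedy total cost:", greedy_total)
```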

Relevance: 100.00%

Abstract:

Circular shortest paths are a powerful methodology for image segmentation. The circularity condition ensures that the contour found by the algorithm is closed, a natural requirement for regular objects. Several implementations have been proposed in the past that either promise closure with high probability or ensure closure strictly, at a mild cost in computational efficiency. Circularity can be viewed as a priori information that helps recover the correct object contour. Our observation is that circularity is only one among many possible constraints that can be imposed on shortest paths to guide them to a desirable solution. In this contribution we illustrate this opportunity with a volume constraint, but the concept is generally applicable. We also describe several enhancements to the circular shortest path algorithm that have proved useful in applications.
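
For intuition, the following is a brute-force sketch of a circular shortest path on a polar-unwrapped cost image: columns index angles, rows index candidate radii, and closure is enforced by requiring the path to end on the row where it started. Efficient implementations avoid the outer loop over start rows; this version only illustrates the closure constraint.

```python
# Brute-force circular shortest path on a polar-unwrapped cost image: for each possible
# start row, run a column-by-column dynamic program and keep the cheapest closed path.
import numpy as np

def circular_shortest_path(cost):
    H, W = cost.shape
    best_total, best_path = np.inf, None
    for start in range(H):                           # enforce closure: end row == start row
        dp = np.full((H, W), np.inf)
        back = np.zeros((H, W), dtype=int)
        dp[start, 0] = cost[start, 0]
        for j in range(1, W):
            for i in range(H):
                for di in (-1, 0, 1):                # allow moving at most one row per column
                    k = i + di
                    if 0 <= k < H and dp[k, j - 1] + cost[i, j] < dp[i, j]:
                        dp[i, j] = dp[k, j - 1] + cost[i, j]
                        back[i, j] = k
        if dp[start, -1] < best_total:
            best_total = dp[start, -1]
            path = [start]
            for j in range(W - 1, 0, -1):            # backtrack from the last column
                path.append(back[path[-1], j])
            best_path = path[::-1]
    return best_total, best_path

cost = np.random.default_rng(1).random((20, 36))     # toy 20-radii x 36-angle cost image
print(circular_shortest_path(cost)[0])
```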

Relevance: 100.00%

Abstract:

This paper proposes a recommendation system that supports process participants in making risk-informed decisions, with the goal of reducing the risks that may arise during process execution. Risk reduction involves decreasing the likelihood of a process fault occurring and its severity. Given a business process exposed to risks, e.g. a financial process exposed to a risk of reputation loss, we enact this process, and whenever a process participant needs to provide input to the process, e.g. by selecting the next task to execute or by filling out a form, we suggest to the participant the action that minimizes the predicted process risk. Risks are predicted by traversing decision trees generated from the logs of past process executions, which consider process data, involved resources, task durations and other information elements such as task frequencies. When applied in the context of multiple process instances running concurrently, a second technique is employed that uses integer linear programming to compute the optimal assignment of resources to the tasks to be performed, in order to deal with the interplay between risks relating to different instances. The recommendation system has been implemented as a set of components on top of the YAWL BPM system and its effectiveness has been evaluated using a real-life scenario, in collaboration with risk analysts of a large insurance company. The results, based on a simulation of the real-life scenario and its comparison with the event data provided by the company, show that process instances executed concurrently complete with significantly fewer faults and with lower fault severities when the recommendations provided by our system are taken into account.
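
The following is a hedged sketch of the second technique described above: an integer linear program that assigns available resources to pending tasks so that the summed predicted risk is minimal. The risk scores would normally come from the decision trees; here they are random placeholders, and the model is far simpler than the one used in the evaluation.

```python
# Toy resource-to-task assignment ILP: each task gets exactly one resource, each resource
# at most one task, and the total predicted risk is minimised.  Solved with scipy's milp.
import numpy as np
from scipy.optimize import milp, LinearConstraint

rng = np.random.default_rng(2)
n_res, n_tasks = 6, 4
risk = rng.random((n_res, n_tasks))                  # placeholder predicted risk scores
c = risk.ravel()                                     # decision variables x[r, t], flattened

A_task = np.zeros((n_tasks, n_res * n_tasks))        # each task assigned exactly once
for t in range(n_tasks):
    A_task[t, t::n_tasks] = 1.0
A_res = np.zeros((n_res, n_res * n_tasks))           # each resource used at most once
for r in range(n_res):
    A_res[r, r * n_tasks:(r + 1) * n_tasks] = 1.0

constraints = [LinearConstraint(A_task, 1, 1), LinearConstraint(A_res, 0, 1)]
res = milp(c, constraints=constraints, integrality=np.ones(c.size))
x = res.x.reshape(n_res, n_tasks).round().astype(int)
print("total predicted risk:", res.fun)
print("assignment matrix:\n", x)
```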

Relevance: 100.00%

Abstract:

Organisations are constantly seeking new ways to improve operational efficiencies. This study investigates a novel way to identify potential efficiency gains in business operations by observing how they were carried out in the past and then exploring better ways of executing them, taking into account trade-offs between time, cost and resource utilisation. This paper demonstrates how these trade-offs can be incorporated into the assessment of alternative process execution scenarios by making use of a cost environment. A number of optimisation techniques are proposed to explore and assess alternative execution scenarios. The objective function is represented by a cost structure that captures different process dimensions. An experimental evaluation is conducted to analyse the performance and scalability of the optimisation techniques: integer linear programming (ILP), hill climbing, tabu search, and our earlier proposed hybrid genetic algorithm. The findings demonstrate that the hybrid genetic algorithm is scalable and performs better than the other techniques. Moreover, we argue that the use of ILP is unrealistic in this setting and cannot handle complex cost functions such as the ones we propose. Finally, we show how cost-related insights can be gained from improved execution scenarios and how these can be used to put forward recommendations for reducing process-related cost and overhead within organisations.
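
To make the local-search idea concrete, here is an illustrative first-improvement hill-climbing sketch over execution scenarios, where a scenario assigns each activity to one candidate resource and the objective is an assumed weighted mix of makespan, monetary cost and load imbalance. The actual cost structure used in the paper is richer than this.

```python
# Illustrative hill climbing over execution scenarios: a scenario maps each activity to a
# resource; the objective below is an assumed combination of time, cost and utilisation.
import numpy as np

rng = np.random.default_rng(3)
n_act, n_res = 20, 5
dur  = rng.uniform(1, 10, (n_act, n_res))           # activity duration per resource
rate = rng.uniform(20, 80, n_res)                    # hourly rate per resource

def objective(scn, w_time=0.4, w_cost=0.4, w_util=0.2):
    d = dur[np.arange(n_act), scn]
    load = np.bincount(scn, weights=d, minlength=n_res)
    makespan = load.max()                            # resources work sequentially (assumption)
    money = (d * rate[scn]).sum()
    imbalance = load.std()                           # proxy for poor resource utilisation
    return w_time * makespan + w_cost * money / 100 + w_util * imbalance

scn = rng.integers(0, n_res, n_act)                  # random initial scenario
best = objective(scn)
improved = True
while improved:                                      # first-improvement hill climbing
    improved = False
    for a in range(n_act):
        for r in range(n_res):
            cand = scn.copy()
            cand[a] = r
            val = objective(cand)
            if val < best:
                scn, best, improved = cand, val, True
print("local optimum objective:", round(best, 2))
```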

Relevance: 100.00%

Abstract:

This study estimates the environmental efficiency of internationally listed firms in 10 worldwide sectors from 2007 to 2013 by applying an order-m method, a non-parametric approach based on the free disposal hull with subsampling bootstrapping. Using a conventional output of gross profit and two conventional inputs of labor and capital, the study examines order-m environmental efficiency accounting for the presence of each of 10 undesirable inputs/outputs and measures the shadow price of each undesirable input and output. The results show that there is greater potential for the reduction of undesirable inputs than of undesirable outputs. On average, total energy, electricity, or water usage could be reduced by 50%. The median shadow prices of undesirable inputs, however, are much higher than the surveyed representative market prices. Approximately 10% of the firms in the sample appear to be potential sellers or production reducers in terms of undesirable inputs/outputs, which implies that the price of each item at its current level has little impact on most of the firms. Moreover, this study shows that the environmental, social, and governance activities of a firm do not considerably affect its environmental efficiency.
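
The sketch below shows one Monte Carlo version of an input-oriented order-m efficiency score: for each firm, m dominating peers (those producing at least its output) are drawn repeatedly, and the best radial input contraction they demonstrate is averaged. The data are simulated placeholders and only conventional inputs are used, so this omits the undesirable inputs/outputs analysed in the study.

```python
# Monte Carlo order-m input efficiency (illustrative, conventional inputs only):
# scores below 1 indicate a firm could radially reduce its inputs relative to a random
# benchmark of m peers that produce at least as much output.
import numpy as np

rng = np.random.default_rng(4)
n = 200
X = rng.uniform(1, 10, (n, 2))                       # inputs: labour, capital (placeholders)
Y = (X ** 0.4).prod(axis=1) * rng.uniform(0.5, 1.0, n)   # single output: gross profit proxy

def order_m_input_eff(i, m=25, B=500):
    peers = np.where(Y >= Y[i])[0]                   # peers producing at least firm i's output
    ratios = (X[peers] / X[i]).max(axis=1)           # radial contraction each peer demonstrates
    draws = rng.choice(ratios, size=(B, m), replace=True)
    return draws.min(axis=1).mean()                  # expected best contraction among m peers

scores = np.array([order_m_input_eff(i) for i in range(n)])
print("share of firms with score < 1 (potential input reduction):",
      round((scores < 1).mean(), 2))
```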

Relevance: 100.00%

Abstract:

In this article, natural convection boundary layer flow is investigated over a semi-infinite horizontal wavy surface. Such an irregular (wavy) surface is used to exchange heat with an external radiating fluid that obeys the Rosseland diffusion approximation. The boundary layer equations are cast into dimensionless form by introducing appropriate scaling. Primitive variable formulation (PVF) and stream function formulation (SFF) are used independently to transform the boundary layer equations into convenient forms. The equations obtained from the former formulation are integrated numerically via an implicit finite difference iterative scheme, whereas the equations obtained from the latter formulation are solved by the Keller-box scheme. To validate the results, the solutions produced by the two methods are compared graphically. The main parameters, namely the thermal radiation parameter and the amplitude of the wavy surface, are discussed in terms of shear stress and rate of heat transfer. It is found that a wavy surface increases the heat transfer rate compared with a smooth wall; thus, optimum heat transfer is accomplished when an irregular surface is considered. It is also established that a large amplitude of the wavy surface leads to separation of the fluid from the plate within the boundary layer.
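
For reference, the Rosseland diffusion approximation mentioned above models the radiative heat flux as shown below; the sinusoidal wall shape and the coordinate shift that flattens it are a common choice in wavy-surface studies and are stated here as an assumption rather than taken from the paper.

```latex
% Rosseland diffusion approximation for the radiative heat flux (standard form), together
% with an assumed sinusoidal wall profile and the coordinate shift that flattens it.
q_r = -\frac{4\sigma^{*}}{3k^{*}}\,\frac{\partial T^{4}}{\partial y},
\qquad
\bar{\sigma}(x) = a\,\sin\!\left(\frac{\pi x}{\ell}\right),
\qquad
\hat{x} = x,\quad \hat{y} = y - \bar{\sigma}(x).
```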

Relevance: 100.00%

Abstract:

Adaptations of weighted rank regression to the accelerated failure time model for censored survival data have been successful in yielding asymptotically normal estimates and flexible weighting schemes that increase statistical efficiency. However, for only one simple weighting scheme, Gehan or Wilcoxon weights, are the estimating equations guaranteed to be monotone in the parameter components, and even in this case they are step functions, requiring the equivalent of linear programming for computation. The lack of smoothness makes standard error or covariance matrix estimation even more difficult. An induced smoothing technique has overcome these difficulties in various problems involving monotone but pure-jump estimating equations, including conventional rank regression. The present paper applies induced smoothing to Gehan-Wilcoxon weighted rank regression for the accelerated failure time model, for the more difficult case of survival time data subject to censoring, where the inapplicability of permutation arguments necessitates a new method of estimating the null variance of the estimating functions. Smooth monotone parameter estimation and rapid, reliable standard error or covariance matrix estimation are obtained.
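
The sketch below illustrates the induced-smoothing idea for the Gehan estimating function: the indicator comparing residuals is replaced by a normal CDF, giving a smooth monotone estimating equation that an off-the-shelf root finder can solve. The bandwidth choice is a simple assumption, not the paper's exact recipe, and the data are simulated.

```python
# Induced smoothing of the Gehan estimating function for the AFT model (sketch): the
# indicator I(e_i <= e_j) is replaced by a normal CDF with an assumed O(n^-1/2) bandwidth.
import numpy as np
from scipy.stats import norm
from scipy.optimize import fsolve

rng = np.random.default_rng(5)
n, beta_true = 150, np.array([1.0, -0.5])
Z = rng.normal(size=(n, 2))
logT = Z @ beta_true + rng.normal(scale=0.5, size=n)     # true log failure times
logC = rng.normal(loc=1.0, scale=1.0, size=n)            # censoring times
y = np.minimum(logT, logC)                                # observed log times
delta = (logT <= logC).astype(float)                      # event indicator

def smoothed_gehan(beta):
    e = y - Z @ beta                                      # residuals under candidate beta
    diff_e = e[None, :] - e[:, None]                      # e_j - e_i
    diff_Z = Z[:, None, :] - Z[None, :, :]                # Z_i - Z_j
    r = np.sqrt((diff_Z ** 2).sum(-1) / n) + 1e-8         # assumed smoothing bandwidth
    w = delta[:, None] * norm.cdf(diff_e / r)             # smoothed, weighted indicator
    return (w[..., None] * diff_Z).sum(axis=(0, 1)) / n**2

beta_hat = fsolve(smoothed_gehan, x0=np.zeros(2))
print("smoothed Gehan estimate:", beta_hat.round(3))
```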

Relevance: 100.00%

Abstract:

This article contributes an original integrated model of an open-pit coal mine for supporting energy-efficient decisions. Mixed integer linear programming is used to formulate a general integrated model of the operational energy consumption of four common open-pit coal mining subsystems: excavation and haulage, stockpiles, processing plants and belt conveyors. Mines are represented as connected instances of the four subsystems, in a flow-sheet manner, which are then fitted to data provided by the mine operators. Solving the integrated model ensures that the subsystems' operations are synchronised and that whole-of-mine energy efficiency is encouraged. An investigation of a case study of an open-pit coal mine is conducted to validate the proposed methodology. Opportunities are presented for using the model to aid energy-efficient decision-making at various levels of a mine, and future work to improve the approach is described.
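
As a toy illustration of such a model, the sketch below links excavation and haulage, a stockpile, and a processing plant with belt conveyor over two periods, with flow balance, plant on/off decisions and an overall demand. All coefficients are invented, and the real model is far more detailed.

```python
# Toy MILP in the spirit of the integrated mine model: excavation/haulage feeds a
# stockpile, which feeds a processing plant and conveyor, over two periods.  All
# coefficients (energy per tonne, capacities, demand) are illustrative assumptions.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Variables: [x1, x2, p1, p2, s1, s2, b1, b2]
#   x_t: tonnes excavated and hauled, p_t: tonnes processed and conveyed,
#   s_t: stockpile level at the end of period t, b_t: plant on/off (binary).
e_haul, e_proc, e_fixed = 12.0, 8.0, 500.0            # kWh/t, kWh/t, kWh per running period
c = np.array([e_haul, e_haul, e_proc, e_proc, 0, 0, e_fixed, e_fixed])

A_eq = np.array([[-1, 0, 1, 0,  1, 0, 0, 0],           # s1 = x1 - p1
                 [0, -1, 0, 1, -1, 1, 0, 0]])          # s2 = s1 + x2 - p2
A_ub = np.array([[0, 0, 1, 0, 0, 0, -800, 0],          # p1 <= 800 b1 (plant capacity)
                 [0, 0, 0, 1, 0, 0, 0, -800],          # p2 <= 800 b2
                 [0, 0, -1, -1, 0, 0, 0, 0]])          # p1 + p2 >= 1200 (demand)
constraints = [LinearConstraint(A_eq, 0, 0),
               LinearConstraint(A_ub, -np.inf, [0, 0, -1200])]
bounds = Bounds(lb=0, ub=[700, 700, np.inf, np.inf, np.inf, np.inf, 1, 1])
integrality = np.array([0, 0, 0, 0, 0, 0, 1, 1])

res = milp(c, constraints=constraints, integrality=integrality, bounds=bounds)
print("total energy (kWh):", res.fun, "plan:", res.x.round(1))
```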

Relevance: 100.00%

Abstract:

This thesis investigates factors that impact the energy efficiency of a mining operation. An innovative mathematical framework and solution approach are developed to model, solve and analyse an open-pit coal mine. A case study in South East Queensland is investigated to validate the approach and to explore opportunities for using it to aid long-, medium- and short-term decision makers.

Relevance: 50.00%

Abstract:

Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space -- classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semi-definite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data, this gives a powerful transductive algorithm: using the labelled part of the data one can learn an embedding also for the unlabelled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method for learning the 2-norm soft margin parameter in support vector machines, solving another important open problem. Finally, the novel approach presented in the paper is supported by positive empirical results.
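
One of the simpler problems in this family, maximising the alignment between a trace-constrained combination of fixed kernels and the training labels, can be written in a few lines with cvxpy, as sketched below. This is not the full 2-norm soft-margin SDP from the paper; it only illustrates the transductive idea of learning a kernel over the combined training and test points.

```python
# Alignment-maximising kernel combination over train + test points (illustrative sketch):
# nonnegative weights on fixed candidate kernels keep the learned matrix PSD, and a trace
# constraint normalises its scale.  The learned K can then be used on the test block.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(6)
n_train, n_test = 30, 10
n = n_train + n_test
X = rng.normal(size=(n, 2))
y_train = np.sign(X[:n_train, 0] + 0.3 * rng.normal(size=n_train))

def rbf(X, gamma):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

kernels = [rbf(X, g) for g in (0.1, 1.0, 10.0)]       # candidate kernels over all points
mu = cp.Variable(len(kernels), nonneg=True)            # nonnegative weights keep K PSD
K = sum(mu[i] * kernels[i] for i in range(len(kernels)))
K_tr = K[:n_train, :n_train]
alignment = cp.sum(cp.multiply(np.outer(y_train, y_train), K_tr))
prob = cp.Problem(cp.Maximize(alignment), [cp.trace(K) == n])
prob.solve()
print("learned kernel weights:", mu.value.round(3))     # test block of K is now defined too
```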

Relevance: 40.00%

Abstract:

Recently, attempts to improve decision making in species management have focused on the uncertainties associated with modelling temporal fluctuations in populations. Reducing model uncertainty is challenging; while larger samples improve estimation of species trajectories and reduce statistical errors, they typically amplify variability in the observed trajectories. In particular, traditional modelling approaches aimed at estimating population trajectories usually do not account well for nonlinearities and uncertainties associated with the multi-scale observations characteristic of large spatio-temporal surveys. We present a Bayesian semi-parametric hierarchical model for simultaneously quantifying uncertainties associated with model structure and parameters, and scale-specific variability over time. We estimate uncertainty across a four-tiered spatial hierarchy of coral cover from the Great Barrier Reef. Coral variability is well described; however, our results show that, in the absence of additional model specifications, conclusions regarding coral trajectories become highly uncertain when considering multiple reefs, suggesting that management should focus more at the scale of individual reefs. The approach presented facilitates the description and estimation of population trajectories and associated uncertainties when variability cannot be attributed to specific causes and origins. We argue that our model can unlock the value contained in large-scale datasets, provide guidance for understanding sources of uncertainty, and support better-informed decision making.
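
A heavily simplified sketch of this kind of hierarchical model is given below, using PyMC: two spatial levels instead of four, a linear rather than semi-parametric trend, and simulated data, purely to show how scale-specific variability can be separated into its own variance components.

```python
# Much-simplified hierarchical trend model for (logit) coral cover: reef- and site-level
# random effects plus a shared linear trend.  All data are simulated placeholders.
import numpy as np
import pymc as pm

rng = np.random.default_rng(7)
n_reefs, n_sites, n_years = 5, 4, 10
reef_idx = np.repeat(np.arange(n_reefs), n_sites * n_years)
site_idx = np.repeat(np.arange(n_reefs * n_sites), n_years)
year = np.tile(np.arange(n_years), n_reefs * n_sites)
logit_cover = -0.5 - 0.05 * year + rng.normal(0, 0.3, year.size)   # fake observations

with pm.Model() as model:
    mu_trend = pm.Normal("mu_trend", 0.0, 1.0)                 # shared trend in cover
    sd_reef = pm.HalfNormal("sd_reef", 1.0)                    # reef-scale variability
    sd_site = pm.HalfNormal("sd_site", 1.0)                    # site-scale variability
    reef_eff = pm.Normal("reef_eff", 0.0, sd_reef, shape=n_reefs)
    site_eff = pm.Normal("site_eff", 0.0, sd_site, shape=n_reefs * n_sites)
    sigma = pm.HalfNormal("sigma", 1.0)
    mean = reef_eff[reef_idx] + site_eff[site_idx] + mu_trend * year
    pm.Normal("obs", mean, sigma, observed=logit_cover)
    idata = pm.sample(500, tune=500, chains=2, progressbar=False)

print(idata.posterior["sd_reef"].mean().item(), idata.posterior["sd_site"].mean().item())
```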