965 results for Linear boundary value control problems
Abstract:
In this work the solution of a class of capital investment problems is considered within the framework of mathematical programming. On the basis of the net present value criterion, the problems in question are mainly characterized by the fact that the cost of capital is defined as a non-decreasing function of the investment requirements. Capital rationing and some cases of technological dependence are also included, an approach that leads to zero-one non-linear programming problems, for which specifically designed solution procedures, supported by a general branch and bound development, are presented. In the context of both this development and the relevant mathematical properties of the aforementioned zero-one programs, a generalized zero-one model is also discussed. Finally, a variant of the scheme, connected with the search sequencing of optimal solutions, is presented as an alternative in which storage requirements are reduced.
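The branch and bound development above is the thesis's own; as a rough illustration of the general idea only, here is a minimal depth-first branch and bound loop for a toy zero-one non-linear program. The objective, the interaction penalty and the optimistic bound are hypothetical stand-ins, not the procedures described in the abstract.

```python
# Minimal depth-first branch and bound for a toy zero-one program.
# Hypothetical stand-in: a real NPV model with an investment-dependent
# cost of capital would replace f and upper_bound below.
def branch_and_bound(n, f, upper_bound):
    """Maximize f(x) over x in {0,1}^n."""
    best_val, best_x = float("-inf"), None
    stack = [()]                      # partial assignments of 0/1 values
    while stack:
        partial = stack.pop()
        # Prune: optimistic bound on any completion of `partial`.
        if upper_bound(partial, n) <= best_val:
            continue
        if len(partial) == n:         # leaf: complete assignment
            val = f(partial)
            if val > best_val:
                best_val, best_x = val, partial
        else:                         # branch on the next variable
            stack.append(partial + (0,))
            stack.append(partial + (1,))
    return best_val, best_x

profits = [5.0, 4.0, 3.0]             # toy data

def f(x):                             # non-linear: projects 0 and 1 clash
    return sum(p * xi for p, xi in zip(profits, x)) - 6.0 * x[0] * x[1]

def upper_bound(partial, n):
    # Ignore the (non-positive) interaction term and assume every
    # still-free project pays off in full -- a valid optimistic bound.
    fixed = sum(p * xi for p, xi in zip(profits, partial))
    return fixed + sum(profits[len(partial):])

print(branch_and_bound(3, f, upper_bound))   # -> (8.0, (1, 0, 1))
```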
Abstract:
The work described in this thesis is directed towards the reduction of noise levels in the Hoover Turbopower upright vacuum cleaner. The experimental work embodies a study of such factors as the application of noise source identification techniques, investigation of the noise-generating principles of each major source, and evaluation of noise-reducing treatments. It was found that the design of the vacuum cleaner had not been optimised from the standpoint of noise emission. Important factors such as noise 'windows', isolation of vibration at the source, panel rattle, resonances and critical speeds had not been considered. Therefore, a number of experimentally validated treatments are proposed, and their noise reduction benefit together with material and tooling costs are presented. The solutions to the noise problems were evaluated on a standard Turbopower, and the sound power level of the cleaner was reduced from 87.5 dB(A) to 80.4 dB(A) at a cost of 93.6 pence per cleaner.

The designers' lack of experience in noise reduction was identified as one of the factors behind the low priority given to noise during the design of the cleaner. Consequently, the fundamentals of acoustics, principles of noise prediction and absorption, and guidelines for good acoustical design were collated into a handbook and circulated at Hoover plc.

Mechanical variations during production of the motor and the cleaner were found to be important, causing a wide spread in the noise levels of the cleaners. Subsequently, the manufacturing processes were briefly studied to identify their source, and recommendations for improvement are made.

The noise of a product is quality related, and a high level of noise is considered a bad feature. This project suggested that the noise level be used constructively, both as a test on the production line to identify cleaners above a certain noise level and to promote the product by 'designing' the characteristics of the sound so that the appliance is pleasant to the user. This project showed that good noise control principles should be implemented early in the design stage.

As yet there are no mandatory noise limits or noise-labelling requirements for household appliances. However, the literature suggests that noise-labelling is likely in the near future and that the requirement will be to display the A-weighted sound power level. Nevertheless, the 'noys' scale of perceived noisiness was found more appropriate for rating appliance noise: it is linear, so a sound that seems twice as loud has twice the value in noys, and it takes into consideration the presence of pure tones, which can cause annoyance even in the absence of a high overall noise level.
Abstract:
Full text: Several Lancet publications have questioned the value of glycaemic control in diabetic patients. For example, in their Comment (Sept 29, p 1103) [1], John Cleland and Stephen Atkin state that “Improved glycaemic control is not a surrogate for effective care of patients who have diabetes”, and Victor Montori and colleagues (p 1104) [2] claim that “HbA1c loses its validity as a surrogate marker when patients have a constellation of metabolic abnormalities”. We are concerned that the reaction against “glucocentricity” in the field of diabetes has gone too far. Even the UK's National Prescribing Centre website, carrying the National Health Service logo, includes comments that undermine the value of glycaemic control. For example, referring to the United Kingdom Prospective Diabetes Study (UKPDS), this site states that “Compared with ‘conventional control’ there was no benefit from tight control of blood glucose with sulphonylureas or insulin with regard to total mortality, diabetes-related death, macrovascular outcomes or microvascular outcomes, including all the most serious ones such as blindness or kidney failure” [3]. It is well established that better glycaemic control reduces long-term microvascular complications in type 1 and type 2 diabetes [4]. In type 2 diabetes, the UKPDS reported that a composite microvascular endpoint (retinopathy requiring photocoagulation, vitreous haemorrhage, and fatal or non-fatal renal failure) was reduced by 25% in patients randomised to intensive glucose control (p=0·0099) [4]. To imply that these are not patient-relevant outcomes is to distort the evidence. Many studies have also found that improved glycaemic control reduces macrovascular complications [5]. Do not be misled: glycaemic control remains a crucial component in the care of people with diabetes. The authors have received research support and undertaken ad hoc consultancies and speaker engagements for several pharmaceutical companies.
Abstract:
We propose two algorithms involving the relaxation of either the given Dirichlet data (boundary displacements) or the prescribed Neumann data (boundary tractions) on the over-specified boundary in the alternating iterative algorithm of Kozlov et al. [16] applied to Cauchy problems in linear elasticity. A convergence proof for these relaxation methods is given, along with a stopping criterion. The numerical results obtained using these procedures, in conjunction with the boundary element method (BEM), show the numerical stability, convergence, consistency and computational efficiency of the proposed methods.
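The paper's algorithms evaluate each alternating sweep via BEM solves of the elasticity system; the snippet below is only a schematic, runnable stand-in showing the shape of the relaxed update and the stopping rule. The sweep map B (a fixed linear map here) and all of its data are hypothetical, chosen so that one sweep has the contraction-like behaviour a convergence proof would require.

```python
# Schematic relaxed fixed-point iteration, not the paper's BEM code.
# One alternating sweep is modelled by B(g) = M g + c; the relaxed
# variant replaces g_{k+1} = B(g_k) with
#     g_{k+1} = (1 - theta) * g_k + theta * B(g_k),  0 < theta <= 1.
import numpy as np

rng = np.random.default_rng(1)
n = 40
# Hypothetical stand-in for one sweep: spectral radius of M below 1,
# so a unique fixed point g* = (I - M)^(-1) c exists.
eigs = rng.uniform(-0.98, 0.2, n)      # mostly oscillatory (negative) modes
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
M = Q @ np.diag(eigs) @ Q.T
c = rng.standard_normal(n)
g_star = np.linalg.solve(np.eye(n) - M, c)

def relaxed_iteration(theta, tol=1e-10, max_iter=50000):
    g = np.zeros(n)
    for k in range(1, max_iter + 1):
        # Relaxed update: theta = 1 recovers the unrelaxed algorithm.
        g_new = (1.0 - theta) * g + theta * (M @ g + c)
        # Stopping criterion: successive iterates no longer change.
        if np.linalg.norm(g_new - g) < tol:
            return g_new, k
        g = g_new
    return g, max_iter

for theta in (1.0, 0.7):
    g, k = relaxed_iteration(theta)
    err = np.linalg.norm(g - g_star)
    # Under-relaxation damps the oscillatory error modes, so theta = 0.7
    # needs far fewer sweeps than the unrelaxed iteration here.
    print(f"theta={theta}: stopped after {k} sweeps, error {err:.2e}")
```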
Abstract:
The paper considers vector discrete optimization problem with linear fractional functions of criteria on a feasible set that has combinatorial properties of combinations. Structural properties of a feasible solution domain and of Pareto–optimal (efficient), weakly efficient, strictly efficient solution sets are examined. A relation between vector optimization problems on a combinatorial set of combinations and on a continuous feasible set is determined. One possible approach is proposed in order to solve a multicriteria combinatorial problem with linear- fractional functions of criteria on a set of combinations.
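As a concrete (and entirely hypothetical) illustration of the objects involved, the brute-force sketch below enumerates all k-combinations of n items and filters the efficient (Pareto-optimal) ones under two linear-fractional criteria f_i(x) = (c_i^T x + c_i0) / (d_i^T x + d_i0); the paper's structural approach is precisely about avoiding this kind of exhaustive enumeration.

```python
# Brute-force Pareto filtering over the combinatorial set of
# k-combinations; instance data are random and purely illustrative.
from itertools import combinations
import numpy as np

n, k = 6, 3
rng = np.random.default_rng(2)
C = rng.uniform(0, 5, (2, n)); c0 = np.array([1.0, 2.0])   # numerators
D = rng.uniform(1, 3, (2, n)); d0 = np.array([4.0, 1.0])   # denominators > 0

def criteria(subset):
    """Two linear-fractional criteria, both to be maximized."""
    x = np.zeros(n)
    x[list(subset)] = 1.0
    return (C @ x + c0) / (D @ x + d0)

values = {s: criteria(s) for s in combinations(range(n), k)}

def dominated(v, others):
    # v is dominated if some w is componentwise >= v and > in one entry.
    return any(np.all(w >= v) and np.any(w > v) for w in others)

efficient = [s for s, v in values.items()
             if not dominated(v, [w for t, w in values.items() if t != s])]
print("efficient combinations:", efficient)
```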
Abstract:
The aim of this study is to address the main deficiencies in the prevailing project cost and time control practices for construction projects in the UK. A questionnaire survey was carried out with 250 top companies, followed by in-depth interviews with 15 experienced practitioners from these companies in order to gain further insight into the identified problems and into their experience of good practice in tackling them. On the basis of these interviews and a synthesis with the literature, a list of 65 good-practice recommendations has been developed for the key project control tasks: planning, monitoring, reporting and analysing. The Delphi method was then used, with the participation of a panel of 8 practitioner experts, to evaluate these improvement recommendations and to establish their degree of relevance. After two rounds of Delphi, these recommendations are put forward as "critical", "important" or "helpful" measures for improving project control practice.
Abstract:
In the present paper, the problem of the optimal control of systems with constraints imposed on the control is considered. The optimality conditions are given in the form of Pontryagin's maximum principle. The resulting piecewise linear control function is approximated using a feedforward neural network. A numerical example is given.
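To make the last step concrete: the sketch below fits a small one-hidden-layer feedforward network, trained by plain gradient descent on mean squared error, to a saturated piecewise linear control of the kind the maximum principle produces under a bound |u| <= 1. The control profile, network size and training schedule are hypothetical, not the paper's example.

```python
# Approximate a piecewise linear (saturated) control u(t) with a small
# feedforward network; all problem data here are illustrative.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
# Clipped linear control: constant at the bound, then a linear ramp.
u = np.clip(3.0 * (t - 0.5), -1.0, 1.0)

h = 16                                   # hidden units
W1 = rng.standard_normal((1, h)); b1 = np.zeros(h)
W2 = rng.standard_normal((h, 1)) / np.sqrt(h); b2 = np.zeros(1)
lr = 0.05                                # illustrative training schedule

for epoch in range(5000):
    a = np.tanh(t @ W1 + b1)             # hidden activations
    pred = a @ W2 + b2
    err = pred - u                       # dL/dpred up to a constant
    # Backpropagation through the two layers (full-batch averages).
    gW2 = a.T @ err / len(t); gb2 = err.mean(0)
    da = (err @ W2.T) * (1 - a**2)
    gW1 = t.T @ da / len(t); gb1 = da.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

fit = np.tanh(t @ W1 + b1) @ W2 + b2
print("max abs approximation error:", np.abs(fit - u).max())
```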
Abstract:
We present quasi-Monte Carlo analogs of Monte Carlo methods for some linear algebra problems: solving systems of linear equations, computing extreme eigenvalues, and matrix inversion. Reformulating the problems as the solution of integral equations with special kernels and domains permits us to analyze the quasi-Monte Carlo methods with bounds from numerical integration. Standard Monte Carlo methods for integration provide a convergence rate of O(N^(−1/2)) using N samples. Quasi-Monte Carlo methods use quasirandom sequences, with a resulting convergence rate for numerical integration as good as O((log N)^k N^(−1)). We have shown theoretically and through numerical tests that the use of quasirandom sequences improves both the magnitude of the error and the convergence rate of the considered Monte Carlo methods. We also analyze the complexity of the considered quasi-Monte Carlo algorithms and compare it to the complexity of the analogous Monte Carlo and deterministic algorithms.
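A minimal experiment in that spirit (an illustrative stand-in, not the paper's algorithms): estimate a one-dimensional integral with pseudorandom points versus the base-2 van der Corput quasirandom sequence, and observe the error of the quasirandom estimate decaying at close to O(N^(−1)) rather than O(N^(−1/2)). The integrand is an arbitrary test function.

```python
# Plain Monte Carlo vs van der Corput quasi-Monte Carlo for a 1-D
# integral with known exact value; purely illustrative test problem.
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the base-`base` van der Corput sequence."""
    pts = np.empty(n)
    for i in range(n):
        q, denom, x = i + 1, 1.0, 0.0
        while q > 0:                 # reverse the digits of i+1 in `base`
            q, r = divmod(q, base)
            denom *= base
            x += r / denom
        pts[i] = x
    return pts

f = lambda x: np.exp(x)             # test integrand, exact integral e - 1
exact = np.e - 1.0
rng = np.random.default_rng(4)

for N in (2**8, 2**12, 2**16):
    mc_err = abs(f(rng.random(N)).mean() - exact)
    qmc_err = abs(f(van_der_corput(N)).mean() - exact)
    print(f"N={N:6d}   MC error {mc_err:.2e}   QMC error {qmc_err:.2e}")
```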
Abstract:
It has never been easy for manufacturing companies to understand their confidence level in terms of how accurately, and with what degree of flexibility, parts can be made. This brings uncertainty in finding the most suitable manufacturing method as well as in controlling product and process verification systems. The aim of this research is to develop a system for capturing a company's knowledge and expertise and then reflecting it in an MRP (Manufacturing Resource Planning) system. A key activity here is measuring manufacturing and machining capabilities to a reasonable confidence level. For this purpose an in-line control measurement system was introduced to the company. Using SPC (Statistical Process Control) not only helps to predict trends in the manufacture of parts but also minimises human error in measurement. A Gauge R&R (Repeatability and Reproducibility) study identifies problems in measurement systems. Measurement is like any other process in terms of variability, and reducing this variation via an automated machine probing system helps to avoid defects in future products.

Developments in the aerospace, nuclear, and oil and gas industries demand materials with high performance and high temperature resistance under corrosive and oxidising environments. Superalloys were developed in the latter half of the 20th century as high-strength materials for such purposes. For the same characteristics, superalloys are considered difficult-to-cut alloys when it comes to forming and machining. Furthermore, owing to the sensitivity of superalloy applications, in many cases they must be manufactured to tight tolerances. In addition, superalloys, specifically nickel-based ones, have unique features such as low thermal conductivity due to the high nickel content in their composition. This causes a high surface temperature on the workpiece at the machining stage, which leads to deformation in the final product.

As with every process, material variations have a significant impact on machining quality. The main causes of variation originate from chemical composition and mechanical hardness. The non-uniform distribution of metal elements is a major source of variation in metallurgical structures. Different heat treatment standards are designed for processing the material to the desired hardness levels based on the application. In order to take corrective actions, a study on the material aspects of superalloys has been conducted. In this study, samples from different batches of material have been analysed. This involved material preparation for microscopy analysis and an investigation of the effect of chemical composition on hardness (before and after heat treatment). Some of the results are discussed and presented in this paper.
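As a minimal illustration of the SPC element (hypothetical data, and a simplified limit calculation rather than the company's system): an X-bar chart over subgroups of in-line measurements with 3-sigma control limits. Production charts usually estimate the limits from subgroup ranges (R-bar/d2) rather than directly from the standard deviation of the subgroup means, as done here for brevity.

```python
# Simplified X-bar control chart over subgroups of measurements;
# the data below are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(5)
subgroups = rng.normal(10.00, 0.02, size=(25, 5))   # 25 samples of 5 parts
xbar = subgroups.mean(axis=1)                       # subgroup means

center = xbar.mean()
sigma_xbar = xbar.std(ddof=1)                       # simplified estimate
ucl, lcl = center + 3 * sigma_xbar, center - 3 * sigma_xbar

print(f"CL = {center:.4f}, UCL = {ucl:.4f}, LCL = {lcl:.4f}")
out = np.flatnonzero((xbar > ucl) | (xbar < lcl))
print("out-of-control subgroups:", out)   # points here trigger action
```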
Abstract:
AMS subject classification: 90C31, 90A09, 49K15, 49L20.
Abstract:
We describe a parallel multi-threaded approach for high-performance modelling of a wide class of phenomena in ultrafast nonlinear optics. A specific implementation has been developed using the highly parallel capabilities of a programmable graphics processor.
Abstract:
The determination of the displacement and the space-dependent force acting on a vibrating structure from measured final or time-average displacement observations is thoroughly investigated. Several aspects related to the existence and uniqueness of a solution of the linear but ill-posed inverse force problems are highlighted. Then, in order to capture the solution, a variational formulation is proposed and the gradient of the minimised least-squares functional is rigorously and explicitly derived. Numerical results obtained using the Landweber method and the conjugate gradient method are presented and discussed, illustrating the convergence of the iterative procedures for exact input data. Furthermore, for noisy data the semi-convergence phenomenon appears, as expected, and stability is restored by stopping the iterations according to the discrepancy principle once the residual becomes close to the amount of noise. The present investigation will be significant to researchers concerned with wave propagation and the control of vibrating structures.
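The shape of that reconstruction loop is easy to show on a small linear stand-in (a smoothing-kernel matrix, not the paper's vibration operator): Landweber iteration x_{k+1} = x_k + ω Aᵀ(b − A x_k) on noisy data, terminated by the discrepancy principle ||A x_k − b|| ≤ τδ with τ > 1. All instance data below are hypothetical.

```python
# Landweber iteration with discrepancy-principle stopping on a small,
# mildly ill-conditioned linear test problem; purely illustrative.
import numpy as np

rng = np.random.default_rng(6)
n = 60
s = np.linspace(0, 1, n)
# Forward operator: a Gaussian smoothing kernel (ill-conditioned).
A = np.exp(-80.0 * (s[:, None] - s[None, :])**2) / n
x_true = np.sin(2 * np.pi * s)
b_exact = A @ x_true

delta = 1e-3                               # assumed-known noise level
b = b_exact + delta * rng.standard_normal(n) / np.sqrt(n)
noise = np.linalg.norm(b - b_exact)

omega = 1.0 / np.linalg.norm(A, 2) ** 2    # Landweber step, < 2/||A||^2
tau = 1.5                                  # discrepancy safety factor > 1
x = np.zeros(n)
for k in range(1, 200001):
    r = b - A @ x
    if np.linalg.norm(r) <= tau * noise:   # discrepancy principle: stop
        break                              # before noise is amplified
    x = x + omega * (A.T @ r)

print(f"stopped at k={k}, residual={np.linalg.norm(b - A @ x):.2e}, "
      f"reconstruction error={np.linalg.norm(x - x_true):.2e}")
```

Iterating past the discrepancy stop would keep shrinking the residual while the reconstruction error grows again, which is exactly the semi-convergence phenomenon the abstract mentions.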