Abstract:
Over 60% of the recurrent budget of the Ministry of Health (MoH) in Angola is spent on the operations of fixed health care facilities (health centres and hospitals). However, to date, no study has investigated how efficiently those resources are used to produce health services. The objectives of this study were therefore to assess the technical efficiency of public municipal hospitals in Angola; to assess changes in productivity over time, with a view to analyzing changes in efficiency and technology; and to demonstrate how the results can be used in pursuit of the public health objective of promoting efficiency in the use of health resources. The analysis was based on 3-year panel data from all 28 public municipal hospitals in Angola. Data Envelopment Analysis (DEA), a non-parametric linear programming approach, was employed to assess technical and scale efficiency, and productivity change over time was measured using the Malmquist index. The results show that, on average, the productivity of municipal hospitals in Angola increased by 4.5% over the period 2000-2002; that growth was due to improvements in efficiency rather than innovation. © 2008 Springer Science+Business Media, LLC.
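The DEA step described above can be sketched as a small input-oriented CCR model solved by linear programming. This is an illustrative sketch only, not the study's implementation; the function name `dea_ccr_input` and the two-hospital example data are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency score for each decision-making unit.
    X: (n, m) array of inputs; Y: (n, s) array of outputs."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n]
        c = np.zeros(n + 1)
        c[0] = 1.0                     # minimise theta
        A_ub = np.zeros((m + s, n + 1))
        b_ub = np.zeros(m + s)
        A_ub[:m, 0] = -X[o]            # sum(lam_j * x_j) <= theta * x_o
        A_ub[:m, 1:] = X.T
        A_ub[m:, 1:] = -Y.T            # sum(lam_j * y_j) >= y_o
        b_ub[m:] = -Y[o]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] + [(0.0, None)] * n)
        scores.append(res.x[0])
    return np.array(scores)

# Two hospitals with one input and one output: hospital B uses twice
# the input of hospital A for the same output, so its score is 0.5.
X = np.array([[1.0], [2.0]])
Y = np.array([[1.0], [1.0]])
scores = dea_ccr_input(X, Y)
print(scores)  # -> [1.0, 0.5]
```

A unit scoring 1.0 lies on the efficient frontier; lower scores indicate the proportional input reduction that would bring the unit onto it.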
Abstract:
A new general linear model (GLM) beamformer method is described for processing magnetoencephalography (MEG) data. A standard nonlinear beamformer is used to determine the time course of neuronal activation for each point in a predefined source space. A Hilbert transform gives the envelope of oscillatory activity at each location in any chosen frequency band (not necessary in the case of sustained (DC) fields), enabling the general linear model to be applied and a volumetric T statistic image to be determined. The new method is illustrated by a two-source simulation (sustained field and 20 Hz) and is shown to provide accurate localization. The method is also shown to locate accurately the increasing and decreasing gamma activities to the temporal and frontal lobes, respectively, in the case of a scintillating scotoma. The new method brings the advantages of the general linear model to the analysis of MEG data and should prove useful for the localization of changing patterns of activity across all frequency ranges including DC (sustained fields). © 2004 Elsevier Inc. All rights reserved.
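The Hilbert-envelope step of the method can be illustrated in isolation; this is a sketch only, omitting the beamformer and GLM stages, and the amplitude-modulated signal here is synthetic.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
# 20 Hz oscillation whose amplitude ramps linearly from 0 to 1
amplitude = t
signal = amplitude * np.sin(2.0 * np.pi * 20.0 * t)

# The magnitude of the analytic signal is the envelope of the oscillation
envelope = np.abs(hilbert(signal))

# Away from the edges, the envelope should track the linear ramp closely
mid = slice(200, 800)
max_err = float(np.max(np.abs(envelope[mid] - amplitude[mid])))
print(max_err)  # small
```

In the method described above, such an envelope time course at each source-space location becomes the dependent variable of the general linear model.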
Abstract:
We compare the Q parameter obtained from scalar, semi-analytical and full vector models for realistic transmission systems. One set of systems is operated in the linear regime, while the other uses solitons at high peak power. We report in detail on the different results obtained for the same system using different models. Polarisation mode dispersion is also taken into account, and a novel method for averaging Q parameters over several independent simulation runs is described. © 2006 Elsevier B.V. All rights reserved.
Abstract:
We present an implementation of the domain-theoretic Picard method for solving initial value problems (IVPs) introduced by Edalat and Pattinson [1]. Compared to Edalat and Pattinson's implementation, our algorithm uses a more efficient arithmetic based on an arbitrary precision floating-point library. Despite the additional overestimations due to floating-point rounding, we obtain a similar bound on the convergence rate of the produced approximations. Moreover, our convergence analysis is detailed enough to allow a static optimisation in the growth of the precision used in successive Picard iterations. Such optimisation greatly improves the efficiency of the solving process. Although a similar optimisation could be performed dynamically without our analysis, a static one gives us a significant advantage: we are able to predict the time it will take the solver to obtain an approximation of a certain (arbitrarily high) quality.
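The underlying Picard iteration (without the domain-theoretic and arbitrary-precision machinery the abstract describes) can be sketched on a fixed grid; the grid-based trapezoidal version below is an assumption made for illustration.

```python
import numpy as np

def picard(f, y0, t, iterations):
    """Picard iteration on a fixed grid:
    y_{k+1}(t) = y0 + integral_0^t f(s, y_k(s)) ds,
    with the integral evaluated by the cumulative trapezoidal rule."""
    y = np.full_like(t, y0)
    for _ in range(iterations):
        g = f(t, y)
        integral = np.concatenate(([0.0], np.cumsum(
            0.5 * (g[1:] + g[:-1]) * np.diff(t))))
        y = y0 + integral
    return y

# IVP y' = y, y(0) = 1 on [0, 1]; the iterates converge to exp(t)
t = np.linspace(0.0, 1.0, 201)
y = picard(lambda s, v: v, 1.0, t, 30)
err = abs(y[-1] - np.e)
print(err)  # small grid-level truncation error
```

The convergence-rate analysis in the work above plays the role of deciding, statically, how accuracy (here, the grid and iteration count) should grow from one iteration to the next.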
Abstract:
This paper presents a novel methodology for inferring the parameters of probabilistic models whose output noise follows a Student-t distribution. The method extends earlier work on models that are linear in their parameters to nonlinear multi-layer perceptrons (MLPs). We used an EM algorithm combined with a variational approximation, the evidence procedure, and an optimisation algorithm. The technique was tested on two regression applications: the first on a synthetic dataset, and the second on gas forward contract price data from the UK energy market. The results showed that forecasting accuracy is significantly improved by using Student-t noise models.
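A standard form of EM for a Student-t noise model, applied to a linear-in-parameters regression with fixed degrees of freedom, can be sketched as follows; this is a simplified illustration of why t-noise models resist outliers, not the paper's MLP/variational procedure.

```python
import numpy as np

def t_noise_linear_fit(X, y, nu=3.0, iterations=50):
    """EM for a linear model with Student-t output noise (fixed dof nu).
    E-step: latent precision weights w_i = (nu + 1) / (nu + r_i^2 / sigma^2).
    M-step: weighted least squares for the coefficients, then a sigma^2 update."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = np.mean((y - X @ beta) ** 2)
    for _ in range(iterations):
        r = y - X @ beta
        w = (nu + 1.0) / (nu + r ** 2 / sigma2)      # E-step
        XtW = X.T * w
        beta = np.linalg.solve(XtW @ X, XtW @ y)     # M-step
        sigma2 = np.sum(w * (y - X @ beta) ** 2) / len(y)
    return beta

# Line y = 2x with one gross outlier: the Student-t fit is far less
# distorted than ordinary least squares.
x = np.arange(10, dtype=float)
X = np.column_stack([np.ones_like(x), x])
y = 2.0 * x
y[-1] += 50.0
ols, *_ = np.linalg.lstsq(X, y, rcond=None)
robust = t_noise_linear_fit(X, y)
print(ols[1], robust[1])  # robust slope is much closer to the true 2
```

The heavy tails of the t distribution let the E-step downweight outlying residuals, which is the mechanism behind the improved forecasting accuracy reported above.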
Abstract:
This paper presents a new method for the optimisation of the mirror element spacing arrangement and operating temperature of linear Fresnel reflectors (LFR). The specific objective is to maximise available power output (i.e. exergy) and operational hours whilst minimising cost. The method is described in detail and compared to an existing design method prominent in the literature. Results are given in terms of the exergy per total mirror area (W/m2) and cost per exergy (US $/W). The new method is applied principally to the optimisation of an LFR in Gujarat, India, for which cost data have been gathered. It is recommended to use a spacing arrangement such that the onset of shadowing among mirror elements occurs at a transversal angle of 45°. This results in a cost per exergy of 2.3 $/W. Compared to the existing design approach, the exergy averaged over the year is increased by 9% to 50 W/m2 and an additional 122 h of operation per year are predicted. The ideal operating temperature at the surface of the absorber tubes is found to be 300 °C. It is concluded that the new method is an improvement over existing techniques and a significant tool for any future design work on LFR systems.
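The shadowing criterion can be illustrated with a simplified flat-mirror geometry; the pitch relation below, P = W(cos φ + sin φ · tan θ), is an assumed textbook-style form for adjacent elements of width W tilted at angle φ, not necessarily the paper's exact model.

```python
import math

def pitch_for_shadow_onset(width, tilt_deg, transversal_deg):
    """Mirror pitch at which adjacent flat elements of the given width,
    tilted at tilt_deg, begin to shade one another once the sun reaches
    the given transversal angle (assumed simplified relation:
    P = W * (cos(phi) + sin(phi) * tan(theta)))."""
    phi = math.radians(tilt_deg)
    theta = math.radians(transversal_deg)
    return width * (math.cos(phi) + math.sin(phi) * math.tan(theta))

# At the recommended 45-degree onset angle, tan(theta) = 1, so the
# pitch reduces to W * (cos(phi) + sin(phi)).
p = pitch_for_shadow_onset(0.5, 30.0, 45.0)
print(p)
```

Choosing a later onset angle increases tan θ and hence the pitch, trading land area against hours of unshaded operation, which is the trade-off the optimisation above resolves.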
Abstract:
The work described in this thesis is concerned with mechanisms of contact lens lubrication. There are three major driving forces in contact lens design and development: cost, convenience, and comfort. Lubrication, as reflected in the coefficient of friction, is becoming recognised as one of the major factors affecting the comfort of the current generation of contact lenses, which have benefited from several decades of design and production improvements. This work started with the study of the in-eye release of soluble macromolecules from a contact lens matrix. The vehicle for the study was the family of CIBA Vision Focus® DAILIES® daily disposable contact lenses, which is based on polyvinyl alcohol (PVA). The effective release of linear soluble PVA from DAILIES on the surface of the lens was shown to be beneficial in terms of patient comfort. There was a need to develop a novel characterisation technique in order to study these effects at surfaces; this led to the study of a novel tribological technique, which allowed the friction coefficients of different types of contact lenses to be measured reproducibly at genuinely low values. The tribometer needed the ability to accommodate the following features: (a) an approximation to eyelid load, (b) both new and ex-vivo lenses, (c) variations in substrate, and (d) different ocular lubricants (including tears). The tribometer and measuring technique developed in this way were used to examine the surface friction and lubrication mechanisms of two different types of contact lenses: daily disposables and silicone hydrogels. The results from the tribometer, in terms of both mean friction coefficient and the friction profiles obtained, allowed the various mechanisms used for surface enhancement now seen in the daily disposable contact lens sector to be evaluated.
The three major methods used are: release of soluble macromolecules (such as PVA) from the lens matrix, irreversible surface binding of a macromolecule (such as polyvinyl pyrrolidone) by charge transfer, and simple polymer adsorption (e.g. Pluronic) at the lens surface. The tribological technique was also used to examine trends in the development of silicone hydrogel contact lenses. The focus of silicone hydrogel design has now shifted from oxygen permeability to the improvement of surface properties. At present, tribological studies represent the most effective in vitro method of surface evaluation in relation to in-eye comfort.
Spatial pattern analysis of beta-amyloid (Abeta) deposits in Alzheimer disease by linear regression
Abstract:
The spatial patterns of discrete beta-amyloid (Abeta) deposits in brain tissue from patients with Alzheimer disease (AD) were studied using a statistical method based on linear regression, the results being compared with the more conventional variance/mean (V/M) method. Both methods suggested that Abeta deposits occurred in clusters (400 to <12,800 μm in diameter) in all but 1 of the 42 tissues examined. In many tissues, a regular periodicity of the Abeta deposit clusters parallel to the tissue boundary was observed. In 23 of 42 (55%) tissues, the two methods revealed essentially the same spatial patterns of Abeta deposits; in 15 of 42 (36%), the regression method indicated the presence of clusters at a scale not revealed by the V/M method; and in 4 of 42 (9%), there was no agreement between the two methods. Perceived advantages of the regression method are that there is a greater probability of detecting clustering at multiple scales, the dimension of larger Abeta clusters can be estimated more accurately, and the spacing between the clusters may be estimated. However, both methods may be useful, with the regression method providing greater resolution and the V/M method providing greater simplicity and ease of interpretation. Estimates of the distance between regularly spaced Abeta clusters were in the range 2,200-11,800 μm, depending on tissue and cluster size. The regular periodicity of Abeta deposit clusters in many tissues would be consistent with their development in relation to clusters of neurons that give rise to specific neuronal projections.
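The variance/mean (V/M) method referred to above can be sketched in one dimension: counts in contiguous unit fields are pooled into progressively larger fields and the V/M ratio is computed at each scale (V/M ≈ 1 for a random Poisson pattern, > 1 for clustering at that scale). The transect data below are synthetic, not from the study.

```python
import numpy as np

def variance_mean_ratio(counts, field_size):
    """Variance/mean ratio of deposit counts after pooling adjacent unit
    fields into blocks of field_size. V/M ~ 1 suggests a random (Poisson)
    pattern; V/M > 1 suggests clustering at that scale."""
    counts = np.asarray(counts, dtype=float)
    n = (len(counts) // field_size) * field_size
    pooled = counts[:n].reshape(-1, field_size).sum(axis=1)
    return float(pooled.var() / pooled.mean())

# Synthetic transect: deposits concentrated in alternating runs of
# eight unit fields, i.e. clusters roughly eight fields wide.
clustered = np.tile([3] * 8 + [0] * 8, 8)
vm_small = variance_mean_ratio(clustered, 1)
vm_large = variance_mean_ratio(clustered, 8)
print(vm_small, vm_large)  # the ratio peaks near the cluster scale
```

The scale at which the V/M ratio peaks estimates the cluster dimension, which is the quantity the regression method above resolves with greater precision.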
Abstract:
Multiple regression analysis is a complex statistical method with many potential uses. It has also become one of the most abused of all statistical procedures, since anyone with a database and suitable software can carry it out. An investigator should always have a clear hypothesis in mind before carrying out such a procedure, together with knowledge of the limitations of each aspect of the analysis. In addition, multiple regression is probably best used in an exploratory context, identifying variables that might profitably be examined by more detailed studies. Where there are many variables potentially influencing Y, they are likely to be intercorrelated and to account for relatively small amounts of the variance. Any analysis in which R squared is less than 50% should be suspect as probably not indicating the presence of significant variables. A further problem relates to sample size. It is often stated that the number of subjects or patients must be at least 5-10 times the number of variables included in the study [5]. This advice should be taken only as a rough guide, but it does indicate that the variables included should be selected with great care, as inclusion of an obviously unimportant variable may have a significant impact on the sample size required.
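The R-squared check and the 5-10 subjects-per-variable rule of thumb can be made concrete with a small sketch; the data are synthetic and the helper names are invented for illustration.

```python
import numpy as np

def r_squared(X, y):
    """Coefficient of determination of an ordinary least-squares fit
    (X should include a column of ones for the intercept)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residual = y - X @ beta
    return 1.0 - residual.var() / y.var()

def minimum_sample_size(n_predictors, subjects_per_variable=10):
    """The rough guide cited in the text: 5-10 subjects per predictor."""
    return n_predictors * subjects_per_variable

# y depends strongly on x1 and not at all on x2: R^2 is high, and
# dropping the irrelevant x2 barely changes it.
rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = rng.normal(size=100)
y = 3.0 * x1 + rng.normal(scale=0.5, size=100)
r2_full = r_squared(np.column_stack([np.ones(100), x1, x2]), y)
r2_reduced = r_squared(np.column_stack([np.ones(100), x1]), y)
n_min = minimum_sample_size(2)
print(r2_full, r2_reduced, n_min)
```

Comparing the full and reduced fits is one simple way to spot an "obviously unimportant variable" before it inflates the required sample size.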
Abstract:
In this paper we examine the equilibrium states of finite amplitude flow in a horizontal fluid layer with differential heating between the two rigid boundaries. The solutions to the Navier-Stokes equations are obtained by means of a perturbation method for evaluating the Landau constants and through a Newton-Raphson iterative method that results from the Fourier expansion of the solutions that bifurcate above the linear stability threshold of infinitesimal disturbances. The results obtained from these two different methods of evaluating the convective flow are compared in the neighborhood of the critical Rayleigh number. We find that for small Prandtl numbers the discrepancy of the two methods is noticeable. © 2009 The Physical Society of Japan.
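The Newton-Raphson step can be illustrated generically; this is a sketch on a toy two-equation system, not the paper's system of Fourier-expansion coefficients.

```python
import numpy as np

def newton_raphson(f, jac, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson for a nonlinear system f(x) = 0:
    solve J(x_k) dx = -f(x_k), then update x_{k+1} = x_k + dx."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = np.atleast_1d(f(x))
        if np.linalg.norm(fx) < tol:
            break
        dx = np.linalg.solve(np.atleast_2d(jac(x)), -fx)
        x = x + dx
    return x

# Toy two-equation system: x^2 + y^2 = 2 and x = y, root at (1, 1)
f = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 2.0, v[0] - v[1]])
jac = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
sol = newton_raphson(f, jac, [2.0, 0.5])
print(sol)  # -> approximately [1. 1.]
```

In the study above, the unknowns of such a system are the Fourier coefficients of the bifurcating solution, and the iteration is started just above the linear stability threshold.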
Abstract:
We analyze the stochastic creation of a single bound state (BS) in a random potential with compact support. We study both the Hermitian Schrödinger equation and non-Hermitian Zakharov-Shabat systems. These problems are of special interest in the inverse scattering method for the Korteweg-de Vries and nonlinear Schrödinger equations, since soliton solutions of these two equations correspond to the BSs of the two aforementioned linear eigenvalue problems. Analytical expressions for the average width of the potential required for the creation of the first BS are given in the approximation of a delta-correlated Gaussian potential, and different scenarios of eigenvalue creation are additionally discussed for the non-Hermitian case.
Abstract:
The first part of the thesis compares Roth's method with other methods, in particular the method of separation of variables and the finite cosine transform method, for solving certain elliptic partial differential equations arising in practice. In particular, we consider the solution of steady-state problems associated with insulated conductors in rectangular slots. Roth's method has two main disadvantages, namely the slow rate of convergence of the double Fourier series and the restrictive form of the allowable boundary conditions. A combined Roth-separation of variables method is derived to remove the restrictions on the form of the boundary conditions, and various Chebyshev approximations are used to try to improve the rate of convergence of the series. All the techniques are then applied to the Neumann problem arising from balanced rectangular windings in a transformer window. Roth's method is then extended to deal with problems other than those resulting from static fields. First we consider a rectangular insulated conductor in a rectangular slot when the current is varying sinusoidally with time. An approximate method is also developed and compared with the exact method. The approximation is then used to consider the problem of an insulated conductor in a slot facing an air gap. We also consider the exact method applied to the determination of the eddy-current loss produced in an isolated rectangular conductor by a transverse magnetic field varying sinusoidally with time. The results obtained using Roth's method are critically compared with those obtained by other authors using different methods. The final part of the thesis investigates further the application of Chebyshev methods to the solution of elliptic partial differential equations, an area where Chebyshev approximations have rarely been used. A Poisson equation with a polynomial term is treated first, followed by a slot problem in cylindrical geometry.
Using interior point algorithms for the solution of linear programs with special structural features
Abstract:
Linear Programming (LP) is a powerful decision-making tool extensively used in various economic and engineering activities. In the early stages the success of LP was mainly due to the efficiency of the simplex method. After the appearance of Karmarkar's paper, the focus of most research shifted to the field of interior point methods. The present work is concerned with investigating and efficiently implementing the latest techniques in this field, taking sparsity into account. The performance of these implementations on different classes of LP problems is reported here. The preconditioned conjugate gradient method is one of the most powerful tools for the solution of the least squares problem present in every iteration of all interior point methods. The effect of using different preconditioners on a range of problems with various condition numbers is presented. Decomposition algorithms have been one of the main fields of research in linear programming over the last few years. After reviewing the latest decomposition techniques, three promising methods were chosen and implemented. Sparsity is again a consideration, and suggestions have been included to allow improvements when solving problems with these methods. Finally, experimental results on randomly generated data are reported and compared with an interior point method. The efficient implementation of the decomposition methods considered in this study requires the solution of quadratic subproblems. A review of recent work on algorithms for convex quadratic programming was performed. The most promising algorithms are discussed and implemented, taking sparsity into account. The performance of these algorithms on randomly generated separable and non-separable problems is also reported.
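The preconditioned conjugate gradient method mentioned above can be sketched with the simplest choice of preconditioner, a Jacobi (diagonal) one; the test system below is a small synthetic symmetric positive definite matrix, not a normal-equations system from the thesis.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=1000):
    """Conjugate gradient with a diagonal (Jacobi) preconditioner:
    M = diag(A), so applying M^-1 is an element-wise scaling of the residual."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small symmetric positive definite system (normal-equations style)
rng = np.random.default_rng(1)
B = rng.normal(size=(20, 5))
A = B.T @ B + np.eye(5)
b = rng.normal(size=5)
x = pcg(A, b, 1.0 / np.diag(A))
res_norm = float(np.linalg.norm(A @ x - b))
print(res_norm)  # ~ 0
```

In an interior point method the matrix A would be the (sparse) normal-equations matrix of each iteration's least squares problem, and the quality of the preconditioner largely determines the iteration count.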
Abstract:
Numerical techniques have been finding increasing use in all aspects of fracture mechanics, and often provide the only means of analyzing fracture problems. The work presented here is concerned with the application of the finite element method to cracked structures. The present work was directed towards the establishment of a comprehensive two-dimensional finite element, linear elastic, fracture analysis package. Significant progress has been made to this end, and features which can now be studied include multi-crack-tip mixed-mode problems involving partial crack closure. The crack tip core element was refined, and special local crack tip elements were employed to reduce the element density in the neighbourhood of the core region. The work builds upon experience gained by previous research workers and, as part of the general development, the program was modified to incorporate the eight-node isoparametric quadrilateral element. Also, a more flexible solving routine was developed, which provided a very compact method of solving large sets of simultaneous equations stored in segmented form. To complement the finite element analysis programs, an automatic mesh generation program has been developed, which enables complex problems, involving fine element detail, to be investigated with a minimum of input data. The scheme has proven to be versatile and reasonably easy to implement. Numerous examples are given to demonstrate the accuracy and flexibility of the finite element technique.
Abstract:
It is well established that hydrodynamic journal bearings are responsible for self-excited vibrations and have the effect of lowering the critical speeds of rotor systems. The forces within the oil film wedge, generated by the vibrating journal, may be represented by displacement and velocity coefficients, thus allowing the dynamical behaviour of the rotor to be analysed both for stability purposes and for anticipating the response to unbalance. However, information describing these coefficients is sparse, misleading, and very often not applicable to industrial-type bearings. Results of a combined analytical and experimental investigation into the hydrodynamic oil film coefficients operating in the laminar region are therefore presented, the analysis being applied to a 120 degree partial journal bearing having a 5.0 in diameter journal and an L/D ratio of 1.0. The theoretical analysis shows that for this type of popular bearing, the eight linearized coefficients do not accurately describe the behaviour of the vibrating journal based on the theory of small perturbations, because they are masked by the presence of nonlinearity. A method is developed using the second-order terms of a Taylor expansion, whereby design charts are provided which predict the twenty-eight force coefficients both for aligned journals and for varying amounts of journal misalignment. The resulting non-linear equations of motion are solved using a modified Newton-Raphson method whereby the whirl trajectories are obtained, thus providing a physical appreciation of the bearing characteristics under dynamically loaded conditions.
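The count of twenty-eight coefficients is consistent with a second-order Taylor expansion of the two oil-film force components in the four perturbation variables (x, y, and their velocities): four first-order terms plus ten distinct second-order terms per component, giving 2 × (4 + 10) = 28. A quick combinatorial check (illustrative only, assuming that decomposition):

```python
from itertools import combinations_with_replacement

# Perturbation variables of the journal centre: displacements and velocities
variables = ["x", "y", "xdot", "ydot"]
first_order = len(variables)                                           # 4 per force
second_order = len(list(combinations_with_replacement(variables, 2)))  # 10 per force
total = 2 * (first_order + second_order)   # two force components
print(total)  # -> 28
```

The eight linearized coefficients of the small-perturbation theory are just the first-order subset (2 forces × 4 variables) of this expansion.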