35 results for the least squares distance method
Abstract:
The problem of extracting pore size distributions from characterization data is solved here with particular reference to adsorption. The technique developed is based on a finite element collocation discretization of the adsorption integral, with fitting of the isotherm data by least squares using regularization. A rapid and simple technique for ensuring non-negativity of the solutions is also developed, which modifies any original solution containing negative values. The technique yields stable and converged solutions, and is implemented in the package RIDFEC. The package is demonstrated to be robust, yielding results which are less sensitive to experimental error than conventional methods, with fitting errors matching the known data error. It is shown that the choice of relative or absolute error norm in the least-squares analysis is best based on the kind of error in the data. (C) 1998 Elsevier Science Ltd. All rights reserved.
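The regularized inversion described above can be sketched in miniature: a Tikhonov penalty stabilizes the ill-posed discretized integral, and negative entries of the solution are clipped to zero as a crude stand-in for the paper's negativity-repair step. The kernel matrix and data below are invented; this is not the RIDFEC code.

```python
def solve(M, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]   # augmented copy
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][k] * x[k] for k in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

def regularized_lsq(A, q, lam):
    """Solve min ||A f - q||^2 + lam ||f||^2, then clip negative entries."""
    m, n = len(A), len(A[0])
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) + (lam if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Atq = [sum(A[k][i] * q[k] for k in range(m)) for i in range(n)]
    return [max(v, 0.0) for v in solve(AtA, Atq)]

# invented 3x2 kernel whose exact solution is near f = [1, 1]
A = [[1.0, 0.9], [0.9, 1.0], [0.5, 0.4]]
q = [1.9, 1.9, 0.9]
f = regularized_lsq(A, q, 0.01)
```

Without the `lam` term the normal equations here are nearly singular; the penalty trades a small bias for stability, which is the point of regularizing the adsorption integral.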
Abstract:
A new algorithm has been developed for smoothing the surfaces in finite element formulations of contact-impact. A key feature of this method is that the smoothing is done implicitly by constructing smooth signed distance functions for the bodies. These functions are then employed for the computation of the gap and other variables needed for implementation of contact-impact. The smoothed signed distance functions are constructed by a moving least-squares approximation with a polynomial basis. Results show that when nodes are placed on a surface, the surface can be reproduced with an error of about one per cent or less with either a quadratic or a linear basis. With a quadratic basis, the method exactly reproduces a circle or a sphere even for coarse meshes. Results are presented for contact problems involving the contact of circular bodies. Copyright (C) 2002 John Wiley & Sons, Ltd.
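A one-dimensional moving least-squares fit illustrates the approximation used above: a polynomial basis is fitted locally at each evaluation point with distance-decaying weights, and a quadratic basis reproduces quadratic data exactly. The sample curve and Gaussian weight function below are invented for illustration, not an actual contact surface.

```python
import math

def solve(M, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][k] * x[k] for k in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

def mls_eval(xs, ys, x0, degree=2, h=0.5):
    """Moving least-squares value at x0: weighted polynomial fit centred there."""
    n = degree + 1
    w = [math.exp(-((x - x0) / h) ** 2) for x in xs]   # Gaussian weights
    A = [[sum(wi * x ** (i + j) for wi, x in zip(w, xs)) for j in range(n)]
         for i in range(n)]
    b = [sum(wi * (x ** i) * y for wi, x, y in zip(w, xs, ys)) for i in range(n)]
    coef = solve(A, b)
    return sum(c * x0 ** i for i, c in enumerate(coef))

xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [x * x for x in xs]            # nodes sampled from the quadratic y = x^2
val = mls_eval(xs, ys, 0.3)         # quadratic basis reproduces x^2 exactly
```

This mirrors the paper's observation that a quadratic basis exactly reproduces quadratic geometry (circles, spheres) while a linear basis is only approximate.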
Abstract:
1. Cluster analysis of reference sites with similar biota is the initial step in creating River Invertebrate Prediction and Classification System (RIVPACS) and similar river bioassessment models such as Australian River Assessment System (AUSRIVAS). This paper describes and tests an alternative prediction method, Assessment by Nearest Neighbour Analysis (ANNA), based on the same philosophy as RIVPACS and AUSRIVAS but without the grouping step that some people view as artificial. 2. The steps in creating ANNA models are: (i) weighting the predictor variables using a multivariate approach analogous to principal axis correlations, (ii) calculating the weighted Euclidean distance from a test site to the reference sites based on the environmental predictors, (iii) predicting the faunal composition based on the nearest reference sites and (iv) calculating an observed/expected (O/E) ratio analogous to RIVPACS/AUSRIVAS. 3. The paper compares AUSRIVAS and ANNA models on 17 datasets representing a variety of habitats and seasons. First, it examines each model's regressions for Observed versus Expected number of taxa, including the r(2), intercept and slope. Second, the two models' assessments of 79 test sites in New Zealand are compared. Third, the models are compared on test and presumed reference sites along a known trace metal gradient. Fourth, ANNA models are evaluated for western Australia, a geographically distinct region of Australia. The comparisons demonstrate that ANNA and AUSRIVAS are generally equivalent in performance, although ANNA turns out to be potentially more robust for the O versus E regressions and is potentially more accurate on the trace metal gradient sites. 4. The ANNA method is recommended for use in bioassessment of rivers, at least for corroborating the results of the well established AUSRIVAS- and RIVPACS-type models, if not to replace them.
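Steps (ii)-(iv) of the ANNA procedure can be sketched directly: a weighted Euclidean distance ranks reference sites, taxon probabilities are taken as occurrence frequencies among the k nearest sites, and their sum gives the expected taxon count E. The environmental weights, reference sites and presence/absence data below are invented for illustration.

```python
import math

def expected_taxa(test_env, ref_env, ref_taxa, weights, k=2):
    """Expected taxon count E from the k nearest reference sites."""
    dists = sorted(
        (math.sqrt(sum(w * (a - b) ** 2
                       for w, a, b in zip(weights, test_env, env))), i)
        for i, env in enumerate(ref_env))
    nearest = [i for _, i in dists[:k]]
    ntaxa = len(ref_taxa[0])
    # taxon probability = frequency of occurrence among the nearest sites
    probs = [sum(ref_taxa[i][t] for i in nearest) / k for t in range(ntaxa)]
    return sum(probs)

ref_env = [[1.0, 2.0], [1.1, 2.1], [5.0, 9.0]]   # environmental predictors
ref_taxa = [[1, 1, 0], [1, 0, 1], [0, 0, 1]]     # presence/absence per site
E = expected_taxa([1.05, 2.05], ref_env, ref_taxa, weights=[1.0, 0.5])
OE = 2 / E                                        # observed = 2 taxa at the test site
```

An O/E near 1 indicates the test site supports the fauna expected from comparable reference sites, exactly as in RIVPACS/AUSRIVAS-style assessment.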
Abstract:
In this paper we propose a new identification method based on the residual white noise autoregressive criterion (Pukkila et al., 1990) to select the order of VARMA structures. Results from extensive simulation experiments based on different model structures with varying number of observations and number of component series are used to demonstrate the performance of this new procedure. We also use economic and business data to compare the model structures selected by this order selection method with those identified in other published studies.
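The selection idea rests on a whiteness diagnostic: a candidate order is adequate when the fitted model's residuals show no remaining autocorrelation. The Ljung-Box-style portmanteau statistic below is a generic stand-in for that diagnostic, not the residual white noise autoregressive criterion of Pukkila et al. itself.

```python
import random

def portmanteau_q(res, max_lag):
    """Ljung-Box-style statistic: large values flag autocorrelated residuals."""
    n = len(res)
    mean = sum(res) / n
    c0 = sum((r - mean) ** 2 for r in res)
    q = 0.0
    for k in range(1, max_lag + 1):
        ck = sum((res[i] - mean) * (res[i - k] - mean) for i in range(k, n))
        q += (ck / c0) ** 2 / (n - k)
    return n * (n + 2) * q

random.seed(0)
white = [random.gauss(0.0, 1.0) for _ in range(200)]   # residuals of an adequate order
patterned = [(-1.0) ** i for i in range(200)]          # under-fitted: strong lag-1 structure
```

In an order-selection loop one would accept the smallest candidate order whose residual statistic falls below the appropriate chi-squared cut-off.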
Abstract:
OctVCE is a Cartesian cell CFD code produced especially for numerical simulations of shock and blast wave interactions with complex geometries, in particular, from explosions. Virtual Cell Embedding (VCE) was chosen as its Cartesian cell kernel for its simplicity and sufficiency for practical engineering design problems. The code uses a finite-volume formulation of the unsteady Euler equations with a second order explicit Runge-Kutta Godunov (MUSCL) scheme. Gradients are calculated using a least-squares method with a minmod limiter. Flux solvers used are AUSM, AUSMDV and EFM. No fluid-structure coupling or chemical reactions are allowed, but gas models can be perfect gas and JWL or JWLB for the explosive products. This report also describes the code's 'octree' mesh adaptive capability and point-inclusion query procedures for the VCE geometry engine. Finally, some space will also be devoted to describing code parallelization using the shared-memory OpenMP paradigm. The user manual to the code is to be found in the companion report 2007/13.
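The limited-gradient step can be illustrated in one dimension, where the least-squares slope over the two neighbouring cells of a uniform stencil reduces to one-sided differences combined by the minmod limiter. This is a much-reduced analogue of the code's 3-D cell gradients, not OctVCE source.

```python
def minmod(a, b):
    """Return 0 at extrema (opposite signs), else the smaller-magnitude slope."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slope(u_left, u_centre, u_right, dx):
    """Minmod-limited slope from the two one-sided differences."""
    return minmod((u_centre - u_left) / dx, (u_right - u_centre) / dx)

smooth = limited_slope(0.0, 1.0, 2.0, 1.0)    # monotone data keeps its slope
extremum = limited_slope(0.0, 1.0, 0.0, 1.0)  # limiter flattens local extrema
```

Flattening the slope at extrema is what prevents the MUSCL reconstruction from introducing new oscillations near shocks.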
Abstract:
Background: The Royal Australian and New Zealand College of Psychiatrists is co-ordinating the development of clinical practice guidelines (CPGs) in psychiatry, funded under the National Mental Health Strategy (Australia) and the New Zealand Health Funding Authority. This paper presents CPGs for schizophrenia and related disorders. Over the past decade schizophrenia has become more treatable than ever before. A new generation of drug therapies, a renaissance of psychological and psychosocial interventions and a first generation of reform within the specialist mental health system have combined to create an evidence-based climate of realistic optimism. Progressive neuroscientific advances hold out the strong possibility of more definitive biological treatments in the near future. However, this improved potential for better outcomes and quality of life for people with schizophrenia has not been translated into reality in Australia. The efficacy-effectiveness gap is wider for schizophrenia than any other serious medical disorder. Therapeutic nihilism, under-resourcing of services and a stalling of the service reform process, poor morale within specialist mental health services, a lack of broad-based recovery and life support programs, and a climate of tenacious stigma and consequent lack of concern for people with schizophrenia are the contributory causes for this failure to effectively treat. These guidelines therefore tackle only one element in the endeavour to reduce the impact of schizophrenia. They distil the current evidence-base and make recommendations based on the best available knowledge. Method: A comprehensive literature review (1990-2003) was conducted, including all Cochrane schizophrenia reviews and all relevant meta-analyses, and a number of recent international clinical practice guidelines were consulted. A series of drafts were refined by the expert committee and enhanced through a bi-national consultation process. 
Treatment recommendations: This guideline provides evidence-based recommendations for the management of schizophrenia by treatment type and by phase of illness. The essential features of the guidelines are: (i) Early detection and comprehensive treatment of first episode cases is a priority since the psychosocial and possibly the biological impact of illness can be minimized and outcome improved. An optimistic attitude on the part of health professionals is an essential ingredient from the outset and across all phases of illness. (ii) Comprehensive and sustained intervention should be assured during the initial 3-5 years following diagnosis since course of illness is strongly influenced by what occurs in this 'critical period'. Patients should not have to 'prove chronicity' before they gain consistent access and tenure to specialist mental health services. (iii) Antipsychotic medication is the cornerstone of treatment. These medicines have improved in quality and tolerability, yet should be used cautiously and in a more targeted manner than in the past. The treatment of choice for most patients is now the novel antipsychotic medications because of their superior tolerability and, in particular, the reduced risk of tardive dyskinesia. This is particularly so for the first episode patient where, due to superior tolerability, novel agents are the first, second and third line choice. These novel agents are nevertheless associated with potentially serious medium to long-term side-effects of their own for which patients must be carefully monitored. Conventional antipsychotic medications in low dosage may still have a role in a small proportion of patients, where there has been full remission and good tolerability; however, the indications are shrinking progressively. These principles are now accepted in most developed countries. (iv) Clozapine should be used early in the course, as soon as treatment resistance to at least two antipsychotics has been demonstrated.
This usually means incomplete remission of positive symptomatology, but clozapine may also be considered where there are pervasive negative symptoms or significant or persistent suicidal risk is present. (v) Comprehensive psychosocial interventions should be routinely available to all patients and their families, and provided by appropriately trained mental health professionals with time to devote to the task. This includes family interventions, cognitive-behaviour therapy, vocational rehabilitation and other forms of therapy, especially for comorbid conditions, such as substance abuse, depression and anxiety. (vi) The social and cultural environment of people with schizophrenia is an essential arena for intervention. Adequate shelter, financial security, access to meaningful social roles and availability of social support are essential components of recovery and quality of life. (vii) Interventions should be carefully tailored to phase and stage of illness, and to gender and cultural background. (viii) Genuine involvement of consumers and relatives in service development and provision should be standard. (ix) Maintenance of good physical health and prevention and early treatment of serious medical illness has been seriously neglected in the management of schizophrenia, and results in premature death and widespread morbidity. Quality of medical care for people with schizophrenia should be equivalent to the general community standard. (x) General practitioners (GPs) should always be closely involved in the care of people with schizophrenia. However, this should be truly shared care, and sole care by a GP with minimal or no specialist involvement is not recommended. Optimal treatment of schizophrenia requires a multidisciplinary team approach with a consultant psychiatrist centrally involved.
Abstract:
To date, very few families of critical sets for Latin squares are known. The only previously known method for constructing critical sets involves taking a critical set which is known to satisfy certain strong initial conditions and using a doubling construction. This construction can be applied to the known critical sets in back circulant Latin squares of even order. However, the doubling construction cannot be applied to critical sets in back circulant Latin squares of odd order. In this paper a family of critical sets is identified for Latin squares which are the product of the Latin square of order 2 with a back circulant Latin square of odd order. The proof that each element of the critical set is an essential part of the reconstruction process relies on the proof of the existence of a large number of Latin interchanges.
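The squares in question are easy to construct explicitly: a back circulant Latin square of order n has entry (i + j) mod n, and the standard direct product with the order-2 square yields a Latin square of order 2n. The construction below is a sketch of those objects only; the paper's critical sets themselves are not reproduced.

```python
def back_circulant(n):
    """Back circulant Latin square of order n: entry (i + j) mod n."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

def product_with_order2(B):
    """Direct product of the order-2 Latin square with B, giving order 2n."""
    n = len(B)
    K = [[0, 1], [1, 0]]                       # the Latin square of order 2
    P = [[0] * (2 * n) for _ in range(2 * n)]
    for a in range(2):
        for b in range(2):
            for i in range(n):
                for j in range(n):
                    # block (a, b) holds a shifted copy of B
                    P[a * n + i][b * n + j] = K[a][b] * n + B[i][j]
    return P

def is_latin(L):
    """Check every row and column is a permutation of 0..n-1."""
    syms = set(range(len(L)))
    return (all(set(row) == syms for row in L) and
            all({row[j] for row in L} == syms for j in range(len(L))))

B5 = back_circulant(5)           # odd order, as in the family studied above
P10 = product_with_order2(B5)    # order 2 x 5 = 10
```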
Abstract:
When linear equality constraints are invariant through time they can be incorporated into estimation by restricted least squares. If, however, the constraints are time-varying, this standard methodology cannot be applied. In this paper we show how to incorporate linear time-varying constraints into the estimation of econometric models. The method involves the augmentation of the observation equation of a state-space model prior to estimation by the Kalman filter. Numerical optimisation routines are used for the estimation. A simple example drawn from demand analysis is used to illustrate the method and its application.
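The augmentation idea can be shown in a static two-parameter analogue: a time-varying constraint R'beta = r enters as one extra, heavily weighted observation stacked onto the design, so ordinary least squares (nearly) enforces it. The paper does this inside the observation equation of a state-space model and estimates by Kalman filter; the data and constraint below are invented.

```python
def constrained_ls(X, y, R, r, weight=1e8):
    """Two-parameter least squares with R[0]*b0 + R[1]*b1 = r imposed as one
    heavily weighted extra observation (static analogue of augmenting the
    observation equation before Kalman filtering)."""
    sw = weight ** 0.5
    rows = X + [[sw * R[0], sw * R[1]]]   # stacked constraint row
    rhs = y + [sw * r]
    # 2x2 normal equations solved in closed form
    a00 = sum(row[0] * row[0] for row in rows)
    a01 = sum(row[0] * row[1] for row in rows)
    a11 = sum(row[1] * row[1] for row in rows)
    b0 = sum(row[0] * v for row, v in zip(rows, rhs))
    b1 = sum(row[1] * v for row, v in zip(rows, rhs))
    det = a00 * a11 - a01 * a01
    return [(b0 * a11 - b1 * a01) / det, (a00 * b1 - a01 * b0) / det]

X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = [1.0, 2.0, 3.0]
beta = constrained_ls(X, y, R=[1.0, 1.0], r=2.5)   # constraint: b0 + b1 = 2.5
```

Because the constraint row is re-stacked at each period, R and r are free to vary through time, which is exactly what restricted least squares cannot accommodate.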
Abstract:
This article examines the efficiency of the National Football League (NFL) betting market. The standard ordinary least squares (OLS) regression methodology is replaced by a probit model. This circumvents potential econometric problems, and allows us to implement more sophisticated betting strategies where bets are placed only when there is a relatively high probability of success. In-sample tests indicate that probit-based betting strategies generate statistically significant profits. Whereas the profitability of a number of these betting strategies is confirmed by out-of-sample testing, there is some inconsistency among the remaining out-of-sample predictions. Our results also suggest that widely documented inefficiencies in this market tend to dissipate over time.
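The selective betting rule described above reduces to thresholding a probit probability: bet only when the model's predicted win probability is high enough. The coefficients below are invented for illustration, not the paper's estimates.

```python
import math

def probit_prob(x, beta):
    """Probit win probability Phi(x'beta), via the standard normal CDF."""
    z = sum(b * v for b, v in zip(beta, x))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def place_bet(x, beta, threshold=0.6):
    """Bet only when the predicted probability clears the threshold."""
    return probit_prob(x, beta) >= threshold

confident = place_bet([2.0], [1.0])   # Phi(2) is about 0.977, so bet
tossup = place_bet([0.0], [1.0])      # Phi(0) = 0.5, below threshold, no bet
```

Unlike OLS on point spreads, the probit output is a genuine probability, which is what makes a "bet only above p" strategy well defined.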
Abstract:
In this paper we present the composite Euler method for the strong solution of stochastic differential equations driven by d-dimensional Wiener processes. This method is a combination of the semi-implicit Euler method and the implicit Euler method. At each step either the semi-implicit Euler method or the implicit Euler method is used in order to obtain better stability properties. We give criteria for selecting the semi-implicit Euler method or the implicit Euler method. For the linear test equation, the convergence properties of the composite Euler method depend on the criteria for selecting the methods. Numerical results suggest that the convergence properties of the composite Euler method applied to nonlinear SDEs are the same as those applied to linear equations. The stability properties of the composite Euler method are shown to be far superior to those of the Euler methods, and numerical results show that the composite Euler method is a very promising method. (C) 2001 Elsevier Science B.V. All rights reserved.
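For the scalar linear test SDE dX = aX dt + bX dW, both one-step maps are simple multiplicative factors, so a composite step is easy to sketch. The switching rule used here (take whichever update has the smaller amplification factor) is an invented stand-in for the paper's selection criteria.

```python
import math
import random

def composite_euler(a, b, x0, T, n, seed=1):
    """Composite Euler path for the linear test SDE dX = a*X dt + b*X dW.
    Each step applies either the semi-implicit (drift-implicit) or the fully
    implicit Euler map; the smaller-amplification choice below is an
    illustrative criterion, not the paper's."""
    random.seed(seed)
    h = T / n
    x = x0
    for _ in range(n):
        dW = random.gauss(0.0, math.sqrt(h))
        semi = (1.0 + b * dW) / (1.0 - a * h)    # drift-implicit factor
        full = 1.0 / (1.0 - a * h - b * dW)      # fully implicit factor
        x *= semi if abs(semi) <= abs(full) else full
    return x

# stiff drift (a = -5): both implicit maps keep the path bounded
xT = composite_euler(a=-5.0, b=0.1, x0=1.0, T=1.0, n=100)
```

For a < 0 both factors have magnitude below one for small noise increments, so the composite path decays toward zero as the exact solution does, illustrating the stability advantage over explicit Euler.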
Abstract:
The microwave and thermal cure processes for the epoxy-amine systems N,N,N',N'-tetraglycidyl-4,4'-diaminodiphenyl methane (TGDDM) with diaminodiphenyl sulfone (DDS) and diaminodiphenyl methane (DDM) have been investigated. The DDS system was studied at a single cure temperature of 433 K and a single stoichiometry of 27 wt% and the DDM system was studied at two stoichiometries, 19 and 32 wt%, and a range of temperatures between 373 and 413 K. The best values of the kinetic rate parameters for the consumption of amines have been determined by a least squares curve fit to a model for epoxy-amine cure. The activation energies for the rate parameters for the MY721/DDM system were determined, as was the overall activation energy for the cure reaction, which was found to be 62 kJ mol(-1). No evidence was found for any specific effect of the microwave radiation on the rate parameters, and the systems were both found to be characterized by a negative substitution effect. Copyright (C) 2001 John Wiley & Sons, Ltd.
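The fitting step can be sketched with a toy first-order cure model: a rate constant k is found by least squares (ternary search on the unimodal error), and an activation energy follows from the Arrhenius relation between two temperatures. The kinetic model and data are illustrative, not the paper's cure mechanism; the reported 62 kJ mol(-1) appears below only as a round-trip check.

```python
import math

def fit_rate_constant(ts, alphas, k_lo=0.0, k_hi=1.0, iters=100):
    """Least-squares k for alpha(t) = 1 - exp(-k t), via ternary search on
    the unimodal sum of squared errors."""
    def sse(k):
        return sum((1.0 - math.exp(-k * t) - a) ** 2 for t, a in zip(ts, alphas))
    for _ in range(iters):
        m1 = k_lo + (k_hi - k_lo) / 3.0
        m2 = k_hi - (k_hi - k_lo) / 3.0
        if sse(m1) < sse(m2):
            k_hi = m2
        else:
            k_lo = m1
    return 0.5 * (k_lo + k_hi)

def activation_energy(k1, T1, k2, T2, R=8.314):
    """Arrhenius: ln(k2/k1) = -(E/R)(1/T2 - 1/T1)."""
    return -R * math.log(k2 / k1) / (1.0 / T2 - 1.0 / T1)

ts = [1.0, 2.0, 3.0, 4.0, 5.0]
alphas = [1.0 - math.exp(-0.2 * t) for t in ts]   # synthetic data, true k = 0.2
k_fit = fit_rate_constant(ts, alphas)

# round trip at the DDM study temperatures: E = 62 kJ/mol in, 62 kJ/mol out
k_ratio = math.exp(-62000.0 / 8.314 * (1.0 / 413.0 - 1.0 / 373.0))
E = activation_energy(1.0, 373.0, k_ratio, 413.0)
```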
Abstract:
This paper examines the trade relationship between the Gulf Cooperation Council (GCC) and the European Union (EU). A simultaneous equation regression model is developed and estimated to assist with the analysis. The regression results, using both the two stage least squares (2SLS) and ordinary least squares (OLS) estimation methods, reveal the existence of feedback effects between the two economic integrations. The results also show that during times of slack in oil prices, the GCC income from its investments overseas helped to finance its imports from the EU.
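The 2SLS versus OLS comparison can be sketched with one endogenous regressor and one instrument; with no intercept, 2SLS is just two successive OLS fits. The tiny dataset below is constructed so the structural slope is exactly 2, with an error term orthogonal to the instrument but correlated with the regressor, so OLS is biased while 2SLS recovers the slope. The GCC-EU model itself is a multi-equation system, not this toy.

```python
def ols_slope(x, y):
    """No-intercept OLS slope of y on x."""
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

def tsls_slope(z, x, y):
    """Two-stage least squares: project x on the instrument z, then
    regress y on the fitted values."""
    g = ols_slope(z, x)              # stage 1: first-stage projection
    xhat = [g * zi for zi in z]
    return ols_slope(xhat, y)        # stage 2: structural slope

z = [1.0, 2.0, 3.0]                       # instrument
u = [-5.0, 4.0, -1.0]                     # error, orthogonal to z (sum z*u = 0)
x = [zi + ui for zi, ui in zip(z, u)]     # endogenous regressor (correlated with u)
y = [2.0 * xi + ui for xi, ui in zip(x, u)]   # structural equation, slope 2
```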
Abstract:
The bulk free radical copolymerization of 2-hydroxyethyl methacrylate (HEMA) with N-vinyl-2-pyrrolidone (VP) was carried out to low conversions at 50 degrees C, using benzoyl peroxide (BPO) as initiator. The compositions of the copolymers were determined using C-13 NMR spectroscopy. The conversion of monomer to polymer was monitored using FT-NIR spectroscopy. From model fits to the composition data, a statistical F-test revealed that the penultimate model describes the copolymerization better than the terminal model. Reactivity ratios were calculated by using a non-linear least squares analysis (NLLS) and r(H) = 8.18 and r(V) = 0.097 were found to be the best fit values of the reactivity ratios for the terminal model and r(HH) = 12.0, r(VH) = 2.20, r(VV) = 0.12 and r(HV) = 0.03 for the penultimate model. Predictions were made for changes in compositions as a function of conversion based upon the terminal and penultimate models.
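The terminal-model fit can be sketched via the Mayo-Lewis composition equation, with reactivity ratios chosen by a crude grid search in place of the NLLS routine used in the study. The synthetic "data" below are generated from the reported terminal-model values r(H) = 8.18, r(V) = 0.097 purely as a self-consistency check.

```python
def mayo_lewis(f1, r1, r2):
    """Terminal-model copolymer composition F1 for monomer feed fraction f1."""
    f2 = 1.0 - f1
    num = r1 * f1 * f1 + f1 * f2
    den = r1 * f1 * f1 + 2.0 * f1 * f2 + r2 * f2 * f2
    return num / den

def fit_reactivity_ratios(feeds, comps, r1_grid, r2_grid):
    """Least-squares (r1, r2) over a grid of candidate values."""
    best = None
    for r1 in r1_grid:
        for r2 in r2_grid:
            sse = sum((mayo_lewis(f, r1, r2) - F) ** 2
                      for f, F in zip(feeds, comps))
            if best is None or sse < best[0]:
                best = (sse, r1, r2)
    return best[1], best[2]

feeds = [0.2, 0.4, 0.6, 0.8]
comps = [mayo_lewis(f, 8.18, 0.097) for f in feeds]   # synthetic compositions
r1_hat, r2_hat = fit_reactivity_ratios(
    feeds, comps, [2.0, 5.0, 8.18, 12.0], [0.03, 0.097, 0.5])
```

A real fit would use continuous optimization over noisy composition data; the grid here only demonstrates that the least-squares objective is minimized at the generating ratios.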