43 results for Empirical Algorithm Analysis
Abstract:
Popper's explications of 'ad hoc' in relation to hypotheses and explanations turn out to be either trivial, confused, or mistaken. One such explication I discuss at length is circularity; another is reduction in empirical content. I argue that non-circularity is preferable to non-ad hocness as a requirement for an acceptable explanation or explanans, and I isolate some persistent errors in his analysis. Second, Popper is barking up the wrong tree in proscribing reductions in empirical content in novel hypotheses: such reductions may constitute scientific progress. He fails to show that ad hoc hypotheses are the threat to science he claims.
Abstract:
The XSophe-Sophe-XeprView® computer simulation software suite enables scientists to easily determine spin Hamiltonian parameters from isotropic, randomly oriented and single crystal continuous wave electron paramagnetic resonance (CW EPR) spectra from radicals and isolated paramagnetic metal ion centers or clusters found in metalloproteins, chemical systems and materials science. XSophe provides an X-windows graphical user interface to the Sophe programme and allows: creation of multiple input files, local and remote execution of Sophe, and the display of sophelog (output from Sophe) and input parameters/files. Sophe is a sophisticated computer simulation software programme employing a number of innovative technologies, including: the Sydney OPera HousE (SOPHE) partition and interpolation schemes, a field segmentation algorithm, the mosaic misorientation linewidth model, parallelization and spectral optimisation. In conjunction with the SOPHE partition scheme and the field segmentation algorithm, the SOPHE interpolation scheme and the mosaic misorientation linewidth model greatly increase the speed of simulations for most spin systems. Employing brute-force matrix diagonalization in the simulation of an EPR spectrum from a high-spin Cr(III) complex with the spin Hamiltonian parameters g_e = 2.00, D = 0.10 cm⁻¹, E/D = 0.25, A_x = 120.0, A_y = 120.0, A_z = 240.0 × 10⁻⁴ cm⁻¹ requires a SOPHE grid size of N = 400 (to produce a good signal-to-noise ratio) and takes 229.47 s. In contrast, the use of either the SOPHE interpolation scheme or the mosaic misorientation linewidth model requires a SOPHE grid size of only N = 18 and takes 44.08 s and 0.79 s, respectively. Results from Sophe are transferred via the Common Object Request Broker Architecture (CORBA) to XSophe and subsequently to XeprView®, where the simulated CW EPR spectra (1D and 2D) can be compared to the experimental spectra.
Energy level diagrams, transition roadmaps and transition surfaces aid the interpretation of complicated randomly oriented CW EPR spectra and can be viewed with a web browser and an OpenInventor scene graph viewer.
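The speed-up reported above comes from evaluating the expensive per-orientation computation on a coarse grid and interpolating onto a fine one; the SOPHE scheme itself interpolates over a spherical partition. As a minimal illustration of the principle only, the sketch below linearly interpolates a smooth, spectrum-like stand-in function. The `lineshape` function and the grid sizes are illustrative inventions, not taken from Sophe:

```python
import math

def lineshape(theta):
    """A smooth, spectrum-like test function standing in for the expensive
    per-orientation resonance computation (illustrative only)."""
    return (math.exp(-((theta - 0.8) ** 2) / 0.3)
            + 0.5 * math.exp(-((theta - 2.0) ** 2) / 0.5))

def interp_fine(coarse_n, fine_n, lo=0.0, hi=math.pi):
    """Evaluate on a coarse grid, then linearly interpolate onto a fine one."""
    xs = [lo + (hi - lo) * i / (coarse_n - 1) for i in range(coarse_n)]
    ys = [lineshape(x) for x in xs]           # only coarse_n expensive calls
    out = []
    for j in range(fine_n):
        x = lo + (hi - lo) * j / (fine_n - 1)
        i = min(int((x - lo) / (hi - lo) * (coarse_n - 1)), coarse_n - 2)
        t = (x - xs[i]) / (xs[i + 1] - xs[i])
        out.append((1 - t) * ys[i] + t * ys[i + 1])
    return out
```

With 18 expensive evaluations instead of 400, the interpolated curve stays within a few percent of the directly computed one for smooth lineshapes, which is the mechanism behind the grid-size reduction quoted above.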
Abstract:
A hydraulic jump is characterized by strong energy dissipation and mixing, large-scale turbulence, air entrainment, waves and spray. Despite recent pertinent studies, the interaction between air bubble diffusion and momentum transfer is not completely understood. The objective of this paper is to present experimental results from new measurements performed in a rectangular horizontal flume with partially developed inflow conditions. The vertical distributions of void fraction and air bubble count rate were recorded for inflow Froude numbers Fr1 in the range from 5.2 to 14.3. A rapid detrainment process was observed near the jump toe, whereas the structure of the air diffusion layer was clearly observed over longer distances. These new data were compared with previous data, generally collected at lower Froude numbers. The comparison demonstrated that, at a fixed distance from the jump toe, the maximum void fraction Cmax increases with increasing Fr1. The vertical locations of the maximum void fraction and bubble count rate were consistent with previous studies. Finally, an empirical correlation between the upper boundary of the air diffusion layer and the distance from the impingement point is provided.
Abstract:
The BR algorithm is a novel and efficient method for finding all eigenvalues of upper Hessenberg matrices, and it has never been applied to eigenanalysis for power system small-signal stability. This paper analyzes the differences between the BR and QR algorithms, with a performance comparison in terms of CPU time (based on stopping criteria) and storage requirements. The BR algorithm utilizes accelerating strategies to improve its performance when computing eigenvalues of narrowly banded, nearly tridiagonal upper Hessenberg matrices. These strategies significantly reduce the computation time at a reasonable level of precision. Compared with the QR algorithm, the BR algorithm requires fewer iteration steps and less storage space without sacrificing precision in solving eigenvalue problems of large-scale power systems. Numerical examples demonstrate the efficiency of the BR algorithm in eigenanalysis tasks for 39-, 68-, 115-, 300-, and 600-bus systems. Experimental results suggest that the BR algorithm is the more efficient algorithm for large-scale power system small-signal stability eigenanalysis.
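The BR algorithm itself is not available in standard numerical libraries, but the QR baseline it is compared against can be sketched compactly. Below is a minimal, unshifted QR iteration for an upper Hessenberg matrix using Givens rotations; production QR (and BR) implementations add shifts, deflation and the accelerating strategies discussed above, so this illustrates the structure of the iteration, not either paper-grade algorithm:

```python
import math

def givens_qr_step(H):
    """One unshifted QR iteration A <- R*Q on an upper Hessenberg matrix.
    Givens rotations zero the subdiagonal to form R; applying their
    transposes on the right gives R*Q, a similarity transform of A."""
    n = len(H)
    A = [row[:] for row in H]
    rots = []
    for i in range(n - 1):                    # reduce to upper triangular R
        a, b = A[i][i], A[i + 1][i]
        r = math.hypot(a, b)
        c, s = (1.0, 0.0) if r == 0 else (a / r, b / r)
        rots.append((c, s))
        for j in range(i, n):
            t1, t2 = A[i][j], A[i + 1][j]
            A[i][j] = c * t1 + s * t2
            A[i + 1][j] = -s * t1 + c * t2
    for i, (c, s) in enumerate(rots):         # form R*Q column by column
        for k in range(n):
            t1, t2 = A[k][i], A[k][i + 1]
            A[k][i] = c * t1 + s * t2
            A[k][i + 1] = -s * t1 + c * t2
    return A

def hessenberg_eigenvalues(H, iters=300):
    """Iterate until the (nearly) triangular limit; read off the diagonal."""
    A = [row[:] for row in H]
    for _ in range(iters):
        A = givens_qr_step(A)
    return sorted(A[i][i] for i in range(len(A)))
```

Each iteration costs O(n²) on a Hessenberg matrix because only the subdiagonal needs annihilating, which is one reason both the QR and BR algorithms work on the Hessenberg form.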
Abstract:
The second edition of An Introduction to Efficiency and Productivity Analysis is designed to be a general introduction for those who wish to study efficiency and productivity analysis. The book provides an accessible, well-written introduction to the four principal methods involved: econometric estimation of average response models, index numbers, data envelopment analysis (DEA), and stochastic frontier analysis (SFA). For each method, a detailed introduction to the basic concepts is presented, numerical examples are provided, and some of the more important extensions to the basic methods are discussed. Of special interest is the systematic use of detailed empirical applications using real-world data throughout the book. In recent years, a number of excellent advanced-level books on performance measurement have been published. This book, however, is the first systematic survey of performance measurement with the express purpose of introducing the field to a wide audience of students, researchers, and practitioners. Indeed, the second edition maintains its uniqueness: (1) it is a well-written introduction to the field; (2) it outlines, discusses and compares the four principal methods for efficiency and productivity analysis in a well-motivated presentation; and (3) it provides detailed advice on computer programs that can be used to implement these performance measurement methods. The book contains computer instructions and output listings for the SHAZAM, LIMDEP, TFPIP, DEAP and FRONTIER computer programs. More extensive listings of data and computer instruction files are available on the book's website (www.uq.edu.au/economics/cepa/crob2005).
Abstract:
A simple theoretical framework is presented for bioassay studies using three-component in vitro systems. An equilibrium model is used to derive equations useful for predicting changes in biological response after addition of hormone-binding protein or as a consequence of increased hormone affinity. Sets of possible solutions for receptor occupancy and binding protein occupancy are found for typical values of receptor and binding protein affinity constants. Unique equilibrium solutions are dictated by the initial condition of total hormone concentration. According to the occupancy theory of drug action, increasing the affinity of a hormone for its receptor will result in a proportional increase in biological potency. However, the three-component model predicts that the magnitude of the increase in biological potency will be a small fraction of the proportional increase in affinity. With typical initial conditions a two-fold increase in hormone affinity for its receptor is predicted to result in only a 33% increase in biological response. Under the same conditions an 11-fold increase in hormone affinity for receptor would be needed to produce a two-fold increase in biological potency. Some currently used bioassay systems may be unrecognized three-component systems, and gross errors in biopotency estimates will result if the effect of binding protein is not calculated. An algorithm derived from the three-component model is used to predict changes in biological response after addition of binding protein to in vitro systems. The algorithm is tested by application to a published data set from an experimental study in an in vitro system (Lim et al., 1990, Endocrinology 127, 1287-1291). Predicted changes show good agreement (within 8%) with experimental observations. © 1998 Academic Press Limited.
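The equilibrium reasoning can be reproduced numerically. The sketch below solves the free-hormone mass balance of a three-component system (hormone, receptor, binding protein) by bisection and compares receptor occupancy before and after doubling the receptor affinity. All concentrations and affinity constants are hypothetical round numbers, not those of the paper, so the resulting ratio illustrates the sub-proportional response in general rather than the specific 33% figure:

```python
def receptor_occupancy(H_tot, R_tot, B_tot, K_r, K_b):
    """Fraction of receptor occupied at equilibrium in a three-component
    system, found by bisection on the free-hormone mass balance."""
    def total(h):  # total hormone implied by a free concentration h
        return (h + R_tot * K_r * h / (1 + K_r * h)
                  + B_tot * K_b * h / (1 + K_b * h))
    lo, hi = 0.0, H_tot
    for _ in range(200):            # total(h) is increasing, so bisect
        mid = 0.5 * (lo + hi)
        if total(mid) < H_tot:
            lo = mid
        else:
            hi = mid
    h = 0.5 * (lo + hi)
    return K_r * h / (1 + K_r * h)

# Illustrative (hypothetical) molar concentrations and affinities:
H, R, B = 1e-9, 1e-10, 1e-8
base = receptor_occupancy(H, R, B, K_r=1e9, K_b=1e9)
doubled = receptor_occupancy(H, R, B, K_r=2e9, K_b=1e9)
ratio = doubled / base              # well below the 2x naively expected
```

With the binding protein soaking up most of the hormone, doubling K_r raises receptor occupancy by well under a factor of two, which is the qualitative prediction of the three-component model.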
Abstract:
Background/Aims: Liver clearance models are based on information (or assumptions) on solute distribution kinetics within the microvasculatory system. The aim was to study albumin distribution kinetics in regenerated livers and in livers of normal adult rats. Methods: A novel mathematical model was used to evaluate the distribution space and the transit time dispersion of albumin in livers following regeneration after a two-thirds hepatectomy, compared to livers of normal adult rats. Outflow curves of albumin measured after bolus injection in single-pass perfused rat livers were analyzed by correcting for the influence of catheters and fitting a long-tailed function to the data. Results: The curves were well described by the proposed model. The distribution volume and the transit time dispersion of albumin observed in the partial hepatectomy group were not significantly different from those in livers of normal adult rats. Conclusions: These findings suggest that the distribution space and the transit time dispersion of albumin (CV²) are relatively constant irrespective of the presence of rapid and extensive repair. This invariance of CV² implies, as a first approximation, a similar degree of intrasinusoidal mixing. The finding that a sum of two (instead of one) inverse Gaussian densities is an appropriate empirical function to describe the outflow curve of vascular indicators has consequences for an improved prediction of hepatic solute extraction.
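The long-tailed empirical function referred to in the Conclusions can be written down directly. Below is a sketch of a weighted sum of two inverse Gaussian densities; the parameter values are arbitrary illustrations, not fitted liver data. For a single inverse Gaussian with mean mu and shape lam, the relative dispersion is CV² = mu/lam:

```python
import math

def inv_gauss(t, mu, lam):
    """Inverse Gaussian density with mean mu and shape lam (CV^2 = mu/lam)."""
    return math.sqrt(lam / (2 * math.pi * t ** 3)) * math.exp(
        -lam * (t - mu) ** 2 / (2 * mu ** 2 * t))

def two_ig_density(t, w, mu1, lam1, mu2, lam2):
    """Weighted sum of two inverse Gaussian densities, the empirical
    transit-time form used for vascular-indicator outflow curves."""
    return w * inv_gauss(t, mu1, lam1) + (1 - w) * inv_gauss(t, mu2, lam2)
```

The mixture integrates to one and has mean w*mu1 + (1 - w)*mu2, so fitted weights and component means translate directly into mean transit time and dispersion of the outflow curve.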
Abstract:
The convection-dispersion model and its extended form have been used to describe solute disposition in organs and to predict hepatic availabilities. A range of empirical transit-time density functions has also been used for a similar purpose. The use of the dispersion model with mixed boundary conditions and transit-time density functions has been queried recently by Hisaka and Sugiyama in this journal. We suggest that, consistent with the soil science and chemical engineering literature, the mixed boundary conditions are appropriate provided that concentrations are defined in terms of flux, to ensure continuity at the boundaries and mass balance. The use of the inverse Gaussian or other functions as empirical transit-time densities is independent of any boundary condition considerations. The mixed boundary condition solutions of the convection-dispersion model are the easiest to use when linear kinetics applies. In contrast, the closed conditions are easier to apply in a numerical analysis of nonlinear disposition of solutes in organs. We therefore argue that the choice of hepatic elimination model should be based on pragmatic considerations, giving emphasis to the simplest or easiest solution that will give a sufficiently accurate prediction of hepatic pharmacokinetics for a particular application. © 2000 Wiley-Liss Inc. and the American Pharmaceutical Association. J Pharm Sci 89:1579-1586, 2000.
Abstract:
An equivalent algorithm is proposed to simulate the thermal effects of magma intrusion in geological systems composed of porous rocks. Based on physical and mathematical equivalence, the original magma solidification problem, with a moving boundary between the rock and the intruded magma, is transformed into a new problem without the moving boundary but with a physically equivalent heat source. This physically equivalent heat source is determined in this paper from the analysis of an ideal solidification model. The major advantage of the proposed equivalent algorithm is that a fixed finite element mesh with a variable integration time step can be employed to simulate the thermal effect of intruded magma solidification using the conventional finite element method. The related numerical results demonstrate the correctness and usefulness of the proposed equivalent algorithm for simulating the thermal effect of intruded magma solidification in geological systems. © 2003 Elsevier B.V. All rights reserved.
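The paper's equivalent heat source is derived from its own solidification analysis, which the abstract does not give in full. A closely related fixed-mesh device, shown here purely as an illustration of the same idea, is the apparent heat capacity method: the latent heat released over a freezing interval is folded into an effective heat capacity, so a conventional scheme on a fixed grid handles solidification without tracking the moving boundary. All material parameters below are invented for the sketch:

```python
def simulate_cooling(n=50, steps=2000):
    """Explicit 1-D conduction with latent heat folded into an apparent
    heat capacity over the freezing interval [T_sol, T_liq] (fixed mesh).
    All material parameters are illustrative, not from the paper."""
    k, rho, c, L = 2.0, 2700.0, 1000.0, 3e5   # conductivity, density, heat cap., latent heat
    T_sol, T_liq = 900.0, 1000.0              # freezing interval (deg C)
    dx, dt = 1.0, 5000.0                      # grid spacing (m), time step (s)
    T = [1100.0] * n                          # intruded magma, initially molten
    T[0] = T[-1] = 20.0                       # cool country rock held at the ends
    for _ in range(steps):
        Tn = T[:]
        for i in range(1, n - 1):
            # Latent heat enters only while the cell sits in the freezing interval.
            c_eff = c + (L / (T_liq - T_sol) if T_sol <= T[i] <= T_liq else 0.0)
            alpha = k / (rho * c_eff)
            Tn[i] = T[i] + alpha * dt / dx ** 2 * (T[i + 1] - 2 * T[i] + T[i - 1])
        T = Tn
    return T
```

Because the mesh never moves, the same assembly and time-stepping loop serves the whole simulation; only the capacity term switches on inside the freezing interval, which mirrors the role of the equivalent heat source in the paper's fixed-mesh finite element formulation.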
Abstract:
We present a technique for team design based on cognitive work analysis (CWA). We first develop a rationale for this technique by discussing the limitations of conventional approaches for team design in light of the special characteristics of first-of-a-kind, complex systems. We then introduce the CWA-based technique for team design and provide a case study of how we used this technique to design a team for a first-of-a-kind, complex military system during the early stages of its development. In addition to illustrating the CWA-based technique by example, the case study allows us to evaluate the technique. This case study demonstrates that the CWA-based technique for team design is both feasible and useful, although empirical validation of the technique is still necessary. Applications of this work include the design of teams for first-of-a-kind, complex systems in military, medical, and industrial domains.
Abstract:
The aim of this work was to exemplify the specific contribution of both two- and three-dimensional (3D) X-ray computed tomography to characterising earthworm burrow systems. To achieve this purpose we used 3D mathematical morphology operators to characterise burrow systems resulting from the activity of an anecic species (Aporrectodea nocturna) and an endogeic species (Allolobophora chlorotica), introduced either separately or together into artificial soil cores. Images of these soil cores were obtained using a medical X-ray tomography scanner. Three-dimensional reconstructions of burrow systems were obtained using a specifically developed segmentation algorithm. To study the differences between burrow systems, a set of classical tools of mathematical morphology (granulometries) was used. Granulometries based on different structuring elements clearly separated the different burrow systems. They enabled us to show that burrows made by the anecic species were larger in diameter, longer, more vertical, more continuous but less sinuous than burrows of the endogeic species. The granulometry transform of the soil matrix showed that burrows made by A. nocturna were more evenly distributed than those of A. chlorotica. Although good discrimination was possible when only one species was introduced into a soil core, it was not possible to separate the burrows of the two species from each other when both species were introduced into the same soil core. This limitation, partly due to the insufficient spatial resolution of the medical scanner, precluded the use of the morphological operators to study putative interactions between the two species.
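Granulometry by morphological openings can be demonstrated on a toy binary image. In the sketch below (pure Python, square structuring elements on a 2D grid; the real analysis uses 3D images and richer structuring elements, for example oriented ones to measure verticality), a one-pixel-wide "burrow" vanishes at the first opening scale while a thick one survives, which is exactly the size information used above to separate anecic from endogeic burrows:

```python
def erode(img, r):
    """Binary erosion by a (2r+1) x (2r+1) square; outside counts as background."""
    h, w = len(img), len(img[0])
    return [[1 if all(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                      for dy in range(-r, r + 1) for dx in range(-r, r + 1)) else 0
             for x in range(w)] for y in range(h)]

def dilate(img, r):
    """Binary dilation by a (2r+1) x (2r+1) square."""
    h, w = len(img), len(img[0])
    return [[1 if any(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                      for dy in range(-r, r + 1) for dx in range(-r, r + 1)) else 0
             for x in range(w)] for y in range(h)]

def granulometry(img, radii):
    """Foreground pixels surviving an opening (erosion then dilation) at each
    scale: a size distribution of the foreground shapes."""
    return [sum(map(sum, dilate(erode(img, r), r))) for r in radii]

# Toy image: a 7x7 'thick burrow' block and a 1-pixel-wide 'thin burrow' line.
img = [[0] * 20 for _ in range(20)]
for y in range(2, 9):
    for x in range(2, 9):
        img[y][x] = 1
for x in range(1, 19):
    img[15][x] = 1
```

The resulting curve is non-increasing in the structuring-element size, and the scale at which mass disappears reveals the characteristic thickness of each structure.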
Abstract:
Qu-Prolog is an extension of Prolog which performs meta-level computations over object languages, such as predicate calculi and lambda calculi, which have object-level variables, and quantifier or binding symbols creating local scopes for those variables. As in Prolog, the instantiable (meta-level) variables of Qu-Prolog range over object-level terms, and in addition other Qu-Prolog syntax denotes the various components of the object-level syntax, including object-level variables. Further, the meta-level operation of substitution into object-level terms is directly represented by appropriate Qu-Prolog syntax. Again as in Prolog, the driving mechanism in Qu-Prolog computation is a form of unification, but this is substantially more complex than for Prolog because of Qu-Prolog's greater generality, and especially because substitution operations are evaluated during unification. In this paper, the Qu-Prolog unification algorithm is specified, formalised and proved correct. Further, the analysis of the algorithm is carried out in a framework which straightforwardly allows the 'completeness' of the algorithm to be proved: though fully explicit answers to unification problems are not always provided, no information is lost in the unification process.
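For contrast with Qu-Prolog's unification, which must additionally handle object-level variables, binders and suspended substitutions, plain first-order (Robinson-style) unification fits in a few lines. The sketch below uses an illustrative encoding of my own (capitalised strings as variables, tuples as compound terms) and omits the occurs check for brevity:

```python
def is_var(t):
    """Variables are capitalised strings, Prolog-style."""
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    """Resolve a variable through the substitution s."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s=None):
    """Robinson-style first-order unification. Compound terms are tuples
    ('f', arg1, ...). Returns a substitution dict, or None on failure.
    (No occurs check, for brevity.)"""
    s = {} if s is None else s
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None
```

Qu-Prolog's algorithm generalises this picture: because substitutions are evaluated during unification, answers may be left partially explicit, yet, as the paper proves, no information is lost.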
Abstract:
The classification rules of linear discriminant analysis are defined by the true mean vectors and the common covariance matrix of the populations from which the data come. Because these true parameters are generally unknown, they are commonly estimated by the sample mean vector and covariance matrix of the data in a training sample randomly drawn from each population. However, these sample statistics are notoriously susceptible to contamination by outliers, a problem compounded by the fact that the outliers may be invisible to conventional diagnostics. High-breakdown estimation is a procedure designed to remove this cause for concern by producing estimates that are immune to serious distortion by a minority of outliers, regardless of their severity. In this article we motivate and develop a high-breakdown criterion for linear discriminant analysis and give an algorithm for its implementation. The procedure is intended to supplement rather than replace the usual sample-moment methodology of discriminant analysis either by providing indications that the dataset is not seriously affected by outliers (supporting the usual analysis) or by identifying apparently aberrant points and giving resistant estimators that are not affected by them.
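The high-breakdown idea can be demonstrated in one dimension with a concentration-step estimator in the spirit of the minimum covariance determinant: repeatedly refit on the half of the data closest to the current fit. This is a much-simplified stand-in for the article's multivariate procedure, which robustly estimates mean vectors and a common covariance matrix; the data below are synthetic:

```python
def robust_location(xs, h=None, iters=25):
    """Concentration steps: refit the mean on the h points nearest the
    current estimate. Gross outliers never enter the fitting subset,
    so a minority of them cannot distort the result (high breakdown)."""
    n = len(xs)
    h = h if h is not None else n // 2 + 1
    m = sorted(xs)[n // 2]                     # start from the median
    for _ in range(iters):
        subset = sorted(xs, key=lambda x: (x - m) ** 2)[:h]
        m = sum(subset) / len(subset)
    return m

# Synthetic sample: 20 well-behaved points plus 5 gross outliers at 100.
data = [i * 0.1 for i in range((-10), 10)] + [100.0] * 5
```

The naive sample mean is dragged to around 20 by the outliers, while the concentration-step estimate stays near the bulk of the data, the same contrast the article exploits to flag aberrant training points in discriminant analysis.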
Abstract:
The popular Newmark algorithm, used for implicit direct integration of structural dynamics, is extended by means of a nodal partition to permit the use of different timesteps in different regions of a structural model. The algorithm developed has as a special case an explicit-explicit subcycling algorithm previously reported by Belytschko, Yen and Mullen. That algorithm has been shown, in the absence of damping or other energy dissipation, to exhibit instability over narrow timestep ranges that become narrower as the number of degrees of freedom increases, making them unlikely to be encountered in practice. The present algorithm avoids such instabilities in the case of a one-to-two timestep ratio (two subcycles), achieving unconditional stability in an exponential sense for a linear problem. However, with three or more subcycles, the trapezoidal rule exhibits stability that becomes conditional, falling towards that of the central difference method as the number of subcycles increases. Instabilities over narrow timestep ranges, which become narrower as the model size increases, also appear with three or more subcycles. However, by moving the partition between timesteps one row of elements into the region suitable for integration with the larger timestep, the unstable timestep ranges become extremely narrow, even in simple systems with a few degrees of freedom. Accuracy is also improved. Use of a version of the Newmark algorithm that dissipates high frequencies minimises or eliminates these narrow bands of instability. Viscous damping is also shown to remove these instabilities, at the expense of having more effect on the low-frequency response.
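The stability behaviour discussed above is easiest to see on a single degree of freedom. Below is a minimal, single-timestep (no subcycling) Newmark integrator for an undamped linear oscillator; with beta = 1/4 and gamma = 1/2 (average acceleration, i.e. the trapezoidal rule), the response stays bounded even for a timestep far larger than the natural period, which is the unconditional stability the partitioned algorithm preserves at a one-to-two timestep ratio:

```python
def newmark_sdof(m, k, u0, v0, dt, steps, beta=0.25, gamma=0.5):
    """Newmark time integration of an undamped SDOF oscillator
    m*u'' + k*u = 0. beta=1/4, gamma=1/2 is the unconditionally stable
    average-acceleration (trapezoidal) rule."""
    u, v = u0, v0
    a = -k * u / m                        # initial acceleration from equilibrium
    us = [u]
    k_eff = k + m / (beta * dt * dt)      # effective stiffness (no damping term)
    for _ in range(steps):
        rhs = m * (u / (beta * dt * dt) + v / (beta * dt)
                   + (0.5 / beta - 1.0) * a)
        u1 = rhs / k_eff                  # displacement update (implicit solve)
        a1 = ((u1 - u) / (beta * dt * dt) - v / (beta * dt)
              - (0.5 / beta - 1.0) * a)
        v1 = v + dt * ((1.0 - gamma) * a + gamma * a1)
        u, v, a = u1, v1, a1
        us.append(u)
    return us
```

For this linear, undamped case the average-acceleration rule conserves the discrete energy exactly, so the displacement never exceeds its initial amplitude no matter how large the timestep; an explicit central-difference integrator would diverge immediately at dt = 10 with a natural period of 2*pi.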
Abstract:
There has been little study of economic and general attitudes towards the conservation of the Asian elephant. This paper reports and analyses results from surveys conducted in Sri Lanka of the attitudes of urban dwellers and farmers towards nature conservation in general and elephant conservation in particular. The analyses are based on an urban and a rural sample. Contingent valuation techniques are used as survey instruments, and multivariate logit regression analysis is used to analyse the respondents' attitudes towards conservation of elephants. It is found that, although some variations occurred between the samples, the majority of respondents (both rural and urban) have positive attitudes towards nature conservation in general. However, marked differences in attitudes toward elephant conservation are evident between the two samples: the majority of urban respondents were in favour of elephant conservation, whereas rural respondents expressed a mixture of positive and negative attitudes. Overall, considerable unrecorded and as yet unutilised economic support for conservation of wild elephants exists in Sri Lanka.
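The logit analysis named above can be sketched generically. Below is a from-scratch binary logit fit by gradient ascent on the log-likelihood; the single covariate and the labels are invented toy data (imagine a scaled income or education variable against support for conservation), not the Sri Lankan survey data:

```python
import math

def fit_logit(X, y, lr=0.1, epochs=2000):
    """Binary logit fitted by gradient ascent on the log-likelihood.
    X: list of feature lists; a leading 1.0 supplies the intercept."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))       # predicted probability
            for j in range(len(w)):
                w[j] += lr * (yi - p) * xi[j]    # likelihood gradient step
    return w

def predict(w, xi):
    """Classify by the sign of the linear predictor (p > 0.5)."""
    return 1 if sum(wj * xj for wj, xj in zip(w, xi)) > 0 else 0
```

In the survey setting, the fitted coefficients on respondent characteristics are what drive the reported contrasts between urban and rural attitudes.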