962 results for Maximum entropy method
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
We introduce a novel way of measuring the entropy of a set of values undergoing changes. Such a measure becomes useful when analyzing the temporal development of an algorithm designed to numerically update a collection of values such as artificial neural network weights undergoing adjustments during learning. We measure the entropy as a function of the phase-space of the values, i.e. their magnitude and velocity of change, using a method based on the abstract measure of entropy introduced by the philosopher Rudolf Carnap. By constructing a time-dynamic two-dimensional Voronoi diagram using Voronoi cell generators with coordinates of value- and value-velocity (change of magnitude), the entropy becomes a function of the cell areas. We term this measure teleonomic entropy since it can be used to describe changes in any end-directed (teleonomic) system. The usefulness of the method is illustrated when comparing the different approaches of two search algorithms, a learning artificial neural network and a population of discovering agents. (C) 2004 Elsevier Inc. All rights reserved.
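The cell-area entropy idea can be sketched in a few lines; this is an illustrative reading of the abstract, not the paper's implementation. The Voronoi cell areas of the (value, value-velocity) generators are approximated here by nearest-neighbour assignment on a dense grid clipped to a bounding box (the grid-based area estimate and all names are assumptions), and the Shannon entropy is taken over the resulting area fractions.

```python
import numpy as np

def voronoi_cell_entropy(points, bounds, grid_n=200):
    """Approximate entropy of a 2-D point set from Voronoi cell areas.

    Each point (value, value-velocity) generates a Voronoi cell; cell
    areas are estimated by nearest-neighbour assignment on a dense grid
    clipped to bounds = (xmin, xmax, ymin, ymax).
    """
    xmin, xmax, ymin, ymax = bounds
    xs = np.linspace(xmin, xmax, grid_n)
    ys = np.linspace(ymin, ymax, grid_n)
    gx, gy = np.meshgrid(xs, ys)
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    # Assign every grid sample to its nearest generator.
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    counts = np.bincount(nearest, minlength=len(points))
    p = counts / counts.sum()              # cell-area fractions
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())   # Shannon entropy over areas

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(10, 2))      # (value, velocity) generators
print(voronoi_cell_entropy(pts, (0, 1, 0, 1)))
```

Tracking this quantity over time as the generators move gives the temporal entropy profile the abstract describes.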
Abstract:
Distortional buckling, unlike the usual lateral-torsional buckling in which the cross-section remains rigid in its own plane, involves distortion of the web within the cross-section. This type of buckling typically occurs in beams with slender webs and stocky flanges. Most published studies assume the web to deform with a cubic shape function. As this assumption may limit the accuracy of the results, a fifth-order polynomial is chosen here for the web displacements. The general line-type finite element model used here has two nodes and a maximum of twelve degrees of freedom per node. The model not only predicts the correct coupled mode but is also capable of handling local buckling of the web.
Abstract:
Adsorption of supercritical fluids is increasingly carried out to determine the micropore size distribution. This is largely motivated by advances in the use of supercritical adsorption in high-energy applications, such as hydrogen and methane storage in porous media. Experimental data are reported as mass excess versus pressure, and when these data are matched against the theoretical mass excess, significant errors can occur if the void volume used in the calculation of the experimental mass excess is incorrectly determined [Malbrunot, P.; Vidal, D.; Vermesse, J.; Chahine, R.; Bose, T. K. Langmuir 1997, 13, 539]. An incorrect value for the void volume leads to a wrong description of the maximum in the plot of mass excess versus pressure, as well as of the part of the isotherm over the pressure region where it is decreasing. Because of this uncertainty in the maximum and the decreasing part of the isotherm, we propose a new method that avoids these problems entirely. Our method involves only the relationship between the amount introduced into the adsorption cell and the equilibrium pressure. These direct experimental data have two distinct advantages: first, they are raw data requiring no further manipulation or calculation; second, the relationship always increases monotonically with pressure. We illustrate the new method with adsorption data for methane on a commercial sample of activated carbon.
Abstract:
Temperature is an important parameter controlling protein crystal growth. A new temperature-screening system (Thermo-screen) is described, consisting of a gradient thermocycler fitted with a special crystallization-plate adapter onto which a 192-well sitting-drop crystallization plate can be mounted (temperature range 277-372 K; maximum temperature gradient 20 K; interval precision 0.3 K). The system allows 16 different conditions to be monitored simultaneously over a range of 12 temperatures and is well suited to conducting wide (~20 K) and fine (~3 K) temperature-optimization screens. It can potentially aid in the determination of temperature phase diagrams and in running more complex temperature-cycling experiments for seeding and crystal growth.
Abstract:
In this paper we propose a fast adaptive Importance Sampling method for the efficient simulation of buffer overflow probabilities in queueing networks. The method comprises three stages. First we estimate the minimum Cross-Entropy tilting parameter for a small buffer level; next, we use this as a starting value for the estimation of the optimal tilting parameter for the actual (large) buffer level; finally, the tilting parameter just found is used to estimate the overflow probability of interest. We recognize three distinct properties of the method which together explain why the method works well; we conjecture that they hold for quite general queueing networks. Numerical results support this conjecture and demonstrate the high efficiency of the proposed algorithm.
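The staged idea can be illustrated on a toy rare-event problem, estimating P(X >= gamma) for X ~ Exp(1), where the cross-entropy update of the exponential tilting parameter has a closed form. This is an illustrative sketch under those assumptions, not the queueing-network estimator of the paper; all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def ce_tilt(gamma, v, n=100_000):
    """One cross-entropy update of the exponential mean parameter.

    Samples X ~ Exp(mean=v), weights by the likelihood ratio back to
    the nominal Exp(mean=1) density, and returns the CE-optimal mean
    for the event {X >= gamma} (closed form for the exponential family).
    """
    x = rng.exponential(v, n)
    w = v * np.exp(-x + x / v)          # f(x; 1) / f(x; v)
    hit = x >= gamma
    return (w * hit * x).sum() / (w * hit).sum()

def overflow_prob(gamma, v, n=100_000):
    """Importance-sampling estimate of P(X >= gamma) under Exp(mean=1)."""
    x = rng.exponential(v, n)
    w = v * np.exp(-x + x / v)
    return float((w * (x >= gamma)).mean())

# Stage 1: tilting parameter for a small level.
v1 = ce_tilt(gamma=2.0, v=1.0)
# Stage 2: reuse v1 as the starting value for the actual (large) level.
v2 = ce_tilt(gamma=20.0, v=v1)
# Stage 3: estimate the rare-event probability with the final tilt.
print(overflow_prob(20.0, v2))          # true value is exp(-20), about 2.1e-9
```

Crude Monte Carlo with 10^5 samples would almost never see this event; the staged tilt makes it routine, which is the efficiency the abstract reports for buffer overflows.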
Abstract:
The principled statistical application of Gaussian random field models used in geostatistics has historically been limited to data sets of a small size. This limitation is imposed by the requirement to store and invert the covariance matrix of all the samples to obtain a predictive distribution at unsampled locations, or to use likelihood-based covariance estimation. Various ad hoc approaches to solve this problem have been adopted, such as selecting a neighborhood region and/or a small number of observations to use in the kriging process, but these have no sound theoretical basis and it is unclear what information is being lost. In this article, we present a Bayesian method for estimating the posterior mean and covariance structures of a Gaussian random field using a sequential estimation algorithm. By imposing sparsity in a well-defined framework, the algorithm retains a subset of “basis vectors” that best represent the “true” posterior Gaussian random field model in the relative entropy sense. This allows a principled treatment of Gaussian random field models on very large data sets. The method is particularly appropriate when the Gaussian random field model is regarded as a latent variable model, which may be nonlinearly related to the observations. We show the application of the sequential, sparse Bayesian estimation in Gaussian random field models and discuss its merits and drawbacks.
Abstract:
There has been much recent research into extracting useful diagnostic features from the electrocardiogram, with numerous studies claiming impressive results. However, the robustness and consistency of the methods employed in these studies is rarely, if ever, mentioned. Hence, we propose two new methods: a biologically motivated time series derived from consecutive P-wave durations, and a mathematically motivated regularity measure. We investigate the robustness of these two methods when compared with current corresponding methods. We find that the new time series performs admirably as a complement to the current method, and the new regularity measure consistently outperforms the current measure in numerous tests on real and synthetic data.
Abstract:
The thesis presents new methodology and algorithms that can be used to analyse and measure the hand tremor and fatigue of surgeons while performing surgery. This will assist them in deriving useful information about their fatigue levels and make them aware of changes in their tool-point accuracy. This thesis proposes that the muscular changes of surgeons, which occur through a day of operating, can be monitored using Electromyography (EMG) signals. The multi-channel EMG signals are measured at different muscles in the upper arm of surgeons. The dependence between EMG signals has been examined to test the hypothesis that the signals are coupled with and dependent on each other. The results demonstrated that EMG signals collected from different channels while mimicking an operating posture are independent. Consequently, single-channel fatigue analysis has been performed. In measuring hand tremor, a new method for determining the maximum tremor amplitude using Principal Component Analysis (PCA) and a new technique to detrend acceleration signals using the Empirical Mode Decomposition algorithm were introduced. This tremor determination method is more representative for surgeons and is suggested as an alternative fatigue measure. This was combined with the complexity analysis method and applied to surgically captured data to determine whether operating has an effect on a surgeon's fatigue and tremor levels. It was found that surgical tremor and fatigue develop throughout a day of operating and that this could be determined based solely on their initial values. Finally, several Nonlinear AutoRegressive with eXogenous inputs (NARX) neural networks were evaluated. The results suggest that it is possible to monitor surgeon tremor variations during surgery from their EMG fatigue measurements.
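The PCA step for the maximum tremor amplitude can be sketched as below. This is an illustrative reading of the abstract, assuming detrended tri-axial accelerometer input and taking the peak-to-peak excursion along the first principal component as the amplitude; the function name and these choices are assumptions, not the thesis implementation.

```python
import numpy as np

def max_tremor_amplitude(acc):
    """Maximum tremor amplitude along the dominant motion axis.

    acc is an (n_samples, 3) array of detrended accelerometer readings;
    the first principal component gives the direction of greatest
    variance, and the peak-to-peak excursion of the data projected onto
    it is taken as the tremor amplitude.
    """
    centred = acc - acc.mean(axis=0)
    # SVD of the centred data: first right-singular vector = first PC.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    proj = centred @ vt[0]
    return float(proj.max() - proj.min())

# Synthetic example: an 8 Hz oscillation mostly along one oblique axis.
t = np.linspace(0, 1, 500)
axis = np.array([0.6, 0.8, 0.0])
acc = np.outer(np.sin(2 * np.pi * 8 * t), axis)
print(max_tremor_amplitude(acc))
```

Because the projection axis is data-driven, the measure is invariant to how the sensor happens to be oriented on the hand, which is presumably why a PCA-based amplitude is preferred over a per-axis one.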
Abstract:
The DNA binding fusion protein, LacI-His6-GFP, together with the conjugate PEG-IDA-Cu(II) (10 kDa), was evaluated as a dual affinity system for the extraction of the pUC19 plasmid from an alkaline bacterial cell lysate in poly(ethylene glycol) (PEG)/dextran (DEX) aqueous two-phase systems (ATPS). In a PEG 600-DEX 40 ATPS containing 0.273 nmol of LacI fusion protein and 0.14% (w/w) of the functionalised PEG-IDA-Cu(II), more than 72% of the plasmid DNA partitioned to the PEG phase, without RNA or genomic DNA contamination as evaluated by agarose gel electrophoresis. In a second extraction stage, the elution of pDNA from the LacI binding complex proved difficult using either dextran or phosphate buffer as the second phase, though more than 75% of the overall protein was removed in both systems. A maximum recovery of approximately 27% of the pUC19 plasmid was achieved using the PEG-dextran system as a second extraction system, with 80-90% of pDNA partitioning to the bottom phase. This represents about 7.4 microg of pDNA extracted per 1 mL of pUC19 desalted lysate.
Abstract:
Transportation service operators are witnessing a growing demand for bi-directional movement of goods. Given this, the following thesis considers an extension to the vehicle routing problem (VRP) known as the delivery and pickup transportation problem (DPP), where delivery and pickup demands may occupy the same route. The problem is formulated here as the vehicle routing problem with simultaneous delivery and pickup (VRPSDP), which requires the concurrent service of the demands at the customer location. This formulation provides the greatest opportunity for cost savings for both the service provider and recipient. The aims of this research are to propose a new theoretical design to solve the multi-objective VRPSDP, provide software support for the suggested design and validate the method through a set of experiments. A new real-life based multi-objective VRPSDP is studied here, which requires the minimisation of the often conflicting objectives: operated vehicle fleet size, total routing distance and the maximum variation between route distances (workload variation). The former two objectives are commonly encountered in the domain and the latter is introduced here because it is essential for real-life routing problems. The VRPSDP is defined as a hard combinatorial optimisation problem, therefore an approximation method, Simultaneous Delivery and Pickup method (SDPmethod) is proposed to solve it. The SDPmethod consists of three phases. The first phase constructs a set of diverse partial solutions, where one is expected to form part of the near-optimal solution. The second phase determines assignment possibilities for each sub-problem. The third phase solves the sub-problems using a parallel genetic algorithm. The suggested genetic algorithm is improved by the introduction of a set of tools: genetic operator switching mechanism via diversity thresholds, accuracy analysis tool and a new fitness evaluation mechanism. 
This three-phase method is proposed to address a shortcoming in the domain, where an initial solution is built only to be completely dismantled and redesigned in the optimisation phase. In addition, a new routing heuristic, RouteAlg, is proposed to solve the VRPSDP sub-problem, the travelling salesman problem with simultaneous delivery and pickup (TSPSDP). The experimental studies are conducted using the well-known benchmark Salhi and Nagy (1999) test problems, where the SDPmethod and RouteAlg solutions are compared with the prominent works in the VRPSDP domain. The SDPmethod is demonstrated to be an effective method for solving the multi-objective VRPSDP, and the RouteAlg for the TSPSDP.
Abstract:
Neutron diffraction was used to measure the total structure factors for several rare-earth ion R3+ (La3+ or Ce3+) phosphate glasses with composition close to RAl0.35P3.24O10.12. By assuming isomorphic structures, difference function methods were employed to separate, essentially, those correlations involving R3+ from the remainder. A self-consistent model of the glass structure was thereby developed in which the Al correlations were taken into explicit account. The glass network was found to be made from interlinked PO4 tetrahedra having 2.2(1) terminal oxygen atoms, OT, at 1.51(1) Angstrom, and 1.8(1) bridging oxygen atoms, OB, at 1.60(1) Angstrom. Rare-earth cations bonded to an average of 7.5(2) OT nearest neighbors in a broad and asymmetric distribution. The Al3+ ion acted as a network modifier and formed OT-Al-OT linkages that helped strengthen the glass. The connectivity of the R-centered coordination polyhedra was quantified in terms of a parameter f(s) and used to develop a model for the dependence on composition of the Al-OT coordination number in R-Al-P-O glasses. By using recent 27Al nuclear-magnetic-resonance data, it was shown that this connectivity decreases monotonically with increasing Al content. The chemical durability of the glasses appeared to be at a maximum when the connectivity of the R-centered coordination polyhedra was at a minimum. The relation of f(s) to the glass transition temperature, Tg, was discussed.
Abstract:
2000 Mathematics Subject Classification: 62P10, 92D10, 92D30, 94A17, 62L10.
Abstract:
In this paper, we focus on the design of bivariate EDAs for discrete optimization problems and propose a new approach named HSMIEC. Because current EDAs require much time in the statistical learning process when the relationships among the variables are complicated, we employ the Selfish Gene theory (SG) in this approach and introduce a Mutual Information and Entropy based Cluster (MIEC) model to optimize the probability distribution of the virtual population. This model uses a hybrid sampling method that considers both clustering accuracy and clustering diversity, and an incremental learning and resampling scheme is used to optimize the parameters of the correlations of the variables. On several benchmark problems, our experimental results demonstrate that HSMIEC often performs better than other EDAs, such as BMDA, COMIT, MIMIC and ECGA. © 2009 Elsevier B.V. All rights reserved.
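For readers unfamiliar with EDAs, the basic sample-select-reestimate loop can be sketched with the simplest univariate model (UMDA) on the OneMax toy problem; HSMIEC's bivariate mutual-information clustering and Selfish Gene components go beyond this illustration, and all names here are assumptions.

```python
import numpy as np

def umda_onemax(n_bits=30, pop=100, elite=30, iters=60, seed=1):
    """Univariate EDA (UMDA) on the OneMax toy problem.

    Each generation: sample a population from the per-bit Bernoulli
    model, keep the fittest elite individuals, and re-estimate the
    per-bit probabilities from them.  (HSMIEC additionally models
    pairwise dependencies; this sketch keeps the marginals only.)
    """
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                    # initial bit model
    for _ in range(iters):
        X = (rng.random((pop, n_bits)) < p).astype(int)
        fitness = X.sum(axis=1)                 # OneMax: count of ones
        best = X[np.argsort(fitness)[-elite:]]
        p = 0.9 * best.mean(axis=0) + 0.05      # smoothed re-estimate
    return p

p = umda_onemax()
print(p.round(2))   # probabilities driven toward 1 on every bit
```

The smoothing term keeps every probability inside [0.05, 0.95], preventing premature fixation, which is the univariate analogue of the diversity concerns the MIEC model addresses.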
Abstract:
Historically, grapevine (Vitis vinifera L.) leaf characterisation has been a driving force in the identification of cultivars. In this study, ampelometric (foliometric) analysis was done on leaf samples collected from hand-pruned, mechanically pruned and minimally pruned 'Sauvignon blanc' and 'Syrah' vines to estimate the impact of within-vineyard variability and of a change in bud load on the stability of leaf properties. The results showed that within-vineyard variability of ampelometric characteristics was high within a cultivar, irrespective of bud load. In terms of the O.I.V. coding system, zero to four class differences were observed between the minimum and maximum values of each characteristic. The degree of variability of each characteristic differed among the three levels of bud load and the two cultivars. With respect to bud load, the number of shoots per vine had a significant effect on the characteristics of the leaf laminae. Single leaf area and the lengths of veins changed significantly for both cultivars, irrespective of treatment, while the angle between veins proved to be a stable characteristic. A large number of biometric data can be recorded on a single leaf; the data measured on several leaves, however, are not necessarily unique to a specific cultivar. The leaf characteristics analysed in this study can be divided into two groups according to their response to a change in bud load, i.e. stable (angles between the veins, depths of sinuses) and variable (length of the veins, length of the petiole, single leaf area). The variable characteristics are not recommended for use in cultivar identification unless the pruning method/bud load is known.