933 results for Approximation Classes
Abstract:
To overcome the divergence of estimates produced from the same data, the proposed digital costing process adopts an integrated information-system design, in which the process knowledge and the costing system are designed together. By employing and extending a widely used international standard, the Industry Foundation Classes (IFC), the system provides an integrated process that can harvest the information and knowledge embedded in current quantity-surveying practice, covering both costing methods and costing data. Knowledge of quantification is encoded from the literature, a motivating case, and standards, which reduces the time consumed by current manual practice. Further development will represent the pricing process using a Bayesian-network-based knowledge representation. Together, these hybrid forms of knowledge representation can produce reliable estimates for construction projects. In practical terms, knowledge management of quantity surveying can improve construction estimation systems. The theoretical significance of this study is that its content and conclusions make it possible to develop an automatic estimation system based on a hybrid knowledge representation approach.
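The Bayesian-network pricing stage is only announced here, so as a rough illustration of the general idea, the following hand-rolled toy network (structure, node names, and probabilities all invented) computes a price distribution by enumeration:

```python
# Toy Bayesian network for a pricing decision: Market -> Price <- Complexity.
# All structure and numbers are invented for illustration only.
P_market = {"hot": 0.4, "cold": 0.6}
P_complex = {"high": 0.3, "low": 0.7}
# Conditional table P(price | market, complexity)
P_price = {
    ("hot", "high"):  {"high": 0.8, "low": 0.2},
    ("hot", "low"):   {"high": 0.5, "low": 0.5},
    ("cold", "high"): {"high": 0.4, "low": 0.6},
    ("cold", "low"):  {"high": 0.1, "low": 0.9},
}

def price_marginal(evidence=None):
    """Marginal P(price) by full enumeration, optionally fixing
    evidence such as {'market': 'hot'}."""
    evidence = evidence or {}
    out = {"high": 0.0, "low": 0.0}
    for m, pm in P_market.items():
        if evidence.get("market", m) != m:
            continue
        for c, pc in P_complex.items():
            if evidence.get("complexity", c) != c:
                continue
            for p, pp in P_price[(m, c)].items():
                out[p] += pm * pc * pp
    total = sum(out.values())
    return {k: v / total for k, v in out.items()}

print(price_marginal({"market": "hot"}))   # P(price | market = hot)
```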
Abstract:
The Monte Carlo Independent Column Approximation (McICA) is a flexible method for representing subgrid-scale cloud inhomogeneity in radiative transfer schemes. It does, however, introduce conditional random errors, but these have been shown to have little effect on climate simulations, where the spatial and temporal scales of interest are large enough for the effects of noise to be averaged out. This article considers the effect of McICA noise on a numerical weather prediction (NWP) model, where the time and spatial scales of interest are much closer to those at which the errors manifest themselves; this, as we show, means that the noise is more significant. We suggest methods for efficiently reducing the magnitude of McICA noise and test these methods in a global NWP version of the UK Met Office Unified Model (MetUM). The resultant errors are put into context by comparison with errors due to the widely used assumption of maximum-random overlap of plane-parallel, homogeneous cloud. For a simple implementation of the McICA scheme, forecasts of near-surface temperature are found to be worse than those obtained using the plane-parallel, maximum-random-overlap representation of clouds. However, by applying the methods suggested in this article, we can reduce the noise enough to give forecasts of near-surface temperature that improve on the plane-parallel, maximum-random-overlap forecasts. We conclude that the McICA scheme can be used to improve the representation of clouds in NWP models, provided that the associated noise is kept sufficiently small.
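As a minimal sketch of the McICA idea, not of the MetUM implementation: cloudy/clear subcolumns are generated from a layer cloud-fraction profile with a maximum-random overlap generator, and each spectral g-point then sees a single randomly chosen subcolumn; that single-sample step is the source of the conditional random noise discussed above. The profile and counts below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def subcolumns(cf, n_sub):
    """Binary cloud masks for n_sub subcolumns drawn from layer cloud
    fractions cf, using a simple maximum-random overlap generator:
    the rank persists through vertically contiguous cloud (maximum
    overlap) and is redrawn across clear layers (random overlap)."""
    mask = np.zeros((n_sub, len(cf)), dtype=bool)
    for s in range(n_sub):
        x = rng.random()
        for k, f in enumerate(cf):
            if k > 0 and x > cf[k - 1]:          # clear above: redraw rank
                x = cf[k - 1] + rng.random() * (1.0 - cf[k - 1])
            mask[s, k] = x <= f
    return mask

cf = np.array([0.0, 0.2, 0.6, 0.6, 0.3, 0.0])     # illustrative profile
cols = subcolumns(cf, n_sub=200)
print(cols.mean(axis=0))     # close to cf: the generator is unbiased

# McICA step: each of n_g spectral g-points sees ONE random subcolumn
# instead of the average over all of them -- unbiased in the mean, but
# with conditional random noise that shrinks as samples are averaged.
n_g = 30
gpoint_cols = cols[rng.integers(0, len(cols), size=n_g)]
```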
Abstract:
In this paper an equation is derived for the mean backscatter cross section of an ensemble of snowflakes at centimeter and millimeter wavelengths. It uses the Rayleigh–Gans approximation, which has previously been found to be applicable at these wavelengths owing to the low density of snow aggregates. Although the internal structure of an individual snowflake is random and unpredictable, the authors find from simulations of the aggregation process that snowflake structure is "self-similar" and can be described by a power law. This enables an analytic expression to be derived for the backscatter cross section of an ensemble of particles as a function of their maximum dimension in the direction of propagation of the radiation, the volume of ice they contain, a variable describing their mean shape, and two variables describing the shape of the power spectrum. The exponent of the power law is found to be −5/3. In the case of 1-cm snowflakes observed by a 3.2-mm-wavelength radar, the backscatter is 40–100 times larger than that of a homogeneous ice–air spheroid with the same mass, size, and aspect ratio.
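For orientation, the standard Rayleigh–Gans backscatter integral underlying such derivations can be evaluated numerically once a distribution of ice along the propagation direction is assumed; the area function, ice volume, and dielectric factor below are illustrative stand-ins, not the paper's self-similar model.

```python
import numpy as np

def rg_backscatter(wavelength, area_fn, d_max, n=4096):
    """Backscatter cross section (m^2) in the Rayleigh-Gans
    approximation: sigma = (9 k^4 |K|^2 / 4 pi) |int A(s) e^{2iks} ds|^2,
    where A(s) is the solid-ice cross-sectional area per unit length
    along the propagation direction (so A integrates to ice volume V)."""
    k = 2.0 * np.pi / wavelength
    K2 = 0.176                    # |K|^2 of solid ice at mm wavelengths
    s, ds = np.linspace(0.0, d_max, n, retstep=True)
    form = np.sum(area_fn(s) * np.exp(2j * k * s)) * ds
    return 9.0 * k**4 * K2 / (4.0 * np.pi) * np.abs(form) ** 2

# Illustrative smooth mean shape for a 1 cm particle holding
# V = 5e-9 m^3 of ice (a raised-sine profile that integrates to V):
V, d = 5e-9, 0.01
area = lambda s: (2.0 * V / d) * np.sin(np.pi * s / d) ** 2
print(rg_backscatter(3.2e-3, area, d))
```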
Abstract:
In this work, we prove a weak Noether-type theorem for a class of variational problems that admit broken extremals. We use this result to prove discrete Noether-type conservation laws for a conforming finite element discretisation of a model elliptic problem. In addition, we study how well the finite element scheme satisfies the continuous conservation laws arising from the application of Noether's first theorem (1918). We summarise extensive numerical tests illustrating the conservation of the discrete Noether law, using the p-Laplacian as an example, and we derive a geometry-based adaptive algorithm in which an appropriate Noether quantity is the goal functional.
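For reference, a minimal strong-form statement of the Noether law in question (our notation, restricted to first-order Lagrangians and vertical symmetries; the paper's contribution is the weak version for broken extremals and its discrete counterpart):

```latex
% For J(u) = \int_\Omega L(x, u, \nabla u)\,dx, suppose the vertical
% variation u \mapsto u + \varepsilon \varphi leaves L invariant to
% first order:  \varphi\,L_u + \nabla\varphi \cdot L_{\nabla u} = 0.
% Along a smooth extremal, L_u = \operatorname{div} L_{\nabla u}, so
\[
  \operatorname{div}\big( \varphi\, L_{\nabla u} \big)
  = \nabla\varphi \cdot L_{\nabla u}
    + \varphi\, \operatorname{div} L_{\nabla u}
  = 0 .
\]
% For the p-Laplacian, L = \tfrac{1}{p}\lvert\nabla u\rvert^{p}, and
% the conserved flux involves L_{\nabla u} = |\nabla u|^{p-2}\nabla u.
```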
Abstract:
The viability of two different classes of Λ(t)CDM cosmologies is tested using APM 08279+5255, an old quasar at redshift z = 3.91. In the first class of models, the cosmological term scales as Λ(t) ~ R^(-n). The particular case n = 0 describes the standard ΛCDM model, whereas n = 2 stands for the Chen and Wu model. For an estimated age of 2 Gyr, it is found that the power index has a lower limit n > 0.21, whereas for 3 Gyr the limit is n > 0.6. Since n cannot be as large as ~0.81, the ΛCDM and Chen and Wu models are both ruled out by this analysis. The second class of models is the one recently proposed by Wang and Meng, which describes several Λ(t)CDM cosmologies discussed in the literature. Assuming that the true age is 2 Gyr, it is found that the ε parameter satisfies the lower bound ε > 0.11, while for 3 Gyr a lower limit of ε > 0.52 is obtained. Such limits are slightly modified when the baryonic component is included.
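Both bounds rest on the generic age test, sketched here in our notation:

```latex
% The universe at the quasar's redshift must be older than the quasar
% itself (t_q = 2 or 3 Gyr):
\[
  t(z{=}3.91) \;=\; \int_{3.91}^{\infty} \frac{dz'}{(1+z')\, H(z')}
  \;\ge\; t_q ,
\]
% with H(z) supplied by the particular Lambda(t)CDM model; imposing
% the inequality translates into the quoted bounds on n or epsilon.
```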
Abstract:
In contrast to the many studies on the venoms of scorpions, spiders, snakes and cone snails, up to now there has been no report of a proteomic analysis of sea anemone venoms. In this work we report for the first time the peptide mass fingerprint and some novel peptides in the neurotoxic fraction (Fr III) of the venom of the sea anemone Bunodosoma cangicum. Fr III is neurotoxic to crabs and was purified by rp-HPLC on a C-18 column, yielding 41 fractions. By checking their molecular masses by ESI-Q-Tof and MALDI-Tof MS, we found 81 components ranging from near 250 amu to approximately 6000 amu. Some of the peptidic molecules were partially sequenced by the automated Edman technique. Three of them are peptides of near 4500 amu belonging to the class of the BcIV, BDS-I, BDS-II, APETx1, APETx2 and Am-II toxins. Another three peptides represent a novel group of toxins (~3200 amu). A further three molecules (~4900 amu) belong to the group of type 1 sodium channel neurotoxins. When assayed on crab leg nerve compound action potentials, one of the BcIV- and APETx-like peptides exhibited an action similar to that of the type 1 sodium channel toxins in this preparation, suggesting the same target in this assay. On the other hand, one of the novel peptides, of 3176 amu, displayed an action resembling potassium channel blockade in this experiment. In summary, proteomic analysis and mass fingerprinting of sea anemone venom fractions by MS are valuable tools, allowing us to rapidly predict the occurrence of different groups of toxins and facilitating the search for, and characterization of, novel molecules without the need for full characterization of individual components by broader assays and bioassay-guided purifications. It also shows that sea anemones employ dozens of components for prey capture and defense.
Abstract:
There is an increasing interest in the application of Evolutionary Algorithms (EAs) to induce classification rules. This hybrid approach can benefit areas where classical methods for rule induction have not been very successful. One example is the induction of classification rules in imbalanced domains. Imbalanced data occur when one or more classes heavily outnumber the others. Frequently, classical machine learning (ML) classifiers are not able to learn in the presence of imbalanced data sets, inducing classification models that always predict the most numerous classes. In this work, we propose a novel hybrid approach to deal with this problem. We create several balanced data sets, each containing all minority-class cases and a random sample of majority-class cases. These balanced data sets are fed to classical ML systems, which produce rule sets. The rule sets are combined into a pool of rules, and an EA is used to build a classifier from this pool. This hybrid approach has some advantages over undersampling, since it reduces the amount of discarded information, and some advantages over oversampling, since it avoids overfitting. The proposed approach was experimentally analysed, and the results show an improvement in classification performance, measured as the area under the receiver operating characteristic (ROC) curve.
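A minimal sketch of the balanced-sampling step, with illustrative names and signature (the rule inducer and the EA are not shown):

```python
import numpy as np

def balanced_sets(X, y, minority_label, n_sets, seed=0):
    """Yield balanced training sets of the kind described: each keeps
    ALL minority-class cases plus an equally sized random sample
    (without replacement) of majority-class cases."""
    rng = np.random.default_rng(seed)
    min_idx = np.flatnonzero(y == minority_label)
    maj_idx = np.flatnonzero(y != minority_label)
    for _ in range(n_sets):
        pick = rng.choice(maj_idx, size=min_idx.size, replace=False)
        idx = np.concatenate([min_idx, pick])
        yield X[idx], y[idx]

X = np.arange(300.0).reshape(100, 3)
y = (np.arange(100) < 10).astype(int)        # 10 minority, 90 majority
for Xb, yb in balanced_sets(X, y, minority_label=1, n_sets=3):
    print(Xb.shape)                           # (20, 3) each

# Each balanced set would be handed to a classical rule inducer; the
# union of all induced rules forms the pool the EA then searches.
```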
Abstract:
The constrained compartmentalized knapsack problem can be seen as an extension of the constrained knapsack problem. Here, however, the items are grouped into different classes, and the overall knapsack has to be divided into compartments, each loaded with items from a single class. Moreover, building a compartment incurs a fixed cost and a fixed loss of capacity in the original knapsack, and the compartments are bounded from below and above. The objective is to maximize the total value of the items loaded in the overall knapsack minus the cost of the compartments. This problem has been formulated as an integer non-linear program, and in this paper we reformulate the non-linear model as an integer linear master problem with a large number of variables. Some heuristics based on the solution of the restricted master problem are investigated. A new and more compact integer linear model is also presented, which can be solved by a commercial branch-and-bound solver; this found optimal solutions for most instances of the constrained compartmentalized knapsack problem. The heuristics, on the other hand, provide good solutions with low computational effort.
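One plausible reading of the compartmentalized structure, written in our own notation (the paper's exact formulation may differ):

```latex
% Notation (ours): class k holds items i with value v_{ki} and weight
% w_{ki}; opening compartment k costs c_k and consumes extra capacity
% s_k; an open compartment carries a load q_k with L <= q_k <= U; the
% overall knapsack has capacity W.
\begin{align*}
  \max\quad & \sum_{k}\sum_{i} v_{ki}\, x_{ki} \;-\; \sum_{k} c_k\, y_k \\
  \text{s.t.}\quad
    & q_k = \sum_{i} w_{ki}\, x_{ki}, \qquad
      L\, y_k \le q_k \le U\, y_k \quad \text{for all } k, \\
    & \sum_{k} \big( q_k + s_k\, y_k \big) \le W, \qquad
      x_{ki} \in \{0,1\}, \quad y_k \in \{0,1\}.
\end{align*}
% This is only a linear sketch of the structure; the master-problem
% reformulation mentioned above introduces one column per feasible
% compartment loading instead.
```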
Abstract:
Model trees are a particular case of decision trees employed to solve regression problems. They have the advantage of presenting an interpretable output, which helps end-users gain confidence in the prediction and provides a basis for new insight into the data, confirming or rejecting hypotheses previously formed. Moreover, model trees present an acceptable level of predictive performance in comparison to most techniques used for solving regression problems. Since generating the optimal model tree is an NP-complete problem, traditional model tree induction algorithms use a greedy top-down divide-and-conquer strategy, which may not converge to the globally optimal solution. In this paper, we propose a novel algorithm based on the evolutionary algorithms paradigm as an alternative heuristic for generating model trees, in order to improve convergence to globally near-optimal solutions. We call our new approach evolutionary model tree induction (E-Motion). We test its predictive performance using public UCI data sets, and we compare the results to traditional greedy regression/model tree induction algorithms, as well as to other evolutionary approaches. Results show that our method presents a good trade-off between predictive performance and model comprehensibility, which may be crucial in many machine learning applications.
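A generic EA skeleton of the kind such an approach instantiates; here the genome is a flat parameter vector and the fitness a toy objective, standing in for tree genomes, tree-aware operators, and an error-plus-size fitness (all assumptions, not E-Motion itself):

```python
import random

random.seed(0)

def crossover(a, b):
    """One-point crossover on list genomes (tree-aware in practice)."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(g):
    """Perturb one gene (a structural tree edit in practice)."""
    g = g[:]
    g[random.randrange(len(g))] += random.gauss(0, 0.1)
    return g

def fitness(g):
    """Toy minimization target standing in for error + size penalty."""
    return sum(x * x for x in g)

pop = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness)                 # best individuals first
    parents = pop[:15]                    # truncation selection
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(15)]
    pop = parents + children              # elitist replacement
print(min(map(fitness, pop)))
```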
Abstract:
We study the validity of the Born-Oppenheimer approximation in chaotic dynamics. Using numerical solutions of autonomous Fermi accelerators, we show that the general adiabatic conditions can be interpreted as the narrowness of the chaotic region in phase space.
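For context, the textbook classical adiabatic condition that this abstract reinterprets (standard form, our notation):

```latex
% The action of a bounded orbit,
\[
  J = \oint p \, dq ,
\]
% is approximately conserved when the external parameter \lambda(t)
% varies slowly compared with the orbital period T:
\[
  T \left| \frac{\dot{\lambda}}{\lambda} \right| \ll 1 .
\]
```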
Abstract:
We apply a self-energy-corrected local density approximation (LDA) to obtain corrected bulk band gaps and to study the band offsets of AlAs grown on GaAs (AlAs/GaAs). We also investigate the Al(x)Ga(1-x)As/GaAs alloy interface, commonly employed in band gap engineering. The calculations are fully ab initio, with no adjustable parameters or experimental input, and at a computational cost comparable to traditional LDA. Our results are in good agreement with experimental values and other theoretical studies.
Abstract:
We experimentally observe a deviation of the radius of a Bose-Einstein condensate, after free expansion, from the standard Thomas-Fermi prediction, as a function of temperature. A modified Hartree-Fock model, based mainly on the influence of the thermal cloud on the condensate cloud, is used to explain the observations.
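The zero-temperature baseline being tested is the standard Thomas-Fermi radius for an isotropic harmonic trap (textbook result, our notation):

```latex
% For N condensed atoms of mass m with scattering length a in a trap
% of frequency \omega, with a_{ho} = \sqrt{\hbar / (m \omega)},
\[
  R_{\mathrm{TF}} \;=\; a_{\mathrm{ho}}
  \left( \frac{15\, N\, a}{a_{\mathrm{ho}}} \right)^{1/5} .
\]
% This is a zero-temperature result; the deviation reported above is
% the temperature dependence it omits.
```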
Abstract:
The nonequilibrium phase transition of the one-dimensional triplet-creation model is investigated using the n-site approximation scheme. We find that the phase diagram in the space of parameters (γ, D), where γ is the particle decay probability and D is the diffusion probability, exhibits a tricritical point for n >= 4. However, fitting the tricritical coordinates (γ_t, D_t) using data for 4 <= n <= 13 predicts that γ_t becomes negative for n >= 26, thus indicating that the phase transition is always continuous in the limit n → ∞. However, the large discrepancies between the critical parameters obtained in this limit and those obtained by Monte Carlo simulations, as well as a puzzling non-monotonic dependence of these parameters on the order of the approximation n, argue for the inadequacy of the n-site approximation for studying the triplet-creation model at computationally feasible values of n.
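The kind of extrapolation described can be illustrated with a short fit; the data values and the 1/n ansatz below are placeholders, not the paper's numbers or fitting form:

```python
import numpy as np

# Fit the n-site tricritical coordinate gamma_t(n) to a smooth form in
# 1/n and ask where it would cross zero. Fake data stand in for the
# n-site results, and the linear-in-1/n ansatz is an assumption.
n = np.arange(4, 14)
gamma_t = -0.05 + 1.5 / n                 # placeholder "data"
a, b = np.polyfit(1.0 / n, gamma_t, 1)    # gamma_t ~ a/n + b
# gamma_t < 0 once n exceeds -a/b, if the trend continues:
print("gamma_t changes sign near n =", -a / b)
```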
Abstract:
Using Heavy Quark Effective Theory with non-perturbatively determined parameters in a quenched lattice calculation, we evaluate the splittings between the ground state and the first two radially excited states of the B_s system at static order. We also determine the splitting between the first excited and ground state, and between the B_s* and B_s ground states, to order 1/m_b. The Generalized Eigenvalue Problem and the use of all-to-all propagators are important ingredients of our approach.
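For reference, the Generalized Eigenvalue Problem referred to is the standard variational method on a correlator matrix (standard formulation, our notation):

```latex
% From a matrix of correlators C_{ij}(t) = <O_i(t) O_j^\dagger(0)>
% built on a basis of interpolating operators O_i, solve, at fixed t_0,
\[
  C(t)\, v_n(t, t_0) \;=\; \lambda_n(t, t_0)\, C(t_0)\, v_n(t, t_0) .
\]
% Effective energies from the eigenvalues converge to the level
% energies at large times (a = lattice spacing):
\[
  a E_n^{\mathrm{eff}}(t)
  \;=\; \ln \frac{\lambda_n(t, t_0)}{\lambda_n(t + a, t_0)}
  \;\xrightarrow[t \to \infty]{}\; a E_n .
\]
```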