978 results for size-extensivity error


Relevance:

100.00%

Publisher:

Abstract:

The main goal of the research presented in this work is to provide important insights into the computational modeling of open-shell species. This goal is pursued through three projects: the investigation of the size-extensivity error in Equation-of-Motion Coupled Cluster methods, the analysis of the long-range corrected scheme for predicting UV-Vis spectra of Cu(II) complexes with 4-imidazole acetate and its ethylated derivative, and the exploration of the importance of choosing a proper basis set for describing systems such as the lithium monoxide anion. The most significant findings of this research are: (i) the contribution of the left operator to the size-extensivity error of the CR-EOMCC(2,3) approach; (ii) the cause of the d-d shifts observed when varying the range-separation parameter and the amount of exact exchange, arising from the imbalanced treatment of localized vs. delocalized orbitals via the "tuned" CAM-B3LYP* functional; and (iii) the proper acidity trend of the first-row hydrides and their lithiated analogs, which may be reversed if the basis sets are not correctly selected.

Relevance:

90.00%

Publisher:

Abstract:

This thesis presents a method to filter errors out of multidimensional databases. The method does not require any a priori information about the nature of the errors. In particular, the errors need not be small, random, or exhibit zero mean; they are only required to be relatively uncorrelated with the clean information contained in the database. The method is based on an improved extension of a seminal iterative gappy reconstruction method (able to reconstruct lost information at known positions in the database) due to Everson and Sirovich (1995). The improved gappy reconstruction method is evolved into a two-step error filtering method: it first (a) identifies the error locations in the database and then (b) reconstructs the information in these locations by treating the associated data as gappy data. The resulting method filters out O(1) errors in an efficient fashion, both when these are random and when they are systematic, and both when they are concentrated and when they are spread along the database. The performance of the method is first illustrated using a two-dimensional toy-model database resulting from discretizing a transcendental function, and then tested on two CFD-calculated, three-dimensional aerodynamic databases containing the pressure coefficient on the surface of a wing for varying values of the angle of attack. A more general performance analysis of the method is presented with the intention of quantifying, first, the degree of randomness in the errors that the method can tolerate while maintaining correct performance and, second, the size of error the method can detect. Lastly, some improvements of the method are proposed, together with their respective verification.
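The core building block, the gappy reconstruction of Everson and Sirovich (1995), can be sketched briefly: the coefficients of a record with missing entries are obtained by a least-squares fit of the database's POD/SVD modes on the known entries only, and that fit then fills the gaps. The sketch below is a minimal illustration of this step only, not the thesis's improved iterative two-step filter; the low-rank random matrix and the 30% gap fraction are placeholder assumptions.

```python
# Minimal sketch of gappy reconstruction (Everson & Sirovich, 1995): fill missing
# entries of one database record by least-squares fitting POD/SVD modes on the
# known entries.  The rank-5 random matrix is a stand-in for a real database;
# the thesis builds an improved, iterative, two-step error filter on this idea.
import numpy as np

rng = np.random.default_rng(1)
database = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 60))  # rank-5 toy data

# POD modes of the database (here taken from the clean data for simplicity)
_, _, Vt = np.linalg.svd(database, full_matrices=False)
modes = Vt[:5]                                   # (n_modes, n_columns)

record = database[0].copy()
mask = rng.random(record.size) < 0.3             # 30% of entries are "gaps"
record[mask] = np.nan                            # unknown positions

# fit mode coefficients using only the known entries, then reconstruct the gaps
known = ~mask
coeffs, *_ = np.linalg.lstsq(modes[:, known].T, record[known], rcond=None)
reconstruction = coeffs @ modes

print("max gap error:", np.max(np.abs(reconstruction[mask] - database[0, mask])))
```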

Relevance:

90.00%

Publisher:

Abstract:

Phytoplankton are the basis of marine food webs and affect biogeochemical cycles. As CO2 levels increase, shifts in the frequencies and physiology of ecotypes within phytoplankton groups will affect their nutritional value and biogeochemical function. However, studies so far are based on a few representative genotypes from key species. Here, we measure changes in cellular function and growth rate, at atmospheric CO2 concentrations predicted for the year 2100, in 16 ecotypes of the marine picoplankton Ostreococcus. We find that variation in plastic responses among ecotypes is on par with published between-genera variation, so the responses of one or a few ecotypes cannot be used to estimate changes to the physiology or composition of a species under CO2 enrichment. We show that the ecotypes best at taking advantage of CO2 enrichment, by changing their photosynthesis rates the most, should increase in relative fitness and thus in frequency in a high-CO2 environment. Finally, information on sampling location, rather than phylogenetic relatedness, is a good predictor of which ecotypes are likely to increase in frequency in this system.

Relevance:

80.00%

Publisher:

Abstract:

Prior to the development of a production-standard control system for ML Aviation's plan-symmetric remotely piloted helicopter system, SPRITE, optimum solutions to technical requirements had yet to be found for some aspects of the work. This thesis describes an industrial project where solutions to real problems have been provided within strict timescale constraints. Use has been made of published material wherever appropriate; new solutions have been contributed where none existed previously. A lack of clearly defined user requirements from potential Remotely Piloted Air Vehicle (RPAV) system users is identified. A simulation package is defined to enable the RPAV designer to progress with air vehicle and control system design, development and evaluation studies, and to assist the user in investigating his applications. The theoretical basis of this simulation package is developed, including Co-axial Contra-rotating Twin Rotor (CCTR), six degrees of freedom motion, fuselage aerodynamics, and sensor and control system models. A compatible system of equations is derived for modelling a miniature plan-symmetric helicopter. Rigorous searches revealed a lack of CCTR models, based on closed form expressions to obviate integration along the rotor blade, for stabilisation and navigation studies through simulation. An economic CCTR simulation model is developed and validated by comparison with published work and practical tests. Confusion in published work between attitude and Euler angles is clarified. The implementation of the theory into a high integrity software package is discussed. Use is made of a novel technique basing the integration time step size on dynamic adjustment of error assessment. Simulation output is presented for studies of control system stability verification, cross coupling of motion between control channels, and air vehicle response to demands and horizontal wind gusts. Keywords: Contra-Rotating Twin Rotor; Flight Control System; Remotely Piloted; Plan-Symmetric Helicopter; Simulation; Six Degrees of Freedom Motion.
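The abstract mentions basing the integration time step on a dynamic error assessment. As a generic illustration only (a standard step-doubling estimate driving an RK4 integrator, not the thesis's specific scheme), the sketch below shows the idea; the dynamics function, tolerance and initial state are placeholder assumptions rather than the helicopter model.

```python
# Generic illustration of error-based step-size control: step doubling with a
# classical RK4 integrator.  Not the thesis's scheme; the damped-oscillator
# dynamics, tolerance and initial state are placeholders.
import numpy as np

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate_adaptive(f, t0, y0, t_end, h=0.1, tol=1e-6):
    t, y = t0, np.asarray(y0, dtype=float)
    while t < t_end:
        h = min(h, t_end - t)
        y_full = rk4_step(f, t, y, h)                                      # one full step
        y_half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)   # two half steps
        err = np.max(np.abs(y_half - y_full))                              # error assessment
        if err <= tol:                               # accept the step, then let it grow
            t, y = t + h, y_half
            h *= min(2.0, 0.9 * (tol / max(err, 1e-16)) ** 0.2)
        else:                                        # reject the step and shrink it
            h *= max(0.1, 0.9 * (tol / err) ** 0.2)
    return t, y

# Placeholder dynamics: a lightly damped oscillator standing in for the vehicle model.
damped = lambda t, y: np.array([y[1], -0.1 * y[1] - y[0]])
print(integrate_adaptive(damped, 0.0, [1.0, 0.0], 10.0))
```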

Relevance:

80.00%

Publisher:

Abstract:

The research described here concerns the development of metrics and models to support the development of hybrid (conventional/knowledge based) integrated systems. The thesis argues from the point that, although it is well known that estimating the cost, duration and quality of information systems is a difficult task, it is far from clear what sorts of tools and techniques would adequately support a project manager in the estimation of these properties. A literature review shows that metrics (measurements) and estimating tools have been developed for conventional systems since the 1960s, while there has been very little research on metrics for knowledge based systems (KBSs). Furthermore, although there are a number of theoretical problems with many of the 'classic' metrics developed for conventional systems, it also appears that the tools which such metrics can be used to develop are not widely used by project managers. A survey of large UK companies confirmed this continuing state of affairs. Before any useful tools could be developed, therefore, it was important to find out why project managers were not using these tools already. By characterising those companies that use software cost estimating (SCE) tools against those which could but do not, it was possible to recognise the involvement of the client/customer in the process of estimation. Pursuing this point, a model of the early estimating and planning stages (the EEPS model) was developed to test exactly where estimating takes place. The EEPS model suggests that estimating could take place either before a fully developed plan has been produced, or while this plan is being produced. If it were the former, then SCE tools would be particularly useful, since there is very little other data available from which to produce an estimate. A second survey, however, indicated that project managers see estimating as essentially the latter, at which point project management tools are available to support the process. It would seem, therefore, that SCE tools are not being used because project management tools are being used instead. The issue here is not with the method of developing an estimating model or tool, but with the way in which "an estimate" is intimately tied to an understanding of what tasks are being planned. Current SCE tools are perceived by project managers as targeting the wrong point of estimation. A model (called TABATHA) is then presented which describes how an estimating tool based on an analysis of tasks would fit into the planning stage. The issue of whether metrics can be usefully developed for hybrid systems (which also contain KBS components) is tested by extending a number of 'classic' program size and structure metrics to a KBS language, Prolog. Measurements of lines of code, Halstead's operators/operands, McCabe's cyclomatic complexity, Henry & Kafura's data flow fan-in/out and post-release reported errors were taken for a set of 80 commercially developed LPA Prolog programs. By redefining the metric counts for Prolog, it was found that estimates of program size and error-proneness comparable to the best conventional studies are possible. This suggests that metrics can be usefully applied to KBS languages such as Prolog, and thus that the development of metrics and models to support the development of hybrid information systems is both feasible and useful.
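To make the idea of applying conventional size counts to a KBS language concrete, the toy sketch below computes two of the simplest measures, non-comment lines of code and the number of clauses per predicate, for a Prolog source file. It is an illustration only, not the redefined counting rules used in the thesis, and the regular expressions handle only straightforward clause syntax.

```python
# Toy size metrics for a Prolog source file: non-comment lines of code and
# clauses per predicate.  Illustrative only; not the thesis's counting rules.
import re
import sys
from collections import Counter

def prolog_size_metrics(source: str):
    # strip block comments and line comments (naive: ignores % inside quotes)
    source = re.sub(r"/\*.*?\*/", "", source, flags=re.S)
    lines = [re.sub(r"%.*", "", ln).rstrip() for ln in source.splitlines()]
    loc = sum(1 for ln in lines if ln.strip())
    # a clause head is an atom starting a line; count heads per predicate name
    heads = re.findall(r"^([a-z]\w*)\s*(?:\(|:-|\.)", "\n".join(lines), flags=re.M)
    return loc, Counter(heads)

if __name__ == "__main__":
    text = open(sys.argv[1]).read() if len(sys.argv) > 1 else (
        "parent(tom, bob).\nparent(bob, ann).\n"
        "ancestor(X, Y) :- parent(X, Y).\n"
        "ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).\n")
    loc, clauses = prolog_size_metrics(text)
    print("lines of code:", loc)
    print("clauses per predicate:", dict(clauses))
```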

Relevance:

40.00%

Publisher:

Abstract:

Bubble size in a gas-liquid ejector has been measured using an imaging technique and analysed to estimate the Sauter mean diameter. The individual bubble diameter is estimated from the two-dimensional elliptical contour of the actual three-dimensional ellipsoid in the system, by equating the volume of the ellipsoid to that of an equivalent sphere. It is observed that the bubbles are oblate and prolate ellipsoids in this air-water system. The bubble diameter is calculated on this basis, the Sauter mean diameter is estimated, and the error between these two shape assumptions is reported. The bubble size at different locations from the nozzle of the ejector is presented along with its percentage error, which is around 18%.
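The diameter estimate described above can be sketched directly. For a bubble imaged as an ellipse with major axis E and minor axis e, the unseen third axis is assumed equal to E for an oblate spheroid and to e for a prolate one; equating spheroid and sphere volumes gives the equivalent diameter, from which the Sauter mean follows. The axis values in the sketch are placeholders, not measurements from the study.

```python
# Equivalent diameters from a 2D ellipse fit and the Sauter mean diameter.
# Oblate spheroid volume (pi/6)*E*E*e  -> d = (E^2 * e)^(1/3)
# Prolate spheroid volume (pi/6)*E*e*e -> d = (E * e^2)^(1/3)
# Sauter mean diameter d32 = sum(d^3) / sum(d^2).  Axis values are placeholders.
import numpy as np

def d_oblate(E, e):
    return (E**2 * e) ** (1 / 3)

def d_prolate(E, e):
    return (E * e**2) ** (1 / 3)

def sauter_mean(d):
    d = np.asarray(d, dtype=float)
    return (d**3).sum() / (d**2).sum()

E = np.array([2.4, 3.1, 1.8, 2.9, 2.2])   # measured major axes, mm (illustrative)
e = np.array([1.9, 2.3, 1.5, 2.1, 1.7])   # measured minor axes, mm (illustrative)

d32_ob, d32_pr = sauter_mean(d_oblate(E, e)), sauter_mean(d_prolate(E, e))
print(f"Sauter mean diameter: oblate {d32_ob:.2f} mm, prolate {d32_pr:.2f} mm, "
      f"difference {100 * abs(d32_ob - d32_pr) / d32_ob:.1f}%")
```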

Relevance:

40.00%

Publisher:

Abstract:

We study the evolution of a finite-size population formed by mutationally isolated lineages of error-prone replicators in a two-peak fitness landscape. Computer simulations are performed to gain a stochastic description of the system dynamics. More specifically, for different population sizes, we compute the probability of each lineage being selected in terms of their mutation rates and the amplification factors of the fittest phenotypes. We interpret the results as a compromise between the characteristic time a lineage takes to reach its fittest phenotype by crossing the neutral valley and the selective value of the sequences that form the lineages. A main conclusion is drawn: for finite population sizes, the survival probability of the lineage that arrives first at the fittest phenotype rises significantly.
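As a minimal sketch of the kind of stochastic simulation described, the code below runs a simplified Wright-Fisher competition between two lineages that must cross neutral valleys of different widths, with different mutation rates and peak amplification factors, and estimates how often each lineage fixes. All parameter values, and the model details themselves, are illustrative assumptions rather than the study's setup.

```python
# Simplified Wright-Fisher sketch: two mutationally isolated lineages, each
# starting on a low peak (fitness 1), must cross a neutral valley of width d[i]
# (valley genotypes also have fitness 1 here) to reach a high peak of fitness
# A[i]; per-generation mutation rate u[i] advances an individual one step.
# Parameters and model are illustrative, not those of the study.
import numpy as np

rng = np.random.default_rng(0)

def lineage_selection_prob(N=200, u=(0.01, 0.002), d=(2, 1), A=(3.0, 2.0),
                           generations=2000, trials=500):
    """Estimate the probability that each lineage takes over the population."""
    wins = np.zeros(2)
    for _ in range(trials):
        lineage = np.repeat(np.arange(2), N // 2)   # start with a 50/50 mixture
        step = np.zeros(N, dtype=int)               # 0 = low peak, d[i] = high peak
        for _ in range(generations):
            at_peak = step >= np.array(d)[lineage]
            fitness = np.where(at_peak, np.array(A)[lineage], 1.0)
            # Wright-Fisher resampling proportional to fitness
            parents = rng.choice(N, size=N, p=fitness / fitness.sum())
            lineage, step = lineage[parents], step[parents]
            # mutation: each offspring advances one step with its lineage's rate
            advance = rng.random(N) < np.array(u)[lineage]
            step = np.minimum(step + advance, np.array(d)[lineage])
            if (lineage == lineage[0]).all():        # one lineage has fixed
                wins[lineage[0]] += 1
                break
    return wins / trials

print(lineage_selection_prob())
```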

Relevance:

40.00%

Publisher:

Abstract:

The efficacy of a specially constructed Gallager-type error-correcting code for communication over a Gaussian channel is examined. The construction is based on the introduction of complex matrices, used in both encoding and decoding, which comprise sub-matrices of cascading connection values. Finite-size effects are estimated in order to compare the results with the bounds set by Shannon. The critical noise level achieved for certain code rates and infinitely large systems nearly saturates the bounds set by Shannon, even when the connectivity used is low.
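For reference, the Shannon bound mentioned above can be made concrete: for a rate-R code over an additive white Gaussian noise channel with unit signal power and unconstrained (Gaussian) inputs, reliable communication requires R < C = ½ log2(1 + 1/σ²), giving a critical noise level σc² = 1/(2^{2R} − 1). The sketch below computes this generic threshold; it is not the paper's finite-size estimate, which in addition concerns binary (±1) channel inputs.

```python
# Shannon limit for an AWGN channel with unit signal power and Gaussian inputs:
# C(sigma^2) = 0.5 * log2(1 + 1/sigma^2); a rate-R code is reliable only while
# R < C, so the critical noise level is sigma_c^2 = 1 / (2**(2R) - 1).
# Generic reference calculation only, not the paper's binary-input, finite-size result.
import math

def awgn_capacity(sigma2: float, power: float = 1.0) -> float:
    """Capacity in bits per channel use of the unconstrained-input AWGN channel."""
    return 0.5 * math.log2(1.0 + power / sigma2)

def critical_noise(rate: float, power: float = 1.0) -> float:
    """Largest noise variance at which the given code rate stays below capacity."""
    return power / (2.0 ** (2.0 * rate) - 1.0)

for R in (1 / 6, 1 / 4, 1 / 2, 3 / 4):
    s2 = critical_noise(R)
    print(f"R = {R:.3f}: sigma_c^2 = {s2:.3f}, capacity there = {awgn_capacity(s2):.3f}")
```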

Relevance:

30.00%

Publisher:

Abstract:

Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³√((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
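As a concrete reading of the bound (a sketch only; constants and the log A, log m factors are ignored exactly as in the abstract), the term added to the training-set error estimate can be evaluated directly and depends on A, n and m but not on the number of weights:

```python
# Illustrative evaluation of the generalization gap term quoted in the abstract,
# A^3 * sqrt(log(n) / m), where A bounds the per-unit sum of weight magnitudes,
# n is the input dimension and m the number of training patterns.
# Constants and log A / log m factors are ignored, as stated in the abstract.
import math

def weight_size_gap(A: float, n: int, m: int) -> float:
    return A**3 * math.sqrt(math.log(n) / m)

# The gap shrinks with more training data but grows quickly with A,
# independently of the total number of weights in the network.
for m in (1_000, 10_000, 100_000):
    print(m, round(weight_size_gap(A=2.0, n=100, m=m), 4))
```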

Relevance:

30.00%

Publisher:

Abstract:

Background: Few studies have specifically investigated the functional effects of uncorrected astigmatism on measures of reading fluency. This information is important to provide evidence for the development of clinical guidelines for the correction of astigmatism. Methods: Participants included 30 visually normal young adults (mean age 21.7 ± 3.4 years). Distance and near visual acuity and reading fluency were assessed with optimal spectacle correction (baseline) and for two levels of astigmatism, 1.00DC and 2.00DC, at two axes (90° and 180°) to induce both against-the-rule (ATR) and with-the-rule (WTR) astigmatism. Reading and eye movement fluency were assessed using standardized clinical measures including the test of Discrete Reading Rate (DRR), the Developmental Eye Movement (DEM) test and by recording eye movement patterns with the Visagraph (III) during reading for comprehension. Results: Both distance and near acuity were significantly decreased compared to baseline for all of the astigmatic lens conditions (p < 0.001). Reading speed with the DRR for N16 print size was significantly reduced for the 2.00DC ATR condition (a reduction of 10%), while for smaller text sizes reading speed was reduced by up to 24% for the 1.00DC ATR and 2.00DC conditions in both axis directions (p < 0.05). For the DEM, sub-test completion speeds were significantly impaired, with the 2.00DC condition affecting both vertical and horizontal times and the 1.00DC ATR condition affecting only horizontal times (p < 0.05). Visagraph reading eye movements were not significantly affected by the induced astigmatism. Conclusions: Induced astigmatism impaired performance on selected tests of reading fluency, with ATR astigmatism having significantly greater effects on performance than WTR, even for relatively small amounts of astigmatic blur of 1.00DC. These findings have implications for the minimal prescribing criteria for astigmatic refractive errors.

Relevance:

30.00%

Publisher:

Abstract:

The quality of conceptual business process models is highly relevant for the design of corresponding information systems. In particular, a precise measurement of model characteristics can be beneficial from a business perspective, helping to save costs thanks to early error detection. This is just as true from a software engineering point of view, where models facilitate stakeholder communication and software system design. Research has investigated several proposals for measures of business process models, from a rather correlational perspective. This is helpful for understanding, for example, size and complexity as general driving forces of error probability. Yet design decisions usually have to build on thresholds, which can reliably indicate that a certain counter-action has to be taken. This cannot be achieved only by providing measures; it requires a systematic identification of effective and meaningful thresholds. In this paper, we derive thresholds for a set of structural measures for predicting errors in conceptual process models. To this end, we use a collection of 2,000 business process models from practice as a means of determining thresholds, applying an adaptation of the ROC curves method. Furthermore, an extensive validation of the derived thresholds was conducted using 429 EPC models from an Australian financial institution. Finally, significant thresholds were adapted to refine existing modeling guidelines in a quantitative way.
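As an illustration of the general idea (not the paper's adapted procedure), a threshold for a single structural measure can be derived from a labelled model collection with a standard ROC analysis, for example by maximizing Youden's J statistic. The data in the sketch below are placeholders.

```python
# Sketch: derive an error-prediction threshold for one structural measure
# (e.g. model size) from labelled process models via ROC analysis.
# The arrays are placeholders, not data from the paper; the paper uses an
# adaptation of the ROC-curves method rather than the plain Youden index.
import numpy as np
from sklearn.metrics import roc_curve

size = np.array([12, 18, 25, 31, 40, 47, 55, 63, 72, 90])   # measure values per model
has_error = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])        # 1 = model contains an error

fpr, tpr, thresholds = roc_curve(has_error, size)
best = np.argmax(tpr - fpr)                                  # Youden's J = TPR - FPR
print(f"threshold: flag models with size >= {thresholds[best]}")
```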

Relevance:

30.00%

Publisher:

Abstract:

Loss of situation awareness is a common factor leading to human error in the aviation industry. However, few studies have investigated the effect on situation awareness where the control interface is a touch-screen device that supports simultaneous multi-touch input and information output. This research aims to conduct an experiment to evaluate the difference in situation awareness between a large-screen device, the DiamondTouch (DT107), and a small-screen device, the iPad, both with multi-touch interactive functions. The Interface Operation and Situation Awareness Testing Simulator (IOSATS) tests three basic interface operations (Search Target, Information Reading, and Change Detection) by implementing a simplified search and rescue scenario. The results of this experiment will provide reliable data for future research on improving operators' situation awareness in the avionics domain.

Relevance:

30.00%

Publisher:

Abstract:

Computer experiments, consisting of a number of runs of a computer model with different inputs, are now commonplace in scientific research. Using a simple fire model for illustration, some guidelines are given for the size of a computer experiment. A graph is provided relating the error of prediction to the sample size, which should be of use when designing computer experiments. Methods for augmenting computer experiments with extra runs are also described and illustrated. The simplest method involves adding one point at a time, choosing the point with the maximum prediction variance. Another method that appears to work well is to choose points from a candidate set with maximum determinant of the variance-covariance matrix of predictions.
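The simplest augmentation strategy described above can be sketched with a Gaussian process emulator: fit the emulator to the existing runs and add, one at a time, the candidate input with the largest predictive variance. The toy model, design and kernel choice below are placeholders, not the fire model or data used in the paper.

```python
# Sequential augmentation of a computer experiment: add the candidate point with
# maximum prediction variance under a Gaussian process emulator.  Toy example.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def toy_model(x):                        # stand-in for the expensive computer model
    return np.sin(3 * x) + 0.5 * x

X = np.linspace(0.0, 2.0, 5).reshape(-1, 1)          # existing design
y = toy_model(X).ravel()
candidates = np.linspace(0.0, 2.0, 200).reshape(-1, 1)

for _ in range(3):                        # add three extra runs
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_new = candidates[np.argmax(std)].reshape(1, -1)  # point with max prediction variance
    X = np.vstack([X, x_new])
    y = np.append(y, toy_model(x_new).ravel())
    print("added run at x =", float(x_new[0, 0]))
```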

Relevance:

30.00%

Publisher:

Abstract:

Purpose: This work introduces the concept of very small field size. Output factor (OPF) measurements at these field sizes require extremely careful experimental methodology, including the measurement of dosimetric field size at the same time as each OPF measurement. Two quantifiable scientific definitions of the threshold of very small field size are presented. Methods: A practical definition was established by quantifying the effect that a 1 mm error in field size or detector position had on OPFs, and setting the acceptable uncertainty on OPF at 1%. Alternatively, for a theoretical definition of very small field size, the OPFs were separated into additional factors to investigate the specific effects of lateral electronic disequilibrium, photon scatter in the phantom and source occlusion. The dominant effect was established and formed the basis of a theoretical definition of very small fields. Each factor was obtained using Monte Carlo simulations of a Varian iX linear accelerator for various square field sizes of side length from 4 mm to 100 mm, using a nominal photon energy of 6 MV. Results: According to the practical definition established in this project, field sizes < 15 mm were considered to be very small for 6 MV beams for maximal field size uncertainties of 1 mm. If the acceptable uncertainty in the OPF was increased from 1.0% to 2.0%, or field size uncertainties were 0.5 mm, field sizes < 12 mm were considered to be very small. Lateral electronic disequilibrium in the phantom was the dominant cause of change in OPF at very small field sizes. Thus the theoretical definition of very small field size coincided with the field size at which lateral electronic disequilibrium clearly caused a greater change in OPF than any other effect. This was found to occur at field sizes < 12 mm. Source occlusion also caused a large change in OPF for field sizes < 8 mm. Based on the results of this study, field sizes < 12 mm were considered to be theoretically very small for 6 MV beams. Conclusions: Extremely careful experimental methodology, including the measurement of dosimetric field size at the same time as the output factor measurement for each field size setting and very precise detector alignment, is required at field sizes at least < 12 mm, and more conservatively < 15 mm, for 6 MV beams. These recommendations should be applied in addition to all the usual considerations for small field dosimetry, including careful detector selection.
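The practical definition described above can be sketched numerically: given an OPF-versus-field-size curve, find the field sizes at which a 1 mm field-size uncertainty changes the OPF by more than the 1% tolerance. The OPF table in the sketch is purely illustrative, not measured or Monte Carlo data from the study, so the threshold it produces is only a demonstration of the procedure.

```python
# Sketch of the "practical definition": find field sizes where a +/- 1 mm
# field-size error shifts the output factor (OPF) by more than 1% (relative).
# The OPF-versus-field-size table below is illustrative, not data from the study.
import numpy as np

field_mm = np.array([4, 6, 8, 10, 12, 15, 20, 30, 50, 100], dtype=float)
opf      = np.array([0.55, 0.70, 0.80, 0.86, 0.89, 0.92, 0.95, 0.97, 0.99, 1.00])

def opf_at(s):                                   # simple interpolation of the curve
    return np.interp(s, field_mm, opf)

def is_very_small(s, ds=1.0, tol=0.01):
    """True if a +/- ds mm field-size error changes the OPF by more than tol."""
    change = max(abs(opf_at(s + ds) - opf_at(s)), abs(opf_at(s - ds) - opf_at(s)))
    return change / opf_at(s) > tol

sizes = np.arange(4.0, 40.0, 0.5)
very_small = [s for s in sizes if is_very_small(s)]
print("very small fields for this toy curve: below about", very_small[-1] + 0.5, "mm")
```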