993 results for Measurements models
Abstract:
A set of 43 sediment cores from around the Canary Islands is used to characterise this region, which intersects meridional climatic regimes and zonal productivity gradients, at high spatial resolution. Using rapid and non-destructive core-logging techniques, we carried out Fe-intensity and magnetic susceptibility (MS) measurements and created a stack based on five stratigraphic reference cores for which a stratigraphic age model was available from δ18O and 14C analyses on planktonic foraminifera. By correlating the stack with the Fe and MS records of the other cores, we were able to develop age-depth models at all investigated sites in the region. We present the bulk sediment accumulation rates (AR) of the Canary Islands region as an indicator of shifts in the upwelling-influenced areas for the Holocene (0-12 ky), the deglaciation (12-18 ky) and the last glacial (18-40 ky). A general observation is enhanced productivity during glacial times, with the highest values during the deglaciation. We interpret the main differences between the analysed time intervals as the result of sea-level effects, changes in the extent of high-productivity areas, and current intensity.
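A minimal sketch (Python) of how an age-depth model of this kind can be transferred from a dated stack to an undated core: once features in the core's Fe/MS record have been matched to the stack, ages at the tie depths are adopted and ages elsewhere follow by interpolation; accumulation rates come from the depth/age increments. The arrays, tie points and numbers below are hypothetical illustrations, not data from the study.

```python
import numpy as np

# Hypothetical tie points: stack ages (ka BP) and the depths (cm) in an undated
# core at which the matching Fe/MS features were identified by correlation.
stack_tie_ages = np.array([0.0, 12.0, 18.0, 40.0])     # ka BP
core_tie_depths = np.array([0.0, 95.0, 160.0, 410.0])  # cm below sea floor

def age_at_depth(depth_cm, tie_depths, tie_ages):
    """Linear age-depth model between correlated tie points."""
    return np.interp(depth_cm, tie_depths, tie_ages)

def accumulation_rate(tie_depths, tie_ages):
    """Bulk sediment accumulation rate (cm/ky) between successive tie points."""
    return np.diff(tie_depths) / np.diff(tie_ages)

print(age_at_depth(120.0, core_tie_depths, stack_tie_ages))  # interpolated age (ka)
print(accumulation_rate(core_tie_depths, stack_tie_ages))    # AR per time interval
```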
Abstract:
Bibliography: p. 73-74.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
A stress-wave force balance for measurement of thrust, lift, and pitching moment on a large scramjet model (40 kg in mass, 1.165 m in length) in a reflected shock tunnel has been designed, calibrated, and tested. Transient finite element analysis was used to model the performance of the balance. This modeling indicates that good decoupling of signals and low sensitivity of the balance to the distribution of the load can be achieved with a three-bar balance. The balance was constructed and calibrated by applying a series of point loads to the model. Good agreement between finite element analysis and experimental results was obtained, with the finite element analysis aiding in the interpretation of some experimental results. Force measurements were made in a shock tunnel both with and without fuel injection, and measurements were compared with predictions using simple models of the scramjet and combustion. Results indicate that the balance is capable of resolving lift, thrust, and pitching moments with and without combustion. However, vibrations associated with tunnel operation interfered with the signals, indicating the importance of vibration isolation for accurate measurements.
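A hedged sketch of the calibration-matrix idea behind such a point-load calibration: if each applied load produces a vector of bar-gauge responses, a linear sensitivity matrix can be fitted by least squares and inverted to recover thrust, lift and pitching moment from measured signals. The real balance works by deconvolving transient stress-wave signals; the static fit below, with invented numbers, only illustrates the calibration step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative calibration: 20 point loads, each expressed as (thrust, lift, moment),
# and the corresponding steady outputs of the three bar gauges.
true_C = np.array([[1.2, 0.1, 0.0],
                   [0.1, 0.9, 0.3],
                   [0.0, 0.2, 1.5]])              # hypothetical sensitivity matrix
loads = rng.uniform(-1.0, 1.0, size=(20, 3))      # applied calibration loads
gauges = loads @ true_C.T + 0.01 * rng.standard_normal((20, 3))  # measured signals

# Fit the calibration matrix C (gauge ≈ C @ load) by least squares.
X, *_ = np.linalg.lstsq(loads, gauges, rcond=None)
C_fit = X.T

# Recover (thrust, lift, pitching moment) from a new gauge reading.
measured = np.array([0.8, -0.2, 0.5])
recovered_load = np.linalg.solve(C_fit, measured)
print(recovered_load)
```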
Abstract:
Pulse oximetry is commonly used as an arterial blood oxygen saturation (SaO2) measure. However, its other serial output, the photoplethysmography (PPG) signal, is not as well studied. Raw PPG signals can be used to estimate cardiovascular measures such as pulse transit time (PTT) and possibly heart rate (HR). These timing-related measurements depend heavily on there being minimal variability in the phase delay of the PPG signals. Masimo SET® Rad-9™ and Novametrix Oxypleth oximeters were investigated for their PPG phase characteristics on nine healthy adults. To facilitate comparison, PPG signals were acquired from fingers on the same hand in a random fashion. Results showed that the mean PTT variation obtained from the Masimo oximeter (37.89 ms) was much greater than that from the Novametrix (5.66 ms). Documented evidence suggests that a 1 ms variation in PTT is equivalent to a 1 mmHg change in blood pressure. Moreover, the PTT trend derived from the Masimo oximeter could be mistaken for obstructive sleep apnoeas based on the known criteria. HR was evaluated against estimates obtained from an electrocardiogram (ECG). Novametrix differed from ECG by 0.71 +/- 0.58% (p < 0.05) while Masimo differed by 4.51 +/- 3.66% (p > 0.05). Modern oximeters are attractive for their improved SaO2 measurement; however, using the raw PPG signals obtained directly from these oximeters for timing-related measurements warrants further investigation.
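A minimal sketch, assuming synchronously sampled ECG and PPG arrays, of how such timing-derived measures are typically computed: HR from R-peak intervals and PTT as the delay from each R peak to the following PPG pulse peak. The function names, sampling rate and peak-detection thresholds are illustrative assumptions, not the oximeters' internal processing; any phase delay the oximeter adds to the PPG shifts the detected pulse peaks and biases PTT directly.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 500  # Hz, assumed common sampling rate for ECG and PPG

def heart_rate_bpm(ecg, fs=FS):
    """Mean heart rate (beats/min) from ECG R-peak intervals."""
    r_peaks, _ = find_peaks(ecg, distance=int(0.4 * fs), prominence=np.std(ecg))
    rr_s = np.diff(r_peaks) / fs
    return 60.0 / rr_s.mean()

def pulse_transit_times(ecg, ppg, fs=FS):
    """PTT (ms) from each R peak to the next PPG pulse peak."""
    r_peaks, _ = find_peaks(ecg, distance=int(0.4 * fs), prominence=np.std(ecg))
    ppg_peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))
    ptt = []
    for r in r_peaks:
        later = ppg_peaks[ppg_peaks > r]
        if later.size:
            ptt.append((later[0] - r) * 1000.0 / fs)
    return np.array(ptt)
```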
Abstract:
This paper reports on the development of an artificial neural network (ANN) method to detect laminar defects, following the pattern-matching approach and utilizing dynamic measurements. Although structural health monitoring (SHM) using ANNs has attracted much attention in the last decade, the problem of how to select the optimal class of ANN models has not been investigated in great depth. It turns out that the lack of a rigorous ANN design methodology is one of the main reasons for the delay in the successful application of this promising technique in SHM. In this paper, a Bayesian method is applied to the selection of the optimal class of ANN models for a given set of input/target training data. The ANN design method is demonstrated for the case of the detection and characterisation of laminar defects in carbon fibre-reinforced beams, using flexural vibration data for beams with and without non-symmetric delamination damage.
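A rough sketch of model-class selection in this spirit: candidate ANN classes (here, hidden-layer sizes) are trained on the same input/target data and ranked by an evidence-style score. The BIC used below is only a crude stand-in for the Bayesian evidence of the paper, and the candidate sizes, regressor settings and data layout are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def bic_score(model, X, y, n_params):
    """BIC as a crude proxy for the Bayesian evidence: lower is better."""
    n = len(y)
    resid = y - model.predict(X)
    sigma2 = np.mean(resid ** 2) + 1e-12
    return n * np.log(sigma2) + n_params * np.log(n)

def n_weights(model):
    """Total number of trained weights and biases in a fitted MLP."""
    return sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)

def select_hidden_units(X, y, candidates=(2, 4, 8, 16)):
    """Pick the ANN class (hidden-layer size) with the best evidence proxy."""
    scores = {}
    for h in candidates:
        net = MLPRegressor(hidden_layer_sizes=(h,), max_iter=5000,
                           random_state=0).fit(X, y)
        scores[h] = bic_score(net, X, y, n_weights(net))
    return min(scores, key=scores.get), scores
```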
Abstract:
Neural networks can be regarded as statistical models and can be analysed in a Bayesian framework. Generalisation is measured by the performance on independent test data drawn from the same distribution as the training data. Such performance can be quantified by the posterior average of the information divergence between the true and the model distributions. Averaging over the Bayesian posterior guarantees internal coherence; using information divergence guarantees invariance with respect to representation. The theory generalises the least-mean-squares theory for linear Gaussian models to general problems of statistical estimation. The main results are: (1) the ideal optimal estimate is always given by the average over the posterior; (2) the optimal estimate within a computational model is given by the projection of the ideal estimate onto the model. This incidentally shows that some currently popular methods dealing with hyperpriors are in general unnecessary and misleading. The extension of information divergence to positive normalisable measures reveals a remarkable relation between the dual affine geometry of statistical manifolds and the geometry of the dual pair of Banach spaces Ld and Ldd. It therefore offers a conceptual simplification of information geometry. The general conclusion on the issue of evaluating neural network learning rules and other statistical inference methods is that such evaluations are only meaningful under three assumptions: the prior P(p), describing the environment of all the problems; the divergence Dd, specifying the requirement of the task; and the model Q, specifying the available computing resources.
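In symbols, a compact restatement of the two main results, using the Kullback-Leibler divergence as the special case and notation chosen here rather than taken from the thesis (which works with a wider family of divergences):

```latex
% Generalisation score of a candidate distribution q, given data D:
E(q \mid D) \;=\; \int D\!\left(p \,\middle\|\, q\right) P(p \mid D)\,\mathrm{d}p,
\qquad
D(p \,\|\, q) \;=\; \int p(x)\,\log\frac{p(x)}{q(x)}\,\mathrm{d}x .

% (1) The ideal optimal estimate over all distributions is the posterior average:
\hat{p}(x) \;=\; \int p(x)\,P(p \mid D)\,\mathrm{d}p .

% (2) The optimal estimate within a computational model Q is the projection of \hat{p} onto Q:
q^{*} \;=\; \operatorname*{arg\,min}_{q \in Q} \, D\!\left(\hat{p} \,\middle\|\, q\right).
```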
Abstract:
Neural networks are statistical models and learning rules are estimators. In this paper a theory for measuring generalisation is developed by combining Bayesian decision theory with information geometry. The performance of an estimator is measured by the information divergence between the true distribution and the estimate, averaged over the Bayesian posterior. This unifies the majority of error measures currently in use. The optimal estimators also reveal some intricate interrelationships among information geometry, Banach spaces and sufficient statistics.
Abstract:
The problem of evaluating different learning rules and other statistical estimators is analysed. A new general theory of statistical inference is developed by combining Bayesian decision theory with information geometry. It is coherent and invariant. For each sample a unique ideal estimate exists and is given by an average over the posterior. An optimal estimate within a model is given by a projection of the ideal estimate. The ideal estimate is a sufficient statistic of the posterior, so practical learning rules are functions of the ideal estimator. If the sole purpose of learning is to extract information from the data, the learning rule must also approximate the ideal estimator. This framework is applicable to both Bayesian and non-Bayesian methods, with arbitrary statistical models, and to supervised, unsupervised and reinforcement learning schemes.
Abstract:
The research described here concerns the development of metrics and models to support the development of hybrid (conventional/knowledge-based) integrated systems. The thesis argues from the point that, although it is well known that estimating the cost, duration and quality of information systems is a difficult task, it is far from clear what sorts of tools and techniques would adequately support a project manager in the estimation of these properties. A literature review shows that metrics (measurements) and estimating tools have been developed for conventional systems since the 1960s, while there has been very little research on metrics for knowledge-based systems (KBSs). Furthermore, although there are a number of theoretical problems with many of the 'classic' metrics developed for conventional systems, it also appears that the tools which such metrics can be used to develop are not widely used by project managers. A survey of large UK companies confirmed this continuing state of affairs. Before any useful tools could be developed, therefore, it was important to find out why project managers were not using these tools already. By characterising those companies that use software cost estimating (SCE) tools against those which could but do not, it was possible to recognise the involvement of the client/customer in the process of estimation. Pursuing this point, a model of the early estimating and planning stages (the EEPS model) was developed to test exactly where estimating takes place. The EEPS model suggests that estimating could take place either before a fully developed plan has been produced, or while this plan is being produced. If it were the former, then SCE tools would be particularly useful, since there is very little other data available from which to produce an estimate. A second survey, however, indicated that project managers see estimating as essentially the latter, at which point project management tools are available to support the process. It would seem, therefore, that SCE tools are not being used because project management tools are being used instead. The issue here is not with the method of developing an estimating model or tool, but with the way in which "an estimate" is intimately tied to an understanding of what tasks are being planned. Current SCE tools are perceived by project managers as targeting the wrong point of estimation. A model (called TABATHA) is then presented which describes how an estimating tool based on an analysis of tasks would fit into the planning stage. The issue of whether metrics can be usefully developed for hybrid systems (which also contain KBS components) is tested by extending a number of 'classic' program size and structure metrics to a KBS language, Prolog. Measurements of lines of code, Halstead's operators/operands, McCabe's cyclomatic complexity, Henry & Kafura's data-flow fan-in/out and post-release reported errors were taken for a set of 80 commercially developed LPA Prolog programs. By redefining the metric counts for Prolog, it was found that estimates of program size and error-proneness comparable to the best conventional studies are possible. This suggests that metrics can be usefully applied to KBS languages such as Prolog, and thus that the development of metrics and models to support the development of hybrid information systems is both feasible and useful.
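An illustrative sketch (not the thesis's tooling) of how a couple of the 'classic' size counts might be re-defined for Prolog source: non-comment lines of code plus rough clause and predicate counts, the kind of raw inputs a size or error-proneness model could then be fitted on. The counting rules below are deliberately crude assumptions.

```python
import re

def prolog_size_metrics(source: str) -> dict:
    """Crude re-definition of classic size counts for Prolog source text."""
    lines = [ln.strip() for ln in source.splitlines()]
    code_lines = [ln for ln in lines if ln and not ln.startswith('%')]
    # A clause is assumed to end with '.' at the end of a code line.
    clauses = sum(1 for ln in code_lines if ln.endswith('.'))
    # Heads like 'foo(' or 'foo :-' give a rough count of distinct predicate names.
    heads = {m.group(1) for ln in code_lines
             for m in [re.match(r'([a-z]\w*)\s*(\(|:-)', ln)] if m}
    return {'loc': len(code_lines), 'clauses': clauses, 'predicates': len(heads)}

example = """
% append/3
append([], Ys, Ys).
append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).
"""
print(prolog_size_metrics(example))  # {'loc': 2, 'clauses': 2, 'predicates': 1}
```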
Abstract:
Common approaches to IP-traffic modelling have featured the use of stochastic models, based on the Markov property, which can be classified into black-box and white-box models according to the approach used for modelling traffic. White-box models are simple to understand, transparent, and have a physical meaning attributed to each of the associated parameters. To exploit this key advantage, this thesis explores the use of simple classic continuous-time Markov models, based on a white-box approach, to model not only the network traffic statistics but also the source behaviour with respect to the network and application. The thesis is divided into two parts. The first part focuses on the use of simple Markov and semi-Markov traffic models, starting from the simplest two-state model and moving upwards to n-state models with Poisson and non-Poisson statistics. The thesis then introduces the convenient-to-use, mathematically derived Gaussian Markov models, which are used to model the measured network IP traffic statistics. As one of its most significant contributions, the thesis establishes the significance of second-order density statistics, revealing that, in contrast to first-order density, they carry much more unique information on traffic sources and behaviour. The thesis then exploits Gaussian Markov models to model these unique features and finally shows how the use of simple classic Markov models, coupled with second-order density statistics, provides an excellent tool for capturing maximum traffic detail, which in itself is the essence of good traffic modelling. The second part of the thesis studies the ON-OFF characteristics of VoIP traffic with reference to accurate measurements of the ON and OFF periods, made from a large multilingual database of over 100 hours of VoIP call recordings. The impact of the language, prosodic structure and speech rate of the speaker on the statistics of the ON-OFF periods is analysed and relevant conclusions are presented. Finally, an ON-OFF VoIP source model with log-normal transitions is contributed as an ideal candidate for modelling VoIP traffic, and the results of this model are compared with those of previously published work.
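A minimal sketch of the kind of two-state ON-OFF source with log-normally distributed sojourn times that the thesis proposes for VoIP: packets are emitted at a constant frame rate while ON (talk-spurt) and suppressed while OFF (silence). All parameter values below are made up for illustration, not the thesis's fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_on_off(duration_s, on_mu=-0.5, on_sigma=0.6,
                    off_mu=-0.2, off_sigma=0.8, pkt_rate=50.0):
    """ON-OFF VoIP source with log-normal ON and OFF period lengths.
    Returns packet emission times (s); parameters are illustrative only."""
    t, on, times = 0.0, True, []
    while t < duration_s:
        if on:
            period = rng.lognormal(on_mu, on_sigma)     # talk-spurt length (s)
            n_pkts = int(period * pkt_rate)             # e.g. one packet per 20 ms frame
            times.extend(t + np.arange(n_pkts) / pkt_rate)
        else:
            period = rng.lognormal(off_mu, off_sigma)   # silence length (s)
        t += period
        on = not on
    return np.array(times)

pkts = simulate_on_off(60.0)
print(len(pkts), "packets in 60 s of simulated speech")
```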
Abstract:
The paper is devoted to questions of modelling and substantiating super-resolution measuring-calculating systems in the context of the concept "device + PC = new possibilities". The authors have developed a new mathematical method for solving multi-criteria optimisation problems. The method is based on the physico-mathematical formalism of reduction of fuzzy distorted measurements. It is shown that the decisive part is played by the mathematical properties of the physical models of the measured object, its surroundings and the measuring components of the measuring-calculating system, and of their interaction, as well as by the developed mathematical method for solving the problem of processing and interpreting measurements.
Abstract:
Prognostic procedures can be based on ranked linear models. Ranked regression-type models are designed on the basis of feature vectors combined with a set of relations defined on selected pairs of these vectors. Feature vectors are composed of numerical results of measurements on particular objects or events. Ranked relations defined on selected pairs of feature vectors represent additional knowledge and can reflect experts' opinions about the considered objects. Ranked models have the form of linear transformations of feature vectors onto a line which preserve a given set of relations as well as possible. Ranked models can be designed through the minimisation of a special type of convex and piecewise-linear (CPL) criterion function. Some sets of ranked relations cannot be well represented by a single ranked model. Decomposition of the global model into a family of local ranked models could improve the representation. A procedure for the decomposition of ranked models is described in this paper.
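A hedged sketch of the ranked-model idea: find a weight vector w so that w·x_i exceeds w·x_j for each ranked pair (i above j), by subgradient descent on a convex piecewise-linear (hinge-type) penalty over the violated relations. This is a generic CPL-style surrogate with made-up data, not necessarily the exact criterion function or optimisation scheme of the paper.

```python
import numpy as np

def fit_ranked_model(X, pairs, margin=1.0, lr=0.01, epochs=200):
    """Learn w so that w @ X[i] exceeds w @ X[j] for each ranked pair (i, j),
    by subgradient descent on a convex piecewise-linear penalty."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = np.zeros_like(w)
        for i, j in pairs:                      # relation: object i ranked above object j
            if w @ X[i] - w @ X[j] < margin:    # relation violated (within the margin)
                grad -= X[i] - X[j]             # subgradient of the hinge penalty
        w -= lr * grad
    return w

# Toy example: 4 objects with 2 features; the expert says 0 > 1 > 2 > 3.
X = np.array([[3.0, 1.0], [2.5, 0.5], [1.0, 2.0], [0.5, 0.2]])
pairs = [(0, 1), (1, 2), (2, 3)]
w = fit_ranked_model(X, pairs)
print(X @ w)   # projections onto the line; should respect the rank order
```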
Abstract:
Computer networks are a critical factor in the performance of a modern company. Managing networks is as important as managing any other aspect of the company's performance and security. There are many tools and appliances for monitoring traffic and analysing network flow security. They use different approaches and rely on a variety of characteristics of the network flows. Network researchers are still working on a common approach to security baselining that might enable early-warning alerts. This research focuses on network security models, particularly the mitigation of Denial-of-Service (DoS) attacks, based on network flow analysis using flow measurements and the theory of Markov models. The content of the paper comprises the essentials of the author's doctoral thesis.
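A minimal sketch, with invented state labels, of how a Markov model fitted to normal flow measurements can flag DoS-like behaviour: flows whose state sequences are improbable under the baseline transition matrix (for example, bursts of half-open connections) receive a low per-transition log-likelihood. The state encoding, smoothing and alert threshold are assumptions for illustration, not the author's model.

```python
import numpy as np

STATES = ["idle", "syn", "established", "fin"]   # hypothetical per-flow states
IDX = {s: i for i, s in enumerate(STATES)}

def fit_transition_matrix(sequences, n=len(STATES), alpha=1.0):
    """Estimate Markov transition probabilities from normal-traffic flow sequences."""
    counts = np.full((n, n), alpha)              # Laplace smoothing
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[IDX[a], IDX[b]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def avg_log_likelihood(seq, P):
    """Average per-transition log-likelihood of an observed flow state sequence."""
    lls = [np.log(P[IDX[a], IDX[b]]) for a, b in zip(seq, seq[1:])]
    return float(np.mean(lls))

normal = [["idle", "syn", "established", "fin"]] * 50
P = fit_transition_matrix(normal)
suspect = ["syn"] * 10                           # SYN-flood-like flow
print(avg_log_likelihood(suspect, P))            # low score -> raise an alert
```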