958 results for Calibration curve


Relevance: 20.00%

Publisher:

Abstract:

The appealing feature of the arbitrage-free Nelson-Siegel model of the yield curve is the ability to capture movements in the yield curve through readily interpretable shifts in its level, slope or curvature, all within a dynamic arbitrage-free framework. To ensure that the level, slope and curvature factors evolve so as not to admit arbitrage, the model introduces a yield-adjustment term. This paper shows how the yield-adjustment term can also be decomposed into the familiar level, slope and curvature elements plus some additional readily interpretable shape adjustments. This means that, even in an arbitrage-free setting, it continues to be possible to interpret movements in the yield curve in terms of level, slope and curvature influences. © 2014 Taylor & Francis.
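
For orientation, the yield curve in the dynamic Nelson-Siegel family is usually written in the form below; the decomposition discussed in the abstract concerns the final, maturity-dependent yield-adjustment term. The notation is the standard textbook one and is supplied here as background rather than quoted from the paper.

\[
y_t(\tau) = L_t + S_t \, \frac{1 - e^{-\lambda\tau}}{\lambda\tau}
          + C_t \left( \frac{1 - e^{-\lambda\tau}}{\lambda\tau} - e^{-\lambda\tau} \right)
          - \frac{A(\tau)}{\tau},
\]

where L_t, S_t and C_t are the level, slope and curvature factors, \lambda is the exponential decay parameter, and -A(\tau)/\tau is the yield-adjustment term that the arbitrage-free version adds to the factor structure.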

Relevance: 20.00%

Publisher:

Abstract:

The study examined the effect of the range of a confidence scale on consumer knowledge calibration, specifically whether a restricted range scale (25%-100%) leads to a difference in calibration compared to a full range scale (0%-100%) for multiple-choice questions. A quasi-experimental study using student participants (N = 434) was employed. Data were collected from two samples; in the first sample (N = 167) a full range confidence scale was used, and in the second sample (N = 267) a restricted range scale was used. No differences were found between the two scales on knowledge calibration. Results from studies of knowledge calibration employing restricted range and full range confidence scales are thus comparable. © Psychological Reports 2014.
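
The calibration measure behind such comparisons is usually the difference between mean stated confidence and the proportion of items answered correctly. A minimal sketch, with invented responses rather than the study's data:

    import numpy as np

    def calibration_bias(confidence, correct):
        """Mean stated confidence minus proportion correct: positive values indicate
        overconfidence, negative values underconfidence."""
        return np.mean(confidence) - np.mean(correct)

    # One hypothetical respondent on ten multiple-choice items (confidence on a 0-1 scale)
    confidence = np.array([0.9, 0.75, 0.5, 1.0, 0.25, 0.75, 0.5, 0.9, 0.75, 1.0])
    correct = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1])

    print(f"calibration bias = {calibration_bias(confidence, correct):+.2f}")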

Relevance: 20.00%

Publisher:

Abstract:

This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used to produce OCT-phantoms. Transparent materials are generally inert to infrared radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material; this modification is confined to the focal volume and is highly reproducible. In order to select the best inscription parameters, combinations of different inscription parameters were tested, using three fs laser systems with different operating properties, on a variety of materials. This facilitated the understanding of the key characteristics of the produced structures with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica, and their use to characterise many properties of an OCT system (resolution, distortion, sensitivity decay, scan linearity) was demonstrated.

Quantitative methods were developed to support the characterisation of an OCT system collecting images from phantoms and also to improve the quality of the OCT images. The characterisation methods include measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is computationally intensive: standard central processing unit (CPU) based processing might take several minutes to a few hours for the acquired data, so data processing is a significant bottleneck. An alternative is expensive hardware-based processing such as field programmable gate arrays (FPGAs); more recently, however, graphics processing unit (GPU) based methods have been developed to minimise the processing and rendering time. These include standard-processing methods, i.e. the set of algorithms that process the raw interference data obtained by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented in a custom-built Fourier domain optical coherence tomography (FD-OCT) system, which currently processes and renders data in real time; its throughput is currently limited by the camera capture rate.

OCT-phantoms have been heavily used for the qualitative characterisation and adjustment/fine-tuning of the operating conditions of the OCT system, and investigations are under way to characterise OCT systems using our phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and for accelerating OCT data processing using GPUs. In the process of developing the phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding led to several novel pieces of research that are not only relevant to OCT but have broader importance; for example, extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications such as the fabrication of phase masks, waveguides and microfluidic channels, and the acceleration of data processing with GPUs is also useful in other fields.
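
The standard-processing chain mentioned above (background subtraction, resampling of the detected spectrum so that it is linear in wavenumber, and a Fourier transform to produce the A-scan) can be sketched as follows. This is a minimal NumPy illustration with invented array names and a simulated single reflector; a GPU implementation such as the one developed in the thesis would run the same steps with GPU FFT kernels.

    import numpy as np

    def spectrum_to_ascan(spectrum, wavelength, background):
        """Minimal FD-OCT standard processing: background removal, resampling to a
        uniform wavenumber grid, windowing, and an FFT to obtain the depth profile."""
        fringe = spectrum - background                     # remove the reference/DC background
        k = 2.0 * np.pi / wavelength                       # wavenumber at each detector pixel
        order = np.argsort(k)                              # np.interp needs increasing abscissae
        k_lin = np.linspace(k.min(), k.max(), k.size)      # uniform (linear-in-k) grid
        fringe_lin = np.interp(k_lin, k[order], fringe[order])
        fringe_lin *= np.hanning(k_lin.size)               # window to suppress side lobes
        ascan = np.abs(np.fft.ifft(fringe_lin))            # magnitude of the depth profile
        return ascan[: ascan.size // 2]                    # keep positive depths only

    # Simulated spectrometer frame: 2048 pixels over 800-880 nm with one reflector at ~150 um
    wavelength = np.linspace(800e-9, 880e-9, 2048)
    background = np.ones(wavelength.size)
    spectrum = background + 0.1 * np.cos(2 * (2.0 * np.pi / wavelength) * 150e-6)
    print(spectrum_to_ascan(spectrum, wavelength, background).argmax())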

Relevance: 20.00%

Publisher:

Abstract:

∗ This research is partially supported by the Bulgarian National Science Fund under contract MM-403/9

Relevance: 20.00%

Publisher:

Abstract:

Recognition of object contours in an image as sequences of digital straight segments and/or digital curve arcs is considered in this article. Definitions of digital straight segments and of digital curve arcs are proposed. Methods and programs to recognize the object contours are presented. The algorithm for recognizing digital straight segments is formulated in terms of growing pyramidal networks, taking into account the conceptual model of memory and identification (Rabinovich [4]).
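
Although the article's algorithm is formulated in terms of growing pyramidal networks, the underlying recognition question can be phrased as a feasibility test: a run of pixels, one per column, is a digital straight segment exactly when some Euclidean line y = a*x + b satisfies y_i <= a*x_i + b < y_i + 1 for every pixel. The sketch below checks this condition with a small linear program; it is a brute-force illustration under that digitisation convention, not the method of the article.

    import numpy as np
    from scipy.optimize import linprog

    def is_digital_straight_segment(pixels, eps=1e-9):
        """Feasibility test: does a line y = a*x + b exist whose column-wise
        floor-digitisation produces exactly these pixels?"""
        x = np.array([p[0] for p in pixels], dtype=float)
        y = np.array([p[1] for p in pixels], dtype=float)
        # Unknowns (a, b); constraints y_i <= a*x_i + b and a*x_i + b <= y_i + 1 - eps
        A_ub = np.vstack([np.column_stack([-x, -np.ones_like(x)]),
                          np.column_stack([x, np.ones_like(x)])])
        b_ub = np.concatenate([-y, y + 1.0 - eps])
        res = linprog(c=[0.0, 0.0], A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None), (None, None)])
        return res.status == 0                             # 0 means a feasible line exists

    print(is_digital_straight_segment([(0, 0), (1, 0), (2, 1), (3, 1), (4, 2)]))  # True
    print(is_digital_straight_segment([(0, 0), (1, 2), (2, 0)]))                  # False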

Relevance: 20.00%

Publisher:

Abstract:

Nanoindentation has become a common technique for measuring the hardness and elastic-plastic properties of materials, including coatings and thin films. In recent years, different nanoindenter instruments have been commercialised and used for this purpose. Each instrument is equipped with its own analysis software for the derivation of the hardness and reduced Young's modulus from the raw data. These data are mostly analysed through the Oliver and Pharr method. In all cases, the calibration of compliance and area function is mandatory. The present work illustrates and describes a calibration procedure and an approach to raw data analysis carried out for six different nanoindentation instruments through several round-robin experiments. Three different indenters were used (Berkovich, cube corner and spherical) and three standardised reference samples were chosen (hard fused quartz, soft polycarbonate, and sapphire). It was clearly shown that the use of these common procedures consistently limited the spread of the hardness and reduced Young's modulus data compared to the same measurements performed using instrument-specific procedures. The following recommendations for nanoindentation calibration must be followed: (a) use only sharp indenters, (b) set an upper cut-off value for the penetration depth below which measurements must be considered unreliable, (c) perform nanoindentation measurements with limited thermal drift, (d) ensure that the load-displacement curves are as smooth as possible, (e) perform stiffness measurements specific to each instrument/indenter couple, (f) use fused quartz (Fq) and sapphire (Sa) as calibration reference samples for stiffness and area function determination, (g) use a function, rather than a single value, for the stiffness and (h) adopt a unique protocol and software for raw data analysis in order to limit the data spread related to the instruments (i.e. the level of drift or noise, defects of a given probe) and to make the H and Er data intercomparable. © 2011 Elsevier Ltd.
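
For reference, the Oliver and Pharr analysis referred to above derives the contact depth from the maximum load, maximum depth and unloading stiffness, and then evaluates hardness and reduced modulus through the calibrated area function. The sketch below uses a truncated, purely illustrative area function and invented numbers; epsilon = 0.75 and beta = 1.05 are the usual Berkovich values.

    import numpy as np

    def oliver_pharr(P_max, S, h_max, area_coeffs, epsilon=0.75, beta=1.05):
        """Oliver-Pharr analysis of one indent.
        P_max : maximum load (mN); S : compliance-corrected unloading stiffness (mN/nm);
        h_max : maximum depth (nm); area_coeffs : (C0, C1) of a truncated area function
        A(hc) = C0*hc**2 + C1*hc in nm^2."""
        h_c = h_max - epsilon * P_max / S                      # contact depth
        A = area_coeffs[0] * h_c**2 + area_coeffs[1] * h_c     # projected contact area
        H = P_max / A                                          # hardness
        E_r = np.sqrt(np.pi) / (2.0 * beta) * S / np.sqrt(A)   # reduced Young's modulus
        return H, E_r

    # Invented fused-quartz-like indent: P_max = 10 mN, S = 0.12 mN/nm, h_max = 300 nm
    H, E_r = oliver_pharr(10.0, 0.12, 300.0, area_coeffs=(24.5, 500.0))
    print(f"H = {H * 1e6:.1f} GPa, Er = {E_r * 1e6:.1f} GPa")  # 1 mN/nm^2 = 1e6 GPa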

Relevance: 20.00%

Publisher:

Abstract:

* Work is partially supported by the Lithuanian State Science and Studies Foundation.

Relevance: 20.00%

Publisher:

Abstract:

This study extends previous research concerning intervertebral motion registration by means of 2D dynamic fluoroscopy to obtain a more comprehensive 3D description of vertebral kinematics. The problem of estimating the 3D rigid pose of a CT volume of a vertebra from its 2D X-ray fluoroscopy projection is addressed. 2D-3D registration is obtained by maximising a measure of similarity between Digitally Reconstructed Radiographs (obtained from the CT volume) and the real fluoroscopic projection. X-ray energy correction was performed. To assess the method, a calibration model was realised: a dry sheep vertebra was rigidly fixed to a frame of reference including metallic markers. Accurate measurement of the 3D orientation was obtained via single-camera calibration of the markers and held as the true 3D vertebra position; the vertebra's 3D pose was then estimated and the results compared. Error analysis revealed accuracy of the order of 0.1 degree for the rotation angles, of about 1 mm for displacements parallel to the fluoroscopic plane, and of the order of 10 mm for the orthogonal displacement. © 2010 P. Bifulco et al.
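
The registration loop amounts to searching over the six rigid-pose parameters for the pose whose digitally reconstructed radiograph best matches the fluoroscopic frame. The sketch below is a schematic stand-in: the DRR is a toy parallel projection rather than a proper ray-cast through the CT volume, normalised cross-correlation stands in for whatever similarity measure was actually used, and the synthetic volume and poses are invented.

    import numpy as np
    from scipy.ndimage import affine_transform
    from scipy.optimize import minimize

    def rotation_matrix(rx, ry, rz):
        """Rotation about the three volume axes (radians)."""
        cx, sx = np.cos(rx), np.sin(rx)
        cy, sy = np.cos(ry), np.sin(ry)
        cz, sz = np.cos(rz), np.sin(rz)
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    def render_drr(volume, pose):
        """Toy DRR: rigidly move the volume (translation t, rotations about its centre)
        and integrate along the first axis as a stand-in for X-ray projection."""
        t, angles = np.asarray(pose[:3], dtype=float), pose[3:]
        R = rotation_matrix(*angles)
        centre = (np.array(volume.shape) - 1) / 2.0
        offset = centre - R.T @ (centre + t)     # output-to-input mapping for affine_transform
        moved = affine_transform(volume, R.T, offset=offset, order=1)
        return moved.sum(axis=0)

    def ncc(a, b):
        """Normalised cross-correlation, used here as the similarity measure."""
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.mean(a * b))

    def register(volume, fluoro, pose0):
        """Estimate the rigid pose maximising DRR/fluoroscopy similarity."""
        cost = lambda pose: -ncc(render_drr(volume, pose), fluoro)
        return minimize(cost, pose0, method="Powell").x

    # Synthetic 'vertebra' and a known displacement; the translation along the projection
    # axis (first component) is weakly constrained by a single view, much as the study
    # found for the displacement orthogonal to the fluoroscopic plane.
    vol = np.zeros((32, 32, 32))
    vol[12:20, 10:22, 14:18] = 1.0
    target = render_drr(vol, [0.0, 2.0, -1.0, 0.0, 0.0, 0.05])
    print(register(vol, target, np.zeros(6)).round(2))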

Relevance: 20.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: Primary 14H55; Secondary 14H30, 14H40, 20M14.

Relevance: 20.00%

Publisher:

Abstract:

In the proof of Lemma 3.1 in [1] we need to show that we may take the two points p and q with p ≠ q such that $p + q + (b-2)g^1_2(C') \sim 2(q_1 + \cdots + q_{b-1})$, where $q_1, \dots, q_{b-1}$ are points of C′; however, in [1] we did not show that p ≠ q. Moreover, we had not been able to prove this using the method of our paper [1]. So we must add a further assumption to Lemma 3.1 and rewrite the statements of our paper after Lemma 3.1. The following is the correct version of Lemma 3.1 in [1], together with its proof.

Relevance: 20.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: Primary 14H55; Secondary 14H30, 14J26.

Relevance: 20.00%

Publisher:

Abstract:

Most pavement design procedures incorporate reliability to account for the effect of uncertainty and variability in the design inputs on predicted performance. The load and resistance factor design (LRFD) procedure, which delivers an economical section while treating the variability of each design input separately, has been recognised as an effective tool for incorporating reliability into design procedures. This paper presents a new reliability-based calibration, in LRFD format, for a mechanics-based fatigue cracking analysis framework. It employs a two-component reliability analysis methodology that combines a central composite design-based response surface approach with a first-order reliability method. The reliability calibration was based on a number of field pavement sections with well-documented performance histories and high-quality field and laboratory data. The effectiveness of the developed LRFD procedure was evaluated by performing pavement designs for various target reliabilities and design conditions. The results show excellent agreement between the target and actual reliabilities. Furthermore, the results make clear that more design features need to be included in the reliability calibration to minimise the deviation of the actual reliability from the target reliability.
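
To make the second reliability component concrete, the sketch below computes a Hasofer-Lind reliability index with a first-order reliability method, finding the point on the limit-state surface closest to the origin of standard normal space. The limit-state function, the two random variables and every number in it are invented for illustration; they are not the paper's calibrated fatigue cracking framework.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def limit_state(u):
        """Toy fatigue limit state g(u) = N_f - N_d in standard normal space u."""
        E = 3000.0 * np.exp(0.3 * u[0])                  # asphalt modulus (MPa), assumed lognormal
        h = 150.0 + 10.0 * u[1]                          # layer thickness (mm), assumed normal
        N_f = 4.0e5 * (E / 3000.0) ** 1.5 * (h / 150.0) ** 3   # invented fatigue-life model
        N_d = 2.0e5                                      # design traffic repetitions
        return N_f - N_d

    # FORM: the design point is the point on g(u) = 0 closest to the origin
    res = minimize(lambda u: float(u @ u), x0=np.array([-1.0, -1.0]),
                   constraints={"type": "eq", "fun": limit_state})
    beta = np.sqrt(res.fun)                              # Hasofer-Lind reliability index
    print(f"beta = {beta:.2f}, Pf = {norm.cdf(-beta):.3f}")  # first-order failure probability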

Relevance: 20.00%

Publisher:

Abstract:

Auditor decisions regarding the causes of accounting misstatements can affect audit effectiveness and efficiency. Specifically, overconfidence in one's decision can lead to an ineffective audit, whereas underconfidence in one's decision can lead to an inefficient audit. This dissertation explored the implications of providing various types of information cues to decision-makers performing an Analytical Procedure task and investigated the relationship between different types of evidence cues (confirming, disconfirming, redundant or non-redundant) and the reduction in calibration bias. Information was collected in a laboratory experiment with 45 accounting student participants. Research questions were analyzed using a 2 x 2 x 2 between-subjects and within-subjects analysis of covariance (ANCOVA).

Results indicated that presenting subjects with information cues dissimilar to the choice they made is an effective intervention for reducing the overconfidence commonly found in decision-making. In addition, other information characteristics, specifically non-redundant information, can help reduce a decision-maker's overconfidence/calibration bias for difficult (compared to easy) decision tasks.
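
The 2 x 2 x 2 ANCOVA described above can be expressed, for instance, with statsmodels; the factor names, covariate and simulated scores below are hypothetical placeholders for the dissertation's variables, and the design is balanced at 48 rows purely for convenience.

    from itertools import product
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    # Hypothetical balanced design: 6 participants in each of the 2 x 2 x 2 cells
    cells = list(product(["confirming", "disconfirming"],
                         ["redundant", "nonredundant"],
                         ["easy", "hard"]))
    df = pd.DataFrame([dict(zip(["cue", "redundancy", "difficulty"], c))
                       for c in cells for _ in range(6)])
    rng = np.random.default_rng(0)
    df["pretest"] = rng.normal(0.5, 0.1, len(df))        # covariate, e.g. a prior-knowledge score
    df["calibration"] = rng.normal(0.1, 0.05, len(df))   # overconfidence score (placeholder data)

    # Full factorial of the three factors plus the covariate
    model = ols("calibration ~ C(cue) * C(redundancy) * C(difficulty) + pretest", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))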

Relevance: 20.00%

Publisher:

Abstract:

Prices of U.S. Treasury securities vary over time and across maturities. When the market in Treasurys is sufficiently complete and frictionless, these prices may be modeled by a function of time and maturity. A cross-section of this function with time held fixed is called the yield curve; the aggregate of these cross-sections is the evolution of the yield curve. This dissertation studies aspects of this evolution.

There are two complementary approaches to the study of yield curve evolution here: the first is principal components analysis; the second is wavelet analysis. In both approaches the time and maturity variables are discretized. In principal components analysis the vectors of yield curve shifts are viewed as observations of a multivariate normal distribution. The resulting covariance matrix is diagonalized, and the resulting eigenvalues and eigenvectors (the principal components) are used to draw inferences about the yield curve evolution.

In wavelet analysis, the vectors of shifts are resolved into hierarchies of localized fundamental shifts (wavelets) that leave specified global properties invariant (average change and duration change). The hierarchies relate to the degree of localization, with movements restricted to a single maturity at the base and general movements at the apex. Second-generation wavelet techniques allow better adaptation of the model to economic observables. Statistically, the wavelet approach is inherently nonparametric, while the wavelets themselves are better adapted to describing a complete market.

Principal components analysis provides information on the dimension of the yield curve process. While there is no clear demarcation between operative factors and noise, the top six principal components pick up 99% of total interest rate variation 95% of the time. An economically justified basis for this process is hard to find; for example, a simple linear model will not suffice for the first principal component, and the shape of this component is nonstationary.

Wavelet analysis works more directly with yield curve observations than principal components analysis. In fact, the complete process from bond data to multiresolution is presented, including the dedicated Perl programs and the details of the portfolio metrics and specially adapted wavelet construction. The result is more robust statistics, which provide balance to the more fragile principal components analysis.
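
The principal-components step reduces to an eigendecomposition of the covariance matrix of the discretised yield-curve shifts. The sketch below runs it on simulated shifts; the maturity grid and the toy level/slope/curvature generator are assumptions, not the dissertation's Treasury data.

    import numpy as np

    rng = np.random.default_rng(0)
    maturities = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 20, 30])   # years (illustrative grid)

    # Toy daily shifts (basis points): level, slope and curvature movements plus noise
    n_days = 2500
    level = rng.normal(0, 5, (n_days, 1)) * np.ones_like(maturities)
    slope = rng.normal(0, 3, (n_days, 1)) * (maturities / 30.0)
    curvature = rng.normal(0, 2, (n_days, 1)) * np.exp(-((maturities - 5.0) ** 2) / 20.0)
    shifts = level + slope + curvature + rng.normal(0, 0.5, (n_days, maturities.size))

    # Principal components: eigenvectors of the covariance matrix of the shift vectors
    cov = np.cov(shifts, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]
    explained = eigval[order] / eigval.sum()
    print("variance explained by the first three components:", explained[:3].round(3))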

Relevance: 20.00%

Publisher:

Abstract:

Currently available commercial mimics contain varying amounts of either the actual explosive/drug or the chemical compound suspected to be of interest to biological detectors. As a result, there is significant interest in determining the dominant chemical odor signatures of the mimics, often referred to as pseudos, particularly when compared to the genuine contraband material. This dissertation discusses results obtained from the analysis of drug and explosive headspace related to the odor profiles recognized by trained detection canines. Analysis was performed using headspace solid phase microextraction in conjunction with gas chromatography-mass spectrometry (HS-SPME-GC-MS). Upon determination of specific odors, field trials were held using a combination of the target odors with COMPS. Piperonal was shown to be a dominant odor compound in the headspace of some ecstasy samples and a recognizable odor mimic by trained detection canines. It was also shown that detection canines could be imprinted on piperonal COMPS and correctly identify ecstasy samples at a threshold level of approximately 100 ng/s. Isosafrole and/or MDP-2-POH show potential as training aid mimics for non-piperonal-based MDMA. Acetic acid was shown to be dominant in the headspace of heroin samples and was verified as a dominant odor in commercial vinegar samples; however, no common secondary compound was detected in the headspace of either. Because of the similarities detected within the respective explosive classes, several compounds were chosen for explosive mimics. A single-based smokeless powder with a detectable level of 2,4-dinitrotoluene, a double-based smokeless powder with a detectable level of nitroglycerine, 2-ethyl-1-hexanol, DMNB, ethyl centralite and diphenylamine were shown to be accurate mimics for TNT-based explosives, NG-based explosives, plastic explosives, tagged explosives, and smokeless powders, respectively. The combination of these six odors represents a comprehensive explosive odor kit with positive results for imprinting on detection canines. As a proof of concept, the chemical compound PFTBA showed promise as a possible universal, non-target odor compound for comparison and calibration of detection canines and instrumentation. In a comparison study of shape versus vibration odor theory, the detection of d-methyl benzoate and methyl benzoate was explored using canine detectors. While the results did not overwhelmingly substantiate either theory, shape odor theory provided a better explanation of the canine and human subject responses.