Abstract:
In wheat, tillering and water-soluble carbohydrates (WSCs) in the stem are potential traits for adaptation to different environments and are of interest as targets for selective breeding. This study investigated the observation that a high stem WSC concentration (WSCc) is often related to low tillering. The proposition tested was that stem WSC accumulation is plant density dependent and could be an emergent property of tillering, whether driven by genotype or by environment. A small subset of recombinant inbred lines (RILs) contrasting for tillering was grown at different plant densities or on different sowing dates in multiple field experiments. Both tillering and WSCc were highly influenced by the environment, with a smaller, distinct genotypic component; the genotype × environment range covered 350–750 stems m⁻² and 25–210 mg g⁻¹ WSCc. Stem WSCc was inversely related to stem number m⁻², but genotypic rankings for stem WSCc persisted when RILs were compared at similar stem density. Low tillering–high WSCc RILs had similar leaf area index, larger individual leaves, and stems with larger internode cross-section and wall area when compared with high tillering–low WSCc RILs. The maximum number of stems per plant was positively associated with growth and relative growth rate per plant, tillering rate and duration, and also, in some treatments, with leaf appearance rate and final leaf number. A common threshold of the red:far-red ratio (0.39–0.44; standard error of the difference 0.055) coincided with the maximum stem number per plant across genotypes and plant densities, and could be effectively used in crop simulation modelling as a 'cut-off' rule for tillering. The relationship between tillering, WSCc, and their component traits, as well as the possible implications for crop simulation and breeding, is discussed.
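As a concrete illustration, a minimal sketch of how such a 'cut-off' rule might be encoded in a crop simulation model is given below. The canopy R:FR function, its parameters, and the LAI-based form are hypothetical placeholders; only the threshold range (0.39–0.44) comes from the abstract.

```python
import math

# Illustrative tillering cut-off rule based on the red:far-red (R:FR)
# threshold reported above (0.39-0.44). The canopy R:FR model below is a
# made-up stand-in, not the authors' implementation.

RFR_THRESHOLD = 0.42  # within the reported 0.39-0.44 range


def canopy_rfr(leaf_area_index: float, k: float = 0.72) -> float:
    """Crude exponential decline of R:FR with leaf area index (LAI).

    Real models derive R:FR from canopy radiative transfer; decaying from
    ~1.2 (open-sky R:FR) is purely illustrative.
    """
    return 1.2 * math.exp(-k * leaf_area_index)


def tillering_allowed(leaf_area_index: float) -> bool:
    """Cut-off rule: new tillers stop once canopy R:FR falls below threshold."""
    return canopy_rfr(leaf_area_index) >= RFR_THRESHOLD


# With these toy parameters, tillering ceases at roughly LAI ~ 1.4.
for lai in (0.5, 1.0, 1.5, 2.0):
    print(lai, round(canopy_rfr(lai), 2), tillering_allowed(lai))
```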
Abstract:
The pion photoproduction processes ¹⁴Ngs(γ, π⁺)¹⁴C and ¹⁴Ngs(γ, π⁻)¹⁴O have been studied in the threshold region. These processes provide an excellent tool for studying the corrections to soft pion theorems and the Kroll-Ruderman limit as applied to nuclear processes. The agreement with the available experimental data for these processes is better with the empirical wave functions, while the shell-model wave functions predict a much higher value. Detailed experimental studies of these reactions at threshold, it is shown, are expected to lead to a better understanding of the shell-model inputs and radial distributions in the 1p state.
Abstract:
A simple error-detecting and error-correcting procedure is described for nonbinary symbol words: the error position is located using the Hamming method, and the correct symbol is substituted using a modulo-check procedure.
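A minimal sketch of one way such a scheme can work, assuming decimal (mod-10) symbols, a Hamming(7,4)-style parity layout to locate the erroneous position, and the modulo syndrome itself to restore the symbol; this is a toy reconstruction of the idea, not the paper's exact procedure:

```python
Q = 10  # nonbinary alphabet: decimal digits


def encode(data):
    """Hamming(7,4)-style layout over Z_Q: data at positions 3,5,6,7;
    check symbols at 1,2,4 chosen so each parity group sums to 0 mod Q."""
    assert len(data) == 4
    code = [0] * 8  # index 0 unused; positions 1..7
    for pos, d in zip((3, 5, 6, 7), data):
        code[pos] = d % Q
    for c in (1, 2, 4):
        group = [code[p] for p in range(1, 8) if p & c and p != c]
        code[c] = (-sum(group)) % Q
    return code[1:]


def correct(code):
    """Locate a single corrupted symbol via the Hamming position trick and
    restore it by subtracting the modulo-Q syndrome value."""
    code = [0] + list(code)
    pos, err = 0, 0
    for c in (1, 2, 4):
        s = sum(code[p] for p in range(1, 8) if p & c) % Q
        if s:
            pos |= c
            err = s  # the same value for every group hit by the single error
    if pos:
        code[pos] = (code[pos] - err) % Q
    return code[1:]


# Example: corrupt one digit and recover it.
word = encode([3, 1, 4, 1])
bad = word[:]
bad[4] = (bad[4] + 7) % Q  # position 5 (1-indexed), error magnitude 7 mod 10
assert correct(bad) == word
```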
Abstract:
Digital elevation models (DEMs) have been an important topic in geography and surveying sciences for decades due to their geomorphological importance as the reference surface for gravitation-driven material flow, as well as their wide range of uses and applications. When a DEM is used in terrain analysis, for example in automatic drainage basin delineation, errors of the model accumulate in the analysis results. Investigation of this phenomenon is known as error propagation analysis, which has a direct influence on the decision-making process based on interpretations and applications of terrain analysis. Additionally, it may have an indirect influence on data acquisition and DEM generation. The focus of the thesis was on fine toposcale DEMs, which are typically represented in a 5–50 m grid and used at application scales of 1:10 000–1:50 000. The thesis presents a three-step framework for investigating error propagation in DEM-based terrain analysis. The framework includes methods for visualising the morphological gross errors of DEMs, exploring the statistical and spatial characteristics of the DEM error, making analytical and simulation-based error propagation analyses, and interpreting the error propagation analysis results. The DEM error model was built using geostatistical methods. The results show that appropriate and exhaustive reporting of the various aspects of fine toposcale DEM error is a complex task. This is due to the high number of outliers in the error distribution and to morphological gross errors, which are detectable with the presented visualisation methods. In addition, the use of a global characterisation of DEM error is a gross generalisation of reality due to the small extent of the areas in which the assumption of stationarity is not violated. This was shown using an exhaustive high-quality reference DEM based on airborne laser scanning and local semivariogram analysis. The error propagation analysis revealed that, as expected, an increase in the DEM vertical error will increase the error in surface derivatives. However, contrary to expectations, the spatial autocorrelation of the model appears to have varying effects on the error propagation analysis depending on the application. The use of a spatially uncorrelated DEM error model has been considered a 'worst-case scenario', but this opinion is now challenged because none of the DEM derivatives investigated in the study had maximum variation with spatially uncorrelated random error. Significant performance improvement was achieved in simulation-based error propagation analysis by applying process convolution in generating realisations of the DEM error model. In addition, a typology of uncertainty in drainage basin delineations is presented.
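A minimal Monte Carlo sketch of simulation-based error propagation of this kind, assuming a Gaussian-convolution error model (a simple instance of the process-convolution idea) and slope as the derivative of interest; the grid, error SD, and correlation range are invented illustration values, not the thesis's parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)
dem = np.add.outer(np.linspace(0, 50, 200), np.linspace(0, 30, 200))  # toy ramp DEM
cell = 10.0      # grid resolution in metres
sigma_z = 1.5    # DEM vertical error SD in metres
corr_cells = 5   # correlation range of the error field, in cells (0 = uncorrelated)


def slope_deg(z):
    """Slope in degrees from the DEM's finite-difference gradients."""
    gy, gx = np.gradient(z, cell)
    return np.degrees(np.arctan(np.hypot(gx, gy)))


def error_field():
    """One realisation of the error model: white noise, optionally smoothed by a
    Gaussian kernel (process convolution) and rescaled to the target SD."""
    noise = rng.normal(size=dem.shape)
    if corr_cells > 0:
        noise = gaussian_filter(noise, corr_cells)
        noise *= sigma_z / noise.std()
    else:
        noise *= sigma_z
    return noise


# Propagate: perturb the DEM many times and track the spread of the derivative.
slopes = np.stack([slope_deg(dem + error_field()) for _ in range(100)])
print("mean slope SD across the grid: %.3f deg" % slopes.std(axis=0).mean())
```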
Abstract:
This thesis addresses the modeling of financial time series, especially stock market returns and daily price ranges. Modeling data of this kind can be approached with so-called multiplicative error models (MEMs). These models nest several well-known time series models such as GARCH, ACD and CARR models. They are able to capture many well-established features of financial time series, including volatility clustering and leptokurtosis. In contrast to these phenomena, different kinds of asymmetries have received relatively little attention in the existing literature. In this thesis asymmetries arise from various sources. They are observed in both conditional and unconditional distributions, for variables with non-negative values and for variables that have values on the real line. In the multivariate context asymmetries can be observed in the marginal distributions as well as in the relationships between the variables modeled. New methods for all these cases are proposed. Chapter 2 considers GARCH models and the modeling of returns of two stock market indices. The chapter introduces the so-called generalized hyperbolic (GH) GARCH model to account for asymmetries in both the conditional and the unconditional distribution. In particular, two special cases of the GARCH-GH model which describe the data most accurately are proposed. They are found to improve the fit of the model when compared to symmetric GARCH models. The advantages of accounting for asymmetries are also observed through Value-at-Risk applications. Both theoretical and empirical contributions are provided in Chapter 3 of the thesis. In this chapter the so-called mixture conditional autoregressive range (MCARR) model is introduced, examined and applied to daily price ranges of the Hang Seng Index. The conditions for the strict and weak stationarity of the model, as well as an expression for the autocorrelation function, are obtained by writing the MCARR model as a first-order autoregressive process with random coefficients. The chapter also introduces the inverse gamma (IG) distribution to CARR models. The advantages of the CARR-IG and MCARR-IG specifications over conventional CARR models are found in the empirical application both in- and out-of-sample. Chapter 4 discusses the simultaneous modeling of absolute returns and daily price ranges. In this part of the thesis a vector multiplicative error model (VMEM) with an asymmetric Gumbel copula is found to provide substantial benefits over the existing VMEM models based on elliptical copulas. The proposed specification is able to capture the highly asymmetric dependence of the modeled variables, thereby improving the performance of the model considerably. The economic significance of the results obtained is established when the information content of the derived volatility forecasts is examined.
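For concreteness, a short simulation of a first-order MEM, x_t = μ_t ε_t with μ_t = ω + α x_{t−1} + β μ_{t−1}; with squared returns as x_t this nests GARCH(1,1), and with durations or ranges it gives ACD/CARR. The parameter values below are arbitrary illustration choices:

```python
import numpy as np

rng = np.random.default_rng(0)
omega, alpha, beta = 0.05, 0.10, 0.85   # stationarity needs alpha + beta < 1

T = 1000
x = np.empty(T)
mu = omega / (1 - alpha - beta)          # start at the unconditional mean
for t in range(T):
    eps = rng.exponential(1.0)           # unit-mean positive innovation
    x[t] = mu * eps                      # observed non-negative variable
    mu = omega + alpha * x[t] + beta * mu  # conditional mean recursion

print("sample mean %.3f vs implied mean %.3f"
      % (x.mean(), omega / (1 - alpha - beta)))
```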
Abstract:
A unate function can easily be identified on a Karnaugh map from the well-known property that it consists only of essential prime implicants which intersect at a common implicant. The additional property that the plot of a unate function F(x1, ..., xn) on a Karnaugh map should possess in order that F may also be 1-realizable (n ≤ 6) has been found. It has been shown that the 1-realizability of a unate function F corresponds to the 'compactness' of the plot of F. No resort to the inequalities is made, and no pre-processing such as positivizing and ordering of the given function is required.
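For reference, a brute-force sketch of the definitional unateness test (monotone non-decreasing or non-increasing in each variable) over a truth table; it is illustrative only and unrelated to the paper's Karnaugh-map criterion, which avoids such enumeration:

```python
from itertools import product


def is_unate(f, n):
    """Check whether the Boolean function f of n inputs is unate."""
    for i in range(n):
        pos = neg = True  # can f be positive/negative unate in variable i?
        for bits in product((0, 1), repeat=n):
            if bits[i] == 1:
                continue  # pair each assignment with its x_i = 1 partner once
            lo = f(bits)
            hi = f(bits[:i] + (1,) + bits[i + 1:])
            if lo > hi:
                pos = False  # a 1 -> 0 transition rules out positive unateness
            if lo < hi:
                neg = False  # a 0 -> 1 transition rules out negative unateness
        if not (pos or neg):
            return False
    return True


# Example: majority of three inputs is unate; XOR is not.
maj = lambda b: (b[0] & b[1]) | (b[1] & b[2]) | (b[0] & b[2])
xor = lambda b: b[0] ^ b[1]
print(is_unate(maj, 3), is_unate(xor, 2))  # True False
```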
Abstract:
To investigate the threshold level of defocus that induces a measurable objective change in accommodation response to a target at an intermediate distance.
Abstract:
The paper presents an innovative approach to modelling the causal relationships of human errors in rail crack incidents (RCI) from a managerial perspective. A Bayesian belief network is developed to model RCI by considering the human errors of designers, manufacturers, operators and maintainers (DMOM) and the causal relationships involved. A set of dependent variables whose combinations express the relevant functions performed by each DMOM participant is used to model the causal relationships. A total of 14 RCI on Hong Kong's mass transit railway (MTR) from 2008 to 2011 are used to illustrate the application of the model. Bayesian inference is used to conduct an importance analysis to assess the impact of the participants' errors. Sensitivity analysis is then employed to gauge the effect of an increased probability of occurrence of human errors on RCI. Finally, strategies for human error identification and mitigation of RCI are proposed. The identification in the case study of the maintainer's ability as the most important factor influencing the probability of RCI implies a priority need to strengthen the maintenance management of the MTR system, and that improving the inspection ability of the maintainer is likely to be an effective strategy for RCI risk mitigation.
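A toy noisy-OR sketch of how the DMOM participants' error probabilities might combine into an RCI probability, with a crude sensitivity bump per participant; the structure, probabilities and noisy-OR assumption are all illustrative, not the paper's actual network or conditional probability tables:

```python
# P(participant's error occurring) and P(RCI | only that error); both hypothetical.
errors = {
    "designer":     (0.05, 0.30),
    "manufacturer": (0.04, 0.25),
    "operator":     (0.08, 0.20),
    "maintainer":   (0.10, 0.50),
}


def p_rci(probs):
    """Noisy-OR: RCI is avoided only if every active cause fails to trigger it."""
    p_no_rci = 1.0
    for p_err, p_cause in probs.values():
        p_no_rci *= 1.0 - p_err * p_cause
    return 1.0 - p_no_rci


base = p_rci(errors)
# Crude sensitivity analysis: raise each error probability by 50% in turn.
for who, (p_err, p_cause) in errors.items():
    bumped = dict(errors, **{who: (min(1.0, 1.5 * p_err), p_cause)})
    print(f"{who:12s} base={base:.4f} bumped={p_rci(bumped):.4f}")
```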
Abstract:
The carrier type reversal (CTR) from p- to n-type in semiconducting chalcogenide glasses is an important and long-standing problem in glass science. Ge-Se glasses exhibit CTR when the metallic elements Bi and Pb are added. For example, bulk Ge42-xSe58Pbx glasses exhibit CTR around 8-9 at.% Pb. These glasses were prepared by the melt-quenching method. The glass transition temperature (Tg), the specific heat change between the liquid and glassy states (ΔCp) at Tg, and the non-reversing heat flow (ΔHnr) measured by modulated differential scanning calorimetry exhibit anomalies at 9 at.% Pb. These observed anomalies are interpreted on the basis of the nanoscale phase separation occurring in these glasses.
Abstract:
Visual acuities at the time of referral and on the day before surgery were compared in 124 patients operated on for cataract in Vaasa Central Hospital, Finland. Preoperative visual acuity and the occurrence of ocular and general disease were compared in samples of consecutive cataract extractions performed in 1982, 1985, 1990, 1995 and 2000 in two hospitals in the Vaasa region in Finland. The repeatability and standard deviation of random measurement error in visual acuity and refractive error determination in a clinical environment in cataractous, pseudophakic and healthy eyes were estimated by re-examining the visual acuity and refractive error of patients referred to cataract surgery or consultation by ophthalmic professionals. Altogether 99 eyes of 99 persons (41 cataractous, 36 pseudophakic and 22 healthy eyes) with a visual acuity range of Snellen 0.3 to 1.3 (0.52 to -0.11 logMAR) were examined. During an average waiting time of 13 months, visual acuity in the study eye declined from 0.68 logMAR to 0.96 logMAR (from 0.2 to 0.1 in Snellen decimal values). The average decrease in vision was 0.27 logMAR per year. In the fastest quartile, visual acuity change per year was 0.75 logMAR, and in the second fastest 0.29 logMAR; the third and fourth quartiles were virtually unaffected. From 1982 to 2000, the incidence of cataract surgery increased from 1.0 to 7.2 operations per 1000 inhabitants per year in the Vaasa region. The average preoperative visual acuity in the operated eye improved by 0.85 logMAR (in decimal values, from 0.03 to 0.2) and in the better eye by 0.27 logMAR (in decimal values, from 0.23 to 0.43) over this period. The proportion of patients profoundly visually handicapped (VA in the better eye <0.1) before the operation fell from 15% to 4%, and that of patients less profoundly visually handicapped (VA in the better eye 0.1 to <0.3) from 47% to 15%. The repeatability of visual acuity measurement, estimated as a coefficient of repeatability for all 99 eyes, was ±0.18 logMAR, and the standard deviation of measurement error was 0.06 logMAR. Eyes with the lowest visual acuity (0.3-0.45) had the largest variability, with coefficient of repeatability values of ±0.24 logMAR, and eyes with a visual acuity of 0.7 or better had the smallest, ±0.12 logMAR. The repeatability of refractive error measurement was studied in the same patient material as the repeatability of visual acuity. Differences between measurements 1 and 2 were calculated as three-dimensional vector values and spherical equivalents and expressed by coefficients of repeatability. Coefficients of repeatability for all eyes for the vertical, torsional and horizontal vectors were ±0.74 D, ±0.34 D and ±0.93 D, respectively, and for the spherical equivalent for all eyes ±0.74 D. Eyes with lower visual acuity (0.3-0.45) had larger variability in vector and spherical equivalent values (±1.14 D), but the difference between visual acuity groups was not statistically significant. The difference in the mean defocus equivalent between measurements 1 and 2 was, however, significantly greater in the lower visual acuity group. If a change of ±0.5 D (measured in defocus equivalents) is accepted as a basis for change of spectacles for eyes with good vision, the basis for eyes in the visual acuity range of 0.3-0.65 would be ±1 D. Differences in repeated visual acuity measurements are partly explained by errors in refractive error measurements.
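A minimal sketch of Bland-Altman-style repeatability statistics of the kind reported above, using one common definition (coefficient of repeatability = 1.96 × the SD of test-retest differences, measurement-error SD = that SD divided by √2); the logMAR values below are fabricated illustration data, not the study's measurements:

```python
import math

va_1 = [0.10, 0.30, 0.52, 0.00, 0.40, 0.22, 0.10, 0.48]  # first measurement
va_2 = [0.16, 0.24, 0.60, 0.02, 0.30, 0.20, 0.18, 0.40]  # repeat measurement

diffs = [a - b for a, b in zip(va_1, va_2)]
mean_d = sum(diffs) / len(diffs)
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (len(diffs) - 1))

cr = 1.96 * sd_d          # coefficient of repeatability, +/- logMAR
sw = sd_d / math.sqrt(2)  # SD of a single measurement's error
print(f"CR = +/-{cr:.2f} logMAR, measurement-error SD = {sw:.2f} logMAR")
```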
Abstract:
In this work, for the first time, we present a physically based analytical threshold voltage model for the omega-gate silicon nanowire transistor. The model is developed for a long-channel cylindrical body structure. The potential distribution at every point of the wire is derived from a closed-form solution of the two-dimensional Poisson's equation, which is then used to model the threshold voltage. The proposed model can be treated as a generalized model, valid for both surround-gate and semi-surround-gate cylindrical transistors. The accuracy of the proposed model is verified for different device geometries against results obtained from three-dimensional numerical device simulators, and close agreement is observed.
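For context, a standard starting point for such cylindrical-body models is the two-dimensional Poisson equation in cylindrical coordinates for a uniformly doped p-type body (a textbook form, not necessarily the paper's exact formulation):

\[
\frac{\partial^2 \phi(r,z)}{\partial r^2} + \frac{1}{r}\,\frac{\partial \phi(r,z)}{\partial r} + \frac{\partial^2 \phi(r,z)}{\partial z^2} = \frac{q N_A}{\varepsilon_{\mathrm{Si}}},
\]

where \(\phi\) is the electrostatic potential, \(N_A\) the acceptor doping density, and \(\varepsilon_{\mathrm{Si}}\) the silicon permittivity; the threshold voltage is then typically defined from the gate voltage at which the minimum channel potential reaches the value required for inversion.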
Abstract:
A major drawback in using bulk metallic glasses (BMGs) as structural materials is their extremely poor fatigue performance. One way to alleviate this problem is through the composite route, in which second phases are introduced into the glass to arrest crack growth. In this paper, the fatigue crack growth behavior of in situ reinforced BMGs with crystalline dendrites, which are tailored to impart significant ductility and toughness to the BMG, was investigated. Three composites, all with equal volume fraction of dendrite phases, were examined to assess the influence of chemical composition on the near-threshold fatigue crack growth characteristics. While the ductility is enhanced at the cost of yield strength vis-a-vis that of the fully amorphous BMG, the threshold stress intensity factor range for fatigue crack initiation in composites was found to be enhanced by more than 100%. Crack blunting and trapping by the dendritic phases and constraining of the shear bands within the interdendritic regions are the micromechanisms responsible for this enhanced fatigue crack growth resistance.
Abstract:
With technology scaling, vulnerability to soft errors in random logic is increasing. There is a need for on-line error detection and protection for logic gates even at sea level. The error checker is the key element of an on-line detection mechanism. We compare three different checkers for error detection from the point of view of area, power and false error detection rates. We find that the double sampling checker (used in Razor) is the simplest and most area- and power-efficient, but suffers from a very high false detection rate of 1.15 times the actual error rate. We also find that the alternative approaches of triple sampling and the integrate-and-sample (I&S) method can be designed to have zero false detection rates, but at increased area, power and implementation complexity. The triple sampling method has about 1.74 times the area and twice the power compared with the double sampling method, and also needs a complex clock generation scheme. The I&S method needs about 16% more power with 0.58 times the area of double sampling, but comes with more stringent implementation constraints as it requires detection of small voltage swings.
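A behavioral toy model of the double-sampling idea, in which the output is sampled at the clock edge and again after a short delay and an error is flagged on mismatch; the timing numbers and the glitch model are invented for illustration, not drawn from the paper:

```python
import random

random.seed(1)


def signal_value(t_sample, t_settle, final_bit):
    """Hypothetical node: undefined (random) before it settles, stable after."""
    return final_bit if t_sample >= t_settle else random.randint(0, 1)


CLK_EDGE, SHADOW_DELAY = 1.0, 0.3  # normalized clock period and sampling skew


def double_sample(t_settle, final_bit):
    """Flag an error when the main and delayed (shadow) samples disagree."""
    main = signal_value(CLK_EDGE, t_settle, final_bit)
    shadow = signal_value(CLK_EDGE + SHADOW_DELAY, t_settle, final_bit)
    return main != shadow


# A late-settling path (t_settle > CLK_EDGE) produces mismatches whenever the
# early sample caught a wrong value -- about half the trials under this crude
# random-glitch model; a fast path (t_settle < CLK_EDGE) never flags.
flags = sum(double_sample(1.2, 1) for _ in range(1000))
print("errors flagged for a late path:", flags, "/ 1000")
```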