40 results for Error in essence


Relevance:

80.00%

Publisher:

Abstract:

A convenient method for measuring the clean colour (Y and Y-Z) and photostability Δ(Y-Z) of small samples of fleece wool (0.5 g) is described. Scoured wool samples are compressed to a constant density in disposable polymethyl methacrylate spectrophotometer cells, and the wool colour is measured using a standard textile laboratory reflectance spectrophotometer. Packing scoured wool into cells ensures that the irradiated fibre surface is robust and that individual fibres cannot move relative to one another during irradiation and measurement. A UVB (280–320 nm) source was used to ensure that all samples, regardless of initial yellowness, were yellowed following exposure and that photobleaching was avoided. An apparatus capable of irradiating up to 48 scoured wool samples in one batch is described. The precision of photostability measurements was assessed, and the relative error in Δ(Y-Z) was 5.7%. An initial study on 75 fleece wool samples spanning a wide range of initial yellowness showed a moderate linear correlation (R² = 0.68) between initial yellowness and Δ(Y-Z).
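
As a simple illustration of the quantities reported above, the following Python sketch computes the photostability index Δ(Y-Z) and its relative error from hypothetical pre- and post-irradiation (Y-Z) readings of replicate samples; the numbers are invented and only show how such a relative-error figure could be derived.

```python
import numpy as np

# hypothetical pre- and post-irradiation (Y - Z) values for replicate 0.5 g samples
yz_before = np.array([8.2, 8.4, 8.1, 8.3])
yz_after = np.array([11.9, 12.3, 11.7, 12.1])

delta_yz = yz_after - yz_before                            # photostability index, Δ(Y-Z)
relative_error = delta_yz.std(ddof=1) / delta_yz.mean()    # relative error of Δ(Y-Z) across replicates
print(delta_yz.mean(), relative_error)
```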

Relevance:

80.00%

Publisher:

Abstract:

In audio watermarking, robustness against pitch-scaling attacks is one of the most challenging problems. In this paper, we propose an algorithm based on traditional time-spread (TS) echo hiding audio watermarking to solve this problem. In TS echo hiding based watermarking, a pitch-scaling attack shifts the location of the pseudonoise (PN) sequence that appears in the cepstrum domain. Thus, the position of the peak that occurs after correlating with the PN-sequence changes by an unknown amount, and that causes the error. In the proposed scheme, we replace the PN-sequence with a unit-sample sequence and modify the decoding algorithm in such a way that it does not depend on a particular point in the cepstrum domain for extraction of the watermark. Moreover, the proposed algorithm is applied to stereo audio signals to further improve the robustness. Experimental results illustrate the effectiveness of the proposed algorithm against pitch-scaling attacks compared with existing methods. In addition, the proposed algorithm also gives better robustness against other conventional signal processing attacks.
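
As a rough illustration of the echo-hiding principle behind this scheme, the Python sketch below embeds an attenuated echo in an audio frame and recovers its delay as a peak in the real cepstrum. It shows conventional cepstrum-peak detection rather than the paper's modified unit-sample decoder, and the frame, delays and echo amplitude are purely illustrative.

```python
import numpy as np

def embed_echo(frame, delay, alpha=0.3):
    """Embed one echo by adding a delayed, attenuated copy of the frame;
    the chosen delay encodes one watermark bit."""
    kernel = np.zeros(delay + 1)
    kernel[0] = 1.0
    kernel[delay] = alpha
    return np.convolve(frame, kernel)[: len(frame)]

def detect_delay(frame, candidate_delays):
    """Estimate the embedded delay: the echo appears as a peak in the real
    cepstrum at the quefrency equal to the delay."""
    spectrum = np.fft.fft(frame)
    cepstrum = np.real(np.fft.ifft(np.log(np.abs(spectrum) + 1e-12)))
    return max(candidate_delays, key=lambda d: cepstrum[d])

# toy usage: a noise "audio" frame, bit 0 -> delay 100 samples, bit 1 -> delay 150 samples
rng = np.random.default_rng(0)
frame = rng.standard_normal(4096)
marked = embed_echo(frame, delay=150)
print(detect_delay(marked, candidate_delays=[100, 150]))  # expected: 150
```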

Relevance:

80.00%

Publisher:

Abstract:

Compressed sensing (CS) is a new information sampling theory for acquiring sparse or compressible data with far fewer measurements than those otherwise required by the Nyquist/Shannon counterpart. This is particularly important for imaging applications such as magnetic resonance imaging or astronomy. However, in the existing CS formulation, the use of the ℓ2 norm on the residuals is not particularly efficient when the noise is impulsive. This could lead to an increase in the upper bound of the recovery error. To address this problem, we consider a robust formulation for CS to suppress outliers in the residuals. We propose an iterative algorithm for solving the robust CS problem that exploits the power of existing CS solvers. We also show that the upper bound on the recovery error in the case of non-Gaussian noise is reduced, and then demonstrate the efficacy of the method through numerical studies.
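
One plausible way to realise such a robust formulation while reusing a standard CS solver is iterative reweighting of the residuals, sketched below in Python: an ISTA inner solver is run on a row-weighted problem, and Huber-style weights then down-weight measurements with large (impulsive) residuals. This is only a sketch under those assumptions, not the algorithm proposed in the paper, and all parameter values are illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def robust_cs_irls(A, y, lam=0.05, n_outer=10, n_inner=200, huber_delta=0.5):
    """Robust-CS sketch: iteratively reweighted least squares wrapped around an
    ISTA inner solver, so large (impulsive) residuals are progressively down-weighted."""
    m, n = A.shape
    x = np.zeros(n)
    w = np.ones(m)
    for _ in range(n_outer):
        Aw = A * w[:, None]                 # row-weighted measurement matrix
        yw = w * y
        L = np.linalg.norm(Aw, 2) ** 2      # Lipschitz constant of the smooth part
        for _ in range(n_inner):            # ISTA on the weighted least-squares + l1 problem
            grad = Aw.T @ (Aw @ x - yw)
            x = soft_threshold(x - grad / L, lam / L)
        r = np.abs(y - A @ x)
        w = np.sqrt(np.minimum(1.0, huber_delta / (r + 1e-12)))  # Huber-style residual weights
    return x

# toy usage: sparse signal recovered from measurements corrupted by a few gross outliers
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200); x_true[[3, 50, 120]] = [1.0, -2.0, 1.5]
y = A @ x_true
y[[5, 17]] += 10.0                          # impulsive noise on two measurements
x_hat = robust_cs_irls(A, y)
```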

Relevance:

80.00%

Publisher:

Abstract:

This exhibition followed an artist's residency at Marsden College, Wellington. During this residency, she engaged in a project of creating paintings from memories and imagination rather than from plein-air or other reference material. The paintings and drawings became layered images from different sources, culminating, in essence, in images about her identity and experience of the Place that is Wellington.

Relevance:

80.00%

Publisher:

Abstract:

This article presents an account of the role of Tim O'Reilly, both as an individual and as a corporate entity (O'Reilly Group), in the creation, spread and use of the concept of Web 2.0. It demonstrates that, whatever Web 2.0's current uses to describe variously the technologies, politics, commerce or social meaning of the Internet, it originated as a deliberately open signifier of novel and potential internet development in the mid-2000s. The article argues that O'Reilly has promoted the diversity of the term's meanings and uses - celebrating textual liberties - but has also emphasised the special role that O'Reilly plays in providing the authoritative definition of that term. In essence, O'Reilly profits from this 'control' of the idea of Web 2.0, but to enjoy that control O'Reilly must also allow differences in meaning. The article concludes by suggesting that Web 2.0 therefore signifies a new kind of economics that brings together freedom and control in a new way.

Relevance:

80.00%

Publisher:

Abstract:

There is currently no universally recommended and accepted method of data processing within the science of indirect calorimetry for either mixing chamber or breath-by-breath systems of expired gas analysis. Exercise physiologists were first surveyed to determine the methods used to process oxygen consumption (V̇O2) data and current attitudes to data processing within the science of indirect calorimetry. Breath-by-breath datasets obtained from indirect calorimetry during incremental exercise were then used to demonstrate the consequences of commonly used time, breath and digital filter post-acquisition data processing strategies. Assessment of the variability in breath-by-breath data was determined using multiple regression based on the independent variables ventilation (VE) and the expired gas fractions for oxygen and carbon dioxide, FEO2 and FECO2, respectively. Based on the results of the explanation of variance of the breath-by-breath V̇O2 data, methods of processing to remove variability were proposed for time-averaged, breath-averaged and digital filter applications. Among exercise physiologists, the strategy used to remove the variability in sequential V̇O2 measurements varied widely, and consisted of time averages (30 sec [38%], 60 sec [18%], 20 sec [11%], 15 sec [8%]), a moving average of five to 11 breaths (10%), and the middle five of seven breaths (7%). Most respondents indicated that they used multiple criteria to establish maximum V̇O2 (V̇O2max), including: the attainment of age-predicted maximum heart rate (HRmax) [53%], a respiratory exchange ratio (RER) >1.10 (49%) or RER >1.15 (27%), and a rating of perceived exertion (RPE) of >17, 18 or 19 (20%). The reasons stated for these strategies included their own beliefs (32%), what they were taught (26%), what they read in research articles (22%), tradition (13%) and the influence of their colleagues (7%). The combination of VE, FEO2 and FECO2 removed 96-98% of the breath-by-breath V̇O2 variability in incremental and steady-state exercise V̇O2 datasets, respectively. Reducing the residual error in V̇O2 datasets to 10% of the raw variability results from applying a 30-second time average, a 15-breath running average, or a digital filter with a low cut-off frequency of 0.04 Hz. Thus, we recommend that once these data processing strategies are used, the peak or maximal value becomes the highest processed datapoint. Exercise physiologists need to agree on, and continually refine through empirical research, a consistent process for analysing data from indirect calorimetry.
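
For concreteness, the Python sketch below implements the three processing strategies named above (a 30-second time average, a 15-breath running average, and a low-pass digital filter with a 0.04 Hz cut-off) for a breath-by-breath V̇O2 series; the filter order and the 1 Hz resampling grid are assumptions, and the peak value would then be taken as the highest processed datapoint.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def time_average(t, vo2, window_s=30.0):
    """Bin breath-by-breath VO2 (times t in seconds) into fixed windows, e.g. 30 s averages."""
    edges = np.arange(t.min(), t.max() + window_s, window_s)
    idx = np.digitize(t, edges) - 1
    return np.array([vo2[idx == i].mean()
                     for i in range(len(edges) - 1) if np.any(idx == i)])

def breath_average(vo2, n_breaths=15):
    """15-breath running (moving) average."""
    kernel = np.ones(n_breaths) / n_breaths
    return np.convolve(vo2, kernel, mode="valid")

def low_pass(t, vo2, cutoff_hz=0.04, fs_hz=1.0):
    """Resample to a uniform 1 Hz grid (assumption), then apply a 0.04 Hz
    low-pass Butterworth filter forwards and backwards."""
    t_uniform = np.arange(t.min(), t.max(), 1.0 / fs_hz)
    vo2_uniform = np.interp(t_uniform, t, vo2)
    b, a = butter(2, cutoff_hz, btype="low", fs=fs_hz)
    return filtfilt(b, a, vo2_uniform)

# after processing, the peak/maximal value is simply the highest processed datapoint,
# e.g. time_average(t, vo2).max()
```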

Relevance:

80.00%

Publisher:

Abstract:

Precise and reliable modelling of a polymerization reactor is challenging due to its complex reaction mechanism and non-linear nature. Researchers often make several assumptions when deriving theories and developing models for polymerization reactors. Therefore, traditionally available models suffer from high prediction error. In contrast, data-driven modelling techniques provide a powerful framework to describe the dynamic behaviour of a polymerization reactor. However, the prediction performance of a traditional neural network (NN) drops significantly in the presence of polymerization process disturbances. Moreover, the uncertainty effects caused by disturbances present in reactor operation can be properly quantified through the construction of prediction intervals (PIs) for model outputs. In this study, we propose and apply a PI-based neural network (PI-NN) model for the free radical polymerization system. This strategy avoids the assumptions made in traditional modelling techniques for the polymerization reactor system. The lower upper bound estimation (LUBE) method is used to develop the PI-NN model for uncertainty quantification. To further improve the quality of the model, a new method is proposed for aggregating the upper and lower bounds of the PIs obtained from individual PI-NN models. Simulation results reveal that the combined PI-NN performance is superior to that of the individual PI-NN models in terms of PI quality. In addition, the constructed PIs are able to properly quantify the effects of uncertainties in reactor operation, and these can later be used as part of the control process. © 2014 Taiwan Institute of Chemical Engineers.
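
For orientation, the sketch below shows a coverage-width-based cost of the kind typically minimised when training LUBE-style networks, together with one plausible aggregation of the lower and upper bounds produced by several PI-NN models (a simple weighted average). The particular cost variant, constants and aggregation rule used in the study are not reproduced here, so treat the names and constants as assumptions.

```python
import numpy as np

def cwc(y, lower, upper, mu=0.95, eta=50.0):
    """Coverage-width-based criterion often used as the LUBE training cost:
    penalises intervals that are narrow but fail to reach the nominal coverage mu."""
    covered = (y >= lower) & (y <= upper)
    picp = covered.mean()                        # PI coverage probability
    r = np.ptp(y) + 1e-12                        # target range, for normalisation
    pinaw = np.mean(upper - lower) / r           # normalised average PI width
    gamma = 1.0 if picp < mu else 0.0
    return pinaw * (1.0 + gamma * np.exp(eta * (mu - picp)))

def aggregate_pis(lowers, uppers, weights=None):
    """Hypothetical aggregation of PIs from several PI-NN models:
    a (weighted) average of the individual lower and upper bounds."""
    lowers, uppers = np.asarray(lowers, float), np.asarray(uppers, float)
    w = np.ones(len(lowers)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    return w @ lowers, w @ uppers
```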

Relevance:

80.00%

Publisher:

Abstract:

Some illustrative examples are provided to identify the ineffective and unrealistic characteristics of existing approaches to solving fuzzy linear programming (FLP) problems (with single or multiple objectives). We point out the error in existing methods concerning the ranking of fuzzy numbers and then suggest an effective method to solve the FLP. Based on the consistent centroid-based ranking of fuzzy numbers, the FLP problems are transformed into non-fuzzy single- (or multiple-) objective linear programming problems. Solutions of the FLP are then those of crisp single- or multiple-objective programming problems, which can be obtained by conventional methods.
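
A minimal sketch of centroid-based ranking, assuming triangular fuzzy numbers, is given below: each fuzzy coefficient is defuzzified to the x-coordinate of its centroid, and the resulting crisp values can then feed a conventional linear programme. The centroid formula used in the paper may differ; this is only an illustration.

```python
def centroid(tfn):
    """x-coordinate of the centroid of a triangular fuzzy number (a, b, c)."""
    a, b, c = tfn
    return (a + b + c) / 3.0

def rank_fuzzy(numbers):
    """Order triangular fuzzy numbers by their centroid value (larger ranks higher)."""
    return sorted(numbers, key=centroid, reverse=True)

# toy usage: defuzzify fuzzy objective coefficients before solving a crisp LP
coeffs = [(2, 3, 4), (1, 4, 5), (0, 2, 7)]
print([centroid(c) for c in coeffs])   # [3.0, 3.33..., 3.0]
print(rank_fuzzy(coeffs))
```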

Relevance:

80.00%

Publisher:

Abstract:

Heterogeneous deformation developed during "static recrystallization (SRX) tests" poses serious questions about the validity of the conventional methods used to measure the softening fraction. The challenges in measuring SRX and verifying a proposed kinetic model of SRX are discussed, and a least-squares technique is utilized to quantify the error in a proposed SRX kinetic model. This technique relies on an existing computational-experimental multi-layer formulation to account for the heterogeneity during the post-interruption hot torsion deformation. The kinetics of static recrystallization for a type 304 austenitic stainless steel deformed at 900 °C and a strain rate of 0.01 s-1 are characterized by implementing the formulation. By minimizing the error between the measured and calculated torque-twist data, the parameters of the kinetic model and the flow behavior during the second hit are evaluated and compared with those obtained using a conventional technique. Typical static recrystallization distributions in the test sample will be presented. It has been found that the major differences between the results of the conventional and the presented techniques are due to heterogeneous recrystallization in the cylindrical core of the specimen, where the material is still only partially recrystallized at the onset of the second-hit deformation. For the investigated experimental conditions, the core is confined to the first two-thirds of the gauge radius when the holding time is shorter than 50 s and the maximum pre-strain is about 0.5.
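
To illustrate the least-squares idea on a standard kinetic form, the sketch below fits JMAK/Avrami parameters (t50 and the exponent n) to hypothetical softening-fraction data. The actual study minimises the error between measured and calculated torque-twist data through a multi-layer formulation; the data and functional form here are assumptions chosen only to show the fitting step.

```python
import numpy as np
from scipy.optimize import curve_fit

def avrami(t, t50, n):
    """JMAK/Avrami form for recrystallized fraction versus holding time."""
    return 1.0 - np.exp(-0.693 * (t / t50) ** n)

# hypothetical data: holding time (s) and measured softening fraction
t_hold = np.array([1.0, 3.0, 10.0, 30.0, 100.0])
x_meas = np.array([0.05, 0.15, 0.45, 0.80, 0.97])

(t50_fit, n_fit), _ = curve_fit(avrami, t_hold, x_meas, p0=(10.0, 1.0))
sq_error = np.sum((avrami(t_hold, t50_fit, n_fit) - x_meas) ** 2)  # least-squares error
print(t50_fit, n_fit, sq_error)
```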

Relevance:

80.00%

Publisher:

Abstract:

An analytic solution to the multi-target Bayes recursion known as the δ-Generalized Labeled Multi-Bernoulli (δ-GLMB) filter has recently been proposed by Vo and Vo in [“Labeled Random Finite Sets and Multi-Object Conjugate Priors,” IEEE Trans. Signal Process., vol. 61, no. 13, pp. 3460-3475, 2013]. As a sequel to that paper, the present paper details efficient implementations of the δ-GLMB multi-target tracking filter. Each iteration of this filter involves an update operation and a prediction operation, both of which result in weighted sums of multi-target exponentials with an intractably large number of terms. To truncate these sums, the ranked assignment and K-shortest paths algorithms are used in the update and prediction, respectively, to determine the most significant terms without exhaustively computing all of the terms. In addition, using tools derived from the same framework, such as probability hypothesis density filtering, we present inexpensive (relative to the δ-GLMB filter) look-ahead strategies to reduce the number of computations. A characterization of the L1-error in the multi-target density arising from the truncation is presented.
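
The truncation step itself can be pictured with the small Python sketch below, which keeps the highest-weight terms of a weighted mixture, renormalises, and reports the discarded weight mass (the quantity that drives the L1 truncation error). In the actual filter the significant terms are generated by ranked assignment and K-shortest paths without enumerating all weights; this sketch only illustrates the truncate-and-renormalise step on toy weights.

```python
import numpy as np

def truncate_components(weights, max_components):
    """Keep the max_components highest-weight terms of a weighted mixture and
    renormalize; return kept indices, new weights and the discarded weight mass."""
    weights = np.asarray(weights, float)
    order = np.argsort(weights)[::-1]
    keep = order[:max_components]
    discarded_mass = weights[order[max_components:]].sum()
    new_weights = weights[keep] / weights[keep].sum()
    return keep, new_weights, discarded_mass

# toy usage: 1000 hypothesis weights, keep the 100 most significant
rng = np.random.default_rng(1)
w = rng.random(1000)
w /= w.sum()
kept, w_new, lost = truncate_components(w, 100)
print(len(kept), lost)   # the discarded mass relates to the L1 truncation error
```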