939 results for Mean square error methods


Relevance: 100.00%

Abstract:

This study investigated the potential application of mid-infrared spectroscopy (MIR 4,000–900 cm−1) for the determination of milk coagulation properties (MCP), titratable acidity (TA), and pH in Brown Swiss milk samples (n = 1,064). Because MCP directly influence the efficiency of the cheese-making process, there is strong industrial interest in developing a rapid method for their assessment. Currently, the determination of MCP involves time-consuming laboratory-based measurements, and it is not feasible to carry out these measurements on the large numbers of milk samples associated with milk recording programs. Mid-infrared spectroscopy is an objective and nondestructive technique providing rapid real-time analysis of food compositional and quality parameters. Analysis of milk rennet coagulation time (RCT, min), curd firmness (a30, mm), TA (SH°/50 mL; SH° = Soxhlet-Henkel degree), and pH was carried out, and MIR data were recorded over the spectral range of 4,000 to 900 cm−1. Models were developed by partial least squares regression using untreated and pretreated spectra. The MCP, TA, and pH prediction models were improved by using the combined spectral ranges of 1,600 to 900 cm−1, 3,040 to 1,700 cm−1, and 4,000 to 3,470 cm−1. The root mean square errors of cross-validation for the developed models were 2.36 min (RCT, range 24.9 min), 6.86 mm (a30, range 58 mm), 0.25 SH°/50 mL (TA, range 3.58 SH°/50 mL), and 0.07 (pH, range 1.15). The most successfully predicted attributes were TA, RCT, and pH. The model for the prediction of TA provided approximate prediction (R2 = 0.66), whereas the predictive models developed for RCT and pH could discriminate between high and low values (R2 = 0.59 to 0.62). It was concluded that, although the models require further development to improve their accuracy before their application in industry, MIR spectroscopy has potential application for the assessment of RCT, TA, and pH during routine milk analysis in the dairy industry. The implementation of such models could be a means of improving MCP through phenotype-based selection programs and of amending milk payment systems to incorporate MCP into their payment criteria.
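
A minimal sketch of the modelling step described above, assuming scikit-learn: partial least squares (PLS) regression with 10-fold cross-validation and the root mean square error of cross-validation (RMSECV) as the figure of merit. The synthetic spectra, target, and component count are illustrative stand-ins, not the study's data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(1064, 500))       # stand-in for MIR absorbance spectra
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=1064)  # stand-in for RCT (min)

pls = PLSRegression(n_components=10)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()  # 10-fold cross-validated predictions
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))          # RMSECV, the statistic reported above
print(f"RMSECV = {rmsecv:.2f} min")
```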

Relevance: 100.00%

Abstract:

Aim: To determine the prevalence and nature of prescribing errors in general practice; to explore the causes, and to identify defences against error. Methods: 1) Systematic reviews; 2) Retrospective review of unique medication items prescribed over a 12 month period to a 2% sample of patients from 15 general practices in England; 3) Interviews with 34 prescribers regarding 70 potential errors, 15 root cause analyses, and six focus groups involving 46 primary health care team members. Results: The study involved examination of 6,048 unique prescription items for 1,777 patients. Prescribing or monitoring errors were detected for one in eight patients, involving around one in 20 of all prescription items. The vast majority of the errors were of mild to moderate severity, with one in 550 items being associated with a severe error. The following factors were associated with increased risk of prescribing or monitoring errors: male gender, age less than 15 years or greater than 64 years, number of unique medication items prescribed, and being prescribed preparations in the following therapeutic areas: cardiovascular, infections, malignant disease and immunosuppression, musculoskeletal, eye, ENT and skin. Prescribing or monitoring errors were not associated with the grade of GP or whether prescriptions were issued as acute or repeat items. A wide range of underlying causes of error were identified, relating to the prescriber, the patient, the team, the working environment, the task, the computer system and the primary/secondary care interface. Many defences against error were also identified, including strategies employed by individual prescribers and primary care teams, and making best use of health information technology. Conclusion: Prescribing errors in general practices are common, although severe errors are unusual. Many factors increase the risk of error. Strategies for reducing the prevalence of error should focus on GP training, continuing professional development for GPs, clinical governance, effective use of clinical computer systems, and improving safety systems within general practices and at the interface with secondary care.

Relevance: 100.00%

Abstract:

The interactions between shear-free turbulence in two regions (denoted + and −) on either side of a nearly flat horizontal interface are shown here to be controlled by several mechanisms, which depend on the magnitudes of the ratios of the densities, ρ+/ρ−, the kinematic viscosities of the fluids, ν+/ν−, and the root mean square (r.m.s.) velocities of the turbulence, u0+/u0−, above and below the interface. This study focuses on gas–liquid interfaces, so that ρ+/ρ− ≪ 1, and on cases where turbulence is generated either above or below the interface, so that u0+/u0− is either very large or very small. It is assumed that vertical buoyancy forces across the interface are much larger than internal forces, so that the interface is nearly flat, and coupling between turbulence on either side of the interface is determined by viscous stresses. A formal linearized rapid-distortion analysis with viscous effects is developed by extending the previous study by Hunt & Graham (J. Fluid Mech., vol. 84, 1978, pp. 209–235) of shear-free turbulence near rigid plane boundaries. The physical processes accounted for in our model include both the blocking effect of the interface on normal components of the turbulence and the viscous coupling of the horizontal field across thin interfacial viscous boundary layers. The horizontal divergence in the perturbation velocity field in the viscous layer drives weak inviscid irrotational velocity fluctuations outside the viscous boundary layers, in a mechanism analogous to Ekman pumping. The analysis shows the following. (i) The blocking effects are similar to those near rigid boundaries on each side of the interface, but through the action of the thin viscous layers above and below the interface, the horizontal and vertical velocity components differ from those near a rigid surface and are correlated or anti-correlated, respectively. (ii) Because of the growth of the viscous layers on either side of the interface, the ratio uI/u0, where uI is the r.m.s. of the interfacial velocity fluctuations and u0 the r.m.s. of the homogeneous turbulence far from the interface, does not vary with time. If the turbulence is driven in the lower layer, with ρ+/ρ− ≪ 1 and u0+/u0− ≪ 1, then uI/u0− ~ 1 when Re = u0−L−/ν− ≫ 1 and R = (ρ−/ρ+)(ν−/ν+)^(1/2) ≫ 1. If the turbulence is driven in the upper layer, with ρ+/ρ− ≪ 1 and u0+/u0− ≫ 1, then uI/u0+ ~ 1/(1 + R). (iii) Nonlinear effects become significant over periods greater than Lagrangian time scales. When turbulence is generated in the lower layer and the Reynolds number is high enough, motions in the upper viscous layer are turbulent. The horizontal vorticity tends to decrease, and the vertical vorticity of the eddies dominates their asymptotic structure. When turbulence is generated in the upper layer and the Reynolds number is less than about 10^6–10^7, the fluctuations in the viscous layer do not become turbulent. Nonlinear processes at the interface increase the ratio uI/u0+ for sheared or shear-free turbulence in the gas above its linear value of uI/u0+ ~ 1/(1 + R) to (ρ+/ρ−)^(1/2) ~ 1/30 for air–water interfaces. This estimate agrees with the direct numerical simulation results of Lombardi, De Angelis & Banerjee (Phys. Fluids, vol. 8, no. 6, 1996, pp. 1643–1665). Because the linear viscous–inertial coupling mechanism is still significant, the eddy motions on either side of the interface have a similar horizontal structure, although their vertical structure differs.
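
A quick numerical check of the quoted ratios, using standard air/water property values (our assumption, not values taken from the paper); "+" denotes the gas (air) and "−" the liquid (water).

```python
# Evaluate R and the interfacial velocity ratios for an air-water interface.
rho_plus, rho_minus = 1.2, 1000.0      # densities (kg/m^3), assumed standard values
nu_plus, nu_minus = 1.5e-5, 1.0e-6     # kinematic viscosities (m^2/s)

R = (rho_minus / rho_plus) * (nu_minus / nu_plus) ** 0.5
linear_ratio = 1.0 / (1.0 + R)                   # linear estimate of uI/u0+
nonlinear_ratio = (rho_plus / rho_minus) ** 0.5  # nonlinear estimate, ~1/30

print(f"R = {R:.0f}, linear uI/u0+ = {linear_ratio:.4f}, "
      f"nonlinear uI/u0+ = {nonlinear_ratio:.4f}")
```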

Relevance: 100.00%

Abstract:

Interference by siren background noise with speech transmitted from the radio equipment (3) of an emergency-service vehicle is reduced by apparatus (1) that subtracts (43) an estimate nk of the correlated siren-noise component from the contaminated signal yk supplied by the cab microphone (2). The estimate nk is computed by FIR (finite impulse response) filtering of a siren-reference signal xk supplied by a unit (4) from one or more microphones located on or near the siren, or from the electric waveform driving the siren. The filter coefficients wk are adjusted according to an LMS (least mean square) adaptive algorithm driven by the fed-back noise-reduced signal ek, so as to bring about iterative cancellation, with close frequency tracking, of the correlated siren-noise component.
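
A minimal sketch of such an LMS adaptive noise canceller, following the signal names in the abstract (yk, xk, nk, ek, wk); the synthetic siren, step size, and filter length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, T = 8000, 2.0
t = np.arange(int(fs * T)) / fs
speech = rng.normal(scale=0.1, size=t.size)          # stand-in for the speech signal
x = np.sin(2 * np.pi * (600 + 400 * np.sin(2 * np.pi * 0.5 * t)) * t)  # siren reference xk
y = speech + 0.8 * np.roll(x, 5)                     # cab mic yk: speech + delayed siren

L, mu = 16, 0.01                                     # FIR length and LMS step size (assumed)
w = np.zeros(L)                                      # adaptive filter coefficients wk
e = np.zeros_like(y)
for k in range(L, y.size):
    xk = x[k - L:k][::-1]            # most recent L reference samples
    nk = w @ xk                      # FIR estimate nk of the correlated siren noise
    e[k] = y[k] - nk                 # noise-reduced output ek, fed back to the LMS update
    w += 2 * mu * e[k] * xk          # LMS coefficient adjustment

print(f"siren power before: {np.mean((y - speech) ** 2):.3f}, "
      f"after: {np.mean((e - speech) ** 2):.3f}")
```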

Relevance: 100.00%

Abstract:

The ground-based Atmospheric Radiation Measurement Program (ARM) and NASA Aerosol Robotic Network (AERONET) routinely monitor clouds using zenith radiances at visible and near-infrared wavelengths. Using the transmittance calculated from such measurements, we have developed a new retrieval method for cloud effective droplet size and conducted extensive tests for non-precipitating liquid water clouds. The underlying principle is to combine a liquid-water-absorbing wavelength (i.e., 1640 nm) with a non-water-absorbing wavelength for acquiring information on cloud droplet size and optical depth. For simulated stratocumulus clouds with liquid water path less than 300 g m−2 and horizontal resolution of 201 m, the retrieval method underestimates the mean effective radius by 0.8 μm, with a root-mean-squared error of 1.7 μm and a relative deviation of 13%. For actual observations with a liquid water path less than 450 g m−2 at the ARM Oklahoma site during 2007–2008, our 1.5-min-averaged retrievals are generally larger by around 1 μm than those from combined ground-based cloud radar and microwave radiometer at a 5-min temporal resolution. We also compared our retrievals to those from combined shortwave flux and microwave observations for relatively homogeneous clouds, showing that the bias between these two retrieval sets is negligible, but the error of 2.6 μm and the relative deviation of 22% are larger than those found in our simulation case. Finally, the transmittance-based cloud effective droplet radii agree to better than 11% with satellite observations and have a negative bias of 1 μm. Overall, the retrieval method provides reasonable cloud effective radius estimates, which can enhance the cloud products of both ARM and AERONET.
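
A minimal sketch of the two-wavelength lookup-table inversion idea: pair a water-absorbing channel with a non-absorbing one to retrieve effective radius and optical depth together. The forward model below is a toy placeholder (our assumption), not the radiative-transfer code the study used.

```python
import numpy as np

def toy_transmittance(tau, r_e):
    """Toy forward model: [non-absorbing, absorbing] zenith transmittances."""
    t_vis = np.exp(-0.05 * tau)                 # depends mainly on optical depth
    t_nir = np.exp(-0.05 * tau - 0.02 * r_e)    # also dimmed by droplet absorption
    return np.array([t_vis, t_nir])

# Precompute a lookup table over the retrieval grid.
taus = np.linspace(1, 100, 200)                 # cloud optical depth
radii = np.linspace(2, 30, 150)                 # effective radius (micrometres)
table = np.array([[toy_transmittance(t, r) for r in radii] for t in taus])

def retrieve(obs):
    """Nearest-neighbour inversion: find the (tau, r_e) pair matching obs."""
    err = np.sum((table - obs) ** 2, axis=-1)
    i, j = np.unravel_index(np.argmin(err), err.shape)
    return taus[i], radii[j]

tau_hat, re_hat = retrieve(toy_transmittance(40.0, 10.0))
print(f"retrieved tau = {tau_hat:.1f}, r_e = {re_hat:.1f} um")
```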

Relevance: 100.00%

Abstract:

A simple, dynamically consistent model of mixing and transport in Rossby-wave critical layers is obtained from the well-known Stewartson–Warn–Warn (SWW) solution of Rossby-wave critical-layer theory. The SWW solution is thought to be a useful conceptual model of Rossby-wave breaking in the stratosphere. Chaotic advection in the model is a consequence of the interaction between a stationary and a transient Rossby wave. Mixing and transport are characterized separately with a number of quantitative diagnostics (e.g. mean-square dispersion, lobe dynamics, and spectral moments), and with particular emphasis on the dynamics of the tracer field itself. The parameter dependences of the diagnostics are examined: transport tends to increase monotonically with increasing perturbation amplitude whereas mixing does not. The robustness of the results is investigated by stochastically perturbing the transient-wave phase speed. The two-wave chaotic advection model is contrasted with a stochastic single-wave model. It is shown that the effects of chaotic advection cannot be captured by stochasticity alone.
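
A minimal sketch of the mean-square dispersion diagnostic, applied to particles advected by a toy two-wave streamfunction (a stationary plus a transient wave); this stand-in flow only illustrates the diagnostic and is not the SWW solution itself.

```python
import numpy as np

def velocity(x, y, t, a1=1.0, a2=0.3, c=0.5):
    # Streamfunction psi = a1*sin(x)*sin(y) + a2*sin(x - c*t)*sin(y);
    # u = -dpsi/dy, v = dpsi/dx (stationary wave + transient wave).
    u = -(a1 * np.sin(x) * np.cos(y) + a2 * np.sin(x - c * t) * np.cos(y))
    v = a1 * np.cos(x) * np.sin(y) + a2 * np.cos(x - c * t) * np.sin(y)
    return u, v

rng = np.random.default_rng(2)
x = rng.uniform(0, 2 * np.pi, 2000)               # tracer particle ensemble
y = rng.uniform(0.4, 0.6, 2000)
x0 = x.copy()
dt, nsteps = 0.01, 5000
for n in range(nsteps):                           # simple Euler advection
    u, v = velocity(x, y, n * dt)
    x, y = x + dt * u, y + dt * v

msd = np.mean((x - x0) ** 2)                      # mean-square zonal dispersion
print(f"mean-square dispersion after t = {nsteps * dt:.0f}: {msd:.2f}")
```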

Relevance: 100.00%

Abstract:

The Soil Moisture and Ocean Salinity (SMOS) satellite marks the commencement of dedicated global surface soil moisture missions and is the first mission to make passive microwave observations at L-band. On-orbit calibration is an essential part of the instrument calibration strategy, but on-board beam-filling targets are not practical for such large apertures. Therefore, areas to serve as vicarious calibration targets need to be identified. Such sites can only be identified through field experiments including both in situ and airborne measurements. For this purpose, two field experiments were performed in central Australia. Three areas are studied as follows: 1) Lake Eyre, a typically dry salt lake; 2) Wirrangula Hill, with sparse vegetation and a dense cover of surface rock; and 3) the Simpson Desert, characterized by dry sand dunes. Of these sites, only Wirrangula Hill and the Simpson Desert are found to be potentially suitable targets, as they have a spatial variation in brightness temperatures of <4 K under normal conditions. However, some limitations are observed for the Simpson Desert, where a bias of 15 K in vertical and 20 K in horizontal polarization exists between model predictions and observations, suggesting a lack of understanding of the underlying physics in this environment. Subsequent comparison with model predictions indicates a SMOS bias of 5 K in vertical and 11 K in horizontal polarization, and an unbiased root mean square difference of 10 K in both polarizations for Wirrangula Hill. Most importantly, the SMOS observations show that the brightness temperature evolution is dominated by regular seasonal patterns and that precipitation events have little impact.
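
A minimal sketch of the comparison statistics quoted above: bias and unbiased root mean square difference (ubRMSD) between predicted and observed brightness temperatures; the series here are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
tb_model = 250 + rng.normal(scale=8, size=365)          # modelled Tb (K), daily
tb_obs = tb_model - 5 + rng.normal(scale=9, size=365)   # observed Tb (K)

diff = tb_model - tb_obs
bias = diff.mean()                                 # systematic offset (K)
ubrmsd = np.sqrt(np.mean((diff - bias) ** 2))      # RMSD with the bias removed (K)
print(f"bias = {bias:.1f} K, ubRMSD = {ubrmsd:.1f} K")
```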

Relevance: 100.00%

Abstract:

A new sparse kernel density estimator is introduced. Our main contribution is a recursive algorithm that selects significant kernels one at a time using the minimum integrated square error (MISE) criterion. The proposed approach is simple to implement and the associated computational cost is very low. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with accuracy competitive with existing kernel density estimators.
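
A simplified sketch of the idea: kernels are selected one at a time, greedily minimising the integrated square error against the full Parzen-window estimate on a grid. This illustrates sparse greedy selection only; it is not the paper's recursive MISE algorithm, and the data, kernel width, and kernel count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
data = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(1, 1.0, 300)])
h = 0.3                                            # kernel width (assumed)
grid = np.linspace(-5, 5, 400)
dx = grid[1] - grid[0]

def gauss(x, c):                                   # Gaussian kernel centred at c
    return np.exp(-0.5 * ((x - c) / h) ** 2) / (h * np.sqrt(2 * np.pi))

full = gauss(grid[:, None], data[None, :]).mean(axis=1)   # full Parzen estimate

centres, sparse = [], np.zeros_like(grid)
for _ in range(8):                                 # select 8 significant kernels
    best, best_ise = None, np.inf
    for c in data:                                 # try each sample as a centre
        trial = (sparse * len(centres) + gauss(grid, c)) / (len(centres) + 1)
        ise = np.sum((trial - full) ** 2) * dx     # integrated square error
        if ise < best_ise:
            best, best_ise = c, ise
    centres.append(best)
    sparse = (sparse * (len(centres) - 1) + gauss(grid, best)) / len(centres)

print(f"selected centres: {np.round(np.sort(centres), 2)}")
```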

Relevance: 100.00%

Abstract:

We present a study of coronal mass ejections (CMEs) which impacted one of the STEREO spacecraft between January 2008 and early 2010. We focus our study on 20 CMEs which were observed remotely by the Heliospheric Imagers (HIs) onboard the other STEREO spacecraft up to large heliocentric distances. We compare the predictions of the Fixed-Φ and Harmonic Mean (HM) fitting methods, which differ only in the assumed geometry of the CME. It is possible to use these techniques to determine from remote-sensing observations the CME direction of propagation, arrival time, and final speed, which are compared to in-situ measurements. We find evidence that for large viewing angles, the HM fitting method predicts the CME direction better. However, this may be because only wide CMEs can be successfully observed when the CME propagates more than 100° from the observing spacecraft. Overall, eight CMEs originating from behind the limb as seen by one of the STEREO spacecraft can be tracked, and their arrival time at the other STEREO spacecraft can be successfully predicted. This includes CMEs, such as the events on 4 December 2009 and 9 April 2010, which were viewed 130° away from their direction of propagation. Therefore, we predict that some Earth-directed CMEs will be observed by the HIs until early 2013, when the separation between Earth and one of the STEREO spacecraft will be similar to the separation of the two STEREO spacecraft in 2009–2010.
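
A minimal sketch of the two geometries: both methods convert an HI elongation angle ε into a heliocentric distance, assuming either a point CME (Fixed-Φ) or a circle anchored at the Sun (HM). The conversion formulas are the standard ones from the literature (e.g. Lugaz et al., 2009), not code from this paper; φ is the angle between the CME direction and the observer-Sun line, and d the observer's heliocentric distance in AU.

```python
import numpy as np

def r_fixed_phi(eps_deg, phi_deg, d=1.0):
    """Fixed-Phi geometry: the CME is a point; R = d sin(eps) / sin(eps + phi)."""
    eps, phi = np.radians(eps_deg), np.radians(phi_deg)
    return d * np.sin(eps) / np.sin(eps + phi)

def r_harmonic_mean(eps_deg, phi_deg, d=1.0):
    """HM geometry: an expanding circle anchored at the Sun;
    R = 2 d sin(eps) / (1 + sin(eps + phi))."""
    eps, phi = np.radians(eps_deg), np.radians(phi_deg)
    return 2 * d * np.sin(eps) / (1 + np.sin(eps + phi))

for eps in (10, 30, 50):   # elongations along an HI track, phi = 60 deg (assumed)
    print(f"eps = {eps} deg: FP = {r_fixed_phi(eps, 60):.3f} AU, "
          f"HM = {r_harmonic_mean(eps, 60):.3f} AU")
```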

Relevance: 100.00%

Abstract:

Although difference-stationary (DS) and trend-stationary (TS) processes have been subject to considerable analysis, there are no direct comparisons in which each in turn is the data-generation process (DGP). We examine the consequences of incorrectly choosing between these models for forecasting, for both known and estimated parameters. Three sets of Monte Carlo simulations illustrate the analysis: they evaluate the biases in conventional standard errors when each model is mis-specified, compute the relative mean-square forecast errors of the two models under both DGPs, and investigate autocorrelated errors, which allow each model to better approximate the converse DGP. The outcomes are surprisingly different from established results.
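
A minimal Monte Carlo sketch in the spirit of the exercise: simulate each DGP, fit both models, and compare one-step-ahead mean-square forecast errors (MSFEs); all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
T, reps = 100, 2000

def forecast_errors(dgp):
    err_ds, err_ts = [], []
    for _ in range(reps):
        e = rng.normal(size=T + 1)
        if dgp == "DS":
            y = np.cumsum(0.1 + e)                 # random walk with drift
        else:
            y = 0.1 * np.arange(T + 1) + e         # linear trend plus noise
        # DS model: y_t = y_{t-1} + drift; estimate the drift from differences.
        drift = np.diff(y[:T]).mean()
        err_ds.append(y[T] - (y[T - 1] + drift))
        # TS model: y_t = a + b t; estimate a and b by OLS on the first T points.
        b, a = np.polyfit(np.arange(T), y[:T], 1)
        err_ts.append(y[T] - (a + b * T))
    return np.mean(np.square(err_ds)), np.mean(np.square(err_ts))

for dgp in ("DS", "TS"):
    msfe_ds, msfe_ts = forecast_errors(dgp)
    print(f"DGP = {dgp}: MSFE(DS model) = {msfe_ds:.3f}, MSFE(TS model) = {msfe_ts:.3f}")
```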

Relevance: 100.00%

Abstract:

This paper uses appropriately modified information criteria to select models from the GARCH family, which are subsequently used for predicting US dollar exchange rate return volatility. The out-of-sample forecast accuracy of models chosen in this manner compares favourably on mean absolute error grounds, although less favourably on mean squared error grounds, with that of forecasts generated by the commonly used GARCH(1, 1) model. An examination of the orders of the models selected by the criteria reveals that (1, 1) models are typically selected less than 20% of the time.
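
A minimal sketch of order selection within the GARCH family by information criterion, here using the standard BIC from the `arch` package (our choice) rather than the paper's modified criteria; the returns series is synthetic.

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(6)
returns = 100 * rng.normal(scale=0.01, size=1000)   # stand-in for FX returns (%)

best = None
for p in range(1, 4):                               # search small GARCH(p, q) orders
    for q in range(1, 4):
        res = arch_model(returns, vol="GARCH", p=p, q=q).fit(disp="off")
        if best is None or res.bic < best[0]:       # keep the lowest-BIC model
            best = (res.bic, p, q)

print(f"selected GARCH({best[1]}, {best[2]}) with BIC = {best[0]:.1f}")
```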

Relevance: 100.00%

Abstract:

This paper forecasts daily Sterling exchange rate returns using various naive, linear and non-linear univariate time-series models. The accuracy of the forecasts is evaluated using mean squared error and sign prediction criteria. These criteria show only a very modest improvement over forecasts generated by a random walk model. The Pesaran–Timmermann test and a comparison with forecasts generated artificially show that even the best models have no evidence of market timing ability.
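
A minimal sketch of the two evaluation criteria on a synthetic return series, comparing an AR(1) forecast with the random-walk (zero-return) forecast; the data and models are illustrative, and the Pesaran–Timmermann statistic itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)
r = 0.1 * rng.normal(size=1001)                 # stand-in for daily returns
y, y_lag = r[1:], r[:-1]

phi = (y_lag @ y) / (y_lag @ y_lag)             # OLS AR(1) slope (no intercept)
f_ar = phi * y_lag                              # AR(1) one-step-ahead forecasts
f_rw = np.zeros_like(y)                         # random walk: forecast zero return

for name, f in (("AR(1)", f_ar), ("random walk", f_rw)):
    mse = np.mean((y - f) ** 2)                 # mean squared error criterion
    sign = np.mean(np.sign(f) == np.sign(y))    # sign prediction rate
    # Note: the zero forecast carries no directional information, so its
    # sign prediction rate is trivially zero.
    print(f"{name}: MSE = {mse:.5f}, correct sign = {sign:.1%}")
```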

Relevance: 100.00%

Abstract:

We study the orientational ordering on the surface of a sphere using Monte Carlo and Brownian dynamics simulations of rods interacting with an anisotropic potential. We restrict the orientations to the local tangent plane of the spherical surface and fix the position of each rod to be at a discrete point on the spherical surface. On the surface of a sphere, orientational ordering cannot be perfectly nematic due to the inevitable presence of defects. We find that the ground state of four +1/2 point defects is stable across a broad range of temperatures. We investigate the transition from disordered to ordered phase by decreasing the temperature and find a very smooth transition. We use fluctuations of the local directors to estimate the Frank elastic constant on the surface of a sphere and compare it to the planar case. We observe subdiffusive behavior in the mean square displacement of the defect cores and estimate their diffusion constants.
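
A minimal sketch of estimating a diffusion constant from a mean square displacement (MSD) curve, as done for the defect cores; the trajectory here is a planar random walk rather than geodesic motion on a sphere, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
dt, nsteps, D_true = 0.01, 10000, 0.5
steps = rng.normal(scale=np.sqrt(2 * D_true * dt), size=(nsteps, 2))
traj = np.cumsum(steps, axis=0)                    # synthetic defect-core trajectory

lags = np.arange(1, 200)
msd = np.array([np.mean(np.sum((traj[l:] - traj[:-l]) ** 2, axis=1))
                for l in lags])                    # MSD as a function of lag time
# For normal diffusion in two dimensions, MSD(t) = 4 D t; fit the slope.
D_est = np.polyfit(lags * dt, msd, 1)[0] / 4
print(f"true D = {D_true}, estimated D = {D_est:.3f}")
```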

Relevance: 100.00%

Abstract:

Time series of global and regional mean Surface Air Temperature (SAT) anomalies are a common metric used to estimate recent climate change. Various techniques can be used to create these time series from meteorological station data. The degree of difference arising from using five different techniques, based on existing temperature anomaly dataset techniques, to estimate Arctic SAT anomalies over land and sea ice was investigated using reanalysis data as a testbed. Techniques which interpolated anomalies were found to result in smaller errors than non-interpolating techniques relative to the reanalysis reference. Kriging techniques provided the smallest errors in estimates of Arctic anomalies, and Simple Kriging was often the best kriging method in this study, especially over sea ice. A linear interpolation technique had, on average, Root Mean Square Errors (RMSEs) up to 0.55 K larger than the two kriging techniques tested. Non-interpolating techniques provided the least representative anomaly estimates. Nonetheless, they serve as useful checks for confirming whether estimates from interpolating techniques are reasonable. The interaction of meteorological station coverage with estimation techniques between 1850 and 2011 was simulated using an ensemble dataset comprising repeated individual years (1979–2011). All techniques were found to have larger RMSEs for earlier station coverages. This supports calls for increased data sharing and data rescue, especially in sparsely observed regions such as the Arctic.
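
A minimal sketch of Simple Kriging, the technique singled out above: station anomalies are interpolated to a target point with weights obtained from an assumed covariance model and a known (here zero) mean; the coordinates, values, and covariance length scale are placeholders.

```python
import numpy as np

def cov(d, sill=1.0, length=1000.0):
    """Assumed exponential covariance as a function of distance (km)."""
    return sill * np.exp(-d / length)

# Station coordinates (km, local plane) and SAT anomalies (K), zero mean assumed.
xy = np.array([[0, 0], [800, 100], [300, 900], [1200, 1100]], float)
z = np.array([1.2, 0.8, 1.6, 0.5])
target = np.array([500.0, 500.0])

d_ss = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)   # station-station distances
d_st = np.linalg.norm(xy - target, axis=-1)                 # station-target distances
weights = np.linalg.solve(cov(d_ss), cov(d_st))             # simple kriging weights
estimate = weights @ z                                      # interpolated anomaly (K)
print(f"kriged anomaly at target = {estimate:.2f} K")
```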

Relevance: 100.00%

Abstract:

Windstorms are a main feature of the European climate and exert strong socioeconomic impacts. Large effort has been made in developing and enhancing models to simulate the intensification of windstorms, the resulting footprints, and the associated impacts. Simulated wind or gust speeds usually differ from observations, as regional climate models have biases and cannot capture all local effects. An approach to adjust regional climate model (RCM) simulations of wind and wind gust toward observations is introduced. For this purpose, 100 windstorms are selected, and observations from 173 (111) test sites of the German Weather Service are considered for wind (gust) speed. Theoretical Weibull distributions are fitted to observed and simulated wind and gust speeds, and the distribution parameters of the observations are interpolated onto the RCM computational grid. A probability mapping approach is applied to relate the distributions and to correct the modeled footprints. Results are achieved not only for single test sites but for an area-wide regular grid. The approach is validated using root-mean-square errors on an event and site basis, documenting that the method is generally able to adjust the RCM output toward observations. For gust speeds, an improvement is reached on 88 of 100 events and at about 64% of the test sites. For wind, 99 of 100 improved events and about 84% improved sites are obtained. This gives confidence in the potential of the introduced approach for many applications, in particular those considering wind data.
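
A minimal sketch of the probability mapping step, assuming SciPy: Weibull distributions are fitted to simulated and observed gust speeds, and each simulated value x is mapped through F_obs^{-1}(F_model(x)); the data are synthetic and the zero location parameter is an assumption.

```python
import numpy as np
from scipy.stats import weibull_min

# Synthetic stand-ins for simulated and observed gust speeds (m/s).
gust_model = weibull_min.rvs(2.0, scale=10.0, size=5000, random_state=1)
gust_obs = weibull_min.rvs(1.8, scale=12.0, size=5000, random_state=2)

# Fit Weibull distributions with the location fixed at zero.
c_m, _, s_m = weibull_min.fit(gust_model, floc=0)   # model distribution
c_o, _, s_o = weibull_min.fit(gust_obs, floc=0)     # observed distribution

def adjust(x):
    """Map a simulated gust speed onto the observed Weibull distribution."""
    return weibull_min.ppf(weibull_min.cdf(x, c_m, scale=s_m), c_o, scale=s_o)

print(f"model gust 20.0 m/s -> adjusted {adjust(20.0):.1f} m/s")
```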