81 results for Spectral Line Broadening (SLB) Model


Relevance:

30.00%

Publisher:

Abstract:

The tensile strength of 576 pieces of white line horn collected over 6 mo from 14 dairy cows restricted to parity 1 or 2 was tested. None of the cows had ever been lame. Seven cows were randomly assigned to receive 20 mg/d biotin supplementation, and 7 were not supplemented. Hoof horn samples were taken from zones 2 and 3 (the more proximal and distal sites of the abaxial white line) of the medial and lateral claws of both hind feet on d 1 and on 5 further occasions over 6 mo. The samples were analyzed at 100% water saturation. Hoof slivers were notched to ensure that tensile strength was measured specifically across the white line region. The tensile stress at failure was measured in MPa and was adjusted for the cross-sectional area at the notch site. Data were analyzed in a multilevel model, which accounted for the repeated measures within cows; all other variables were entered as fixed effects. In the final model, there was considerable variation in strength over time. Tensile strength was significantly higher in medial than in lateral claws, and zone 2 was significantly stronger than zone 3. Where the white line was visibly damaged, the tensile strength was low. Biotin supplementation did not affect the tensile strength of the white line. Results of this study indicate that damage to the white line impairs its tensile strength and that, in horn with no visible abnormality, the white line is weaker in the lateral hind claw than in the medial claw and in zone 3 compared with zone 2. The biomechanical strength was lowest at zone 3 of the lateral hind claw, which is the most common site of white line disease lameness in cattle.

Relevance:

30.00%

Publisher:

Abstract:

Time-frequency and temporal analyses have been widely used in biomedical signal processing. These methods represent important characteristics of a signal in both the time and frequency domains. In this way, essential features of the signal can be viewed and analysed in order to understand or model the physiological system. Historically, Fourier spectral analyses have provided a general method for examining global energy-frequency distributions. However, an assumption inherent in these methods is the stationarity of the signal. As a result, Fourier methods are generally not an appropriate approach for investigating signals with transient components. This work presents the application of a new signal processing technique, empirical mode decomposition and the Hilbert spectrum, to the analysis of electromyographic signals. The results show that this method may provide not only an increase in spectral resolution but also insight into the underlying process of muscle contraction.
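As a rough illustration of the technique, the sketch below decomposes a surrogate non-stationary signal by empirical mode decomposition and derives instantaneous amplitude and frequency from the Hilbert transform of each mode. It assumes the PyEMD package (pip install EMD-signal); the synthetic signal merely stands in for a real EMG recording.

```python
# Minimal EMD + Hilbert spectral analysis sketch (assumes PyEMD).
import numpy as np
from PyEMD import EMD              # empirical mode decomposition
from scipy.signal import hilbert   # analytic signal

fs = 1000.0                                # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
# Surrogate non-stationary signal standing in for a raw EMG recording.
x = np.sin(2 * np.pi * 50 * t) * np.exp(-t) + 0.5 * np.random.randn(t.size)

imfs = EMD().emd(x, t)                     # intrinsic mode functions

for k, imf in enumerate(imfs):
    analytic = hilbert(imf)
    amplitude = np.abs(analytic)                    # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency, Hz
    print(f"IMF {k}: mean instantaneous frequency {inst_freq.mean():.1f} Hz")
```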

Relevance:

30.00%

Publisher:

Abstract:

We model the large-scale fading of wireless THz communication links deployed in a metropolitan area, taking into account reception through direct line of sight, ground or wall reflection, and diffraction. The movement of the receiver in three dimensions is modelled by an autonomous linear dynamic system in state-space form, whereas the geometric relations involved in the attenuation and multi-path propagation of the electric field are described by a static non-linear mapping. A subspace algorithm, in conjunction with polynomial regression, is used to identify a Wiener model from time-domain measurements of the field intensity.
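A minimal sketch of the two-stage Wiener identification idea, on synthetic data: a linear dynamic block is fitted first, and a static polynomial nonlinearity is then regressed onto its simulated output. The subspace step is replaced here by a simple least-squares fit, and all orders, coefficients and noise levels are illustrative assumptions, not values from the measurements.

```python
# Wiener model identification sketch: linear block, then static polynomial.
import numpy as np

rng = np.random.default_rng(0)
N = 2000
u = rng.standard_normal(N)                  # excitation signal

# "True" system: first-order linear filter followed by a cubic nonlinearity.
x = np.zeros(N)
for k in range(1, N):
    x[k] = 0.9 * x[k - 1] + 0.5 * u[k - 1]
y = x + 0.3 * x**3 + 0.01 * rng.standard_normal(N)

# Step 1: fit the linear block by least squares (a subspace algorithm would
# be used in practice; y serves as a proxy for the unobserved linear output).
Phi = np.column_stack([y[:-1], u[:-1]])
a, b = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]

# Step 2: simulate the identified linear block, then fit the static
# nonlinearity by polynomial regression from its output to the measured y.
xhat = np.zeros(N)
for k in range(1, N):
    xhat[k] = a * xhat[k - 1] + b * u[k - 1]
coeffs = np.polyfit(xhat, y, deg=3)
print("linear block:", a, b, " polynomial coefficients:", coeffs)
```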

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we present an on-line estimation algorithm for an uncertain time delay in a continuous system, based on observed input-output data subject to observational noise. A first-order Padé approximation is used to approximate the time delay. At each time step, the algorithm combines the well-known Kalman filter and the recursive instrumental variable least squares (RIVLS) algorithm in cascade form. The instrumental variable least squares algorithm is used in order to achieve consistency of the delay parameter estimate, since an error-in-variables model is involved. An illustrative example demonstrates the efficacy of the proposed approach.
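For reference, the first-order Padé approximation replaces the delay term exp(-s*tau) with (1 - s*tau/2)/(1 + s*tau/2). The sketch below, with an assumed example delay of 0.5 s, compares the approximant's step response against the true delayed step using SciPy; the characteristic initial undershoot of the approximation shows up in the error.

```python
# First-order Pade approximation of a pure delay:
#   exp(-s*tau) ~ (1 - s*tau/2) / (1 + s*tau/2)
import numpy as np
from scipy.signal import TransferFunction, step

tau = 0.5                                              # assumed delay, s
pade = TransferFunction([-tau / 2, 1], [tau / 2, 1])   # coefficients in descending powers of s
t, y = step(pade, T=np.linspace(0, 3, 300))

y_true = (t >= tau).astype(float)    # the exact delayed unit step
print("max approximation error:", np.max(np.abs(y - y_true)))
```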

Relevance:

30.00%

Publisher:

Abstract:

The identification of non-linear systems using only observed finite datasets has become a mature research area over the last two decades. A class of linear-in-the-parameters models with universal approximation capabilities has been intensively studied and widely used, owing to the availability of many linear learning algorithms and their inherent convergence conditions. This article presents a systematic overview of basic research on model selection approaches for linear-in-the-parameters models. One of the fundamental problems in non-linear system identification is to find the minimal model with the best generalisation performance from observational data alone. The important concepts for achieving good model generalisation used in various non-linear system-identification algorithms are first reviewed, including Bayesian parameter regularisation and model selection criteria based on cross-validation and experimental design. A significant advance in machine learning has been the development of the support vector machine as a means of identifying kernel models based on the structural risk minimisation principle. Developments in convex-optimisation-based model construction algorithms, including support vector regression, are outlined. Input selection algorithms and on-line system identification algorithms are also included in this review. Finally, some industrial applications of non-linear models are discussed.
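As a small concrete instance of the kernel-model construction surveyed here, the sketch below fits a support vector regression model, which embodies the structural risk minimisation principle, to noisy toy data using scikit-learn; the data, kernel choice and hyperparameters are assumptions made only for illustration.

```python
# Support vector regression on toy data (assumes scikit-learn).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = np.linspace(-3, 3, 200)[:, None]                 # inputs, shape (200, 1)
y = np.sinc(X).ravel() + 0.05 * rng.standard_normal(200)

model = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X, y)
# Sparsity: only the support vectors enter the kernel expansion.
print("support vectors used:", model.support_.size, "of", X.shape[0])
```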

Relevance:

30.00%

Publisher:

Abstract:

We introduce a classification-based approach to finding occluding texture boundaries. The classifier is composed of a set of weak learners that operate on discriminative image-intensity features, which are defined on small patches and are fast to compute. A database designed to simulate digitized occluding contours of textured objects in natural images is used to train the weak learners. The trained classifier score is then used to obtain a probabilistic model for the presence of texture transitions, which can readily be used for line-search texture boundary detection in the direction normal to an initial boundary estimate. The method is fast and therefore suitable for real-time and interactive applications. It works as a robust estimator, requires only a ribbon-like search region, and can handle complex texture structures without requiring a large number of observations. We demonstrate results in the context both of interactive 2D delineation and of fast 3D tracking, and compare the method's performance with other existing methods for line-search boundary detection.
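The sketch below isolates the line-search step only: given some classifier-derived score for the presence of a texture transition, candidate points along the normal of an initial boundary estimate are evaluated and the most probable crossing is kept. The score function and all names here are hypothetical stand-ins, not the paper's trained classifier.

```python
# Line-search boundary detection along the normal of an initial estimate.
import numpy as np

def line_search_boundary(score, p0, normal, half_width=10):
    """Return the point within +/- half_width pixels of p0 along `normal`
    that maximises the transition probability `score(point)`."""
    offsets = np.arange(-half_width, half_width + 1)
    candidates = p0[None, :] + offsets[:, None] * normal[None, :]
    probs = np.array([score(c) for c in candidates])
    return candidates[np.argmax(probs)]

# Toy usage: a synthetic score peaking 3 px outside the initial estimate.
p0 = np.array([50.0, 50.0])
normal = np.array([1.0, 0.0])
score = lambda p: np.exp(-0.5 * (p[0] - 53.0) ** 2)
print(line_search_boundary(score, p0, normal))       # -> [53. 50.]
```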

Relevance:

30.00%

Publisher:

Abstract:

This paper addresses the problem of tracking line segments in on-line handwriting obtained through a digitizer tablet. The approach is based on Kalman filtering to model linear portions of on-line handwriting, particularly handwritten numerals, and to detect abrupt changes in writing direction that indicate a model change. It uses a Kalman filter framework constrained by a normalized line equation, where quadratic terms are linearized through a first-order Taylor expansion. The modeling is carried out under the assumption that the state is deterministic and time-invariant, while the detection relies on a double-thresholding mechanism that tests for violations of this assumption. The first threshold is based on the kinetics of the pen trajectory; the second takes into account the jump in angle between the previously observed writing direction and the current one. The proposed method enables real-time processing. To illustrate the methodology, some results obtained from handwritten numerals are presented.
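A condensed sketch of the detection idea follows: the local writing direction is tracked by a scalar Kalman filter with a deterministic, time-invariant state model, and a model change is declared when the innovation jumps. For brevity this collapses the constrained line-equation filter and the double-thresholding test into a single angle-jump threshold; the noise level and threshold are assumed values.

```python
# Direction tracking with innovation-based change detection.
import numpy as np

def track_direction(angles, r=0.05, jump_thresh=0.6):
    theta, p = angles[0], 1.0            # state estimate and its variance
    breaks = []
    for k, z in enumerate(angles[1:], start=1):
        innovation = z - theta
        if abs(innovation) > jump_thresh:    # jump in angle: new line segment
            breaks.append(k)
            theta, p = z, 1.0                # restart the model
            continue
        gain = p / (p + r)                   # Kalman gain, static state model
        theta += gain * innovation
        p *= 1.0 - gain
    return breaks

# Toy stroke: straight segment, then an abrupt 90-degree turn at sample 50.
angles = np.r_[np.zeros(50), np.full(50, np.pi / 2)] + 0.02 * np.random.randn(100)
print(track_direction(angles))               # -> [50]
```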

Relevance:

30.00%

Publisher:

Abstract:

The potential of a fibre optic sensor, detecting light backscatter in a cheese vat during coagulation and syneresis, to predict curd moisture, fat losses and curd yield was examined. Temperature, cutting time and calcium levels were varied to assess the strength of the predictions over a range of processing conditions. Equations were developed using a combination of independent variables: milk compositional and light backscatter parameters. Fat losses, curd yield and curd moisture content were predicted with a standard error of prediction (SEP) of ±2.65 g/100 g (R² = 0.93), ±0.95% (R² = 0.90) and ±1.43% (R² = 0.94), respectively. These results were used to develop a model for predicting curd moisture as a function of time during syneresis (SEP = ±1.72%; R² = 0.95). By monitoring coagulation and syneresis, this sensor technology could be employed to control curd moisture content, thereby improving process control during cheese manufacture.

Relevance:

30.00%

Publisher:

Abstract:

There has been considerable interest in the climate impact of trends in stratospheric water vapor (SWV). However, the representation of the radiative properties of water vapor under stratospheric conditions remains poorly constrained across different radiation codes. This study examines the sensitivity of a detailed line-by-line (LBL) code, a Malkmus narrow-band model and two broadband GCM radiation codes to a uniform perturbation in SWV in the longwave spectral region. The choice of sampling rate in wavenumber space (Δν) in the LBL code is shown to be important for calculations of the instantaneous change in heating rate (ΔQ) and the instantaneous longwave radiative forcing (ΔF_trop). ΔQ varies by up to 50% for values of Δν spanning 5 orders of magnitude, and ΔF_trop varies by up to 10%. In the three less detailed codes, ΔQ differs by up to 45% at 100 hPa and 50% at 1 hPa compared to an LBL calculation. This causes differences of up to 70% in the equilibrium fixed dynamical heating temperature change due to the SWV perturbation. The stratosphere-adjusted radiative forcing differs by up to 96% across the less detailed codes. The results highlight an important source of uncertainty in quantifying and modeling the links between SWV trends and climate.

Relevance:

30.00%

Publisher:

Abstract:

Solar-pointing Fourier transform infrared (FTIR) spectroscopy offers the capability to measure both the fine-scale and broadband spectral structure of atmospheric transmission simultaneously across wide spectral regions. It is therefore suited to the study of both water vapour monomer and continuum absorption behaviour. However, in order to address this properly, it is necessary to radiatively calibrate the FTIR instrument response. A solar-pointing high-resolution FTIR spectrometer was deployed as part of the ‘Continuum Absorption by Visible and Infrared radiation and its Atmospheric Relevance’ (CAVIAR) consortium project. This paper describes the radiative calibration process using an ultra-high-temperature blackbody and the consideration of the related influence factors. The result is a radiatively calibrated measurement of the solar irradiance at the ground across the IR region from 2000 to 10 000 cm⁻¹, with an uncertainty of between 3.3 and 5.9 per cent. This measurement is shown to be in good general agreement with a radiative-transfer model. The results from the CAVIAR field measurements are being used in ongoing studies of atmospheric absorbers, in particular the water vapour continuum.
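The sketch below shows the general shape of a blackbody-based radiometric calibration: an instrument response, in counts per radiance unit, is derived from a measured blackbody spectrum and the Planck function, and then applied to the solar measurement. This is a simplified illustration only; the source temperature, the raw spectra and the neglect of background emission are assumptions, not the CAVIAR procedure.

```python
# Single-point radiometric calibration sketch using the Planck function.
import numpy as np

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck_radiance(wavenumber_cm, temperature_k):
    """Spectral radiance in W m^-2 sr^-1 (cm^-1)^-1 at a given wavenumber."""
    nu = wavenumber_cm * 100.0                       # cm^-1 -> m^-1
    b = 2 * H * C**2 * nu**3 / np.expm1(H * C * nu / (KB * temperature_k))
    return b * 100.0                                 # per m^-1 -> per cm^-1

wn = np.linspace(2000, 10000, 5)                     # cm^-1
bb_counts = np.array([1.0, 2.5, 2.0, 1.2, 0.6])      # assumed raw blackbody spectrum
sun_counts = np.array([0.8, 3.0, 4.0, 3.5, 2.0])     # assumed raw solar spectrum

response = bb_counts / planck_radiance(wn, 3000.0)   # counts per radiance unit
solar_radiance = sun_counts / response               # calibrated solar spectrum
print(solar_radiance)
```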

Relevance:

30.00%

Publisher:

Abstract:

In situ high-resolution aircraft measurements of cloud microphysical properties were made, in coordination with ground-based radar and lidar remote sensing observations, of a line of small cumulus clouds, as part of the Aerosol Properties, PRocesses And InfluenceS on the Earth's climate (APPRAISE) project. A narrow but extensive line (~100 km long) of shallow convective clouds over the southern UK was studied. Cloud top temperatures were observed to be higher than −8 °C, but the clouds were seen to consist of supercooled droplets and varying concentrations of ice particles. No ice particles were observed to be falling into the cloud tops from above. Current parameterisations of ice nuclei (IN) numbers predict that too few particles will be active as ice nuclei to account for ice particle concentrations at the observed near-cloud-top temperatures (−7.5 °C). The role of mineral dust particles, at concentrations consistent with those observed near the surface, acting as high-temperature IN is considered important in this case. It was found that very high concentrations of ice particles (up to 100 L⁻¹) could be produced by secondary ice particle production, provided the observed small amount of primary ice (about 0.01 L⁻¹) was present to initiate it. This emphasises the need to understand primary ice formation in slightly supercooled clouds. It is shown using simple calculations that the Hallett-Mossop (HM) process is the likely source of the secondary ice. Model simulations of the case study were performed with the Aerosol Cloud and Precipitation Interactions Model (ACPIM). These parcel-model investigations confirmed the HM process to be a very important mechanism for producing the observed high ice concentrations. A key step in generating the high concentrations was the collision and coalescence of raindrops, which, once formed, fell rapidly through the cloud, collecting ice particles that caused them to freeze and quickly form large rimed particles. The broadening of the droplet size distribution by collision-coalescence was therefore a vital step in this process, as it was required to generate the large number of ice crystals observed in the time available. Simulations were also performed with the Weather Research and Forecasting (WRF) model. The results showed that while HM does act to increase the mass and number concentration of ice particles in these simulations, it was not found to be critical for the formation of precipitation. However, the WRF simulations produced a cloud top that was too cold, and this, combined with the assumption that ice nuclei removed by ice crystal formation are continually replenished, resulted in too many ice crystals forming by primary nucleation compared with the observations and parcel modelling.
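To make the multiplication argument concrete, the toy box model below grows an assumed primary ice concentration of 0.01 L⁻¹ by rime splintering at the classic Hallett-Mossop peak efficiency of roughly 350 splinters per milligram of rime near −5 °C. The per-crystal riming rate is an illustrative assumption chosen only to show that concentrations of tens per litre can appear within about 20 minutes; none of these numbers are taken from the APPRAISE observations or the ACPIM runs.

```python
# Toy box model of Hallett-Mossop secondary ice multiplication.
import numpy as np

SPLINTERS_PER_MG_RIME = 350.0      # classic HM peak efficiency near -5 C
RIME_RATE_MG_PER_S = 2e-5          # assumed rime accreted per crystal per second

n_ice = 10.0                       # primary ice, crystals per m^3 (= 0.01 per litre)
dt, t_end = 1.0, 1200.0            # 1 s steps over 20 minutes
for _ in np.arange(0.0, t_end, dt):
    # Each crystal rimes and sheds splinters, which rime in turn: exponential growth.
    n_ice += n_ice * RIME_RATE_MG_PER_S * SPLINTERS_PER_MG_RIME * dt

print(f"ice concentration after 20 min: {n_ice / 1e3:.0f} per litre")
```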

Relevance:

30.00%

Publisher:

Abstract:

Three new Mn(III) complexes, [MnL1(OOCH)(OH₂)] (1), [MnL2(OH₂)₂][Mn₂(L2)₂(NO₂)₃] (2) and [Mn₂(L1)₂(NO₂)₂] (3) (where H₂L1 = H₂Me₂Salen = 2,7-bis(2-hydroxyphenyl)-2,6-diazaocta-2,6-diene and H₂L2 = H₂Salpn = 1,7-bis(2-hydroxyphenyl)-2,6-diazahepta-1,6-diene), have been synthesized. X-ray crystal structure analysis reveals that 1 is a mononuclear species, whereas 2 contains a mononuclear cationic unit and a dinuclear nitrite-bridged (μ-1κO:2κO′) anionic unit. Complex 3 is a phenoxido-bridged dimer containing terminally coordinated nitrite. Complexes 1-3 show excellent catecholase-like activity with 3,5-di-tert-butylcatechol (3,5-DTBC) as the substrate. Kinetic measurements suggest that the rate of catechol oxidation follows saturation kinetics with respect to the substrate and first-order kinetics with respect to the catalyst. Formation of a bis(μ-oxo)dimanganese(III,III) intermediate during the course of the reaction is identified from ESI-MS spectra. The characteristic six-line EPR spectrum of complex 2 in the presence of 3,5-DTBC supports the formation of a manganese(II)-semiquinonate intermediate species during the catalytic oxidation of 3,5-DTBC.

Relevance:

30.00%

Publisher:

Abstract:

Although tube theory is successful in describing entangled polymers qualitatively, a more quantitative description requires precise and consistent definitions of its parameters. Here we investigate the simplest model of entangled polymers, namely a single Rouse chain in a cubic lattice of line obstacles, and illustrate the typical problems and uncertainties of the tube theory. In particular, we show that in general one needs three entanglement-related parameters, but only two combinations of them are relevant for the long-time dynamics. Conversely, the plateau modulus cannot be determined from these two parameters and requires a more detailed model of entanglements with explicit entanglement forces, such as the slip-spring model. It is shown that for the grid model the Rouse time within the tube is larger than the Rouse time of the free chain, in contrast to what standard tube theory assumes.
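For orientation, the free-chain Rouse spectrum referred to above is tau_p = tau_R / p^2 with tau_R = zeta N^2 b^2 / (3 pi^2 kT); the snippet below simply evaluates it in assumed reduced units (zeta = b = kT = 1), as a baseline against which a tube-constrained Rouse time can be compared.

```python
# Free-chain Rouse relaxation times in reduced units (zeta = b = kT = 1).
import numpy as np

def rouse_times(N, zeta=1.0, b=1.0, kT=1.0, modes=4):
    """Relaxation time of Rouse mode p: tau_p = tau_R / p**2."""
    tau_r = zeta * N**2 * b**2 / (3 * np.pi**2 * kT)
    return {p: tau_r / p**2 for p in range(1, modes + 1)}

print(rouse_times(N=100))   # slowest mode scales as N^2
```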

Relevance:

30.00%

Publisher:

Abstract:

The surface mass balance for Greenland and Antarctica has been calculated using model data from an AMIP-type experiment for the period 1979–2001, using the ECHAM5 spectral transform model at different triangular truncations. There is a significant reduction in the calculated ablation for the highest model resolution, T319, with an equivalent grid distance of ca. 40 km. As a consequence, the T319 model has a positive surface mass balance for both ice sheets during the period. For Greenland, the models at lower resolution, T106 and T63, on the other hand, have much stronger ablation, leading to a negative surface mass balance. Calculations have also been undertaken for a climate change experiment using the IPCC scenario A1B, with a T213 resolution (corresponding to a grid distance of some 60 km), comparing two 30-year periods at the end of the twentieth century and the end of the twenty-first century, respectively. For Greenland there is a change of 495 km³/year, going from a positive to a negative surface mass balance, corresponding to a sea-level rise of 1.4 mm/year. For Antarctica there is an increase in the positive surface mass balance of 285 km³/year, corresponding to a sea-level fall of 0.8 mm/year. The surface mass balance changes of the two ice sheets lead to a sea-level rise of 7 cm at the end of this century compared to the end of the twentieth century. Other possible mass losses, such as those due to changes in iceberg calving, are not considered. It appears that such changes would have to increase significantly, to several times more than the surface mass balance changes, if the ice sheets are to make a major contribution to sea-level rise this century. The model calculations indicate large inter-annual variations in all relevant parameters, making it impossible to identify robust trends from the examined periods at the end of the twentieth century. The calculated inter-annual variations are similar in magnitude to observations. The 30-year trend in surface mass balance (SMB) at the end of the twenty-first century is significant. The increase in precipitation on the ice sheets follows the Clausius-Clapeyron relation closely and is the main reason for the increase in the surface mass balance of Antarctica. On Greenland, precipitation in the form of snow is gradually starting to decrease and cannot compensate for the increase in ablation. Another factor is the proportionally higher temperature increase on Greenland, leading to larger ablation. It follows that ablation under a modest increase in temperature will not be sufficient to offset the increase in accumulation, but this will change once the temperature increase passes a critical limit. Calculations show that such a limit for Greenland might well be passed during this century. For Antarctica this will take much longer, probably well into the following centuries.
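The volume-to-sea-level conversions quoted above can be checked with a one-line calculation, assuming a global ocean area of about 3.61 × 10⁸ km² and water-equivalent volumes:

```python
# Ice-volume change to sea-level change, assuming ocean area ~3.61e8 km^2.
OCEAN_AREA_KM2 = 3.61e8

def sea_level_mm_per_year(volume_km3_per_year):
    return volume_km3_per_year / OCEAN_AREA_KM2 * 1e6   # km -> mm

print(sea_level_mm_per_year(495))   # Greenland loss: ~1.4 mm/year rise
print(sea_level_mm_per_year(285))   # Antarctic gain: ~0.8 mm/year fall
```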

Relevance:

30.00%

Publisher:

Abstract:

This paper presents an image motion model for airborne three-line-array (TLA) push-broom cameras. Both aircraft velocity and attitude instability are taken into account in modeling image motion. The effects of aircraft pitch, roll, and yaw on image motion are analyzed based on geometric relations in designated coordinate systems. Image motion is mathematically modeled as image motion velocity multiplied by exposure time. Quantitative analysis of the image motion velocity is then conducted in simulation experiments. The results show that image motion caused by aircraft velocity is space-invariant, while image motion caused by attitude instability is more complicated. Pitch, roll, and yaw all contribute to image motion to different extents: pitch dominates the along-track image motion, and both roll and yaw contribute strongly to the cross-track image motion. These results provide a valuable basis for image motion compensation to ensure high-accuracy imagery in aerial photogrammetry.
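As a back-of-the-envelope illustration of the space-invariant along-track term, the image motion velocity due to forward flight is v_img = f·V/H, and the smear per frame is that velocity multiplied by the exposure time; the numbers below are assumed example values, not figures from the paper.

```python
# Along-track image motion from aircraft forward velocity: v_img = f * V / H.
focal_length_mm = 100.0        # assumed focal length
ground_speed_m_s = 70.0        # assumed ground speed
altitude_m = 2000.0            # assumed flying height above ground
exposure_s = 1.0 / 500.0       # assumed exposure time

v_img_mm_s = focal_length_mm * ground_speed_m_s / altitude_m
smear_um = v_img_mm_s * exposure_s * 1000.0
print(f"image motion velocity: {v_img_mm_s:.2f} mm/s, "
      f"smear per exposure: {smear_um:.1f} micrometres")
```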