862 results for Least Square Method
Abstract:
This study investigated the potential application of mid-infrared spectroscopy (MIR 4,000–900 cm−1) for the determination of milk coagulation properties (MCP), titratable acidity (TA), and pH in Brown Swiss milk samples (n = 1,064). Because MCP directly influence the efficiency of the cheese-making process, there is strong industrial interest in developing a rapid method for their assessment. Currently, the determination of MCP involves time-consuming laboratory-based measurements, and it is not feasible to carry out these measurements on the large numbers of milk samples associated with milk recording programs. Mid-infrared spectroscopy is an objective and nondestructive technique providing rapid real-time analysis of food compositional and quality parameters. Analysis of milk rennet coagulation time (RCT, min), curd firmness (a30, mm), TA (SH°/50 mL; SH° = Soxhlet-Henkel degree), and pH was carried out, and MIR data were recorded over the spectral range of 4,000 to 900 cm−1. Models were developed by partial least squares regression using untreated and pretreated spectra. The MCP, TA, and pH prediction models were improved by using the combined spectral ranges of 1,600 to 900 cm−1, 3,040 to 1,700 cm−1, and 4,000 to 3,470 cm−1. The root mean square errors of cross-validation for the developed models were 2.36 min (RCT, range 24.9 min), 6.86 mm (a30, range 58 mm), 0.25 SH°/50 mL (TA, range 3.58 SH°/50 mL), and 0.07 (pH, range 1.15). The most successfully predicted attributes were TA, RCT, and pH. The model for the prediction of TA provided approximate prediction (R2 = 0.66), whereas the predictive models developed for RCT and pH could discriminate between high and low values (R2 = 0.59 to 0.62). It was concluded that, although the models require further development to improve their accuracy before their application in industry, MIR spectroscopy has potential application for the assessment of RCT, TA, and pH during routine milk analysis in the dairy industry. 
The implementation of such models could be a means of improving MCP through phenotype-based selection programs and of amending milk payment systems to incorporate MCP into their payment criteria.
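The calibration step in this study is partial least squares regression of a property on spectra. Below is a minimal PLS1 (NIPALS) sketch for one response variable; the data shapes, seed, and component count are synthetic stand-ins, not the paper's spectra or models.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal PLS1 via NIPALS deflation; returns regression coefficients."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc                      # weight vector from covariance with y
        w /= np.linalg.norm(w)
        t = Xc @ w                         # score vector
        tt = t @ t
        p = Xc.T @ t / tt                  # X loading
        qk = yc @ t / tt                   # y loading
        Xc = Xc - np.outer(t, p)           # deflate
        yc = yc - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)

def pls1_predict(X_train, y_train, X_new, n_components):
    B = pls1_fit(X_train, y_train, n_components)
    return (X_new - X_train.mean(axis=0)) @ B + y_train.mean()
```

With as many components as independent predictors, PLS1 reproduces the ordinary least squares fit, which gives a simple sanity check.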
Abstract:
We consider the linear equality-constrained least squares problem (LSE) of minimizing ${\|c - Gx\|}_2 $, subject to the constraint $Ex = p$. A preconditioned conjugate gradient method is applied to the Kuhn–Tucker equations associated with the LSE problem. We show that our method is well suited for structural optimization problems in reliability analysis and optimal design. Numerical tests are performed on an Alliant FX/8 multiprocessor and a Cray X-MP using some practical structural analysis data.
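The Kuhn–Tucker system underlying the LSE problem can be written down directly. The sketch below assembles it and uses a dense solve for illustration, not the paper's preconditioned conjugate gradient method; the matrices in the test are small synthetic examples.

```python
import numpy as np

def lse_kkt(G, c, E, p):
    """Solve min ||c - G x||_2 subject to E x = p via the KKT system
    [[G^T G, E^T], [E, 0]] [x; lam] = [G^T c; p] (dense solve for illustration)."""
    n = G.shape[1]
    m = E.shape[0]
    K = np.block([[G.T @ G, E.T], [E, np.zeros((m, m))]])
    rhs = np.concatenate([G.T @ c, p])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]   # solution x and Lagrange multipliers
```

At the solution, the constraint holds exactly and the gradient of the Lagrangian vanishes, which is easy to verify numerically.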
Abstract:
Following a malicious or accidental atmospheric release in an outdoor environment it is essential for first responders to ensure safety by identifying areas where human life may be in danger. For this to happen quickly, reliable information is needed on the source strength and location, and the type of chemical agent released. We present here an inverse modelling technique that estimates the source strength and location of such a release, together with the uncertainty in those estimates, using a limited number of concentration measurements from a network of chemical sensors, assuming a single, steady, ground-level source. The technique is evaluated using data from a set of dispersion experiments conducted in a meteorological wind tunnel, where simultaneous measurements of concentration time series were obtained in the plume from a ground-level point-source emission of a passive tracer. In particular, we analyze the response to the number of sensors deployed and their arrangement, and to sampling and model errors. We find that the inverse algorithm can generate acceptable estimates of the source characteristics with as few as four sensors, provided these are well placed and the sampling error is controlled. Configurations with at least three sensors in a profile across the plume were found to be superior to other arrangements examined. Analysis of the influence of sampling error due to the use of short averaging times showed that the uncertainty in the source estimates grew as the sampling time decreased. This demonstrated that averaging times greater than about 5 min (full-scale time) lead to acceptable accuracy.
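The core of such an inversion can be sketched as a grid search over candidate source locations, with the source strength at each candidate obtained by a one-parameter least squares solve. The plume model, sensor layout, and numbers below are invented for illustration; the study's actual dispersion model and uncertainty treatment are far richer.

```python
import math

def plume(sensor, source):
    """Toy ground-level plume shape: unit-strength concentration at a sensor.
    (A stand-in for a real dispersion model; wind blows in +x.)"""
    dx = sensor[0] - source[0]
    dy = sensor[1] - source[1]
    if dx <= 0:          # a sensor upwind of the source sees nothing
        return 0.0
    return math.exp(-dy * dy / (0.1 * dx)) / dx

def estimate_source(sensors, measured, candidates):
    """For each candidate location, the best-fit strength is the scalar
    least squares solution q = sum(g*z) / sum(g*g); keep the best fit."""
    best = None
    for src in candidates:
        g = [plume(s, src) for s in sensors]
        gg = sum(gi * gi for gi in g)
        if gg == 0.0:
            continue
        q = sum(gi * zi for gi, zi in zip(g, measured)) / gg
        resid = sum((zi - q * gi) ** 2 for gi, zi in zip(g, measured))
        if best is None or resid < best[0]:
            best = (resid, src, q)
    return best[1], best[2]
```

With noise-free synthetic measurements from four sensors, the search recovers both the location and the strength, echoing the paper's finding that four well-placed sensors can suffice.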
Abstract:
Liquid clouds play a profound role in the global radiation budget but it is difficult to remotely retrieve their vertical profile. Ordinary narrow field-of-view (FOV) lidars receive a strong return from such clouds but the information is limited to the first few optical depths. Wide-angle multiple-FOV lidars can isolate radiation scattered multiple times before returning to the instrument, often penetrating much deeper into the cloud than the singly-scattered signal. These returns potentially contain information on the vertical profile of extinction coefficient, but are challenging to interpret due to the lack of a fast radiative transfer model for simulating them. This paper describes a variational algorithm that incorporates a fast forward model based on the time-dependent two-stream approximation, and its adjoint. Application of the algorithm to simulated data from a hypothetical airborne three-FOV lidar with a maximum footprint width of 600 m suggests that this approach should be able to retrieve the extinction structure down to an optical depth of around 6, and total optical depth up to at least 35, depending on the maximum lidar FOV. The convergence behaviors of Gauss-Newton and quasi-Newton optimization schemes are compared. We then present results from an application of the algorithm to observations of stratocumulus by the 8-FOV airborne “THOR” lidar. It is demonstrated how the averaging kernel can be used to diagnose the effective vertical resolution of the retrieved profile, and therefore the depth to which information on the vertical structure can be recovered. This work enables exploitation of returns from spaceborne lidar and radar subject to multiple scattering more rigorously than previously possible.
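A Gauss-Newton iteration of the kind compared in this paper can be illustrated on a toy nonlinear least squares problem. The forward model below is a simple two-parameter exponential, not the two-stream lidar model, and the starting point and data are fabricated for the sketch.

```python
import math

def gauss_newton_exp(t, y, a, b, iterations=30):
    """Fit y ≈ a*exp(-b*t) by Gauss-Newton: linearize the forward model and
    solve the 2x2 normal equations each step (Cramer's rule)."""
    for _ in range(iterations):
        r = [yi - a * math.exp(-b * ti) for ti, yi in zip(t, y)]
        # Jacobian of the residual w.r.t. (a, b): columns (-e, a*t*e)
        J = [(-math.exp(-b * ti), a * ti * math.exp(-b * ti)) for ti in t]
        # normal equations (J^T J) d = -J^T r
        A11 = sum(j0 * j0 for j0, _ in J)
        A12 = sum(j0 * j1 for j0, j1 in J)
        A22 = sum(j1 * j1 for _, j1 in J)
        g1 = -sum(j0 * ri for (j0, _), ri in zip(J, r))
        g2 = -sum(j1 * ri for (_, j1), ri in zip(J, r))
        det = A11 * A22 - A12 * A12
        a += (g1 * A22 - g2 * A12) / det
        b += (A11 * g2 - A12 * g1) / det
    return a, b
```

On a zero-residual problem like this one, Gauss-Newton converges quadratically near the solution, which is one reason it is a standard choice in variational retrievals.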
Abstract:
In this paper we propose and analyze a hybrid $hp$ boundary element method for the solution of problems of high frequency acoustic scattering by sound-soft convex polygons, in which the approximation space is enriched with oscillatory basis functions which efficiently capture the high frequency asymptotics of the solution. We demonstrate, both theoretically and via numerical examples, exponential convergence with respect to the order of the polynomials, moreover providing rigorous error estimates for our approximations to the solution and to the far field pattern, in which the dependence on the frequency of all constants is explicit. Importantly, these estimates prove that, to achieve any desired accuracy in the computation of these quantities, it is sufficient to increase the number of degrees of freedom in proportion to the logarithm of the frequency as the frequency increases, in contrast to the at least linear growth required by conventional methods.
Abstract:
The effects of forage conservation method on plasma lipids, mammary lipogenesis, and milk fat were examined in 2 complementary experiments. Treatments comprised fresh grass, hay, or untreated (UTS) or formic acid treated silage (FAS) prepared from the same grass sward. Preparation of conserved forages coincided with the collection of samples from cows fed fresh grass. In the first experiment, 5 multiparous Finnish Ayrshire cows (229 d in milk) were used to compare a diet based on fresh grass followed by hay during 2 consecutive 14-d periods, separated by a 5-d transition during which extensively wilted grass was fed. In the second experiment, 5 multiparous Finnish Ayrshire cows (53 d in milk) were assigned to 1 of 2 blocks and allocated treatments according to a replicated 3 × 3 Latin square design, with 14-d periods to compare hay, UTS, and FAS. Cows received 7 or 9 kg/d of the same concentrate in experiments 1 and 2, respectively. Arterial concentrations of triacylglycerol (TAG) and phospholipid were higher in cows fed fresh grass, UTS, and FAS compared with hay. Nonesterified fatty acid (NEFA) concentrations and the relative abundance of 18:2n-6 and 18:3n-3 in TAG of arterial blood were also higher in cows fed fresh grass than conserved forages. On all diets, TAG was the principal source of fatty acids (FA) for milk fat synthesis, whereas mammary extraction of NEFA was negligible, except during zero-grazing, which was associated with a lower, albeit positive, calculated energy balance. Mammary FA uptake was higher and the synthesis of 16:0 lower in cows fed fresh grass than hay. Conservation of grass by drying or ensiling had no influence on mammary extraction of TAG and NEFA, despite an increase in milk fat secretion for silages compared with hay and for FAS compared with UTS.
Relative to hay, milk fat from fresh grass contained lower 12:0, 14:0, and 16:0 and higher S3,R7,R11,15-tetramethyl-16:0, cis-9 18:1, trans-11 18:1, cis-9,trans-11 18:2, 18:2n-6, and 18:3n-3 concentrations. Even though conserved forages altered mammary lipogenesis, differences in milk FA composition were relatively minor, other than a higher enrichment of S3,R7,R11,15-tetramethyl-16:0 in milk from silages compared with hay. In conclusion, differences in milk fat composition on fresh grass relative to conserved forages were associated with a lower energy balance, increased uptake of preformed FA, and decreased synthesis of 16:0 de novo in the mammary glands, in the absence of alterations in stearoyl-coenzyme A desaturase activity.
Abstract:
We propose a new sparse model construction method aimed at maximizing a model’s generalisation capability for a large class of linear-in-the-parameters models. The coordinate descent optimization algorithm is employed with a modified l1-penalized least squares cost function in order to estimate a single parameter and its regularization parameter simultaneously based on the leave-one-out mean square error (LOOMSE). Our original contribution is to derive a closed form for the optimal LOOMSE regularization parameter of a single term model, for which we show that the LOOMSE can be analytically computed without actually splitting the data set, leading to a very simple parameter estimation method. We then integrate the new results within the coordinate descent optimization algorithm to update model parameters one at a time for linear-in-the-parameters models. Consequently, a fully automated procedure is achieved without resorting to a separate validation data set for iterative model evaluation. Illustrative examples are included to demonstrate the effectiveness of the new approaches.
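The key identity behind this abstract, that the leave-one-out error of a linear-in-the-parameters model is available in closed form without refitting, can be checked directly. The sketch uses plain l2-regularized least squares rather than the authors' l1-penalized coordinate descent, and the data are synthetic.

```python
import numpy as np

def loomse_closed_form(X, y, lam):
    """LOO mean square error of ridge regression without any refitting:
    e_loo_i = e_i / (1 - h_ii), with hat matrix H = X (X^T X + lam I)^{-1} X^T."""
    p = X.shape[1]
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    e = y - H @ y
    e_loo = e / (1.0 - np.diag(H))
    return np.mean(e_loo ** 2)

def loomse_brute_force(X, y, lam):
    """Reference implementation: actually refit with each point held out."""
    n, p = X.shape
    errs = []
    for i in range(n):
        mask = np.arange(n) != i
        Xi, yi = X[mask], y[mask]
        beta = np.linalg.solve(Xi.T @ Xi + lam * np.eye(p), Xi.T @ yi)
        errs.append((y[i] - X[i] @ beta) ** 2)
    return np.mean(errs)
```

The shortcut is exact for ridge regression (a Sherman-Morrison argument), so the two routines agree to machine precision while the closed form does n times less work.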
Abstract:
This paper presents a software-based study of a hardware-based non-sorting median calculation method on a set of integer numbers. The method divides the binary representation of each integer element in the set into bit slices in order to find the element located in the middle position. The method exhibits a linear complexity order and our analysis shows that the best performance in execution time is obtained when 4-bit slices are used for 8-bit and 16-bit integers, for almost any data set size. Results suggest that a software implementation of the bit-slice method for median calculation outperforms sorting-based methods, with the improvement increasing for larger data set sizes. For data set sizes of N > 5, our simulations show an improvement of at least 40%.
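The bit-slice idea can be sketched as a radix-select: scan slices from the most significant down, count how many elements fall into each slice value, and narrow the candidates to the bucket containing the middle position. This is a plain-Python illustration of the general technique under the assumption of non-negative integers that fit in `bits`, not the paper's hardware-oriented implementation.

```python
def bitslice_median(data, bits=8, slice_bits=4):
    """Non-sorting median via bit slices: at each slice, counting tells us
    which bucket holds the k-th smallest element, so only that bucket's
    elements survive to the next (less significant) slice.
    Returns the element at index len(data)//2 of the sorted order."""
    k = len(data) // 2
    mask = (1 << slice_bits) - 1
    candidates = list(data)
    for shift in range(bits - slice_bits, -1, -slice_bits):
        counts = [0] * (1 << slice_bits)
        for v in candidates:
            counts[(v >> shift) & mask] += 1
        total = 0
        for bucket, cnt in enumerate(counts):
            if total + cnt > k:
                k -= total   # rank of the target within this bucket
                candidates = [v for v in candidates
                              if (v >> shift) & mask == bucket]
                break
            total += cnt
    return candidates[0]
```

Each element is visited a constant number of times (bits/slice_bits passes), which is the linear complexity the abstract refers to.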
Abstract:
The invention provides antisense antiviral compounds and methods of their use and production in inhibition of growth of viruses of the Arenaviridae family and in the treatment of a viral infection. The compounds are particularly useful in the treatment of Arenavirus infection in a mammal. The antisense antiviral compounds are substantially uncharged morpholino oligonucleotides having a sequence of 12-40 subunits, including at least 12 subunits having a targeting sequence that is complementary to a region associated with viral RNA sequences within a 19 nucleotide region of the 5′-terminal regions of the viral RNA, viral complementary RNA and/or mRNA identified by SEQ ID NO:1.
Abstract:
In this paper we propose and analyse a hybrid numerical-asymptotic boundary element method for the solution of problems of high frequency acoustic scattering by a class of sound-soft nonconvex polygons. The approximation space is enriched with carefully chosen oscillatory basis functions; these are selected via a study of the high frequency asymptotic behaviour of the solution. We demonstrate via a rigorous error analysis, supported by numerical examples, that to achieve any desired accuracy it is sufficient for the number of degrees of freedom to grow only in proportion to the logarithm of the frequency as the frequency increases, in contrast to the at least linear growth required by conventional methods. This appears to be the first such numerical analysis result for any problem of scattering by a nonconvex obstacle. Our analysis is based on new frequency-explicit bounds on the normal derivative of the solution on the boundary and on its analytic continuation into the complex plane.
Abstract:
Time series of global and regional mean Surface Air Temperature (SAT) anomalies are a common metric used to estimate recent climate change. Various techniques can be used to create these time series from meteorological station data. The degree of difference arising from using five different techniques, based on existing temperature anomaly dataset techniques, to estimate Arctic SAT anomalies over land and sea ice was investigated using reanalysis data as a testbed. Techniques which interpolated anomalies were found to result in smaller errors than non-interpolating techniques relative to the reanalysis reference. Kriging techniques provided the smallest errors in estimates of Arctic anomalies and Simple Kriging was often the best kriging method in this study, especially over sea ice. A linear interpolation technique had, on average, Root Mean Square Errors (RMSEs) up to 0.55 K larger than the two kriging techniques tested. Non-interpolating techniques provided the least representative anomaly estimates. Nonetheless, they serve as useful checks for confirming whether estimates from interpolating techniques are reasonable. The interaction of meteorological station coverage with estimation techniques between 1850 and 2011 was simulated using an ensemble dataset comprising repeated individual years (1979-2011). All techniques were found to have larger RMSEs for earlier station coverages. This supports calls for increased data sharing and data rescue, especially in sparsely observed regions such as the Arctic.
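One property that distinguishes kriging from the non-interpolating techniques above is that, without a nugget term, it honours the observations exactly. A minimal 1-D ordinary kriging sketch shows this; the station positions, anomaly values, and exponential covariance with its range parameter are illustrative choices, not the datasets or variogram models of the study.

```python
import numpy as np

def ordinary_kriging(x_obs, y_obs, x_new, corr_range=2.0):
    """Ordinary kriging in 1-D with an exponential covariance and no nugget.
    The augmented system enforces weights that sum to 1 (unknown mean)."""
    x_obs = np.asarray(x_obs, float)
    y_obs = np.asarray(y_obs, float)
    n = len(x_obs)
    cov = np.exp(-np.abs(x_obs[:, None] - x_obs[None, :]) / corr_range)
    A = np.ones((n + 1, n + 1))        # borders of ones give the constraint rows
    A[:n, :n] = cov
    A[n, n] = 0.0
    preds = []
    for x0 in np.atleast_1d(x_new):
        b = np.ones(n + 1)
        b[:n] = np.exp(-np.abs(x_obs - x0) / corr_range)
        w = np.linalg.solve(A, b)
        preds.append(w[:n] @ y_obs)    # weighted combination of observations
    return np.array(preds)
```

Evaluating the predictor at the station locations returns the station anomalies themselves, the exact-interpolation property.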
Abstract:
One of the prerequisites for achieving skill in decadal climate prediction is to initialize and predict the circulation in the Atlantic Ocean successfully. The RAPID array measures the Atlantic Meridional Overturning Circulation (MOC) at 26°N. Here we develop a method to include these observations in the Met Office Decadal Prediction System (DePreSys). The proposed method uses covariances of overturning transport anomalies at 26°N with ocean temperature and salinity anomalies throughout the ocean to create the density structure necessary to reproduce the observed transport anomaly. Assimilating transport alone in this way effectively reproduces the observed transport anomalies at 26°N and is better than using basin-wide temperature and salinity observations alone. However, when the transport observations are combined with in situ temperature and salinity observations in the analysis, the transport is not currently reproduced so well. The reasons for this are investigated using pseudo-observations in a twin experiment framework. Sensitivity experiments show that the MOC on monthly time-scales, at least in the HadCM3 model, is modulated by a mechanism where non-local density anomalies appear to be more important for transport variability at 26°N than local density gradients.
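The covariance step described above, regressing model anomalies onto a transport anomaly, has the flavour of a single-observation optimal interpolation update. The sketch below shows that structure with a synthetic ensemble and an exactly linear state-transport relationship; the real DePreSys analysis is considerably more involved.

```python
import numpy as np

def covariance_update(state_ens, transport_ens, transport_obs):
    """Update the ensemble-mean state from one scalar transport observation:
    increment = cov(state, transport) / var(transport) * innovation."""
    x_mean = state_ens.mean(axis=0)
    d_mean = transport_ens.mean()
    x_anom = state_ens - x_mean
    d_anom = transport_ens - d_mean
    gain = (x_anom.T @ d_anom) / (d_anom @ d_anom)   # regression coefficients
    return x_mean + gain * (transport_obs - d_mean)
```

When a state variable covaries linearly with the transport, the update reproduces the state implied by the observed transport anomaly, which is the sense in which assimilating the transport alone can "create the density structure" consistent with it.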
Abstract:
Background: The validity of ensemble averaging on event-related potential (ERP) data has been questioned, due to its assumption that the ERP is identical across trials. Thus, there is a need for preliminary testing for cluster structure in the data. New method: We propose a complete pipeline for the cluster analysis of ERP data. To increase the signal-to-noise ratio (SNR) of the raw single-trials, we used a denoising method based on Empirical Mode Decomposition (EMD). Next, we used a bootstrap-based method to determine the number of clusters, through a measure called the Stability Index (SI). We then used a clustering algorithm based on a Genetic Algorithm (GA) to define initial cluster centroids for subsequent k-means clustering. Finally, we visualised the clustering results through a scheme based on Principal Component Analysis (PCA). Results: After validating the pipeline on simulated data, we tested it on data from two experiments – a P300 speller paradigm on a single subject and a language processing study on 25 subjects. Results revealed evidence for the existence of 6 clusters in one experimental condition from the language processing study. Further, a two-way chi-square test revealed an influence of subject on cluster membership.
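Of the pipeline stages above, the k-means step is the easiest to sketch. Below is a plain k-means loop on toy 2-D points; the deterministic centroid initialization stands in for the GA-chosen centroids of the paper, and the data are fabricated.

```python
def kmeans(points, centroids, iterations=10):
    """Plain k-means: assign each point to its nearest centroid, then move
    each centroid to the mean of its assigned points."""
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        labels = []
        for p in points:
            d = [sum((pi - ci) ** 2 for pi, ci in zip(p, c)) for c in centroids]
            j = d.index(min(d))
            clusters[j].append(p)
            labels.append(j)
        centroids = [
            tuple(sum(coords) / len(cl) for coords in zip(*cl)) if cl else c
            for cl, c in zip(clusters, centroids)
        ]
    return labels, centroids
```

Good initial centroids matter because k-means only refines locally, which is exactly the gap the paper's GA initialization is meant to fill.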
Abstract:
Immunodiagnostic microneedles provide a novel way to extract protein biomarkers from the skin in a minimally invasive manner for analysis in vitro. The technology could overcome challenges in biomarker analysis specifically in solid tissue, which currently often involves invasive biopsies. This study describes the development of a multiplex immunodiagnostic device incorporating mechanisms to detect multiple antigens simultaneously, as well as internal assay controls for result validation. A novel detection method is also proposed. It enables signal detection specifically at microneedle tips and therefore may aid the construction of depth profiles of skin biomarkers. The detection method can be coupled with computerised densitometry for signal quantitation. The antigen specificity, sensitivity and functional stability of the device were assessed against a number of model biomarkers. Detection and analysis of endogenous antigens (interleukins 1α and 6) from the skin using the device was demonstrated. The results were verified using conventional enzyme-linked immunosorbent assays. The detection limit of the microneedle device, at ≤10 pg/mL, was at least comparable to conventional plate-based solid-phase enzyme immunoassays.
Abstract:
The present invention relates to vertebrate pesticide compositions for use in controlling pests such as rats and mice. The active substances in the vertebrate pesticide compositions comprise at least two components: a high concentration of low-toxicity anticoagulant and a low concentration of high-toxicity anticoagulant. The vertebrate pesticide compositions may also comprise various other components.