106 results for sparse linear systems


Relevance: 30.00%

Abstract:

This article is a commentary on several research studies conducted on the prospects for aerobic rice production systems that aim at reducing the demand for irrigation water, which in certain major rice-producing areas of the world is becoming increasingly scarce. The research studies considered, as reported in published articles mainly under the aegis of the International Rice Research Institute (IRRI), have a narrow scope in that they test only 3 or 4 rice varieties under different soil moisture treatments obtained with controlled irrigation, but with other agronomic factors of production held constant. Consequently, these studies do not permit an assessment of the interactions among agronomic factors that will be of critical significance to the performance of any production system. Varying the production factor of "water" will also seriously affect the levels of the other factors required to optimise the performance of a production system. The major weakness in the studies analysed in this article originates from not taking account of the interactions between experimental and non-experimental factors involved in the comparisons between different production systems. This applies to the experimental field design used for the research studies as well as to the subsequent statistical analyses of the results. The existence of such interactions is a serious complicating element that makes meaningful comparisons between different crop production systems difficult. Consequently, the data and conclusions drawn from such research readily become biased towards proposing standardised solutions for possible introduction to farmers through a linear technology transfer process. Yet, the variability and diversity encountered in the real-world farming environment demand more flexible solutions and approaches in the dissemination of knowledge-intensive production practices through "experiential learning" types of processes, such as those employed by farmer field schools. This article illustrates, based on expertise of the 'system of rice intensification' (SRI), that several cost-effective and environment-friendly agronomic solutions to reduce the demand for irrigation water, other than the asserted need for the introduction of new cultivars, are feasible. Further, these agronomic solutions can offer immediate benefits of reduced water requirements and increased net returns that would be readily accessible to a wide range of rice producers, particularly the resource-poor smallholders. © 2009 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

This study sets out to find the best calving pattern for small-scale dairy systems in Michoacan State, central Mexico. Two models were built. First, a linear programming model was constructed to optimize the calving pattern and herd structure according to metabolizable energy availability. Second, a Markov chain model was built to investigate three reproductive scenarios (good, average and poor) in order to suggest factors that maintain the calving pattern given by the linear programming model. Though it was not possible to maintain the optimal linear programming pattern, the Markov chain model suggested adopting different reproduction strategies according to the period of the year in which the cow is expected to calve. Comparing the different scenarios, the Markov model indicated the effect of calving interval on calving pattern and herd structure.
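
The abstract pairs a linear programme (for the energy-constrained optimum) with a Markov chain (for the reproductive scenarios). As a rough illustration of the second ingredient only, the sketch below builds a toy transition matrix over invented reproductive states and compares the long-run herd composition under a good and a poor conception rate; the states, probabilities and scenario values are assumptions for illustration, not the authors' model.

    # Hypothetical sketch, not the published model: a discrete-time Markov chain
    # over simplified reproductive states, used to see how the conception rate
    # shifts the long-run herd structure. All numbers are invented.
    import numpy as np

    # States: 0 = recently calved, 1 = open (not pregnant), 2 = pregnant
    P_good = np.array([
        [0.0, 1.0, 0.0],   # after calving the cow becomes open
        [0.0, 0.4, 0.6],   # good scenario: 60% chance of conception per period
        [0.5, 0.0, 0.5],   # pregnancy ends in calving after roughly two periods
    ])
    P_poor = P_good.copy()
    P_poor[1] = [0.0, 0.7, 0.3]  # poor scenario: lower conception rate

    def stationary(P: np.ndarray) -> np.ndarray:
        """Long-run share of cows in each state (left eigenvector for eigenvalue 1)."""
        vals, vecs = np.linalg.eig(P.T)
        pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
        return pi / pi.sum()

    print("good reproduction:", stationary(P_good))
    print("poor reproduction:", stationary(P_poor))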

Relevance: 30.00%

Abstract:

The influence matrix is used in ordinary least-squares applications for monitoring statistical multiple-regression analyses. Concepts related to the influence matrix provide diagnostics on the influence of individual data on the analysis - the analysis change that would occur by leaving one observation out, and the effective information content (degrees of freedom for signal) in any sub-set of the analysed data. In this paper, the corresponding concepts have been derived in the context of linear statistical data assimilation in numerical weather prediction. An approximate method to compute the diagonal elements of the influence matrix (the self-sensitivities) has been developed for a large-dimension variational data assimilation system (the four-dimensional variational system of the European Centre for Medium-Range Weather Forecasts). Results show that, in the boreal spring 2003 operational system, 15% of the global influence is due to the assimilated observations in any one analysis, and the complementary 85% is the influence of the prior (background) information, a short-range forecast containing information from earlier assimilated observations. About 25% of the observational information is currently provided by surface-based observing systems, and 75% by satellite systems. Low-influence data points usually occur in data-rich areas, while high-influence data points are in data-sparse areas or in dynamically active regions. Background-error correlations also play an important role: high correlation diminishes the observation influence and amplifies the importance of the surrounding real and pseudo observations (prior information in observation space). Incorrect specifications of background and observation-error covariance matrices can be identified, interpreted and better understood by the use of influence-matrix diagnostics for the variety of observation types and observed variables used in the data assimilation system. Copyright © 2004 Royal Meteorological Society
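
For readers more familiar with ordinary least squares than with variational assimilation, the following minimal sketch computes the OLS analogue of the quantities named above: the influence (hat) matrix H = X (X'X)^{-1} X', whose diagonal gives the self-sensitivities and whose trace gives the degrees of freedom for signal. The data are synthetic, and the explicit formula below is of course not how a large 4D-Var system evaluates these diagnostics.

    # OLS analogue of the influence diagnostics, on synthetic data.
    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 50, 3
    X = rng.normal(size=(n, p))

    H = X @ np.linalg.solve(X.T @ X, X.T)   # influence (hat) matrix
    self_sens = np.diag(H)                  # influence of each datum on its own analysis
    dof_signal = np.trace(H)                # equals p for a full-rank OLS problem

    print("largest self-sensitivity:", self_sens.max())
    print("degrees of freedom for signal:", dof_signal)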

Relevance: 30.00%

Abstract:

Objectives: To assess the potential source of variation that surgeons may add to patient outcome in a clinical trial of surgical procedures. Methods: Two large (n = 1380) parallel multicentre randomized surgical trials were undertaken to compare laparoscopically assisted hysterectomy with conventional methods of abdominal and vaginal hysterectomy, involving 43 surgeons. The primary end point of the trial was the occurrence of at least one major complication. Patients were nested within surgeons, giving the data set a hierarchical structure. A total of 10% of patients had at least one major complication, that is, a sparse binary outcome variable. A linear mixed logistic regression model (with logit link function) was used to model the probability of a major complication, with surgeon fitted as a random effect. Models were fitted using the method of maximum likelihood in SAS. Results: There were many convergence problems. These were resolved using a variety of approaches, including: treating all effects as fixed for the initial model building; modelling the variance of a parameter on a logarithmic scale; and centring of continuous covariates. The initial model building process indicated no significant 'type of operation' by surgeon interaction effect in either trial; the 'type of operation' term was highly significant in the abdominal trial, and the 'surgeon' term was not significant in either trial. Conclusions: The analysis did not find a surgeon effect, but it is difficult to conclude that there was not a difference between surgeons. The statistical test may have lacked sufficient power; the variance estimates were small with large standard errors, indicating that the precision of the variance estimates may be questionable.
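
The trials' model was a mixed logistic regression fitted in SAS; the sketch below only illustrates the "all effects treated as fixed" fallback mentioned in the results, using statsmodels on simulated data. The column names, the data-generating step and the 10% complication rate applied uniformly across surgeons are assumptions for illustration, and with outcomes this sparse the fit can itself struggle to converge, mirroring the problems reported.

    # Hypothetical fixed-effects fallback, fitted on simulated (not trial) data.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 1380
    df = pd.DataFrame({
        "operation": rng.choice(["laparoscopic", "conventional"], size=n),
        "surgeon": rng.integers(1, 44, size=n),          # 43 surgeons
    })
    df["complication"] = rng.binomial(1, 0.10, size=n)   # ~10% overall complication rate

    # Surgeons with no complications give near-infinite coefficients (sparse outcome),
    # which is one face of the convergence problems described above.
    model = smf.logit("complication ~ C(operation) + C(surgeon)", data=df)
    result = model.fit(method="bfgs", maxiter=500, disp=False)
    print(result.params.filter(like="operation"))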

Relevance: 30.00%

Abstract:

Matrix isolation IR spectroscopy has been used to study the vacuum pyrolysis of 1,1,3,3-tetramethyldisiloxane (L1), 1,1,3,3,5,5-hexamethyltrisiloxane (L2) and 3H,5H-octamethyltetrasiloxane (L3) at ca. 1000 K in a flow reactor at low pressures. The hydrocarbons CH3, CH4, C2H2, C2H4, and C2H6 were observed as prominent pyrolysis products in all three systems, and amongst the weaker features are bands arising from the methylsilanes Me2SiH2 (for L1 and L2) and Me3SiH (for L3). The fundamental of SiO was also observed very weakly. By use of quantum chemical calculations combined with earlier kinetic models, mechanisms have been proposed involving the intermediacy of the silanones Me2Si=O and MeSiH=O. Model calculations on the decomposition pathways of H3SiOSiH3 and H3SiOSiH2OSiH3 show that silanone elimination is favoured over silylene extrusion.

Relevance: 30.00%

Abstract:

A homologous series of macrocyclic oligoamides has been prepared in high yield by reaction of isophthaloyl chloride with m-phenylenediamine under pseudo-high-dilution conditions. The products were characterized by infrared and ¹H NMR spectroscopies, matrix-assisted laser desorption/ionization time-of-flight mass spectrometry, and gel permeation chromatography (GPC). A series of linear oligomers was prepared for comparison. The macrocycles ranged in size from the cyclic trimer up to at least the cyclic nonamer (90 ring atoms). The same homologous series of macrocyclic oligomers was prepared in high yield by the cyclodepolymerization of poly(m-phenylene isophthalamide) (Nomex). Cyclodepolymerization was best achieved by treating a 1% w/v solution of the polymer in dimethyl sulfoxide containing calcium chloride or lithium chloride with 3-4 mol % of sodium hydride or the sodium salt of benzanilide at 150 °C for 70 h. Treatment of a concentrated solution of the macrocyclic oligomers (25% w/v) with 4 mol % of sodium hydride or the sodium salt of benzanilide in a solution of lithium chloride in dimethyl sulfoxide at 170 °C for 6 h resulted in efficient entropically driven ring-opening polymerizations to give poly(m-phenylene isophthalamide), characterized by infrared and ¹H NMR spectroscopies and by GPC. The molecular weights obtained were comparable with those of the commercial polymer.

Relevance: 30.00%

Abstract:

The relationship between acrylamide and its precursors, namely free asparagine and reducing sugars, was studied in cakes made from potato flake, wholemeal wheat, and wholemeal rye, cooked at 180 °C for 5 to 60 min. Between 5 and 20 min, major losses of asparagine, water, and total reducing sugars were accompanied by large increases in acrylamide, which reached a maximum in all three products between 25 and 30 min, followed by a slow linear reduction. Acrylamide formation did not occur to a large degree until the moisture contents of the cakes fell below 5%. Linear relationships were observed for acrylamide formation with the residual levels of asparagine and reducing sugars for all three food materials.
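
Purely as an illustration of the kind of linear relationship reported in the last sentence, the snippet below fits a straight line of acrylamide against residual asparagine with scipy; the numbers are invented and are not the paper's data.

    # Illustrative straight-line fit; values are made up.
    import numpy as np
    from scipy.stats import linregress

    residual_asparagine = np.array([4.0, 3.1, 2.2, 1.5, 0.9, 0.4])   # assumed, mmol/kg
    acrylamide = np.array([12.0, 19.0, 26.0, 31.0, 36.0, 40.0])      # assumed, umol/kg

    fit = linregress(residual_asparagine, acrylamide)
    print(f"slope = {fit.slope:.2f}, r^2 = {fit.rvalue**2:.3f}")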

Relevance: 30.00%

Abstract:

In this paper, new robust nonlinear model construction algorithms for a large class of linear-in-the-parameters models are introduced to enhance model robustness, comprising three algorithms that combine the A-optimality or D-optimality criterion, or the PRESS statistic (Predicted REsidual Sum of Squares), with the regularised orthogonal least squares algorithm. A common characteristic of these algorithms is that the inherent computational efficiency associated with the orthogonalisation scheme in orthogonal least squares, or regularised orthogonal least squares, carries over to the new algorithms, so that they remain computationally efficient. A numerical example is included to demonstrate the effectiveness of the algorithms. Copyright © 2003 IFAC.
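
As a rough sketch of how a PRESS-driven forward selection terminates by itself, the code below greedily adds candidate regressors while the leave-one-out PRESS statistic keeps improving, using the standard hat-matrix identity PRESS = sum((e_i / (1 - h_ii))^2). It deliberately omits the orthogonalisation, the regularisation and the A-/D-optimality variants of the actual algorithms; the data and stopping behaviour are illustrative assumptions.

    # Simplified PRESS-based forward subset selection on synthetic data.
    import numpy as np

    def press(X: np.ndarray, y: np.ndarray) -> float:
        """Leave-one-out residual sum of squares via the hat-matrix identity."""
        H = X @ np.linalg.solve(X.T @ X, X.T)
        resid = y - H @ y
        return float(np.sum((resid / (1.0 - np.diag(H))) ** 2))

    def forward_press_selection(candidates: np.ndarray, y: np.ndarray) -> list:
        selected = []
        best_score = np.inf
        while len(selected) < candidates.shape[1]:
            scores = {j: press(candidates[:, selected + [j]], y)
                      for j in range(candidates.shape[1]) if j not in selected}
            j_best = min(scores, key=scores.get)
            if scores[j_best] >= best_score:    # PRESS no longer improves: stop
                break
            best_score = scores[j_best]
            selected.append(j_best)
        return selected

    rng = np.random.default_rng(2)
    X = rng.normal(size=(100, 8))
    y = 2.0 * X[:, 1] - 1.5 * X[:, 4] + 0.1 * rng.normal(size=100)
    print("selected columns:", forward_press_selection(X, y))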

Relevance: 30.00%

Abstract:

This paper identifies the major challenges in the area of pattern formation. The work is also motivated by the need for a single framework to surmount these challenges. A framework based on the control of macroscopic parameters is proposed. The issue of transformation of patterns is specifically considered. A definition of transformation is provided, together with four special cases, namely elementary and geometrical transformations obtained by repositioning all or some of the robots in the pattern. Two feasible tools for pattern transformation are introduced: a macroscopic parameter method, and a mathematical tool, the Moebius transformation, also known as the linear fractional transformation. The realization of the unifying framework, considering planning and communication, is reported.
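
A toy illustration of the second tool named above: a Moebius (linear fractional) transformation z -> (a z + b) / (c z + d) applied to robot positions encoded as complex numbers. The square pattern and the coefficient values are arbitrary examples, not the paper's transformations.

    # Applying a Moebius transformation to a toy robot pattern.
    import numpy as np

    def moebius(z: np.ndarray, a: complex, b: complex, c: complex, d: complex) -> np.ndarray:
        if np.isclose(a * d - b * c, 0):
            raise ValueError("degenerate transformation: ad - bc must be non-zero")
        return (a * z + b) / (c * z + d)

    # Four robots at the corners of a square, as complex positions.
    pattern = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j])

    # a = d = 1, b = 0, c = 1 gives z / (z + 1); a pure rotation would be
    # a = exp(i*t), b = c = 0, d = 1.
    print(np.round(moebius(pattern, a=1, b=0, c=1, d=1), 3))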

Relevance: 30.00%

Abstract:

This paper presents a parallel Two-Pass Hexagonal (TPA) algorithm for motion estimation, constituted by the Linear Hashtable Motion Estimation Algorithm (LHMEA) and Hexagonal Search (HEXBS). In the TPA, motion vectors (MVs) are generated by the first-pass LHMEA and are used as predictors for the second-pass HEXBS motion estimation, which only searches a small number of macroblocks (MBs). We introduce hashtables into video processing and complete a parallel implementation. We propose and evaluate parallel implementations of the LHMEA of the TPA on clusters of workstations for real-time video compression. We discuss how parallel video coding on load-balanced multiprocessor systems can help, especially with motion estimation. The effect of load balancing on improved performance is discussed. The performance of the algorithm is evaluated using standard video sequences, and the results are compared to current algorithms.
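
The sketch below implements only the HEXBS half of the scheme in a deliberately compact form: a large-hexagon search that walks towards the minimum sum of absolute differences, followed by the small-pattern refinement. The LHMEA predictor is stubbed as (0, 0), the frames are smooth synthetic arrays, and the block size and search details are simplifications, so this is an illustration of hexagon-based search rather than the paper's parallel implementation.

    # Minimal hexagon-based block search on synthetic frames.
    import numpy as np

    LARGE_HEX = [(0, 0), (2, 0), (-2, 0), (1, 2), (-1, 2), (1, -2), (-1, -2)]
    SMALL_PAT = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]

    def sad(ref, cur, y, x, dy, dx, bs):
        """Sum of absolute differences between the current block and a displaced reference block."""
        ry, rx = y + dy, x + dx
        if ry < 0 or rx < 0 or ry + bs > ref.shape[0] or rx + bs > ref.shape[1]:
            return np.inf
        return float(np.abs(cur[y:y+bs, x:x+bs] - ref[ry:ry+bs, rx:rx+bs]).sum())

    def hexbs(ref, cur, y, x, bs=16, predictor=(0, 0)):
        """Hexagon-based search for one block's motion vector, starting from a predictor."""
        best = predictor
        while True:
            cands = [(best[0] + dy, best[1] + dx) for dy, dx in LARGE_HEX]
            new_best = min(cands, key=lambda v: sad(ref, cur, y, x, v[0], v[1], bs))
            if new_best == best:              # large-hexagon centre wins: final small refinement
                cands = [(best[0] + dy, best[1] + dx) for dy, dx in SMALL_PAT]
                return min(cands, key=lambda v: sad(ref, cur, y, x, v[0], v[1], bs))
            best = new_best

    # Smooth synthetic frames: the current frame is the reference shifted by (2, 3),
    # so the true motion vector of an interior block is (-2, -3).
    yy, xx = np.mgrid[0:64, 0:64]
    ref = 128 + 60 * np.sin(yy / 7.0) + 60 * np.cos(xx / 9.0)
    cur = np.roll(ref, shift=(2, 3), axis=(0, 1))
    print("estimated MV for the block at (16, 16):", hexbs(ref, cur, 16, 16))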

Relevance: 30.00%

Abstract:

This paper presents an efficient construction algorithm for obtaining sparse kernel density estimates based on a regression approach that directly optimizes model generalization capability. Computational efficiency of the density construction is ensured using an orthogonal forward regression, and the algorithm incrementally minimizes the leave-one-out test score. A local regularization method is incorporated naturally into the density construction process to further enforce sparsity. An additional advantage of the proposed algorithm is that it is fully automatic and the user is not required to specify any criterion to terminate the density construction procedure. This is in contrast to an existing state-of-the-art kernel density estimation method using the support vector machine (SVM), where the user is required to specify some critical algorithm parameter. Several examples are included to demonstrate the ability of the proposed algorithm to effectively construct a very sparse kernel density estimate with accuracy comparable to that of the full-sample optimized Parzen window density estimate. Our experimental results also demonstrate that the proposed algorithm compares favorably with the SVM method, in terms of both test accuracy and sparsity, for constructing kernel density estimates.
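
The sparse construction itself is not reproduced here; the snippet below only sketches its benchmark, a full-sample Parzen window (Gaussian kernel) density estimate, with Silverman's rule of thumb standing in for whatever bandwidth optimisation the paper uses and with synthetic data.

    # Full-sample Parzen window estimate on synthetic one-dimensional data.
    import numpy as np

    def parzen(x_eval: np.ndarray, samples: np.ndarray, h: float) -> np.ndarray:
        """Average of Gaussian kernels centred on every sample point."""
        diffs = (x_eval[:, None] - samples[None, :]) / h
        return np.exp(-0.5 * diffs**2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

    rng = np.random.default_rng(4)
    samples = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(1, 1.0, 700)])
    h = 1.06 * samples.std() * len(samples) ** (-1 / 5)   # Silverman's rule of thumb

    grid = np.linspace(-5.0, 5.0, 201)
    density = parzen(grid, samples, h)
    print("approximate integral:", density.sum() * (grid[1] - grid[0]))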

Relevance: 30.00%

Abstract:

In this paper we deal with the performance analysis of Monte Carlo algorithms for large linear algebra problems. We consider the applicability and efficiency of Markov chain Monte Carlo for large problems, i.e., problems involving matrices with a number of non-zero elements ranging between one million and one billion. We concentrate on the analysis of the Almost Optimal Monte Carlo (MAO) algorithm for evaluating bilinear forms of matrix powers, since matrix powers applied to a vector span the so-called Krylov subspaces. Results are presented comparing the performance of the robust and non-robust Monte Carlo algorithms. The algorithms are tested on large dense matrices as well as on large unstructured sparse matrices.
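
As a sketch of the kind of random-walk estimator being analysed, the code below estimates the bilinear form v^T A^k h by sampling walks on the matrix indices, with transition probabilities proportional to |a_ij|. It is a naive serial illustration on a small non-negative synthetic matrix, not the MAO implementation or the robust variant studied in the paper.

    # Naive random-walk Monte Carlo estimate of v^T A^k h on a small synthetic matrix.
    import numpy as np

    def mc_bilinear_form(A, v, h, k, n_walks=20000, seed=5):
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        p0 = np.abs(v) / np.abs(v).sum()                      # initial-index probabilities
        P = np.abs(A) / np.abs(A).sum(axis=1, keepdims=True)  # row-wise transition probabilities
        total = 0.0
        for _ in range(n_walks):
            i = rng.choice(n, p=p0)
            w = v[i] / p0[i]
            for _ in range(k):
                j = rng.choice(n, p=P[i])
                w *= A[i, j] / P[i, j]                        # importance weight along the walk
                i = j
            total += w * h[i]
        return total / n_walks

    rng = np.random.default_rng(5)
    A = rng.random((20, 20)) / 20.0      # non-negative entries keep this naive estimator's variance low
    v, h, k = rng.random(20), rng.random(20), 3
    print("Monte Carlo:", mc_bilinear_form(A, v, h, k))
    print("exact:      ", v @ np.linalg.matrix_power(A, k) @ h)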

Relevance: 30.00%

Abstract:

An efficient model identification algorithm for a large class of linear-in-the-parameters models is introduced that simultaneously optimises the model approximation ability, sparsity and robustness. The derived model parameters in each forward regression step are initially estimated via orthogonal least squares (OLS) and then tuned with a new gradient-descent learning algorithm based on basis pursuit, which minimises the l1 norm of the parameter estimate vector. The model subset selection cost function includes a D-optimality design criterion that maximises the determinant of the design matrix of the subset, to ensure model robustness and to enable the model selection procedure to terminate automatically at a sparse model. The proposed approach is based on the forward OLS algorithm using the modified Gram-Schmidt procedure. Both the parameter tuning procedure, based on basis pursuit, and the model selection criterion, based on the D-optimality that is effective in ensuring model robustness, are integrated with the forward regression. As a consequence, the inherent computational efficiency associated with the conventional forward OLS approach is maintained in the proposed algorithm. Examples demonstrate the effectiveness of the new approach.
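
The D-optimality term is the part that lets the procedure stop at a sparse model: in a forward Gram-Schmidt regression, adding regressor k increases the log determinant of the subset design matrix by log(w_k' w_k), where w_k is the orthogonalised candidate, so selection can stop once no candidate adds enough determinant. The sketch below shows only that mechanism, on synthetic data with one nearly collinear column; the threshold, the data and the omission of the basis-pursuit parameter tuning are all simplifying assumptions.

    # Forward selection driven by the determinant (D-optimality) gain of each candidate.
    import numpy as np

    def forward_d_optimal(candidates: np.ndarray, threshold: float) -> list:
        n, m = candidates.shape
        W = candidates.astype(float).copy()       # columns are orthogonalised in place
        selected = []
        while len(selected) < m:
            energies = np.array([-np.inf if j in selected else W[:, j] @ W[:, j]
                                 for j in range(m)])
            j = int(np.argmax(energies))
            if energies[j] <= threshold:          # determinant gain too small: stop at a sparse model
                break
            q = W[:, j] / np.linalg.norm(W[:, j])
            selected.append(j)
            W -= np.outer(q, q @ W)               # remove the new direction from every candidate
        return selected

    rng = np.random.default_rng(6)
    X = rng.normal(size=(200, 10))
    X[:, 7] = X[:, 1] + 0.01 * rng.normal(size=200)   # nearly collinear column: tiny determinant gain
    print("selected regressors:", forward_d_optimal(X, threshold=5.0))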

Relevance: 30.00%

Abstract:

Comparison-based diagnosis is an effective approach to system-level fault diagnosis. Under the Maeng-Malek comparison model (MM* model), Sengupta and Dahbura proposed an O(N^5) diagnosis algorithm for general diagnosable systems with N nodes. Thanks to its lower diameter and better graph-embedding capability compared with a hypercube of the same size, the crossed cube has been a promising candidate for interconnection networks. In this paper, we propose a fault diagnosis algorithm tailored for crossed cube connected multicomputer systems under the MM* model. By introducing appropriate data structures, this algorithm runs in O(N (log2 N)^2) time, which is linear in the size of the input. As a result, this algorithm is significantly superior to the Sengupta-Dahbura algorithm when applied to crossed cube systems. © 2004 Elsevier B.V. All rights reserved.
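
For readers unfamiliar with the MM* model, the snippet below generates a comparison syndrome for a small network: every node u compares each pair of its neighbours (v, w); the outcome is 0 when u, v and w are all fault-free, 1 when the comparator u is fault-free but v or w is faulty, and arbitrary when u itself is faulty. An ordinary 3-dimensional hypercube stands in for the crossed cube, and no diagnosis algorithm is attempted, so this is only an illustration of the model's inputs.

    # Generating an MM*-style comparison syndrome on a small hypercube.
    from itertools import combinations
    import random

    def hypercube_neighbours(u, dim):
        return [u ^ (1 << b) for b in range(dim)]

    def mm_star_syndrome(dim, faulty, rng):
        syndrome = {}
        for u in range(2 ** dim):
            for v, w in combinations(hypercube_neighbours(u, dim), 2):
                if u in faulty:
                    outcome = rng.randint(0, 1)          # faulty comparator: unreliable result
                else:
                    outcome = int(v in faulty or w in faulty)
                syndrome[(u, v, w)] = outcome
        return syndrome

    rng = random.Random(7)
    syndrome = mm_star_syndrome(dim=3, faulty={3, 6}, rng=rng)
    disagreements = [c for c, r in syndrome.items() if r == 1]
    print(len(disagreements), "comparisons disagree, e.g.", disagreements[:4])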

Relevance: 30.00%

Abstract:

We describe and implement a fully discrete spectral method for the numerical solution of a class of non-linear, dispersive systems of Boussinesq type, modelling two-way propagation of long water waves of small amplitude in a channel. For three particular systems, we investigate properties of the numerically computed solutions; in particular we study the generation and interaction of approximate solitary waves.
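
The Boussinesq systems themselves are not reproduced here; the snippet below shows only the basic ingredient of a Fourier spectral discretisation on a periodic domain, namely differentiation by multiplying Fourier coefficients by ik, checked against an exact derivative. The test function and grid size are arbitrary choices.

    # Spectral differentiation on a periodic grid, the building block of the method.
    import numpy as np

    N, L = 128, 2 * np.pi                        # grid points, periodic domain length
    x = np.linspace(0.0, L, N, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers

    u = np.exp(np.sin(x))                        # smooth periodic test profile
    du_spectral = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    du_exact = np.cos(x) * np.exp(np.sin(x))

    print("max error of spectral derivative:", np.abs(du_spectral - du_exact).max())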