1000 results for "sensitivity matrix"


Relevance: 20.00%

Abstract:

In this paper we consider hybrid (fast stochastic approximation plus deterministic refinement) algorithms for Matrix Inversion (MI) and Solving Systems of Linear Algebraic Equations (SLAE). Monte Carlo methods are used for the stochastic approximation, since they are known to be very efficient at finding a quick rough approximation of an element or a row of the inverse matrix, or a component of the solution vector. We show how the stochastic approximation of the MI can be combined with a deterministic refinement procedure to obtain the MI to the required precision, and how the SLAE can then be solved using the MI. We employ a splitting A = D − C of a given non-singular matrix A, where D is a diagonally dominant matrix and C = D − A. In our algorithms for solving SLAE and MI, different choices of D can be considered in order to control the norm of the iteration matrix T = D⁻¹C of the resulting SLAE and to minimize the number of Markov chains required to reach a given precision. Further, we run the algorithms on a mini-Grid and investigate their efficiency as a function of granularity. Corresponding experimental results are presented.
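The two-stage idea can be sketched in a few lines of Python. The diagonal splitting, chain counts, and Newton-Schulz refinement below are illustrative assumptions, not the authors' exact scheme:

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_inverse_row(T, i, n_chains=2000, chain_len=30):
    """Monte Carlo estimate of row i of (I - T)^{-1} = sum_k T^k.

    Each Markov chain starts at state i; transitions are sampled with
    probability proportional to |T| and corrected by importance weights.
    A plain MC sketch, not the paper's refined sampling scheme; assumes
    no row of T is identically zero.
    """
    n = T.shape[0]
    row_sums = np.abs(T).sum(axis=1)
    P = np.abs(T) / row_sums[:, None]       # transition probabilities
    est = np.zeros(n)
    for _ in range(n_chains):
        state, w = i, 1.0
        est[state] += w                     # k = 0 term of the series
        for _ in range(chain_len):
            nxt = rng.choice(n, p=P[state])
            w *= T[state, nxt] / P[state, nxt]   # importance weight
            state = nxt
            est[state] += w                 # k-th term contribution
    return est / n_chains

def newton_schulz(A, X, iters=12):
    """Deterministic refinement of a rough inverse: X <- X(2I - AX)."""
    I = np.eye(A.shape[0])
    for _ in range(iters):
        X = X @ (2.0 * I - A @ X)
    return X

# Splitting A = D - C with D the diagonal part, so T = D^{-1}C = I - D^{-1}A.
A = np.array([[4.0, 1.0, 0.0, 1.0],
              [1.0, 5.0, 1.0, 0.0],
              [0.0, 1.0, 4.0, 1.0],
              [1.0, 0.0, 1.0, 5.0]])
D_inv = np.diag(1.0 / np.diag(A))
T = np.eye(4) - D_inv @ A
# Rows of (I - T)^{-1} give (D^{-1}A)^{-1} = A^{-1}D, hence the rough inverse:
M_rough = np.array([mc_inverse_row(T, i) for i in range(4)])
X_rough = M_rough @ D_inv                   # stochastic approximation of A^{-1}
X = newton_schulz(A, X_rough)               # deterministic refinement
```

The refinement converges quadratically once the rough inverse is close enough that the residual norm ||I − AX₀|| drops below one, which is exactly what the fast stochastic stage is meant to provide cheaply.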

Relevance: 20.00%

Abstract:

Many scientific and engineering applications involve inverting large matrices or solving systems of linear algebraic equations. Solving these problems with proven direct methods can take a very long time, since their cost grows with the size of the matrix. The computational complexity of stochastic Monte Carlo methods, by contrast, depends only on the number of Markov chains and the length of those chains. The computing power needed by these inherently parallel Monte Carlo methods can be supplied very efficiently by distributed computing technologies such as Grid computing. In this paper we show how a load-balanced Monte Carlo method for computing the inverse of a dense matrix can be constructed, show how the method can be implemented on the Grid, and demonstrate how efficiently the method scales on multiple processors. (C) 2007 Elsevier B.V. All rights reserved.
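Load balancing such a method is straightforward because the Markov chains are mutually independent. A minimal sketch of an even chain partition across workers (a hypothetical helper, not the paper's actual Grid scheduler):

```python
def balance_chains(total_chains, n_workers):
    """Evenly split a workload of independent Markov chains across workers.

    Because the chains are independent, an even split of the chain count
    is essentially all the coordination a distributed deployment needs:
    each worker runs its quota and the partial sums are averaged at the end.
    """
    base, extra = divmod(total_chains, n_workers)
    # The first `extra` workers take one extra chain each.
    return [base + 1 if w < extra else base for w in range(n_workers)]

chunks = balance_chains(1000, 7)   # chain quota per worker
```

With quotas differing by at most one chain, no worker becomes a straggler, which is what makes the cost depend on chain count and length rather than matrix size.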

Relevance: 20.00%

Abstract:

In this paper we introduce a new algorithm, based on the successful work of Fathi and Alexandrov on hybrid Monte Carlo algorithms for matrix inversion and solving systems of linear algebraic equations. This algorithm consists of two parts: approximate inversion by Monte Carlo, and iterative refinement using a deterministic method. We present a parallel hybrid Monte Carlo algorithm that uses Monte Carlo to generate an approximate inverse and then improves the accuracy of that inverse by iterative refinement. The new algorithm is applied efficiently to sparse non-singular matrices. When solving a system of linear algebraic equations Bx = b, the inverse matrix is used to compute the solution vector x = B⁻¹b. We present results that show the efficiency of the parallel hybrid Monte Carlo algorithm in the case of sparse matrices.
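The refinement stage can also be applied directly to the solution vector by residual correction. In this sketch the Jacobi-style rough inverse X = D⁻¹ is a stand-in assumption for the Monte Carlo approximate inverse:

```python
import numpy as np

def refine_solution(B, b, X, iters=50):
    """Residual-correction refinement of x ~= B^{-1}b from a rough inverse X.

    Iterates x <- x + X(b - Bx), which converges whenever ||I - XB|| < 1.
    A generic sketch of the deterministic refinement stage, not the
    authors' parallel implementation.
    """
    x = X @ b                        # initial guess from the rough inverse
    for _ in range(iters):
        x = x + X @ (b - B @ x)      # correct by the weighted residual
    return x

# Diagonally dominant B; the Jacobi inverse X = D^{-1} stands in for a
# Monte Carlo rough inverse (an assumption for this sketch).
B = np.array([[5.0, 1.0, 0.0],
              [1.0, 6.0, 2.0],
              [0.0, 2.0, 7.0]])
b = np.array([1.0, 2.0, 3.0])
X = np.diag(1.0 / np.diag(B))
x = refine_solution(B, b, X)
```

The better the rough inverse, the smaller ||I − XB|| and the fewer correction sweeps are needed, which is the trade-off the hybrid algorithm exploits.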

Relevance: 20.00%

Abstract:

In this paper we present an error analysis for a Monte Carlo algorithm for evaluating bilinear forms of matrix powers. An almost optimal Monte Carlo (MAO) algorithm for solving this problem is formulated. Results on the structure of the probability error are presented, and the construction of robust and interpolation Monte Carlo algorithms is discussed. Results are presented comparing the performance of the Monte Carlo algorithm with that of a corresponding deterministic algorithm. The two algorithms are tested on a well balanced matrix, and the effects of perturbing this matrix, by small and large amounts, are studied.
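A minimal sketch of a Monte Carlo estimator for the bilinear form vᵀAᵏh, with MAO-style transition probabilities proportional to |a_ij|; the sampling details here are assumptions for illustration, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(7)

def mc_bilinear_power(v, A, h, k, n_samples=200):
    """Monte Carlo estimate of the bilinear form v^T A^k h.

    Chains start from the density p_i ~ |v_i| and step with transition
    probabilities p_ij ~ |a_ij|; each step is corrected by an importance
    weight so the estimator is unbiased. Assumes no zero rows in A.
    """
    n = len(v)
    p0 = np.abs(v) / np.abs(v).sum()
    row_sums = np.abs(A).sum(axis=1)
    P = np.abs(A) / row_sums[:, None]
    total = 0.0
    for _ in range(n_samples):
        i = rng.choice(n, p=p0)
        w = v[i] / p0[i]                 # = sign(v_i) * ||v||_1
        for _ in range(k):
            j = rng.choice(n, p=P[i])
            w *= A[i, j] / P[i, j]       # importance weight for the step
            i = j
        total += w * h[i]
    return total / n_samples

# Sanity-check configuration: constant row sums and constant h make the
# importance weights deterministic, so the estimator has zero variance.
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
v = np.array([1.0, 3.0])
h = np.array([2.0, 2.0])
est = mc_bilinear_power(v, A, h, k=3, n_samples=200)
exact = float(v @ np.linalg.matrix_power(A, 3) @ h)
```

The zero-variance configuration above is a convenient correctness check; on a general matrix the probability error decays as O(1/√N) in the number of chains N.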

Relevance: 20.00%

Abstract:

This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-based matrix subspace, to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by the derivation of an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, in which it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse of dimensionality. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
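The A-optimality rule-selection step can be sketched as follows; the regression matrix and membership values below are hypothetical inputs, not data from the paper:

```python
import numpy as np

def a_optimality(phi, membership):
    """A-optimality score of a fuzzy rule's weighted regression matrix.

    phi        : (N, m) input regression matrix over the training data
    membership : (N,) fuzzy membership of each sample in the rule
    Returns trace((M^T M)^{-1}) for M = diag(membership) @ phi; smaller
    values indicate a better-conditioned, more identifiable rule.
    A sketch of the criterion described above, with hypothetical inputs.
    """
    M = membership[:, None] * phi            # weighted regression matrix
    return float(np.trace(np.linalg.inv(M.T @ M)))

# Hypothetical regression matrix (bias + one input) and rule memberships.
phi = np.array([[1.0, 0.0],
                [1.0, 1.0],
                [1.0, 2.0],
                [1.0, 3.0]])
memb = np.array([0.9, 0.7, 0.4, 0.2])
score = a_optimality(phi, memb)
```

Ranking candidate rules by this score (lowest first) gives the initial rule-base; note that uniformly doubling the memberships scales M by 2 and the score by exactly 1/4, so the criterion rewards rules with strong coverage of the data.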

Relevance: 20.00%

Abstract:

Ashby was a keen observer of the world around him, as his technological and psychiatric developments attest. Over the years, he drew numerous philosophical conclusions on the nature of human intelligence and the operation of the brain, on artificial intelligence and the thinking ability of computers, and even on science in general. In this paper, the quite profound philosophy espoused by Ashby is considered as a whole, in particular in terms of its relationship with the world as it stands now, and even in terms of scientific predictions of where things might lead. A meaningful comparison is made between Ashby's comments and the science fiction concept of 'The Matrix', and serious consideration is given to how far Ashby's ideas lay open the possibility of the matrix becoming a real-world eventuality.

Relevance: 20.00%

Abstract:

In this paper we consider bilinear forms of matrix polynomials and show that these polynomials can be used to construct solutions for the problems of solving systems of linear algebraic equations, matrix inversion, and finding extremal eigenvalues. An almost optimal Monte Carlo (MAO) algorithm for computing bilinear forms of matrix polynomials is presented. Results for the computational cost of a balanced algorithm for computing the bilinear form of a matrix power are presented, i.e., an algorithm for which the probability and systematic errors are of the same order, and this cost is compared with that of a corresponding deterministic method.
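For comparison, a deterministic method can evaluate the bilinear form of a matrix polynomial with k matrix-vector products via Horner's rule, never forming a matrix power explicitly. A sketch with hypothetical coefficients:

```python
import numpy as np

def bilinear_poly(v, A, h, coeffs):
    """Deterministic evaluation of v^T p(A) h for the matrix polynomial
    p(A) = c0 I + c1 A + ... + ck A^k, coeffs = [c0, ..., ck].

    Horner's rule on vectors: y is updated as y <- A y + c_i h, so only
    k matrix-vector products are needed and no matrix power is formed.
    """
    y = coeffs[-1] * h
    for c in reversed(coeffs[:-1]):
        y = A @ y + c * h
    return float(v @ y)

# Hypothetical example: p(A) = I + 2A + A^2 on a 2x2 matrix.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
val = bilinear_poly(np.array([1.0, 1.0]), A,
                    np.array([1.0, 0.0]), [1.0, 2.0, 1.0])
```

Its cost is O(k n²) for a dense n × n matrix, which is the deterministic baseline against which the Monte Carlo chain-count/chain-length cost is balanced.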

Relevance: 20.00%

Abstract:

The production and release of dissolved organic carbon (DOC) from peat soils is thought to be sensitive to changes in climate, specifically changes in temperature and rainfall. However, little is known about the actual rates of net DOC production in response to temperature and water table draw-down, particularly in comparison to carbon dioxide (CO2) fluxes. To explore these relationships, we carried out a laboratory experiment on intact peat soil cores under controlled temperature and water table conditions to determine the impact and interaction of each of these climatic factors on net DOC production. We found a significant interaction (P < 0.001) between temperature, water table draw-down and net DOC production across the whole soil core (0 to −55 cm depth). This corresponded to an increase in the Q10 (i.e. the factor by which the rate of net DOC production increases over a 10 °C rise) from 1.84 under high water tables and anaerobic conditions to 3.53 under water table draw-down and aerobic conditions between −10 and −40 cm depth. However, increases in net DOC production were only seen after water tables recovered to the surface, as secondary changes in soil water chemistry driven by sulphur redox reactions decreased DOC solubility, and therefore DOC concentrations, during periods of water table draw-down. Furthermore, net microbial consumption of DOC was also apparent at −1 cm depth and was an additional cause of declining DOC concentrations during dry periods. Therefore, although increased temperature and decreased rainfall could have a significant effect on net DOC release from peatlands, these climatic effects could be masked by other factors controlling the biological consumption of DOC, in addition to soil water chemistry and DOC solubility. These findings highlight both the sensitivity of DOC release from ombrotrophic peat to episodic changes in water table draw-down, and the need to disentangle complex and interacting controls on DOC dynamics to fully understand the impact of environmental change on this system.
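Q10 values like those quoted above follow the standard temperature-coefficient formula; a generic sketch (the rates in the example are hypothetical, not the reported data):

```python
def q10(rate_t1, rate_t2, t1, t2):
    """Temperature coefficient Q10: the factor by which a process rate
    increases over a 10 degree C rise, computed from rates measured at two
    temperatures t1 < t2 (degrees C). Generic formula only; not fitted to
    the DOC data reported above.
    """
    return (rate_t2 / rate_t1) ** (10.0 / (t2 - t1))

# Hypothetical rates: quadrupling over a 10 degree C interval gives Q10 = 4.
example = q10(1.0, 4.0, 5.0, 15.0)
```

The exponent rescales the observed interval to 10 °C, so measurements need not be taken exactly 10 °C apart.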

Relevance: 20.00%

Abstract:

Background: Insulin sensitivity (Si) is improved by weight loss and exercise, but the effects of the replacement of saturated fatty acids (SFAs) with monounsaturated fatty acids (MUFAs) or carbohydrates of high glycemic index (HGI) or low glycemic index (LGI) are uncertain. Objective: We conducted a dietary intervention trial to study these effects in participants at risk of developing metabolic syndrome. Design: We conducted a 5-center, parallel design, randomized controlled trial [RISCK (Reading, Imperial, Surrey, Cambridge, and Kings)]. The primary and secondary outcomes were changes in Si (measured by using an intravenous glucose tolerance test) and cardiovascular risk factors. Measurements were made after 4 wk of a high-SFA and HGI (HS/HGI) diet and after a 24-wk intervention with HS/HGI (reference), high-MUFA and HGI (HM/HGI), HM and LGI (HM/LGI), low-fat and HGI (LF/HGI), and LF and LGI (LF/LGI) diets. Results: We analyzed data for 548 of 720 participants who were randomly assigned to treatment. The median Si was 2.7 × 10−4 mL · μU−1 · min−1 (interquartile range: 2.0, 4.2 × 10−4 mL · μU−1 · min−1), and unadjusted mean percentage changes (95% CIs) after 24 wk of treatment (P = 0.13) were as follows: for the HS/HGI group, −4% (−12.7%, 5.3%); for the HM/HGI group, 2.1% (−5.8%, 10.7%); for the HM/LGI group, −3.5% (−10.6%, 4.3%); for the LF/HGI group, −8.6% (−15.4%, −1.1%); and for the LF/LGI group, 9.9% (2.4%, 18.0%). Total cholesterol (TC), LDL cholesterol, and apolipoprotein B concentrations decreased with SFA reduction. Decreases in TC and LDL-cholesterol concentrations were greater with LGI. Fat reduction lowered HDL cholesterol and apolipoprotein A1 and B concentrations. Conclusions: This study did not support the hypothesis that isoenergetic replacement of SFAs with MUFAs or carbohydrates has a favorable effect on Si. Lowering GI enhanced the reductions in TC and LDL-cholesterol concentrations, with tentative evidence of improvements in Si in the LF-treatment group. This trial was registered at clinicaltrials.gov as ISRCTN29111298.

Relevance: 20.00%

Abstract:

Substituted amphetamines such as p-chloroamphetamine and the abused drug methylenedioxymethamphetamine cause selective destruction of serotonin axons in rats, by unknown mechanisms. Since some serotonin neurones also express neuronal nitric oxide synthase, which has been implicated in neurotoxicity, the present study was undertaken to determine whether nitric oxide synthase-expressing serotonin neurones are selectively vulnerable to methylenedioxymethamphetamine or p-chloroamphetamine. Using double-labeling immunocytochemistry and double in situ hybridization for nitric oxide synthase and the serotonin transporter, it was confirmed that about two thirds of serotonergic cell bodies in the dorsal raphe nucleus expressed nitric oxide synthase; however, few if any serotonin transporter-immunoreactive axons in the striatum expressed nitric oxide synthase at detectable levels. Methylenedioxymethamphetamine (30 mg/kg) or p-chloroamphetamine (2 x 10 mg/kg) was administered to Sprague-Dawley rats, and 7 days after drug administration there were modest decreases in the levels of serotonin transporter protein in frontal cortex and striatum, as measured by Western blotting, even though axonal loss could be clearly seen by immunostaining. p-Chloroamphetamine or methylenedioxymethamphetamine administration did not alter the level of nitric oxide synthase in striatum or frontal cortex, as determined by Western blotting. Analysis of serotonin neuronal cell bodies 7 days after p-chloroamphetamine treatment revealed a net down-regulation of serotonin transporter mRNA levels and a profound change in expression of nitric oxide synthase, with 33% of serotonin transporter mRNA-positive cells containing nitric oxide synthase mRNA, compared with 65% in control animals. Altogether, these results support the hypothesis that serotonin neurones that express nitric oxide synthase are the most vulnerable to substituted amphetamine toxicity, supporting the concept that the selective vulnerability of serotonin neurones has a molecular basis.

Relevance: 20.00%

Abstract:

Background: Excessive energy intake and obesity lead to the metabolic syndrome (MetS). Dietary saturated fatty acids (SFAs) may be particularly detrimental to insulin sensitivity (SI) and to other components of the MetS. Objective: This study determined the relative efficacy of reducing dietary SFA, by isoenergetic alteration of the quality and quantity of dietary fat, on risk factors associated with MetS. Design: A free-living, single-blinded dietary intervention study. Subjects and Methods: MetS subjects (n=417) from eight European countries completed the randomized dietary intervention study with four isoenergetic diets distinct in fat quantity and quality: high-SFA; high-monounsaturated fatty acid; and two low-fat, high-complex carbohydrate (LFHCC) diets, supplemented with long-chain n-3 polyunsaturated fatty acids (LC n-3 PUFAs; 1.2 g per day) or placebo, for 12 weeks. SI estimated from an intravenous glucose tolerance test (IVGTT) was the primary outcome measure. Lipid and inflammatory markers associated with MetS were also determined. Results: In weight-stable subjects, reducing dietary SFA intake had no effect on SI, total or low-density lipoprotein cholesterol concentrations, inflammation or blood pressure in the entire cohort. The LFHCC n-3 PUFA diet reduced plasma triacylglycerol (TAG) and non-esterified fatty acid concentrations (P<0.01), particularly in men. Conclusion: There was no effect of reducing SFA on SI in weight-stable obese MetS subjects. LC n-3 PUFA supplementation, in association with a low-fat diet, improved TAG-related MetS risk profiles.

Relevance: 20.00%

Abstract:

We present a novel approach to calculating Low-Energy Electron Diffraction (LEED) intensities for ordered molecular adsorbates. First, the intra-molecular multiple scattering is computed to obtain a non-diagonal molecular T-matrix. This is then used to represent the entire molecule as a single scattering object in a conventional LEED calculation, where the Layer Doubling technique is applied to assemble the different layers, including the molecular ones. A detailed comparison with conventional layer-type LEED calculations is provided to ascertain the accuracy of this scheme of calculation. Advantages of this scheme for problems involving ordered arrays of molecules adsorbed on surfaces are discussed.
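The Layer Doubling step can be illustrated in its simplest form: two identical symmetric layers, each described by a reflection matrix R and a transmission matrix T, are combined into one unit by summing the geometric series of back-and-forth scattering between them. This sketch omits the separate +/− propagation directions and the interlayer plane-wave propagators that a full LEED code tracks:

```python
import numpy as np

def double_layer(R, T):
    """Combine two identical symmetric layers (reflection R, transmission T).

    The factor (I - RR)^{-1} sums the infinite series of multiple
    reflections trapped between the two layers. Simplified sketch of the
    Layer Doubling idea; a production LEED calculation distinguishes
    forward/backward matrices and includes propagation phase factors.
    """
    n = R.shape[0]
    G = np.linalg.solve(np.eye(n) - R @ R, np.eye(n))  # (I - RR)^{-1}
    T2 = T @ G @ T             # transmit, rattle between layers, transmit
    R2 = R + T @ G @ R @ T     # direct reflection + internal bounce path
    return R2, T2

# 1x1 sanity check: a lossless symmetric scatterer (|r|^2 + |t|^2 = 1 with
# the unitarity phase condition) must remain lossless after doubling.
r = np.array([[0.6j]])
t = np.array([[0.8 + 0.0j]])
R2, T2 = double_layer(r, t)
```

Repeating the doubling log₂(N) times builds an N-layer slab, which is what makes the technique efficient for thick surfaces and, as in the approach above, lets a whole molecule enter the stack as a single scattering object.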

Relevance: 20.00%

Abstract:

Generalizing the notion of an eigenvector, invariant subspaces are frequently used in the context of linear eigenvalue problems, leading to conceptually elegant and numerically stable formulations in applications that require the computation of several eigenvalues and/or eigenvectors. Similar benefits can be expected for polynomial eigenvalue problems, for which the concept of an invariant subspace needs to be replaced by the concept of an invariant pair. Little has been known so far about numerical aspects of such invariant pairs. The aim of this paper is to fill this gap. The behavior of invariant pairs under perturbations of the matrix polynomial is studied and a first-order perturbation expansion is given. From a computational point of view, we investigate how to best extract invariant pairs from a linearization of the matrix polynomial. Moreover, we describe efficient refinement procedures directly based on the polynomial formulation. Numerical experiments with matrix polynomials from a number of applications demonstrate the effectiveness of our extraction and refinement procedures.
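The notion of an invariant pair can be made concrete for a quadratic matrix polynomial: (X, S) is an invariant pair of P(λ) = λ²M + λC + K when M X S² + C X S + K X = 0, generalizing eigenpairs (x, λ) just as invariant subspaces generalize eigenvectors. A sketch that extracts such a pair from a companion linearization and checks the residual (the matrices are hypothetical examples):

```python
import numpy as np

# Quadratic matrix polynomial P(lambda) = lambda^2 M + lambda C + K
# with small hypothetical coefficient matrices.
n = 2
M = np.eye(n)
C = np.array([[0.4, 0.1],
              [0.0, 0.3]])
K = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# First companion linearization: A_lin z = lambda B z with z = [x; lambda x],
#   A_lin = [[0, I], [-K, -C]],  B = [[I, 0], [0, M]].
A_lin = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-K, -C]])
B = np.block([[np.eye(n), np.zeros((n, n))],
              [np.zeros((n, n)), M]])
evals, evecs = np.linalg.eig(np.linalg.solve(B, A_lin))

# Extract an invariant pair of size k from k eigenpairs of the linearization:
# X is the top block of the eigenvectors, S the (here diagonal) quotient.
k = 2
X = evecs[:n, :k]
S = np.diag(evals[:k])
residual = M @ X @ S @ S + C @ X @ S + K @ X   # should vanish
```

In practice S need not be diagonal (e.g. for clustered or complex-conjugate eigenvalues a real quasi-triangular S is preferred), and the refinement procedures discussed above work on the residual of this polynomial formulation rather than on the linearization.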