999 results for Matrix Decompositions
Abstract:
In this work, IR thermography is used as a non-destructive tool for impact damage characterisation on thermoplastic E-glass/polypropylene composites for automotive applications. The aim of this experimentation was to compare impact resistance and to characterise damage patterns of different laminates, in order to provide indications for their use in components. Two E-glass/polypropylene composites, commingled Twintex® (with three different weave structures: directional, balanced and 3-D) and random reinforced GMT, were characterised in particular. Directional and balanced Twintex were also coupled in a number of hybrid configurations with GMT to evaluate the possible use of GMT/Twintex hybrids in high-energy absorption components. The laminates were impacted using a falling weight tower, with impact energies ranging from 15 J to penetration. Using IR thermography during cooling down following a long pulse (3 s), impact-damaged areas were characterised and the influence of weave structure on damage patterns was studied. IR thermography offered good accuracy for laminates with thickness not exceeding 3.5 mm: this appears to be a limit for the direct use of this method on components, where more refined signal treatment would probably be needed for impact damage characterisation.
Abstract:
A combined mathematical model for predicting heat penetration and microbial inactivation in a solid body heated by conduction was tested experimentally by inoculating agar cylinders with Salmonella typhimurium or Enterococcus faecium and heating in a water bath. Regions of growth where bacteria had survived after heating were measured by image analysis and compared with model predictions. Visualisation of the regions of growth was improved by incorporating chromogenic metabolic indicators into the agar. Preliminary tests established that the model performed satisfactorily with both test organisms and with cylinders of different diameter. The model was then used in simulation studies in which the parameters D, z, inoculum size, cylinder diameter and heating temperature were systematically varied. These simulations showed that the biological variables D, z and inoculum size had a relatively small effect on the time needed to eliminate bacteria at the cylinder axis in comparison with the physical variables heating temperature and cylinder diameter, which had a much greater relative effect. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
Vitamin E absorption requires the presence of fat; however, limited information exists on the influence of fat quantity on optimal absorption. In the present study we compared the absorption of stable-isotope-labelled vitamin E following meals of varying fat content and source. In a randomised four-way cross-over study, eight healthy individuals consumed a capsule containing 150 mg H-2-labelled RRR-alpha-tocopheryl acetate with a test meal of toast with butter (17.5 g fat), cereal with full-fat milk (17.5 g fat), cereal with semi-skimmed milk (2.7 g fat) and water (0 g fat). Blood was taken at 0, 0.5, 1, 1.5, 2, 3, 6 and 9 h following ingestion, chylomicrons were isolated, and H-2-labelled alpha-tocopherol was analysed in the chylomicron and plasma samples. There was a significant time (P<0.001) and treatment effect (P<0.001) in H-2-labelled alpha-tocopherol concentration in both chylomicrons and plasma between the test meals. H-2-labelled alpha-tocopherol concentration was significantly greater with the higher-fat toast and butter meal compared with the low-fat cereal meal or water (P<0.001), and there was a trend towards greater concentration compared with the high-fat cereal meal (P=0.065). There was significantly greater H-2-labelled alpha-tocopherol concentration with the high-fat cereal meal compared with the low-fat cereal meal (P<0.05). The H-2-labelled alpha-tocopherol concentration following either the low-fat cereal meal or water was low. These results demonstrate that both the amount of fat and the food matrix influence vitamin E absorption. These factors should be considered by consumers and for future vitamin E intervention studies.
Abstract:
If soy isoflavones are to be effective in preventing or treating a range of diseases, they must be bioavailable, and thus the factors which may alter their bioavailability need to be elucidated. However, to date there is little information on whether the pharmacokinetic profile following ingestion of a defined dose is influenced by the food matrix in which the isoflavone is given or by the processing method used. Three different foods (cookies, chocolate bars and juice) were prepared, and their isoflavone contents were determined. We compared the urinary and serum concentrations of daidzein, genistein and equol following the consumption of these three foods, each of which contained 50 mg of isoflavones. After the technological processing of the different test foods, differences in aglycone levels were observed. The plasma levels of the isoflavone precursor daidzein were not altered by food matrix. Urinary daidzein recovery was similar for all three foods ingested, with a total urinary output of 33-34% of the ingested dose. Peak genistein concentrations were attained in serum earlier following consumption of a liquid matrix rather than a solid matrix, although there was a lower total urinary recovery of genistein following ingestion of juice than of the two other foods. (c) 2006 Elsevier Inc. All rights reserved.
Abstract:
In this paper we consider hybrid (fast stochastic approximation and deterministic refinement) algorithms for Matrix Inversion (MI) and Solving Systems of Linear Algebraic Equations (SLAE). Monte Carlo methods are used for the stochastic approximation, since they are known to be very efficient in finding a quick rough approximation of an element or a row of the inverse matrix, or a component of the solution vector. We show how the stochastic approximation of the MI can be combined with a deterministic refinement procedure to obtain the MI with the required precision, and further to solve the SLAE using the MI. We employ a splitting A = D − C of a given non-singular matrix A, where D is a diagonally dominant matrix and C = D − A. In our algorithms for solving SLAE and MI, different choices of D can be considered in order to control the norm of the iteration matrix T = D⁻¹C of the resulting SLAE and to minimise the number of Markov chains required to reach a given precision. Further, we run the algorithms on a mini-Grid and investigate their efficiency depending on the granularity. Corresponding experimental results are presented.
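The stochastic-approximation stage described in this abstract can be sketched as follows: with the splitting A = D − C and T = D⁻¹C, a row of (I − T)⁻¹ = I + T + T² + … is estimated by Markov chains, from which a row of A⁻¹ = (I − T)⁻¹D⁻¹ follows. This is a minimal illustration only, assuming D is taken as the diagonal of a diagonally dominant A (one possible choice of D); the function name mc_row_inverse is hypothetical, not from the paper.

```python
import numpy as np

def mc_row_inverse(T, i, n_chains=20000, max_len=60, eps=1e-12, seed=0):
    """Estimate row i of (I - T)^{-1} = I + T + T^2 + ... via Markov chains.

    Requires ||T|| < 1 so the Neumann series converges.  Transition
    probabilities are taken proportional to |T| row-wise.
    """
    n = T.shape[0]
    absT = np.abs(T)
    row_sums = absT.sum(axis=1)
    rng = np.random.default_rng(seed)
    est = np.zeros(n)
    for _ in range(n_chains):
        k, w = i, 1.0
        est[k] += w                      # contribution of the identity term
        for _ in range(max_len):
            if row_sums[k] < eps:        # absorbing state: no outgoing mass
                break
            p = absT[k] / row_sums[k]
            nxt = rng.choice(n, p=p)
            w *= T[k, nxt] / p[nxt]      # importance weight for this step
            k = nxt
            est[k] += w                  # contribution of the T^step term
            if abs(w) < eps:             # weight too small to matter
                break
    return est / n_chains

# Splitting A = D - C with D = diag(A); then A = D(I - T) with T = D^{-1}C,
# so row i of A^{-1} equals row i of (I - T)^{-1} scaled by 1/D_jj per column.
A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
D = np.diag(np.diag(A))
T = np.linalg.inv(D) @ (D - A)
row0_inv_A = mc_row_inverse(T, 0) / np.diag(D)
```

In the hybrid scheme, such a rough row estimate would then be handed to the deterministic refinement stage; the number of chains controls the stochastic error.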
Abstract:
Many scientific and engineering applications involve inverting large matrices or solving systems of linear algebraic equations. Solving these problems with direct methods can take a very long time, as their cost grows with the size of the matrix. The computational complexity of stochastic Monte Carlo methods, by contrast, depends only on the number of chains and the length of those chains. The computing power needed by inherently parallel Monte Carlo methods can be supplied very efficiently by distributed computing technologies such as Grid computing. In this paper we show how a load-balanced Monte Carlo method for computing the inverse of a dense matrix can be constructed, show how the method can be implemented on the Grid, and demonstrate how efficiently the method scales on multiple processors. (C) 2007 Elsevier B.V. All rights reserved.
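Because each Markov chain is independent of the others, static load balancing for such a method reduces to distributing rows of the inverse (and their chains) evenly over the available workers. A minimal sketch of such an assignment, assuming each row's estimate uses the same number and length of chains; the function name partition_rows is illustrative, not from the paper.

```python
def partition_rows(n_rows, n_workers):
    """Round-robin assignment of matrix rows to workers.

    Since every row's Monte Carlo estimate uses the same number and
    length of chains, this balances the work to within one row's chains.
    """
    assignment = [[] for _ in range(n_workers)]
    for row in range(n_rows):
        assignment[row % n_workers].append(row)
    return assignment

# e.g. 10 rows over 3 workers: loads of 4, 3 and 3 rows
work = partition_rows(n_rows=10, n_workers=3)
```

Each worker can then run its chains without communication, which is what makes the method attractive on loosely coupled Grid resources.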
Abstract:
In this paper we introduce a new algorithm, based on the successful work of Fathi and Alexandrov on hybrid Monte Carlo algorithms for matrix inversion and solving systems of linear algebraic equations. The algorithm consists of two parts: approximate inversion by Monte Carlo and iterative refinement using a deterministic method. We present a parallel hybrid Monte Carlo algorithm, which uses Monte Carlo to generate an approximate inverse and then improves the accuracy of the inverse with iterative refinement. The new algorithm is applied efficiently to sparse non-singular matrices. When solving a system of linear algebraic equations Bx = b, the inverse matrix is used to compute the solution vector x = B⁻¹b. We present results that show the efficiency of the parallel hybrid Monte Carlo algorithm in the case of sparse matrices.
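The refinement stage of such a two-part scheme can be illustrated with a Newton-Schulz iteration, X ← X(2I − BX), which converges quadratically whenever the initial residual satisfies ||I − BX₀|| < 1 — a condition the Monte Carlo stage is meant to achieve. A minimal dense sketch under those assumptions (the paper targets sparse matrices, and refine_inverse is an illustrative name, not the authors' code):

```python
import numpy as np

def refine_inverse(B, X0, tol=1e-12, max_iter=50):
    """Newton-Schulz refinement: X <- X(2I - BX).

    The residual obeys R_{k+1} = R_k^2 with R = I - BX, so the iteration
    converges quadratically if the spectral radius of I - B @ X0 is < 1.
    """
    I = np.eye(B.shape[0])
    X = X0.copy()
    for _ in range(max_iter):
        R = I - B @ X            # current residual
        if np.linalg.norm(R) < tol:
            break
        X = X @ (I + R)          # algebraically equal to X @ (2I - B @ X)
    return X

B = np.array([[2.0, 1.0], [0.0, 3.0]])
X0 = np.linalg.inv(B) + 0.05     # stand-in for a crude Monte Carlo inverse
X = refine_inverse(B, X0)
x = X @ np.array([1.0, 1.0])     # solution of Bx = b via x = B^{-1} b
```

Quadratic convergence means only a handful of refinement sweeps are needed once the Monte Carlo estimate is in the basin of attraction.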
Abstract:
In this paper we present an error analysis for a Monte Carlo algorithm for evaluating bilinear forms of matrix powers. An Almost Optimal Monte Carlo (MAO) algorithm for solving this problem is formulated. Results for the structure of the probability error are presented, and the construction of robust and interpolation Monte Carlo algorithms is discussed. Results are presented comparing the performance of the Monte Carlo algorithm with that of a corresponding deterministic algorithm. The two algorithms are tested on a well-balanced matrix, and then the effects of perturbing this matrix, by small and large amounts, are studied.
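The estimator analysed in such work can be sketched as follows: to evaluate a bilinear form vᵀAᵏh, the initial state is drawn proportionally to |v| and transitions proportionally to the rows of |A|, with an importance weight correcting for both densities. This is a minimal illustration under the assumption that no row of A is entirely zero; the function name mc_bilinear_form is hypothetical.

```python
import numpy as np

def mc_bilinear_form(A, v, h, k, n_chains=20000, seed=0):
    """Estimate v^T A^k h with Markov chains of length k.

    Initial state ~ |v|, transitions ~ |A| row-wise; the running weight
    w corrects for both choices of sampling density.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    p0 = np.abs(v) / np.abs(v).sum()
    absA = np.abs(A)
    row_sums = absA.sum(axis=1)
    total = 0.0
    for _ in range(n_chains):
        s = rng.choice(n, p=p0)
        w = v[s] / p0[s]                 # weight for the initial draw
        for _ in range(k):
            p = absA[s] / row_sums[s]
            nxt = rng.choice(n, p=p)
            w *= A[s, nxt] / p[nxt]      # weight for this transition
            s = nxt
        total += w * h[s]                # chain's contribution to v^T A^k h
    return total / n_chains

A = np.array([[0.1, 0.2], [0.3, 0.4]])
v = np.array([1.0, 2.0])
h = np.array([1.0, 1.0])
est = mc_bilinear_form(A, v, h, k=2)     # estimates v^T A^2 h
```

The probability error of such an estimator shrinks as O(1/√N) in the number of chains N, which is what the error analysis in the paper quantifies in detail.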
Abstract:
Ashby was a keen observer of the world around him, as evidenced by his technological and psychiatric developments. Over the years, he drew numerous philosophical conclusions on the nature of human intelligence and the operation of the brain, on artificial intelligence and the thinking ability of computers, and even on science in general. In this paper, the quite profound philosophy espoused by Ashby is considered as a whole, in particular in terms of its relationship with the world as it stands now and even in terms of scientific predictions of where things might lead. A meaningful comparison is made between Ashby's comments and the science-fiction concept of 'The Matrix', and serious consideration is given to how far Ashby's ideas lay open the possibility of the Matrix becoming a real-world eventuality.
Abstract:
In this paper we consider bilinear forms of matrix polynomials and show that these forms can be used to construct solutions for the problems of solving systems of linear algebraic equations, matrix inversion and finding extremal eigenvalues. An Almost Optimal Monte Carlo (MAO) algorithm for computing bilinear forms of matrix polynomials is presented. Results for the computational cost of a balanced algorithm for computing the bilinear form of a matrix power, i.e., an algorithm for which the probability and systematic errors are of the same order, are presented and compared with the computational cost of a corresponding deterministic method.
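The link between bilinear forms and extremal eigenvalues mentioned in this abstract can be illustrated deterministically: vᵀp(A)h is evaluated with a Horner scheme on the vector, and the ratio of bilinear forms of consecutive matrix powers, (vᵀAᵏ⁺¹v)/(vᵀAᵏv), is a power-method estimate of the dominant eigenvalue. A minimal sketch; the function names are illustrative, not from the paper.

```python
import numpy as np

def poly_bilinear(A, v, h, coeffs):
    """Evaluate v^T p(A) h for p(x) = coeffs[0] + coeffs[1] x + ...

    Horner scheme on the vector: p(A)h = c0*h + A(c1*h + A(c2*h + ...)),
    so only matrix-vector products are needed.
    """
    h = np.asarray(h, dtype=float)
    acc = np.zeros_like(h)
    for c in reversed(coeffs):
        acc = A @ acc + c * h
    return np.asarray(v, dtype=float) @ acc

def dominant_eigenvalue(A, v, k):
    """Power-method estimate via the ratio (v^T A^{k+1} v) / (v^T A^k v)."""
    num = poly_bilinear(A, v, v, [0.0] * (k + 1) + [1.0])  # monomial x^{k+1}
    den = poly_bilinear(A, v, v, [0.0] * k + [1.0])        # monomial x^k
    return num / den

A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam = dominant_eigenvalue(A, v=np.array([1.0, 1.0]), k=20)
```

In the paper's setting, the individual bilinear forms of matrix powers would themselves be estimated by the MAO algorithm rather than computed exactly as here.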