887 results for weighting triangles


Relevance:

10.00%

Publisher:

Abstract:

In school environments, children are constantly exposed to mixtures of airborne substances, derived from a variety of sources, both in the classroom and in the school surroundings. It is important to evaluate the hazardous properties of these mixtures in order to conduct risk assessments of their impact on children's health. Within this context, through the application of a Maximum Cumulative Ratio approach, this study aimed to explore whether health risks due to indoor air mixtures are driven by a single substance or are due to cumulative exposure to various substances. This methodology requires knowledge of the concentration of substances in the air mixture, together with a health-related weighting factor (i.e. reference concentration or lowest concentration of interest), which is necessary to calculate the Hazard Index. Maximum Cumulative Ratio and Hazard Index values were then used to categorise the mixtures into four groups, based on their hazard potential and, therefore, the appropriate risk management strategies. Air samples were collected from classrooms in 25 primary schools in Brisbane, Australia. Analysis was conducted based on the measured concentration of these substances in about 300 air samples. The results showed that in 92% of the schools, indoor air mixtures belonged to the 'low concern' group and therefore did not require any further assessment. In the remaining schools, toxicity was mainly governed by a single substance, with a very small number of schools having a multiple-substance mix which required a combined risk assessment. The proposed approach enables the identification of such schools and thus aids in the efficient health risk management of pollution emissions and air quality in the school environment.
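The Hazard Index and Maximum Cumulative Ratio computations described above can be sketched as follows. This is a minimal illustration: the function names are hypothetical, and the grouping thresholds (HI at most 1 for "low concern", MCR below 2 for single-substance dominance) are one common convention in the MCR literature, not necessarily the exact cut-offs used in this study.

```python
def hazard_quotients(conc, ref):
    """Hazard quotient per substance: measured concentration divided by its
    health-based reference concentration (or lowest concentration of interest)."""
    return {s: conc[s] / ref[s] for s in conc}

def classify_mixture(conc, ref):
    """Hazard Index, Maximum Cumulative Ratio and a coarse concern group."""
    hq = hazard_quotients(conc, ref)
    hi = sum(hq.values())           # Hazard Index: sum of hazard quotients
    mcr = hi / max(hq.values())     # Maximum Cumulative Ratio
    if hi <= 1.0:
        group = "low concern"
    elif mcr < 2.0:
        group = "driven by a single substance"
    else:
        group = "combined risk assessment needed"
    return hi, mcr, group
```

A mixture whose summed hazard quotients stay at or below 1 needs no further assessment; otherwise the MCR indicates whether one substance dominates the total hazard.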


In this paper, a novel 12-sided polygonal space vector structure is proposed for an induction motor drive. The space vector pattern presented here consists of two 12-sided concentric polygons, with the outer polygon having a radius double that of the inner one. Compared to previously reported 12-sided polygonal space vector structures, this paper subdivides the space vector plane into smaller triangles. This helps in reducing the switching frequency of the inverters without deteriorating the output voltage quality. It also halves the device ratings and the dv/dt stress on the devices. At the same time, other benefits of the existing 12-sided space vector structure, such as the increased linear modulation range and the complete elimination of 5th- and 7th-order harmonics in the phase voltage, are also retained. The space vector structure is realized by feeding an open-end induction motor with two conventional three-level neutral point clamped (NPC) inverters with asymmetric isolated dc link voltage sources. The neutral point voltage fluctuations in the three-level NPC inverters are eliminated by utilizing the switching state multiplicities of a space vector point. The pulsewidth modulation timings are calculated using sampled reference waveform amplitudes and are explained in detail. Experimental verification on a laboratory prototype shows that this configuration may be considered suitable for high power drives.
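The geometry of the proposed structure, two concentric 12-sided polygons with a 2:1 radius ratio and 30-degree vector spacing, can be sketched as below. The function name is hypothetical, and the switching and PWM timing logic of the paper is not reproduced; only the vector-tip layout is shown.

```python
import cmath
import math

def dodecagonal_vectors(inner_radius):
    """Tips of the two concentric 12-sided polygonal space vector sets.

    Vectors are spaced 30 degrees apart; the outer polygon's radius is
    twice the inner one's, as in the proposed structure."""
    inner, outer = [], []
    for k in range(12):
        phasor = cmath.exp(1j * k * math.pi / 6)  # 30-degree steps
        inner.append(inner_radius * phasor)
        outer.append(2.0 * inner_radius * phasor)
    return inner, outer
```

Subdividing the annulus between the two polygons, rather than a single polygon's interior, is what yields the smaller triangles the paper exploits.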


A new geometrical method for generating aperiodic lattices for n-fold non-crystallographic axes is described. The method is based on the self-similarity principle. It makes use of the principle of gnomons to divide the basic triangle of a regular polygon of 2n sides into appropriate isosceles triangles and to generate the minimum set of rhombi required to fill that polygon. The method is applicable to any n-fold non-crystallographic axis. It is first shown how these regular polygons can be obtained and how they can be used to generate aperiodic structures. In particular, the application of this method to the cases of five-fold and seven-fold axes is discussed. The present method indicates that the recursion rule used by others earlier is a restricted one and that several aperiodic lattices with five-fold symmetry can be generated. It is also shown how a limited array of approximately square cells with large dimensions can be detected in a quasilattice; these are compared with the unit cell dimensions of MnAl6 suggested by Pauling. In addition, the recursion rule for subdividing the three basic rhombi of the seven-fold structure was obtained, and the aperiodic lattice thus generated is also shown.
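The minimal rhombus set for a regular 2n-gon can be enumerated from the angle condition alone. The sketch below uses the standard fact that the rhombi of an n-fold tiling have acute angles that are multiples of pi/n; the function name is illustrative and the gnomon subdivision itself is not reproduced.

```python
import math

def basic_rhombi_angles(n):
    """Acute angles (degrees) of the minimal set of rhombi that fill a
    regular polygon of 2n sides: k*pi/n for k = 1 .. floor(n/2)."""
    return [math.degrees(k * math.pi / n) for k in range(1, n // 2 + 1)]
```

For n = 5 this gives the two Penrose rhombi (36 and 72 degrees); for n = 7 it gives three rhombi, matching the three basic rhombi of the seven-fold structure mentioned above.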


This thesis examines the feasibility of a forest inventory method based on two-phase sampling in estimating forest attributes at the stand or substand levels for forest management purposes. The method is based on multi-source forest inventory combining auxiliary data consisting of remote sensing imagery or other geographic information and field measurements. Auxiliary data are utilized as first-phase data for covering all inventory units. Various methods were examined for improving the accuracy of the forest estimates. Pre-processing of auxiliary data in the form of correcting the spectral properties of aerial imagery was examined (I), as was the selection of aerial image features for estimating forest attributes (II). Various spatial units were compared for extracting image features in a remote sensing aided forest inventory utilizing very high resolution imagery (III). A number of data sources were combined and different weighting procedures were tested in estimating forest attributes (IV, V). Correction of the spectral properties of aerial images proved to be a straightforward and advantageous method for improving the correlation between the image features and the measured forest attributes. Testing different image features that can be extracted from aerial photographs (and other very high resolution images) showed that the images contain a wealth of relevant information that can be extracted only by utilizing the spatial organization of the image pixel values. Furthermore, careful selection of image features for the inventory task generally gives better results than inputting all extractable features to the estimation procedure. When the spatial units for extracting very high resolution image features were examined, an approach based on image segmentation generally showed advantages compared with a traditional sample plot-based approach. Combining several data sources resulted in more accurate estimates than any of the individual data sources alone. 
The best combined estimate can be derived by weighting the estimates produced by the individual data sources by the inverse values of their mean square errors. Despite the fact that the plot-level estimation accuracy in two-phase sampling inventory can be improved in many ways, the accuracy of forest estimates based mainly on single-view satellite and aerial imagery is a relatively poor basis for making stand-level management decisions.
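The inverse-MSE weighting described above can be written as a short sketch; this is the generic weighted-mean formula, not the thesis's actual estimation code.

```python
def combine_estimates(estimates, mean_square_errors):
    """Combine per-source estimates of a forest attribute, weighting each
    data source by the inverse of its mean square error."""
    weights = [1.0 / mse for mse in mean_square_errors]
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
```

A source with a quarter of the error variance of another receives four times its weight, so accurate sources dominate the combined estimate without the noisy ones being discarded.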


Breeding for the milk coagulation ability of dairy cows. This dissertation investigated the improvement of the cheese-making quality of cows' milk through selective breeding. The topic is important, as an ever larger share of milk is used for cheese production. The study focused on the coagulation ability of milk, since it is one of the key factors affecting cheese yield. Milk coagulation ability varied considerably between cows, sires, herds, breeds and stages of lactation. Although there were large herd-to-herd differences in the coagulation ability of bulk tank milk, herd explained only a small part of the total variation in coagulation ability. Genetic differences between cows most likely explain the majority of the differences observed in the coagulation ability of herds' bulk tank milk. Good management and feeding nevertheless reduced, to some extent, the proportion of poorly coagulating bulk tank milk in herds. Holstein-Friesian cows had better coagulation ability than Ayrshire cows. Poor coagulation and non-coagulation were only a minor problem in Holstein-Friesians (10%), whereas a third of Ayrshire cows produced poorly coagulating or non-coagulating milk. Milk is said to coagulate poorly when the curd is not firm enough to be cut half an hour after rennet addition. Milk defined as non-coagulating does not curdle at all within half an hour and is therefore very poor raw material for cheese dairies. About 40% of the differences between cows in milk coagulation ability were explained by genetic factors, so coagulation ability can be called a well-heritable trait. Three measurements per cow are quite sufficient for estimating the average coagulation ability of a cow's milk. At present, however, direct selection for coagulation ability is hindered by the lack of an automated measuring device suitable for large-scale use. 
For this reason, the dissertation examined the possibilities of improving milk coagulation ability indirectly, through some other trait. Such a trait must be strongly enough genetically linked to coagulation ability for selection through it to be possible. The traits studied were milk yield and udder health traits, which are already included in the total merit index of bulls, as well as milk protein and casein content and milk pH, which are not part of the total merit index. The dissertation also examined the possibilities of so-called marker-assisted selection by studying the inheritance of milk non-coagulation and mapping the chromosomal regions associated with it. Based on the results, breeding for udder health also improves milk coagulation ability to some extent and reduces non-coagulation in Ayrshire cows. Milk yield, on the other hand, is genetically independent of milk coagulation ability and non-coagulation. Likewise, the genetic correlation of milk protein and casein content with coagulation ability was approximately zero. There was a fairly strong genetic correlation between milk pH and coagulation ability, so breeding for milk pH would also improve milk coagulation ability. It would, however, probably not reduce the number of cows producing non-coagulating milk. Because milk non-coagulation is such a common problem in Finnish Ayrshire cows, the dissertation examined the background of the phenomenon in more detail. In all three datasets, about 10% of Ayrshire cows produced non-coagulating milk. During two years of monthly monitoring, some cows produced non-coagulating milk at almost every measurement. Milk non-coagulation was associated with the stage of lactation, but none of the environmental factors could fully explain it. Instead, the evidence for its heritability strengthened as the studies progressed. 
Finally, the research group succeeded in mapping the chromosomal regions causing non-coagulation to chromosomes 2 and 18, close to the DNA markers BMS1126 and BMS1355. Based on the results, milk non-coagulation is not linked to the casein genes that are central to the coagulation process. Instead, it is possible that the non-coagulation problem is caused by errors occurring in the post-synthesis modification of the caseins. This, however, requires thorough further study. Based on the results of the dissertation, culling animals carrying the non-coagulation gene from the breeding stock would be the most effective way to improve milk coagulation ability in the Finnish dairy cattle population.


This thesis, which consists of an introduction and four peer-reviewed original publications, studies the problems of haplotype inference (haplotyping) and local alignment significance. The problems studied here belong to the broad area of bioinformatics and computational biology. The presented solutions are computationally fast and accurate, which makes them practical in high-throughput sequence data analysis. Haplotype inference is a computational problem where the goal is to estimate haplotypes from a sample of genotypes as accurately as possible. This problem is important because the direct measurement of haplotypes is difficult, whereas genotypes are easier to quantify. Haplotypes are key players when studying, for example, the genetic causes of diseases. In this thesis, three methods are presented for the haplotype inference problem, referred to as HaploParser, HIT, and BACH. HaploParser is based on a combinatorial mosaic model and hierarchical parsing that together mimic recombinations and point-mutations in a biologically plausible way. In this mosaic model, the current population is assumed to have evolved from a small founder population. Thus, the haplotypes of the current population are recombinations of the (implicit) founder haplotypes with some point-mutations. HIT (Haplotype Inference Technique) uses a hidden Markov model for haplotypes, and efficient algorithms are presented to learn this model from genotype data. The model structure of HIT is analogous to the mosaic model of HaploParser with founder haplotypes. Therefore, it can be seen as a probabilistic model of recombinations and point-mutations. BACH (Bayesian Context-based Haplotyping) utilizes a context tree weighting algorithm to efficiently sum over all variable-length Markov chains to evaluate the posterior probability of a haplotype configuration. Algorithms are presented that find haplotype configurations with high posterior probability. 
BACH is the most accurate method presented in this thesis and has performance comparable to the best available software for haplotype inference. Local alignment significance is a computational problem where one is interested in whether the local similarities in two sequences are due to the sequences being related or merely due to chance. Similarity of sequences is measured by their best local alignment score, from which a p-value is computed. This p-value is the probability of picking two sequences from the null model that have an equally good or better best local alignment score. Local alignment significance is used routinely, for example, in homology searches. In this thesis, a general framework is sketched that allows one to compute a tight upper bound for the p-value of a local pairwise alignment score. Unlike previous methods, the presented framework is not affected by so-called edge effects and can handle gaps (deletions and insertions) without troublesome sampling and curve fitting.
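The definition of a local alignment p-value can be made concrete with a naive shuffling estimate. This is precisely the kind of troublesome sampling the thesis's framework avoids; it is shown only as an illustration, and the scoring parameters are arbitrary assumptions.

```python
import random

def best_local_score(a, b, match=1, mismatch=-1, gap=-2):
    """Smith-Waterman best local alignment score with a linear gap penalty."""
    prev = [0] * (len(b) + 1)
    best = 0
    for ca in a:
        cur = [0]
        for j, cb in enumerate(b, 1):
            s = max(0,
                    prev[j - 1] + (match if ca == cb else mismatch),
                    prev[j] + gap,       # gap in b
                    cur[j - 1] + gap)    # gap in a
            cur.append(s)
            best = max(best, s)
        prev = cur
    return best

def shuffle_pvalue(a, b, trials=200, seed=0):
    """Empirical p-value: fraction of shuffled sequence pairs whose best
    local alignment score is at least as good as the observed one."""
    rng = random.Random(seed)
    observed = best_local_score(a, b)
    hits = 0
    for _ in range(trials):
        sa, sb = list(a), list(b)
        rng.shuffle(sa)
        rng.shuffle(sb)
        if best_local_score(sa, sb) >= observed:
            hits += 1
    return (hits + 1) / (trials + 1)  # add-one smoothing keeps p > 0
```

The resolution of such an estimate is limited by the number of trials, which is why analytic bounds of the kind the thesis develops are preferable for the small p-values that matter in homology search.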


Lipid analysis is commonly performed by gas chromatography (GC) in laboratory conditions. Spectroscopic techniques, however, are non-destructive and can be implemented non-invasively in vivo. Excess fat (triglycerides) in visceral adipose tissue and liver is known to predispose to metabolic abnormalities, collectively known as the metabolic syndrome. Insulin resistance is the likely cause, and diets high in saturated fat are known to impair insulin sensitivity. Tissue triglyceride composition has been used as a marker of dietary intake, but it can also be influenced by tissue-specific handling of fatty acids. Recent studies have shown that adipocyte insulin sensitivity correlates positively with their saturated fat content, contradicting the common view of dietary effects. A better understanding of the factors affecting tissue triglyceride composition is needed to provide further insights into tissue function in lipid metabolism. In this thesis, two spectroscopic techniques were developed for in vitro and in vivo analysis of tissue triglyceride composition. The in vitro studies (Study I) used infrared spectroscopy (FTIR), a fast and cost-effective analytical technique well suited for multivariate analysis. Infrared spectra are characterized by peak overlap, leading to poorly resolved absorbances and limited analytical performance. The in vivo studies (Studies II, III and IV) used proton magnetic resonance spectroscopy (1H-MRS), an established non-invasive clinical method for measuring metabolites in vivo. 1H-MRS has been limited in its ability to analyze triglyceride composition due to poorly resolved resonances. Using an attenuated total reflection accessory, we were able to obtain pure triglyceride infrared spectra from adipose tissue biopsies. Using multivariate curve resolution (MCR), we were able to resolve the overlapping double bond absorbances of monounsaturated and polyunsaturated fat. 
MCR also resolved the isolated trans double bond and conjugated linoleic acids from an overlapping background absorbance. Using oil phantoms to study the effects of different fatty acid compositions on the echo time behaviour of triglycerides, it was concluded that the use of long echo times improved peak separation, with T2 weighting having a negligible impact. It was also discovered that the echo time behaviour of the methyl resonance of omega-3 fats differed from that of other fats due to characteristic J-coupling. This novel insight could be used to detect omega-3 fats in human adipose tissue in vivo at very long echo times (TE = 470 and 540 ms). A comparison of 1H-MRS of adipose tissue in vivo and GC of adipose tissue biopsies in humans showed that long TE spectra resulted in improved peak fitting and better correlations with GC data. The study also showed that the calculation of fatty acid fractions from 1H-MRS data is unreliable and should not be used. Omega-3 fatty acid content derived from long TE in vivo spectra (TE = 540 ms) correlated with the total omega-3 fatty acid concentration measured by GC. The long TE protocol used for the adipose tissue studies was subsequently extended to the analysis of liver fat composition. Respiratory triggering and a long TE resulted in spectra with the olefinic and tissue water resonances resolved. Conversion of the derived unsaturation to double bond content per fatty acid showed that the results were in accordance with previously published gas chromatography data on liver fat composition. In patients with metabolic syndrome, liver fat was found to be more saturated than subcutaneous or visceral adipose tissue. The higher saturation observed in liver fat may be the result of a higher rate of de novo lipogenesis in the liver than in adipose tissue. This thesis has introduced the first non-invasive method for determining adipose tissue omega-3 fatty acid content in humans in vivo. 
The methods introduced here have also shown that liver fat is more saturated than adipose tissue fat.
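The mono-exponential T2 correction implicit in long-TE spectroscopy can be sketched as below. This is the textbook decay relation, not the thesis's protocol, and it deliberately ignores the J-modulation that the thesis shows dominates the omega-3 methyl resonance at long TE.

```python
import math

def t2_corrected_area(measured_area, te_ms, t2_ms):
    """Undo mono-exponential T2 decay of a peak area acquired at echo time TE.

    Assumes S(TE) = S(0) * exp(-TE / T2); neglects J-coupling effects, which
    are significant for the omega-3 methyl resonance at very long TE."""
    return measured_area * math.exp(te_ms / t2_ms)
```

At TE = 540 ms a resonance with T2 of tens of milliseconds is attenuated by orders of magnitude, which is why long-TE quantification demands either T2 correction or, as in the thesis, the use of relative rather than absolute peak measures.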


The paper presents a novel slicing-based method for computing volume fractions in multi-material solids given as a B-rep whose faces are triangulated and shared by either one or two materials. Such objects occur naturally in geoscience applications, and this computation is necessary for property estimation problems and iterative forward modeling. Each facet in the model is cut by the planes delineating the given grid structure or grid cells. The method, instead of classifying the points or cells with respect to the solid, exploits the convexity of triangles and the simple axis-oriented disposition of the cutting surfaces to construct a novel intermediate space enumeration representation called the slice-representation, from which both the cell containment test and the volume-fraction computation are done easily. Cartesian and cylindrical grids with uniform and non-uniform spacings are dealt with. After slicing, each triangle contributes polygonal facets, with potentially elliptical edges, to the grid cells through which it passes. The volume fractions of different materials in a grid cell that interacts with the material interfaces are obtained by accumulating the volume contributions computed from each facet in the grid cell. The method is fast, accurate, robust and memory efficient. Examples illustrating the method and its performance are included.
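The volume accumulation step can be illustrated with the divergence-theorem identity for triangulated boundaries; the slicing, clipping and slice-representation machinery of the paper is not reproduced here, and the function names are illustrative.

```python
def facet_volume(tri):
    """Signed volume of the tetrahedron spanned by a triangular facet and
    the origin. Summed over all consistently (outward) oriented facets of a
    closed triangulated boundary, this yields the enclosed volume by the
    divergence theorem."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = tri
    det = (ax * (by * cz - bz * cy)
           - ay * (bx * cz - bz * cx)
           + az * (bx * cy - by * cx))
    return det / 6.0

def mesh_volume(triangles):
    """Volume enclosed by a closed, consistently oriented triangle mesh."""
    return sum(facet_volume(t) for t in triangles)
```

In the paper's setting the clipped facets restricted to a grid cell play the role of the triangles here, so per-cell volume contributions accumulate in exactly this fashion.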


This paper describes an approach based on Zernike moments and Delaunay triangulation for the localization of handwritten text in machine-printed text documents. The Zernike moments of the image are first evaluated, and the text is classified as handwritten using a nearest neighbor classifier. These features are independent of size, slant, orientation, translation and other variations in handwritten text. We then use Delaunay triangulation to reclassify the misclassified text regions. Imposing a Delaunay triangulation on the centroid points of the connected components, we extract features based on the triangles and reclassify the text. Noise components in the document are removed as part of the preprocessing step, so the method works well on noisy documents. The success rate of the method is found to be 86%. For specific handwritten elements such as signatures or similar text, the accuracy is even higher, at 93%.
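Given a Delaunay triangulation of the connected-component centroids (obtainable, for instance, with scipy.spatial.Delaunay), per-triangle features can be computed as below. The paper does not enumerate its exact features, so the side lengths, area and smallest angle used here are illustrative assumptions; handwritten regions tend to produce more irregular triangles than the regular grid of machine-printed text.

```python
import math

def triangle_features(p, q, r):
    """Geometric features of one Delaunay triangle over component centroids."""
    sides = sorted([math.dist(p, q), math.dist(q, r), math.dist(r, p)])
    a, b, c = sides
    s = 0.5 * (a + b + c)
    area = math.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))  # Heron's formula
    # the smallest interior angle lies opposite the shortest side (law of cosines)
    min_angle = math.degrees(math.acos((b * b + c * c - a * a) / (2.0 * b * c)))
    return {"sides": sides, "area": area, "min_angle_deg": min_angle}
```

Feature vectors of this kind, one per triangle, can then feed the same nearest neighbor classifier used in the first pass.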


Background

Traffic offences have been considered an important predictor of crash involvement and have often been used as a proxy safety variable for crashes. However, the association between crashes and offences has never been meta-analysed and the population effect size never established. Research is yet to determine the extent to which this relationship may be spuriously inflated through systematic measurement error, with obvious implications for researchers endeavouring to accurately identify salient factors predictive of crashes.

Methodology and Principal Findings

Studies yielding a correlation between crashes and traffic offences were collated, and a meta-analysis of 144 effects drawn from 99 road safety studies was conducted. The potential impact of factors such as age, time period, crash and offence rates, crash severity and data type, sourced from either self-report surveys or archival records, was considered and discussed. After weighting for sample size, an average correlation of r = .18 was observed over the mean time period of 3.2 years. Evidence emerged suggesting the strength of this correlation is decreasing over time. Stronger correlations between crashes and offences were generally found in studies involving younger drivers. Consistent with common method variance effects, a within-country analysis found stronger effect sizes in self-reported data even after controlling for the crash mean.

Significance

The effectiveness of traffic offences as a proxy for crashes may be limited. Inclusion of elements such as independently validated crash and offence histories or accurate measures of exposure to the road would facilitate a better understanding of the factors that influence crash involvement.
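The sample-size weighting of per-study correlations can be sketched as a plain weighted mean. Meta-analyses often transform r to Fisher's z before averaging; the simple untransformed form is shown here purely for illustration and is not claimed to be the authors' exact procedure.

```python
def weighted_mean_correlation(correlations, sample_sizes):
    """Sample-size-weighted average correlation across studies."""
    total_n = sum(sample_sizes)
    return sum(r * n for r, n in zip(correlations, sample_sizes)) / total_n
```

Large studies thereby pull the pooled estimate towards their own effect size in proportion to the number of drivers they observed.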


We present measurements of the top quark mass using the \mT2, a variable related to the transverse mass in events with two missing particles. We use the template method applied to t\tbar dilepton events produced in p\pbar collisions at Fermilab's Tevatron and collected by the CDF detector. From a data sample corresponding to an integrated luminosity of 3.4 \invfb, we select 236 t\tbar candidate events. Using the \mT2 distribution, we measure the top quark mass to be M_{Top} = 168.0^{+4.8}_{-4.0} $\pm$ {2.9} GeV/c^{2}. By combining the \mT2 with the reconstructed top mass distributions based on a neutrino weighting method, we measure M_{top}=169.3 $\pm$ 2.7 $\pm$ 3.2 GeV/c^{2}. This is the first application of the \mT2 variable in a mass measurement at a hadron collider.
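The mT2 variable minimises, over all ways of splitting the missing transverse momentum between the two invisible particles, the larger of the two transverse masses. A toy grid-scan version for massless particles is sketched below; the actual CDF analysis deals with massive visible systems and uses a proper minimiser, so this is only a definitional illustration.

```python
import math

def mt_squared(p, q):
    """Transverse mass squared for a massless visible particle p paired
    with a massless invisible particle q (both transverse 2-vectors)."""
    ep = math.hypot(p[0], p[1])
    eq = math.hypot(q[0], q[1])
    return max(2.0 * (ep * eq - p[0] * q[0] - p[1] * q[1]), 0.0)

def mt2(p1, p2, pt_miss, steps=101, q_max=100.0):
    """Toy grid-scan mT2: minimise over all splittings q + (pt_miss - q)
    of the missing transverse momentum the larger transverse mass."""
    best = float("inf")
    for i in range(steps):
        qx = -q_max + 2.0 * q_max * i / (steps - 1)
        for j in range(steps):
            qy = -q_max + 2.0 * q_max * j / (steps - 1)
            m2 = max(mt_squared(p1, (qx, qy)),
                     mt_squared(p2, (pt_miss[0] - qx, pt_miss[1] - qy)))
            best = min(best, m2)
    return math.sqrt(best)
```

The kinematic endpoint of the mT2 distribution carries the mass information, which is why the measurement above fits templates of the full distribution rather than a single event value.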



Today the finite element method is a well-established tool in engineering analysis and design. Though there are many two- and three-dimensional finite elements available, it is rare that a single element performs satisfactorily in the majority of practical problems. The present work deals with the development of a 4-node quadrilateral element using extended Lagrange interpolation functions. The classical univariate Lagrange interpolation is well developed for 1-D and is used for obtaining shape functions. We propose a new approach to extend the Lagrange interpolation to several variables. When there is more than one variable, the method also gives the set of feasible bubble functions. We use the two to generate shape functions for the 4-node arbitrary quadrilateral. This requires the incorporation of the conditions of rigid body motion, constant strain and the Navier equation by imposing necessary constraints. The procedure obviates the need for an isoparametric transformation, since interpolation functions are generated for arbitrary quadrilateral shapes. While generating the element stiffness matrix, integration can be carried out to the accuracy desired by dividing the quadrilateral into triangles. To validate the performance of the element, which we call EXLQUAD4, we conduct several pathological tests available in the literature. EXLQUAD4 predicts both stresses and displacements accurately at every point in the element in all the constant stress fields. In tests involving higher-order stress fields the element is assured to converge in the limit of discretisation. A method thus becomes available to generate shape functions directly for an arbitrary quadrilateral. The method is applicable also to hexahedra. The approach should find use in the development of finite elements for other field equations as well.
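The univariate building block referred to above, the classical Lagrange interpolation polynomial, is sketched below. The multivariate extension and the bubble functions are the paper's contribution and are not reproduced here.

```python
def lagrange_basis(nodes, i, x):
    """i-th classical univariate Lagrange interpolation polynomial at x:
    equals 1 at nodes[i] and 0 at every other node."""
    value = 1.0
    for j, xj in enumerate(nodes):
        if j != i:
            value *= (x - xj) / (nodes[i] - xj)
    return value
```

The Kronecker-delta property at the nodes and the partition of unity are what make these polynomials usable directly as shape functions; the paper's extension preserves these properties for arbitrary quadrilateral node layouts.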


Reconstructions in optical tomography involve obtaining the images of the absorption and reduced scattering coefficients. The integrated intensity data has greater sensitivity to absorption coefficient variations than to scattering coefficient variations; however, its sensitivity to the scattering coefficient is not zero. We considered an object with two inhomogeneities, one in the absorption and the other in the scattering coefficient. The standard iterative reconstruction techniques produced results plagued by cross-talk: the absorption coefficient reconstruction has a false positive at the location of the scattering inhomogeneity, and vice versa. We present a method to remove cross-talk in the reconstruction by generating a weight matrix and weighting the update vector during the iteration. The weight matrix is created as follows: we first perform a simple backprojection of the difference between the experimental and the corresponding homogeneous intensity data. The built-up image is weighted more towards the absorption inhomogeneity than the scattering inhomogeneity, and its appropriate inverse is weighted towards the scattering inhomogeneity. These two weight matrices are used as multiplication factors for the update vectors during image reconstruction: the normalized backprojected image of the difference intensity for the absorption inhomogeneity, and its inverse for the scattering inhomogeneity. We demonstrate through numerical simulations that the cross-talk is fully eliminated by this modified reconstruction procedure.
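The weighting idea can be sketched for a flattened image, under the assumption of a strictly positive backprojected difference image; the paper's exact construction of the "appropriate inverse" is only summarised here, and the function name and regularisation constant are illustrative.

```python
def crosstalk_weights(backprojection, eps=1e-6):
    """Weight vectors for the absorption and scattering updates.

    The normalised backprojected difference image weights the absorption
    update; its regularised, renormalised inverse weights the scattering
    update, suppressing each update at the wrong inhomogeneity.
    Assumes all backprojection values are strictly positive."""
    peak = max(backprojection)
    w_abs = [v / (peak + eps) for v in backprojection]
    w_sca = [1.0 / (w + eps) for w in w_abs]
    peak_sca = max(w_sca)
    w_sca = [w / peak_sca for w in w_sca]
    return w_abs, w_sca
```

During each iteration the absorption update vector would be multiplied elementwise by w_abs and the scattering update vector by w_sca, so that each coefficient is corrected mainly where its own inhomogeneity is indicated.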