896 results for bone density distribution


Relevance: 30.00%

Publisher:

Abstract:

The spatial distributions of the fine roots of Quercus rubra L. (CHR), Populus deltoides x nigra (DN3570) (PEH), and a forage crop (FOUR) were studied in a tree-based intercropping system (SCI) in southern Québec (Canada). The study does not reveal deeper tree rooting in the SCI, but rather shallow profiles, as found in many tree species in plantations and natural settings. A spatial separation exists between the root systems of FOUR and CHR, whose relative root density by depth is lower than the crop's from 0 to 10 cm but higher from 10 to 30 cm. The PEH show no root adaptation, and their high surface fine-root length densities (FRLD) near the trunk cause a 45% decrease in the surface root density of the forage, suggesting strong competition for soil resources. Indeed, the study of agricultural yield revealed reductions in forage biomass, particularly near the PEH. However, the results of a principal component analysis suggest that root competition has only a secondary impact on agricultural yield, with competition for light being more important. The impact of the fast-growing PEH on the crop is greater than that of the CHR; however, they will be harvested sooner, and the freed space will favor the growth of the intercrop. This dynamic aspect of SCIs brings them closer to natural ecosystems and should be considered and studied further to ensure their future success.

Relevance: 30.00%

Publisher:

Abstract:

We looked for empirical relationships between the abundance of submerged macrophytes and residential development of the watershed, lake properties, and the presence of wetlands in 34 lakes of the Laurentides and Lanaudière regions, selected across a gradient of residential development. Submerged macrophytes were sampled by echosounding within the littoral zone. Mean macrophyte abundance was then estimated within four optically defined growth zones (maximum depth = 75%, 100%, 125%, and 150% of the Secchi depth) as well as within the entire littoral zone. Human occupation was considered at three spatial scales: (1) within a 100-m radius around the lake, (2) within the fraction of the watershed draining directly to the lake, and (3) within the entire watershed. We also tested, lake by lake, the effect of local slope on macrophyte abundance. We observed significant positive correlations between submerged macrophyte abundance and human occupation of the direct drainage area (r > 0.51). However, there was no relationship between submerged macrophyte abundance and human occupation of either the 100-m band around the lake or the entire watershed. Multiple regression analyses suggest that submerged macrophyte abundance is weakly correlated with lake area (+) and with the presence of wetlands in the entire watershed (-). Locally, macrophyte abundance is related to slope and depth, which together explain 21% of the variance. The maximum and optimal colonization depths of submerged macrophytes are positively correlated with residence time and Secchi depth, and negatively correlated with human occupation and the extent of wetlands.

Relevance: 30.00%

Publisher:

Abstract:

A simple method is presented to evaluate the effects of short-range correlations on the momentum distribution of nucleons in nuclear matter within the framework of the Green's function approach. The method provides a very efficient representation of the single-particle Green's function for a correlated system. The reliability of this method is established by comparing its results to those obtained in more elaborate calculations. The sensitivity of the momentum distribution to the nucleon-nucleon interaction and the nuclear density is studied. The momentum distributions of nucleons in finite nuclei are derived from those in nuclear matter using a local-density approximation. These results are compared to those obtained directly for light nuclei such as 16O.
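Schematically, a local-density approximation of the kind referred to above takes the following form; the notation (n_NM, k_F(r)) is ours, and the paper's actual prescription may differ in detail:

```latex
n_A(k) \;\approx\; \int d^3r \; n_{NM}\bigl(k;\, k_F(r)\bigr),
\qquad
k_F(r) \;=\; \left[\tfrac{3\pi^2}{2}\,\rho(r)\right]^{1/3},
```

where rho(r) is the density profile of the finite nucleus and n_NM is the nuclear-matter momentum distribution evaluated at the Fermi momentum corresponding to the local density.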

Relevance: 30.00%

Publisher:

Abstract:

As technologies for the fabrication of high-quality microarrays advance rapidly, quantification of microarray data becomes a major task. Gridding is the first step in the analysis of microarray images, locating the subarrays and the individual spots within each subarray. For accurate gridding of high-density microarray images in the presence of contamination and background noise, precise calculation of parameters is essential. This paper presents an accurate, fully automatic gridding method for locating subarrays and individual spots using the intensity projection profile of the most suitable subimage. The method processes the image without any user intervention and, unlike many other commercial and academic packages, does not demand any input parameters. According to the results obtained, the accuracy of our algorithm is between 95% and 100% for microarray images with a coefficient of variation of less than two. Experimental results show that the method can grid microarray images with irregular spots, varying surface intensity distributions, and more than 50% contamination.
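A minimal sketch of the projection-profile idea (not the paper's algorithm; the smoothing width, the toy image, and the function name are our own illustrative choices):

```python
import numpy as np

def grid_by_projection(img, axis=0):
    """Locate candidate grid-line positions from an intensity projection profile.

    Sums intensities along one axis and returns the indices of local minima
    of the smoothed profile -- the dark gaps between spot rows/columns.
    """
    profile = img.sum(axis=axis).astype(float)
    # simple moving-average smoothing to suppress background noise
    kernel = np.ones(5) / 5.0
    smooth = np.convolve(profile, kernel, mode="same")
    # local minima: not higher than either neighbour, strictly below one
    return [i for i in range(1, len(smooth) - 1)
            if smooth[i] < smooth[i - 1] and smooth[i] <= smooth[i + 1]]

# toy image: two bright "spots" separated by a dark gap around columns 4-6
img = np.zeros((10, 11))
img[3:7, 1:4] = 1.0   # spot 1
img[3:7, 7:10] = 1.0  # spot 2
print(grid_by_projection(img, axis=0))  # indices inside the gap between spots
```

In a full pipeline these minima would define the grid lines separating subarrays first, then individual spots within each subarray.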

Relevance: 30.00%

Publisher:

Abstract:

In this thesis, likelihood depths, introduced by Mizera and Müller (2004), are used to develop (outlier-)robust estimators and tests for the unknown parameter of a continuous density function. The resulting procedures are then applied to three different distributions. For one-dimensional parameters, the likelihood depth of a parameter in the data set is computed as the minimum of the fraction of data points for which the derivative of the log-likelihood function with respect to the parameter is non-negative and the fraction for which this derivative is non-positive. The parameter with the greatest depth is thus the one for which both fractions are equal. This parameter is initially chosen as the estimator, since likelihood depth is meant to measure how well a parameter fits the data set. Asymptotically, the parameter with the greatest depth is the one for which the probability that the derivative of the log-likelihood function with respect to the parameter is non-negative for an observation equals one half. If this is not the case for the underlying parameter, the estimator based on likelihood depth is biased. This thesis shows how this bias can be corrected so that the corrected estimators are consistent. To develop tests for the parameter, the simplex likelihood depth introduced by Müller (2005), which is a U-statistic, is used. It turns out that for the same distributions for which likelihood depth yields biased estimators, the simplex likelihood depth is an unbiased U-statistic. In particular, its asymptotic distribution is therefore known, and tests for various hypotheses can be formulated. The shift in depth, however, leads to poor power of the corresponding test for some hypotheses. Corrected tests are therefore introduced, along with conditions under which they are consistent. The thesis consists of two parts.
In the first part, the general theory of the estimators and tests is presented and their consistency is proved. In the second part, the theory is applied to three different distributions: the Weibull distribution and the Gaussian and Gumbel copulas. This demonstrates how the methods of the first part can be used to derive (robust) consistent estimators and tests for the unknown parameter of a distribution. Overall, robust estimators and tests based on likelihood depths can be found for all three distributions. On uncontaminated data, existing standard methods are partly superior, but the advantage of the new methods shows in contaminated data and data with outliers.
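The depth criterion is easy to state in code. The sketch below applies it to the exponential distribution (our own toy choice, not one of the three distributions treated in the thesis); notably, the maximum-depth estimate is biased in exactly the way described, and for this particular model multiplying by ln 2 corrects it:

```python
import numpy as np

def exp_likelihood_depth(lam, x):
    """Likelihood depth of the rate parameter lam for an Exp(lam) sample.

    The score is d/d(lam) log f(x; lam) = 1/lam - x, so the depth is the
    smaller of the two data fractions with non-negative / non-positive score.
    """
    score = 1.0 / lam - x
    return min(np.mean(score >= 0), np.mean(score <= 0))

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=5000)  # true rate lam = 1

grid = np.linspace(0.2, 3.0, 281)
depths = [exp_likelihood_depth(l, x) for l in grid]
lam_depth = grid[int(np.argmax(depths))]

# The maximum-depth value solves 1/lam = median(x), so lam_depth ~ 1/ln(2)
# instead of 1: the bias described above. Multiplying by ln(2) corrects it
# for this exponential toy model.
lam_corrected = lam_depth * np.log(2)
print(lam_depth, lam_corrected)
```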

Relevance: 30.00%

Publisher:

Abstract:

We formulate density estimation as an inverse operator problem. We then use convergence results of empirical distribution functions to true distribution functions to develop an algorithm for multivariate density estimation. The algorithm is based upon a Support Vector Machine (SVM) approach to solving inverse operator problems. The algorithm is implemented and tested on simulated data from different distributions and different dimensionalities: Gaussians and Laplacians in $R^2$ and $R^{12}$. A comparison in performance is made with Gaussian Mixture Models (GMMs). Our algorithm performs as well as or better than the GMMs for the simulations tested and has the added advantage of being automated with respect to parameters.
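The inverse-operator formulation can be illustrated compactly: the operator (Ap)(t) = integral of p up to t maps a density to its CDF, and density estimation amounts to regularized inversion of A applied to the empirical CDF. The sketch below uses crude ECDF smoothing as the regularizer purely for illustration; it is not the SVM algorithm of the abstract, and the bandwidth and grid are arbitrary choices:

```python
import numpy as np

def density_from_ecdf(sample, grid, bandwidth=0.3):
    """Estimate a density by differentiating a smoothed empirical CDF.

    Differentiation inverts the integration operator A that maps density to
    CDF; smoothing the ECDF first is a crude stand-in for the regularization
    that a principled inverse-problem solver (e.g. the SVM approach) provides.
    """
    sample = np.sort(sample)
    ecdf = np.searchsorted(sample, grid, side="right") / len(sample)
    dg = grid[1] - grid[0]
    # Gaussian smoothing kernel over roughly +/- 3 bandwidths
    k = np.exp(-0.5 * (np.arange(-3 * bandwidth, 3 * bandwidth + dg, dg)
                       / bandwidth) ** 2)
    k /= k.sum()
    smooth = np.convolve(ecdf, k, mode="same")
    return np.gradient(smooth, grid)

rng = np.random.default_rng(2)
x = rng.normal(size=4000)            # 1-D standard normal sample
grid = np.linspace(-4, 4, 401)
p = density_from_ecdf(x, grid)       # p[200] is the estimate at 0
```

Away from the grid edges the estimate behaves like a kernel density estimate; the ill-posedness of the inversion shows up in how sensitive the result is to the smoothing choice.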

Relevance: 30.00%

Publisher:

Abstract:

High-density, uniform GaN nanodot arrays with controllable size have been synthesized using template-assisted selective growth. GaN nanodots with average diameters of 40 nm, 80 nm, and 120 nm were selectively grown by metalorganic chemical vapor deposition (MOCVD) on a nano-patterned SiO2/GaN template. The nanoporous SiO2 on the GaN surface was created by inductively coupled plasma (ICP) etching using an anodic aluminum oxide (AAO) template as a mask. This selective regrowth results in highly crystalline GaN nanodots, as confirmed by high-resolution transmission electron microscopy. The narrow size distribution and uniform spatial positioning of the nanoscale dots offer potential advantages over self-assembled dots grown in the Stranski–Krastanow mode.

Relevance: 30.00%

Publisher:

Abstract:

The algebraic-geometric structure of the simplex, known as Aitchison geometry, is used to look at the Dirichlet family of distributions from a new perspective. A classical Dirichlet density function is expressed with respect to the Lebesgue measure on real space. We propose here to replace this measure with the Aitchison measure on the simplex, and study some properties and characteristic measures of the resulting density.

Relevance: 30.00%

Publisher:

Abstract:

In a seminal paper, Aitchison and Lauder (1985) introduced classical kernel density estimation techniques in the context of compositional data analysis. They gave two options for the choice of the kernel to be used in the kernel estimator. One of these kernels is based on the use of the alr transformation on the simplex SD jointly with the normal distribution on RD-1. However, these authors themselves recognized that this method has some deficiencies. A method for overcoming these difficulties, based on recent developments in compositional data analysis and multivariate kernel estimation theory and combining the ilr transformation with the use of the normal density with a full bandwidth matrix, was recently proposed in Martín-Fernández, Chacón and Mateu-Figueras (2006). Here we present an extensive simulation study that compares both methods in practice, thus exploring the finite-sample behaviour of both estimators.
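A minimal sketch of the two ingredients of the ilr-based estimator, the ilr transform and a normal kernel with a full bandwidth matrix, using SciPy's `gaussian_kde` as a stand-in for the bandwidth machinery actually used in the cited work (the balance basis and the Dirichlet toy sample are our own choices):

```python
import numpy as np
from scipy.stats import gaussian_kde

def ilr(x):
    """Isometric log-ratio transform of compositions in the simplex S^D.

    Uses a standard sequential-balance basis; maps each D-part composition
    to a point in R^(D-1).
    """
    D = x.shape[1]
    logx = np.log(x)
    z = np.empty((x.shape[0], D - 1))
    for i in range(1, D):
        gm = logx[:, :i].mean(axis=1)  # log geometric mean of the first i parts
        z[:, i - 1] = np.sqrt(i / (i + 1.0)) * (gm - logx[:, i])
    return z

rng = np.random.default_rng(3)
comp = rng.dirichlet([4.0, 2.0, 2.0], size=500)  # toy compositional sample
z = ilr(comp)
kde = gaussian_kde(z.T)  # normal kernel with a full (Scott-rule) bandwidth matrix
print(kde(ilr(np.array([[0.5, 0.25, 0.25]])).T))  # density at one composition
```

The estimate lives in ilr coordinates; mapping the density value back to the simplex requires the appropriate change-of-measure factor, which the simulation study above accounts for.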

Relevance: 30.00%

Publisher:

Abstract:

We present a new approach to model and classify breast parenchymal tissue. Given a mammogram, we first discover the distribution of the different tissue densities in an unsupervised manner, and second, we use this tissue distribution to perform the classification. We achieve this using a classifier based on local descriptors and probabilistic Latent Semantic Analysis (pLSA), a generative model from the statistical text literature. We studied the influence of different descriptors, such as texture and SIFT features, at the classification stage, showing that textons outperform SIFT in all cases. Moreover, we demonstrate that pLSA automatically extracts meaningful latent aspects, generating a compact tissue representation based on their densities that is useful for discrimination in mammogram classification. We show the results of tissue classification over the MIAS and DDSM datasets, and we compare our method with approaches that classified these same datasets, showing the better performance of our proposal.
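A minimal pLSA sketch via EM on a toy count matrix; the counts, topic number, and iteration budget are illustrative, and in the setting above the "words" would be texton labels and the "documents" mammograms:

```python
import numpy as np

def plsa(counts, n_topics, n_iter=100, seed=0):
    """Minimal pLSA fitted by EM on a document-by-word count matrix.

    Returns P(w|z) (topics) and P(z|d) (per-document topic mixtures).
    """
    rng = np.random.default_rng(seed)
    D, W = counts.shape
    p_w_z = rng.random((n_topics, W)); p_w_z /= p_w_z.sum(1, keepdims=True)
    p_z_d = rng.random((D, n_topics)); p_z_d /= p_z_d.sum(1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d,w) for every (document, word) pair
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]        # shape (D, Z, W)
        resp = joint / joint.sum(1, keepdims=True).clip(1e-12)
        weighted = counts[:, None, :] * resp                 # n(d,w) P(z|d,w)
        # M-step: re-estimate topics and mixtures from expected counts
        p_w_z = weighted.sum(0); p_w_z /= p_w_z.sum(1, keepdims=True).clip(1e-12)
        p_z_d = weighted.sum(2); p_z_d /= p_z_d.sum(1, keepdims=True).clip(1e-12)
    return p_w_z, p_z_d

# toy data with two obvious latent aspects: words 0-2 vs words 3-5
counts = np.array([[9, 8, 7, 0, 0, 1],
                   [8, 9, 9, 1, 0, 0],
                   [0, 1, 0, 9, 8, 9],
                   [1, 0, 0, 8, 9, 8]], dtype=float)
p_w_z, p_z_d = plsa(counts, n_topics=2)
```

After fitting, each document's mixture P(z|d) is the compact representation that a downstream classifier would consume.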

Relevance: 30.00%

Publisher:

Abstract:

Quantum molecular similarity (QMS) techniques are used to assess the response of the electron density of various small molecules to the application of a static, uniform electric field. Likewise, QMS is used to analyze the changes in electron density generated by the process of floating a basis set. The results obtained show an interrelation between the floating process, the optimum geometry, and the presence of an external field. Cases involving the Le Chatelier principle are discussed, and insight is provided into the changes of bond critical point properties, self-similarity values, and density differences.

Relevance: 30.00%

Publisher:

Abstract:

A procedure based on quantum molecular similarity measures (QMSM) has been used to compare electron densities obtained from conventional ab initio and density functional methodologies at their respective optimized geometries. This method has been applied to a series of small molecules with experimentally known properties and molecular bonds of diverse degrees of ionicity and covalency. Results show that in most cases the electron densities obtained from density functional methodologies are of similar quality to post-Hartree-Fock generalized densities. For molecules where the Hartree-Fock methodology yields erroneous results, the density functional methodology usually yields more accurate densities than those provided by second-order Møller-Plesset perturbation theory.
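One common QMSM is the overlap measure Z_AB = ∫ rho_A rho_B dV and the derived Carbó index C_AB = Z_AB / sqrt(Z_AA Z_BB); the paper may use other measures as well. A grid-based sketch, with toy 1-D Gaussians standing in for molecular densities:

```python
import numpy as np

def carbo_index(rho_a, rho_b, dv):
    """Carbó similarity index between two densities sampled on a grid.

    C_AB = Z_AB / sqrt(Z_AA * Z_BB), with the overlap integrals
    Z_PQ = \int rho_P rho_Q dV approximated by Riemann sums (volume
    element dv). C_AB = 1 iff the densities are proportional.
    """
    z = lambda p, q: np.sum(p * q) * dv
    return z(rho_a, rho_b) / np.sqrt(z(rho_a, rho_a) * z(rho_b, rho_b))

# toy 1-D "densities": two Gaussians of slightly different widths
x = np.linspace(-5, 5, 1001)
dv = x[1] - x[0]
rho1 = np.exp(-x**2)
rho2 = np.exp(-1.2 * x**2)
c = carbo_index(rho1, rho2, dv)
print(c)  # close to, but below, 1
```

For real molecules the densities come from the respective ab initio or DFT calculations on a 3-D grid, but the index itself is computed the same way.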

Relevance: 30.00%

Publisher:

Abstract:

The time-of-detection method for aural avian point counts is a new method of estimating abundance, allowing for uncertain probability of detection. The method has been specifically designed to allow for variation in singing rates of birds. It involves dividing the time interval of the point count into several subintervals and recording the detection history of the subintervals when each bird sings. The method can be viewed as generating data equivalent to closed capture–recapture information. The method is different from the distance and multiple-observer methods in that it is not required that all the birds sing during the point count. As this method is new and there is some concern as to how well individual birds can be followed, we carried out a field test of the method using simulated known populations of singing birds, using a laptop computer to send signals to audio stations distributed around a point. The system mimics actual aural avian point counts, but also allows us to know the size and spatial distribution of the populations we are sampling. Fifty 8-min point counts (broken into four 2-min intervals) using eight species of birds were simulated. The singing rate of an individual bird of a species was simulated following a Markovian process (singing bouts followed by periods of silence), which we felt was more realistic than a truly random process. The main emphasis of our paper is to compare results from species singing at (high and low) homogeneous rates per interval with those singing at (high and low) heterogeneous rates. Population size was estimated accurately for the species simulated with a high homogeneous probability of singing. Populations of simulated species with lower but homogeneous singing probabilities were somewhat underestimated. Populations of species simulated with heterogeneous singing probabilities were substantially underestimated. Underestimation was caused both by the very low detection probabilities of all distant individuals and by individuals with low singing rates also having very low detection probabilities.
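The Markovian singing process described above can be sketched as follows; all probabilities are illustrative, not values from the study:

```python
import numpy as np

def detection_history(n_intervals=4, p_start=0.5, p_stay=0.7, p_detect=0.8,
                      rng=None):
    """Simulate one bird's detection history over the point-count subintervals.

    Singing follows a two-state Markov chain (singing bout vs. silence, each
    state persisting with probability p_stay); the bird can be detected only
    in subintervals in which it sings. Probabilities are illustrative.
    """
    rng = rng or np.random.default_rng()
    singing = rng.random() < p_start
    history = []
    for _ in range(n_intervals):
        detected = singing and (rng.random() < p_detect)
        history.append(int(detected))
        # Markovian transition: bouts persist, and so does silence
        singing = rng.random() < (p_stay if singing else 1 - p_stay)
    return history

rng = np.random.default_rng(4)
histories = [detection_history(rng=rng) for _ in range(1000)]
# capture-recapture style summary: fraction of birds never detected
never = float(np.mean([sum(h) == 0 for h in histories]))
print(never)
```

The list of 0/1 histories is exactly the closed capture–recapture data the method feeds into an abundance estimator; birds with all-zero histories are the ones the estimator must account for.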

Relevance: 30.00%

Publisher:

Abstract:

The boreal forest of western Canada is being dissected by seismic lines used for oil and gas exploration. The vast amount of edge being created is leading to concerns that core habitat will be reduced for forest interior species for extended periods of time. The Ovenbird (Seiurus aurocapilla) is a boreal songbird known to be sensitive to newly created seismic lines because it does not include newly cut lines within its territory. We examined multiple hypotheses to explain potential mechanisms causing this behavior by mapping Ovenbird territories near lines with varying states of vegetation regeneration. The best model to explain line exclusion behavior included the number of neighboring conspecifics, the amount of bare ground, leaf-litter depth, and canopy closure. Ovenbirds exclude recently cut seismic lines from their territories because of lack of protective cover (lower tree and shrub cover) and because of reduced food resources due to large areas of bare ground. Food reduction and perceived predation risk effects seem to be mitigated once leaf litter (depth and extent of cover) and woody vegetation cover are restored to forest interior levels. However, as conspecific density increases, lines are more likely to be used as landmarks to demarcate territorial boundaries, even when woody vegetation cover and leaf litter are restored. This behavior can reduce territory density near seismic lines by changing the spatial distribution of territories. Landmark effects are longer lasting than the effects from reduced food or perceived predation risk because canopy height and tree density take >40 years to recover to forest interior levels. Mitigation of seismic line impacts on Ovenbirds should focus on restoring forest cover as quickly as possible after line cutting.

Relevance: 30.00%

Publisher:

Abstract:

A climatology of extratropical cyclones is produced using an objective method of identifying cyclones based on gradients of 1-km height wet-bulb potential temperature. Cyclone track and genesis density statistics are analyzed and this method is found to compare well with other cyclone identification methods. The North Atlantic storm track is reproduced along with the major regions of genesis. Cyclones are grouped according to their genesis location and the corresponding lysis regions are identified. Most of the cyclones that cross western Europe originate in the east Atlantic where the baroclinicity and the sea surface temperature gradients are weak compared to the west Atlantic. East Atlantic cyclones also have higher 1-km height relative vorticity and lower mean sea level pressure at their genesis point than west Atlantic cyclones. This is consistent with the hypothesis that they are secondary cyclones developing on the trailing fronts of preexisting “parent” cyclones. The evolution characteristics of composite west and east Atlantic cyclones have been compared. The ratio of their upper- to lower-level forcing indicates that type B cyclones are predominant in both the west and east Atlantic, with strong upper- and lower-level features. Among the remaining cyclones, there is a higher proportion of type C cyclones in the east Atlantic, whereas types A and C are equally frequent in the west Atlantic.