948 results for Hierarchical partitioning analysis


Relevance: 30.00%

Abstract:

Improvements in the analysis of microarray images are critical for accurately quantifying gene expression levels. The acquisition of accurate spot intensities directly influences the results and interpretation of statistical analyses. This dissertation discusses the implementation of a novel approach to the analysis of cDNA microarray images. We use a stellar photometric model, the Moffat function, to quantify microarray spots from nylon microarray images. The inherent flexibility of the Moffat shape model makes it ideal for quantifying microarray spots. We apply our novel approach to a Wilms' tumor microarray study and compare our results with a fixed-circle segmentation approach for spot quantification. Our results suggest that different spot feature extraction methods can have an impact on the ability of statistical methods to identify differentially expressed genes. We also used the Moffat function to simulate a series of microarray images under various experimental conditions. These simulations were used to validate the performance of various statistical methods for identifying differentially expressed genes. Our simulation results indicate that tests taking into account the dependency between mean spot intensity and variance estimation, such as the smoothened t-test, can better identify differentially expressed genes, especially when the number of replicates and mean fold change are low. The analysis of the simulations also showed that overall, a rank sum test (Mann-Whitney) performed well at identifying differentially expressed genes. Previous work has suggested the strengths of nonparametric approaches for identifying differentially expressed genes. We also show that multivariate approaches, such as hierarchical and k-means cluster analysis along with principal components analysis, are only effective at classifying samples when replicate numbers and mean fold change are high. 
Finally, we show how our stellar shape model approach can be extended to the analysis of 2D-gel images by adapting the Moffat function to take into account the elliptical nature of spots in such images. Our results indicate that stellar shape models offer a previously unexplored approach for the quantification of 2D-gel spots.
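The Moffat profile referred to above can be fitted to a spot image with standard least-squares tools. The following is a minimal sketch (synthetic data and illustrative parameter values, not the dissertation's pipeline), in which the spot intensity is taken as the analytic volume of the fitted profile above background.

```python
import numpy as np
from scipy.optimize import curve_fit

def moffat_2d(coords, amplitude, x0, y0, alpha, beta, background):
    """Circular 2D Moffat profile; beta controls how heavy the wings are."""
    x, y = coords
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return background + amplitude * (1.0 + r2 / alpha ** 2) ** (-beta)

# Synthetic 15x15 "spot" with known parameters plus a little noise
yy, xx = np.mgrid[0:15, 0:15]
true_params = (500.0, 7.0, 7.0, 2.5, 2.0, 50.0)
rng = np.random.default_rng(0)
image = moffat_2d((xx, yy), *true_params) + rng.normal(0.0, 2.0, xx.shape)

popt, _ = curve_fit(moffat_2d, (xx.ravel(), yy.ravel()), image.ravel(),
                    p0=(400.0, 7.5, 6.5, 2.0, 1.5, 40.0))
amplitude, x0, y0, alpha, beta, background = popt

# Spot intensity as the analytic volume of the profile above background
# (integral of the Moffat profile over the plane, valid for beta > 1)
spot_intensity = amplitude * np.pi * alpha ** 2 / (beta - 1.0)
```

Because the profile's volume has a closed form, the fitted intensity does not depend on an arbitrary segmentation radius, which is the main contrast with fixed-circle segmentation.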

Relevance: 30.00%

Abstract:

The hierarchical linear growth model (HLGM), a flexible and powerful analytic method, has played an increasingly important role in psychology, public health and the medical sciences in recent decades. Mostly, researchers who conduct HLGM analyses are interested in the treatment effect on individual trajectories, which can be indicated by cross-level interaction effects. However, the statistical hypothesis test for a cross-level interaction effect in HLGM only shows whether there is a significant group difference in the average rate of change, rate of acceleration or a higher polynomial effect; it fails to convey information about the magnitude of the difference between the group trajectories at specific time points. Thus, reporting and interpreting effect sizes in HLGM has received increasing emphasis in recent years, owing to the limitations of, and growing criticism directed at, statistical hypothesis testing. However, most researchers fail to report these model-implied effect sizes for comparing group trajectories, together with their corresponding confidence intervals, in HLGM analyses, because there are no appropriate, standard functions for estimating effect sizes associated with the model-implied difference between group trajectories in HLGM, and no computing packages in the popular statistical software to calculate them automatically. The present project is the first to establish appropriate computing functions to assess the standardized difference between group trajectories in HLGM. We propose two functions to estimate effect sizes for the model-based difference between group trajectories at specific time points, and we also suggest robust effect sizes to reduce the bias of the estimates.
We then applied the proposed functions to estimate the population effect sizes (d) and robust effect sizes (du) for the cross-level interaction in HLGM using three simulated datasets, compared three methods of constructing confidence intervals around d and du, and recommended the best one for application. Finally, we constructed 95% confidence intervals, using the most suitable method, for the effect sizes obtained from the three simulated datasets. The effect sizes between group trajectories for the three simulated longitudinal datasets indicated that even when the statistical hypothesis test shows no significant difference between group trajectories, the effect sizes between them can still be large at some time points. Therefore, effect sizes between group trajectories in HLGM analyses provide additional, meaningful information for assessing the group effect on individual trajectories. In addition, we compared the three methods of constructing 95% confidence intervals around the corresponding effect sizes, which account for the uncertainty of the effect sizes as estimates of the population parameter. We suggest the noncentral t-distribution-based method when its assumptions hold, and the bootstrap bias-corrected and accelerated method when they do not.
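The model-implied effect size at a given time point, and a resampling interval around it, can be sketched as follows. The subject-level intercepts and slopes, the time point and the simple percentile interval are all illustrative assumptions; the project itself recommends a noncentral-t interval when assumptions hold and the BCa bootstrap otherwise.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-subject linear growth curves y_i(t) = b0_i + b1_i * t;
# group B grows more slowly on average (all numbers invented)
n = 40
b0_a, b1_a = rng.normal(10.0, 2.0, n), rng.normal(1.0, 0.3, n)
b0_b, b1_b = rng.normal(10.0, 2.0, n), rng.normal(0.7, 0.3, n)

def cohen_d(a, b):
    """Standardized mean difference with a pooled SD."""
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2.0)
    return (a.mean() - b.mean()) / pooled

# Model-implied trajectory values for every subject at time t = 5
t = 5.0
ya, yb = b0_a + b1_a * t, b0_b + b1_b * t
d5 = cohen_d(ya, yb)

# Percentile bootstrap interval around d(5); this simpler interval only
# illustrates the resampling idea behind the recommended BCa method
boots = np.array([cohen_d(rng.choice(ya, n), rng.choice(yb, n))
                  for _ in range(2000)])
ci_low, ci_high = np.percentile(boots, [2.5, 97.5])
```

Evaluating d at several values of t reproduces the observation above: groups whose average slopes do not differ significantly can still show large standardized differences at particular time points.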

Relevance: 30.00%

Abstract:

We provide the first exploration of thallium (Tl) abundances and stable isotope compositions as potential tracers during arc lava genesis. We present a case study of lavas from the Central Island Province (CIP) of the Mariana arc, supplemented by representative sedimentary and altered oceanic crust (AOC) inputs from ODP Leg 129 Hole 801 outboard of the Mariana trench. Given the large Tl concentration contrast between the mantle and subduction inputs, coupled with previously published distinctive Tl isotope signatures of sediment and AOC, the Tl isotope system has great potential to distinguish different inputs to arc lavas. Furthermore, CIP lavas have well-established inter-island variability, providing excellent context for the examination of Tl as a new stable isotope tracer. In contrast to previous work (Nielsen et al., 2006b), we do not observe Tl enrichment or light ε205Tl (where ε205Tl is the deviation in parts per 10,000 of a sample's 205Tl/203Tl ratio from the NIST SRM 997 Tl standard) in the Jurassic-aged altered mafic ocean crust subducting outboard of the Marianas (ε205Tl = -4.4 to 0). The lack of a distinctive ε205Tl signature may be related to secular changes in ocean chemistry. Sediments representative of the major lithologies from ODP Leg 129 Hole 801 show 1-2 orders of magnitude of Tl enrichment compared to the CIP lavas, but do not record the heavy signatures (ε205Tl = -3.0 to +0.4) previously found in similar sediment types (ε205Tl > +2.5; Rehkämper et al., 2004). We find a restricted range of ε205Tl = -1.8 to -0.4 in CIP lavas, which overlaps with MORB. One lava from Guguan falls outside this range with ε205Tl = +1.2. Coupled Cs, Tl and Pb systematics of Guguan lavas suggest that this heavy Tl isotope composition may be due to preferential degassing of isotopically light Tl.
In general, the low Tl concentrations and limited isotopic range in the CIP lavas are likely due to the unexpectedly narrow range of ε205Tl found in Mariana subduction inputs, coupled with volcaniclastic, rather than pelagic, sediment as the dominant source of Tl. Much work remains to better understand the controls on Tl processing through a subduction zone. For example, Tl could be retained in residual phengite, offering the potential to explore Cs/Tl ratios as a slab thermometer. However, data on Tl partitioning in phengite (and other micas) are required before developing this application further. Establishing a database of Tl concentrations and stable isotopes in subduction zone lavas with different thermal parameters and sedimentary inputs is required for the future use of Tl as a subduction zone tracer.
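The ε205Tl notation used throughout is a simple normalized deviation from the standard; a small sketch (the SRM 997 ratio below is a nominal value assumed purely for illustration):

```python
def epsilon_205tl(ratio_sample, ratio_srm997):
    """epsilon 205Tl: deviation, in parts per 10,000, of a sample's
    205Tl/203Tl ratio from the NIST SRM 997 Tl standard."""
    return (ratio_sample / ratio_srm997 - 1.0) * 1.0e4

# Illustrative only: a sample whose ratio sits 0.018% below the standard,
# i.e. at the light end of the CIP lava range reported above
SRM997 = 2.3871  # assumed nominal 205Tl/203Tl for illustration
eps = epsilon_205tl(SRM997 * (1.0 - 1.8e-4), SRM997)  # -> -1.8
```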

Relevance: 30.00%

Abstract:

Providing accurate maps of coral reefs where the spatial scale and labels of the mapped features correspond to map units appropriate for examining biological and geomorphic structures and processes is a major challenge for remote sensing. The objective of this work is to assess the accuracy and relevance of the process used to derive geomorphic zone and benthic community zone maps for three western Pacific coral reefs produced from multi-scale, object-based image analysis (OBIA) of high-spatial-resolution multi-spectral images, guided by field survey data. Three Quickbird-2 multi-spectral data sets from reefs in Australia, Palau and Fiji and georeferenced field photographs were used in a multi-scale segmentation and object-based image classification to map geomorphic zones and benthic community zones. A per-pixel approach was also tested for mapping benthic community zones. Validation of the maps and comparison to past approaches indicated the multi-scale OBIA process enabled field data, operator field experience and a conceptual hierarchical model of the coral reef environment to be linked to provide output maps at geomorphic zone and benthic community scales on coral reefs. The OBIA mapping accuracies were comparable with previously published work using other methods; however, the classes mapped were matched to a predetermined set of features on the reef.
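The contrast between per-pixel and object-based classification that the study evaluates can be illustrated with a toy nearest-centroid example in numpy. The segment labels, band values and class centroids are all invented; real OBIA uses multi-scale segmentation and far richer features.

```python
import numpy as np

def nearest_class(features, centroids):
    """Assign each feature row to the nearest class centroid (Euclidean)."""
    d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Toy single-band 4x6 image; segments come from a prior segmentation step
image = np.array([[0.1, 0.2, 0.1, 0.8, 0.9, 0.8],
                  [0.2, 0.1, 0.5, 0.9, 0.8, 0.9],
                  [0.1, 0.2, 0.1, 0.8, 0.2, 0.9],   # note the noisy 0.2 pixel
                  [0.2, 0.1, 0.2, 0.9, 0.8, 0.8]])
segments = np.array([[0, 0, 0, 1, 1, 1]] * 4)       # object label per pixel
centroids = np.array([[0.15], [0.85]])              # class means, e.g. sand/coral

# Per-pixel classification: the noisy pixel at (2, 4) gets the wrong class
per_pixel = nearest_class(image.reshape(-1, 1), centroids).reshape(image.shape)

# Object-based classification: average each object first, classify once
object_means = np.array([[image[segments == s].mean()] for s in (0, 1)])
object_based = nearest_class(object_means, centroids)[segments]
```

Averaging over objects before classifying is what lets OBIA output map units at geomorphic-zone rather than pixel scale, at the cost of committing every pixel in a segment to one label.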

Relevance: 30.00%

Abstract:

Idea Management Systems are an implementation of the open innovation notion in the Web environment, with the use of crowdsourcing techniques. In this area, one of the popular methods for coping with large amounts of data is duplicate detection. With our research, we address the question of whether there is room to introduce more relationship types and to what degree this change would affect the amount of idea metadata and its diversity. Furthermore, based on hierarchical dependencies between idea relationships and relationship transitivity, we propose a number of methods for dataset summarization. To evaluate our hypotheses we annotate idea datasets with new relationships, using the contemporary methods of Idea Management Systems to detect idea similarity. Having datasets with relationship annotations at our disposal, we determine whether idea features not related to the idea topic (e.g. innovation size) bear any relation to how annotators perceive types of idea similarity or dissimilarity.
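One way dataset summarization can exploit relationship transitivity, as suggested above, is to collapse transitively connected duplicates into groups that a single representative idea can stand for. A minimal union-find sketch (the pair list is invented, not one of the annotated datasets):

```python
class UnionFind:
    """Disjoint sets with path compression; enough for grouping duplicates."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # compress path
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def summarize(n_ideas, duplicate_pairs):
    """Collapse transitively connected duplicate ideas into groups."""
    uf = UnionFind(n_ideas)
    for a, b in duplicate_pairs:
        uf.union(a, b)
    groups = {}
    for idea in range(n_ideas):
        groups.setdefault(uf.find(idea), []).append(idea)
    return list(groups.values())

# Ideas 0-1 and 1-2 are marked duplicates, so 0-2 form one group by transitivity
groups = summarize(6, [(0, 1), (1, 2), (4, 5)])
```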

Relevance: 30.00%

Abstract:

A mobile ad hoc network (MANET) is a collection of wireless mobile nodes that can dynamically configure a network without a fixed infrastructure or centralized administration. This makes it ideal for emergency and rescue scenarios, where information sharing is essential and should occur as soon as possible. This article discusses which of the routing strategies for mobile ad hoc networks (proactive, reactive and hierarchical) performs better in such scenarios. Using a real urban area as the setting for the emergency and rescue scenario, we calculate the node density and the mobility model needed for validation. The NS2 simulator has been used in our study. We also show that hierarchical routing strategies are better suited to this type of scenario.
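Validation setups like the one described typically drive the simulator with a mobility model. A minimal random-waypoint sketch in Python (the area size, speed range and pause time are illustrative; NS2 consumes movement traces in its own scenario format):

```python
import math
import random

def random_waypoint(width, height, v_min, v_max, pause, steps, seed=0):
    """Minimal random-waypoint mobility trace as (time, x, y) tuples:
    the node picks a random destination, travels there at a random
    speed, pauses, and repeats."""
    rng = random.Random(seed)
    t, x, y = 0.0, rng.uniform(0.0, width), rng.uniform(0.0, height)
    trace = [(t, x, y)]
    for _ in range(steps):
        nx, ny = rng.uniform(0.0, width), rng.uniform(0.0, height)
        speed = rng.uniform(v_min, v_max)
        t += math.hypot(nx - x, ny - y) / speed + pause
        x, y = nx, ny
        trace.append((t, x, y))
    return trace

# Hypothetical 1000 m x 1500 m urban area, pedestrian-to-vehicle speeds (m/s)
trace = random_waypoint(1000.0, 1500.0, 0.5, 15.0, pause=5.0, steps=10)
```

Node density then follows directly as the node count divided by the covered area, which is how scenarios of comparable load can be set up across the three routing strategies.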

Relevance: 30.00%

Abstract:

Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. To produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and present new methodologies for each of them. 
Techniques based on interval extensions have yielded accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art Statistical Modified Affine Arithmetic (MAA) methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with such control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% from simulation-based reference values. A known drawback of interval-extension techniques is the combinatorial explosion of terms as the targeted systems grow, which leads to scalability problems. To address this issue we present a clustered noise injection technique that groups the signals in the system, introduces the noise terms of each group independently, and then combines the results. In this way the number of noise sources in the system at a given time is controlled and, because of this, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. 
This Thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times, through two novel techniques that reduce execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization. Second, the incremental method exploits the fact that, although a given confidence level must be guaranteed for the final results of the optimization, more relaxed levels, and hence considerably fewer simulation samples, suffice in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be reduced by factors of up to ×240 for small/medium-sized problems. Finally, this book introduces HOPLITE, an automated, flexible and modular quantization framework that implements the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new quantization methodologies. We describe its workflow, justify the design decisions taken, explain its public API and give a step-by-step demonstration of its execution. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
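The greedy word-length search that the interpolative and incremental methods accelerate can be sketched with toy cost and noise models. The quadratic noise model and area-as-total-bits cost below are illustrative stand-ins, not HOPLITE's models.

```python
def greedy_wordlengths(n_signals, start_bits, min_bits, noise_budget):
    """Greedy descent: repeatedly remove one bit from a signal whose
    reduction still keeps total quantization noise within the budget,
    stopping when no single-bit reduction is feasible."""
    def noise(b):   # toy model: each signal contributes 2^(-2 * word-length)
        return sum(2.0 ** (-2 * w) for w in b)

    def cost(b):    # toy model: hardware cost proportional to total bits
        return sum(b)

    bits = [start_bits] * n_signals
    improved = True
    while improved:
        improved = False
        best = None
        for i in range(n_signals):
            if bits[i] <= min_bits:
                continue
            trial = bits[:]
            trial[i] -= 1   # try shaving one bit off signal i
            if noise(trial) <= noise_budget and (best is None
                                                 or cost(trial) < cost(best)):
                best = trial
        if best is not None:
            bits, improved = best, True
    return bits

wordlengths = greedy_wordlengths(4, start_bits=16, min_bits=4, noise_budget=1e-4)
```

Each accepted move requires evaluating the noise model once per signal, which is exactly where replacing Monte-Carlo noise estimates with an interpolator, or running early moves at relaxed confidence levels, pays off.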

Relevance: 30.00%

Abstract:

The proton–sucrose symporter mediates the key transport step in the resource distribution system that allows many plants to function as multicellular organisms. In the results reported here, we identify sucrose as a signaling molecule in a previously undescribed signal-transduction pathway that regulates the symporter. Sucrose symporter activity declined in plasma membrane vesicles isolated from leaves fed exogenous sucrose via the xylem transpiration stream. Symporter activity dropped to 35–50% of water controls when the leaves were fed 100 mM sucrose and to 20–25% of controls with 250 mM sucrose. In contrast, alanine symporter and glucose transporter activities did not change in response to sucrose treatments. Decreased sucrose symporter activity was detectable after 8 h and reached a maximum by 24 h. Kinetic analysis of transport activity showed a decrease in Vmax. RNA gel blot analysis revealed a decrease in symporter message levels, suggesting a drop in transcriptional activity or a decrease in mRNA stability. Control experiments showed that these responses were not the result of changing osmotic conditions. Equal molar concentrations of hexoses did not elicit the response, and mannoheptulose, a hexokinase inhibitor, did not block the sucrose effect. These data are consistent with a sucrose-specific response pathway that is not mediated by hexokinase as the sugar sensor. Sucrose-dependent changes in the sucrose symporter were reversible, suggesting this sucrose-sensing pathway can modulate transport activity as a function of changing sucrose concentrations in the leaf. These results demonstrate the existence of a signaling pathway that can control assimilate partitioning at the level of phloem translocation.
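The kinetic analysis mentioned, a decrease in Vmax with an unchanged Km, can be illustrated by fitting the Michaelis-Menten equation to uptake data. The values below are invented for illustration, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Transport rate as a function of substrate concentration."""
    return vmax * s / (km + s)

# Invented uptake data (arbitrary units) for control vs sucrose-fed leaves
s = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 5.0, 10.0])   # substrate, mM
v_control = michaelis_menten(s, 100.0, 1.0)
v_fed = michaelis_menten(s, 40.0, 1.0)                 # Vmax down, Km unchanged

(vmax_c, km_c), _ = curve_fit(michaelis_menten, s, v_control, p0=(50.0, 0.5))
(vmax_f, km_f), _ = curve_fit(michaelis_menten, s, v_fed, p0=(50.0, 0.5))
```

A drop in Vmax at constant Km is the kinetic signature of fewer active transporters rather than altered substrate affinity, which is consistent with the observed decrease in symporter message levels.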

Relevance: 30.00%

Abstract:

Although vertebrate cytoplasmic dynein can move to the minus ends of microtubules in vitro, its ability to translocate purified vesicles on microtubules depends on the presence of an accessory complex known as dynactin. We have cloned and characterized a novel gene, NIP100, which encodes the yeast homologue of the vertebrate dynactin complex protein p150Glued. Like strains lacking the cytoplasmic dynein heavy chain Dyn1p or the centractin homologue Act5p, nip100Δ strains are viable but undergo a significant number of failed mitoses in which the mitotic spindle does not properly partition into the daughter cell. Analysis of spindle dynamics by time-lapse digital microscopy indicates that the precise role of Nip100p during anaphase is to promote the translocation of the partially elongated mitotic spindle through the bud neck. Consistent with the presence of a true dynactin complex in yeast, Nip100p exists in a stable complex with Act5p as well as Jnm1p, another protein required for proper spindle partitioning during anaphase. Moreover, genetic depletion experiments indicate that the binding of Nip100p to Act5p is dependent on the presence of Jnm1p. Finally, we find that a fusion of Nip100p to the green fluorescent protein localizes to the spindle poles throughout the cell cycle. Taken together, these results suggest that the yeast dynactin complex and cytoplasmic dynein together define a physiological pathway that is responsible for spindle translocation late in anaphase.

Relevance: 30.00%

Abstract:

We introduce a method of functionally classifying genes by using gene expression data from DNA microarray hybridization experiments. The method is based on the theory of support vector machines (SVMs). SVMs are considered a supervised computer learning method because they exploit prior knowledge of gene function to identify unknown genes of similar function from expression data. SVMs avoid several problems associated with unsupervised clustering methods, such as hierarchical clustering and self-organizing maps. SVMs have many mathematical features that make them attractive for gene expression analysis, including their flexibility in choosing a similarity function, sparseness of solution when dealing with large data sets, the ability to handle large feature spaces, and the ability to identify outliers. We test several SVMs that use different similarity metrics, as well as some other supervised learning methods, and find that the SVMs best identify sets of genes with a common function using expression data. Finally, we use SVMs to predict functional roles for uncharacterized yeast ORFs based on their expression data.
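A minimal version of the supervised SVM classification described, using scikit-learn on a toy expression matrix. The data, class structure and kernel choice are illustrative only; the paper evaluates several similarity metrics on real yeast expression data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy expression matrix: 40 genes x 10 experiments. "Positive" genes share
# a common expression profile; the rest are noise (all values invented).
profile = rng.normal(0.0, 1.0, 10)
pos = profile + rng.normal(0.0, 0.3, (20, 10))
neg = rng.normal(0.0, 1.0, (20, 10))
X = np.vstack([pos, neg])
y = np.array([1] * 20 + [0] * 20)

train = np.r_[0:15, 20:35]    # 15 genes of each class, with known function
test = np.r_[15:20, 35:40]    # 5 held-out genes of each class

# RBF kernel as one choice of similarity function; the supervised step is
# what distinguishes this from hierarchical clustering or SOMs
clf = SVC(kernel='rbf', C=1.0).fit(X[train], y[train])
accuracy = clf.score(X[test], y[test])
```

Predicting on genuinely uncharacterized rows, rather than a held-out test split, is the analogue of assigning functional roles to unannotated yeast ORFs.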

Relevance: 30.00%

Abstract:

Fiber cell initiation in the epidermal cells of cotton (Gossypium hirsutum L.) ovules represents a unique example of trichome development in higher plants. Little is known about the molecular and metabolic mechanisms controlling this process. Here we report a comparative analysis of a fiberless seed (fls) mutant (lacking fibers) and a normal (FLS) mutant to better understand the initial cytological events in fiber development and to analyze the metabolic changes that are associated with the loss of a major sink for sucrose during cellulose biosynthesis in the mutant seeds. On the day of anthesis (0 DAA), the mutant ovular epidermal cells lacked the typical bud-like projections that are seen in FLS ovules and are required for commitment to the fiber development pathway. Cell-specific gene expression analyses at 0 DAA showed that sucrose synthase (SuSy) RNA and protein were undetectable in fls ovules but were in abundant, steady-state levels in initiating fiber cells of the FLS ovules. Tissue-level analyses of developing seeds 15 to 35 DAA revealed an altered temporal pattern of SuSy expression in the mutant relative to the normal genotype. Whether the altered programming of SuSy expression is the cause or the result of the mutation is unknown. The developing seeds of the fls mutant have also shown several correlated changes that represent altered carbon partitioning in seed coats and cotyledons as compared with the FLS genotype.

Relevance: 30.00%

Abstract:

The enzyme 4-coumarate:coenzyme A ligase (4CL) is important in providing activated thioester substrates for phenylpropanoid natural product biosynthesis. We tested different hybrid poplar (Populus trichocarpa × Populus deltoides) tissues for the presence of 4CL isoforms by fast-protein liquid chromatography and detected a minimum of three 4CL isoforms. These isoforms shared similar hydroxycinnamic acid substrate-utilization profiles and were all inactive against sinapic acid, but instability of the native forms precluded extensive further analysis. 4CL cDNA clones were isolated and grouped into two major classes, the predicted amino acid sequences of which were 86% identical. Genomic Southern blots showed that the cDNA classes represent two poplar 4CL genes, and northern blots provided evidence for their differential expression. Recombinant enzymes corresponding to the two genes were expressed using a baculovirus system. The two recombinant proteins had substrate utilization profiles similar to each other and to the native poplar 4CL isoforms (4-coumaric acid > ferulic acid > caffeic acid; there was no conversion of sinapic acid), except that both had relatively high activity toward cinnamic acid. These results are discussed with respect to the role of 4CL in the partitioning of carbon in phenylpropanoid metabolism.

Relevance: 30.00%

Abstract:

Ethnopharmacological relevance and background: “Dictamnus” was a popular name for a group of medicinal herbaceous plant species of the Rutaceae and Lamiaceae which have been used since the 4th century BCE for gynaecological problems and other illnesses, and which still appear in numerous ethnobotanical records. Aims: This research has four overarching aims: determining the historical evolution of medical preparations labelled “Dictamnus” and the different factors affecting this long-standing herbal tradition; deciphering and differentiating those medicinal uses of “Dictamnus” which strictly correspond to Dictamnus (Rutaceae) from those of Origanum dictamnus and other Lamiaceae species; quantitatively assessing the dependence of modern Dictamnus ethnobotanical records on herbal books and the pharmaceutical tradition; and determining whether differences exist between Western and Eastern Europe with regard to Dictamnus albus uses in ethnopharmacology and ethnomedicine. Methods: An exhaustive review of herbals, classical pharmacopoeias, and the ethnobotanical and ethnopharmacological literature was conducted. Reported uses were standardized according to the International Classification of Diseases-10 and analysed systematically, with multivariate analysis using factorial, hierarchical and neighbour-joining methods. Results and discussion: The popular concept “Dictamnus” includes Origanum dictamnus L., Ballota pseudodictamnus (L.) Benth. and B. acetabulosa (L.) Benth. (Lamiaceae), as well as Dictamnus albus L. and D. hispanicus Webb ex Willk. (Rutaceae), with 86 different types of uses. Between 1000 and 1700 CE numerous complex preparations with “Dictamnus” were used in the treatment of 35 different pathologies. On biogeographical grounds the widespread D. albus is a far more likely prototypical “Dictamnus” than the Cretan endemic Origanum dictamnus, although both form integral parts of the “Dictamnus” complex.
Evidence exists for a sufficiently long and coherent tradition of using D. albus and D. hispanicus to treat 47 different categories of diseases. Conclusions: This approach is a model for understanding the cultural history of plants and their role as resources for health care. “Dictamnus” shows how the transmission of traditional knowledge about materia medica over 26 centuries represents remarkable levels of development and innovation. All this leads us to call attention to D. albus and D. hispanicus, which are highly promising as potential herbal drug leads. The next steps should be to systematically analyse the phytochemical, pharmacological and clinical evidence and to develop safety, pharmacology and toxicology profiles of the traditional preparations.
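The hierarchical portion of the multivariate analysis can be sketched by clustering a presence/absence matrix of use categories. The taxa rows and category columns below are invented for illustration, not the paper's dataset.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical presence/absence matrix: rows are taxa, columns are
# ICD-10-style use categories (values invented for illustration)
taxa = ["D. albus", "D. hispanicus", "O. dictamnus", "B. pseudodictamnus"]
uses = np.array([[1, 1, 1, 0, 1, 0],
                 [1, 1, 1, 0, 0, 0],
                 [0, 1, 0, 1, 1, 1],
                 [0, 1, 0, 1, 0, 1]], dtype=bool)

# Average-linkage hierarchical clustering on Jaccard distances between
# the use profiles, then a two-cluster cut
Z = linkage(uses, method='average', metric='jaccard')
clusters = fcluster(Z, t=2, criterion='maxclust')
```

With these toy profiles the two Rutaceae taxa and the two Lamiaceae taxa pair up, which is the kind of separation the factorial and neighbour-joining analyses test for on the real use records.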

Relevance: 30.00%

Abstract:

Before puberty, there are only small sex differences in body shape and composition. During adolescence, sexual dimorphism in bone, lean, and fat mass increases, giving rise to the greater size and strength of the male skeleton. The question remains as to whether there are sex differences in bone strength or simply differences in anthropometric dimensions. To test this, we applied hip structural analysis (HSA) to derive strength and geometric indices of the femoral neck using bone densitometry (DXA) scans from a 6-year longitudinal study of Canadian children. Seventy boys and sixty-eight girls were assessed annually for 6 consecutive years. At the femoral neck, cross-sectional area (CSA, an index of axial strength), subperiosteal width (SPW), and section modulus (Z, an index of bending strength) were determined, and data were analyzed using a hierarchical (random effects) modeling approach. Biological age (BA) was defined as years from age at peak height velocity (PHV). When BA, stature, and total-body lean mass (TB lean) were controlled, boys had significantly higher Z than girls at all maturity levels (P < 0.05). Controlling height and TB lean for CSA demonstrated a significant independent sex-by-BA interaction effect (P < 0.05). That is, CSA was greater in boys before PHV but higher in girls after PHV. The coefficients contributing the greatest proportion to the prediction of CSA, SPW, and Z were height and lean mass. Because the significant sex difference in Z was relatively small and close to the error of measurement, we questioned its biological significance. The sex difference in bending strength was therefore explained by anthropometric differences. In contrast to recent hypotheses, we conclude that the CSA-lean ratio does not imply altered mechanosensitivity in girls because bending dominates loading at the neck, and the Z-lean ratio remained similar between the sexes throughout adolescence.
That is, despite the greater CSA in girls, the bone is strategically placed to resist bending; hence, the bones of girls and boys adapt to mechanical challenges in a similar way. (C) 2004 Elsevier Inc. All rights reserved.
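The relationship between CSA, SPW and the bending-strength index Z can be illustrated with a simplified annular model of the femoral neck cross-section. Actual HSA derives these indices from DXA mass profiles; the geometry and input values below are illustrative assumptions only.

```python
import math

def annulus_strength(csa, spw):
    """Given cortical cross-sectional area (CSA, cm^2) and subperiosteal
    width (SPW, cm), model the neck as an annulus: recover the inner
    radius, then compute the cross-sectional moment of inertia I and the
    section modulus Z = I / (SPW / 2), an index of bending strength."""
    r_outer = spw / 2.0
    r_inner = math.sqrt(max(r_outer ** 2 - csa / math.pi, 0.0))
    inertia = math.pi / 4.0 * (r_outer ** 4 - r_inner ** 4)
    return inertia, inertia / r_outer

# Illustrative adolescent values: CSA ~2.6 cm^2 over a 3.0 cm wide neck
inertia, z = annulus_strength(2.6, 3.0)
```

Because Z depends on where the bone material sits relative to the bending axis, not just on how much there is, two skeletons with the same CSA but different SPW can differ in Z, which is the distinction the study draws between bone quantity and bone strength.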