945 results for Point density analysis
Abstract:
This paper analyzes the relationship between the spatial density of economic activity and interregional differences in industrial labour productivity in Spain over the period 1860-1999. In the spirit of Ciccone and Hall (1996) and Ciccone (2002), we analyze the evolution of this relationship over the long term in Spain. Using data for the period 1860-1999, we show the existence of an agglomeration effect linking the density of economic activity with labour productivity in industry. This effect has been present since the beginning of the industrialization process in the middle of the 19th century but has weakened over time. The estimated elasticity of labour productivity with respect to employment density was close to 8% in the subperiod 1860-1900, fell to around 7% in 1914-1930 and to 4% in 1965-1979, and became insignificant in the final subperiod, 1985-1999. At the end of the period analyzed there is no evidence of net agglomeration effects in industry. This result could be explained by a substantial increase in congestion effects in large industrial metropolitan areas that would have offset the centripetal, or agglomeration, forces at work. It is also consistent with the evidence of a dispersion of industrial activity in Spain during recent decades.
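A minimal sketch, on synthetic provincial data, of the Ciccone-Hall style regression behind the elasticities quoted above: log labour productivity regressed on log employment density, with the slope as the agglomeration elasticity. Variable names and numbers are illustrative, not the paper's dataset.

```python
# Sketch of a Ciccone-Hall style agglomeration regression (illustrative only):
# ln(productivity_i) = alpha + beta * ln(employment_density_i) + e_i,
# where beta is the elasticity the abstract reports (8% early, insignificant late).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_provinces = 50
log_density = rng.normal(4.0, 1.0, n_provinces)              # ln(workers per km^2)
log_productivity = 1.0 + 0.08 * log_density + rng.normal(0, 0.05, n_provinces)

X = sm.add_constant(log_density)
fit = sm.OLS(log_productivity, X).fit()
lo, hi = fit.conf_int()[1]
print(f"estimated elasticity: {fit.params[1]:.3f} (95% CI {lo:.3f}..{hi:.3f})")
```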
Abstract:
Summary: Detection, analysis and monitoring of slope movements by high-resolution digital elevation models. Slope movements, such as rockfalls, rockslides, shallow landslides or debris flows, are frequent in many mountainous areas. These natural hazards endanger inhabitants and infrastructure, making it necessary to assess the hazard and risk caused by these phenomena. This PhD thesis explores various approaches using digital elevation models (DEMs), and particularly high-resolution DEMs created by aerial or terrestrial laser scanning (TLS), that contribute to the assessment of slope movement hazard at regional and local scales.

The regional detection of areas prone to rockfalls and large rockslides uses different morphologic criteria or geometric instability factors derived from DEMs, i.e. the steepness of the slope, the presence of discontinuities that enable a sliding mechanism, and the denudation potential. The combination of these factors leads to a map of susceptibility to rockfall initiation that is in good agreement with field studies, as shown with the example of the Little Mill Campground area (Utah, USA). Another case study, in the Illgraben catchment in the Swiss Alps, highlighted the link between areas with a high denudation potential and actual rockfall areas.

Techniques for a detailed analysis and characterization of slope movements based on high-resolution DEMs have been developed for specific, localized sites, i.e. ancient slide scars and presently active or potential slope instabilities. The analysis of a site's characteristics mainly focuses on rock slopes and includes structural analyses (orientation of discontinuities); estimation of the spacing, persistence and roughness of discontinuities; failure mechanisms based on the structural setting; and volume calculations. For the volume estimation, a new 3D approach was tested to reconstruct the topography before a landslide or to construct the basal failure surface of an active or potential instability. The rockslides at Åknes, Tafjord and Rundefjellet in western Norway were principally used as study sites to develop and test the different techniques.

The monitoring of slope instabilities investigated in this PhD thesis is essentially based on multitemporal (or sequential) high-resolution DEMs, in particular sequential point clouds acquired by TLS. The changes in topography due to slope movements can be detected and quantified from sequential TLS datasets, notably by shortest-distance comparisons revealing the 3D slope movements over the entire region of interest. A detailed analysis of rock slope movements is based on the affine transformation between an initial and a final state of the rock mass and its decomposition into translational and rotational movements. Monitoring using TLS was very successful on the fast-moving Eiger rockslide in the Swiss Alps, as well as on the active rockslides of Åknes and Nordnesfjellet (northern Norway). One of the main achievements on the Eiger and Åknes rockslides is the combination of each site's morphology and structural setting with the measured slope movements to produce coherent instability models. Both case studies also highlighted a strong control of the structures in the rock mass on the sliding directions.
TLS was also used to monitor slope movements in soils, such as landslides in sensitive clays in Québec (Canada), shallow landslides on river banks (Sorge River, Switzerland) and a debris flow channel (Illgraben). The PhD thesis underlines the broad uses of high-resolution DEMs, and especially of TLS, in the detection, analysis and monitoring of slope movements. Future studies should explore in more depth the different techniques and approaches developed and used in this PhD thesis, improve them, and better integrate the findings into current hazard assessment practices and slope stability models.
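A minimal sketch of the shortest-distance comparison between sequential TLS point clouds described above, using a k-d tree; the point clouds are synthetic, and real TLS data would first need co-registration of the epochs (e.g. by ICP).

```python
# Sketch of the "shortest distance" comparison between two sequential TLS
# point clouds, as used to detect and quantify slope movements (illustrative).
import numpy as np
from scipy.spatial import cKDTree

def shortest_distances(cloud_t0, cloud_t1):
    """For every point of the later epoch, distance to the nearest point
    of the earlier epoch; large values flag topographic change."""
    tree = cKDTree(cloud_t0)
    d, _ = tree.query(cloud_t1, k=1)
    return d

# Toy example: a 1 m displacement of part of the slope between epochs.
rng = np.random.default_rng(1)
epoch0 = rng.uniform(0, 100, (10_000, 3))
epoch1 = epoch0.copy()
epoch1[:2_000, 2] -= 1.0                       # simulated slide of the lower block
moved = shortest_distances(epoch0, epoch1) > 0.5
print(f"{moved.sum()} of {len(epoch1)} points flagged as displaced")
```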
Abstract:
Introduction: Survival of children born prematurely or with very low birth weight has increased dramatically, but the long-term developmental outcome remains unknown. Many children have deficits in cognitive capacities, in particular involving executive domains, and those disabilities are likely to involve a central nervous system deficit. To understand their neurostructural origin, we use DTI. Structurally segregated and functionally specialized regions of the cerebral cortex are interconnected by a dense network of axonal pathways. We noninvasively map these pathways across cortical hemispheres and construct normalized structural connection matrices derived from DTI MR tractography. Group comparisons of brain connectivity reveal significant changes in fiber density in children with poor intrauterine growth and extremely premature children (gestational age <28 weeks at birth) compared to control subjects. These changes suggest a link between cortico-axonal pathways and the central nervous system deficit. Methods: Sixty prematurely born children (5-6 years old) were scanned on a clinical 3T scanner (Magnetom Trio, Siemens Medical Solutions, Erlangen, Germany) at two hospitals (HUG, Geneva and CHUV, Lausanne). For each subject, T1-weighted MPRAGE images (TR/TE=2500/2.91, TI=1100, resolution=1x1x1mm, matrix=256x154) and DTI images (30 directions, TR/TE=10200/107, in-plane resolution=1.8x1.8x2mm, 64 axial slices, matrix=112x112) were acquired. Parents provided written consent, with prior ethical board approval. The extraction of the whole-brain structural connectivity matrix was performed following Cammoun (2009) and Hagmann (2008). The MPRAGE images were registered to the non-weighted DTI image using an affine registration, and WM-GM segmentation was performed on it. In order to have equal anatomical localization among subjects, 66 cortical regions with anatomical landmarks were created using the curvature information, i.e. sulci and gyri (Cammoun et al, 2007; Fischl et al, 2004; Desikan et al, 2006), with the FreeSurfer software (http://surfer.nmr.mgh.harvard.edu/). Tractography was performed in WM using an algorithm especially designed for DTI/DSI data (Hagmann et al., 2007), and both sets of information were then combined in a matrix. Each row and column of the matrix corresponds to a particular ROI. Each cell of index (i,j) represents the fiber density of the bundle connecting ROIs i and j. Subdividing each cortical region, we obtained four connectivity matrices of different resolutions (33, 66, 125 and 250 ROIs/hemisphere) for each subject. Subjects were sorted into three groups, namely (1) control, (2) intrauterine growth restriction (IUGR), and (3) extreme prematurity (EP), depending on their gestational age, weight and percentile-weight score at birth. Group-to-group comparisons were performed between groups (1)-(2) and (1)-(3). The mean age at examination was similar across the three groups. Results: Quantitative analyses were performed between groups to determine fiber-density differences. For each group, a mean connectivity matrix at the 33 ROIs/hemisphere resolution was computed. In addition, for all matrix resolutions (33, 66, 125, 250 ROIs/hemisphere), the number of bundles was computed and averaged. As seen in figure 1, EP and IUGR subjects present an overall reduction of fiber density in both interhemispheric and intrahemispheric connections. This is given quantitatively in table 1. IUGR subjects present a higher percentage of missing fiber bundles than EP subjects when compared to control subjects (~16% against 11%).
When comparing both groups to control subjects, the occipito-parietal regions of EP subjects appear less interhemispherically connected, while their intrahemispheric networks lack fiber density in the limbic system. Children born with IUGR show reductions in interhemispheric connections similar to those of the EP group. However, the connections of the cuneus and precuneus with the precentral and paracentral lobes are even weaker than in the EP group. For the intrahemispheric connections, the IUGR group presents a loss of fiber density between the deep gray matter structures (striatum) and the frontal and middle frontal poles, connections typically involved in the control of executive functions. For the qualitative analysis, a t-test comparing the number of bundles (p-value<0.05) gave some preliminary significant results (figure 2). Again, even if both IUGR and EP subjects appear to have significantly fewer connections compared to the control subjects, the IUGR cohort seems to present a greater lack of fiber density, especially in the connections linking the cuneus, precuneus and parietal areas. In terms of fiber density, preliminary Wilcoxon tests seem to validate the hypothesis set out by the previous analysis. Conclusions: The goal of this study was to determine the effect of extreme prematurity and poor intrauterine growth on neurostructural development at the age of 6 years. These data indicate that differences in connectivity may well be the basis for the neurostructural and neuropsychological deficits described in these populations in the absence of overt brain lesions (Inder TE, 2005; Borradori-Tolsa, 2004; Dubois, 2008). Indeed, we suggest that IUGR and prematurity lead to an alteration of connectivity between brain structures, especially in the occipito-parietal and frontal lobes for EP and the frontal and middle temporal poles for IUGR. Overall, IUGR children show a greater loss of connectivity across the overall connectivity matrix than EP children. In both cases, the localized alteration of connectivity suggests a direct link between cortico-axonal pathways and the central nervous system deficit. Our next step is to link these connectivity alterations to performance in executive function tests.
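A minimal sketch of the kind of edge-wise group comparison of connectivity matrices reported above; the ROI-by-ROI fiber-density matrices are simulated stand-ins for the DTI-derived ones.

```python
# Sketch of a group comparison of structural connectivity matrices
# (ROI x ROI fiber densities), mirroring the bundle-wise t-test described
# above; the synthetic matrices stand in for the real DTI-derived ones.
import numpy as np
from scipy import stats

n_roi, n_ctrl, n_iugr = 66, 20, 20
rng = np.random.default_rng(2)
ctrl = rng.gamma(2.0, 1.0, (n_ctrl, n_roi, n_roi))
iugr = rng.gamma(2.0, 0.85, (n_iugr, n_roi, n_roi))    # globally reduced density

t, p = stats.ttest_ind(ctrl, iugr, axis=0)             # edge-wise t-test
significant = p < 0.05
print(f"edges with altered fiber density (uncorrected p<0.05): "
      f"{significant.sum()} / {n_roi * n_roi}")
```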
Abstract:
Radioactive soil-contamination mapping and risk assessment is a vital issue for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction realization accompanied (in some cases) by estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern that allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping, based on machine learning and stochastic simulations in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models for prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
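A minimal sketch of the contrast the review draws between single-value regression prediction and stochastic simulation: a Gaussian process serves as a simple stand-in model, and the threshold and data are invented for illustration.

```python
# Sketch of the regression-vs-simulation contrast: a single prediction gives
# one map, while conditional simulation draws many equally probable
# realizations that support uncertainty and risk quantification.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)
x_obs = rng.uniform(0, 10, (30, 1))                       # measurement locations
y_obs = np.sin(x_obs).ravel() + rng.normal(0, 0.1, 30)    # 'fallout' values

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.01)
gp.fit(x_obs, y_obs)

x_grid = np.linspace(0, 10, 200).reshape(-1, 1)
mean = gp.predict(x_grid)                                  # single-value prediction
realizations = gp.sample_y(x_grid, n_samples=100, random_state=0)

# Probabilistic mapping: chance of exceeding a contamination threshold.
p_exceed = (realizations > 0.8).mean(axis=1)
print("max of single-value prediction:", round(mean.max(), 3))
print("max exceedance probability on grid:", p_exceed.max())
```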
Abstract:
The prediction of rockfall travel distance below a rock cliff is an indispensable activity in rockfall susceptibility, hazard and risk assessment. Although the size of the detached rock mass may differ considerably at each specific rock cliff, small rockfalls (<100 m3) are the most frequent process. Empirical models may provide suitable information for predicting the travel distance of small rockfalls over an extensive area at a medium scale (1:100,000-1:25,000). "Solà d'Andorra la Vella" is a rocky slope located close to the town of Andorra la Vella, where the government has been documenting rockfalls since 1999. This documentation consists of mapping the release point and the individual fallen blocks immediately after each event. The documentation of historical rockfalls by morphological analysis, eyewitness accounts and historical images serves to increase the available information. In total, data from twenty small rockfalls have been gathered, comprising about a hundred individual fallen rock blocks. The data acquired have been used to check the reliability of the main empirical models widely adopted (the reach and shadow angle models) and to analyse the influence of the parameters affecting the travel distance (rockfall size, height of fall along the rock cliff and volume of the individual fallen rock block). For predicting travel distances on maps at medium scales, a method based on the "reach probability" concept has been proposed. The accuracy of the results has been tested against the line connecting the farthest fallen boulders, which represents the maximum travel distance of past rockfalls. The paper concludes with a discussion of the application of both empirical models to other study areas.
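A minimal worked example of the two empirical models named above. Both reduce to an energy-line angle, tan(alpha) = H/L; the reach angle is measured from the release point and the shadow angle from the apex of the talus slope. All coordinates below are made up for illustration.

```python
# Sketch of the reach-angle and shadow-angle calculations used in empirical
# rockfall travel-distance models (illustrative values only).
import math

def travel_angle(drop_height_m, horizontal_reach_m):
    """Angle (degrees) of the energy line connecting start and stop points."""
    return math.degrees(math.atan2(drop_height_m, horizontal_reach_m))

release_h, block_reach = 120.0, 210.0      # release point above stop, horiz. reach
talus_h, shadow_reach = 40.0, 95.0         # talus apex above stop, horiz. reach

print(f"reach angle:  {travel_angle(release_h, block_reach):.1f} deg")
print(f"shadow angle: {travel_angle(talus_h, shadow_reach):.1f} deg")

# A location lies inside the rockfall 'shadow' if the line from the talus apex
# to that location is steeper than the minimum shadow angle of past events.
```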
Abstract:
This article extends the existing discussion in the literature on probabilistic inference and decision making with respect to continuous hypotheses, which are prevalent in forensic toxicology. As its main aim, this research investigates the properties of a widely followed approach for quantifying the level of toxic substances in blood samples, and compares this procedure with a Bayesian probabilistic approach. As an example, attention is confined to the presence of toxic substances, such as THC, in blood from car drivers. In this context, the interpretation of results from laboratory analyses needs to take into account legal requirements for establishing the 'presence' of target substances in blood. In a first part, the performance of the proposed Bayesian model for the estimation of an unknown parameter (here, the amount of a toxic substance) is illustrated and compared with the currently used method. In a second part, the model is used to approach, in a rational way, the decision component of the problem, that is, judicial questions of the kind 'Is the quantity of THC measured in the blood over the legal threshold of 1.5 μg/l?'. This is illustrated through a practical example.
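A minimal sketch of the decision question in the second part, under an assumed conjugate normal model (known analytical standard deviation, vague prior) that may differ from the paper's exact formulation: the posterior probability that the true concentration exceeds the 1.5 μg/l threshold.

```python
# Sketch: given replicate laboratory measurements of THC with known analytical
# SD, what is the posterior probability that the true concentration exceeds
# the 1.5 ug/L legal threshold? (Normal likelihood, vague normal prior --
# an assumption for illustration, not necessarily the paper's exact model.)
import numpy as np
from scipy import stats

measurements = np.array([1.62, 1.55, 1.71])    # ug/L, illustrative replicates
sigma = 0.10                                    # known analytical SD (assumed)
mu0, tau0 = 1.0, 10.0                           # vague prior on concentration

n = len(measurements)
tau_post = (1.0 / tau0**2 + n / sigma**2) ** -0.5          # posterior SD
mu_post = tau_post**2 * (mu0 / tau0**2 + measurements.sum() / sigma**2)

p_over = 1.0 - stats.norm.cdf(1.5, loc=mu_post, scale=tau_post)
print(f"posterior mean {mu_post:.3f} ug/L, P(theta > 1.5) = {p_over:.3f}")
```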
Abstract:
The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that must be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important at first to define the influence of each factor. In particular, it was important to define the influence of geology, which is closely associated with indoor radon. This association was indeed observed for the Swiss data, but geology did not prove to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both univariate and multivariate levels, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) as a procedure to evaluate data clustering as a function of the radon level was proposed. The existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase comes along with the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, the data partition was optimized in order to cope with the stationarity conditions of geostatistical models. Common methods of spatial modeling such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset. A bottom-to-top method-complexity approach was adopted, and the results were analyzed together in order to find common definitions of the continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests of data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformation and simulations thus allowed the use of multigaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for modeling extreme values through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for hardening the data classification. Within the classification methods, probabilistic neural networks (PNN) proved better adapted for modeling high-threshold categorization and for automation. Support vector machines (SVM), on the contrary, performed well under balanced category conditions.
In general, it was concluded that no single prediction or estimation method is best under all conditions of scale and neighborhood definition. Simulations should form the basis, while other methods can provide complementary information to support efficient indoor radon decision making.
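A minimal sketch of the classical quadrat-based Morisita index, on which the QMI mentioned above builds; values near 1 indicate a random pattern and larger values indicate clustering. The coordinates are synthetic, not Swiss radon sites.

```python
# Sketch of the classical Morisita index for point clustering:
# I_M = Q * sum(n_i * (n_i - 1)) / (N * (N - 1)) over Q quadrats.
import numpy as np

def morisita_index(xy, n_quadrats_per_side):
    """Morisita index on points in the unit square."""
    q = n_quadrats_per_side
    counts, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=q,
                                  range=[[0, 1], [0, 1]])
    n = counts.ravel()
    big_n = n.sum()
    return (q * q) * (n * (n - 1)).sum() / (big_n * (big_n - 1))

rng = np.random.default_rng(4)
random_pts = rng.uniform(0, 1, (500, 2))
clustered_pts = 0.5 + 0.05 * rng.standard_normal((500, 2))
print("random pattern:    I_M =", round(morisita_index(random_pts, 10), 2))
print("clustered pattern: I_M =", round(morisita_index(clustered_pts, 10), 2))
```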
Abstract:
INTRODUCTION: The trabecular bone score (TBS) is a new parameter determined from grey-level analysis of DXA images. It relies on the mean thickness and volume fraction of trabecular bone microarchitecture. This was a preliminary case-control study to evaluate the potential diagnostic value of TBS, both alone and combined with bone mineral density (BMDa), in the assessment of vertebral fracture. METHODS: Out of a subject pool of 441 Caucasian, postmenopausal women between the ages of 50 and 80 years, we identified 42 women with osteoporosis-related vertebral fractures and compared them with 126 age-matched women without any fractures (1 case : 3 controls). Primary outcomes were BMDa and TBS. Inter-group comparisons were undertaken using Student's t-tests and Wilcoxon signed-rank tests for parametric and non-parametric data, respectively. Odds ratios for vertebral fracture were calculated for each incremental one standard deviation decrease in BMDa and TBS, areas under the receiver operating characteristic curve (AUC) were calculated, and sensitivity analyses were conducted to compare BMDa alone, TBS alone, and the combination of BMDa and TBS. Subgroup analyses were performed specifically for women with osteopenia and for women with T-score-defined osteoporosis. RESULTS: Across all subjects (n=42, 126), weight and body mass index were greater, and BMDa and TBS both lower, in women with fractures. The odds of vertebral fracture were 3.20 (95% CI, 2.01-5.08) for each incremental decrease in TBS, 1.95 (1.34-2.84) for BMDa, and 3.62 (2.32-5.65) for BMDa + TBS combined. The AUC was greater for TBS than for BMDa (0.746 vs. 0.662, p=0.011). At iso-specificity (61.9%) or iso-sensitivity (61.9%) for both BMDa and TBS, the sensitivity or specificity of TBS + BMDa was 19.1% or 16.7% greater, respectively, than for either BMDa or TBS alone. Among subjects with osteoporosis (n=11, 40), both BMDa (p=0.0008) and TBS (p=0.0001) were lower in subjects with fractures, and both the OR and the AUC for BMDa + TBS were greater than for BMDa alone (OR=4.04 [2.35-6.92] vs. 2.43 [1.49-3.95]; AUC=0.835 [0.755-0.897] vs. 0.718 [0.627-0.797], p=0.013). Among subjects with osteopenia, TBS was lower in women with fractures (p=0.0296), but BMDa was not (p=0.75). Similarly, the OR for TBS was statistically greater than 1.00 (2.82, 1.27-6.26), but the OR for BMDa was not (1.12, 0.56-2.22), as was the AUC (p=0.035), but there was no statistical difference in specificity (p=0.357) or sensitivity (p=0.678). CONCLUSIONS: The trabecular bone score warrants further study as to whether it has any clinical application in osteoporosis detection and the evaluation of fracture risk.
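A minimal sketch of how an odds ratio per one-SD decrease and an AUC of the kind reported above are obtained, via logistic regression on a standardized, sign-flipped predictor; the cohort below is simulated, not the study's data.

```python
# Sketch: OR per one-SD *decrease* in TBS and AUC, on synthetic data.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 168
fracture = rng.binomial(1, 0.25, n)
tbs = 1.2 - 0.08 * fracture + rng.normal(0, 0.1, n)   # lower TBS if fractured

z = -(tbs - tbs.mean()) / tbs.std()                   # standardized, flipped sign
fit = sm.Logit(fracture, sm.add_constant(z)).fit(disp=0)
or_per_sd = np.exp(fit.params[1])
ci = np.exp(fit.conf_int()[1])
auc = roc_auc_score(fracture, -tbs)                   # lower TBS = higher risk
print(f"OR per SD decrease: {or_per_sd:.2f} "
      f"(95% CI {ci[0]:.2f}-{ci[1]:.2f}), AUC {auc:.3f}")
```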
Abstract:
High-density lipoproteins (HDLs) exert a series of potentially beneficial effects on many cell types, including anti-atherogenic actions on the endothelium and macrophage foam cells. HDLs may also exert anti-diabetogenic functions on the beta cells of the endocrine pancreas, notably by potently inhibiting stress-induced cell death and enhancing glucose-stimulated insulin secretion. HDLs have also been found to stimulate insulin-dependent and insulin-independent glucose uptake into skeletal muscle, adipose tissue, and liver. These experimental findings, and the inverse association of HDL-cholesterol levels with the risk of diabetes development, have generated the notion that appropriate HDL levels and functionality must be maintained in humans to diminish the risk of developing diabetes. In this article, we review our knowledge of the beneficial effects of HDLs in pancreatic beta cells and how these effects are mediated. We discuss the capacity of HDLs to modulate endoplasmic reticulum stress and how this affects beta-cell survival. We also point out the gaps in our understanding of the signalling properties of HDLs in beta cells. Hopefully, this review will foster the interest of scientists working on beta cells and diabetes in better defining the cellular pathways activated by HDLs in beta cells. Such knowledge will be important for designing therapeutic tools to preserve the proper functioning of the insulin-secreting cells in our body.
Abstract:
Background: Bone health is a concern when treating early-stage breast cancer patients with adjuvant aromatase inhibitors. Early detection of patients (pts) at risk of osteoporosis and fractures may be helpful for starting preventive therapies and selecting the most appropriate endocrine therapy schedule. We present statistical models describing the evolution of lumbar and hip bone mineral density (BMD) in pts treated with tamoxifen (T), letrozole (L) and sequences of T and L. Methods: Available dual-energy x-ray absorptiometry exams (DXA) of pts treated in trial BIG 1-98 were retrospectively collected from Swiss centers. Treatment arms were: A) T for 5 years; B) L for 5 years; C) 2 years of T followed by 3 years of L; and D) 2 years of L followed by 3 years of T. Pts without DXA were used as a control for detecting selection biases. Patients randomized to arm A were subsequently allowed an unplanned switch from T to L. Allowing for variations between DXA machines and centres, two repeated-measures models, using a covariance structure that allows for different times between DXA, were used to estimate changes in hip and lumbar BMD (g/cm2) from trial randomization. Prospectively defined covariates at the time of trial randomization, considered as fixed effects in the multivariable models in an intention-to-treat analysis, were: age, height, weight, hysterectomy, race, known osteoporosis, tobacco use, prior bone fracture, prior hormone replacement therapy (HRT), bisphosphonate use and previous neo-/adjuvant chemotherapy (ChT). Similarly, the T-scores for lumbar and hip BMD measurements were modeled using a per-protocol approach (allowing for the treatment switch in arm A), specifically studying the effect of each therapy on the T-score percentage. Results: A total of 247 out of 546 pts had between 1 and 5 DXA; in total, 576 DXA were collected. The numbers of DXA measurements per arm were: arm A, 133; B, 137; C, 141; and D, 135. The median follow-up time was 5.8 years. Significant factors positively correlated with lumbar and hip BMD in the multivariate analysis were weight, previous HRT use, neo-/adjuvant ChT, hysterectomy and height. Significant negatively correlated factors in the models were osteoporosis, treatment arm (B/C/D vs. A), time since endocrine therapy start, age and smoking (current vs. never). Modeling the T-score percentage, the differences from T to L were -4.199% (p = 0.036) and -4.907% (p = 0.025) for the hip and lumbar measurements, respectively, before any treatment switch occurred. Conclusions: Our statistical models describe the lumbar and hip BMD evolution for pts treated with L and/or T. The results for both localisations confirm that, contrary to expectation, the sequential schedules do not seem less detrimental for BMD than L monotherapy. The estimated difference in the BMD T-score percentage is at least 4% from T to L.
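A minimal sketch of a repeated-measures mixed model of the kind described, with fixed effects for treatment and covariates and a per-patient random intercept; the data frame, column names and effect sizes are hypothetical, not the BIG 1-98 data.

```python
# Sketch of a repeated-measures model for lumbar BMD: fixed effects for
# time, treatment and weight, random intercept per patient (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n_pts, n_visits = 100, 3
df = pd.DataFrame({
    "pt": np.repeat(np.arange(n_pts), n_visits),
    "years": np.tile([1.0, 3.0, 5.0], n_pts),
    "arm_L": np.repeat(rng.binomial(1, 0.5, n_pts), n_visits),
    "weight": np.repeat(rng.normal(68, 10, n_pts), n_visits),
})
df["bmd"] = (1.05 + 0.002 * (df["weight"] - 68)
             - 0.004 * df["years"]
             - 0.010 * df["years"] * df["arm_L"]       # faster loss on letrozole
             + np.repeat(rng.normal(0, 0.05, n_pts), n_visits)
             + rng.normal(0, 0.01, len(df)))

model = smf.mixedlm("bmd ~ years * arm_L + weight", df, groups=df["pt"]).fit()
print(model.summary())
```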
Abstract:
Gene set enrichment (GSE) analysis is a popular framework for condensing information from gene expression profiles into a pathway or signature summary. The strengths of this approach over single-gene analysis include noise and dimension reduction, as well as greater biological interpretability. As molecular profiling experiments move beyond simple case-control studies, robust and flexible GSE methodologies are needed that can model pathway activity within highly heterogeneous data sets. To address this challenge, we introduce Gene Set Variation Analysis (GSVA), a GSE method that estimates variation of pathway activity over a sample population in an unsupervised manner. We demonstrate the robustness of GSVA in a comparison with current state-of-the-art sample-wise enrichment methods. Further, we provide examples of its utility in differential pathway activity and survival analysis. Lastly, we show how GSVA works analogously with data from both microarray and RNA-seq experiments. GSVA provides increased power to detect subtle pathway activity changes over a sample population in comparison to corresponding methods. While GSE methods are generally regarded as end points of a bioinformatic analysis, GSVA constitutes a starting point to build pathway-centric models of biology. Moreover, GSVA addresses the current need for GSE methods applicable to RNA-seq data. GSVA is an open-source software package for R which forms part of the Bioconductor project and can be downloaded at http://www.bioconductor.org.
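GSVA itself is an R/Bioconductor package, so the sketch below only conveys the general idea of sample-wise, rank-based enrichment scoring in a much simplified form (mean within-sample rank of the gene set, z-scored across samples); it is not the GSVA algorithm.

```python
# Simplified illustration of sample-wise gene-set scoring (NOT the GSVA
# algorithm): rank genes within each sample, average the ranks of the
# gene-set members, and standardize across samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_genes, n_samples = 1000, 12
expr = rng.normal(0, 1, (n_genes, n_samples))
gene_set = np.arange(50)                        # indices of a hypothetical pathway
expr[gene_set, 6:] += 1.0                       # pathway up in half the samples

ranks = np.apply_along_axis(stats.rankdata, 0, expr)   # ranks within each sample
score = ranks[gene_set].mean(axis=0)
score = (score - score.mean()) / score.std()    # one enrichment score per sample
print("per-sample pathway scores:", np.round(score, 2))
```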
Abstract:
In the past, culvert pipes were made only of corrugated metal or reinforced concrete. In recent years, several manufacturers have made pipe of lightweight plastic - for example, high density polyethylene (HDPE) - which is considered to be viscoelastic in its structural behavior. It appears that there are several highway applications in which HDPE pipe would be an economically favorable alternative. However, the newness of plastic pipe requires the evaluation of its performance, integrity, and durability. A review of the Iowa Department of Transportation Standard Specifications for Highway and Bridge Construction reveals limited information on the use of plastic pipe for state projects. The objective of this study was to review and evaluate the use of HDPE pipe in roadway applications. Structural performance, soil-structure interaction, and the sensitivity of the pipe to installation were investigated. Comprehensive computerized literature searches were undertaken to define the state of the art in the design and use of HDPE pipe in highway applications. A questionnaire was developed and sent to all Iowa county engineers to learn of their use of HDPE pipe. Responses indicated that the majority of county engineers were aware of the product but were not confident in its ability to perform as well as conventional materials. Counties currently using HDPE pipe generally use it only in driveway crossings. Originally, we intended to survey states as to their usage of HDPE pipe. However, a few weeks after initiation of the project, it was learned that the Tennessee DOT was in the process of making a similar survey of state DOTs. Results of the Tennessee survey of states have been obtained and are included in this report. In an effort to develop more confidence in the pipe's performance parameters, this research included laboratory tests to determine the ring and flexural stiffness of HDPE pipe provided by various manufacturers. Parallel plate tests verified that all specimens were in compliance with ASTM specifications. Flexural testing revealed that the pipe profile had a significant effect on the longitudinal stiffness and that strength could not be accurately predicted on the basis of diameter alone. Recognizing that the soil around a buried HDPE pipe contributes to the pipe stiffness, the research team completed a limited series of tests on buried 3-ft-diameter HDPE pipe. The tests simulated the effects of truck wheel loads above the pipe and were conducted with two feet of cover. These tests indicated that the type and quality of backfill significantly influence the performance of HDPE pipe. The tests revealed that the soil envelope does significantly affect the performance of HDPE pipe in situ, and that after a certain point, no additional strength is realized by increasing the quality of the backfill.
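A minimal worked example of the quantity a parallel-plate test yields under ASTM D2412: pipe stiffness PS = F/dy, load per unit length divided by vertical deflection, conventionally evaluated at 5% deflection. The load value below is invented, not one of the study's measurements.

```python
# Sketch of the pipe-stiffness calculation behind a parallel-plate test
# (ASTM D2412): PS = F / dy at 5% vertical deflection. Illustrative numbers.
def pipe_stiffness_psi(load_lb_per_in, deflection_in):
    """PS in psi = (lb per inch of pipe length) / (inches of deflection)."""
    return load_lb_per_in / deflection_in

diameter_in = 36.0                       # 3-ft pipe, as in the buried-pipe tests
deflection = 0.05 * diameter_in          # 5% deflection point
load = 70.0                              # lb per inch of length (made up)
print(f"PS at 5% deflection: {pipe_stiffness_psi(load, deflection):.1f} psi")
```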
Abstract:
In the administration, planning, design, and maintenance of road systems, transportation professionals often need to choose between alternatives, justify decisions, evaluate tradeoffs, determine how much to spend, set priorities, assess how well the network meets traveler needs, and communicate the basis for their actions to others. A variety of technical guidelines, tools, and methods have been developed to help with these activities. Such work aids include design criteria guidelines, design exception analysis methods, needs studies, revenue allocation schemes, regional planning guides, designation of minimum standards, sufficiency ratings, management systems, point-based systems to determine eligibility for paving, functional classification, and bridge ratings. While such tools play valuable roles, they also manifest a number of deficiencies and are poorly integrated. Design guides tell what solutions MAY be used; they aren't oriented toward helping find which one SHOULD be used. Design exception methods help justify deviation from design guide requirements but omit consideration of important factors. Resource distribution is too often based on dividing up what's available rather than helping determine how much should be spent. Point systems serve well as procedural tools but are employed primarily to justify decisions that have already been made. In addition, the tools aren't very scalable: a system-level method of analysis seldom works at the project level, and vice versa. In conjunction with the issues cited above, the operation and financing of the road and highway system is often the subject of criticisms that raise fundamental questions: What is the best way to determine how much money should be spent on a city's or county's road network? Is the size and quality of the rural road system appropriate? Is too much or too little money spent on road work? What parts of the system should be upgraded, and in what sequence? Do truckers receive a hidden subsidy from other motorists? Do transportation professionals evaluate road situations from too narrow a perspective? In considering these issues and questions, the author concluded that it would be of value to identify and develop a new method that would overcome the shortcomings of existing methods, be scalable, be capable of being understood by the general public, and take a broad viewpoint. After trying out a number of concepts, it appeared that a good approach would be to view the road network as a sub-component of a much larger system that also includes vehicles, people, goods-in-transit, and all the ancillary items needed to make the system function. Highway investment decisions could then be made on the basis of how they affect the total cost of operating the total system. A concept, named the "Total Cost of Transportation" method, was then developed and tested. The concept rests on four key principles: 1) roads are but one sub-system of a much larger 'Road Based Transportation System'; 2) the size and activity level of the overall system are determined by market forces; 3) the sum of everything expended, consumed, given up, or permanently reserved in building the system and generating the activity that results from those market forces represents the total cost of transportation; and 4) the economic purpose of making road improvements is to minimize that total cost. To test the practical value of the theory, a special database and spreadsheet model of Iowa's county road network was developed.
This involved creating a physical model to represent the system's size, characteristics, activity levels, and the rates at which the activities take place, developing a companion economic cost model, and then using the two in tandem to explore a variety of issues. Ultimately, the theory and model proved capable of being used at the full-system, partial-system, single-segment, project, and general design guide levels of analysis. The method appeared capable of remedying many of the defects of existing work methods and of answering society's transportation questions from a new perspective.
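A minimal sketch of the decision rule the "Total Cost of Transportation" concept implies: choose the alternative that minimizes the sum of all system costs. Every cost figure below is a hypothetical placeholder, not a value from the Iowa model.

```python
# Sketch of the total-cost comparison: an improvement is worthwhile when it
# lowers the sum of road, vehicle-operating, travel-time and crash costs.
def total_system_cost(road_cost, vehicle_cost, time_cost, crash_cost):
    """Annualized total cost of the road-based transportation system."""
    return road_cost + vehicle_cost + time_cost + crash_cost

do_nothing = total_system_cost(road_cost=0.4e6, vehicle_cost=2.1e6,
                               time_cost=1.3e6, crash_cost=0.5e6)
pave_gravel_road = total_system_cost(road_cost=0.9e6, vehicle_cost=1.6e6,
                                     time_cost=1.1e6, crash_cost=0.4e6)

better = "paving" if pave_gravel_road < do_nothing else "doing nothing"
print(f"do nothing: ${do_nothing:,.0f}; pave: ${pave_gravel_road:,.0f} "
      f"-> {better} minimizes total cost")
```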
Abstract:
The objective of this work was to evaluate the use of correlations of basic density and pulp yield with some chemical parameters in order to differentiate a homogeneous eucalyptus tree population in terms of its potential for pulp production or other technological applications. Basic density and kraft pulp yield were determined for 120 Eucalyptus globulus trees, and the values were plotted as frequency distributions. Homogenized samples from the first and fourth density quartiles and the first and fourth yield quartiles were submitted to total phenols, total sugars and methoxyl group analyses. Syringyl/guaiacyl (S/G) and syringaldehyde/vanillin (S/V) ratios were determined on the kraft lignins from wood of the same quartiles. The results show the similarity between samples from the high-density and low-yield quartiles, both with lower S/G (3.88-4.12) and S/V (3.99-4.09) ratios and higher total phenols (13.3-14.3 g gallic acid kg-1). Woods from the high-yield quartile are statistically distinguished from all the others by their higher S/G (5.15) and S/V (4.98) ratios and lower total phenols (8.7 g gallic acid kg-1). The methoxyl group and total sugars parameters are more adequate for distinguishing wood samples of lower density.
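A minimal sketch, on simulated data, of the quartile screening used above: bin trees into quartiles of pulp yield and compare a chemical parameter between the first and fourth quartiles. The data are synthetic, not the 120 E. globulus measurements.

```python
# Sketch of quartile-based screening: split trees into pulp-yield quartiles
# and compare the mean S/G ratio of the first vs. fourth quartile.
import numpy as np
import pandas as pd

rng = np.random.default_rng(8)
trees = pd.DataFrame({
    "density": rng.normal(500, 40, 120),             # kg/m3, illustrative
    "pulp_yield": rng.normal(54, 2.5, 120),          # % kraft pulp yield
})
trees["sg_ratio"] = (4.5 + 0.15 * (trees["pulp_yield"] - 54)
                     + rng.normal(0, 0.2, 120))

trees["yield_q"] = pd.qcut(trees["pulp_yield"], 4,
                           labels=["Q1", "Q2", "Q3", "Q4"])
q1_q4 = trees[trees["yield_q"].isin(["Q1", "Q4"])]
print(q1_q4.groupby("yield_q", observed=True)["sg_ratio"].mean().round(2))
```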
Abstract:
This study evaluated the use of electromagnetic gauges to determine the adjusted densities of HMA pavements. Field measurements were taken with two electromagnetic gauges, the Pavement Quality Indicator (PQI) 301 and the PaveTracker Plus 2701B. Seven projects were included in the study, with 3 to 5 consecutive paving days each. For each day/lot, 20 randomly selected locations were tested, along with seven core locations. The analysis of PaveTracker and PQI density consisted of determining which factors are statistically significant, examining core density residuals, and performing a regression analysis of core density as a function of PaveTracker and PQI readings. The following key conclusions can be stated: 1. Core density, traffic and binder content were all found to be significant for both electromagnetic gauges studied. 2. Core density residuals are normally distributed and centered at zero for both electromagnetic gauges. 3. For PaveTracker readings, statistically one third of the lots do not have an intercept of zero, and two thirds of the lots do not rule out a scalar correction factor of zero. 4. For PQI readings, the 95% confidence interval statistically rules out the intercept being zero for all seven projects, and six of the seven projects do not rule out the scalar correction factor being zero. 5. The PQI 301 gauge should not be used for quality control or quality assurance. 6. The PaveTracker 2701B gauge can be used for quality control but not quality assurance. This study found that, with the limited sample size, the adjusted density equations for both electromagnetic gauges were inadequate. The PaveTracker Plus 2701B was determined to be better than the PQI 301. The PaveTracker 2701B could still be applicable for quality assurance if the number of core locations per day were reduced and supplemented with additional PaveTracker 2701B readings. Further research should be done to determine the minimum number of core locations needed to calibrate the gauges each day/lot and the number of additional PaveTracker 2701B readings required.
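A minimal sketch of the calibration check behind conclusions 3 and 4: regress core density on gauge readings for a lot and inspect the 95% confidence intervals of the intercept and slope against illustrative reference values (e.g. a zero intercept). The readings below are simulated, not the study's field data.

```python
# Sketch: per-lot calibration regression of core density on gauge readings,
# with confidence-interval checks on the intercept and slope (synthetic data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
gauge = rng.normal(145.0, 3.0, 20)                  # pcf, 20 random locations
core = 5.0 + 0.97 * gauge + rng.normal(0, 1.0, 20)  # cores from the same lot

fit = sm.OLS(core, sm.add_constant(gauge)).fit()
(lo_i, hi_i), (lo_s, hi_s) = fit.conf_int()
print(f"intercept CI: [{lo_i:.1f}, {hi_i:.1f}] (contains 0? {lo_i <= 0 <= hi_i})")
print(f"slope CI:     [{lo_s:.2f}, {hi_s:.2f}] (contains 1? {lo_s <= 1 <= hi_s})")
```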