991 results for Triglycérides--Métabolisme--Aspect génétique
Abstract:
Species distribution modeling has relevant implications for biodiversity studies, conservation decision making, and knowledge of the ecological requirements of species. The aim of this study was to evaluate whether forest inventories can improve the estimation of occurrence probability and identify the limits of the potential distribution and the habitat preference of a group of timber tree species. The environmental predictor variables were elevation, slope, aspect, normalized difference vegetation index (NDVI) and height above the nearest drainage (HAND). To estimate the distribution of the species we used the maximum entropy method (Maxent). In comparison with a random distribution, using topographic variables and the vegetation index as features, the Maxent method predicted the geographical distribution of the studied species with an average accuracy of 86%. Elevation and NDVI were the most important variables. There were limitations to interpolating the models to non-sampled locations outside the elevation gradient associated with the occurrence data, covering approximately 7% of the basin area. Ceiba pentandra (samaúma), Castilla ulei (caucho) and Hura crepitans (assacu) are more likely to occur in areas near water courses. Clarisia racemosa (guariúba), Amburana acreana (cerejeira), Aspidosperma macrocarpon (pereiro), Apuleia leiocarpa (cumaru cetim), Aspidosperma parvifolium (amarelão) and Astronium lecointei (aroeira) can also occur in upland forests on well-drained soils. This modeling approach has potential for application to other, still less studied tropical species, especially those under pressure from logging.
Abstract:
Identification of the tensile constitutive behaviour of Fibre Reinforced Concrete (FRC) is an important aspect of the design of structural elements made of this material. Although an important step was taken with the introduction of design guidance for regular FRC in the recently published fib Model Code 2010, a better understanding of the behaviour of this material is still necessary, mainly for concrete with self-compacting properties. This work presents an experimental investigation employing Steel Fibre Reinforced Self-Compacting Concrete (SFRSCC) to cast thin structural elements. A new test method is proposed for assessing the post-cracking behaviour, and the results obtained with it are compared with those from the standard three-point bending tests (3PBT). Specimens extracted from a sandwich panel consisting of SFRSCC layers are also tested. The mechanical properties of SFRSCC are correlated with the fibre distribution by analysing the results obtained with the different tests. Finally, the stress-crack width constitutive law proposed by the fib Model Code 2010 is analysed in light of the experimental results.
Abstract:
Master's dissertation in Sciences – Continuing Teacher Education (specialization in Biology and Geology)
Abstract:
Integrated master's dissertation in Information Systems Engineering and Management
Abstract:
Integrated master's dissertation in Mechanical Engineering
Abstract:
Master's dissertation in Mechatronics Engineering
Abstract:
PhD thesis in Education Sciences (specialization in Curriculum Development)
Abstract:
PhD thesis in Materials Engineering
Abstract:
PhD thesis in Child Studies (specialization in Music Education)
Abstract:
PhD thesis in Education Sciences
Abstract:
Integrated master's dissertation in Civil Engineering
Abstract:
Large scale distributed data stores rely on optimistic replication to scale and remain highly available in the face of network partitions. Managing data without coordination results in eventually consistent data stores that allow concurrent data updates. These systems often use anti-entropy mechanisms (like Merkle trees) to detect and repair divergent data versions across nodes. However, in practice hash-based data structures are too expensive for large amounts of data and create too many false conflicts. Another aspect of eventual consistency is detecting write conflicts. Logical clocks are often used to track data causality, which is necessary to detect causally concurrent writes on the same key. However, there is a non-negligible metadata overhead per key, which also keeps growing with time, proportionally to the node churn rate. Another challenge is deleting keys while respecting causality: while the values can be deleted, per-key metadata cannot be permanently removed without coordination. We introduce a new causality management framework for eventually consistent data stores that leverages node logical clocks (Bitmapped Version Vectors) and a new key logical clock (Dotted Causal Container) to provide advantages on multiple fronts: 1) a new efficient and lightweight anti-entropy mechanism; 2) greatly reduced per-key causality metadata size; 3) accurate key deletes without permanent metadata.
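The abstract above hinges on logical clocks detecting causally concurrent writes on the same key. As an illustrative sketch only, the following plain version-vector comparison shows that detection; it is not the Bitmapped Version Vector or Dotted Causal Container structures the work itself introduces, and all names in it are hypothetical.

```python
# Minimal version-vector sketch: a clock is a {node_id: counter} dict
# (missing entries count as 0). Two writes conflict when neither
# clock dominates the other. Illustration only -- not the thesis's
# Bitmapped Version Vectors or Dotted Causal Containers.

def compare(vv_a, vv_b):
    """Return 'before', 'after', 'equal', or 'concurrent'."""
    nodes = set(vv_a) | set(vv_b)
    a_le_b = all(vv_a.get(n, 0) <= vv_b.get(n, 0) for n in nodes)
    b_le_a = all(vv_b.get(n, 0) <= vv_a.get(n, 0) for n in nodes)
    if a_le_b and b_le_a:
        return "equal"
    if a_le_b:
        return "before"       # a causally precedes b
    if b_le_a:
        return "after"        # b causally precedes a
    return "concurrent"       # a genuine write conflict

# Two replicas update the same key independently from a common version:
common = {"n1": 1}
write_at_n1 = {"n1": 2}            # n1 increments its own entry
write_at_n2 = {"n1": 1, "n2": 1}   # n2 increments its own entry
```

Here `compare(common, write_at_n1)` yields `"before"`, while `compare(write_at_n1, write_at_n2)` yields `"concurrent"`: neither replica saw the other's write, which is exactly the conflict case the per-key causality metadata exists to expose. The per-key growth problem the abstract mentions follows directly: every node that ever writes a key adds an entry to that key's dict.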
Abstract:
Integrated master's dissertation in Civil Engineering
Abstract:
Integrated master's dissertation in Civil Engineering (specialization in Structures and Geotechnics)
Abstract:
Master's dissertation in Archaeology