1000 results for Data Archives
Abstract:
The research aimed to establish tyre-road noise models by using a Data Mining approach that allowed building a predictive model and assessing the importance of the tested input variables. The data modelling took into account three learning algorithms and three metrics to define the best predictive model. The variables tested included basic properties of pavement surfaces, macrotexture, megatexture, and unevenness and, for the first time, damping. Also, the importance of those variables was measured by using a sensitivity analysis procedure. Two types of models were set: one with basic variables and another with complex variables, such as megatexture and damping, all as a function of vehicle speed. More detailed models were additionally set by speed level. As a result, several models with very good tyre-road noise predictive capacity were achieved. The most relevant variables were Speed, Temperature, Aggregate size, Mean Profile Depth, and Damping, which had the highest importance, even though influenced by speed. Megatexture and IRI had the lowest importance. The applicability of the models developed in this work is relevant for truck tyre-noise prediction, represented by the AVON V4 test tyre, at the early stage of road pavement use. Therefore, the obtained models are highly useful for the design of pavements and for noise prediction by road authorities and contractors.
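A minimal sketch of this kind of pipeline, assuming scikit-learn; the data, the learner, and the one-at-a-time sensitivity sweep are illustrative stand-ins, not the study's actual algorithms or measurements (only the variable names Speed, Temperature, MPD and Damping come from the abstract):

```python
# Hypothetical sketch: train a learner on tyre-road noise data, then rank
# input variables by a one-at-a-time sensitivity sweep. All data are simulated.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
names = ["Speed", "Temperature", "MPD", "Damping"]          # variables from the abstract
X = rng.uniform([50, 5, 0.4, 0.01], [90, 35, 1.6, 0.10], size=(500, 4))
y = (60 + 0.3 * X[:, 0] - 0.1 * X[:, 1] + 4 * X[:, 2] - 30 * X[:, 3]
     + rng.normal(0, 0.5, 500))                             # toy noise level in dB

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
print("test R2:", round(r2_score(y_te, model.predict(X_te)), 3))

# Sensitivity: sweep one input over its observed range, hold the others at
# their means, and use the spread of predictions as an importance proxy.
base = X_tr.mean(axis=0)
for j, name in enumerate(names):
    grid = np.tile(base, (25, 1))
    grid[:, j] = np.linspace(X_tr[:, j].min(), X_tr[:, j].max(), 25)
    print(f"{name}: {np.ptp(model.predict(grid)):.2f} dB")
```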
Abstract:
1. Model-based approaches have been used increasingly in conservation biology over recent years. Species presence data used for predictive species distribution modelling are abundant in natural history collections, whereas reliable absence data are sparse, most notably for vagrant species such as butterflies and snakes. As predictive methods such as generalized linear models (GLM) require absence data, various strategies have been proposed to select pseudo-absence data. However, only a few studies exist that compare different approaches to generating these pseudo-absence data. 2. Natural history collection data are usually available for long periods of time (decades or even centuries), thus allowing historical considerations. However, this historical dimension has rarely been assessed in studies of species distribution, although there is great potential for understanding current patterns, i.e. the past is the key to the present. 3. We used GLM to model the distributions of three 'target' butterfly species, Melitaea didyma, Coenonympha tullia and Maculinea teleius, in Switzerland. We developed and compared four strategies for defining pools of pseudo-absence data and applied them to natural history collection data from the last 10, 30 and 100 years. Pools included: (i) sites without target species records; (ii) sites where butterfly species other than the target species were present; (iii) sites without butterfly species but with habitat characteristics similar to those required by the target species; and (iv) a combination of the second and third strategies. Models were evaluated and compared by the total deviance explained, the maximized Kappa and the area under the curve (AUC). 4. Among the four strategies, model performance was best for strategy 3. Contrary to expectations, strategy 2 resulted in even lower model performance compared with models with pseudo-absence data simulated totally at random (strategy 1). 5. Independent of the strategy, model performance was enhanced when sites with historical species presence data were not considered as pseudo-absence data. Therefore, the combination of strategy 3 with species records from the last 100 years achieved the highest model performance. 6. Synthesis and applications. The protection of suitable habitat for species survival or reintroduction in rapidly changing landscapes is a high priority among conservationists. Model-based approaches offer planning authorities the possibility of delimiting priority areas for species detection or habitat protection. The performance of these models can be enhanced by fitting them with pseudo-absence data relying on large archives of natural history collection species presence data rather than using randomly sampled pseudo-absence data.
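A minimal sketch of fitting a binomial GLM with presences plus pseudo-absences and scoring it by AUC, assuming scikit-learn; the simulated covariates and the pseudo-absence pool are placeholders for the Swiss natural history collection data:

```python
# Hypothetical sketch: presences plus a pseudo-absence pool fitted with a
# logistic GLM and scored by AUC. Covariates are simulated, not the Swiss data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
presence = rng.normal(loc=[1.0, 0.5], scale=0.5, size=(150, 2))        # sites with records
pseudo_absence = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(300, 2))  # strategy-dependent pool

X = np.vstack([presence, pseudo_absence])
y = np.r_[np.ones(len(presence)), np.zeros(len(pseudo_absence))]

glm = LogisticRegression().fit(X, y)   # binomial GLM with logit link
auc = roc_auc_score(y, glm.predict_proba(X)[:, 1])
print("AUC:", round(auc, 3))           # compare this score across pseudo-absence strategies
```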
Abstract:
Trees are a great bank of data, sometimes called for this reason the "silent witnesses" of the past. Owing to the annual formation of rings, which is normally influenced directly by climate parameters (generally changes in temperature and moisture or precipitation) and other environmental factors, changes that occurred in the past are "written" in the tree "archives" and can be "decoded" in order to interpret what happened before, mainly for past climate reconstruction.

Using dendrochronological methods to obtain samples of Pinus nigra from the Catalonian Pre-Pyrenean region, the cores of 15 trees with a total time span of about 100-250 years were analyzed for tree-ring width (TRW) patterns and showed quite high correlation between them (0.71-0.84), corresponding to a common response to environmental changes in their annual growth.

After different trials with raw TRW data for standardization, in order to remove the negative exponential growth-curve dependency, the best method, double detrending (power transformation and a 32-year smoothing curve), was selected for obtaining the indices for further analysis.

Analyzing the cross-correlations between the obtained tree-ring width indices and climate data, significant correlations (p < 0.05) were observed at some lags; for example, annual precipitation at lag -1 (previous year) had a negative correlation with TRW growth in the Pallars region. Significant correlation coefficients are between 0.27 and 0.51 (with positive or negative signs) in many cases; for the recent (but very short period) climate data of the Seu d'Urgell meteorological station, some significant correlation coefficients of the order of 0.9 were observed.

These results confirm the hypothesis of using dendrochronological data as a climate signal for further analysis, such as reconstruction of past climate or prediction of future climate for the same locality.
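A minimal sketch of the double-detrending and lagged-correlation steps, on simulated series; the moving average below stands in for the study's 32-year smoothing curve, and all numbers are placeholders:

```python
# Hypothetical sketch: double detrending of ring widths and a lag -1
# correlation with precipitation. Series are simulated; the moving average
# stands in for the 32-year smoothing curve used in the study.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1900, 2000)
trw = np.exp(-0.01 * (years - 1900)) * (1 + 0.2 * rng.normal(size=years.size)) + 0.1

trw_pt = np.power(trw, 0.5)                       # power transformation
smooth = np.convolve(trw_pt, np.ones(32) / 32, mode="same")
rwi = trw_pt / smooth                             # ring-width indices

precip = rng.normal(600, 100, size=years.size)    # placeholder annual precipitation
r = np.corrcoef(rwi[1:], precip[:-1])[0, 1]       # index vs previous-year precipitation
print(f"lag -1 correlation: {r:.2f}")
```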
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
For several decades, interest in the study of the appraisal function has diversified, deepening its theoretical principles (Jenkinson, 1922; Schellenberg, 1956; Samuels, 1992; Cook, 1992b; Eastwood, 1992b; Duranti, 1994; Couture, 1999), its strategies (Booms, 1972; Samuels, 1986; Cook, 1992b; Eastwood, 1992b; Couture, 1999) and the mechanisms of its application process (Ham, 1984; Boles & Young, 1991; Cook, 2001a, 2001b). Yet none of these contributions has studied the nature of the outcome of appraisal, namely permanent archives. From a heritage standpoint, the absence of studies on defining and measuring the qualities of permanent archives makes it impossible to verify whether these archives constitute a significant documentary heritage. From an administrative standpoint, the current state of appraisal practice has not yet invested in a meticulous examination of the nature of its results. From an economic standpoint, the lack of methods and tools for measuring the qualities of archives makes it impossible to judge whether these archives are worth the material, technical, financial and human investment that their preservation entails. From a professional standpoint, the absence of methods and instruments for evaluating the qualities of archives prevents professionals from grounding their appraisal decisions. To remedy this situation, our research aims to define and measure the qualities of permanent archives resulting from appraisal. To this end, we adopted a quantitative methodology of a descriptive nature, appropriate for studying a little-explored subject (Fortin, 2006) such as the operationalization of the qualities of permanent archives. The research strategy comprised two phases. The conceptual phase identified and defined four qualities: "Uniqueness", "Credible evidence", "Usability" and "Representativeness". The empirical phase verified the measurability, by way of example, of the variables derived from two of the four quality dimensions in the context of permanent archives: "Credible evidence" and "Usability". Data were collected by applying a measurement grid designed specifically for this study. The data collection, carried out at Bibliothèque et Archives nationales du Québec, operationalized 10 of the 13 specific indicators belonging to the two quality dimensions "Credible evidence" and "Usability" of permanent archives. Three of the 13 specific indicators thus remained unmeasured because of a weakness in their measurement that we identified and verified during the research pre-tests. These three indicators are "Creator" under the "Credible evidence" dimension, and "Comprehensibility" and "Findability" under the "Usability" dimension. The measures obtained for the 10 indicators led to the identification of strengths and of points to improve concerning various variables related to the creator, to the preservation service, and to the condition and nature of the medium. Targeting the improvement of a product or service represents, as shown in the literature review, the ultimate goal of a study on quality dimensions. Three types of contributions flow from this research.
Theoretically, this research offers a conceptual framework for defining the concept of quality of permanent archives from an archival appraisal perspective. Methodologically, it proposes a method for measuring the qualities applicable to permanent archives, along with the instruments and the guide explaining its implementation. Professionally, on the one hand, it makes it possible to evaluate the results of the archival appraisal exercise; on the other, it offers professionals not only an already tested grid for measuring the qualities of permanent archives, but also a guide to its application.
Abstract:
It is generally assumed that the variability of neuronal morphology has an important effect on both the connectivity and the activity of the nervous system, but this effect has not been thoroughly investigated. Neuroanatomical archives represent a crucial tool to explore structure-function relationships in the brain. We are developing computational tools to describe, generate, store and render large sets of three-dimensional neuronal structures in a format that is compact, quantitative, accurate and readily accessible to the neuroscientist. Single-cell neuroanatomy can be characterized quantitatively at several levels. In computer-aided neuronal tracing files, a dendritic tree is described as a series of cylinders, each represented by diameter, spatial coordinates and the connectivity to other cylinders in the tree. This 'Cartesian' description constitutes a completely accurate mapping of dendritic morphology but it bears little intuitive information for the neuroscientist. In contrast, a classical neuroanatomical analysis characterizes neuronal dendrites on the basis of the statistical distributions of morphological parameters, e.g. maximum branching order or bifurcation asymmetry. This description is intuitively more accessible, but it only yields information on the collective anatomy of a group of dendrites, i.e. it is not complete enough to provide a precise 'blueprint' of the original data. We are adopting a third, intermediate level of description, which consists of the algorithmic generation of neuronal structures within a certain morphological class based on a set of 'fundamental', measured parameters. This description is as intuitive as a classical neuroanatomical analysis (parameters have an intuitive interpretation), and as complete as a Cartesian file (the algorithms generate and display complete neurons). The advantages of the algorithmic description of neuronal structure are immense. If an algorithm can measure the values of a handful of parameters from an experimental database and generate virtual neurons whose anatomy is statistically indistinguishable from that of their real counterparts, a great deal of data compression and amplification can be achieved. Data compression results from the quantitative and complete description of thousands of neurons with a handful of statistical distributions of parameters. Data amplification is possible because, from a set of experimental neurons, many more virtual analogues can be generated. This approach could allow one, in principle, to create and store a neuroanatomical database containing data for an entire human brain in a personal computer. We are using two programs, L-NEURON and ARBORVITAE, to investigate systematically the potential of several different algorithms for the generation of virtual neurons. Using these programs, we have generated anatomically plausible virtual neurons for several morphological classes, including guinea pig cerebellar Purkinje cells and cat spinal cord motor neurons. These virtual neurons are stored in an online electronic archive of dendritic morphology. This process highlights the potential and the limitations of the 'computational neuroanatomy' strategy for neuroscience databases.
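A toy sketch of the two descriptions the abstract contrasts: a cylinder record in the spirit of the 'Cartesian' tracing files, and a stochastic generator that grows a tree from a handful of sampled parameters; the distributions are illustrative, not measured values from L-NEURON or ARBORVITAE:

```python
# Toy sketch: a cylinder record and a stochastic generator that grows a
# dendritic tree from sampled parameters (illustrative distributions only).
import random
from dataclasses import dataclass

@dataclass
class Segment:
    ident: int        # segment id
    parent: int       # id of the parent cylinder, -1 for the root
    x: float
    y: float
    z: float
    diameter: float

def grow(tree, parent, depth, rng):
    """Recursively bifurcate until the diameter or depth budget runs out."""
    seg = tree[parent]
    if depth == 0 or seg.diameter < 0.2:
        return
    for _ in range(2):                            # bifurcation
        child = Segment(len(tree), seg.ident,
                        seg.x + rng.gauss(10, 3),
                        seg.y + rng.gauss(10, 3),
                        seg.z,
                        seg.diameter * rng.uniform(0.6, 0.9))  # sampled taper
        tree.append(child)
        grow(tree, child.ident, depth - 1, rng)

rng = random.Random(0)
tree = [Segment(0, -1, 0.0, 0.0, 0.0, 2.0)]       # root cylinder at the soma
grow(tree, 0, depth=5, rng=rng)
print(len(tree), "cylinders generated")
```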
Abstract:
Melting of the Greenland Ice Sheet (GrIS) is accelerating and will contribute significantly to global sea level rise during the 21st century. Instrumental data on GrIS melting only cover the last few decades, and proxy data extending our knowledge into the past are vital for validating models predicting the influence of ongoing climate change. We investigated a potential meltwater proxy in Godthåbsfjord (West Greenland), where glacier meltwater causes seasonal excursions towards lower oxygen isotope water (δ18Ow) values and salinity. The blue mussel (Mytilus edulis) potentially records these variations, because it precipitates its shell calcite in oxygen isotopic equilibrium with ambient seawater. As M. edulis shells are known to occur in raised shorelines and archaeological shell middens from previous Holocene warm periods, this species may be ideal for reconstructing past meltwater dynamics. We investigated its potential as a palaeo-meltwater proxy. First, we confirmed that M. edulis shell calcite oxygen isotope (δ18Oc) values are in equilibrium with ambient water and generally reflect meltwater conditions. Subsequently we investigated whether this species recorded the full range of δ18Ow values occurring during the years 2007 to 2010. Results show that δ18Ow values were not recorded at very low salinities (< ~19), because the mussels appear to cease growing. This implies that Mytilus edulis δ18Oc values are suitable for reconstructing past meltwater amounts in most cases, but care has to be taken that shells are collected not too close to a glacier, but rather in the mid-region or mouth of the fjord. Future research will expand the geographical and temporal range of the shell measurements by sampling mussels in other fjords in Greenland along a south-north gradient, and by sampling shells from raised shorelines and archaeological shell middens from prehistoric settlements in Greenland.
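For illustration, a hedged sketch of the equilibrium relation this kind of proxy relies on, using the Kim & O'Neil (1997) calcite-water fractionation; the abstract does not state which palaeotemperature equation the authors applied, so treat the choice of equation as an assumption:

```python
# Hedged sketch: expected equilibrium calcite d18O from water d18O and
# temperature, using Kim & O'Neil (1997); illustrative only, since the
# study's actual fractionation equation is not given in the abstract.
import math

def d18o_calcite_vsmow(d18o_water_vsmow, temp_c):
    t = temp_c + 273.15
    ln_alpha = (18.03 * 1000.0 / t - 32.42) / 1000.0   # 1000 ln(alpha), Kim & O'Neil (1997)
    return math.exp(ln_alpha) * (d18o_water_vsmow + 1000.0) - 1000.0

def vsmow_to_vpdb(d18o):
    return 0.97001 * d18o - 29.99                      # standard scale conversion

# Meltwater-influenced (low d18Ow) vs more marine water, both at 5 degC
for d18o_w in (-2.5, 0.0):
    d18o_c = vsmow_to_vpdb(d18o_calcite_vsmow(d18o_w, 5.0))
    print(f"d18Ow = {d18o_w}: shell d18Oc = {d18o_c:.2f} permil VPDB")
```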
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Two stochastic models have been fitted to daily rainfall data for an interior station of Brazil. Of these two models, the results show that the truncated negative binomial probability model describes the data better than the Markov chain probability model. The Kolmogorov-Smirnov test was applied to assess the significance of these models. © 1983 Springer-Verlag.
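A minimal sketch of this kind of model comparison, assuming SciPy, with simulated wet-spell lengths in place of the station record; note that Kolmogorov-Smirnov p-values are only approximate for discrete data:

```python
# Hypothetical sketch: goodness of fit of two spell-length models via the
# Kolmogorov-Smirnov statistic. Wet-spell lengths are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
spells = rng.geometric(p=0.35, size=200)          # placeholder wet-spell lengths (days)

# A first-order Markov chain implies geometric spell lengths; MLE of p:
p_hat = 1.0 / spells.mean()
d_mc, p_mc = stats.kstest(spells, stats.geom(p_hat).cdf)

# Alternative count model: a negative binomial shifted to start at 1 day
# (parameters illustrative; a real analysis would estimate them).
d_nb, p_nb = stats.kstest(spells, stats.nbinom(2.0, 0.5, loc=1).cdf)

print(f"Markov chain / geometric: D = {d_mc:.3f}, p = {p_mc:.3f}")
print(f"negative binomial:        D = {d_nb:.3f}, p = {p_nb:.3f}")
```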
Abstract:
This study aimed to predict parameters of the structure of macrobenthic assemblages (species composition, abundance, richness, diversity and evenness) in estuaries of southern Brazil, using models based on environmental data (sediment characteristics, salinity, air and water temperatures, and depth). Sampling was carried out seasonally in five estuaries between the winter of 1996 and the summer of 1998. In each estuary, samples were collected in unpolluted areas with similar characteristics regarding the presence or absence of vegetation, depth and distance from the mouth. Two methods were used to obtain the predictive models: the first based on Multiple Discriminant Analysis (MDA) and the second on Multiple Linear Regression (MLR). The MDA-based models produced better results than those based on linear regression. The best results using MLR were obtained for diversity and richness. It can therefore be concluded that models such as those derived here can be very useful tools in environmental monitoring studies of estuaries.
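A minimal sketch of the two approaches, assuming scikit-learn; the environmental predictors and response values are simulated stand-ins for the estuarine data:

```python
# Hypothetical sketch: discriminant analysis for community type and multiple
# linear regression for diversity, from environmental predictors. All values
# are simulated stand-ins for the estuarine data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
# Predictors: salinity, water temperature (degC), depth (m), % fine sediment
env = rng.uniform([5, 15, 0.5, 10], [35, 30, 5.0, 90], size=(120, 4))
community = (env[:, 0] > 20).astype(int)                      # toy class labels
diversity = 0.05 * env[:, 0] + 0.02 * env[:, 3] + rng.normal(0, 0.2, 120)

mda = LinearDiscriminantAnalysis().fit(env, community)        # MDA analogue
print("MDA accuracy:", round(float(mda.score(env, community)), 2))

mlr = LinearRegression().fit(env, diversity)                  # MLR
print("MLR R2 for diversity:", round(float(mlr.score(env, diversity)), 2))
```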