871 results for Large-scale Distribution
Abstract:
Work presented within the scope of the Master's programme in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering
Abstract:
1st European IAHR Congress, 4-6 May, Edinburgh, Scotland
Abstract:
Chagas disease is a chronic, tropical, parasitic disease, endemic throughout Latin America. The large-scale migration of populations has increased the geographic distribution of the disease, and cases have been observed in many other countries around the world. To strengthen the critical mass of knowledge generated in different countries, it is essential to promote cooperative and translational research initiatives. We analyzed authorship of scientific documents on Chagas disease indexed in the Medline database from 1940 to 2009. Bibliometrics was used to analyze the evolution of collaboration patterns, and a Social Network Analysis was carried out to identify the main research groups in the area by applying clustering methods. In total, we analyzed 13,989 papers produced by 21,350 authors. Collaboration among authors increased dramatically over the study period, reaching an average of 6.2 authors per paper in the last five-year period. Applying a collaboration threshold of five or more co-authored papers, we identified 148 consolidated research groups made up of 1,750 authors. The identified Chagas disease network constitutes a "small world," characterized by a high degree of clustering and a notably high number of Brazilian researchers.
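As a rough illustration of the kind of co-authorship analysis this abstract describes, the sketch below builds a weighted co-authorship graph, applies the five-joint-papers threshold, and reports small-world indicators. It assumes networkx and a hypothetical `papers` list of Medline records; the grouping step here (connected components) is only a simple proxy for the clustering methods mentioned, not the authors' actual pipeline.

```python
# Sketch of a co-authorship analysis: weighted co-authorship graph, edges kept
# only for pairs with >= 5 joint papers, small-world indicators reported.
from itertools import combinations
import networkx as nx

papers = [
    {"authors": ["A", "B", "C"]},   # placeholder records; in practice these
    {"authors": ["A", "B"]},        # would come from Medline exports
]

G = nx.Graph()
for paper in papers:
    for a, b in combinations(sorted(set(paper["authors"])), 2):
        w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=w)

# Consolidated collaboration: at least five co-authored papers per author pair.
strong = G.edge_subgraph(
    [(a, b) for a, b, d in G.edges(data=True) if d["weight"] >= 5]
).copy()

# Simple proxy for the grouping step: connected components of the thresholded graph.
groups = list(nx.connected_components(strong))
print(f"{len(groups)} consolidated groups")

# Small-world indicators on the largest component: high clustering,
# short average path length.
if groups:
    giant = strong.subgraph(max(groups, key=len))
    print("clustering:", nx.average_clustering(giant),
          "avg path length:", nx.average_shortest_path_length(giant))
```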
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
The influence of the dominant modes of large-scale climatic variability in the Pacific and in the Atlantic on Amazonian rainfall is investigated. The composite technique is applied to Amazon precipitation anomalies in this work. The basis years for these composites are those in the period 1960-1998 with occurrences of extremes in the Southern Oscillation (El Niño or La Niña) and of the north/south warm (or cold) sea surface temperature (SST) anomaly dipole pattern in the tropical Atlantic. A warm (cold) dipole means positive (negative) anomalies in the tropical North Atlantic and negative (positive) anomalies in the tropical South Atlantic. Austral summer and autumn composites for extremes in the Southern Oscillation (El Niño or La Niña), and independently for the north/south dipole pattern (warm or cold) of the SST anomalies in the tropical Atlantic, present values (magnitude and sign) consistent with those found in previous works on the relationship between Amazon rainfall variations and the SST anomalies in the tropical Pacific and Atlantic. However, austral summer and autumn composites for the years with simultaneous occurrences of El Niño and a warm north/south dipole of the SST anomalies in the tropical Atlantic show negative precipitation anomalies extending eastward over the center-eastern Amazon. This result indicates the important role played by the tropical Atlantic in the anomalous Amazon rainfall distribution.
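The composite technique mentioned above amounts to averaging anomalies over the chosen basis years. The minimal sketch below illustrates this with a hypothetical precipitation-anomaly array `anom` and an illustrative list of El Niño years; the same operation would be repeated for La Niña years and for warm/cold Atlantic dipole years.

```python
# Minimal sketch of the composite technique, assuming a hypothetical
# precipitation-anomaly array `anom` with shape (years, lat, lon) for 1960-1998.
import numpy as np

years = np.arange(1960, 1999)                    # 39 years
anom = np.random.randn(years.size, 40, 60)       # placeholder anomalies (mm/day)

el_nino_years = [1965, 1972, 1982, 1986, 1991, 1997]   # illustrative only

# Composite = mean anomaly over the basis years.
mask = np.isin(years, el_nino_years)
composite = anom[mask].mean(axis=0)              # (lat, lon) field

# A simple significance screen: keep grid points where the composite exceeds
# one standard error of the selected-year sample.
stderr = anom[mask].std(axis=0, ddof=1) / np.sqrt(mask.sum())
significant = np.abs(composite) > stderr
print(composite.shape, significant.mean())
```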
Abstract:
The MAP-i doctoral program of the Universities of Minho, Aveiro and Porto
Abstract:
This study analyzed the influence of forest structural components on the occurrence, size and density of groups of the Bare-faced Tamarin (Saguinus bicolor), one of the most threatened species in the Amazon, and produced the first large-scale spatial distribution map of groups within an area of continuous forest. Population censuses were conducted between November 2002 and July 2003, covering 6,400 hectares in the Ducke Reserve, Manaus-AM, Brazil. Groups of S. bicolor were recorded 41 times, distributed among the following environments: plateau (20), slopes (12) and lowlands (9). The mean group size was 4.8 indiv./group, ranging from 2 to 11 individuals. In the sites where the groups were recorded, and in an equivalent number of sites where no tamarins were found, located at least 500 m from those where they had been recorded, we placed 50 m x 50 m plots to record the following forest structural components: abundance of trees; abundance of lianas; abundance of fruiting trees and lianas; abundance of snags; abundance of logs; percentage of canopy opening; leaf litter depth; and altitude. The Bare-faced Tamarin more often uses areas with a lower abundance of logs, smaller canopy opening and a higher abundance of snags; forest areas with smaller canopy opening also support a higher density of S. bicolor groups. Apparently this species does not use the forest in a random way, and may select areas for its daily activities depending on the micro-environmental heterogeneity produced by the forest structural components.
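The abstract does not state which statistical model relates group occurrence to the plot variables; the sketch below shows one plausible presence/absence analysis (a logistic regression on standardized structural covariates) using entirely hypothetical data, purely to illustrate the occupied-versus-unoccupied plot design.

```python
# Hypothetical presence/absence analysis of the plot variables listed above;
# this is an illustrative model choice, not the study's stated method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_plots = 82                                  # 41 occupied + 41 unoccupied sites
X = rng.normal(size=(n_plots, 4))             # columns: logs, canopy opening,
                                              # snags, litter depth (synthetic)
y = np.r_[np.ones(41), np.zeros(41)]          # 1 = tamarin group recorded

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Coefficient signs indicate the direction of each association,
# e.g. negative for log abundance and canopy opening, positive for snags.
print(model.named_steps["logisticregression"].coef_)
```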
Abstract:
The main object of the present paper is to give formulas and methods which enable us to determine the minimum number of repetitions or of individuals necessary to guarantee, to some extent, the success of an experiment. The theoretical basis of all the processes is essentially the following. Knowing the frequency of the desired events $p$ and of the non-desired events $q$, we may calculate the frequency of all possible combinations to be expected in $n$ repetitions by expanding the binomial $(p+q)^n$. Determining which of these combinations we want to avoid, we calculate their total frequency, selecting the value of the exponent $n$ of the binomial in such a way that this total frequency is equal to or smaller than the accepted limit of precision:

$n!\,p^{n}\left[\frac{1}{n!}\left(\frac{q}{p}\right)^{n} + \frac{1}{1!\,(n-1)!}\left(\frac{q}{p}\right)^{n-1} + \frac{1}{2!\,(n-2)!}\left(\frac{q}{p}\right)^{n-2} + \frac{1}{3!\,(n-3)!}\left(\frac{q}{p}\right)^{n-3} + \cdots\right] < P_{\lim}$ (1b)

There is no absolute limit of precision, since its value depends not only upon psychological factors in our judgement but is at the same time a function of the number of repetitions. For this reason we have proposed (1, 56) two relative values, one equal to $1/(5n)$ as the lowest value of probability and the other equal to $1/(10n)$ as the highest value of improbability, leaving between them what may be called the "region of doubt". However, these formulas cannot be applied in our case, since the number $n$ is precisely the unknown quantity. Thus we have to use, instead of the more exact values of these two formulas, the conventional limits of $P_{\lim}$ equal to 0.05 (precision 5%), 0.01 (precision 1%) and 0.001 (precision 0.1%).

The binomial formula as explained above (cf. formula 1, pg. 85), however, is of rather limited applicability owing to the excessive calculation necessary, and we thus have to use approximations as substitutes. We may use, without loss of precision, the following approximations: a) the normal or Gaussian distribution when the expected frequency $p$ has any value between 0.1 and 0.9 and when $n$ is larger than ten; b) the Poisson distribution when the expected frequency $p$ is smaller than 0.1. Tables V to VII show for some special cases that these approximations are very satisfactory. The practical solution of the following problems, stated in the introduction, can now be given:

A) What is the minimum number of repetitions necessary in order to avoid that any one of $a$ treatments, varieties, etc. may accidentally always be the best, or the best and second best, or the first, second and third best, or finally one of the $m$ best treatments, varieties, etc.? Using the first term of the binomial, we have the following equation for $n$:

$n = \frac{\log P_{\lim}}{\log(m/a)} = \frac{\log P_{\lim}}{\log m - \log a}$ (5)

B) What is the minimum number of individuals necessary in order that a certain type, expected with the frequency $p$, may appear in at least one, two, three, or $a = m + 1$ individuals?

1) For $p$ between 0.1 and 0.9, and using the Gaussian approximation, we have:

$np - \delta\sqrt{n\,p(1-p)} = a - 1 = m$, with $b = \delta\sqrt{\frac{1-p}{p}}$ and $c = \frac{m}{p}$ (7)

$n = \left(\frac{b + \sqrt{b^{2} + 4c}}{2}\right)^{2}$, $n' = 1/p$, $n_{\mathrm{cor}} = n + n'$ (8)

We have to use the correction $n'$ when $p$ has a value between 0.25 and 0.75. The Greek letter delta represents in the present case the unilateral limits of the Gaussian distribution for the three conventional limits of precision: 1.64, 2.33 and 3.09, respectively.
If we are only interested in having at least one individual, so that $m$ equals zero and $a = 1$, the formula reduces to:

$n = b^{2} = \delta^{2}\,\frac{1-p}{p}$, $n' = 1/p$, $n_{\mathrm{cor}} = n + n'$ (9)

2) If $p$ is smaller than 0.1 we may use Table 1 in order to find the mean $m$ of a Poisson distribution and determine $n = m/p$.

C) What is the minimum number of individuals necessary for distinguishing two frequencies $p_1$ and $p_2$?

1) When $p_1$ and $p_2$ are values between 0.1 and 0.9 we have:

$n = \left(\frac{\delta\left[\sqrt{p_1(1-p_1)} + \sqrt{p_2(1-p_2)}\right]}{p_1 - p_2}\right)^{2}$, $n' = \frac{1}{p_1 - p_2}$, $n_{\mathrm{cor}} = n + n'$ (13)

We again have to use the unilateral limits of the Gaussian distribution. The correction $n'$ should be used if at least one of the values $p_1$ or $p_2$ lies between 0.25 and 0.75. A more complicated formula may be used in cases where we want to increase the precision:

$n(p_1 - p_2) - \delta\sqrt{n}\left[\sqrt{p_1(1-p_1)} + \sqrt{p_2(1-p_2)}\right] = m$, with $b = \frac{\delta\left[\sqrt{p_1(1-p_1)} + \sqrt{p_2(1-p_2)}\right]}{p_1 - p_2}$, $c = \frac{m}{p_1 - p_2}$, $n = \left(\frac{b + \sqrt{b^{2} + 4c}}{2}\right)^{2}$, $n' = \frac{1}{p_1 - p_2}$ (14)

2) When both $p_1$ and $p_2$ are smaller than 0.1 we determine the quotient $p_1 : p_2$ and obtain the corresponding number $m_2$ of a Poisson distribution from Table 2. The value of $n$ is found by the equation:

$n = m_2 / p_2$ (15)

D) What is the minimum number necessary for distinguishing three or more frequencies $p_1$, $p_2$, $p_3$?

1) If the frequencies $p_1$, $p_2$, $p_3$ are values between 0.1 and 0.9, we have to solve the individual equations and use the highest value of $n$ thus determined, for example:

$n_{1.2} = \left(\frac{\delta\left[\sqrt{p_1(1-p_1)} + \sqrt{p_2(1-p_2)}\right]}{p_1 - p_2}\right)^{2}$, and similarly for $n_{1.3}$ and $n_{2.3}$ (16)

Delta now represents the bilateral limits of the Gaussian distribution: 1.96, 2.58 and 3.29.

2) No table was prepared for the relatively rare cases of a comparison of three or more frequencies below 0.1, and in such cases extremely high numbers would be required.

E) A process is given which serves to solve two problems of an informatory nature: a) if a special type appears among $n$ individuals with a frequency $p_{\mathrm{obs}}$, what may be the corresponding ideal value of $p_{\mathrm{esp}}$; or b) if we study samples of $n$ individuals and expect a certain type with a frequency $p_{\mathrm{esp}}$, what may be the extreme limits of $p_{\mathrm{obs}}$ in individual families?

1) If we are dealing with values between 0.1 and 0.9 we may use Table 3. To solve the first question we select the respective horizontal line for $p_{\mathrm{obs}}$, determine which column corresponds to our value of $n$, and find the respective value of $p_{\mathrm{esp}}$ by interpolating between columns. In order to solve the second problem we start with the respective column for $p_{\mathrm{esp}}$ and find the horizontal line for the given value of $n$, either directly or by approximation and interpolation.

2) For frequencies smaller than 0.1 we have to use Table 4 and transform the fractions $p_{\mathrm{esp}}$ and $p_{\mathrm{obs}}$ into numbers of a Poisson series by multiplication with $n$. In order to solve the first problem, we verify in which line the lower Poisson limit is equal to $m_{\mathrm{obs}}$ and transform the corresponding value of $m$ into the frequency $p_{\mathrm{esp}}$ by dividing by $n$; the observed frequency may thus be a chance deviate of any value between 0.0... and the value given by dividing the tabulated $m$ by $n$. In the second case we first transform the expectation $p_{\mathrm{esp}}$ into a value of $m$ and obtain, in the horizontal line corresponding to $m_{\mathrm{esp}}$, the extreme values of $m$, which must then be transformed, by dividing by $n$, into values of $p_{\mathrm{obs}}$.
F) Partial and progressive tests may be recommended in all cases where there is a lack of material, or where the loss of time is less important than the cost of large-scale experiments, since in many cases the minimum number necessary to guarantee the results within the limits of precision is rather large. One should not forget that the minimum number really represents at the same time a maximum number, necessary only if one takes into consideration essentially the unfavorable variations; smaller numbers may frequently already give satisfactory results. For instance, by definition, we know that a frequency of $p$ means that we expect one individual in every total of $1/p$. If there were no chance variations, this number $1/p$ would be sufficient, and if there were favorable variations a still smaller number might yield one individual of the desired type. Thus, trusting to luck, one may start the experiment with numbers smaller than the minimum calculated according to the formulas given above, and increase the total until the desired result is obtained, which may well be before the "minimum number" is reached. Some concrete examples of this partial or progressive procedure are given from our genetical experiments with maize.
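As a worked illustration of the minimum-number logic summarized in this abstract, the sketch below searches for the smallest n such that the probability of observing fewer than a individuals of a type expected with frequency p stays below a conventional precision limit, and compares the exact binomial tail with the Poisson approximation recommended for p < 0.1. The example frequency p = 0.0625 is hypothetical.

```python
# Worked sketch of the minimum-number calculation: smallest n such that the
# probability of seeing fewer than `a` individuals of a type with frequency p
# stays below the precision limit P_lim; exact binomial tail vs Poisson
# approximation (appropriate for p < 0.1).
from scipy import stats

def n_min_exact(p, a=1, p_lim=0.05, n_max=100000):
    """Smallest n with P(X < a) <= p_lim for X ~ Binomial(n, p)."""
    for n in range(a, n_max + 1):
        if stats.binom.cdf(a - 1, n, p) <= p_lim:
            return n
    raise ValueError("n_max too small")

def n_min_poisson(p, a=1, p_lim=0.05):
    """Same criterion under the Poisson approximation with mean n*p."""
    n = a
    while stats.poisson.cdf(a - 1, n * p) > p_lim:
        n += 1
    return n

# Example: desired type expected in 1/16 of individuals (p = 0.0625),
# at least one individual required, precision limits 5%, 1% and 0.1%.
for p_lim in (0.05, 0.01, 0.001):
    print(p_lim, n_min_exact(0.0625, 1, p_lim), n_min_poisson(0.0625, 1, p_lim))
```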
Abstract:
The algorithmic approach to data modelling has developed rapidly in recent years; in particular, methods based on data mining and machine learning have been used in a growing number of applications. These methods follow a data-driven methodology, aiming to provide the best possible generalization and predictive ability rather than concentrating on the properties of the data model. One of the most successful groups of such methods is known as Support Vector algorithms. Following the fruitful developments in applying Support Vector algorithms to spatial data, this paper introduces a new extension of the traditional support vector regression (SVR) algorithm. This extension allows for the simultaneous modelling of environmental data at several spatial scales. The joint influence of environmental processes presenting different patterns at different scales is learned automatically from the data, providing the optimal mixture of short- and large-scale models. The method is adaptive to the spatial scale of the data. With this advantage, it can provide an efficient means of modelling local anomalies that typically arise in the early phase of an environmental emergency. However, the proposed approach still requires some prior knowledge of the possible existence of such short-scale patterns, which is a possible limitation of the method for its implementation in early warning systems. The purpose of this paper is to present the multi-scale SVR model and to illustrate its use with an application to the mapping of Cs137 activity, given the measurements taken in the region of Briansk following the Chernobyl accident.
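To make the multi-scale idea concrete, the sketch below fits a standard SVR whose kernel is a weighted sum of a short-scale and a large-scale RBF kernel. This is an illustration of the concept rather than the paper's exact algorithm; the weight and length scales, which the paper learns from the data, are fixed hypothetical values here.

```python
# Conceptual sketch: SVR with a kernel that mixes a short-scale and a
# large-scale RBF component (weights and length scales are illustrative).
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

def multi_scale_kernel(X, Y, w=0.5, len_short=0.1, len_large=2.0):
    """Weighted mixture of a short-scale and a large-scale RBF kernel."""
    k_short = rbf_kernel(X, Y, gamma=1.0 / (2 * len_short**2))
    k_large = rbf_kernel(X, Y, gamma=1.0 / (2 * len_large**2))
    return w * k_short + (1.0 - w) * k_large

# Toy 2-D spatial data: a smooth large-scale trend plus a local anomaly.
rng = np.random.default_rng(1)
coords = rng.uniform(0, 5, size=(300, 2))
z = np.sin(coords[:, 0]) + 2.0 * np.exp(-np.sum((coords - 2.5) ** 2, axis=1) / 0.05)

model = SVR(kernel=multi_scale_kernel, C=10.0, epsilon=0.05)
model.fit(coords, z)
print(model.predict(coords[:5]))
```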
Abstract:
The assumption that climatic niche requirements of invasive species are conserved between their native and invaded ranges is key to predicting the risk of invasion. However, this assumption has been challenged recently by evidence of niche shifts in some species. Here, we report the first large-scale test of niche conservatism for 50 terrestrial plant invaders between Eurasia, North America, and Australia. We show that when analog climates are compared between regions, fewer than 15% of species have more than 10% of their invaded distribution outside their native climatic niche. These findings reveal that substantial niche shifts are rare in terrestrial plant invaders, providing support for an appropriate use of ecological niche models for the prediction of both biological invasions and responses to climate change.
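A crude way to illustrate the quantity reported above, the share of the invaded distribution lying outside the native climatic niche within analog climates, is sketched below with a simple percentile climate envelope and synthetic occurrence data; it is only a conceptual stand-in, not the authors' niche-quantification method.

```python
# Illustrative calculation: fraction of invaded-range occurrences outside the
# native climatic niche, restricted to climates analog to the native range.
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical occurrences in a 2-D climate space (e.g. temperature, precipitation).
native = rng.normal(loc=[15.0, 800.0], scale=[3.0, 150.0], size=(500, 2))
invaded = rng.normal(loc=[16.0, 820.0], scale=[3.5, 170.0], size=(400, 2))

# Native niche approximated by the 5th-95th percentile climate envelope.
lo, hi = np.percentile(native, [5, 95], axis=0)

# Analog climates: invaded-range points whose climate also occurs in the
# native study area (here, within the native data's full range).
analog = np.all((invaded >= native.min(axis=0)) & (invaded <= native.max(axis=0)), axis=1)

outside = ~np.all((invaded >= lo) & (invaded <= hi), axis=1)
share_outside = np.mean(outside[analog])
print(f"{share_outside:.1%} of the analog-climate invaded range lies outside the native niche")
```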
Abstract:
Tobacco consumption is a global epidemic responsible for a vast burden of disease. With pharmacological properties sought after by consumers and responsible for addiction issues, nicotine is the main driver of this phenomenon. Accordingly, smokeless tobacco products are of growing popularity in sport, owing to potential performance-enhancing properties and the absence of adverse effects on the respiratory system. Nevertheless, nicotine does not appear on the 2011 World Anti-Doping Agency (WADA) Prohibited List or Monitoring Program, for lack of a comprehensive large-scale prevalence survey. Thus, this work describes a one-year monitoring study on urine specimens from professional athletes of different disciplines covering 2010 and 2011. A method for the detection and quantification of nicotine, its major metabolites (cotinine, trans-3-hydroxycotinine, nicotine-N'-oxide and cotinine-N-oxide) and minor tobacco alkaloids (anabasine, anatabine and nornicotine) was developed, relying on ultra-high pressure liquid chromatography coupled to triple quadrupole mass spectrometry (UHPLC-TQ-MS/MS). A simple and fast dilute-and-shoot sample treatment was performed, followed by hydrophilic interaction chromatography-tandem mass spectrometry (HILIC-MS/MS) operated in positive electrospray ionization (ESI) mode with multiple reaction monitoring (MRM) data acquisition. After method validation, assessing the prevalence of nicotine consumption in sport involved the analysis of 2185 urine samples, accounting for 43 different sports. The concentration distributions of major nicotine metabolites, minor nicotine metabolites and tobacco alkaloids ranged from 10 (LLOQ) to 32,223, 6670 and 538 ng/mL, respectively. Compounds of interest were detected at trace levels in 23.0% of urine specimens, with concentration levels corresponding to an exposure within the last three days for 18.3% of samples. Likewise, hypothesizing conservative concentration limits for active nicotine consumption prior to and/or during sport practice (50 ng/mL for nicotine, cotinine and trans-3-hydroxycotinine, and 25 ng/mL for nicotine-N'-oxide, cotinine-N-oxide, anabasine, anatabine and nornicotine) revealed a prevalence of 15.3% amongst athletes. While this number may appear lower than the worldwide smoking prevalence of around 25%, focusing the study on selected sports highlighted more alarming findings. Indeed, active nicotine consumption in ice hockey, skiing, biathlon, bobsleigh, skating, football, basketball, volleyball, rugby, American football, wrestling and gymnastics was found to range between 19.0% and 55.6%. Therefore, considering the adverse effects of smoking on the respiratory tract and the numerous health threats detrimental to sport practice at the top level, the likelihood of smokeless tobacco consumption for performance enhancement is strongly supported.
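The prevalence figure hypothesized above follows directly from the stated concentration limits; the small sketch below applies those thresholds (50 ng/mL for nicotine, cotinine and trans-3-hydroxycotinine; 25 ng/mL for the remaining analytes) to hypothetical per-sample concentrations to flag "active consumption".

```python
# Sketch of the prevalence calculation implied by the stated thresholds.
# Sample values below are hypothetical.
THRESHOLDS_NG_ML = {
    "nicotine": 50, "cotinine": 50, "trans-3-hydroxycotinine": 50,
    "nicotine-N'-oxide": 25, "cotinine-N-oxide": 25,
    "anabasine": 25, "anatabine": 25, "nornicotine": 25,
}

def active_consumption(sample: dict) -> bool:
    """True if any measured concentration (ng/mL) exceeds its threshold."""
    return any(sample.get(analyte, 0.0) > limit
               for analyte, limit in THRESHOLDS_NG_ML.items())

samples = [
    {"cotinine": 320.0, "trans-3-hydroxycotinine": 150.0},  # smoker-like profile
    {"cotinine": 12.0},                                     # low/passive exposure
    {"anabasine": 40.0, "nornicotine": 30.0},               # smokeless-tobacco-like
]
prevalence = sum(map(active_consumption, samples)) / len(samples)
print(f"active-consumption prevalence: {prevalence:.1%}")
```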
Abstract:
The collection of dried blood spots (DBS) on filter paper provides a powerful approach for the development of large-scale, population-based screening programs. DBS methods are particularly valuable in developing countries and isolated rural regions where resources are limited. Large numbers of field specimens can be economically collected and shipped to centralized reference laboratories for genetic and (or) serological analysis. Alternatively, the dried blood can be stored and used as an archival resource to rapidly establish the frequency and distribution of newly recognized mutations, confirm patient identity or track the origins and emergence of newly identified pathogens. In this report, we describe how PCR-based technologies are beginning to interface with international screening programmes for the diagnosis and genetic characterization of human immunodeficiency virus type 1 (HIV-1). In particular, we review recent progress using DBS specimens to resolve the HIV-1 infection status of neonates, monitor the genetic evolution of HIV-1 during early infancy and establish a sentinel surveillance system for the systematic monitoring of HIV-1 genetic variation in Asia.
Abstract:
A large-scale investigation of trypanorhynch cestode infestation of tropical marine fishes was carried out along the Northeast Brazilian coast in the summers of 1991 and 1993. A total of 798 fish specimens belonging to 57 species and 30 families were examined. Metacestodes of 11 different trypanorhynchs were found: Callitetrarhynchus gracilis, Dasyrhynchus giganteus, Grillotia sp., Nybelinia edwinlintoni, N. indica, N. senegalensis, Nybelinia cf. lingualis, Otobothrium cysticum, Pseudolacistorhynchus noodti, Pseudotobothrium dipsacum and Pterobothrium kingstoni. Scanning electron microscopy was used to clarify details of the tentacular armature of some species. Rose-thorn-shaped hooklets, regularly arranged like microtriches, are described from the bothridial surface of N. edwinlintoni. Of the 57 fish species, 15 harboured trypanorhynch cestodes. Of these, the mullid Pseudupeneus maculatus was the most heavily infested fish species, harbouring 5 different trypanorhynch species. P. noodti in P. maculatus had the highest prevalence (87%) and intensity (maximum = 63) of infestation. C. gracilis was the parasite with the lowest host specificity; it could be isolated from 10 fish species. The cestode fauna of the Northeast Brazilian coast appears to be similar to that of the West African coast: five of the trypanorhynch cestodes found during this study are common to both localities. The two single cases of intramuscular infestation by trypanorhynch cestodes, in Citharichthys spilopterus and Haemulon aurolineatum, indicate that the consequences for the marketability of the commercially exploited fish species investigated are negligible.
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the scale of a field site represents a major, and as yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The main objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity logged at collocated wells and to surface resistivity measurements, which are available throughout the studied site. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. A stochastic integration of low-resolution, large-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities is then applied. The overall viability of this downscaling approach is tested and validated by comparing flow and transport simulations through the original and the upscaled hydraulic conductivity fields. Our results indicate that the proposed procedure yields remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
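One building block of the approach described above, the non-parametric kernel density relation between collocated electrical and hydraulic conductivities and its use to draw hydraulic conductivity conditional on an ERT-derived electrical conductivity, can be sketched as below with synthetic data; the full Bayesian sequential simulation and the flow/transport validation are not reproduced.

```python
# Sketch of a kernel-density relation between collocated log electrical
# conductivity and log hydraulic conductivity, used to draw K values
# conditional on an ERT-derived electrical conductivity (synthetic data).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

# Hypothetical collocated borehole data: correlated log10(sigma) and log10(K).
log_sigma = rng.normal(-1.5, 0.3, size=200)
log_k = -4.0 + 1.2 * (log_sigma + 1.5) + rng.normal(0, 0.2, size=200)

joint = gaussian_kde(np.vstack([log_sigma, log_k]))   # joint density p(sigma, K)

def sample_k_given_sigma(sigma_obs, n_draws=1, grid=np.linspace(-7, -1, 400)):
    """Draw log10(K) from the conditional density p(K | sigma = sigma_obs)."""
    dens = joint(np.vstack([np.full_like(grid, sigma_obs), grid]))
    dens /= dens.sum()
    return rng.choice(grid, size=n_draws, p=dens)

# At a non-sampled location with an ERT-derived log10(sigma) of -1.3,
# draw candidate log10(K) values for a stochastic simulation step.
print(sample_k_given_sigma(-1.3, n_draws=5))
```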