905 results for "sampling effort"


Relevance: 20.00%

Abstract:

Aim Estimates of geographic range size derived from natural history museum specimens are probably biased for many species. We aim to determine how bias in these estimates relates to range size. Location We conducted computer simulations based on herbarium specimen records from localities ranging from the southern United States to northern Argentina. Methods We used theory on the sampling distribution of the mean and variance to develop working hypotheses about how range size, defined as area of occupancy (AOO), was related to the inter-specific distribution of: (1) mean collection effort per area across the range of a species (MC); (2) variance in collection effort per area across the range of a species (VC); and (3) proportional bias in AOO estimates (PBias: the difference between the expected value of the estimate of AOO and true AOO, divided by true AOO). We tested predictions from these hypotheses using computer simulations based on a dataset of more than 29,000 herbarium specimen records documenting occurrences of 377 plant species in the tribe Bignonieae (Bignoniaceae). Results The working hypotheses predicted that the means of the inter-specific distributions of MC, VC and PBias were independent of AOO, but that the respective variance and skewness decreased with increasing AOO. Computer simulations supported all but one prediction: the variance of the inter-specific distribution of VC did not decrease with increasing AOO. Main conclusions Our results suggest that, despite an invariant mean, the dispersion and symmetry of the inter-specific distribution of PBias decrease as AOO increases. As AOO increased, range size was less severely underestimated for a large proportion of simulated species. However, as AOO increased, range size estimates having extremely low bias were less common.
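
The PBias definition above lends itself to a small illustration. The sketch below is not the authors' simulation and uses none of the Bignonieae data; it assumes a toy grid and a Poisson model of collections per occupied cell, and shows how under-collection drives PBias = (E[estimated AOO] − true AOO) / true AOO below zero.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_pbias(true_aoo_cells=200, mean_collections_per_cell=0.5, n_replicates=2000):
    """Toy illustration of proportional bias (PBias) in AOO estimates.

    A species truly occupies `true_aoo_cells` grid cells. In each replicate, the
    number of collections per occupied cell is Poisson-distributed; the estimated
    AOO is the number of occupied cells documented by at least one collection.
    """
    est_aoo = np.empty(n_replicates)
    for r in range(n_replicates):
        collections = rng.poisson(mean_collections_per_cell, size=true_aoo_cells)
        est_aoo[r] = np.count_nonzero(collections)   # cells with >= 1 specimen record
    return (est_aoo.mean() - true_aoo_cells) / true_aoo_cells

# Under-collection always biases AOO downward (PBias < 0); heavier effort shrinks the bias.
for effort in (0.2, 0.5, 1.0, 2.0):
    print(f"mean collections/cell = {effort}: PBias = "
          f"{simulate_pbias(mean_collections_per_cell=effort):+.3f}")
```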

Relevance: 20.00%

Abstract:

Recent research has shown that the performance of metaheuristics can be affected by population initialization. Opposition-based Differential Evolution (ODE), Quasi-Oppositional Differential Evolution (QODE), and Uniform-Quasi-Opposition Differential Evolution (UQODE) are three state-of-the-art methods that improve the performance of the Differential Evolution (DE) algorithm through population initialization and different search strategies. Taking a different approach to achieve similar results, this paper presents a technique for discovering promising regions in the continuous search space of an optimization problem. Using machine-learning techniques, the algorithm, named Smart Sampling (SS), finds regions with a high probability of containing a global optimum; a metaheuristic can then be initialized inside each region to find that optimum. SS and DE were combined (yielding the SSDE algorithm) to evaluate the approach, and experiments were conducted on the same set of benchmark functions used by the ODE, QODE and UQODE authors. The results show that the total number of function evaluations required by DE to reach the global optimum can be significantly reduced, and that the success rate improves, when SS is employed first. These results are consistent with the literature on the importance of an adequate starting population. Moreover, SS is more effective at finding high-quality initial populations than the other three algorithms, which employ oppositional learning. Finally, and most importantly, the ability of SS to find promising regions is independent of the metaheuristic with which it is combined, making SS suitable for improving the performance of a large variety of optimization techniques. (C) 2012 Elsevier Inc. All rights reserved.
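
A minimal sketch of the general idea, not the authors' SSDE implementation: pre-sample the search space, use a simple learner (here a random-forest surrogate stands in for whatever machine-learning step SS uses) to flag promising points, and hand the best of them to differential evolution as its starting population via SciPy's `init` argument. The benchmark function and all parameter choices are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

def rastrigin(x):                      # common multimodal benchmark (illustrative choice)
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

dim, bounds = 5, [(-5.12, 5.12)] * 5

# 1) Pre-sample the space and evaluate the objective (the "smart sampling" stage).
X = rng.uniform(-5.12, 5.12, size=(400, dim))
y = np.array([rastrigin(x) for x in X])

# 2) Fit a cheap surrogate and score a larger candidate pool to locate promising regions.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
candidates = rng.uniform(-5.12, 5.12, size=(5000, dim))
promising = candidates[np.argsort(surrogate.predict(candidates))[:20]]

# 3) Seed differential evolution with the promising points instead of a random population.
result = differential_evolution(rastrigin, bounds, init=promising, seed=0, tol=1e-8)
print(result.x, result.fun)
```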

Relevance: 20.00%

Abstract:

The conservation of biodiversity in agricultural landscapes depends on information about the ways in which species are affected by the conversion of native habitats into novel anthropogenic environments and the strategies that the species use to persist in these altered ecosystems. Here, we investigate how small mammals occupy the different agroecosystems of an agricultural landscape in the state of Sao Paulo, Brazil. From August 2003 through January 2005, we surveyed small mammals using Sherman traps at 16 sampling sites in each of the four predominant environments of the local agricultural landscape: remnant fragments of semideciduous forest, Eucalyptus plantations, sugarcane plantations, and pastures. With a total effort of 23,040 trap-nights and a capture success of 0.8%, we captured 177 rodents and marsupials belonging to eight species. The assemblage represented by these mammals is essentially composed of generalist species, which are common in degraded areas. Sugarcane plantations had the highest abundance, whereas pastures had the lowest species richness. Our results suggest that the loss of forest species can be related to the loss of native forest. The results also indicate that to improve the conservation value of agricultural landscapes, native forest fragments should be conserved, extensive monocultures should be avoided and agricultural impacts should be mitigated.
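
As a quick check on the figures above, capture success is simply captures divided by trap-nights; a two-line calculation using only the numbers reported in the abstract reproduces the 0.8% value.

```python
captures, trap_nights = 177, 23_040          # totals reported in the abstract
print(f"capture success = {100 * captures / trap_nights:.1f}%")   # -> 0.8%
```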

Relevance: 20.00%

Abstract:

Abstract Background Air pollution in São Paulo is constantly being measured by the State of São Paulo Environmental Agency; however, there is no information on the variation between places with different traffic densities. This study was intended to identify a gradient of exposure to traffic-related air pollution within different areas of São Paulo to provide information for future epidemiological studies. Methods We measured NO2 using Palmes' diffusion tubes at 36 sites on streets chosen to be representative of different road types and traffic densities in São Paulo in two one-week periods (July and August 2000). In each study period, two tubes were installed at each site, and two additional tubes were installed at 10 control sites. Results Average NO2 concentrations were related to the traffic density observed on the spot, to the number of vehicles counted, and to the traffic density strata defined by the city Traffic Engineering Company (CET). Average NO2 concentrations were 63 μg/m3 and 49 μg/m3 in the first and second periods, respectively. Dividing the sites by observed traffic density, we found: heavy traffic (n = 17): 64 μg/m3 (95% CI: 59–68 μg/m3); local traffic (n = 16): 48 μg/m3 (95% CI: 44–52 μg/m3) (p < 0.001). Conclusion The differences in NO2 levels between heavy- and local-traffic sites are large enough to suggest the use of a more refined classification of exposure in epidemiological studies in the city. The number of vehicles counted, the traffic density observed on the spot and the traffic density strata defined by the CET may be used as proxies for traffic exposure in São Paulo when more accurate measurements are not available.
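
A sketch of the kind of summary reported above: group means with 95% confidence intervals and a two-sample comparison between heavy-traffic and local-traffic sites. The NO2 readings below are made-up placeholders, not the study's measurements, and Welch's t-test merely stands in for whatever comparison the authors used.

```python
import numpy as np
from scipy import stats

# Hypothetical diffusion-tube readings (ug/m3); NOT the study data.
heavy = np.array([58, 66, 71, 63, 60, 69, 65, 62, 67, 64, 61, 68, 70, 59, 63, 66, 65], float)
local = np.array([44, 50, 47, 52, 46, 49, 51, 45, 48, 50, 47, 49, 46, 52, 48, 47], float)

def mean_ci(x, level=0.95):
    """Mean and t-based confidence interval for one group of sites."""
    m, se = x.mean(), stats.sem(x)
    lo, hi = stats.t.interval(level, len(x) - 1, loc=m, scale=se)
    return m, lo, hi

for name, group in (("heavy traffic", heavy), ("local traffic", local)):
    m, lo, hi = mean_ci(group)
    print(f"{name}: {m:.0f} ug/m3 (95% CI {lo:.0f}-{hi:.0f})")

print("Welch t-test p-value:", stats.ttest_ind(heavy, local, equal_var=False).pvalue)
```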

Relevance: 20.00%

Abstract:

Abstract Background Over the last few years, a number of researchers have investigated how to improve the reuse of crosscutting concerns. New possibilities have emerged with the advent of aspect-oriented programming, and many frameworks have been designed around the abstractions provided by this new paradigm. We call this type of framework a Crosscutting Framework (CF), as it usually encapsulates a generic and abstract design of one crosscutting concern. However, most of the proposed CFs employ white-box strategies in their reuse process, which require two main technical skills: (i) knowing the syntax details of the programming language used to build the framework and (ii) being aware of the architectural details of the CF and its internal nomenclature. Another problem is that the reuse process can only begin once the development process reaches the implementation phase, preventing it from starting earlier. Method To address these problems, we present in this paper a model-based approach for reusing CFs which shields application engineers from technical details, letting them concentrate on what the framework really needs from the application under development. To support our approach, two models are proposed: the Reuse Requirements Model (RRM) and the Reuse Model (RM). The former is used to describe the framework structure, and the latter is in charge of supporting the reuse process. As soon as the application engineer has filled in the RM, the reuse code can be generated automatically. Results We also present the results of two comparative experiments using two versions of a Persistence CF: the original one, whose reuse process is based on writing code, and the new one, which is model-based. The first experiment evaluated productivity during the reuse process, and the second evaluated the effort of maintaining applications developed with both CF versions. The results show an improvement of 97% in productivity; however, little difference was observed in the effort required to maintain the applications. Conclusion Using the approach presented here, it was possible to conclude that (i) the instantiation of CFs can be automated, and (ii) developer productivity is improved when a model-based instantiation approach is used.
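
The RRM/RM details are specific to the authors' approach and are not reproduced here; the sketch below only illustrates the general idea of model-based instantiation. A filled-in reuse model (a plain dict with invented field names) is turned into reuse code by a template, so the application engineer never touches the framework's internal nomenclature. The generated AspectJ-like binding and the framework name in it are hypothetical stand-ins.

```python
from string import Template

# Hypothetical "Reuse Model": what the application needs from a persistence
# crosscutting framework, expressed without any framework-internal names.
reuse_model = {
    "persistent_class": "Customer",
    "id_field": "customerId",
    "transactional_methods": ["save", "updateAddress"],
}

# Template standing in for the generated reuse code (an AspectJ-style binding aspect;
# the framework and annotation names are invented for illustration).
TEMPLATE = Template("""\
public aspect ${cls}PersistenceBinding extends PersistenceCrosscuttingFramework {
    declare @type: ${cls} : @Persistent(id = "${id_field}");
    pointcut transactionalOps(): ${pointcuts};
}
""")

def generate_reuse_code(model: dict) -> str:
    pointcuts = " || ".join(
        f'execution(* {model["persistent_class"]}.{m}(..))'
        for m in model["transactional_methods"]
    )
    return TEMPLATE.substitute(cls=model["persistent_class"],
                               id_field=model["id_field"],
                               pointcuts=pointcuts)

print(generate_reuse_code(reuse_model))
```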

Relevance: 20.00%

Abstract:

Dasyatis guttata has been a target of artisanal fisheries on the coast of Bahia (Northeast Brazil), mainly by "arraieira" (gillnet) and "grozeira" (bottom long-line), but to date there has been no stock assessment study. Reliable indices of abundance are among the key data required for such an assessment. The aims of the present work were to: (1) estimate the best predictor of relative abundance (catch-per-unit-of-effort, CPUE), examining whether catch (production, kg) was related to soak time of the gear, size of the gillnet or number of hooks, using generalized linear models (GLM); (2) estimate the annual CPUE (kg/hooks and kg/m) averaged by gear; and (3) assess the temporal variance of CPUE. Based on monthly sampling between January 2012 and January 2013, 222 landings by grozeira and 76 by arraieira were recorded at the two landing sites in Todos os Santos Bay, Bahia. A total of 14,550 kg (average = 44 kg/month) of D. guttata was captured. Models for both gears were highly significant (P < 0.0001). The analysis indicated that the most appropriate variables for CPUE analysis were the size of the gillnet (P < 0.001) and the number of hooks (P < 0.0001). Soak time of the gear was not significant for either gear (P = 0.4). The high residual deviance reflects the complexity of the relations between ecosystem factors and other fishery factors affecting relative abundance, which were not considered in this study. The average CPUE was 6.39 kg/100 hooks ± 8.89 for grozeira and 1.47 kg/100 m ± 1.66 for arraieira over the year. A Kruskal-Wallis test showed an effect of month on the mean grozeira CPUE (P < 0.001), but no effect (P = 0.096) on the mean arraieira CPUE. Grozeira CPUE values were highest in December and March, and lowest between May and August.
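
A GLM of this kind can be reproduced in outline with statsmodels. The sketch below is illustrative only: the landing records, column names and the Gaussian family are assumptions (the abstract does not state which error family the authors used), and the CPUE unit of kg per 100 hooks follows the abstract.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical landing records for the bottom long-line ("grozeira"); NOT the study data.
landings = pd.DataFrame({
    "catch_kg":    [12.0, 30.5, 8.2, 55.0, 21.3, 3.9, 40.1, 17.8],
    "hooks":       [150, 400, 100, 600, 250, 80, 500, 200],
    "soak_time_h": [6, 12, 5, 14, 8, 4, 13, 7],
})

# Catch modelled as a function of fishing-effort covariates, as in the abstract.
model = smf.glm("catch_kg ~ hooks + soak_time_h",
                data=landings,
                family=sm.families.Gaussian()).fit()
print(model.summary())

# Nominal CPUE expressed per 100 hooks, the unit used in the abstract.
landings["cpue_per_100_hooks"] = 100 * landings["catch_kg"] / landings["hooks"]
print(landings["cpue_per_100_hooks"].agg(["mean", "std"]))
```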

Relevance: 20.00%

Abstract:

This thesis covers sampling and analytical procedures for isocyanates (R-NCO) and amines (R-NH2), two kinds of chemicals frequently used in association with the polymeric material polyurethane (PUR). Exposure to isocyanates may result in respiratory disorders and dermal sensitisation, and they are one of the main causes of occupational asthma. Several of the aromatic diamines associated with PUR production are classified as suspected carcinogens. Hence, the presence of these chemicals in different exposure situations must be monitored. For the determination of isocyanates in air, the methodologies included derivatisation with the reagent di-n-butylamine (DBA) upon collection and subsequent determination using liquid chromatography (LC) with mass spectrometric (MS) detection. A user-friendly, solvent-free sampler for the collection of airborne isocyanates was developed as an alternative to the more cumbersome impinger-filter sampling technique. The combination of the DBA reagent with MS detection techniques revealed several new exposure situations for isocyanates, such as isocyanic acid during thermal degradation of PUR and urea-based resins. Further, a method for characterising isocyanates in the technical products used in PUR production was developed. This enabled the determination of isocyanates in air for which pure analytical standards are missing. Tandem MS (MS/MS) determination of isocyanates in air at levels below 10⁻⁶ of the threshold limit values was achieved. For the determination of amines, the analytical methods included derivatisation into pentafluoropropionic amide or ethyl carbamate ester derivatives and subsequent MS analysis. Several amines in biological fluids, serving as markers of exposure to either the amines themselves or the corresponding isocyanates, were determined by LC-MS/MS at the attomole (amol) level. Toluene diamine and related compounds were found in aqueous extraction solutions of flexible PUR foam products. In conclusion, this thesis demonstrates the usefulness of well-characterised analytical procedures and techniques for the determination of hazardous compounds. Without reliable and robust methodologies there is a risk that exposure levels will be underestimated or, even worse, that relevant compounds will be missed completely.

Relevance: 20.00%

Abstract:

Work carried out by: Maldonado, F.; Packard, T.; Gómez, M.; Santana Rodríguez, J. J.

Relevance: 20.00%

Abstract:

This PhD thesis is the result of my research activity over the last three years. My main research interest was the evolution of the mitochondrial genome (mtDNA) and its usefulness as a phylogeographic and phylogenetic marker at different taxonomic levels in different taxa of Metazoa. From a methodological standpoint, my main effort was dedicated to the sequencing of complete mitochondrial genomes, and the approach to whole-genome sequencing was based on the application of Long-PCR and shotgun sequencing. This research project is part of a larger project to sequence mtDNAs across many different metazoan taxa, and I mostly dedicated myself to sequencing and analyzing mtDNAs in selected taxa of bivalves and hexapods (Insecta). Available sequences of bivalve mtDNAs are particularly limited, and my study contributed to extending this sampling. Moreover, I used the bivalve Musculista senhousia as a model taxon to investigate the molecular mechanisms and the evolutionary significance of its aberrant mode of mitochondrial inheritance (Doubly Uniparental Inheritance). In insects, I focused my attention on the genus Bacillus (Insecta: Phasmida). A detailed phylogenetic analysis was performed to assess the phylogenetic relationships within the genus and to investigate the placement of Phasmida in the phylogenetic tree of Insecta. The main goal of this part of the study was to extend the taxonomic coverage of sequenced mtDNAs in basal insects, which had previously been only partially analyzed.

Relevance: 20.00%

Abstract:

The proper functioning of ion channels is a prerequisite for a normal cell, and disorders involving ion channels, or channelopathies, underlie many human diseases. Long QT syndromes (LQTS), for example, may arise from malfunctioning of the hERG channel, caused either by the binding of drugs or by mutations in the HERG gene. In the first part of this thesis I present a framework to investigate the mechanism of ion conduction through the hERG channel. The free energy profile governing the elementary steps of ion translocation in the pore was computed by means of umbrella sampling simulations. Compared with previous studies, we detected a different dynamic behavior: according to our data, hERG is more likely to mediate a conduction mechanism that has been referred to as "single-vacancy-like" by Roux and coworkers (2001), rather than a "knock-on" mechanism. The same protocol was applied to a model of hERG carrying the Gly628Ser mutation, found to be a cause of congenital LQTS. The results provided interesting insights into the reasons for the malfunctioning of the mutant channel. Since they have critical functions in the viral life cycle, viral ion channels, such as the M2 proton channel, are considered attractive targets for antiviral therapy. A deep knowledge of the mechanisms that the virus employs to survive in the host cell is of primary importance for the identification of new antiviral strategies. In the second part of this thesis I shed light on the role that M2 plays in the control of the electrical potential inside the virus, charge equilibration being a condition required to allow proton influx. Ion conduction through M2 was simulated using the metadynamics technique. Based on our results, we suggest that an anion-mediated cation-proton exchange, as well as a direct anion-proton exchange, could both contribute to explaining the activity of the M2 channel.
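
Umbrella sampling itself is generic enough to illustrate on a toy system. The sketch below is not the thesis' hERG calculation: it uses a 1D double-well in place of the pore's reaction coordinate, Metropolis moves in place of molecular dynamics, and a naive per-window unbiasing instead of the WHAM/MBAR combination one would use in practice. It only shows the basic mechanics: harmonic restraints confine the sampling to successive windows, and the biased histograms are corrected by subtracting the bias.

```python
import numpy as np

kT = 1.0
K_SPRING = 50.0

def potential(x):                      # toy 1D double-well standing in for the true PMF
    return (x**2 - 1.0)**2

def bias(x, center):                   # harmonic umbrella restraint for one window
    return 0.5 * K_SPRING * (x - center)**2

def sample_window(center, n_steps=20000, step=0.05):
    """Metropolis sampling of the biased potential U(x) + w_i(x)."""
    rng = np.random.default_rng(int(1000 * (center + 10)))
    x, samples = center, np.empty(n_steps)
    for i in range(n_steps):
        trial = x + rng.normal(scale=step)
        dU = (potential(trial) + bias(trial, center)) - (potential(x) + bias(x, center))
        if rng.random() < np.exp(-dU / kT):
            x = trial
        samples[i] = x
    return samples

# One restraint window per point along the reaction coordinate.
for center in np.linspace(-1.5, 1.5, 7):
    s = sample_window(center)
    hist, edges = np.histogram(s, bins=40, density=True)
    mids = 0.5 * (edges[:-1] + edges[1:])
    ok = hist > 0
    # Naive per-window unbiasing: F(x) = -kT ln P_biased(x) - w_i(x) + const.
    F = -kT * np.log(hist[ok]) - bias(mids[ok], center)
    print(f"window {center:+.2f}: local free-energy minimum near x = {mids[ok][np.argmin(F)]:+.2f}")
# A production calculation would stitch the windows together with WHAM or MBAR.
```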

Relevance: 20.00%

Abstract:

An extensive sample (2%) of private vehicles in Italy is equipped with a GPS device that periodically measures their position and dynamical state for insurance purposes. Access to this type of data makes it possible to develop theoretical and practical applications of great interest: the real-time reconstruction of the traffic state in a certain region, the development of accurate models of vehicle dynamics, and the study of the cognitive dynamics of drivers. For these applications to be possible, we first need the ability to reconstruct the paths taken by vehicles on the road network from the raw GPS data. These data are affected by positioning errors and the measurements are often far apart from each other (~2 km); for these reasons, the task of path identification is not straightforward. This thesis describes the approach we followed to reliably identify vehicle paths from this kind of low-sampling-rate data. The problem of matching data points to roads is solved with a maximum-likelihood Bayesian approach, while the identification of the path taken between two consecutive GPS measurements is performed with a purpose-built optimal routing algorithm based on the A* algorithm. The procedure was applied to an off-line urban data sample and proved to be robust and accurate. Future developments will extend the procedure to real-time execution and nation-wide coverage.
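
The routing step can be illustrated with a textbook A* search. The small graph below is invented (it is not the Italian road network or the thesis' routing code), and the heuristic is straight-line distance, which keeps the search admissible.

```python
import heapq
import math

# Toy road graph: node -> (x, y) coordinates, plus weighted edges (segment lengths).
coords = {"A": (0, 0), "B": (2, 0), "C": (2, 2), "D": (4, 1), "E": (5, 3)}
edges = {
    "A": [("B", 2.0), ("C", 3.0)],
    "B": [("D", 2.4), ("C", 2.0)],
    "C": [("D", 2.3), ("E", 3.2)],
    "D": [("E", 2.3)],
    "E": [],
}

def heuristic(u, v):                   # straight-line (admissible) distance estimate
    (x1, y1), (x2, y2) = coords[u], coords[v]
    return math.hypot(x2 - x1, y2 - y1)

def a_star(start, goal):
    """Return the least-cost path from start to goal and its cost, or None."""
    frontier = [(heuristic(start, goal), 0.0, start, [start])]
    best_cost = {start: 0.0}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        for nxt, w in edges[node]:
            new_cost = cost + w
            if new_cost < best_cost.get(nxt, math.inf):
                best_cost[nxt] = new_cost
                heapq.heappush(frontier,
                               (new_cost + heuristic(nxt, goal), new_cost, nxt, path + [nxt]))
    return None

print(a_star("A", "E"))   # -> (['A', 'C', 'E'], 6.2)
```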

Relevance: 20.00%

Abstract:

Summary PhD Thesis Jan Pollmann: This thesis focuses on global-scale measurements of light, reactive non-methane hydrocarbons (NMHC) in the volatility range from ethane to toluene, with a special focus on ethane, propane, isobutane, butane, isopentane and pentane. Even though they occur only at the ppt level (pmol mol-1) in the remote troposphere, these species can yield insight into key atmospheric processes. An analytical method was developed and subsequently evaluated to analyze NMHC from the NOAA ESRL cooperative air sampling network. Potential analytical interferences from other atmospheric trace gases (water vapor and ozone) were carefully examined. The analytical parameters accuracy and precision were analyzed in detail, and more than 90% of the data points were shown to meet the Global Atmosphere Watch (GAW) data quality objective. Trace gas measurements from 28 measurement stations were used to derive global atmospheric distribution profiles for four NMHC (ethane, propane, isobutane, butane). A close comparison of the derived ethane data with previously published reports showed that the northern hemispheric ethane background mixing ratio has declined by approximately 30% since 1990. No such change was observed for southern hemispheric ethane. The NMHC data and trace gas data supplied by NOAA ESRL were used to estimate local diurnally averaged hydroxyl radical (OH) mixing ratios by variability analysis. The variability-derived OH was found to be in good agreement with directly measured and modeled OH mixing ratios outside the tropics; tropical OH was on average two times higher than predicted by the model. Variability analysis was also used to assess the effect of chlorine radicals on atmospheric oxidation chemistry, and it was found that Cl is probably not of significant relevance on a global scale.

Relevance: 20.00%

Abstract:

In this study a new, fully non-linear approach to Local Earthquake Tomography is presented. Local Earthquake Tomography (LET) is a non-linear inversion problem that allows the joint determination of earthquake parameters and velocity structure from the arrival times of waves generated by local sources. Since the early developments of seismic tomography, several inversion methods have been developed to solve this problem in a linearized way. Within the framework of Monte Carlo sampling, we developed a new code based on the Reversible Jump Markov chain Monte Carlo sampling method (Rj-McMc). It is a trans-dimensional approach in which the number of unknowns, and thus the model parameterization, is treated as one of the unknowns. I show that our new code overcomes major limitations of linearized tomography, opening a new perspective in seismic imaging. Synthetic tests demonstrate that our algorithm is able to produce a robust and reliable tomography without the need for subjective a priori assumptions about starting models and parameterization. Moreover, it provides a more accurate estimate of the uncertainties in the model parameters, and is therefore well suited to investigating the velocity structure in regions that lack accurate a priori information. Synthetic tests also reveal that the absence of regularization constraints allows more information to be extracted from the observed data, and that the velocity structure can be detected even in regions where the ray density is low and standard linearized codes fail. I also present high-resolution Vp and Vp/Vs models for two extensively investigated regions: the Parkfield segment of the San Andreas Fault (California, USA) and the area around the Alto Tiberina fault (Umbria-Marche, Italy). In both cases, the models obtained with our code show a substantial improvement in the data fit compared with models obtained from the same data sets with linearized inversion codes.
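
The full Rj-McMc sampler is trans-dimensional; the sketch below shows only the fixed-dimension Metropolis core on a toy 1D travel-time problem (two unknown slownesses, synthetic data with invented values), leaving out the birth/death moves that make the parameterization itself an unknown. It is not the thesis' code, only the basic sampling loop.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy forward problem: travel time = path-length matrix G @ slowness vector (all values invented).
G = np.array([[10.0,  0.0],
              [10.0,  5.0],
              [ 6.0,  9.0],
              [ 2.0, 12.0]])                 # km travelled in each of two layers, per ray
true_slowness = np.array([0.20, 0.35])       # s/km
sigma = 0.05                                 # observational noise (s)
t_obs = G @ true_slowness + rng.normal(0.0, sigma, size=G.shape[0])

def log_likelihood(s):
    resid = t_obs - G @ s
    return -0.5 * np.sum((resid / sigma) ** 2)

def log_prior(s):
    return 0.0 if np.all((s > 0.05) & (s < 1.0)) else -np.inf   # uniform box prior

# Fixed-dimension Metropolis sampler (Rj-McMc adds birth/death moves on top of this core).
n_iter, step = 20000, 0.01
s = np.array([0.5, 0.5])
log_post = log_likelihood(s) + log_prior(s)
samples = []
for _ in range(n_iter):
    proposal = s + rng.normal(0.0, step, size=2)
    lp = log_likelihood(proposal) + log_prior(proposal)
    if np.log(rng.random()) < lp - log_post:
        s, log_post = proposal, lp
    samples.append(s.copy())

samples = np.array(samples[n_iter // 2:])    # discard burn-in
print("posterior mean slowness:", samples.mean(axis=0))
print("posterior std:          ", samples.std(axis=0))
```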

Relevance: 20.00%

Abstract:

For the detection of hidden objects by low-frequency electromagnetic imaging the Linear Sampling Method works remarkably well despite the fact that the rigorous mathematical justification is still incomplete. In this work, we give an explanation for this good performance by showing that in the low-frequency limit the measurement operator fulfills the assumptions for the fully justified variant of the Linear Sampling Method, the so-called Factorization Method. We also show how the method has to be modified in the physically relevant case of electromagnetic imaging with divergence-free currents. We present numerical results to illustrate our findings, and to show that similar performance can be expected for the case of conducting objects and layered backgrounds.
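
For reference, the standard far-field formulation of the Factorization Method (in the form usually attributed to Kirsch, stated here from the general literature rather than from this paper) characterizes the scatterer D through a Picard-type range test on the eigensystem of the measurement operator.

```latex
% z lies in the scatterer D iff the test function phi_z is in the range of (F*F)^{1/4};
% Picard's criterion turns this into a convergence test on the eigensystem of F.
\[
  z \in D
  \;\Longleftrightarrow\;
  \phi_z \in \mathcal{R}\!\left((F^{*}F)^{1/4}\right)
  \;\Longleftrightarrow\;
  \sum_{j} \frac{\left|\langle \phi_z, \psi_j \rangle\right|^{2}}{|\lambda_j|} < \infty ,
  \qquad
  \phi_z(\hat{x}) = e^{-\mathrm{i} k\, \hat{x}\cdot z},
\]
% where (lambda_j, psi_j) is an eigensystem of the far-field operator F; in practice the
% reciprocal of the truncated series is plotted as an indicator function for D.
```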