954 results for Presence-only data


Relevância:

90.00%

Publicador:

Resumo:

Modelling species distributions with presence data from atlases, museum collections and databases is challenging. In this paper, we compare seven procedures for generating pseudo-absence data, which are in turn used to fit GLM logistic-regression models when reliable absence data are not available. We use pseudo-absences selected randomly or by means of presence-only methods (ENFA and MDE) to model the distribution of a threatened endemic Iberian moth species (Graellsia isabelae). The results show that the pseudo-absence selection method greatly influences the percentage of explained variability, the scores of the accuracy measures and, most importantly, the degree of constraint in the estimated distribution. As pseudo-absences are extracted from environmental regions further from the optimum established by the presence data, the models obtain better accuracy scores and over-prediction increases. When variables other than environmental ones influence the distribution of the species (i.e., a non-equilibrium state) and precise information on absences is non-existent, the random selection of pseudo-absences, or their selection from localities environmentally similar to the presence data, generates the most constrained predictive distribution maps, because pseudo-absences can then be located within environmentally suitable areas. This study shows that if we do not have reliable absence data, the method of pseudo-absence selection strongly conditions the obtained model, generating different predictions along the gradient between potential and realized distributions.
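As a rough illustration of the modelling setup described above (not the paper's actual data or procedures), the sketch below draws random pseudo-absences from a synthetic one-variable study area and fits a logistic GLM by gradient ascent; all names and values are invented.

```python
import math, random

random.seed(0)

# Hypothetical study area: each cell has one environmental variable
# (a suitability gradient from 0 to 1).
cells = [i / 200.0 for i in range(200)]

# Presences cluster where the variable is high (species optimum).
presences = [c for c in cells if c > 0.6][:40]

# One strategy compared in the paper: select pseudo-absences at random
# from the whole study area (they may fall inside suitable habitat).
pseudo_absences = random.sample(cells, 40)

# Fit a logistic GLM  P(presence) = 1 / (1 + exp(-(a + b*x)))
# by simple gradient ascent on the log-likelihood.
x = presences + pseudo_absences
y = [1] * len(presences) + [0] * len(pseudo_absences)
a, b = 0.0, 0.0
for _ in range(5000):
    ga = gb = 0.0
    for xi, yi in zip(x, y):
        p = 1.0 / (1.0 + math.exp(-(a + b * xi)))
        ga += yi - p
        gb += (yi - p) * xi
    a += 0.05 * ga / len(x)
    b += 0.05 * gb / len(x)

def predict(xi):
    return 1.0 / (1.0 + math.exp(-(a + b * xi)))

# The fitted model ranks high-variable cells as more suitable.
print(predict(0.9) > predict(0.1))  # True
```

Selecting pseudo-absences only from cells environmentally far from the presences would, as the abstract notes, sharpen the apparent discrimination at the cost of over-prediction.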

Relevância:

90.00%

Publicador:

Resumo:

We apply the Coexistence Approach (CoA) to reconstruct mean annual precipitation (MAP), mean annual temperature (MAT), mean temperature of the warmest month (MTWA) and mean temperature of the coldest month (MTCO) at 44 pollen sites on the Qinghai–Tibetan Plateau. The modern climate ranges of the taxa are obtained (1) from county-level presence/absence data and (2) from data on the optimum and range of each taxon from Lu et al. (2011). The CoA based on the optimum and range data yields better predictions of observed climate parameters at the pollen sites than that based on the county-level data. The presence of arboreal pollen, most of which is derived from outside the region, distorts the reconstructions. More reliable reconstructions are obtained using only the non-arboreal component of the pollen assemblages. The root mean-squared error (RMSE) of the MAP reconstructions is smaller than the RMSE of MAT, MTWA and MTCO, suggesting that precipitation gradients are the most important control of vegetation distribution on the Qinghai–Tibetan Plateau. Our results show that CoA could be used to reconstruct past climates in this region, although in areas characterized by open vegetation the most reliable estimates will be obtained by excluding possible arboreal contaminants.
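The core of the Coexistence Approach is an interval intersection: the reconstructed climate parameter is the range in which the modern tolerances of all taxa in the assemblage overlap. A minimal sketch, with invented taxon tolerances for MAT:

```python
# Each taxon in a fossil pollen assemblage has a modern tolerance
# interval for a climate parameter (here MAT in degrees C; the values
# are invented for illustration, not from Lu et al. 2011).
tolerances = {
    "Artemisia":      (-8.0, 18.0),
    "Chenopodiaceae": (-5.0, 22.0),
    "Cyperaceae":     (-12.0, 12.0),
}

def coexistence_interval(ranges):
    """Intersection of all taxon tolerance intervals (the CoA estimate)."""
    lo = max(r[0] for r in ranges)
    hi = min(r[1] for r in ranges)
    if lo > hi:
        raise ValueError("no coexistence interval: taxa do not overlap")
    return lo, hi

print(coexistence_interval(tolerances.values()))  # (-5.0, 12.0)
```

An exotic (e.g. far-transported arboreal) taxon with a mismatched tolerance range would narrow or destroy the interval, which is why excluding such contaminants improves the reconstructions.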

Relevância:

90.00%

Publicador:

Resumo:

Since its synthesis over 48 years ago, rifampicin has been extensively studied. The literature reports the characterization of thermal events for rifampicin in a nitrogen atmosphere, but not in a synthetic air atmosphere. This work contributes to the thermal study of rifampicin through thermal (TG/DTG, DTA, DSC and DSC-photovisual) and non-thermal (HPLC, XRPD, IR-FTIR, PCA) techniques, applied also to its main degradation products (rifampicin quinone, rifampicin N-oxide and 3-formylrifamycin). The rifampicin studied was characterized as polymorphic form II by DSC, IR and XRPD. TG curves for rifampicin in synthetic air showed higher thermal stability than those in N2, as judged by the initial temperature (Ti) and activation energy (Ea). Under N2, overlapping melting and recrystallization events were observed, with weight loss in the TG curve suggesting concomitant decomposition. DSC-photovisual images showed no melting event and revealed darkening of the sample during analysis. The DTA curve in synthetic air was visibly different from the DTA and DSC curves under N2, suggesting the absence of recrystallization and melting, or the presence of decomposition only. The IR-FTIR results, together with the PCA, HPLC and thermal data, suggest that rifampicin melts with concomitant decomposition in N2, and that melting and recrystallization do not occur in synthetic air. The degradation products studied in an air atmosphere showed no melting event; after loss of water and/or solvent on heating, decomposition started simultaneously, with Ti varying among events. Kinetic parameters obtained by the Ozawa, Coats-Redfern, Madhusudanan, Van Krevelen and Horowitz-Metzger methods, in synthetic air and/or N2, showed rifampicin to be more stable than its degradation products, with good agreement among the different kinetic models.
In this way we contribute information that may assist studies of pharmaceutical compatibility and stability of these substances.
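The Ozawa method mentioned above estimates the activation energy from how the decomposition temperature shifts with heating rate, via the approximately linear relation log10(β) ≈ C − 0.4567·Ea/(R·T). A sketch on synthetic TG data (all values invented, not the rifampicin results):

```python
import math

R = 8.314          # gas constant, J/(mol K)
Ea_true = 100_000  # assumed activation energy, J/mol (synthetic)
C = 12.0           # arbitrary intercept of the Ozawa plot

# Synthetic TG data: decomposition temperature T (K) observed at each
# heating rate beta (K/min), generated from the Ozawa relation
#   log10(beta) = C - 0.4567 * Ea / (R * T)
betas = [5.0, 10.0, 20.0, 40.0]
temps = [0.4567 * Ea_true / (R * (C - math.log10(b))) for b in betas]

# Recover Ea from the least-squares slope of log10(beta) vs 1/T.
xs = [1.0 / t for t in temps]
ys = [math.log10(b) for b in betas]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
Ea_est = -slope * R / 0.4567
print(round(Ea_est))  # recovers the assumed 100000 J/mol
```

With real TG curves the points scatter around the line, and the fit quality is what the "good correlation between the different models" refers to.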

Relevância:

90.00%

Publicador:

Resumo:

The objective of this work was to investigate the factors that inhibit the use of environmental techniques in the gas stations of the city of Natal/RN. A survey based on a questionnaire was used as the research instrument, with a non-probabilistic convenience sample. Data were collected by direct application of the questionnaire to the managers or assistant managers of the gas stations, according to their availability or presence, in all regions of Natal (North, South, East and West). According to ANP data from September 2005, the population comprised 111 stations, of which 86 were sampled. The data were analysed with Excel and Statistica version 5.0 for Windows, in two parts: descriptive analysis and cluster analysis. The results showed that most interviewees are between 30 and 39 years of age and have completed secondary education; they declared having between little and reasonable knowledge of the use of Clean Technology (CT) in gas stations; and only a small proportion reported having much knowledge of the CONAMA resolutions established for gas stations. Of the stations surveyed, the majority are national brands (76.7%); the environmental practices most used in the stations are selective collection of used or contaminated oil, and ecological tanks coated with reinforced fibreglass. A considerable share of the interviewees (33.8%) reported that planning of future actions is never carried out; most interviewees (84.9%) declared that their employees have from no training to a reasonable level of training to deal with problems that compromise the environment; and the majority of the stations (72.1%) have been operating for more than six years.
It was observed that almost all interviewees (96.5%) rate the implementation of CT in gas stations as important or very important, and the great majority (82.1%) rate implementing these technologies as easy or very easy. The cluster analysis identified two groupings (for both the barrier and the benefit variables), with homogeneity within each cluster and heterogeneity between clusters. In practice, everything is important or very important in the opinion of the interviewees; only a small but significant difference separates them into clusters.

Relevância:

90.00%

Publicador:

Resumo:

A meta-analysis was used to evaluate the performance of piglets fed diets containing spray-dried blood plasma (SDBP) in the post-weaning period, without imposed sanitary challenge. Piglets face normal challenges in the post-weaning period, such as environmental stress and the substitution of a liquid diet with a solid one; references involving sanitary challenges were disregarded, and only data on normal, expected challenges were considered. Data were obtained from indexed journals, with information extracted from the materials, methods and results sections of pre-selected scientific articles. First, the database was analyzed graphically to observe the distribution of the data and the presence of outliers; afterwards, correlation and variance-covariance analyses were carried out. The database contained a total of 23 articles. The average initial weight of the piglets was 8.02 kg (4.00-9.28 kg) and the average initial age was 27 days (14-32 days). The average duration of feeding diets containing SDBP was 9 days (6-28 days). SDBP increased feed conversion by 20.2% (P<0.05) during the initial period, and feed conversion over the total period was 10.2% higher (P<0.05) for animals fed SDBP. Average daily weight gain and daily feed intake were not affected (P>0.05) over the entire period, but average daily gain was higher (P<0.05) for animals fed SDBP during the initial period. The initial age at supplementation influenced the average daily weight gain and average daily feed intake of animals fed SDBP, with better results than for animals up to 35 days of age fed diets without SDBP supplementation. In the early post-weaning period, for piglets weaned up to 35 days of age, SDBP supplementation positively influenced average daily weight gain and feed conversion. © 2013 Elsevier B.V.
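A meta-analysis of this kind typically pools per-study effects using inverse-variance weights. A minimal fixed-effect sketch with invented study values (not the SDBP data):

```python
# Hypothetical per-study effects: % change in average daily gain with
# SDBP, paired with each study's variance (all values invented).
studies = [(12.0, 4.0), (8.0, 2.0), (15.0, 9.0), (10.0, 3.0)]

# Fixed-effect meta-analysis: inverse-variance weighted mean effect.
weights = [1.0 / v for _, v in studies]
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
se = (1.0 / sum(weights)) ** 0.5  # standard error of the pooled effect
print(round(pooled, 2), round(se, 2))
```

Precise studies (small variance) pull the pooled estimate toward themselves, which is why the extraction of methods and results sections matters for the weighting.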

Relevância:

90.00%

Publicador:

Resumo:

The need for biodiversity conservation is increasing at a rate much faster than the acquisition of knowledge of biodiversity, such as descriptions of new species and mapping of species distributions. As global changes are winning the race against the acquisition of knowledge, many researchers resort to surrogate groups to aid conservation decisions. Reductions in taxonomic and numerical resolution are also desirable, because they could allow more rapid acquisition of knowledge with less effort, if little important information is lost. In this study, we evaluated the congruence among 22 taxonomic groups sampled in a tropical forest in the Amazon basin. Our aim was to evaluate whether any of these groups could be used as surrogates for the others in monitoring programs. We also evaluated whether the taxonomic or numerical resolution of possible surrogates could be reduced without greatly reducing the overall congruence. Congruence among plant groups was high, whereas congruence among most animal groups was very low, except for anurans, for which congruence values were only slightly lower than for plants. Lianas (Bignoniaceae) were the group with the highest congruence, even using genus-level presence-absence data. The congruence among groups was related to environmental factors, specifically the clay and phosphorus contents of the soil. Several groups showed strong spatial clumping, but this was unrelated to the congruence among groups. The high degree of congruence of lianas with the other groups suggests that they may be a reasonable surrogate group, mainly for the other plant groups analyzed, if soil data are not available. Although lianas are difficult to count and identify, the number of studies on the ecology of lianas is increasing. Most of these studies have concluded that lianas are increasing in abundance in tropical forests.
In addition to the high congruence, lianas are worth monitoring in their own right because they are sensitive to global warming and the increasing frequency and severity of droughts in tropical regions. Our findings suggest that the use of data on surrogate groups with relatively low taxonomic and numerical resolutions can be a reliable shortcut for biodiversity assessments, especially in megadiverse areas with high rates of habitat conversion, where the lack of biodiversity knowledge is pervasive. (c) 2012 Elsevier Ltd. All rights reserved.
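Congruence between taxonomic groups can be quantified in a Mantel-like way, by correlating the groups' site-to-site dissimilarity patterns. A minimal sketch with invented site scores (not the study's data or its exact congruence measure):

```python
# Two hypothetical taxonomic groups sampled at 4 sites, each site's
# composition summarised as a single ordination score (invented).
group_a = [0.1, 0.4, 0.5, 0.9]   # e.g. lianas
group_b = [0.2, 0.35, 0.6, 0.8]  # e.g. another plant group

def pairwise_dist(scores):
    """Upper triangle of the site-by-site distance matrix."""
    n = len(scores)
    return [abs(scores[i] - scores[j])
            for i in range(n) for j in range(i + 1, n)]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Mantel-style congruence: correlation between the two groups'
# distance matrices; high r means the groups co-vary across sites.
r = pearson(pairwise_dist(group_a), pairwise_dist(group_b))
print(round(r, 2))
```

In practice, significance would be assessed by permuting site labels, and the distances would come from full composition data rather than a single score.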

Relevância:

90.00%

Publicador:

Resumo:

With most clinical trials, missing data present a statistical problem in evaluating a treatment's efficacy. There are many methods commonly used to handle missing data; however, these methods leave room for bias to enter the study. This thesis was a secondary analysis of data taken from TIME, a phase 2 randomized clinical trial conducted to evaluate the safety and the effect of the administration timing of bone marrow mononuclear cells (BMMNC) for subjects with acute myocardial infarction (AMI).
We evaluated the effect of missing data by comparing the variance inflation factor (VIF) of the effect of therapy between all subjects and only the subjects with complete data. Through the general linear model, an unbiased solution was derived for the VIF of the treatment's efficacy, using the weighted least squares method to incorporate missing data. Two groups were identified from the TIME data: 1) all subjects and 2) subjects with complete data (baseline and follow-up measurements). After the general solution was found for the VIF, it was migrated to Excel 2010 to evaluate the TIME data, and the resulting values from the two groups were compared to assess the effect of missing data.
The VIF values from the TIME study were considerably smaller in the group that included subjects with missing data. By design, we varied the correlation factor in order to evaluate the VIFs of both groups. As the correlation factor increased, the VIF values increased at a faster rate in the group with only complete data. Furthermore, while varying the correlation factor, the number of subjects with missing data was also varied to see how missing data affect the VIF. When the number of subjects with only baseline data was increased, we saw a sharp rate increase in the VIF values of the complete-data group, while the group including missing data saw a steady and consistent increase in the VIF. The same was seen when we varied the group with follow-up data only.
This essentially showed that the VIFs increase steadily when missing data are not ignored. When missing data are ignored, as in our comparison group, the VIF values increase sharply as the correlation increases.
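The variance inflation factor itself is straightforward to compute: for a predictor, VIF = 1/(1 − R²), where R² comes from regressing that predictor on the others. A minimal two-predictor sketch with invented values (not the TIME data or the thesis's weighted least squares solution):

```python
# Two predictors measured on the same subjects (values invented).
x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [1.1, 1.9, 3.2, 3.9, 5.1]  # nearly collinear with x1

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)

# With only two predictors, the R^2 of one regressed on the other is
# r^2, so VIF = 1 / (1 - r^2); VIF near 1 means little inflation.
r = pearson(x1, x2)
vif = 1.0 / (1.0 - r * r)
print(vif > 1.0)  # True: correlated predictors inflate the variance
```

As the correlation approaches 1, the denominator approaches 0 and the VIF explodes, which mirrors the sharp increase the thesis observed in the complete-data group.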

Relevância:

90.00%

Publicador:

Resumo:

Spatial data are being increasingly used in a wide range of disciplines, a fact clearly reflected in the recent trend to add spatial dimensions to the conventional social sciences. Economics is by no means an exception. On one hand, spatial data are indispensable to many branches of economics such as economic geography, new economic geography, and spatial economics. On the other hand, macroeconomic data are becoming available at ever more micro levels, so that academics and analysts take it for granted that they are available not only for an entire country, but also at more detailed levels (e.g. state, province, and even city). The term ‘spatial economic data’ as used in this report refers to any economic data that have spatial information attached. This spatial information can be the coordinates of a location at best, or a less precise place name such as is used to describe administrative units. Obviously, the latter cannot be used without a map of the corresponding administrative units; maps are therefore indispensable to the analysis of spatial economic data without absolute coordinates. The aim of this report is to review the availability of spatial economic data pertaining specifically to Laos, and the academic studies conducted on such data up to the present. With regard to availability, efforts have been made to identify not only data made available as geographic information systems (GIS) data, but also data with sufficient place labels attached. The rest of the report is organized as follows. Section 2 reviews the maps available for Laos, both in hard copy and in editable electronic formats. Section 3 summarizes the spatial economic data currently available for Laos, and Section 4 reviews and categorizes the many economic studies utilizing these spatial data. Section 5 gives examples of some of the spatial industrial data collected for this research.
Section 6 provides a summary of the findings and gives some indication of the direction of the final report due for completion in fiscal 2010.

Relevância:

90.00%

Publicador:

Resumo:

Machine learning techniques are used for extracting valuable knowledge from data. Nowadays, these techniques are becoming even more important due to the evolution in data acquisition and storage, which is leading to data with different characteristics that must be exploited. Therefore, advances in data collection must be accompanied by advances in machine learning techniques to solve the new challenges that might arise, in both academic and real applications. There are several machine learning techniques, depending on both the data characteristics and the purpose. Unsupervised classification, or clustering, is one of the best-known techniques when data lack supervision (unlabeled data) and the aim is to discover data groups (clusters) according to their similarity. On the other hand, supervised classification needs data with supervision (labeled data), and its aim is to make predictions about the labels of new data. The presence of data labels is a very important characteristic that guides not only the learning task but also other related tasks such as validation. When only some of the available data are labeled whereas the others remain unlabeled (partially labeled data), neither clustering nor supervised classification can be used. This scenario, which is becoming common nowadays because of the cost or impracticality of the labeling process, is tackled with semi-supervised learning techniques. This thesis focuses on the branch of semi-supervised learning closest to clustering, i.e., discovering clusters using the available labels as support to guide and improve the clustering process. Another important data characteristic, different from the presence of data labels, is the relevance or not of data features. Data are characterized by features, but it is possible that not all of them are relevant, or equally relevant, for the learning process.
A recent clustering tendency, related to data relevance and called subspace clustering, claims that different clusters might be described by different feature subsets. This differs from traditional solutions to the data relevance problem, where a single feature subset (usually the complete set of original features) is found and used to perform the clustering process. The proximity of this work to clustering leads to the first goal of this thesis. As commented above, clustering validation is a difficult task due to the absence of data labels. Although there are many indices that can be used to assess the quality of clustering solutions, these validations depend on the clustering algorithms and data characteristics. Hence, in the first goal, three known clustering algorithms are used to cluster data with outliers and noise, in order to critically study how some of the best-known validation indices behave. The main goal of this work is, however, to combine semi-supervised clustering with subspace clustering to obtain clustering solutions that can be correctly validated by using either known indices or expert opinions. Two different algorithms are proposed, from different points of view, to discover clusters characterized by different subspaces. For the first algorithm, the available data labels are used to search for subspaces first, before searching for clusters. This algorithm assigns each instance to only one cluster (hard clustering) and is based on mapping the known labels to subspaces using supervised classification techniques. The subspaces are then used to find clusters using traditional clustering techniques. The second algorithm uses the available data labels to search for subspaces and clusters at the same time in an iterative process. This algorithm assigns each instance to each cluster with a membership probability (soft clustering) and is based on integrating the known labels and the search for subspaces into a model-based clustering approach.
The different proposals are tested using real and synthetic databases, and comparisons to other methods are included when appropriate. Finally, as an example of a real and current application, different machine learning techniques, including one of the proposals of this work (the most sophisticated one), are applied to one of the most challenging biological problems nowadays: human brain modeling. Specifically, expert neuroscientists do not agree on a neuron classification for the brain cortex, which makes impossible not only any modeling attempt but also day-to-day work, in the absence of a common way to name neurons. Therefore, machine learning techniques may help to reach an accepted solution to this problem, which could be an important milestone for future research in neuroscience.
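A simple way to let a few labels guide clustering, in the spirit of the semi-supervised branch described above (though much simpler than the thesis's subspace algorithms), is seeded k-means: centroids are initialised from the labelled instances rather than at random. A toy sketch with invented 1-D data:

```python
# 1-D toy data: two groups, with one labelled point per group.
data = [1.0, 1.2, 0.8, 1.1, 5.0, 5.2, 4.8, 5.1]
labelled = {0: 0, 4: 1}  # index -> known cluster label

# Seeded k-means: initialise one centroid per known label, then run
# the usual assign/update iterations over all (mostly unlabelled) data.
centroids = [data[i] for i, lab in sorted(labelled.items(),
                                          key=lambda kv: kv[1])]
for _ in range(10):
    clusters = [[], []]
    for x in data:
        k = min((abs(x - c), j) for j, c in enumerate(centroids))[1]
        clusters[k].append(x)
    centroids = [sum(c) / len(c) for c in clusters]

print([round(c, 2) for c in centroids])  # centroids near 1.0 and 5.0
```

The labels remove the arbitrariness of random initialisation and fix the correspondence between cluster indices and known classes, which also makes validation against the labels meaningful.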

Relevância:

90.00%

Publicador:

Resumo:

Causal relationships in complex systems are known to be circular rather than linear: a particular result is not produced by a single cause, but rather by both positive and negative feedback processes. However, although interpreting systemic interrelationships requires a language formed by circles, this language has only been developed at the diagram level, not from an axiomatic point of view. The first difficulty encountered when analyzing any complex system is that usually the only data available relate to the various variables, so the first objective was to transform these data into cause-and-effect relationships. Once this initial step was taken, our discrete chaos theory could be applied by finding the causal circles that form part of the system attractor and allow its behavior to be interpreted. As an application of the technique presented, we analyzed the system associated with the transcription factors of inflammatory diseases.
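Finding the causal circles of a system amounts to enumerating the cycles of a directed cause-effect graph. A minimal sketch with a hypothetical three-variable feedback circle (exhaustive DFS, suitable for small systems only):

```python
# Hypothetical cause-effect relations among variables of a system:
# an edge (a, b) means "a influences b".
edges = [("A", "B"), ("B", "C"), ("C", "A"),  # a feedback circle
         ("C", "D")]                          # a non-circular effect

def find_cycles(edges):
    """Return the distinct simple cycles found by DFS (as tuples),
    each reported once in a canonical rotation."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
    cycles = []
    def dfs(node, path):
        for nxt in graph.get(node, []):
            if nxt in path:
                cycles.append(tuple(path[path.index(nxt):]))
            else:
                dfs(nxt, path + [nxt])
    for start in graph:
        dfs(start, [start])
    seen, unique = set(), []
    for cyc in cycles:
        i = cyc.index(min(cyc))
        canon = cyc[i:] + cyc[:i]   # rotate to start at smallest node
        if canon not in seen:
            seen.add(canon)
            unique.append(canon)
    return unique

print(find_cycles(edges))  # [('A', 'B', 'C')]
```

Variable D takes part in no circle, so under this view it would not belong to the attractor's feedback structure.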

Relevância:

90.00%

Publicador:

Resumo:

Evolution of otter (Lutra lutra L.) distribution in the Iberian Peninsula: models at different scales and their projection through space and time. Abstract: The Eurasian otter has been surveyed four times in the Iberian Peninsula (1990-2008). In 2003, a distribution model for the otter, based on presence/absence data from the survey published in 1998, was published. This type of model has advantages that can make it a key element in recovery strategies for the otter, and for other species as well, but only if its reliability and its capability to anticipate trends in species distributions are validated. The present thesis compares the model's predictions with the 2008 data in order to identify potential mismatch areas.
Results suggest that, although the distribution model for the otter was based on data from 1998 and does not explicitly include biological mechanisms, it managed to capture the essence of the species-environment relationship, which translated into good predictive performance for the species' distribution in Spain a decade after the model's construction.

Relevância:

90.00%

Publicador:

Resumo:

Transferring distribution models between different geographical areas may be problematic, as the performance of models outside their original scope is hard to predict. A modelling procedure is needed that gets the gist of the environmental descriptors of a distribution area, without either overfitting to the training data or overestimating the species’ distribution potential. We tested the transferability power of the favourability function, a generalized linear model, on the distribution of the Iberian desman (Galemys pyrenaicus) in the Iberian territories of Portugal and Spain. We also tested the effects of two of the main potential constraints on model transferability: the analysed ranges of the predictor variables, and the completeness of the species distribution data. We modelled 10 km × 10 km presence/absence data from Portugal and Spain separately, extrapolated each model to the other country, and compared predictions with observations. The Spanish model, despite arguably containing more false absences, showed good predictive ability in Portugal. The Portuguese model, whose predictors ranged between only a subset of the values observed in Spain, overestimated desman distribution when transferred. We discuss possible reasons for this differential model behaviour, and highlight the importance of this kind of models for prediction and conservation applications.
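Transferability of this kind is typically judged by confronting the transferred model's predictions with the observations in the other country, e.g. via sensitivity and the rate of over-prediction at a threshold. A minimal sketch with invented cell values (not the desman data):

```python
# Hypothetical 10 km x 10 km cells in the "other" country: model
# prediction (favourability) vs observed presence (1) / absence (0).
predicted = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
observed  = [1,   1,   0,   1,   0,   1,   0,   0]

threshold = 0.5
tp = sum(p >= threshold and o == 1 for p, o in zip(predicted, observed))
fp = sum(p >= threshold and o == 0 for p, o in zip(predicted, observed))
fn = sum(p < threshold and o == 1 for p, o in zip(predicted, observed))

sensitivity = tp / (tp + fn)     # share of presences correctly predicted
overprediction = fp / (fp + tp)  # predicted cells lacking the species
print(sensitivity, overprediction)  # 0.75 0.25
```

A model transferred beyond the range of its training predictors would tend to show a high over-prediction rate, as the Portuguese model did in Spain.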

Relevância:

90.00%

Publicador:

Resumo:

Logistic regression is a statistical tool widely used for predicting species’ potential distributions from presence/absence data and a set of independent variables. However, logistic regression equations compute probability values based not only on the values of the predictor variables but also on the relative proportion of presences and absences in the dataset, which does not adequately describe the environmental favourability for or against species presence. A few strategies have been used to circumvent this, but they usually imply an alteration of the original data or the discarding of potentially valuable information. We propose a way to obtain from logistic regression an environmental favourability function whose results are not affected by an uneven proportion of presences and absences. We tested the method on the distribution of virtual species in an imaginary territory. The favourability models yielded similar values regardless of the variation in the presence/absence ratio. We also illustrate the method with the distribution of the Pyrenean desman (Galemys pyrenaicus) in Spain. The favourability model yielded more realistic potential distribution maps than the logistic regression model. Favourability values can be regarded as the degree of membership of the fuzzy set of sites whose environmental conditions are favourable to the species, which enables applying the rules of fuzzy logic to distribution modelling. They also allow direct comparisons between models for species with different presence/absence ratios in the study area. This makes them more useful for estimating the conservation value of areas, designing ecological corridors, or selecting appropriate areas for species reintroductions.
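The favourability transformation removes the effect of the presence/absence ratio n1/n0 from the logistic probability P: F = [P/(1−P)] / [n1/n0 + P/(1−P)]. A minimal sketch:

```python
def favourability(p, n1, n0):
    """Convert a logistic-regression probability p into favourability,
    factoring out the presence (n1) / absence (n0) ratio."""
    odds = p / (1.0 - p)
    return odds / (n1 / n0 + odds)

# At the species' prevalence (p equal to n1/(n1+n0)), favourability is
# exactly 0.5, whatever the presence/absence ratio:
print(favourability(0.2, 20, 80))   # 0.5
print(favourability(0.05, 5, 95))   # ~0.5
```

Because F = 0.5 always marks "conditions no better or worse than average for the species", favourability maps for species with very different prevalences can be compared directly, unlike raw logistic probabilities.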

Relevance:

80.00%

Publisher:

Abstract:

Purpose: To ascertain the effectiveness of object-centered three-dimensional representations for the modeling of corneal surfaces. Methods: Three-dimensional (3D) surface decompositions into series of basis functions, including (i) spherical harmonics, (ii) hemispherical harmonics, and (iii) 3D Zernike polynomials, were considered and compared to the traditional viewer-centered representation of two-dimensional (2D) Zernike polynomial expansion for a range of retrospective videokeratoscopic height data from three clinical groups. The data were collected using the Medmont E300 videokeratoscope. The groups included 10 normal corneas with corneal astigmatism less than −0.75 D, 10 astigmatic corneas with corneal astigmatism between −1.07 D and 3.34 D (Mean = −1.83 D, SD = ±0.75 D), and 10 keratoconic corneas. Only data from the right eyes of the subjects were considered. Results: All object-centered decompositions led to significantly better fits to corneal surfaces (in terms of the RMS error values) than the corresponding 2D Zernike polynomial expansions with the same number of coefficients, for all considered corneal surfaces, corneal diameters (2, 4, 6, and 8 mm), and model orders (4th to 10th radial orders). The best results (smallest RMS fit error) were obtained with the spherical harmonics decomposition, which led to about a 22% reduction in the RMS fit error, as compared to the traditional 2D Zernike polynomials. Hemispherical harmonics and the 3D Zernike polynomials reduced the RMS fit error by about 15% and 12%, respectively. Larger reductions in RMS fit error were achieved for smaller corneal diameters and lower order fits. Conclusions: Object-centered 3D decompositions provide viable alternatives to traditional viewer-centered 2D Zernike polynomial expansion of a corneal surface. They achieve better fits to videokeratoscopic height data and could be particularly suited to the analysis of multiple corneal measurements, where there can be slight variations in the position of the cornea from one map acquisition to the next.
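The comparison the abstract performs, fitting height data with a basis-function expansion and scoring the fit by RMS error, reduces to a linear least-squares problem once the design matrix of basis values is built. A toy sketch with made-up data and a few low-order Zernike-style terms (none of the actual corneal data or full basis sets are used here):

```python
import numpy as np

def fit_basis(design, heights):
    """Least-squares fit of surface heights to a basis-function design
    matrix; returns the coefficients and the RMS fit error."""
    coeffs, *_ = np.linalg.lstsq(design, heights, rcond=None)
    rms = np.sqrt(np.mean((design @ coeffs - heights) ** 2))
    return coeffs, rms

# Toy "corneal" height map on a polar grid (not real videokeratoscope data):
# a defocus-like bowl plus a small astigmatic ripple plus noise.
rng = np.random.default_rng(1)
r = rng.uniform(0, 1, 400)
theta = rng.uniform(0, 2 * np.pi, 400)
z = 0.5 * r**2 + 0.1 * r**2 * np.cos(2 * theta) + 0.01 * rng.standard_normal(400)

# A few low-order Zernike-style terms (unnormalised): piston, defocus,
# and the two astigmatism components.
design = np.column_stack([
    np.ones_like(r),
    2 * r**2 - 1,              # defocus, Z_2^0 up to normalisation
    r**2 * np.cos(2 * theta),  # astigmatism (cosine component)
    r**2 * np.sin(2 * theta),  # astigmatism (sine component)
])
coeffs, rms = fit_basis(design, z)
```

Comparing basis sets (2D Zernike vs. spherical or hemispherical harmonics) amounts to swapping the columns of `design` and comparing the resulting `rms` values at a fixed number of coefficients, which is the quantity the 22%/15%/12% figures above summarise.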

Relevance:

80.00%

Publisher:

Abstract:

Plant biosecurity requires statistical tools to interpret field surveillance data in order to manage pest incursions that threaten crop production and trade. Ultimately, management decisions need to be based on the probability that an area is infested or free of a pest. Current informal approaches to delimiting pest extent rely upon expert ecological interpretation of presence/absence data over space and time. Hierarchical Bayesian models provide a cohesive statistical framework that can formally integrate the available information on both pest ecology and data. The overarching method involves constructing an observation model for the surveillance data, conditional on the hidden extent of the pest and uncertain detection sensitivity. The extent of the pest is then modelled as a dynamic invasion process that includes uncertainty in ecological parameters. Modelling approaches to assimilate this information are explored through case studies on spiralling whitefly, Aleurodicus dispersus and red banded mango caterpillar, Deanolis sublimbalis. Markov chain Monte Carlo simulation is used to estimate the probable extent of pests, given the observation and process model conditioned by surveillance data. Statistical methods, based on time-to-event models, are developed to apply hierarchical Bayesian models to early detection programs and to demonstrate area freedom from pests. The value of early detection surveillance programs is demonstrated through an application to interpret surveillance data for exotic plant pests with uncertain spread rates. The model suggests that typical early detection programs provide a moderate reduction in the probability of an area being infested but a dramatic reduction in the expected area of incursions at a given time. Estimates of spiralling whitefly extent are examined at local, district and state-wide scales.
The local model estimates the rate of natural spread and the influence of host architecture, host suitability and inspector efficiency. These parameter estimates can support the development of robust surveillance programs. Hierarchical Bayesian models for the human-mediated spread of spiralling whitefly are developed for the colonisation of discrete cells connected by a modified gravity model. By estimating dispersal parameters, the model can be used to predict the extent of the pest over time. An extended model predicts the climate restricted distribution of the pest in Queensland. These novel human-mediated movement models are well suited to demonstrating area freedom at coarse spatio-temporal scales. At finer scales, and in the presence of ecological complexity, exploratory models are developed to investigate the capacity for surveillance information to estimate the extent of red banded mango caterpillar. It is apparent that excessive uncertainty about observation and ecological parameters can impose limits on inference at the scales required for effective management of response programs. The thesis contributes novel statistical approaches to estimating the extent of pests and develops applications to assist decision-making across a range of plant biosecurity surveillance activities. Hierarchical Bayesian modelling is demonstrated as both a useful analytical tool for estimating pest extent and a natural investigative paradigm for developing and focussing biosecurity programs.
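The core calculation behind demonstrating area freedom, updating the probability that an area is infested after surveys with imperfect detection all return negative, can be written as a single Bayes update. This is only an illustrative static sketch; the thesis models are hierarchical and dynamic, with spread processes and uncertain sensitivity, none of which appears here.

```python
def posterior_infested(prior, sensitivity, n_negative_surveys):
    """Posterior probability that an area is infested after n independent
    surveys all returned negative, given detection sensitivity per survey.

    P(infested | all negative)
      = prior * (1 - Se)^n / (prior * (1 - Se)^n + (1 - prior))
    """
    miss = (1 - sensitivity) ** n_negative_surveys
    return prior * miss / (prior * miss + (1 - prior))
```

For example, with a prior of 0.5 and a per-survey sensitivity of 0.8, a single negative survey drops the posterior to 1/6, and each further negative survey multiplies the odds of infestation by 0.2. This is the sense in which the abstract's "moderate reduction in the probability of an area being infested" from early detection programs can be quantified.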