838 results for Mesh generation from image data


Relevance: 100.00%

Abstract:

The variability of the sea surface salinity (SSS) in the Indian Ocean is studied using a 100-year control simulation of the Community Climate System Model (CCSM 2.0). The monsoon-driven seasonal SSS pattern in the Indian Ocean, marked by low salinity in the east and high salinity in the west, is captured by the model. The model overestimates runoff into the Bay of Bengal because of higher rainfall over the Himalayan-Tibetan regions, which drain into the Bay of Bengal through the Ganga and Brahmaputra rivers. The outflow of low-salinity water from the Bay of Bengal is too strong in the model. Consequently, the model's Indian Ocean SSS is about 1 lower than that seen in the climatology. The seasonal Indian Ocean salt balance obtained from the model is consistent with the analysis from climatological data sets. During summer, the large freshwater input into the Bay of Bengal and its redistribution determine the spatial pattern of the salinity tendency. During winter, horizontal advection is the dominant contributor to the tendency term. The interannual variability of the SSS in the Indian Ocean is about five times larger than that in coupled model simulations of the North Atlantic Ocean. Regions of large interannual standard deviations are located near river mouths in the Bay of Bengal and in the eastern equatorial Indian Ocean. Both freshwater input into the ocean and advection of this anomalous flux are responsible for the generation of these anomalies. The model simulates 20 significant Indian Ocean Dipole (IOD) events, and during IOD years large salinity anomalies appear in the equatorial Indian Ocean. The anomalies exist as two zonal bands: negative salinity anomalies to the north of the equator and positive to the south. The SSS anomalies for years in which no IOD is present, and for ENSO years, are much weaker than during IOD years. Significant interannual SSS anomalies appear in the Indian Ocean only during IOD years.
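The salinity tendency and its contributors mentioned above are commonly diagnosed from a mixed-layer salt budget; a generic form (not necessarily the exact formulation used in this study) is:

```latex
\frac{\partial S}{\partial t} =
  \underbrace{-\,\mathbf{u}\cdot\nabla S}_{\text{horizontal advection}}
  \;+\; \underbrace{\frac{S\,(E - P - R)}{h}}_{\text{surface freshwater flux}}
  \;+\; \underbrace{\mathcal{M}}_{\text{vertical mixing and entrainment}}
```

where \(S\) is the mixed-layer salinity, \(\mathbf{u}\) the mixed-layer velocity, \(E\), \(P\) and \(R\) evaporation, precipitation and runoff, \(h\) the mixed-layer depth, and \(\mathcal{M}\) a residual mixing term. In this budget, a summer dominance of the freshwater-flux term in the Bay of Bengal and a winter dominance of the advection term correspond to the seasonal balance described in the abstract.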

Relevance: 100.00%

Abstract:

The number of genetic factors associated with common human traits and disease is increasing rapidly, and the general public is utilizing affordable, direct-to-consumer genetic tests. The results of these tests are often in the public domain. A combination of factors has increased the potential for the indirect estimation of an individual's risk for a particular trait. Here we explain the basic principles underlying risk estimation and test the feasibility of indirect risk estimation from genetic data by imputing Dr. James Watson's redacted apolipoprotein E gene (APOE) information. The principles underlying risk prediction from genetic data have been well known and applied for many decades; however, the recent increase in genomic knowledge, together with advances in mathematical and statistical techniques and computational power, makes it relatively easy to produce an accurate, albeit indirect, estimation of risk. There is a current hazard of indirect risk estimation that is relevant not only to the subject but also to individuals related to the subject; this risk will likely increase as more detailed genomic data and better computational tools become available.

Relevance: 100.00%

Abstract:

The use of remote sensing imagery as auxiliary data in forest inventory is based on the correlation between features extracted from the images and the ground truth. Bidirectional reflectance and radial displacement cause variation in image features located in different segments of the image even though the forest characteristics remain the same. This variation has so far been diminished with various radiometric corrections. In this study, the use of sun-azimuth-based converted image co-ordinates was examined to supplement auxiliary data extracted from digitised aerial photographs. The method was considered as an alternative to radiometric corrections. Additionally, the usefulness of multi-image interpretation of digitised aerial photographs in regression estimation of forest characteristics was studied. The state-owned study area was located in Leivonmäki, Central Finland, and the study material consisted of five digitised and ortho-rectified colour-infrared (CIR) aerial photographs and field measurements of 388 plots, of which 194 were relascope (Bitterlich) plots and 194 were concentric circular plots. Both the image data and the field measurements were from the year 1999. When examining the effect of the location of the image point on pixel values and texture features of Finnish forest plots in digitised CIR photographs, the clearest differences were found between the front- and back-lighted image halves. Within an image half, the differences between blocks were clearly larger on the front-lighted half than on the back-lighted half. The strength of the phenomenon varied by forest category. The differences between pixel values extracted from different image blocks were greatest in developed and mature stands and smallest in young stands. The differences between texture features were greatest in developing stands and smallest in young and mature stands.
The logarithm of timber volume per hectare and the angular transformation of the proportion of broadleaved trees of the total volume were used as dependent variables in regression models. Five different trend surfaces based on converted image co-ordinates were used in the models in order to diminish the effect of the bidirectional reflectance. The reference model of total volume, in which the location of the image point had been ignored, resulted in an RMSE of 1.268 calculated from the test material. The best of the trend surfaces was the complete third-order surface, which resulted in an RMSE of 1.107. The reference model of the proportion of broadleaved trees resulted in an RMSE of 0.4292, and the second-order trend surface was the best, resulting in an RMSE of 0.4270. The trend surface method is applicable, but it has to be applied by forest category and by variable. The usefulness of multi-image interpretation of digitised aerial photographs was studied by building comparable regression models using the front-lighted image features, the back-lighted image features, or both. The two-image model turned out to be slightly better than the one-image models in total volume estimation: the best one-image model resulted in an RMSE of 1.098 and the two-image model in an RMSE of 1.090. The homologous features did not improve the models of the proportion of broadleaved trees. The overall result gives motivation for further research on multi-image interpretation. The focus may be on improving regression estimation and feature selection, or on examination of the stratification used in two-phase sampling inventory techniques. Keywords: forest inventory, digitised aerial photograph, bidirectional reflectance, converted image co-ordinates, regression estimation, multi-image interpretation, pixel value, texture, trend surface
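To illustrate the trend-surface idea, the following sketch fits a complete third-order polynomial surface in (converted) image co-ordinates to a synthetic log-volume response. The data, coefficients and noise level are invented for illustration and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic plot data: converted image co-ordinates (x, y) and a
# log-volume response with a smooth brightness-trend component plus noise.
n = 200
x, y = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
log_volume = 5.0 + 0.8 * x - 0.5 * x * y + rng.normal(0, 0.1, n)

def trend_terms(x, y, order=3):
    """Complete polynomial trend-surface terms up to the given order."""
    cols = [np.ones_like(x)]
    for d in range(1, order + 1):
        for i in range(d + 1):
            cols.append(x ** (d - i) * y ** i)
    return np.column_stack(cols)

# Fit the complete third-order surface by least squares, as in the study's
# best-performing total-volume model (here on invented data).
X = trend_terms(x, y, order=3)
coef, *_ = np.linalg.lstsq(X, log_volume, rcond=None)
rmse = np.sqrt(np.mean((X @ coef - log_volume) ** 2))
```

In practice the trend-surface terms would be appended to the image-feature regressors so that the surface absorbs the bidirectional-reflectance gradient across the photograph.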

Relevance: 100.00%

Abstract:

The aim of this thesis is to develop a fully automatic lameness detection system that operates in a milking robot. The instrumentation, measurement software, algorithms for data analysis and a neural network model for lameness detection were developed. Automatic milking has become a common practice in dairy husbandry, and in 2006 about 4000 farms worldwide used over 6000 milking robots. There is a worldwide movement with the objective of fully automating every process from feeding to milking. The increase in automation is a consequence of increasing farm sizes, the demand for more efficient production and the growth of labour costs. As the level of automation increases, the time that the cattle keeper uses for monitoring animals often decreases. This has created a need for systems that automatically monitor the health of farm animals. The popularity of milking robots also offers a new and unique possibility to monitor animals in a single confined space up to four times daily. Lameness is a crucial welfare issue in the modern dairy industry. Limb disorders cause serious welfare, health and economic problems, especially in loose housing of cattle. Lameness causes losses in milk production and leads to early culling of animals. These costs could be reduced with early identification and treatment. At present, only a few methods for automatically detecting lameness have been developed, and the most common methods used for lameness detection and assessment are various visual locomotion scoring systems. The problem with locomotion scoring is that it requires experience to be conducted properly, it is labour-intensive as an on-farm method, and the results are subjective. A four-balance system for measuring the leg load distribution of dairy cows during milking, in order to detect lameness, was developed and set up at the University of Helsinki research farm in Suitia.
The leg weights of 73 cows were successfully recorded during almost 10,000 robotic milkings over a period of 5 months. The cows were locomotion scored weekly, and the lame cows were inspected clinically for hoof lesions. Unsuccessful measurements, caused by cows standing outside the balances, were removed from the data with a special algorithm, and the mean leg loads and the number of kicks during milking were calculated. In order to develop an expert system to automatically detect lameness cases, a model was needed. A probabilistic neural network (PNN) classifier model was chosen for the task. The data were divided into two parts, and 5,074 measurements from 37 cows were used to train the model. The model was evaluated for its ability to detect lameness in the validation dataset, which had 4,868 measurements from 36 cows. The model was able to classify 96% of the measurements correctly as sound or lame cows, and 100% of the lameness cases in the validation data were identified. The number of measurements causing false alarms was 1.1%. The developed model has the potential to be used for on-farm decision support and can be used in a real-time lameness monitoring system.
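A probabilistic neural network of the kind mentioned above is essentially a Parzen-window classifier: one Gaussian kernel per training pattern, summed per class, with the larger class density winning. The sketch below uses invented two-dimensional features and an invented smoothing parameter; the thesis's actual leg-load features and model settings differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for per-milking features (e.g. leg-load asymmetry
# and kick count); the real feature definitions in the thesis differ.
sound = rng.normal([0.0, 0.0], 0.5, size=(60, 2))
lame = rng.normal([2.0, 1.5], 0.5, size=(60, 2))
X_train = np.vstack([sound[:40], lame[:40]])
y_train = np.array([0] * 40 + [1] * 40)
X_test = np.vstack([sound[40:], lame[40:]])
y_test = np.array([0] * 20 + [1] * 20)

def pnn_classify(X_train, y_train, X_test, sigma=0.3):
    """PNN/Parzen classifier: sum one Gaussian kernel per training
    pattern within each class, then pick the class with larger density."""
    preds = []
    for x in X_test:
        d2 = np.sum((X_train - x) ** 2, axis=1)
        k = np.exp(-d2 / (2 * sigma ** 2))
        preds.append(int(k[y_train == 1].sum() > k[y_train == 0].sum()))
    return np.array(preds)

accuracy = np.mean(pnn_classify(X_train, y_train, X_test) == y_test)
```

One design property that makes PNNs attractive here is that training is instantaneous (the training set itself is the model), so the classifier can be refreshed as new milkings accumulate.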

Relevance: 100.00%

Abstract:

This research is a step forward in discovering knowledge from databases with complex structures such as trees or graphs. Several data mining algorithms are developed based on a novel representation called Balanced Optimal Search for extracting implicit, unknown and potentially useful information such as patterns, similarities and various relationships from tree data, which are also shown to be advantageous in analysing big data. This thesis focuses on analysing unordered tree data, which are robust to data inconsistency, irregularity and swift information changes and, hence, have become a popular and widely used data model in the era of big data.

Relevance: 100.00%

Abstract:

Background: Population pharmacokinetic models combined with multiple sets of age–concentration biomonitoring data facilitate back-calculation of chemical uptake rates from biomonitoring data. Objectives: We back-calculated uptake rates of PBDEs for the Australian population from multiple biomonitoring surveys (top-down) and compared them with uptake rates calculated from dietary intake estimates of PBDEs and PBDE concentrations in dust (bottom-up). Methods: Using three sets of PBDE elimination half-lives, we applied a population pharmacokinetic model to the PBDE biomonitoring data measured between 2002–2003 and 2010–2011 to derive the top-down uptake rates of four key PBDE congeners for six age groups. For the bottom-up approach, we used PBDE concentrations measured around 2005. Results: Top-down uptake rates of Σ4BDE (the sum of BDEs 47, 99, 100, and 153) varied from 7.9 to 19 ng/kg/day for toddlers and from 1.2 to 3.0 ng/kg/day for adults; in most cases they were, for all age groups, higher than the bottom-up uptake rates. The discrepancy was largest for toddlers, with factors of up to 7–15 depending on the congener. Despite the different elimination half-lives of the four congeners, the age–concentration trends showed no increase in concentration with age and were similar for all congeners. Conclusions: The bottom-up approach underestimates PBDE uptake; currently known pathways are not sufficient to explain measured PBDE concentrations, especially in young children. Although PBDE exposure of toddlers has declined in recent years, pre- and postnatal exposure to PBDEs has remained almost constant because the mothers' PBDE body burden has not yet decreased substantially.
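The top-down back-calculation rests on a simple one-compartment pharmacokinetic identity: at (quasi-)steady state the body concentration equals uptake divided by the first-order elimination rate constant, so uptake can be recovered from measured concentrations and an assumed half-life. A minimal sketch with an assumed half-life and illustrative concentrations (not the study's values):

```python
import numpy as np

# One-compartment model: dC/dt = u - k*C, so at steady state C = u/k,
# with k = ln(2) / t_half. Back-calculate u = C * ln(2) / t_half.
# Half-life and concentrations below are illustrative assumptions only.
t_half_years = 2.0                    # assumed elimination half-life (years)
k = np.log(2) / t_half_years          # first-order elimination rate (1/year)
C = np.array([10.0, 6.0])             # measured body burdens (ng/g lipid)
uptake = C * k                        # back-calculated uptake (ng/g lipid/year)
```

The full population model additionally tracks age-dependent body weight and lipid mass and fits all age groups jointly, which is why different assumed half-life sets yield the ranges of uptake rates quoted above.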

Relevance: 100.00%

Abstract:

The behaviour of laterally loaded piles is considerably influenced by the uncertainties in soil properties. Hence probabilistic models for assessment of allowable lateral load are necessary. Cone penetration test (CPT) data are often used to determine soil strength parameters, whereby the allowable lateral load of the pile is computed. In the present study, the maximum lateral displacement and moment of the pile are obtained based on the coefficient of subgrade reaction approach, considering the nonlinear soil behaviour in undrained clay. The coefficient of subgrade reaction is related to the undrained shear strength of soil, which can be obtained from CPT data. The soil medium is modelled as a one-dimensional random field along the depth, and it is described by the standard deviation and scale of fluctuation of the undrained shear strength of soil. Inherent soil variability, measurement uncertainty and transformation uncertainty are taken into consideration. The statistics of maximum lateral deflection and moment are obtained using the first-order, second-moment technique. Hasofer-Lind reliability indices for component and system failure criteria, based on the allowable lateral displacement and moment capacity of the pile section, are evaluated. The geotechnical database from the Konaseema site in India is used as a case example. It is shown that the reliability-based design approach for pile foundations, considering the spatial variability of soil, permits a rational choice of allowable lateral loads.
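The first-order second-moment (FOSM) step described above can be sketched for a deliberately simplified pile response: propagate the mean and standard deviation of the undrained shear strength through a linearized response function, then form a reliability index against an allowable displacement. The deflection formula, load values and allowable limit below are hypothetical stand-ins for the nonlinear subgrade-reaction model used in the study.

```python
import numpy as np

# Hypothetical toy response: pile-head deflection y = P / (k_h * L),
# with subgrade modulus k_h = alpha * s_u (s_u = undrained shear strength).
P, L, alpha = 50.0, 10.0, 200.0      # load, pile length, modulus factor (illustrative)
mu_su, cov_su = 40.0, 0.3            # mean s_u (kPa) and coefficient of variation
sigma_su = cov_su * mu_su

def deflection(su):
    return P / (alpha * su * L)

# FOSM: linearize the response about the mean of the random variable.
mu_y = deflection(mu_su)
h = 1e-4
dy_dsu = (deflection(mu_su + h) - deflection(mu_su - h)) / (2 * h)
sigma_y = abs(dy_dsu) * sigma_su

# Reliability index for the serviceability criterion y <= y_allow.
y_allow = 1e-3                       # hypothetical allowable deflection
beta = (y_allow - mu_y) / sigma_y
```

In the study the same idea is applied with a spatially variable s_u field (characterized by its standard deviation and scale of fluctuation), and Hasofer-Lind indices are computed for both displacement and moment-capacity criteria.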

Relevance: 100.00%

Abstract:

The problem of recovering information from measurement data has been studied for a long time. In the beginning the methods were mostly empirical, but towards the end of the 1960s Backus and Gilbert started the development of mathematical methods for the interpretation of geophysical data. The problem of recovering information about a physical phenomenon from measurement data is an inverse problem. Throughout this work, the statistical inversion method is used to obtain a solution. Assuming that the measurement vector is a realization of fractional Brownian motion, the goal is to retrieve the amplitude and the Hurst parameter. We prove that under some conditions, the solution of the discretized problem coincides with the solution of the corresponding continuous problem as the number of observations tends to infinity. The measurement data are usually noisy, and we assume the data to be the sum of two vectors: the trend and the noise. Both vectors are supposed to be realizations of fractional Brownian motions, and the goal is to retrieve their parameters using the statistical inversion method. We prove partial uniqueness of the solution. Moreover, with the support of numerical simulations, we show that in certain cases the solution is reliable and the reconstruction of the trend vector is quite accurate.
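The forward model above, a measurement vector realized from fractional Brownian motion with unknown Hurst parameter, can be illustrated numerically. The sketch draws an exact fBm sample via Cholesky factorization of the fBm covariance and recovers H from the variance scaling of increments; this simple scaling estimator is only a stand-in for the statistical inversion method of the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

def fbm_sample(n, H):
    """Exact fBm sample on t = 1/n, ..., 1 via Cholesky of the covariance
    cov(B_s, B_t) = 0.5 * (s^2H + t^2H - |t - s|^2H)."""
    t = np.arange(1, n + 1) / n
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))  # jitter for stability
    return L @ rng.normal(size=n)

def hurst_estimate(x):
    """Estimate H from Var(B_{t+k} - B_t) ~ k^(2H): fit the log-log slope
    of increment variance against lag, then halve it."""
    lags = np.arange(1, 21)
    v = [np.var(x[k:] - x[:-k]) for k in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(v), 1)
    return slope / 2

H_hat = hurst_estimate(fbm_sample(1000, H=0.7))
```

With noisy data (trend plus noise, both fBm), a single scaling fit no longer separates the two components, which is where the statistical inversion machinery of the thesis comes in.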

Relevance: 100.00%

Abstract:

Metabolism is the cellular subsystem responsible for generation of energy from nutrients and production of building blocks for larger macromolecules. Computational and statistical modeling of metabolism is vital to many disciplines including bioengineering, the study of diseases, drug target identification, and understanding the evolution of metabolism. In this thesis, we propose efficient computational methods for metabolic modeling. The techniques presented are targeted particularly at the analysis of large metabolic models encompassing the whole metabolism of one or several organisms. We concentrate on three major themes of metabolic modeling: metabolic pathway analysis, metabolic reconstruction and the study of the evolution of metabolism. In the first part of this thesis, we study metabolic pathway analysis. We propose a novel modeling framework called gapless modeling to study biochemically viable metabolic networks and pathways. In addition, we investigate the utilization of atom-level information on metabolism to improve the quality of pathway analyses. We describe efficient algorithms for discovering both gapless and atom-level metabolic pathways, and conduct experiments with large-scale metabolic networks. The presented gapless approach offers a compromise in terms of complexity and feasibility between the previous graph-theoretic and stoichiometric approaches to metabolic modeling. Gapless pathway analysis shows that microbial metabolic networks are not as robust to random damage as suggested by previous studies. Furthermore, the amino acid biosynthesis pathways of the fungal species Trichoderma reesei discovered from atom-level data are shown to closely correspond to those of Saccharomyces cerevisiae. In the second part, we propose computational methods for metabolic reconstruction in the gapless modeling framework. We study the task of reconstructing a metabolic network that does not suffer from connectivity problems. Such problems often limit the usability of reconstructed models, and typically require a significant amount of manual postprocessing. We formulate gapless metabolic reconstruction as an optimization problem and propose an efficient divide-and-conquer strategy to solve it with real-world instances. We also describe computational techniques for solving problems stemming from ambiguities in metabolite naming. These techniques have been implemented in a web-based software, ReMatch, intended for the reconstruction of models for 13C metabolic flux analysis. In the third part, we extend our scope from single to multiple metabolic networks and propose an algorithm for inferring gapless metabolic networks of ancestral species from phylogenetic data. Experimenting with 16 fungal species, we show that the method is able to generate results that are easily interpretable and that provide hypotheses about the evolution of metabolism.
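The core "gapless" requirement — every reaction in the model must have all of its substrates producible from the available nutrients — can be illustrated with a toy forward-closure computation. The thesis formulates reconstruction as an optimization problem; the sketch below only shows the feasibility notion, with invented reactions.

```python
# Toy reaction set: name -> (substrates, products). Reaction r3 is "gapped"
# because its substrate D can never be produced from the seed nutrients.
reactions = {
    "r1": ({"A"}, {"B"}),
    "r2": ({"B"}, {"C"}),
    "r3": ({"D"}, {"E"}),
}
seeds = {"A"}  # nutrients assumed available to the cell

def gapless_subset(reactions, seeds):
    """Keep only reactions all of whose substrates are reachable from the
    seeds, iterating to a fixed point (forward closure)."""
    producible = set(seeds)
    kept = set()
    changed = True
    while changed:
        changed = False
        for name, (subs, prods) in reactions.items():
            if name not in kept and subs <= producible:
                kept.add(name)
                producible |= prods
                changed = True
    return kept

kept = gapless_subset(reactions, seeds)
```

A reconstruction that maximizes agreement with genomic evidence subject to this closure constraint is what avoids the manual gap-filling that plagues naive reconstructions.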

Relevance: 100.00%

Abstract:

What can the statistical structure of natural images teach us about the human brain? Even though the visual cortex is one of the most studied parts of the brain, surprisingly little is known about how exactly images are processed to leave us with a coherent percept of the world around us, so we can recognize a friend or drive on a crowded street without any effort. By constructing probabilistic models of natural images, the goal of this thesis is to understand the structure of the stimulus that is the raison d'être for the visual system. Following the hypothesis that the optimal processing has to be matched to the structure of that stimulus, we attempt to derive computational principles, features that the visual system should compute, and properties that cells in the visual system should have. Starting from machine learning techniques such as principal component analysis and independent component analysis, we construct a variety of statistical models to discover structure in natural images that can be linked to receptive field properties of neurons in primary visual cortex, such as simple and complex cells. We show that by representing images with phase-invariant, complex cell-like units, a better statistical description of the visual environment is obtained than with linear simple cell units, and that complex cell pooling can be learned by estimating both layers of a two-layer model of natural images. We investigate how a simplified model of the processing in the retina, where adaptation and contrast normalization take place, is connected to the natural stimulus statistics. Analyzing the effect that retinal gain control has on later cortical processing, we propose a novel method to perform gain control in a data-driven way. Finally, we show how models like those presented here can be extended to capture whole visual scenes rather than just small image patches. By using a Markov random field approach we can model images of arbitrary size, while still being able to estimate the model parameters from the data.
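The starting point mentioned above, principal component analysis of natural image patches, can be sketched in a few lines. The synthetic 1/f image below is a crude stand-in for natural images; with the patch mean (DC) removed, the leading eigenvectors of the patch covariance are low-frequency, grating-like components, and most of the variance concentrates in a few of them.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "natural image": random phases with a 1/f amplitude spectrum,
# mimicking the power-law statistics of natural scenes.
n = 128
spec = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
fx = np.fft.fftfreq(n)[:, None]
fy = np.fft.fftfreq(n)[None, :]
f = np.sqrt(fx ** 2 + fy ** 2)
f[0, 0] = 1.0                       # avoid division by zero at DC
image = np.real(np.fft.ifft2(spec / f))

# Cut overlapping 8x8 patches and remove each patch's mean (DC component).
patches = np.array([image[i:i + 8, j:j + 8].ravel()
                    for i in range(0, n - 8, 4) for j in range(0, n - 8, 4)])
patches -= patches.mean(axis=1, keepdims=True)

# PCA: eigen-decomposition of the patch covariance, sorted descending.
cov = patches.T @ patches / len(patches)
eigvals, eigvecs = np.linalg.eigh(cov)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
explained = eigvals[:8].sum() / eigvals.sum()
```

PCA captures only second-order (covariance) structure; moving to independent component analysis, as the thesis does, is what yields localized, oriented, simple-cell-like filters rather than global gratings.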

Relevance: 100.00%

Abstract:

Unambiguous synthesis of 2-methyl-3-isopropenylanisole (Image) and 2-isopropenyl-3-methylanisole (Image) has led to revision, from (Image) to (Image), of the structure assigned to a monoterpene phenol ether isolated from

Relevance: 100.00%

Abstract:

It is common to model the dynamics of fisheries using natural and fishing mortality rates estimated independently in two separate analyses. Fishing mortality is routinely estimated from widely available logbook data, whereas natural mortality estimation has often required more specific, less frequently available data. However, in the case of the fishery for brown tiger prawn (Penaeus esculentus) in Moreton Bay, both fishing and natural mortality rates have been estimated from logbook data. The present work extended the fishing mortality model to incorporate an eco-physiological response of tiger prawns to temperature, and allowed recruitment timing to vary from year to year. These ecological characteristics of the dynamics of this fishery were ignored in the separate model that estimated natural mortality. Therefore, we propose to estimate both natural and fishing mortality rates within a single model using a consistent set of hypotheses. This approach was applied to Moreton Bay brown tiger prawn data collected between 1990 and 2010. Natural mortality was estimated by maximum likelihood to be 0.032 ± 0.002 week⁻¹, approximately 30% lower than the fixed value used in previous models of this fishery (0.045 week⁻¹).
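The maximum-likelihood estimation of natural mortality can be sketched with a toy weekly population model. The dynamics, observation noise and known weekly fishing mortalities below are invented; the actual model in the paper additionally includes a temperature response and year-to-year variation in recruitment timing.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy weekly dynamics: N_{t+1} = N_t * exp(-(M + F_t)), with catches given
# by the Baranov equation C_t = N_t * (F_t/Z_t) * (1 - exp(-Z_t)).
M_true = 0.032
weeks = 40
F = rng.uniform(0.02, 0.08, weeks)        # assumed-known weekly fishing mortality
N = np.empty(weeks)
N[0] = 1e6                                # assumed-known initial abundance
for t in range(weeks - 1):
    N[t + 1] = N[t] * np.exp(-(M_true + F[t]))
Z = M_true + F
obs = N * (F / Z) * (1 - np.exp(-Z)) * rng.lognormal(0, 0.05, weeks)

def neg_log_lik(M):
    """Lognormal observation-error likelihood (up to constants)."""
    Z = M + F
    Nh = np.empty(weeks)
    Nh[0] = 1e6
    for t in range(weeks - 1):
        Nh[t + 1] = Nh[t] * np.exp(-Z[t])
    pred = Nh * (F / Z) * (1 - np.exp(-Z))
    return 0.5 * np.sum((np.log(obs) - np.log(pred)) ** 2)

grid = np.linspace(0.001, 0.1, 500)
M_hat = grid[np.argmin([neg_log_lik(m) for m in grid])]
```

Estimating M and F within one model, as the abstract proposes, avoids the inconsistency of fixing M from one set of hypotheses while estimating F under another.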

Relevance: 100.00%

Abstract:

Two methionyl-transfer RNA synthetases (A and B forms) have been isolated from Image. The homogeneous preparations of the enzymes showed a 1500-fold increase in specific activity in the aminoacylation of methionine-specific tRNA. The A and B forms differed in their specificity of aminoacylation of tRNAmMet and tRNAfMet; enzyme B exhibited much higher specificity for tRNAfMet. The molecular activities of the A and B enzymes for amino acid and tRNA were identical. The turnover number for amino acid was 27-fold greater than that for tRNA, while the Km values for tRNA were lower by a factor of 10⁶ compared with the amino acid. Both enzymes catalysed the ATP-pyrophosphate exchange reaction to the same extent.

Relevance: 100.00%

Abstract:

Climate change will influence the living conditions of all life on Earth. For some species the change in the environmental conditions that has occurred so far has already increased the risk of extinction, and the extinction risk is predicted to increase for large numbers of species in the future. Some species may have time to adapt to the changing environmental conditions, but the rate and magnitude of the change are too great to allow many species to survive via evolutionary changes. Species responses to climate change have been documented for some decades. Some groups of species, like many insects, respond readily to changes in temperature conditions and have shifted their distributions northwards to new climatically suitable regions. Such range shifts have been well documented especially in temperate zones. In this context, butterflies have been studied more than any other group of species, partly because their past geographical ranges are well documented, which facilitates species-climate modelling and other analyses. The aim of the modelling studies is to examine to what extent shifts in species distributions can be explained by climatic and other factors. Models can also be used to predict the future distributions of species. In this thesis, I have studied the response to climate change of one species of butterfly within one geographically restricted area. The study species, the European map butterfly (Araschnia levana), has expanded rapidly northwards in Finland during the last two decades. I used statistical and dynamic modelling approaches in combination with field studies to analyse the effects of climate warming and landscape structure on the expansion. I studied the possible role of molecular variation in phosphoglucose isomerase (PGI), a glycolytic enzyme affecting flight metabolism and thereby flight performance, in the observed expansion of the map butterfly at two separate expansion fronts in Finland.
The expansion rate of the map butterfly was shown to be correlated with the frequency of warmer-than-average summers during the study period. This result is in line with the greater probability of occurrence of the second generation during warm summers and with previous results on this species showing greater mobility of second- than first-generation individuals. The results of a field study in this thesis indicated low mobility of the first-generation butterflies. Climatic variables alone were not sufficient to explain the observed expansion in Finland. There are also problems in transferring the climate model to regions other than those from which the data used to construct it were available. The climate model predicted a wider distribution in the south-western part of Finland than has been observed. Dynamic modelling of the expansion in response to landscape structure suggested that habitat and landscape structure influence the rate of expansion. In southern Finland the landscape structure may have slowed down the expansion rate. The results on PGI suggested that allelic variation in this enzyme may influence flight performance and thereby the rate of expansion. Genetic differences between the populations at the two expansion fronts may explain, at least partly, the observed differences in the rate of expansion. Individuals with the genotype associated with a high flight metabolic rate were most frequent in eastern Finland, where the rate of range expansion has been highest.

Relevance: 100.00%

Abstract:

Flood extent mapping is a basic tool for flood damage assessment, and it can be done with digital classification techniques using satellite imagery, including data recorded by radar and optical sensors. However, converting the data into the information we need is not a straightforward task. One of the great challenges in the data interpretation is to separate permanent water bodies from flooded regions, including both fully inundated areas and wet areas where trees and houses are partly covered with water. This paper adopts a decision fusion technique to combine the mapping results from radar data with NDVI data derived from optical data. An improved capacity to identify permanent or semi-permanent water bodies among flood-inundated areas was achieved. The software tools Multispec and Matlab were used.
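A minimal sketch of the decision-fusion idea: classify water separately from (hypothetical) radar backscatter and from NDVI computed from optical red/near-infrared bands, then combine the two per-pixel decisions. The threshold values and the tiny 2x3 "scene" below are illustrative only and are not taken from the paper.

```python
import numpy as np

# Illustrative per-pixel reflectances and radar backscatter (dB) for a 2x3 scene.
red = np.array([[0.30, 0.05, 0.04], [0.25, 0.06, 0.28]])
nir = np.array([[0.60, 0.06, 0.05], [0.55, 0.08, 0.58]])
backscatter_db = np.array([[-8.0, -18.0, -17.0], [-9.0, -16.5, -15.5]])

# NDVI from the optical bands (small epsilon avoids division by zero).
ndvi = (nir - red) / (nir + red + 1e-9)

radar_water = backscatter_db < -15.0   # smooth open water scatters little
optical_water = ndvi < 0.2             # water and bare wet surfaces have low NDVI

# Fusion rule (illustrative): open water where both sensors agree; "wet"
# (partly submerged vegetation or buildings) where radar indicates water
# but the optical NDVI still sees vegetation.
open_water = radar_water & optical_water
wet = radar_water & ~optical_water
```

Real decision fusion would also weight each classifier by its estimated reliability, but the two-mask combination above already shows how the radar and optical decisions disambiguate fully inundated versus partly covered areas.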