Abstract:
Single-trial analysis of human electroencephalography (EEG) has recently been proposed both for better understanding the contribution of individual subjects to a group-analysis effect and for investigating single-subject mechanisms. Independent Component Analysis (ICA) has been repeatedly applied to concatenated single-trial responses at the single-subject level in order to extract those components that resemble activities of interest. More recently, we have proposed a single-trial method based on topographic maps that determines which voltage configurations are reliably observed at the event-related potential (ERP) level, taking advantage of repetitions across trials. Here, we investigated the correspondence between the maps obtained by ICA and the topographies obtained by the single-trial clustering algorithm that best explained the variance of the ERP. To do this, we used sample data provided on the EEGLAB website, based on a dataset from a visual target detection task. We show that there is robust correspondence both at the level of the activation time courses and at the level of the voltage configurations of a subset of relevant maps. We additionally show the estimated inverse solution (based on low-resolution electromagnetic tomography) of two corresponding maps occurring at approximately 300 ms post-stimulus onset, as estimated by the two aforementioned approaches. The spatial distributions of the estimated sources were significantly correlated and had in common a right parietal activation within Brodmann's Area (BA) 40. Despite their different theoretical bases, the consistency between the results of these two approaches shows that their underlying assumptions are indeed compatible.
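For illustration, a minimal sketch of ICA applied to concatenated single-trial epochs, using scikit-learn's FastICA on placeholder data (the study itself used EEGLAB data and its ICA implementation; the array names, sizes and component count here are assumptions):

```python
# Minimal sketch: ICA decomposition of concatenated single-trial EEG.
# `epochs` is placeholder data of shape (n_trials, n_channels, n_samples);
# scikit-learn's FastICA stands in for the EEGLAB pipeline.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 80, 32, 256
epochs = rng.standard_normal((n_trials, n_channels, n_samples))  # placeholder

# Concatenate trials along time so ICA sees one long multichannel record.
X = epochs.transpose(1, 0, 2).reshape(n_channels, -1).T  # (time, channels)

ica = FastICA(n_components=15, random_state=0)
sources = ica.fit_transform(X)      # activation time courses, (time, components)
topographies = ica.mixing_          # per-component scalp maps, (channels, components)

# Reshape activations back to (trials, components, samples) for single-trial analysis.
acts = sources.T.reshape(15, n_trials, n_samples).transpose(1, 0, 2)
```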
Abstract:
We present a novel numerical algorithm for the simulation of seismic wave propagation in porous media, which is particularly suitable for the accurate modelling of surface wave-type phenomena. The differential equations of motion are based on Biot's theory of poro-elasticity and solved with a pseudospectral approach using Fourier and Chebyshev methods to compute the spatial derivatives along the horizontal and vertical directions, respectively. The time solver is a splitting algorithm that accounts for the stiffness of the differential equations. Due to the Chebyshev operator, the grid spacing in the vertical direction is non-uniform and characterized by a denser spatial sampling in the vicinity of interfaces, which allows for a numerically stable and accurate evaluation of higher-order surface wave modes. We stretch the grid in the vertical direction to increase the minimum grid spacing and reduce the computational cost. The free-surface boundary conditions are implemented with a characteristics approach, where the characteristic variables are evaluated at zero viscosity. The same procedure is used to model seismic wave propagation at the interface between a fluid and a porous medium. In this case, each medium is represented by a different grid and the two grids are combined through a domain-decomposition method. This wavefield decomposition method accounts for the discontinuity of variables and is crucial for an accurate interface treatment. We simulate seismic wave propagation with open-pore and sealed-pore boundary conditions and verify the validity and accuracy of the algorithm by comparing the numerical simulations to analytical solutions based on zero viscosity obtained with the Cagniard-de Hoop method. Finally, we illustrate the suitability of our algorithm for more complex models of porous media involving viscous pore fluids and strongly heterogeneous distributions of the elastic and hydraulic material properties.
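A minimal sketch of the two spatial-derivative operators named above (an FFT-based Fourier derivative for the periodic horizontal axis and a Chebyshev differentiation matrix for the vertical axis); Biot's equations, the splitting time solver and the boundary treatment are not shown:

```python
# Spectral derivative machinery only -- not the full poroelastic solver.
import numpy as np

def fourier_derivative(f, L):
    """d/dx of a periodic sample f on a uniform grid of total length L."""
    n = f.size
    k = 2j * np.pi * np.fft.fftfreq(n, d=L / n)
    return np.real(np.fft.ifft(k * np.fft.fft(f)))

def chebyshev_matrix(n):
    """(n+1)x(n+1) differentiation matrix on Chebyshev points x_j = cos(pi*j/n).
    The points cluster near the endpoints -- the denser sampling near
    interfaces mentioned in the abstract."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

# Quick check against an analytic derivative: d/dx sin(x) = cos(x).
D, x = chebyshev_matrix(16)
assert np.allclose(D @ np.sin(x), np.cos(x), atol=1e-8)
```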
Abstract:
The objective of this work was to verify the existence of a lethal locus in a eucalyptus hybrid population, and to quantify the segregation distortion in linkage group 3 of the Eucalyptus genome. An E. grandis x E. urophylla hybrid population, which segregates for rust resistance, was genotyped with 19 microsatellite markers belonging to linkage group 3 of the Eucalyptus genome. To quantify the segregation distortion, maximum likelihood (ML) models specific to outbreeding populations were used. These models take the observed marker genotypes and the lethal locus viability as parameters. The ML solutions were obtained using the expectation-maximization algorithm. A lethal locus in linkage group 3 was verified and mapped, with high confidence, between the microsatellites EMBRA 189 and EMBRA 122. This lethal locus causes intense gametic selection on the male side. Its map position is 25 cM from the locus which controls rust resistance in this population.
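A toy EM sketch in the spirit of the ML models described above (not the authors' outbreeding-specific formulation): estimating a male-side gametic viability v for an Aa x Aa cross in which AA and Aa offspring share a dominant phenotype, so the AA/Aa split is the latent variable.

```python
# Genotype probabilities under male gametic selection with viability v:
# P(AA) = 1/(2(1+v)), P(Aa) = 1/2, P(aa) = v/(2(1+v)).
def em_viability(n_dominant, n_recessive, n_iter=50):
    v = 1.0                                   # start from "no selection"
    for _ in range(n_iter):
        p_AA = 1.0 / (2.0 * (1.0 + v))
        p_Aa = 0.5
        # E-step: expected number of AA among the dominant-phenotype plants.
        e_AA = n_dominant * p_AA / (p_AA + p_Aa)
        # M-step: with complete counts the MLE is v = n_aa / n_AA.
        v = n_recessive / e_AA
    return v

# 2000 offspring, 400 recessive: EM converges to 2*400/(1600-400) = 2/3.
print(em_viability(1600, 400))
```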
Abstract:
One of the most important issues in molecular biology is to understand the regulatory mechanisms that control gene expression. Gene expression is often regulated by proteins called transcription factors, which bind to short (5 to 20 base pairs), degenerate segments of DNA. Experimental efforts towards understanding the sequence specificity of transcription factors are laborious and expensive, but can be substantially accelerated with the use of computational predictions. This thesis describes the use of algorithms and resources for transcription factor binding site analysis in quantitative modelling, where probabilistic models are built to represent the binding properties of a transcription factor and can be used to find new functional binding sites in genomes. Initially, an open-access database (HTPSELEX) was created, holding high-quality binding sequences for two eukaryotic families of transcription factors, namely CTF/NF1 and LEF1/TCF. The binding sequences were elucidated using a recently described experimental procedure called HTP-SELEX, which allows the generation of a large number (> 1000) of binding sites using mass sequencing technology. For each HTP-SELEX experiment we also provide accurate primary experimental information about the protein material used, details of the wet-lab protocol, an archive of sequencing trace files, and assembled clone sequences of the binding sequences. The database also offers reasonably large SELEX libraries obtained with conventional low-throughput protocols. The database is available at http://wwwisrec.isb-sib.ch/htpselex/ and ftp://ftp.isrec.isb-sib.ch/pub/databases/htpselex. The Expectation-Maximisation (EM) algorithm is one of the most frequently used methods to estimate probabilistic models representing the sequence specificity of transcription factors. We present computer simulations that estimate the precision of EM-estimated models as a function of data set parameters (length of the initial sequences, number of initial sequences, and percentage of non-binding sequences). We observed a remarkable robustness of the EM algorithm with regard to the length of the training sequences and the degree of contamination. The HTPSELEX database and the benchmark results of the EM algorithm formed part of the foundation for the subsequent project, in which a statistical framework based on hidden Markov models was developed to represent the sequence specificity of the transcription factors CTF/NF1 and LEF1/TCF using the HTP-SELEX experiment data. The hidden Markov model framework is capable of both predicting and classifying CTF/NF1 and LEF1/TCF binding sites. A covariance analysis of the binding sites revealed non-independent base preferences at different nucleotide positions, providing insight into the binding mechanism. We then tested the LEF1/TCF model by computing binding scores for a set of LEF1/TCF binding sequences for which relative affinities had been determined experimentally, using non-linear regression. The predicted and experimentally determined binding affinities correlated well.
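As a minimal illustration of such probabilistic binding models, the sketch below builds a position weight matrix (PWM) from a handful of invented aligned sites and scores candidate sequences; the thesis' EM-estimated models and HMMs are considerably richer.

```python
# Build a log-odds PWM with pseudocounts against a uniform background,
# then score candidate sequences.  The aligned sites are made up.
import math

SITES = ["TTGGC", "TTGGA", "TTGGC", "CTGGC", "TTGAC"]
BASES = "ACGT"

def build_pwm(sites, pseudo=0.5):
    pwm = []
    for i in range(len(sites[0])):
        counts = {b: pseudo for b in BASES}
        for s in sites:
            counts[s[i]] += 1
        total = sum(counts.values())
        # log-odds against a uniform 0.25 background
        pwm.append({b: math.log((counts[b] / total) / 0.25) for b in BASES})
    return pwm

def score(pwm, seq):
    return sum(col[base] for col, base in zip(pwm, seq))

pwm = build_pwm(SITES)
print(score(pwm, "TTGGC"))   # high score: matches the consensus
print(score(pwm, "AAAAA"))   # low score: unrelated sequence
```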
Abstract:
Identifiability of the so-called ω-slice algorithm is proven for ARMA linear systems. Although proofs were developed in the past for the simpler cases of MA and AR models, they were not extendible to general exponential linear systems. The results presented in this paper demonstrate unique features of the ω-slice method: unbiasedness and consistency when the order is overdetermined, regardless of the IIR or FIR nature of the underlying system, together with numerical robustness.
Abstract:
Localization, the ability of a mobile robot to estimate its position within its environment, is a key capability for the autonomous operation of any mobile robot. This thesis presents a system for indoor coarse and global localization of a mobile robot based on visual information. The system is based on image matching and uses SIFT features as natural landmarks. Features extracted from training images are stored in a database for later use in localization. During localization, an image of the scene is captured using the robot's on-board camera, features are extracted from it, and the database is searched for the best match. Feature matching is done using the k-d tree algorithm. Experimental results showed that localization accuracy increases with the number of features stored in the training database, while, on the other hand, an increasing number of features tended to have a negative impact on computational time. For some parts of the environment the error rate was relatively high, due to a strong correlation between features taken from those places and features elsewhere in the environment.
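A minimal sketch of this matching pipeline with OpenCV (SIFT descriptors matched through a FLANN k-d tree index); the image file names and the ratio-test threshold are placeholders:

```python
# SIFT features from a query image matched against stored training
# descriptors with a k-d tree.  cv2.SIFT_create is available in recent
# OpenCV builds; file names are placeholders.
import cv2

sift = cv2.SIFT_create()

train_img = cv2.imread("training_view.png", cv2.IMREAD_GRAYSCALE)
query_img = cv2.imread("robot_camera.png", cv2.IMREAD_GRAYSCALE)
_, train_desc = sift.detectAndCompute(train_img, None)
_, query_desc = sift.detectAndCompute(query_img, None)

# FLANN matcher backed by a k-d tree index (algorithm 1 = KDTREE).
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
matches = flann.knnMatch(query_desc, train_desc, k=2)

# Lowe's ratio test keeps only distinctive matches.
good = [m for m, n in matches if m.distance < 0.7 * n.distance]
print(f"{len(good)} good matches")  # more matches -> more likely this view
```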
Abstract:
The main objective of this study was to perform a statistical analysis of ecological types from optical satellite data, using Tipping's sparse Bayesian algorithm. This thesis uses the Relevance Vector Machine (RVM) algorithm for ecological classification between forestland and wetland. This bi-classification technique was then used to classify several other tree species, producing a hierarchical classification of all subclasses of a given target class. We also attempted to use an airborne image of the same forest area: combining it with image analysis and various image processing operations, we extracted useful features that were later used to classify forestland and wetland.
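A sketch of the hierarchical bi-classification idea, with scikit-learn's LogisticRegression standing in for the RVM (scikit-learn ships no RVM implementation) and randomly generated placeholder features and labels:

```python
# Split forestland from wetland first, then split forestland further by
# species, reusing the same binary classifier at each node of the hierarchy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 8))                 # per-pixel spectral features
is_forest = rng.integers(0, 2, 300).astype(bool)  # level-1 labels
species = rng.integers(0, 2, 300)                 # level-2 labels (forest only)

# Level 1: forestland vs. wetland.
top = LogisticRegression().fit(X, is_forest)

# Level 2: e.g. pine vs. spruce, trained only on forest pixels.
sub = LogisticRegression().fit(X[is_forest], species[is_forest])

def classify(x):
    x = x.reshape(1, -1)
    if not top.predict(x)[0]:
        return "wetland"
    return ["pine", "spruce"][sub.predict(x)[0]]

print(classify(X[0]))
```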
Abstract:
The purpose of this Master's thesis is to investigate what is required for the automatic recognition of news similarity. The news items are text-based news retrieved from different news sources. The aim is to identify, firstly, the news items that report the same story, and secondly, those that are not quite the same story but are nevertheless related to each other. This thesis investigates which algorithms accomplish this recognition most effectively in both Finnish and English text. Existing algorithms are compared. The goal is to select a combination of algorithms such that 90% of the compared news items are recognized correctly. The study uses two different clustering algorithms and three different stemming algorithms. These algorithms are compared in terms of both recognition accuracy and computational performance. Porter's algorithm proved to be the best stemming algorithm for both Finnish- and English-language news. Of the clustering algorithms, the simpler one, based on various summary statistics, proved more efficient.
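A minimal sketch of the winning combination (Porter stemming followed by a similarity comparison), using NLTK's PorterStemmer and scikit-learn's TF-IDF on invented headlines; the example texts are illustrative assumptions:

```python
# Porter-stem the texts, then compare them with TF-IDF cosine similarity.
from nltk.stem.porter import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

stemmer = PorterStemmer()

def stem_text(text):
    return " ".join(stemmer.stem(w) for w in text.lower().split())

news = [
    "Parliament approves the new budget proposal",
    "New budget proposal approved by parliament",
    "Ice hockey team wins the championship final",
]
tfidf = TfidfVectorizer().fit_transform([stem_text(n) for n in news])
sim = cosine_similarity(tfidf)

# Stories 0 and 1 should score high (same event), story 2 low.
print(sim[0, 1], sim[0, 2])
```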
Abstract:
Over 70% of the total costs of an end product are consequences of decisions made during the design process. A search for optimal cross-sections will often have only a marginal effect on the amount of material used if the geometry of a structure is fixed and the cross-sectional characteristics of its elements are properly designed by conventional methods. In recent years, optimal geometry has become a central area of research in the automated design of structures. It is generally accepted that no single optimisation algorithm is suitable for all engineering design problems; an appropriate algorithm must therefore be selected individually for each optimisation situation. Modelling is the most time-consuming phase in the optimisation of steel and metal structures. In this research, the goal was to develop a method and computer program which reduce the modelling and optimisation time for structural design. The program needed an optimisation algorithm suitable for various engineering design problems. Because finite element modelling is commonly used in the design of steel and metal structures, the interaction between a finite element tool and the optimisation tool needed a practical solution. The developed method and computer programs were tested with standard optimisation tests and practical design optimisation cases. Three generations of computer programs were developed; they combine an optimisation problem modelling tool and an FE modelling program using three alternative methods. The modelling and optimisation were demonstrated in the design of a new boom construction and in the steel structures of flat and ridge roofs. This thesis demonstrates that the time-consuming modelling phase is significantly reduced, modelling errors are reduced, and the results are more reliable. A new selection rule for the evolution algorithm, which eliminates the need for constraint weight factors, is tested with optimisation cases of steel structures involving hundreds of constraints. It is seen that the tested algorithm can be used nearly as a black box, without parameter settings or penalty factors for the constraints.
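As an illustration of a selection rule that needs no constraint weight or penalty factors, the sketch below implements Deb's feasibility rule, a standard technique with exactly that property (not necessarily the thesis' own rule):

```python
# Deb's feasibility rule: feasible beats infeasible; feasible ties break on
# objective; infeasible ties break on total constraint violation.
def violation(g_values):
    """Total violation of constraints written as g_i(x) <= 0."""
    return sum(max(0.0, g) for g in g_values)

def better(a, b):
    """Return the preferred of two candidates.  Each candidate is a dict
    with keys 'f' (objective, minimized) and 'g' (constraint values,
    feasible when all <= 0)."""
    va, vb = violation(a["g"]), violation(b["g"])
    if va == 0.0 and vb == 0.0:
        return a if a["f"] <= b["f"] else b      # both feasible: best objective
    if va == 0.0 or vb == 0.0:
        return a if va == 0.0 else b             # feasible wins
    return a if va <= vb else b                  # both infeasible: least violation

# Example: the feasible design wins although its objective is worse.
x1 = {"f": 10.0, "g": [-0.1, -2.0]}   # feasible
x2 = {"f": 4.0,  "g": [0.3, -1.0]}    # violates the first constraint
print(better(x1, x2)["f"])            # -> 10.0
```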
Abstract:
The objective of this work was to introduce the emerging non-contact spray coating process and compare it to existing coating techniques. Particular emphasis was given to the details of the spraying process of paper coating colour and to the base paper requirements set by the new coating method. Spraying technology itself is nothing new, but the atomisation process of paper coating colour is quite unknown to the paper industry, and the differences between the rheology of paints and coating colours make it very difficult to utilise existing information from spray painting research. Based on the trials, some basic conclusions can be drawn. The results of this study suggest that the Brookfield viscosity of a spray coating colour should be as low as possible; at present, a level of 50 mPa·s is regarded as optimal. For paper quality and coater runnability, the solids level should be as high as possible; however, the graininess of the coated paper surface and nozzle wear currently limit the maximum solids level to 60%. Most likely due to the low solids content and low viscosity of the coating colour, the low-shear Brookfield viscosity correlates very well with the paper and spray fan qualities. High-shear viscosity is also important, but less significant than the low-shear viscosity. Droplet size should be minimized; besides keeping the Brookfield viscosity low, this can be helped by using a surfactant or dispersing agent in the coating colour formula. Increasing the spraying pressure in the nozzle can also reduce the droplet size. A small droplet size also improves coating coverage, since there is hardly any levelling taking place after the impact with the base paper. Because of the lack of shear forces after application, the pigment particles do not orient along the paper surface. Therefore, the study indicates that, based on present know-how, no quality improvements can be obtained by using platy pigments; their other disadvantage is the rapid deterioration of nozzle lifetime. Further research in both coating colour rheology and nozzle design may change this in the future, but so far only round pigments, such as calcium carbonate typically is, can be used with spray coating. The low water retention characteristic of spray coating, enhanced by the low solids and low viscosity, challenges the base paper's absorption properties. The filler level has to be low so as not to increase the number of small pores, which have a great influence on the absorption properties of the base paper. Hydrophobic sizing reduces this absorption and prevents binder migration efficiently. High surface roughness and especially poor formation of the base paper deteriorate the spray coated paper properties. However, pre-calendering of the base paper does not contribute anything to the finished paper quality, at least at coating colour solids levels below 60%. When targeting a standard offset LWC grade, spray coating produces quality similar to film coating, with blade coating remaining on a slightly better level. However, because of savings in both investment and production costs, spray coating may have an excellent future ahead. The porous nature of the spray coated surface offers an optimal substrate for the coldset printing industry to utilise the potential of high-quality papers in their business.
Abstract:
This thesis studies gray-level distance transforms, particularly the Distance Transform on Curved Space (DTOCS). The transform is produced by calculating distances on a gray-level surface. The DTOCS is improved by defining more accurate local distances and by developing a faster transformation algorithm. The Optimal DTOCS enhances the locally Euclidean Weighted DTOCS (WDTOCS) with local distance coefficients which minimize the maximum error from the Euclidean distance in the image plane and produce more accurate global distance values. Convergence properties of the traditional mask operation, or sequential local transformation, and of the ordered propagation approach are analyzed and compared to the new, efficient priority pixel queue algorithm. The Route DTOCS algorithm developed in this work can be used to find and visualize the shortest routes between two points, or two point sets, along a varying-height surface. In a digital image there can be several paths sharing the same minimal length, and the Route DTOCS visualizes them all; a single optimal path can be extracted from the route set using a simple backtracking algorithm. A new extension of the priority pixel queue algorithm produces the nearest neighbor transform, or Voronoi (Dirichlet) tessellation, simultaneously with the distance map. The transformation divides the image into regions so that each pixel belongs to the region surrounding the reference point that is nearest according to the distance definition used. Applications and application ideas for the DTOCS and its extensions are presented, including obstacle avoidance, image compression and surface roughness evaluation.
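A minimal sketch of the priority pixel queue idea for a gray-level distance transform, using a DTOCS-style local distance of 1 + |g(p) − g(q)| between 4-neighbours; the image and seed set are placeholders, and diagonal neighbours and the optimized local coefficients are omitted:

```python
# Dijkstra-style propagation of gray-level distances with a priority queue.
import heapq
import numpy as np

def gray_distance_transform(gray, seeds):
    """Propagate distances from `seeds` over the gray-level surface `gray`."""
    h, w = gray.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for p in seeds:
        dist[p] = 0.0
        heapq.heappush(heap, (0.0, p))
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if d > dist[y, x]:
            continue                      # stale queue entry
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + 1.0 + abs(float(gray[y, x]) - float(gray[ny, nx]))
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    heapq.heappush(heap, (nd, (ny, nx)))
    return dist

gray = np.array([[0, 0, 9], [0, 9, 9], [0, 0, 0]], dtype=float)
print(gray_distance_transform(gray, [(0, 0)]))
```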
Abstract:
Background: Optimization methods allow designing changes in a system so that specific goals are attained. These techniques are fundamental for metabolic engineering. However, they are not directly applicable for investigating the evolution of metabolic adaptation to environmental changes. Although biological systems have evolved by natural selection, resulting in well-adapted systems, we can hardly expect that actual metabolic processes are at the theoretical optimum that could result from an optimization analysis. More likely, natural systems are to be found in a feasible region compatible with global physiological requirements. Results: We first present a new method for globally optimizing nonlinear models of metabolic pathways based on the Generalized Mass Action (GMA) representation. The optimization task is posed as a nonconvex nonlinear programming (NLP) problem that is solved by an outer-approximation algorithm. This method relies on iteratively solving reduced NLP slave subproblems and mixed-integer linear programming (MILP) master problems, which provide valid upper and lower bounds, respectively, on the global solution to the original NLP. The capabilities of this method are illustrated through its application to the anaerobic fermentation pathway in Saccharomyces cerevisiae. We next introduce a method to identify the feasible parametric regions that allow a system to meet a set of physiological constraints that can be represented mathematically through algebraic equations. This technique is based on applying the outer-approximation algorithm iteratively over a reduced search space in order to identify regions that contain feasible solutions to the problem and discard those in which no feasible solution exists. As an example, we characterize the feasible enzyme activity changes that are compatible with an appropriate adaptive response of the yeast Saccharomyces cerevisiae to heat shock. Conclusion: Our results show the utility of the suggested approach for investigating the evolution of adaptive responses to environmental changes. The proposed method can be used in other important applications, such as the evaluation of parameter changes that are compatible with health and disease states.
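As a toy illustration of the second method's goal (mapping the region of parameter space compatible with physiological constraints), the sketch below uses brute-force grid evaluation instead of the paper's outer-approximation-based search; the GMA-style power-law responses and the constraint bounds are invented:

```python
# Scan two enzyme-activity fold-changes and mark the feasible region.
import numpy as np

e1, e2 = np.meshgrid(np.linspace(0.1, 8, 200), np.linspace(0.1, 8, 200))

# Hypothetical steady-state responses in GMA-like power-law form.
flux = 2.0 * e1**0.8 * e2**0.3          # pathway flux
metab = 1.5 * e1**-0.4 * e2**0.6        # intermediate concentration

# Physiological constraints: enough flux, bounded metabolite pool.
feasible = (flux >= 4.0) & (metab <= 3.0)

print(f"{feasible.mean():.1%} of the scanned region is feasible")
```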
Abstract:
Our efforts are directed towards understanding the coscheduling mechanism in a NOW system when a parallel job is executed jointly with local workloads, balancing parallel performance against local interactive response. Explicit and implicit coscheduling techniques have been implemented in a PVM-Linux NOW (or cluster). Furthermore, dynamic coscheduling remains an open question when parallel jobs are executed in a non-dedicated cluster. A basic model for dynamic coscheduling in cluster systems is presented in this paper, and a dynamic coscheduling algorithm for this model is proposed. The applicability of this algorithm has been proved and its performance analyzed by simulation. Finally, a new tool (named Monito) for monitoring the different message queues in such environments is presented. The main aim of implementing this facility is to provide a means of capturing the bottlenecks and overheads of the communication system in a PVM-Linux cluster.
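A toy round-based simulation of the dynamic coscheduling intuition (a node that favours a parallel task with pending messages keeps communicating peers progressing together); the workload probabilities are invented, and the paper's model, algorithm and the Monito tool are far more detailed:

```python
# Compare plain time-sharing against message-driven priority boosting.
import random

random.seed(1)

def run(slots=1000, boost_on_message=True):
    progress, pending = 0, 0
    for _ in range(slots):
        pending += random.random() < 0.3        # a message arrives for the parallel task
        if boost_on_message and pending:
            pending -= 1                        # parallel task scheduled, message consumed
            progress += 1
        elif random.random() < 0.5:             # otherwise it competes with local jobs
            if pending:
                pending -= 1
                progress += 1
    return progress

print("plain scheduling    :", run(boost_on_message=False))
print("dynamic coscheduling:", run(boost_on_message=True))
```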
Abstract:
Orange fruits from two seasons, harvested in April and August 2006 (representing the late 2005 and early 2006 harvests, respectively), were cured in hot air at 36-37 °C to 1%, 3%, 5% and 7% weight loss before storage at 28 °C and 86% relative humidity (RH). The fruits were observed during storage for incidence of decay, further weight loss, juice content, firmness or softening of the peel, total soluble solids (TSS), pH, titratable acidity, and colour. Curing reduced the incidence of decay. All control fruits were rotten by day 21 in the August harvest, while 22.5% of the control fruits were rotten by day 56 in the April harvest. Storage life was extended beyond 56 days in fruits cured to 1, 3, 5 and 7% weight loss in the April harvest, as there was no decay throughout, while decay incidence in the August harvest was 88.9, 61.1, 22.2 and 31.3% for the 1, 3, 5 and 7% treatments, respectively. Penicillium digitatum, Phytophthora sp., Alternaria citri and Colletotrichum gloeosporioides were among the decay-causing moulds detected. Control fruits lost more weight during storage than cured fruits did. Fruit rind hardening was more noticeable in the control fruits and in those cured to 1% weight loss, especially from the April harvest; it was insignificant in the other treatments in both trials. Titratable acidity, pH, juice content and TSS were not affected by the treatment. Colour change to yellow was, however, retarded by curing. Curing to 5% weight loss was best for decay control and quality retention.
Abstract:
Monte Carlo (MC) simulations have been used to study the structure of an intermediate thermal phase of poly(α-octadecyl γ,D-glutamate). This is a comblike poly(γ-peptide) able to adopt a biphasic structure that has been described as a layered arrangement of backbone helical rods immersed in a paraffinic pool of polymethylene side chains. Simulations were performed at two different temperatures (348 and 363 K), both of them above the melting point of the paraffinic phase, using the configurational bias MC algorithm. Results indicate that layers are constituted by a side-by-side packing of 17/5 helices. The organization of the interlayer paraffinic region is described in atomistic terms by examining the torsional angles and the end-to-end distances of the octadecyl side chains. Comparison with previously reported comblike poly(β-peptide)s revealed significant differences in the organization of the alkyl side chains.