866 results for the Fuzzy Colour Segmentation Algorithm


Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVE: To investigate potential abnormalities in subcortical brain structures in conversion disorder (CD) compared with controls using a region of interest (ROI) approach. METHODS: Fourteen patients with motor CD were compared with 31 healthy controls using high-resolution MRI scans with an ROI approach focusing on the basal ganglia, thalamus and amygdala. Brain volumes were measured using Freesurfer, a validated segmentation algorithm. RESULTS: Significantly smaller left thalamic volumes were found in patients compared with controls when corrected for intracranial volume. These reductions did not vary with handedness, laterality, duration or severity of symptoms. CONCLUSIONS: These differences may reflect a primary disease process in this area or be secondary effects of the disorder, for example, resulting from limb disuse. Larger, longitudinal structural imaging studies will be required to confirm the findings and explore whether they are primary or secondary to CD.

Relevance:

100.00%

Publisher:

Abstract:

General clustering deals with weighted objects and fuzzy memberships. We investigate the group- or object-aggregation-invariance properties possessed by the relevant functionals (effective number of groups or objects, centroids, dispersion, mutual object-group information, etc.). The classical squared Euclidean case can be generalized to non-Euclidean distances, as well as to non-linear transformations of the memberships, yielding the c-means clustering algorithm as well as two presumably new procedures, the convex and pairwise convex clustering. Cluster stability and aggregation-invariance of the optimal memberships associated with the various clustering schemes are examined as well.
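The c-means scheme mentioned in the abstract can be sketched minimally. This is a generic fuzzy c-means with squared Euclidean distances and the usual power transformation of the memberships, not the convex or pairwise convex variants introduced in the paper; all parameter names are illustrative.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Generic fuzzy c-means: alternate between membership-weighted
    centroids and distance-based membership updates (fuzzifier m > 1)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)      # memberships of each object sum to 1
    for _ in range(n_iter):
        W = U ** m                          # non-linear membership transformation
        centroids = (W.T @ X) / W.sum(axis=0)[:, None]
        # squared Euclidean distance of every object to every centroid
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        d2 = np.maximum(d2, 1e-12)          # guard against division by zero
        U = d2 ** (-1.0 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)
    return centroids, U
```

Weighted objects fit the same scheme by multiplying `W` row-wise with object weights before the centroid update.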

Relevance:

100.00%

Publisher:

Abstract:

In this paper, the theory of hidden Markov models (HMM) is applied to the problem of blind (without training sequences) channel estimation and data detection. Within an HMM framework, the Baum–Welch (BW) identification algorithm is frequently used to find maximum-likelihood (ML) estimates of the corresponding model. However, such a procedure assumes the model (i.e., the channel response) to be static throughout the observation sequence. By introducing a parametric model for time-varying channel responses, a version of the algorithm that is more appropriate for mobile channels [time-dependent Baum–Welch (TDBW)] is derived. To compare algorithm behavior, a set of computer simulations for a GSM scenario is provided. Results indicate that, in comparison to other Baum–Welch (BW) versions of the algorithm, the TDBW approach attains a remarkable enhancement in performance, at the cost of only a moderate increase in computational complexity.
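Baum–Welch estimation is built on the forward-backward recursions; as a minimal sketch of the likelihood ingredient, a scaled forward pass for a discrete-output HMM might look as follows. The symbols `pi`, `A`, `B` are generic HMM parameters, not taken from the paper, which deals with channel responses rather than discrete emissions.

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm.
    pi: (S,) initial state probs, A: (S,S) transitions, B: (S,O) emissions."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    loglik = np.log(c)
    alpha = alpha / c                   # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict-and-update recursion
        c = alpha.sum()
        loglik += np.log(c)             # log-likelihood accumulates the scales
        alpha = alpha / c
    return loglik
```

Baum–Welch then re-estimates `pi`, `A`, `B` from the forward and backward variables; the static-model assumption criticized in the abstract enters precisely because `A` and `B` are held fixed over the whole sequence.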

Relevance:

100.00%

Publisher:

Abstract:

The present study was done with two different servo-systems. In the first system, a servo-hydraulic system was identified and then controlled by a fuzzy gain-scheduling controller. In the second servo-system, an electro-magnetic linear motor, the suppression of mechanical vibration and the position tracking of a reference model are studied by using a neural network and an adaptive backstepping controller, respectively. The research methods are described as follows. Electro-Hydraulic Servo Systems (EHSS) are commonly used in industry. These kinds of systems are nonlinear in nature and their dynamic equations have several unknown parameters. System identification is a prerequisite to the analysis of a dynamic system. One of the most promising novel evolutionary algorithms for solving global optimization problems is Differential Evolution (DE). In this study, the DE algorithm is proposed for handling nonlinear constraint functions with boundary limits of variables in order to find the best parameters of a servo-hydraulic system with a flexible load. DE guarantees fast convergence and accurate solutions regardless of the initial parameter values. The control of hydraulic servo-systems has been the focus of intense research over the past decades. These systems are nonlinear in nature and generally difficult to control, since changing system parameters while using the same gains will cause overshoot or even loss of system stability. The highly non-linear behaviour of these devices makes them ideal subjects for applying different types of sophisticated controllers. The study is concerned with a second-order model reference for the positioning control of a flexible-load servo-hydraulic system using fuzzy gain scheduling. In the present research, acceleration feedback was used to compensate for the lack of damping in the hydraulic system. For comparison, a P controller with feed-forward acceleration and different gains in extension and retraction is used.
The design procedure for the controller and the experimental results are discussed. The results suggest that the fuzzy gain-scheduling controller decreases the position reference tracking error. The second part of the research was done on a Permanent Magnet Linear Synchronous Motor (PMLSM). In this study, a recurrent neural network compensator for suppressing mechanical vibration in a PMLSM with a flexible load is studied. The linear motor is controlled by a conventional PI velocity controller, and the vibration of the flexible mechanism is suppressed by using a hybrid recurrent neural network. The differential evolution strategy and the Kalman filter method are used to avoid the local minimum problem and to estimate the states of the system, respectively. The proposed control method is first designed using a non-linear simulation model built in Matlab Simulink and then implemented on a practical test rig. The proposed method works satisfactorily and suppresses the vibration successfully. In the last part of the research, a nonlinear load control method is developed and implemented for a PMLSM with a flexible load. The purpose of the controller is to track the flexible load to the desired position reference as fast as possible and without awkward oscillation. The control method is based on an adaptive backstepping algorithm whose stability is ensured by the Lyapunov stability theorem. The states of the system needed by the controller are estimated using a Kalman filter. The proposed controller is implemented and tested in a linear motor test drive, and the responses are presented.
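The Differential Evolution step used for parameter identification can be sketched as a standard DE/rand/1/bin loop with box constraints. This is a generic sketch, not the thesis implementation: the objective `f` is assumed to be a simulation-error cost comparing the model against measured servo-hydraulic data, and all hyperparameter values are illustrative.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9, n_gen=200, seed=0):
    """DE/rand/1/bin with box constraints: mutate with a scaled difference
    vector, binomially cross over, and keep the trial if it is no worse."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    d = lo.size
    pop = lo + rng.random((pop_size, d)) * (hi - lo)   # random init inside bounds
    cost = np.array([f(x) for x in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)  # mutation + boundary handling
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True              # at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= cost[i]:                     # greedy one-to-one selection
                pop[i], cost[i] = trial, f_trial
    best = int(cost.argmin())
    return pop[best], cost[best]
```

For system identification, `f` would simulate the servo-hydraulic model with the candidate parameters and return the deviation from the measured response.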

Relevance:

100.00%

Publisher:

Abstract:

This work proposes the detection of red peaches in orchard images based on the definition of different linear color models in the RGB vector color space. The classification and segmentation of the pixels of the image are then performed by comparing the color distance from each pixel to the different previously defined linear color models. The proposed methodology has been tested with images obtained in a real orchard under natural light. The peach variety in the orchard was the paraguayo (Prunus persica var. platycarpa) peach with red skin. The segmentation results showed that the area of the red peaches in the images was detected with an average error of 11.6%; 19.7% in the case of bright illumination; 8.2% in the case of low illumination; 8.6% for occlusion up to 33%; 12.2% in the case of occlusion between 34 and 66%; and 23% for occlusion above 66%. Finally, a methodology was proposed to estimate the diameter of the fruits based on an ellipsoidal fitting. A first diameter was obtained by using all the contour pixels and a second diameter was obtained by rejecting some pixels of the contour. This approach enables a rough estimate of the fruit occlusion percentage range by comparing the two diameter estimates.
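The classification step described above — distance from each pixel to a linear color model, i.e. a line in RGB space — can be sketched as follows. The model parameters (`p0`, `direction`, `threshold`) are illustrative placeholders; the paper's fitted models for the paraguayo variety are not reproduced here.

```python
import numpy as np

def distance_to_color_line(pixels, p0, direction):
    """Euclidean distance from RGB pixels (N, 3) to the 3-D line
    p0 + t * direction, i.e. a linear color model in RGB space."""
    d = direction / np.linalg.norm(direction)
    v = pixels - p0
    proj = (v @ d)[:, None] * d          # component of each pixel along the line
    return np.linalg.norm(v - proj, axis=1)

def classify_red_peach(pixels, models, threshold=30.0):
    """Label a pixel as 'peach' if it lies within `threshold` of any of the
    linear color models (list of (p0, direction) pairs)."""
    dists = np.stack([distance_to_color_line(pixels, p0, d) for p0, d in models])
    return dists.min(axis=0) < threshold
```

Segmentation of a whole image amounts to reshaping it to `(H*W, 3)`, classifying, and reshaping the boolean mask back to `(H, W)`.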

Relevance:

100.00%

Publisher:

Abstract:

The purpose of pigment coating is to improve the surface properties of printing papers. The aim of this work was to find a suitable coating colour for coated coldset paper. The literature section discusses coldset printing and its problems. The basics of the coating method, the properties of the coating colour and their effect on the coating result are also covered. In addition, some methods for studying the surface of coated paper are presented. In the experimental section, the effects of different coating-colour compositions, coat weights and calendering on the printability of the paper were studied. The papers were coated with a Helicoater, and some coating colours were also tested in pilot-scale coating. An explanation for the behaviour of the paper in printing was sought in the surface structure of the coated paper. The best printability is achieved with a coating in which the only pigment is carbonate. Print quality can be improved by using calcined kaolin together with carbonate, but the surface strength of this coating is not sufficient for CSWO printing. Starch pigment improves water and ink absorption and thus makes the printed product drier and more pleasant to the touch, but causes smearing. This is due to too fast ink setting. A "soft" SB latex is better suited to offset printing than a "hard" latex that also contains PVAc. A "soft" latex gives better surface strength and print quality than a "hard" latex. Dusting of the paper in printing can be reduced by increasing the coat weight and lowering the solids content of the coating colour. Calendering cannot improve surface strength or print quality. The explanation for the print quality and printability of the papers studied is found by examining the surface structure of the coating. Print quality is most affected by the coverage of the coating. Poor coverage can be improved by increasing the coat weight. Dusting in printing is caused by pigments that are not bound to the paper surface.
This, in turn, is due to the poor water retention of the coating colour. The most useful information on the surface structure of these papers is obtained by examining the surface with a scanning electron microscope (SEM), an atomic force microscope (AFM) and laser-induced plasma spectrometry (LIPS). The advantage of LIPS is that the coat-weight distribution can be determined in both the x-y and z directions simultaneously at the same spot. LIPS also requires very little sample preparation.

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Recent neuroimaging studies suggest that value-based decision-making may rely on mechanisms of evidence accumulation. However, no studies have explicitly investigated the time at which single decisions are taken based on such an accumulation process. NEW METHOD: Here, we outline a novel electroencephalography (EEG) decoding technique which is based on accumulating the probability of appearance of prototypical voltage topographies and can be used for predicting subjects' decisions. We use this approach for studying the time-course of single decisions, during a task where subjects were asked to compare reward vs. loss points for accepting or rejecting offers. RESULTS: We show that, based on this new method, we can accurately decode decisions for the majority of the subjects. The typical time-period for accurate decoding was modulated by task difficulty on a trial-by-trial basis. Typical latencies of when decisions are made were detected at ∼500ms for 'easy' vs. ∼700ms for 'hard' decisions, well before subjects' response (∼340ms). Importantly, this decision time correlated with the drift rates of a diffusion model, evaluated independently at the behavioral level. COMPARISON WITH EXISTING METHOD(S): We compare the performance of our algorithm with logistic regression and support vector machines and show that we obtain significant results for a higher number of subjects than with these two approaches. We also carry out analyses at the average event-related potential level, for comparison with previous studies on decision-making. CONCLUSIONS: We present a novel approach for studying the timing of value-based decision-making, by accumulating patterns of topographic EEG activity at the single-trial level.
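One plausible reading of the accumulation idea is sequential evidence integration: per-sample probabilities that the current topography matches an "accept" prototype are converted to log-odds and summed until a bound is crossed. This is a hedged illustration of the general principle, not the authors' decoding pipeline; how `p_accept` is obtained (e.g. by template matching against prototypical topographies) is assumed.

```python
import numpy as np

def decode_decision(p_accept, threshold=5.0):
    """Accumulate single-trial evidence over time. p_accept[t] is the
    (assumed) probability that the topography at sample t matches an
    'accept' prototype. Returns (decision, decision_time): the sign of the
    cumulative log-odds when it first crosses +/- threshold, and the sample
    index of that crossing, or (None, None) if no bound is reached."""
    p = np.clip(np.asarray(p_accept, float), 1e-6, 1 - 1e-6)
    evidence = np.cumsum(np.log(p / (1 - p)))   # sequential log-odds, cf. diffusion models
    hit = np.flatnonzero(np.abs(evidence) >= threshold)
    if hit.size == 0:
        return None, None
    t = int(hit[0])
    return bool(evidence[t] > 0), t
```

The crossing time plays the role of the single-trial "decision time" that the abstract relates to diffusion-model drift rates.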

Relevance:

100.00%

Publisher:

Abstract:

Segmentation is a strategic tool that makes a company's use of resources more efficient and thus affects all customer-related business processes. The aim of this work was to construct a segmentation model (comprising both the segmentation process and the criteria) for the business-to-business Internet market. The results can, however, be interpreted and applied more broadly to high-technology business-to-business service markets. This study adds to our knowledge and offers a new perspective on segmentation in high-technology business-to-business service markets. The work describes the special characteristics of high technology and of business-to-business and service marketing, and how these factors affect the segmentation model. The case company's current segmentation practices were investigated through personal expert interviews. The interviews were used to form a picture of the current approaches as well as their starting points, strengths and challenges. After the interviews were analysed, a project was set up to develop the segmentation. As a result, a segmentation model was created that offers a solid foundation for developing segmentation as a continuous process. The work proposes integrating segmentation into the company's customer-related business processes, which is often missing from earlier work, and improving the flow of information so that segmentation can be exploited more effectively. Segmentation is a strategic tool and therefore requires the support and commitment of senior management. Correctly applied, segmentation offers the business an opportunity for significant benefits such as improved customer satisfaction and profitability.

Relevance:

100.00%

Publisher:

Abstract:

This paper describes an evaluation framework that allows a standardized and quantitative comparison of IVUS lumen and media segmentation algorithms. The framework was introduced at the MICCAI 2011 Computing and Visualization for (Intra)Vascular Imaging (CVII) workshop, comparing the results of the eight teams that participated. We describe the available database, comprising multi-center, multi-vendor and multi-frequency IVUS datasets, their acquisition, the creation of the reference standard, and the evaluation measures. The approaches address segmentation of the lumen, the media, or both borders; semi- or fully-automatic operation; and 2-D vs. 3-D methodology. Three performance measures for quantitative analysis have been proposed. The results of the evaluation indicate that segmentation of the vessel lumen and media is possible with an accuracy comparable to manual annotation when semi-automatic methods are used, and that encouraging results can also be obtained with fully-automatic segmentation. The analysis performed in this paper also highlights the challenges in IVUS segmentation that remain to be solved.
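Quantitative comparison of segmentation masks typically rests on overlap measures. As a minimal sketch, two common choices are shown below; the workshop's three specific performance measures are defined in the paper itself and are not assumed to be exactly these.

```python
import numpy as np

def jaccard(pred, ref):
    """Jaccard index (intersection over union) between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return inter / union if union else 1.0   # two empty masks agree perfectly

def dice(pred, ref):
    """Dice coefficient: 2|A∩B| / (|A|+|B|), twice the overlap over the sum."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    total = pred.sum() + ref.sum()
    return 2 * inter / total if total else 1.0
```

Against a manually annotated reference standard, such measures make semi- and fully-automatic methods directly comparable across teams.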

Relevance:

100.00%

Publisher:

Abstract:

Market segmentation first emerged as early as the 1950s and has since been one of the basic concepts of marketing. Most segmentation research has, however, focused on consumer markets, with the segmentation of business and industrial markets receiving less attention. The aim of this study is to create a segmentation model for industrial markets from the perspective of a provider of information technology products and services. The purpose is to determine whether the case company's current customer databases enable effective segmentation, to identify suitable segmentation criteria, and to assess whether and how the databases should be developed to enable more effective segmentation. The intention is to create one model shared by the different business units; the objectives of the different units must therefore be taken into account to avoid conflicts of interest. The research methodology is a case study. Both secondary sources and primary sources, such as the case company's own databases and interviews, were used. The starting point of the study was the research problem: can database-driven segmentation be used for profitable customer relationship management in the SME sector? The aim is to create a segmentation model that exploits the information in the databases without compromising the conditions of effective and profitable segmentation. The theoretical part examines segmentation in general, with an emphasis on industrial market segmentation. The aim is to form a clear picture of the different approaches to the topic and to deepen the view of the most important theories. The database analysis revealed clear deficiencies in the customer data. Basic contact information is available, but there is very little information suitable for segmentation. The flow of information from resellers and wholesalers should be improved in order to obtain end-customer data. Segmentation based on the current data relies mainly on secondary information such as industry and company size.
Even this information is not available for all of the companies in the database.

Relevance:

100.00%

Publisher:

Abstract:

We address the challenges of treating polarization and covalent interactions in docking by developing a hybrid quantum mechanical/molecular mechanical (QM/MM) scoring function based on the semiempirical self-consistent charge density functional tight-binding (SCC-DFTB) method and the CHARMM force field. To benchmark this scoring function within the EADock DSS docking algorithm, we created a publicly available dataset of high-quality X-ray structures of zinc metalloproteins ( http://www.molecular-modelling.ch/resources.php ). For zinc-bound ligands (226 complexes), the QM/MM scoring yielded a substantially improved success rate compared to the classical scoring function (77.0% vs 61.5%), while, for allosteric ligands (55 complexes), the success rate remained constant (49.1%). The QM/MM scoring significantly improved the detection of correct zinc-binding geometries and improved the docking success rate by more than 20% for several important drug targets. The performance of both the classical and the QM/MM scoring functions compares favorably to that of AutoDock4, AutoDock4Zn, and AutoDock Vina.

Relevance:

100.00%

Publisher:

Abstract:

This thesis addresses the problem of computing the minimal and maximal diameter of the Cayley graph of Coxeter groups. We first present the relevant parts of polytope theory and related Coxeter theory. After this, a method for constructing the orthogonal projections of a polytope from R^d onto R^2 and R^3, d ≥ 3, is presented. This method is the Equality Set Projection (ESP) algorithm, which requires a constant number of linear-programming problems per facet of the projection in the absence of degeneracy. The ESP algorithm also allows us to compute projected geometric diameters of high-dimensional polytopes. A representative set of projected polytopes is presented to illustrate the methods adopted in this thesis.
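For intuition, the underlying operation — orthogonally projecting a polytope from R^d onto a low-dimensional subspace — can be sketched on the vertex representation. This vertex-enumeration shortcut is only an illustration: the ESP algorithm itself works on the halfspace (facet) representation and avoids enumerating vertices, which is what makes it tractable in high dimension.

```python
import numpy as np

def project_vertices(V, basis):
    """Orthogonal projection of polytope vertices V (n, d) onto the subspace
    spanned by the rows of `basis` (k, d), k in {2, 3}. The basis rows are
    assumed orthonormal (e.g. coordinate axes); the projection of the
    polytope is the convex hull of the projected vertices."""
    return np.asarray(V, float) @ np.asarray(basis, float).T
```

For example, projecting the unit 3-cube onto the xy-plane recovers the unit square (each square vertex appearing twice, once per cube face).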

Relevance:

100.00%

Publisher:

Abstract:

A pyrographically decorated gourd, dated to the French Revolution period, has been alleged to contain a handkerchief dipped into the blood of the French king Louis XVI (1754-1793) after his beheading, but recent analyses of living males from two Bourbon branches cast doubt on its authenticity. We sequenced the complete genome of the DNA contained in the gourd at low coverage (∼2.5×), with coding sequences enriched to a higher ∼7.3× coverage. We found that the ancestry of the gourd's genome does not seem compatible with Louis XVI's known ancestry. From a functional perspective, we did not find an excess of alleles contributing to height, despite Louis XVI being described as the tallest person at Court. In addition, the eye colour prediction supported brown eyes, while Louis XVI had blue eyes. This is the first draft genome generated from a person who lived in a recent historical period; however, our results suggest that this sample may not correspond to the alleged king.

Relevance:

100.00%

Publisher:

Abstract:

We adapt the Shout and Act algorithm to Digital Objects Preservation, where agents explore file systems looking for digital objects to be preserved (victims). When they find something, they "shout" so that their agent mates can hear it. The louder the shout, the more urgent or important the finding is. Louder shouts can also indicate closeness. We perform several experiments to show that this system scales very well, with heterogeneous teams of agents outperforming homogeneous ones over a wide range of task complexities. The target at-risk documents are MS Office documents (including an RTF file) with Excel content or in Excel format. An interesting conclusion from the experiments is that fewer heterogeneous (varying-skill) agents can equal the performance of many homogeneous (combined super-skilled) agents, implying significant performance increases with lower overall cost growth. Our results impact the design of Digital Objects Preservation teams: a properly designed combination of heterogeneous teams is cheaper and more scalable when confronted with uncertain maps of digital objects that need to be preserved. A cost pyramid is proposed for engineers to use when modeling the most effective agent combinations.

Relevance:

100.00%

Publisher:

Abstract:

As wireless communications evolve towards heterogeneous networks, mobile terminals have been enabled to hand over seamlessly from one network to another. At the same time, the continuous increase in terminal power consumption has resulted in an ever-decreasing battery lifetime. To that end, network selection is expected to play a key role in minimizing the energy consumption, and thus in extending the terminal lifetime. Hitherto, terminals select the network that provides the highest received power. However, it has been proved that this solution does not provide the highest energy efficiency. Thus, this paper proposes an energy-efficient vertical handover algorithm that selects the most energy-efficient network, i.e. the one that minimizes the uplink power consumption. The performance of the proposed algorithm is evaluated through extensive simulations and it is shown to achieve high energy efficiency gains compared to the conventional approach.
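The core of the proposed selection rule — pick the candidate network minimizing estimated uplink power rather than maximizing downlink received power — reduces to an argmin. This is a hedged sketch: the field name `uplink_power_mw` and the candidate data are illustrative, and the paper's actual uplink power estimation model is not reproduced here.

```python
def select_network(networks):
    """Energy-efficient vertical handover sketch: choose the network with
    the lowest estimated uplink transmit power, instead of the one with
    the highest received (downlink) power."""
    return min(networks, key=lambda name: networks[name]["uplink_power_mw"])

def select_network_conventional(networks):
    """Conventional rule, for comparison: highest received power wins."""
    return max(networks, key=lambda name: networks[name]["rx_power_dbm"])
```

With a strong macro cell far away and a weaker but nearby small cell, the two rules can disagree — which is precisely the gap the proposed algorithm exploits.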