902 results for Dynamic search fireworks algorithm with covariance mutation


Relevance:

100.00%

Publisher:

Abstract:

In this paper, we consider active sampling to label pixels grouped by hierarchical clustering. The objective of the method is to match the data relationships discovered by the clustering algorithm with the user's desired class semantics. The first is represented as a complete tree to be pruned; the second is provided iteratively by the user. The proposed active learning algorithm searches for the pruning of the tree that best matches the labels of the sampled points. By choosing the part of the tree to sample from according to the current pruning's uncertainty, sampling is focused on the most uncertain clusters. In this way, large clusters whose class membership is already fixed are no longer queried, and sampling concentrates on dividing clusters that show mixed labels. The model is tested on a VHR image in a multiclass classification setting. The method clearly outperforms random sampling in a transductive setting, but it cannot generalize to unseen data, since it aims at optimizing the classification of a given cluster structure.
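
As an illustration of the sampling rule described above, the following Python sketch queries the next pixel from the cluster whose labels gathered so far are most mixed (highest entropy). It is a simplification of the paper's tree-pruning search: the hierarchy is simply cut into a fixed number of clusters, and all names, features, and parameters are illustrative.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    def cluster_entropy(labels):
        """Shannon entropy of the labels observed inside one cluster."""
        if len(labels) == 0:
            return np.inf                      # unexplored cluster: sample it first
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log(p + 1e-12)).sum())

    def next_query(cluster_ids, sampled_idx, sampled_labels, rng):
        """Pick one unlabeled pixel index from the most uncertain cluster."""
        best = max(np.unique(cluster_ids),
                   key=lambda c: cluster_entropy(
                       [l for i, l in zip(sampled_idx, sampled_labels)
                        if cluster_ids[i] == c]))
        candidates = np.setdiff1d(np.flatnonzero(cluster_ids == best), sampled_idx)
        return int(rng.choice(candidates))

    rng = np.random.default_rng(0)
    pixels = rng.normal(size=(500, 4))                       # toy pixel features
    cluster_ids = fcluster(linkage(pixels, "ward"), t=6, criterion="maxclust")
    query = next_query(cluster_ids, sampled_idx=[3, 42], sampled_labels=[0, 1], rng=rng)
    print("next pixel to label:", query)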

Relevance:

100.00%

Publisher:

Abstract:

AIM: The aim of this cross-sectional study was to provide normative data (ordinal scores and timed performances) for gross and fine motor tasks in typically developing children between 3 and 5 years of age using the Zurich Neuromotor Assessment (ZNA). METHOD: Typically developing children (n=101; 48 males, 53 females) between 3 and 5 years of age were enrolled from day-care centres in the greater Zurich area and tested using a modified version of the ZNA; the tests were recorded digitally on video. Intraobserver reliability was assessed on the videos of 20 children by one examiner. Interobserver reliability was assessed by two examiners. Test-retest reliability was performed on an additional 20 children. The modelling approach summarized the data with a linear age effect and an additive term for sex, while incorporating informative missing data in the normative values. Normative data for adaptive motor tasks, pure motor tasks, and static and dynamic balance were calculated with centile curves (for timed performance) and expected ordinal scores (for ordinal scales). RESULTS: Interobserver, intraobserver, and test-retest reliability of tasks were moderate to good. Nearly all tasks showed significant age effects, whereas sex was significant only for stringing beads and hopping on one leg. INTERPRETATION: These results indicate that timed performance and ordinal scales of neuromotor tasks can be reliably measured in preschool children and are characterized by developmental change and high interindividual variability.
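
For intuition about how such normative values can be computed, here is a rough Python sketch: timed performance is modeled with a linear age effect plus an additive sex term by ordinary least squares, and centile curves are read off assuming normally distributed residuals. The data are synthetic, and the sketch ignores the informative-missingness handling used in the actual study.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 101
    age = rng.uniform(3.0, 5.0, n)                  # years
    sex = rng.integers(0, 2, n)                     # 0 = male, 1 = female
    time_s = 20.0 - 2.5 * age + 1.0 * sex + rng.normal(0.0, 1.5, n)

    X = np.column_stack([np.ones(n), age, sex])
    beta, *_ = np.linalg.lstsq(X, time_s, rcond=None)
    resid_sd = np.std(time_s - X @ beta, ddof=X.shape[1])

    def centile(age_value, sex_value, z):
        """Predicted timed performance at a given normal quantile z."""
        mean = beta[0] + beta[1] * age_value + beta[2] * sex_value
        return mean + z * resid_sd

    # 10th/50th/90th centile curves for boys at age 4
    print([round(centile(4.0, 0, z), 2) for z in (-1.2816, 0.0, 1.2816)])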

Relevance:

100.00%

Publisher:

Abstract:

An alternative to the Pareto-dominance relation is proposed. The new relation is based on ranking a set of solutions according to each separate objective and on an aggregation function that calculates a scalar fitness value for each solution. The relation is called ranking-dominance, and it tries to tackle the curse of dimensionality commonly observed in evolutionary multi-objective optimization. Ranking-dominance can be used to sort a set of solutions even for a large number of objectives, when the Pareto-dominance relation can no longer distinguish solutions from one another. This permits the search to advance even with a large number of objectives. It is also shown that ranking-dominance does not violate Pareto-dominance. Results indicate that selection based on ranking-dominance is able to advance the search towards the Pareto front in some cases where selection based on Pareto-dominance stagnates. However, in some cases the search may also fail to proceed towards the Pareto front, because the ranking-dominance relation permits deterioration of individual objectives. Results also show that when the number of objectives increases, selection based on Pareto-dominance alone, without diversity maintenance, is able to advance the search better than with diversity maintenance. Diversity maintenance thus appears to aggravate the curse of dimensionality.
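
A minimal Python sketch of the ranking idea follows: solutions are ranked within each objective (minimization assumed) and the ranks are aggregated into one scalar fitness. The sum-of-ranks aggregation and the toy data are illustrative choices, not necessarily the exact scheme used in the paper.

    import numpy as np

    def ranking_fitness(objectives):
        """objectives: (n_solutions, n_objectives) array, lower is better."""
        # rank of each solution within each objective column (0 = best)
        ranks = np.argsort(np.argsort(objectives, axis=0), axis=0)
        return ranks.sum(axis=1)           # aggregated scalar fitness

    obj = np.array([[1.0, 9.0, 3.0],
                    [2.0, 2.0, 2.0],
                    [3.0, 1.0, 9.0]])      # three mutually non-dominated solutions
    print(ranking_fitness(obj))            # lower total rank = preferred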

Relevance:

100.00%

Publisher:

Abstract:

The manufacture of coated paper generates coated broke, which is recycled back into the process to make efficient use of the raw material. Along with the broke, pigment material also ends up back in the paper machine's short circulation in the form of plate-like particles. These particles are rejected by the centrifugal cleaners of the short circulation. To reduce raw material losses, the short circulation uses a filler recovery system whose task is to grind the coating pigment particles to a uniform size so that they can be returned to the process. For monitoring the operation of the recovery equipment, it is essential to know the particle size distributions of the pigment particles in the different fractions of the cleaner plant. This determination is disturbed by the fibre present in the samples. This work aimed to find a particle size analysis method with which the pigment particle size distribution could be determined despite the fibre. The previously used pretreatment, ashing the sample before particle size analysis with a laser diffractometer, has proven unworkable. The experiments focused mainly on sample pretreatment by fractionation before laser diffraction analysis, and on flow cytometry measurements. Fractionation was carried out with a DDJ device (dynamic drainage jar) equipped with a metal wire. Neither method was fully satisfactory for particle size analysis: fractionation reduces the error that fibre causes in the particle size distribution, but its performance depends heavily on the sample. With flow cytometry using the SYTO13 dye, the pigment particles can be identified and the fibres thereby excluded from the measurements, but the pigment cannot be separated from wood-based fines, which distorts the result.

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Recent neuroimaging studies suggest that value-based decision-making may rely on mechanisms of evidence accumulation. However, no studies have explicitly investigated the time at which single decisions are taken based on such an accumulation process. NEW METHOD: Here, we outline a novel electroencephalography (EEG) decoding technique which is based on accumulating the probability of appearance of prototypical voltage topographies and can be used for predicting subjects' decisions. We use this approach to study the time-course of single decisions during a task where subjects were asked to compare reward vs. loss points for accepting or rejecting offers. RESULTS: We show that, based on this new method, we can accurately decode decisions for the majority of the subjects. The typical time-period for accurate decoding was modulated by task difficulty on a trial-by-trial basis. Typical latencies of when decisions are made were detected at ∼500 ms for 'easy' vs. ∼700 ms for 'hard' decisions, well before subjects' responses (∼340 ms). Importantly, this decision time correlated with the drift rates of a diffusion model, evaluated independently at the behavioral level. COMPARISON WITH EXISTING METHOD(S): We compare the performance of our algorithm with logistic regression and support vector machines and show that we obtain significant results for a higher number of subjects than with these two approaches. We also carry out analyses at the average event-related potential level, for comparison with previous studies on decision-making. CONCLUSIONS: We present a novel approach for studying the timing of value-based decision-making, by accumulating patterns of topographic EEG activity at the single-trial level.
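
The sketch below illustrates the general idea of accumulating the probability of appearance of prototypical topographies, not the paper's exact decoder: prototype maps come from k-means, per-class prototype frequencies are estimated from training trials, and a test trial is decoded by accumulating a log-probability ratio over time until it crosses a threshold. All data are synthetic and the threshold is arbitrary.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    n_channels, n_times, n_prototypes = 32, 200, 8
    map_accept = rng.normal(size=n_channels)      # class-specific target maps
    map_reject = rng.normal(size=n_channels)

    def synth_trials(n_trials, class_map):
        """Noisy trials whose topography drifts toward a class-specific map."""
        ramp = np.linspace(0.0, 2.0, n_times)[:, None]
        return np.array([rng.normal(size=(n_times, n_channels)) + ramp * class_map
                         for _ in range(n_trials)])

    accept, reject = synth_trials(30, map_accept), synth_trials(30, map_reject)

    # prototypical voltage topographies from all training time points
    km = KMeans(n_clusters=n_prototypes, n_init=10, random_state=0)
    km.fit(np.concatenate([accept, reject]).reshape(-1, n_channels))

    def prototype_freq(trials):
        """Smoothed probability of each prototype appearing in a class."""
        labels = km.predict(trials.reshape(-1, n_channels))
        counts = np.bincount(labels, minlength=n_prototypes) + 1.0
        return counts / counts.sum()

    p_accept, p_reject = prototype_freq(accept), prototype_freq(reject)

    # decode one unseen trial by accumulating log-probability evidence over time
    trial = synth_trials(1, map_accept)[0]
    labels = km.predict(trial)
    evidence = np.cumsum(np.log(p_accept[labels]) - np.log(p_reject[labels]))
    crossings = np.flatnonzero(np.abs(evidence) > 8.0)
    when = int(crossings[0]) if crossings.size else n_times - 1
    print("decoded:", "accept" if evidence[when] > 0 else "reject", "at sample", when)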

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this study is to establish what dynamic organizational communication is, what kind of worldview it springs from, and what points of contact it has with the three-dimensional organization model of knowledge management. The study is a theoretical synthesis that also aims to find practical methods for supporting dynamic organizational communication, organizational dynamism, and self-renewal. Its theoretical starting points are Pekka Aula's theory of dynamic organizational communication and the three-dimensional organization model created mainly by Pirjo Ståhle, together with the related theoretical proposal on self-renewing systems. In the practically oriented part of the study, Finnish textbooks on organizational communication are analyzed in the light of the theory of dynamic organizational communication. Finally, a model is constructed that supports dynamic organizational communication and a dynamic organization. The study shows that, in the view of several theorists, organizations should abandon their outdated structures and develop their working environments by giving priority to human resources and by focusing on interaction between employees and on the exchange of knowledge and information.

Relevance:

100.00%

Publisher:

Abstract:

Background. Previous observations have found a high prevalence of obstructive sleep apnea (OSA) in the hemodialysis population, but the best diagnostic approach remains undefined. We assessed OSA prevalence and the performance of available screening tools in order to propose a specific diagnostic algorithm. Methods. 104 patients from 6 Swiss hemodialysis centers underwent polygraphy and completed 3 OSA screening scores: STOP-BANG, the Berlin Questionnaire, and the Adjusted Neck Circumference. The OSA predictors were identified on a derivation population and used to develop the diagnostic algorithm, which was validated on an independent population. Results. We found an OSA prevalence of 56% (AHI ≥ 15/h), which was largely underdiagnosed. Screening scores showed poor performance for OSA screening (ROC areas 0.538 [SE 0.093] to 0.655 [SE 0.083]). Age, neck circumference, and time on renal replacement therapy were the best predictors of OSA and were used to develop a screening algorithm with higher discriminatory performance than the classical screening tools (ROC area 0.831 [0.066]). Conclusions. Our study confirms the high OSA prevalence and highlights the low diagnosis rate of this treatable cardiovascular risk factor in the hemodialysis population. Considering the poor performance of existing OSA screening tools, we propose and validate a specific algorithm to identify hemodialysis patients at risk for OSA for whom further sleep investigations should be considered.
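
As a hedged illustration of how such a screening algorithm can be built from the three predictors named above (age, neck circumference, and time on renal replacement therapy), the Python sketch below fits a logistic regression on synthetic data and reports the ROC area. The coefficients, data, and library choice are illustrative and not taken from the paper.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    n = 104
    age = rng.normal(65, 12, n)
    neck_cm = rng.normal(40, 4, n)
    rrt_years = rng.gamma(2.0, 2.0, n)
    # synthetic outcome loosely driven by the three predictors
    logit = 0.06 * (age - 65) + 0.25 * (neck_cm - 40) + 0.15 * rrt_years - 0.3
    osa = rng.random(n) < 1 / (1 + np.exp(-logit))

    X = np.column_stack([age, neck_cm, rrt_years])
    model = LogisticRegression().fit(X, osa)
    print("ROC AUC:", round(roc_auc_score(osa, model.predict_proba(X)[:, 1]), 3))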

Relevance:

100.00%

Publisher:

Abstract:

We propose using the affinity propagation (AP) clustering algorithm for detecting multiple disjoint shoals, and we present an extension of AP, denoted STAP, that can be applied to shoals that undergo fusion and fission over time. STAP incorporates into AP a soft temporal constraint that takes cluster dynamics into account, encouraging partitions obtained at successive time steps to be consistent with each other. We explore how STAP performs under different settings of its parameters (strength of the temporal constraint, preferences, and distance metric) by applying the algorithm to simulated sequences of collective coordinated motion. We assess the validity of STAP by comparing its results to partitionings of the same data obtained from human observers in a controlled experiment. We observe that, under specific circumstances, AP yields partitions that agree quite closely with the ones made by human observers. We conclude that using the STAP algorithm with appropriate parameter settings is an appealing approach for detecting shoal fusion-fission dynamics.
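
For a concrete starting point, the sketch below runs plain affinity propagation (via scikit-learn) on a single synthetic frame of two-dimensional positions; the soft temporal constraint that turns AP into STAP is the paper's contribution and is not implemented here. The preference value and the data are illustrative.

    import numpy as np
    from sklearn.cluster import AffinityPropagation

    rng = np.random.default_rng(3)
    shoal_a = rng.normal([0.0, 0.0], 0.5, size=(20, 2))   # two synthetic shoals
    shoal_b = rng.normal([5.0, 5.0], 0.5, size=(25, 2))
    positions = np.vstack([shoal_a, shoal_b])

    # 'preference' controls how many exemplars (shoals) AP tends to find
    ap = AffinityPropagation(preference=-50.0, random_state=0).fit(positions)
    print("detected shoals:", len(ap.cluster_centers_indices_))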

Relevance:

100.00%

Publisher:

Abstract:

PURPOSE: We propose the use of a retrospectively gated cine fast spin echo (FSE) sequence for characterization of carotid artery dynamics. The aim of this study was to compare cine FSE measures of carotid dynamics with measures obtained on prospectively gated FSE images. METHODS: The common carotid arteries of 10 volunteers were imaged using two temporally resolved sequences: (i) cine FSE and (ii) prospectively gated FSE. Three raters manually traced the common carotid artery area for all cardiac phases on both sequences. Measured areas and systolic-diastolic area changes were calculated and compared. Inter- and intra-rater reliability were assessed for both sequences. RESULTS: No significant difference between cine FSE and prospectively gated FSE areas was observed (P = 0.36). Both sequences produced repeatable cross-sectional area measurements: inter-rater intraclass correlation coefficient (ICC) = 0.88 on cine FSE images and 0.87 on prospectively gated FSE images. The minimum detectable difference (MDD) in systolic-diastolic area was 4.9 mm(2) with cine FSE and 6.4 mm(2) with prospectively gated FSE. CONCLUSION: This cine FSE method produced repeatable dynamic carotid artery measurements with less artifact and greater temporal efficiency than prospectively gated FSE. Magn Reson Med 74:1103-1109, 2015. © 2014 Wiley Periodicals, Inc.
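
As a brief note on the reported statistic, one common way to obtain a minimum detectable difference is MDD = 1.96 · sqrt(2) · SEM with SEM = SD · sqrt(1 − ICC); the small sketch below applies this formula to placeholder values, not the study's raw data.

    import math

    def minimum_detectable_difference(sd, icc, confidence_z=1.96):
        """MDD = z * sqrt(2) * SEM, with SEM = SD * sqrt(1 - ICC)."""
        sem = sd * math.sqrt(1.0 - icc)
        return confidence_z * math.sqrt(2.0) * sem

    # placeholder between-subject SD (mm^2) and ICC, for illustration only
    print(round(minimum_detectable_difference(sd=6.0, icc=0.88), 2), "mm^2")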

Relevance:

100.00%

Publisher:

Abstract:

Six supercritical fluid extraction (SFE) methods were tested by varying the following operational parameters: CO2 pressure, time and temperature of extraction, type and proportion of static modifier, and Hydromatrix®/sample ratio in the cell. First, carbamate insecticides were extracted from spiked potato samples (fortification level of 0.5 mg kg-1) using the SFE procedures, and the final extracts were then analyzed by HPLC with fluorescence detection. Good performance was observed with the SFE methods operating at 50 ºC and a CO2 pressure of 350 bar. The best efficiency was obtained using acetonitrile as the modifier (3% of the cell volume) and a Hydromatrix®/sample ratio of 2:1. The static time was 1 min, the total extraction time was 35 min, the dynamic extraction was performed with 15 mL of CO2, and methanol (2 mL) was used to dissolve the final residue. Under these conditions, pesticide recoveries ranged from 72 to 94%, depending on the compound analyzed. At higher extraction temperatures, rapid degradation was observed for some compounds, such as aldicarb and carbaryl; the presence of their metabolites was confirmed by HPLC-APCI/MS in positive mode. Detection limits for the chromatographic analysis varied from 0.2 to 1.3 ng.

Relevance:

100.00%

Publisher:

Abstract:

The process of building mathematical models in quantitative structure-activity relationship (QSAR) studies is generally limited by the size of the dataset from which variables are selected. For huge datasets, the task of selecting the given number of variables that produces the best linear model can be enormous, if not infeasible. In this case, some methods can be used to separate the good parameter combinations from the bad ones. In this paper three methodologies are analyzed: systematic search, genetic algorithms, and chemometric methods. These methods are presented and discussed through practical examples.
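
To make the systematic-search option concrete, the short Python sketch below exhaustively evaluates every subset of a fixed size and keeps the descriptor combination with the lowest residual sum of squares of an ordinary least-squares fit. The dataset, subset size, and selection criterion are illustrative.

    from itertools import combinations
    import numpy as np

    rng = np.random.default_rng(0)
    n_samples, n_descriptors = 40, 10
    X = rng.normal(size=(n_samples, n_descriptors))
    # synthetic activity depending on descriptors 1, 4 and 7
    y = 2.0 * X[:, 1] - 1.5 * X[:, 4] + 0.5 * X[:, 7] + rng.normal(scale=0.2, size=n_samples)

    def rss(X_sub, y):
        """Residual sum of squares of an ordinary least-squares fit."""
        beta, *_ = np.linalg.lstsq(X_sub, y, rcond=None)
        return float(np.sum((y - X_sub @ beta) ** 2))

    best = min(combinations(range(n_descriptors), 3),
               key=lambda cols: rss(X[:, list(cols)], y))
    print("selected descriptors:", best)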

Relevance:

100.00%

Publisher:

Abstract:

Simulation has traditionally been used for analyzing the behavior of complex real-world problems. Even though only some features of the problems are considered, simulation time tends to become quite high even for common simulation problems. Parallel and distributed simulation is a viable technique for accelerating such simulations. The success of parallel simulation depends heavily on the combination of the simulation application, the algorithm, and the execution environment. In this thesis a conservative parallel simulation algorithm is applied to the simulation of a cellular network application in a distributed workstation environment. The thesis presents a distributed simulation environment, Diworse, which is based on the use of networked workstations. The distributed environment is considered especially hard for conservative simulation algorithms due to the high cost of communication. In this thesis, however, the distributed environment is shown to be a viable alternative if the amount of communication is kept reasonable. The novel ideas of multiple message simulation and channel reduction enable efficient use of this environment for the simulation of a cellular network application. The distribution of the simulation is based on a modification of the well-known Chandy-Misra deadlock avoidance algorithm with null messages. The basic Chandy-Misra algorithm is modified by using the null message cancellation and multiple message simulation techniques. The modifications reduce the number of null messages and the time required for their execution, thus reducing the overall simulation time. The null message cancellation technique reduces the processing time of null messages, as an arriving null message cancels other unprocessed null messages. Multiple message simulation forms groups of messages, as it simulates several messages before it releases the newly created messages. If the message population in the simulation is sufficient, no additional delay is caused by this operation. A new technique for taking the simulation application into account is also presented. Performance is improved by establishing a neighborhood for the simulation elements. The neighborhood concept is based on a channel reduction technique, where the properties of the application exclusively determine which connections are necessary when a certain accuracy of the simulation results is required. The distributed simulation is also analyzed in order to determine the effect of the different elements in the implemented simulation environment. This analysis is performed using critical path analysis, which allows a lower bound for the simulation time to be determined. In this thesis, critical times are computed for sequential and parallel traces. The analysis based on sequential traces reveals the parallel properties of the application, whereas the analysis based on parallel traces reveals the properties of the environment and the distribution.
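
The following Python sketch illustrates the null-message rule underlying the Chandy-Misra algorithm with two logical processes and a fixed lookahead; it is a didactic simplification, not the Diworse environment or the thesis' modified algorithm (null message cancellation and multiple message simulation are omitted), and the round-robin driver stands in for real distributed execution.

    import heapq

    LOOKAHEAD = 1.0   # minimum delay an LP adds to any message it sends
    END_TIME = 10.0

    class LP:
        def __init__(self, name):
            self.name = name
            self.clock = 0.0          # local simulation time
            self.channel_clock = 0.0  # timestamp of the last message received
            self.pending = []         # heap of (timestamp, payload)
            self.outbox = []

        def receive(self, ts, payload):
            self.channel_clock = max(self.channel_clock, ts)
            if payload != "null":              # null messages carry no event
                heapq.heappush(self.pending, (ts, payload))

        def step(self):
            sent_real = False
            # Conservative rule: only events not later than the channel clock
            # are safe, because messages arrive in timestamp order.
            while self.pending and self.pending[0][0] <= self.channel_clock:
                ts, payload = heapq.heappop(self.pending)
                self.clock = ts
                # processing an event here generates one new message
                self.outbox.append((ts + LOOKAHEAD, f"{self.name}-event"))
                sent_real = True
            self.clock = max(self.clock, self.channel_clock)
            if not sent_real:
                # Null message: a promise not to send anything earlier than
                # clock + LOOKAHEAD, so the peer can safely advance.
                self.outbox.append((self.clock + LOOKAHEAD, "null"))

    a, b = LP("A"), LP("B")
    heapq.heappush(a.pending, (0.5, "A-start"))   # seed one real event
    while a.clock < END_TIME or b.clock < END_TIME:
        for src, dst in ((a, b), (b, a)):
            src.step()
            for ts, payload in src.outbox:
                dst.receive(ts, payload)
            src.outbox.clear()
    print("final clocks:", a.clock, b.clock)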

Relevance:

100.00%

Publisher:

Abstract:

It is necessary to use highly specialized robots in ITER (International Thermonuclear Experimental Reactor), both in the manufacturing and in the maintenance of the reactor, because of the demanding environment. The sectors of the ITER vacuum vessel (VV) require more stringent tolerances than normally expected for a structure of this size. The VV consists of nine sectors that are to be welded together, and it has a toroidal chamber structure. The task of the designed robot is to carry the welding apparatus along a path within stringent tolerances during the assembly operation. In addition to the initial vacuum vessel assembly, after a limited running period sectors need to be replaced for repair. Mechanisms with closed-loop kinematic chains are used in the design of the robots in this work. One version is a purely parallel manipulator and another is a hybrid manipulator in which parallel and serial structures are combined. Traditional industrial robots, which generally have their links actuated in series, are inherently not very rigid and have poor dynamic performance at high speeds and under high dynamic loads. Compared with open-chain manipulators, parallel manipulators have high stiffness, high accuracy, and a high force/torque capacity in a reduced workspace. Parallel manipulators have a mechanical architecture in which all of the links are connected both to the base and to the end-effector of the robot. The purpose of this thesis is to develop special parallel robots for the assembly, machining, and repair of the VV of ITER. The assembly and machining of the vacuum vessel require a special robot. By studying the structure of the vacuum vessel, two novel parallel robots were designed and built; they have six and ten degrees of freedom and are driven by hydraulic cylinders and electric servo motors. Kinematic models for the proposed robots were defined and two prototypes were built. Experiments on machine cutting and laser welding with the 6-DOF robot were carried out. It was demonstrated that the parallel robots are capable of holding all necessary machining tools and welding end-effectors accurately and stably in all positions inside a vacuum vessel sector. The kinematic models proved to be complex, especially in the case of the 10-DOF robot because of its redundant structure. Multibody dynamics simulations were carried out to ensure sufficient stiffness during robot motion. The entire design and testing process of the robots was a complex task owing to the highly specialized manufacturing technology needed in the ITER reactor, but the results demonstrate the applicability of the proposed solutions quite well. The results offer not only devices but also a methodology for the assembly and repair of ITER by means of parallel robots.
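
To show the kind of kinematic model involved, here is a minimal inverse-kinematics sketch in Python for a generic six-legged Stewart/Gough-type parallel manipulator: given a platform pose, each actuator (cylinder) length is the distance between its base and platform joints. The joint layout and pose are illustrative and do not describe the ITER robots of the thesis.

    import numpy as np

    def rotation_zyx(yaw, pitch, roll):
        """Rotation matrix from Z-Y-X Euler angles (radians)."""
        cz, sz = np.cos(yaw), np.sin(yaw)
        cy, sy = np.cos(pitch), np.sin(pitch)
        cx, sx = np.cos(roll), np.sin(roll)
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        return Rz @ Ry @ Rx

    def leg_lengths(base_pts, platform_pts, t, R):
        """Cylinder lengths for platform pose (t, R): l_i = |t + R p_i - b_i|."""
        return np.linalg.norm(t + platform_pts @ R.T - base_pts, axis=1)

    # Illustrative joint layout: joints on circles of radius 2 m (base)
    # and 1 m (platform), spaced at 60-degree intervals.
    angles = np.radians([0, 60, 120, 180, 240, 300])
    base_pts = 2.0 * np.c_[np.cos(angles), np.sin(angles), np.zeros(6)]
    platform_pts = 1.0 * np.c_[np.cos(angles), np.sin(angles), np.zeros(6)]

    pose_t = np.array([0.1, -0.05, 1.5])                 # desired position (m)
    pose_R = rotation_zyx(np.radians(5), 0.0, np.radians(-3))
    print(np.round(leg_lengths(base_pts, platform_pts, pose_t, pose_R), 3))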

Relevance:

100.00%

Publisher:

Abstract:

A 1 µs molecular dynamics simulation was performed with a realistic model system of sodium dodecyl sulfate (SDS) micelles in aqueous solution, comprising 360 DS-, 360 Na+, and 90000 water particles. After 300 ns, three different micellar shapes and sizes of 41, 68, and 95 monomers were observed. The process led to stabilization of the total number of SDS clusters and an increase in the micellar radius to 2.23 nm, in agreement with experimental results. An important conclusion is that simulations restricted to a single aggregate should be regarded as constrained; size and shape distributions must be analyzed.
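
As an illustration of how aggregation numbers like those above can be extracted from a trajectory frame, the Python sketch below groups head-group coordinates into aggregates with a simple distance cutoff and connected components. The coordinates are synthetic (three Gaussian blobs sized like the reported micelles) and the cutoff is an assumption, not the analysis protocol of the paper.

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import connected_components
    from scipy.spatial.distance import pdist, squareform

    rng = np.random.default_rng(0)
    # three synthetic "micelles" of 41, 68 and 95 head groups (nm coordinates)
    centers = np.array([[0.0, 0.0, 0.0], [8.0, 0.0, 0.0], [0.0, 8.0, 0.0]])
    sizes = [41, 68, 95]
    coords = np.vstack([c + rng.normal(scale=1.0, size=(n, 3))
                        for c, n in zip(centers, sizes)])

    cutoff = 1.2  # nm; head groups closer than this belong to the same aggregate
    adj = squareform(pdist(coords)) < cutoff
    n_clusters, labels = connected_components(csr_matrix(adj), directed=False)
    counts = np.bincount(labels)
    print("aggregates:", n_clusters, "sizes:", sorted(counts, reverse=True))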