945 results for Density-based Scanning Algorithm


Relevance:

100.00%

Publisher:

Abstract:

Bandwidth-delay constrained least-cost multicast routing is a typical NP-complete problem. Although some swarm-based intelligent algorithms (e.g., the genetic algorithm (GA)) have been proposed to solve this problem, their weak local search ability limits their computational effectiveness. Exploiting the ability of the Physarum network model (PN) to build robust networks, a new hybrid algorithm, the Physarum network-based genetic algorithm (PNGA), is proposed in this paper. In PNGA, a PN-based updating strategy improves the crossover operator of the traditional GA: the parts shared by the parent chromosomes are preserved, and new offspring are generated by the Physarum network model. To evaluate the effectiveness of the proposed strategy, PNGA is compared with several typical genetic algorithms for solving multicast routing. The experiments show that PNGA is more efficient than the original GA. More importantly, PNGA is more robust, which is very important for solving the multicast routing problem.
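
As a rough illustration of the crossover strategy described in this abstract, the sketch below preserves the genes shared by both parents and refills the remaining positions. The random refill is a hypothetical stand-in for the Physarum-network-guided generation step, which the abstract does not specify; this is a minimal sketch, not the authors' implementation.

    import random

    def pn_style_crossover(parent_a, parent_b, gene_pool):
        """Keep the positions where both parents agree; refill the rest.
        In PNGA the refill is driven by the Physarum network model; the
        random choice below is only a stand-in for that step."""
        child = [g if g == h else None for g, h in zip(parent_a, parent_b)]
        for i, gene in enumerate(child):
            if gene is None:
                child[i] = random.choice(gene_pool)  # hypothetical refill
        return child

    parent_a = [0, 3, 5, 7, 9]
    parent_b = [0, 2, 5, 8, 9]
    print(pn_style_crossover(parent_a, parent_b, gene_pool=list(range(10))))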

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: In the Arkhangelsk region of Northern Russia, multidrug-resistant (MDR) tuberculosis (TB) rates in new cases are amongst the highest in the world. In 2014, MDR-TB rates reached 31.7% among new cases and 56.9% among retreatment cases. The development of new diagnostic tools allows for faster detection of both TB and MDR-TB and should lead to reduced transmission through earlier initiation of anti-TB therapy. STUDY AIM: The PROVE-IT (Policy Relevant Outcomes from Validating Evidence on Impact) Russia study aimed to assess the impact of implementing line probe assay (LPA) as part of an LPA-based diagnostic algorithm for patients with presumptive MDR-TB, with time from first care-seeking visit to initiation of MDR-TB treatment, rather than diagnostic accuracy, as the primary outcome, and to assess treatment outcomes. We hypothesized that the implementation of LPA would result in faster time to treatment initiation and better treatment outcomes.

METHODS: A culture-based diagnostic algorithm used prior to LPA implementation was compared to an LPA-based algorithm that replaced BacTAlert and Löwenstein-Jensen (LJ) culture for drug sensitivity testing. A total of 295 MDR-TB patients were included in the study: 163 diagnosed with the culture-based algorithm and 132 with the LPA-based algorithm.

RESULTS: Among smear-positive patients, the implementation of the LPA-based algorithm was associated with a median decrease in time to MDR-TB treatment initiation of 50 and 66 days compared to the culture-based algorithm (BacTAlert and LJ respectively, p<0.001). In smear-negative patients, the LPA-based algorithm was associated with a median decrease in time to MDR-TB treatment initiation of 78 days compared to the culture-based algorithm (LJ, p<0.001). However, several weeks were still needed for treatment initiation under the LPA-based algorithm: 24 days in smear-positive and 62 days in smear-negative patients. Overall treatment outcomes were better with the LPA-based algorithm than with the culture-based algorithm (p = 0.003). Treatment success rates at 20 months of treatment were higher in patients diagnosed with the LPA-based algorithm (65.2%) than in those diagnosed with the culture-based algorithm (44.8%). Mortality was also lower in the LPA-based algorithm group (7.6%) compared to the culture-based algorithm group (15.9%). There was no statistically significant difference in smear and culture conversion rates between the two algorithms.

CONCLUSION: The results of the study suggest that the introduction of LPA leads to faster MDR-TB diagnosis and earlier treatment initiation, as well as better treatment outcomes for patients with MDR-TB. These findings also highlight the need for further improvements within the health system to reduce both patient and diagnostic delays and truly optimize the impact of new, rapid diagnostics.

Relevance:

100.00%

Publisher:

Abstract:

Scientific workflow offers a framework for cooperation between remote and shared resources in a grid computing environment (GCE) for scientific discovery. One major function of scientific workflow is to schedule a collection of computational subtasks in well-defined orders for efficient output by estimating task durations at runtime. In this paper, we propose a novel time computation model based on algorithm complexity (termed the TCMAC model) for high-level data-intensive scientific workflow design. The proposed model schedules subtasks based on their durations and the complexities of the participating algorithms. Characterized by its use of task-duration computation functions for time efficiency, the TCMAC model has three features for full-aspect scientific workflows including both data flow and control flow: (1) it provides flexible and reusable task-duration functions in a GCE; (2) it facilitates better parallelism in iteration structures by providing more precise task durations; and (3) it accommodates dynamic task durations for rescheduling in selective structures of control flow. We also present theories and examples in scientific workflows to show the efficiency of the TCMAC model, especially for control flow.
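
A minimal sketch of the duration-estimation idea, under assumptions the abstract does not state: each subtask's duration is modeled as its algorithm's complexity class evaluated at the input size, scaled by a machine-dependent constant, and the estimates then drive a simple scheduling order. All names and constants are illustrative, not the paper's TCMAC formulation.

    import math

    # Hypothetical complexity-based duration models: estimated seconds as a
    # function of input size n, scaled by a machine-dependent constant c.
    DURATION_MODELS = {
        "sort":     lambda n, c: c * n * math.log2(max(n, 2)),  # O(n log n)
        "pairwise": lambda n, c: c * n * n,                     # O(n^2)
        "scan":     lambda n, c: c * n,                         # O(n)
    }

    def estimate_durations(tasks, c=1e-6):
        """Return (task_name, estimated_seconds) for each (name, kind, n)."""
        return [(name, DURATION_MODELS[kind](n, c)) for name, kind, n in tasks]

    tasks = [("t1", "sort", 1_000_000), ("t2", "pairwise", 2_000), ("t3", "scan", 5_000_000)]
    # One simple policy: run the subtasks shortest-estimated-first.
    for name, secs in sorted(estimate_durations(tasks), key=lambda t: t[1]):
        print(f"{name}: ~{secs:.2f} s")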

Relevance:

100.00%

Publisher:

Abstract:

Wide field-of-view (FOV) microscopy is of great importance to biological research and clinical diagnosis where high-throughput screening of samples is needed. This thesis presents the development of several novel wide-FOV imaging technologies and demonstrates their capabilities in longitudinal imaging of living organisms, at scales ranging from viral plaques to live cells and tissues.

The ePetri Dish is a wide-FOV on-chip bright-field microscope. Here we applied the ePetri platform to plaque analysis of murine norovirus 1 (MNV-1). The ePetri offers the ability to dynamically track plaques at the individual-cell-death-event level over a wide FOV of 6 mm × 4 mm at 30-min intervals. A density-based clustering algorithm is used to analyze the spatio-temporal distribution of cell death events and identify plaques at their earliest stages. We also demonstrate the capabilities of the ePetri in viral titer counting and in dynamically monitoring plaque formation, growth, and the influence of antiviral drugs.
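
The thesis text above does not give the clustering code; the sketch below shows the general spatio-temporal idea with an off-the-shelf DBSCAN. Cell-death events are treated as points in (x, y, t), time is rescaled by an assumed space-time equivalence factor so a single epsilon applies to both, and dense clusters are read as nascent plaques. Data and parameters are illustrative, not from the thesis.

    import numpy as np
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(0)
    # Synthetic cell-death events (x_um, y_um, t_hours): two plaque-like
    # clusters plus scattered background deaths (values are illustrative).
    plaque1 = rng.normal([1000, 800, 10], [40, 40, 3], size=(60, 3))
    plaque2 = rng.normal([3500, 2500, 20], [50, 50, 4], size=(80, 3))
    noise = rng.uniform([0, 0, 0], [6000, 4000, 48], size=(40, 3))
    events = np.vstack([plaque1, plaque2, noise])

    # Rescale time so one epsilon covers space and time; the 25 um/hour
    # equivalence factor is an assumed, tunable choice.
    scaled = events * np.array([1.0, 1.0, 25.0])
    labels = DBSCAN(eps=120, min_samples=10).fit_predict(scaled)
    print("plaques found:", len(set(labels) - {-1}),
          "| noise events:", int((labels == -1).sum()))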

We developed another wide-FOV imaging technique, the Talbot microscope, for fluorescence imaging of live cells. The Talbot microscope takes advantage of the Talbot effect and can generate a focal-spot array to scan fluorescence samples directly on-chip. It has a resolution of 1.2 μm and a FOV of ~13 mm². We further upgraded the Talbot microscope for long-term time-lapse fluorescence imaging of live cell cultures, and analyzed the cells' dynamic response to an anticancer drug.

We present two wide-FOV endoscopes for tissue imaging, named the AnCam and the PanCam. The AnCam is based on contact image sensor (CIS) technology and can scan the whole anal canal within 10 seconds with a resolution of 89 μm, a maximum FOV of 100 mm × 120 mm, and a depth of field (DOF) of 0.65 mm. We also demonstrate the performance of the AnCam in whole-anal-canal imaging in both animal models and real patients. The PanCam, in turn, is based on a smartphone platform integrated with a panoramic annular lens (PAL), and can capture a FOV of 18 mm × 120 mm in a single shot with a resolution of 100–140 μm. In this work we demonstrate the PanCam's performance in imaging a stained tissue sample.

Relevance:

100.00%

Publisher:

Abstract:

Based on the characteristics of gene expression data, a high-accuracy density-based clustering algorithm, DENGENE, is proposed. By defining a consistency check and introducing peak points to improve the search direction, DENGENE handles gene expression data more effectively. To evaluate the performance of the algorithm, two widely used test datasets, namely brewer's yeast (Saccharomyces cerevisiae) gene expression datasets, were selected for testing. Experimental results show that, compared with five model-based algorithms, the CAST algorithm, and K-means clustering, DENGENE achieves significant improvements in noise filtering and clustering accuracy.

Relevance:

100.00%

Publisher:

Abstract:

Rock mass characterization requires a deep geometric understanding of the discontinuity sets affecting rock exposures. Recent advances in Light Detection and Ranging (LiDAR) instrumentation currently allow quick and accurate 3D data acquisition, leading to the development of new methodologies for the automatic characterization of rock mass discontinuities. This paper presents a methodology for the identification and analysis of flat surfaces outcropping in a rocky slope using 3D data obtained with LiDAR. The method identifies and defines the algebraic equations of the different planes of the rock slope surface by applying a coplanarity test to neighbouring points, finding principal orientations by Kernel Density Estimation, and identifying clusters with the Density-Based Scan Algorithm with Noise (DBSCAN). Different sources of information (synthetic and 3D scanned data) were employed, and a complete sensitivity analysis of the parameters was performed in order to identify optimal values for the variables of the proposed method. In addition, the raw source files and obtained results are freely provided to allow more straightforward comparison between methods and more reproducible research.
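
A compressed sketch of the pipeline on synthetic data: local normals are estimated by PCA over the k nearest neighbours (the coplanarity test), and points are grouped into discontinuity sets by clustering the normals with DBSCAN. The paper's separate Kernel Density Estimation step over principal orientations is folded into the clustering here, and all parameters are assumed.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors
    from sklearn.cluster import DBSCAN

    def local_normals(points, k=12):
        """Estimate a unit normal per point from PCA of its k nearest
        neighbours; a small residual along the normal indicates coplanarity."""
        nbrs = NearestNeighbors(n_neighbors=k).fit(points)
        _, idx = nbrs.kneighbors(points)
        normals = np.empty_like(points)
        for i, neigh in enumerate(idx):
            q = points[neigh] - points[neigh].mean(axis=0)
            # Eigenvector of the smallest singular value = plane normal.
            _, _, vt = np.linalg.svd(q, full_matrices=False)
            normals[i] = vt[-1]
        return normals

    # Synthetic slope: two intersecting planar faces with slight noise,
    # an illustrative stand-in for a LiDAR point cloud.
    rng = np.random.default_rng(1)
    xy = rng.uniform(0, 10, size=(400, 2))
    face1 = np.c_[xy[:200], 0.2 * xy[:200, 0] + rng.normal(0, 0.01, 200)]
    face2 = np.c_[xy[200:], 5 - 1.5 * xy[200:, 1] + rng.normal(0, 0.01, 200)]
    pts = np.vstack([face1, face2])

    n = local_normals(pts)
    n *= np.sign(n[:, 2:3] + 1e-12)        # orient normals to one hemisphere
    labels = DBSCAN(eps=0.05, min_samples=10).fit_predict(n)
    print("discontinuity sets found:", len(set(labels) - {-1}))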

Relevance:

100.00%

Publisher:

Abstract:

A sparse kernel density estimator is derived based on a zero-norm constraint, in which the zero-norm of the kernel weights is incorporated to enhance model sparsity. The classical Parzen window estimate is adopted as the desired response for density estimation, and an approximate function of the zero-norm is used to achieve mathematical tractability and algorithmic efficiency. Under the mild condition of a positive definite design matrix, the kernel weights of the proposed density estimator based on the zero-norm approximation can be obtained using the multiplicative nonnegative quadratic programming algorithm. Using the D-optimality based selection algorithm as a preprocessing step to select a small significant subset design matrix, the proposed zero-norm based approach offers an effective means for constructing very sparse kernel density estimates with excellent generalisation performance.
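
A minimal sketch of the multiplicative nonnegative quadratic programming (MNQP) step on a 1-D toy problem, with the zero-norm approximation penalty and the D-optimality preprocessing omitted for brevity; the bandwidth and data are assumed. The Parzen window estimate is the desired response, and the multiplicative update keeps the kernel weights nonnegative while decreasing the quadratic cost.

    import numpy as np

    def gauss_kernel(x, c, h):
        return np.exp(-(x - c) ** 2 / (2 * h * h)) / (h * np.sqrt(2 * np.pi))

    # 1-D sample from a two-component mixture (illustrative data).
    rng = np.random.default_rng(2)
    x = np.concatenate([rng.normal(-2, 0.5, 150), rng.normal(1, 1.0, 150)])
    h = 0.3                                  # kernel width (assumed)

    # Design matrix of candidate kernels centred on the data points, and
    # the classical Parzen window estimate as the desired response.
    Phi = gauss_kernel(x[:, None], x[None, :], h)   # Phi[i, j] = K(x_i; x_j)
    parzen = Phi.mean(axis=1)

    B = Phi.T @ Phi / len(x)
    d = Phi.T @ parzen / len(x)

    # MNQP updates: with B and d elementwise nonnegative, w <- w * d / (B w)
    # keeps w >= 0 and decreases the cost 0.5 w'Bw - w'd at every step.
    w = np.full(len(x), 1.0 / len(x))
    for _ in range(200):
        w *= d / (B @ w + 1e-12)
    w /= w.sum()                             # renormalise to a density
    print("non-negligible kernels:", int((w > 1e-4).sum()), "of", len(w))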

Relevance:

100.00%

Publisher:

Abstract:

Determining suitable bus-stop locations is critical to improving the quality of bus services. Previous studies on selecting bus-stop locations mainly consider environmental factors such as population density and traffic conditions, and seldom consider people's travel patterns, which are a key factor for determining bus-stop locations. To capture people's travel patterns, this paper improves the density-based spatial clustering of applications with noise (DBSCAN) algorithm to find hot pick-up and drop-off locations from taxi GPS data. The discovered density-based hot locations can be regarded as candidates for bus-stop locations. This paper further utilizes the improved DBSCAN algorithm, named C-DBSCAN, to discover candidate bus-stop locations to Capital International Airport in Beijing based on taxi GPS data from November 2012. Finally, this paper discusses the effects of key parameters of the C-DBSCAN algorithm on the clustering results.

Keywords: bus-stop locations; public transport service; taxi GPS data; centralized density-based spatial clustering of applications with noise.
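
The paper's centralized modification (C-DBSCAN) is not reproduced here; the sketch below runs plain DBSCAN, the baseline it builds on, over synthetic pick-up/drop-off coordinates with a haversine metric so that epsilon is a real distance in metres. Coordinates and parameters are illustrative.

    import numpy as np
    from sklearn.cluster import DBSCAN

    EARTH_RADIUS_M = 6_371_000

    # Illustrative pick-up/drop-off points (lat, lon in degrees); in the
    # paper these would come from a month of taxi GPS traces.
    rng = np.random.default_rng(3)
    hot1 = rng.normal([40.080, 116.585], [0.0005, 0.0005], size=(120, 2))
    hot2 = rng.normal([39.915, 116.404], [0.0005, 0.0005], size=(90, 2))
    noise = rng.uniform([39.8, 116.2], [40.1, 116.7], size=(50, 2))
    coords = np.vstack([hot1, hot2, noise])

    # The haversine metric expects radians; eps is a distance on the sphere.
    eps_m = 150.0
    db = DBSCAN(eps=eps_m / EARTH_RADIUS_M, min_samples=20, metric="haversine")
    labels = db.fit_predict(np.radians(coords))
    for lab in sorted(set(labels) - {-1}):
        centre = coords[labels == lab].mean(axis=0)
        print(f"candidate stop {lab}: lat={centre[0]:.5f}, lon={centre[1]:.5f}")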

Relevance:

100.00%

Publisher:

Abstract:

The K-means algorithm is one of the most popular clustering techniques. Nevertheless, its performance depends highly on the initial cluster centers, and it may converge to local minima. This paper proposes a hybrid evolutionary-programming-based clustering algorithm, called PSO-SA, combining particle swarm optimization (PSO) and simulated annealing (SA). The basic idea is to search around the global solution with SA and to increase the information exchange among particles using a mutation operator to escape local optima. Three datasets, Iris, Wisconsin Breast Cancer, and Ripley's Glass, are considered to show the effectiveness of the proposed clustering algorithm in providing optimal clusters. The simulation results show that the PSO-SA clustering algorithm not only finds better solutions but also converges more quickly than the K-means, PSO, and SA algorithms.
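
A minimal sketch of the PSO-SA idea under assumed hyperparameters: each particle encodes a full set of cluster centers, the standard PSO velocity update drives the swarm toward personal and global bests, and an SA-style acceptance step perturbs the global best and occasionally accepts worse moves to escape local optima. The paper's exact mutation operator and cooling schedule are not given in the abstract, so those details are guesses.

    import numpy as np

    rng = np.random.default_rng(4)
    X = np.vstack([rng.normal(m, 0.4, size=(50, 2)) for m in ([0, 0], [3, 3], [0, 4])])
    K, D, P = 3, X.shape[1], 20                  # clusters, dims, particles

    def sse(centers):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        return d.min(axis=1).sum()               # within-cluster squared error

    pos = rng.uniform(X.min(), X.max(), size=(P, K, D))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([sse(p) for p in pos])
    g = pbest[pbest_f.argmin()].copy()
    T = 1.0                                      # SA temperature (assumed schedule)

    for it in range(200):
        r1, r2 = rng.random((2, P, 1, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos += vel
        f = np.array([sse(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        # SA step: mutate the global best and accept worse moves with
        # probability exp(-delta/T) to escape local optima.
        cand = g + rng.normal(0, 0.1, g.shape)
        delta = sse(cand) - sse(g)
        if delta < 0 or rng.random() < np.exp(-delta / T):
            g = cand
        if sse(pbest[pbest_f.argmin()]) < sse(g):
            g = pbest[pbest_f.argmin()].copy()
        T *= 0.98
    print("final SSE:", round(sse(g), 2))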

Relevance:

100.00%

Publisher:

Abstract:

The increase in data-center-dependent services has made energy optimization of data centers one of the most exigent challenges in today's Information Age. Green and energy-efficient measures are sorely needed to reduce carbon footprint and exorbitant energy costs. However, inefficient application management in data centers results in high energy consumption and low resource utilization efficiency. Unfortunately, in most cases, deploying an energy-efficient application management solution inevitably degrades the resource utilization efficiency of the data center. To address this problem, a penalty-based Genetic Algorithm (GA) is presented in this paper to solve a defined profile-based application assignment problem while maintaining a trade-off between power consumption and resource utilization performance. Case studies show that the penalty-based GA is highly scalable and provides 16% to 32% better solutions than a greedy algorithm.
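
A minimal sketch of a penalty-based fitness under an assumed linear power model: rather than forbidding infeasible assignments, the fitness adds a penalty proportional to server overload, keeping the power/utilization trade-off explicit for the GA to negotiate. Demands, power constants, and the bare selection-plus-mutation loop are illustrative, not the paper's setup.

    import numpy as np

    rng = np.random.default_rng(5)
    N_APPS, N_SERVERS = 30, 6
    cpu_demand = rng.uniform(0.05, 0.3, N_APPS)  # profile-based demands (assumed)

    def fitness(assign, penalty_weight=10.0):
        """Minimise active-server power while penalising servers whose CPU
        load exceeds capacity; the penalty term keeps the trade-off explicit
        instead of discarding infeasible genomes."""
        load = np.bincount(assign, weights=cpu_demand, minlength=N_SERVERS)
        active = load > 0
        power = active.sum() * 100 + 50 * load.sum()  # idle + proportional watts
        overload = np.clip(load - 1.0, 0, None).sum()
        return power + penalty_weight * 100 * overload

    pop = rng.integers(0, N_SERVERS, size=(40, N_APPS))
    for gen in range(100):                       # bare-bones GA loop
        f = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(f)[:20]]        # truncation selection
        children = parents[rng.integers(0, 20, 40)].copy()
        mut = rng.random(children.shape) < 0.05  # per-gene mutation
        children[mut] = rng.integers(0, N_SERVERS, mut.sum())
        pop = children
    print("best fitness:", round(min(fitness(ind) for ind in pop), 1))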

Relevance:

100.00%

Publisher:

Abstract:

An FFT-based two-step phase-shifting (TPS) algorithm is described in detail and implemented on experimental interferograms. This algorithm was proposed to solve the TPS problem with a random phase shift other than π. Comparison with the visibility-function-based TPS algorithm shows that the FFT-based algorithm has clear advantages in phase extraction. We also present a π-phase-shift supplement to the TPS algorithm, which combines the two interferograms and demodulates the phase map by locating the extrema of the combined fringes after removing the respective backgrounds. By combining this method with the FFT-based one, TPS with an arbitrary random phase shift can be fully implemented. We then systematically compare TPS with the single-interferogram analysis algorithm and the conventional three-step phase-shifting algorithm. The results demonstrate that the FFT-based TPS algorithm achieves satisfactory accuracy. Finally, based on polarizing interferometry, a schematic setup of a two-channel TPS interferometer with random phase shift is suggested to enable simultaneous collection of the interferograms.
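
The sketch below demodulates two simulated interferograms with an unknown inter-frame shift. Note the shortcut: instead of the paper's FFT analysis, the shift is estimated from spatial statistics of the background-removed fringes (valid when many fringes are present), so this is a simplified stand-in rather than the published algorithm.

    import numpy as np

    # Simulated two-step interferograms I = a + b*cos(phi + delta_k), with an
    # unknown inter-frame shift delta (not equal to pi). Illustrative only.
    N = 512
    x, y = np.meshgrid(np.linspace(-1, 1, N), np.linspace(-1, 1, N))
    phi = 40 * x + 8 * (x**2 + y**2)             # test phase with carrier fringes
    a, b, delta = 120.0, 80.0, 1.23
    I1 = a + b * np.cos(phi)
    I2 = a + b * np.cos(phi + delta)

    # Background removal: the backgrounds are constant here, so subtracting
    # the mean suffices; the paper removes them in the Fourier domain.
    c1, c2 = I1 - I1.mean(), I2 - I2.mean()

    # With many fringes <cos^2 phi> ~ 1/2, so spatial averages give cos(delta).
    cos_d = np.clip((c1 * c2).mean() / (c1 * c1).mean(), -1, 1)
    sin_d = np.sqrt(1 - cos_d**2)                # sign ambiguity ignored here
    print("estimated delta:", round(float(np.arccos(cos_d)), 3), "true:", delta)

    # Solve b*cos(phi) = c1 and b*cos(phi + delta) = c2 for the wrapped phase:
    # c1*cos(delta) - c2 = b*sin(phi)*sin(delta).
    num = c1 * cos_d - c2
    phi_wrapped = np.arctan2(num / sin_d, c1)
    err = np.angle(np.exp(1j * (phi_wrapped - phi)))
    print("rms phase error (rad):", round(float(np.sqrt((err**2).mean())), 4))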

Relevance:

100.00%

Publisher:

Abstract:

We formulate density estimation as an inverse operator problem. We then use convergence results of empirical distribution functions to true distribution functions to develop an algorithm for multivariate density estimation. The algorithm is based on a Support Vector Machine (SVM) approach to solving inverse operator problems. The algorithm is implemented and tested on simulated data from different distributions and dimensionalities: Gaussians and Laplacians in $R^2$ and $R^{12}$. A comparison in performance is made with Gaussian Mixture Models (GMMs). Our algorithm performs as well as or better than the GMMs on the simulations tested and has the added advantage of being automated with respect to parameters.
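
As a sketch of the underlying idea, matching the empirical distribution function with a kernel expansion whose coefficients solve a linear inverse problem, the following uses nonnegative least squares in place of the paper's SVM machinery; the bandwidth and data are assumed, and the nonnegativity constraint alone already produces sparse kernel weights.

    import numpy as np
    from scipy.optimize import nnls
    from scipy.stats import norm

    # 1-D data from a Laplacian (illustrative; the paper works in R^2/R^12).
    rng = np.random.default_rng(6)
    x = np.sort(rng.laplace(0, 1, 300))

    # Empirical distribution function: the target of the inverse problem.
    F_emp = (np.arange(1, len(x) + 1) - 0.5) / len(x)

    # Kernel CDFs centred on the data: if f = sum_j w_j * N(x_j, h), its CDF
    # is linear in w, so matching F_emp is a linear inverse problem.
    h = 0.5                                      # bandwidth (assumed)
    A = norm.cdf((x[:, None] - x[None, :]) / h)  # A[i, j] = Phi((x_i - x_j)/h)
    w, _ = nnls(A, F_emp)                        # nonnegativity -> sparse weights
    w /= w.sum()                                 # normalise to a proper density

    print("nonzero kernel weights:", int((w > 1e-8).sum()), "of", len(w))
    f_at_0 = float((w * norm.pdf((0 - x) / h) / h).sum())
    print("estimated f(0):", round(f_at_0, 3), "| true Laplace f(0) = 0.5")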

Relevance:

100.00%

Publisher:

Abstract:

The next generation of wireless networks is envisioned as a convergence of heterogeneous radio access networks. Since technologies are becoming more collaborative, a possible integration between IEEE 802.16-based networks and previous generations of telecommunication systems (2G, ..., 3G) must be considered. A novel quality-function-based vertical handoff (VHO) algorithm, built on the proposed velocity and average received-power estimation algorithms, is discussed in this paper. Short-time Fourier analysis of the received signal strength (RSS) is employed to obtain mobile speed and average received power estimates. The performance of the quality-function-based VHO algorithm is evaluated by means of quality of service (QoS) measures. Simulation results show that the proposed quality function brings significant gains in QoS and enables more efficient use of resources.
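
The paper's quality function is not given in the abstract; the hypothetical sketch below only shows the general shape of such a decision rule: criteria (RSS, speed, bandwidth, cost) normalized to [0, 1], an assumed weight vector, and a hysteresis margin to avoid ping-pong handoffs. Every constant is an assumption, and the short-time Fourier estimation of speed and average power is taken as given.

    import numpy as np

    def quality(rss_dbm, speed_mps, bandwidth_mbps, cost, w=(0.4, 0.2, 0.3, 0.1)):
        """Hypothetical quality function: weighted sum of normalised criteria.
        Higher is better; fast-moving users are steered away from handoffs."""
        q_rss = (rss_dbm + 100) / 60             # map ~[-100,-40] dBm to [0,1]
        q_speed = 1 - min(speed_mps / 30, 1)     # penalise handoff at high speed
        q_bw = min(bandwidth_mbps / 100, 1)
        q_cost = 1 - cost                        # cost already in [0,1]
        return float(np.dot(w, [q_rss, q_speed, q_bw, q_cost]))

    q_wimax = quality(rss_dbm=-70, speed_mps=3, bandwidth_mbps=70, cost=0.3)
    q_cell = quality(rss_dbm=-85, speed_mps=3, bandwidth_mbps=10, cost=0.6)
    HYSTERESIS = 0.05                            # margin against ping-pong handoffs
    if q_wimax > q_cell + HYSTERESIS:
        print("hand off to IEEE 802.16:", round(q_wimax, 3), ">", round(q_cell, 3))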

Relevance:

100.00%

Publisher:

Abstract:

Among current clustering algorithms for complex networks, Laplacian-based spectral clustering algorithms have the advantages of a rigorous mathematical basis and high accuracy. However, their applications are limited by their dependence on prior knowledge, such as the number of clusters, which is hard to obtain beforehand in most application scenarios. To address this problem, we propose a novel clustering algorithm, the Jordan-Form-of-Laplacian-Matrix based Clustering algorithm (JLMC). In JLMC, we propose a model to calculate the number n of clusters in a complex network based on the Jordan form of its corresponding Laplacian matrix. JLMC then clusters the network into n clusters using our proposed modularity density function (P function). We conduct extensive experiments on real and synthetic data, and the experimental results reveal that JLMC can accurately obtain the number of clusters in a complex network and outperforms the Fast-Newman and Girvan-Newman algorithms in terms of clustering accuracy and time complexity.
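
A sketch of the spectral fact JLMC builds on, not of JLMC itself: for a graph with k disconnected clusters, the Laplacian's zero eigenvalue has multiplicity k (the eigenstructure the Jordan form exposes), and for weakly coupled clusters the largest eigengap near zero plays the same role. The synthetic network and threshold choices are illustrative.

    import numpy as np

    rng = np.random.default_rng(7)
    # Synthetic network with 3 communities (stochastic block model, illustrative).
    sizes, p_in, p_out = [20, 25, 15], 0.6, 0.02
    n = sum(sizes)
    block = np.repeat(np.arange(len(sizes)), sizes)
    P = np.where(block[:, None] == block[None, :], p_in, p_out)
    A = (rng.random((n, n)) < P).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                                  # undirected, no self-loops

    L = np.diag(A.sum(1)) - A                    # combinatorial Laplacian
    evals = np.sort(np.linalg.eigvalsh(L))
    # With k disconnected clusters, eigenvalue 0 has multiplicity k; with
    # weak inter-cluster links, the largest small-eigenvalue gap marks k.
    gaps = np.diff(evals[:10])
    k = int(np.argmax(gaps)) + 1
    print("estimated number of clusters:", k)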

Relevance:

100.00%

Publisher:

Abstract:

Traffic incidents are non-recurring events that can cause a temporary reduction in roadway capacity. They have been recognized as a major contributor to traffic congestion on our nation's highway systems. To alleviate their impacts on capacity, automatic incident detection (AID) has been applied as an incident management strategy to reduce the total incident duration. AID relies on an algorithm to identify the occurrence of incidents by analyzing real-time traffic data collected from surveillance detectors. Significant research has been performed to develop AID algorithms for incident detection on freeways; however, similar research on major arterial streets remains largely at the initial stage of development and testing.

This dissertation research aims to identify design strategies for the deployment of an Artificial Neural Network (ANN) based AID algorithm for major arterial streets. A section of the US-1 corridor in Miami-Dade County, Florida was coded in the CORSIM microscopic simulation model to generate data for both model calibration and validation. To better capture the relationship between the traffic data and the corresponding incident status, the Discrete Wavelet Transform (DWT) and data normalization were applied to the simulated data. Multiple ANN models were then developed for different detector configurations, historical data usage, and selections of traffic flow parameters. To assess the performance of different design alternatives, the model outputs were compared based on both detection rate (DR) and false alarm rate (FAR).

The results show that the best models were able to achieve a high DR of between 90% and 95%, a mean time to detect (MTTD) of 55-85 seconds, and a FAR below 4%. The results also show that a detector configuration including only the mid-block and upstream detectors performs almost as well as one that also includes a downstream detector. In addition, DWT was found to improve model performance, and the use of historical data from previous time cycles improved the detection rate. Speed was found to have the most significant impact on the detection rate, while volume contributed the least. The results from this research provide useful insights into the design of AID for arterial street applications.
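
A toy sketch of the processing chain, with every component a stand-in: a single-level Haar transform for the DWT step, standardization for the data normalization, a small MLP for the ANN, and synthetic "detector" speed windows in place of CORSIM output. The DR/FAR computation mirrors the evaluation described above.

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.preprocessing import StandardScaler

    def haar_features(window):
        """Single-level Haar DWT (approximation + detail coefficients), a
        simple stand-in for the DWT feature step in the dissertation."""
        w = np.asarray(window, dtype=float)
        approx = (w[0::2] + w[1::2]) / np.sqrt(2)
        detail = (w[0::2] - w[1::2]) / np.sqrt(2)
        return np.concatenate([approx, detail])

    # Synthetic 8-sample speed windows: incidents show depressed, noisier
    # speeds (purely illustrative of the setup, not CORSIM output).
    rng = np.random.default_rng(8)
    normal = rng.normal(55, 4, size=(300, 8))
    incident = rng.normal(30, 8, size=(300, 8))
    X = np.array([haar_features(w) for w in np.vstack([normal, incident])])
    y = np.array([0] * 300 + [1] * 300)

    X = StandardScaler().fit_transform(X)        # the normalization step
    idx = rng.permutation(len(y))
    tr, te = idx[:480], idx[480:]
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(X[tr], y[tr])
    det = clf.predict(X[te])
    dr = (det[y[te] == 1] == 1).mean()           # detection rate
    far = (det[y[te] == 0] == 1).mean()          # false alarm rate
    print(f"DR={dr:.2%}  FAR={far:.2%}")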