882 results for Branch and bound algorithms


Relevance: 100.00%

Abstract:

Hydrogels are polymeric materials used in many pharmaceutical and biomedical applications due to their ability to form 3D hydrophilic polymeric networks, which can absorb large amounts of water. In the present work, polyethylene glycols (PEG) were introduced into the hydrogel liquid phase in order to improve the mechanical properties of hydrogels composed of 2-hydroxyethylacrylate and 2-hydroxyethylmethacrylate (HEA–HEMA), synthesized with different co-monomer compositions and equilibrated in water or in aqueous solutions containing 20% PEG 400 or PEG 600. Thermoanalytical techniques [differential scanning calorimetry (DSC) and thermogravimetry (TG)] were used to evaluate the amount and properties of free and bound water in the HEA–HEMA hydrogels. The internal structure and mechanical properties of the hydrogels were studied using scanning electron microscopy and a friability assay. TG "loss-on-drying" experiments were used to study the water-retention properties of the hydrogels, whereas the combination of TG and DSC allowed estimation of the total amounts of freezable and non-freezing water. The results show that the addition of a viscous co-solvent (PEG) to the liquid medium significantly improves the mechanical properties of HEA–HEMA hydrogels and also slightly retards water loss from them. A redistribution of free and bound water takes place in the hydrogels equilibrated in mixed solutions containing 20 vol% of PEGs.
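For a rough sense of how the TG and DSC measurements combine (a minimal illustrative calculation with hypothetical numbers, not figures from the paper): total water follows from the TG mass loss, freezable water from the DSC melting endotherm divided by the enthalpy of fusion of ice, and non-freezing (bound) water is the difference.

```python
# Illustrative back-of-the-envelope calculation (not from the paper): estimating
# freezable vs. non-freezing water in a hydrogel by combining TG and DSC data.
DELTA_H_FUSION_ICE = 334.0  # J/g, enthalpy of fusion of bulk ice

def water_fractions(tg_mass_loss_frac, dsc_melt_enthalpy_j_per_g):
    """tg_mass_loss_frac: total water per gram of hydrogel, from TG loss-on-drying.
    dsc_melt_enthalpy_j_per_g: integrated DSC melting endotherm per gram of hydrogel.
    Returns (total, freezable, non_freezing) water fractions."""
    total = tg_mass_loss_frac
    freezable = dsc_melt_enthalpy_j_per_g / DELTA_H_FUSION_ICE
    non_freezing = max(total - freezable, 0.0)  # bound water does not freeze
    return total, freezable, non_freezing

print(water_fractions(0.45, 100.0))  # hypothetical: 45% total water, 100 J/g endotherm
```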

Relevance: 100.00%

Abstract:

An important application of Big Data Analytics is the real-time analysis of streaming data. Streaming data imposes unique challenges on data mining algorithms, such as concept drift, the need to analyse data on the fly because streams are unbounded, and the need for scalable algorithms to cope with potentially high data throughput. Fast real-time classification algorithms that adapt to concept drift exist; however, most approaches are not naturally parallel and are thus limited in their scalability. This paper presents work on the Micro-Cluster Nearest Neighbour (MC-NN) classifier. MC-NN is based on an adaptive statistical data summary built from Micro-Clusters. It is very fast and adaptive to concept drift whilst retaining the parallel properties of the underlying KNN classifier. MC-NN is also competitive with existing data stream classifiers in terms of accuracy and speed.
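A minimal sketch of the micro-cluster idea as we read it from the abstract: each micro-cluster keeps a statistical summary (linear sum, count, error counter) for one class, classification picks the nearest centroid, and misclassifications spawn new clusters and eventually remove poorly performing ones. The threshold and the split/removal rules below are simplified assumptions, not the authors' exact method.

```python
import numpy as np

class MicroCluster:
    """Statistical summary of absorbed instances of one class."""
    def __init__(self, x, label):
        self.ls = np.array(x, dtype=float)  # linear sum of absorbed instances
        self.n = 1
        self.label = label
        self.errors = 0  # incremented when this cluster causes a misclassification

    def centroid(self):
        return self.ls / self.n

    def absorb(self, x):
        self.ls += x
        self.n += 1

def classify_and_update(clusters, x, y_true, max_errors=3):
    """Assign x to the nearest micro-cluster centroid, then update the summaries."""
    nearest = min(clusters, key=lambda c: np.linalg.norm(c.centroid() - x))
    if nearest.label == y_true:
        nearest.absorb(x)  # correct: reinforce the winning cluster
    else:
        nearest.errors += 1  # wrong: penalise, and start a new cluster for y_true
        clusters.append(MicroCluster(x, y_true))
        if nearest.errors > max_errors:
            clusters.remove(nearest)  # crude forgetting step to track concept drift
    return nearest.label
```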

Relevance: 100.00%

Abstract:

One of the top ten most influential data mining algorithms, k-means, is known for being simple and scalable. However, it is sensitive to the initialization of prototypes and requires that the number of clusters be specified in advance. This paper shows that evolutionary techniques conceived to guide the application of k-means can be more computationally efficient than systematic (i.e., repetitive) approaches that try to circumvent these drawbacks by repeatedly running the algorithm with different numbers of clusters and initial prototype positions. To do so, a modified version of a (k-means-based) fast evolutionary algorithm for clustering is employed. Theoretical complexity analyses of the systematic and evolutionary algorithms of interest are provided. Computational experiments and statistical analyses of the results are presented for artificial and text mining data sets.
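To make the contrast concrete, here is a minimal sketch of an evolutionary loop guiding k-means, written with scikit-learn. The silhouette criterion and the mutate-k-only scheme are stand-ins for the actual validity criterion and operators of the algorithm in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def evolve_clustering(X, k_max=10, pop_size=6, generations=20, seed=0):
    rng = np.random.default_rng(seed)
    population = [int(k) for k in rng.integers(2, k_max + 1, size=pop_size)]
    best_k, best_score = None, -1.0
    for _ in range(generations):
        scored = []
        for k in population:
            km = KMeans(n_clusters=k, n_init=1, max_iter=5).fit(X)  # cheap local search
            s = silhouette_score(X, km.labels_)  # cluster validity criterion
            scored.append((s, k))
            if s > best_score:
                best_score, best_k = s, k
        scored.sort(reverse=True)
        survivors = [k for _, k in scored[: pop_size // 2]]
        # mutation: move k of each survivor up or down to explore nearby counts
        children = [min(k_max, max(2, k + int(rng.choice([-1, 1])))) for k in survivors]
        population = survivors + children
    return best_k, best_score
```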

Relevance: 100.00%

Abstract:

This paper presents a new technique and two algorithms to bulk-load data into multi-way dynamic metric access methods, based on the covering radii of the representative elements used to organize data in hierarchical structures. The proposed algorithms are sample-based and always build a valid, height-balanced tree. We compare the proposed algorithms with existing ones, showing their behavior when bulk-loading data into the Slim-tree metric access method. After identifying the worst case of our first algorithm, we describe counteractions that elegantly avoid it, yielding the second algorithm. Experiments performed to evaluate their performance show that our bulk-loading methods build trees faster than sequential insertion and also significantly improve search performance.
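A minimal sketch of the sample-then-partition idea under Euclidean distance: choose representatives from the data, assign each object to its nearest representative, record the covering radius, and recurse. Unlike the paper's algorithms, this sketch does not guarantee height balance.

```python
import numpy as np

def bulk_load(points, fanout=4, leaf_size=16, rng=None):
    """Sample-based bulk loading: pick representatives, assign every point to
    its nearest representative, and recurse on each partition."""
    if rng is None:
        rng = np.random.default_rng(0)
    if len(points) <= leaf_size:
        return {"leaf": points}
    reps = points[rng.choice(len(points), size=fanout, replace=False)]
    dist = np.linalg.norm(points[:, None, :] - reps[None, :, :], axis=2)
    owner = dist.argmin(axis=1)
    node = []
    for j in range(fanout):
        part = points[owner == j]
        if len(part) == 0 or len(part) == len(points):
            continue  # guard against empty/degenerate partitions in this sketch
        radius = np.linalg.norm(part - reps[j], axis=1).max()  # covering radius
        node.append({"rep": reps[j], "radius": radius,
                     "child": bulk_load(part, fanout, leaf_size, rng)})
    return {"node": node} if node else {"leaf": points}
```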

Relevance: 100.00%

Abstract:

This paper tackles the problem of showing that evolutionary algorithms for fuzzy clustering can be more efficient than systematic (i.e., repetitive) approaches when the number of clusters in a data set is unknown. To do so, a fuzzy version of the Evolutionary Algorithm for Clustering (EAC) is introduced. A fuzzy cluster validity criterion and a fuzzy local search algorithm are used in place of the hard counterparts employed by EAC. Theoretical complexity analyses of both the systematic and evolutionary algorithms of interest are provided. Examples with computational experiments and statistical analyses are also presented.
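The fuzzy local search in such algorithms is typically a few fuzzy c-means iterations. Below is the standard FCM update step (with fuzzifier m), the usual fuzzy counterpart of a k-means step; it is not necessarily the exact search the paper employs.

```python
import numpy as np

def fcm_step(X, centers, m=2.0):
    """One fuzzy c-means iteration: update memberships U, then centers."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    # u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1)), the classic FCM membership update
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    U = 1.0 / ratio.sum(axis=2)
    W = U ** m
    centers = (W.T @ X) / W.sum(axis=0)[:, None]  # membership-weighted means
    return U, centers
```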

Relevance: 100.00%

Abstract:

We consider the two-level network design problem with intermediate facilities. This problem consists of designing a minimum-cost network respecting some requirements, usually described in terms of the network topology or of a desired flow of commodities between source and destination vertices. Each selected link must receive one of two types of edge facilities, and the connection of different edge facilities requires a costly and capacitated vertex facility. We propose a hybrid decomposition approach that heuristically obtains tentative solutions for the number and location of the vertex facilities and uses these solutions to limit the computational burden of a branch-and-cut algorithm. We test our method on instances of the power-system secondary distribution network design problem. The results show that the method is efficient in terms of both solution quality and computational time.
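A toy sketch of the two-stage idea, with stub functions standing in for the heuristic and the branch-and-cut stage (both stubs are our placeholders, not the paper's procedures):

```python
import random

def heuristic_vertex_facilities(candidates, k):
    """Stage 1 stub: heuristically pick k tentative vertex-facility locations."""
    return random.sample(candidates, k)

def solve_restricted(demand_points, facilities):
    """Stage 2 stub: with facilities fixed, the remaining edge-facility problem
    is far smaller; a real implementation would call branch-and-cut here."""
    return sum(min(abs(v - f) for f in facilities) for v in demand_points)

def hybrid_decomposition(demand_points, candidates, k, rounds=10, seed=1):
    random.seed(seed)
    best_cost, best_fac = float("inf"), None
    for _ in range(rounds):
        fac = heuristic_vertex_facilities(candidates, k)
        cost = solve_restricted(demand_points, fac)
        if cost < best_cost:
            best_cost, best_fac = cost, fac
    return best_cost, best_fac

print(hybrid_decomposition([1, 5, 9, 14], candidates=list(range(15)), k=2))
```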

Relevance: 100.00%

Abstract:

Cross sections for the ⁶Li(p,γ)⁷Be, ⁷Li(n,γ)⁸Li, ⁸Li(n,γ)⁹Li and ⁸Li(p,γ)⁹Be capture reactions have been investigated in the framework of the potential model. The main ingredients of the potential model are the potentials used to generate the continuum and bound-state wave functions, and the spectroscopic factors of the corresponding bound systems. The spectroscopic factors for the ⁷Li ⊗ n = ⁸Li(gs) and ⁸Li ⊗ n = ⁹Li(gs) bound systems were obtained from an FR-DWBA analysis of neutron transfer reactions induced by a ⁸Li radioactive beam on a ⁹Be target, while the spectroscopic factor for the ⁸Li ⊗ p = ⁹Be(gs) bound system was obtained from a proton transfer reaction. From the calculated capture cross sections, reaction rates for the ⁸Li(n,γ)⁹Li and ⁸Li(p,γ)⁹Be direct neutron and proton captures were determined and compared with other experimental and calculated values.
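For context, the reaction rate per particle pair follows from the capture cross section σ(E) by folding it with a Maxwell–Boltzmann energy distribution; this is the standard textbook expression, not something specific to this paper:

```latex
N_A \langle \sigma v \rangle = N_A \left(\frac{8}{\pi\mu}\right)^{1/2} (k_B T)^{-3/2}
\int_0^\infty \sigma(E)\, E\, \exp\!\left(-\frac{E}{k_B T}\right) dE
```

where μ is the reduced mass of the pair, T the temperature and N_A Avogadro's number.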

Relevance: 100.00%

Abstract:

We report vibrational excitation (v_i = 0 → v_f = 1) cross sections for positron scattering by H₂ and model calculations for the (v_i = 0 → v_f = 1) excitation of the C–C symmetric stretch mode of C₂H₂. The Feshbach projection operator formalism was employed to vibrationally resolve the fixed-nuclei phase shifts obtained with the Schwinger multichannel method. The near-threshold behaviors of H₂ and C₂H₂ differ significantly, in the sense that no low-lying singularity (either a virtual or a bound state) was found for the former, while an e⁺–acetylene virtual state was found at the equilibrium geometry (this virtual state becomes a bound state upon stretching the molecule). For C₂H₂, we also performed model calculations comparing excitation cross sections arising from virtual (−iκ₀) and bound (+iκ₀) states symmetrically located around the origin of the complex momentum plane (i.e., having the same κ₀). The virtual state is seen to couple significantly to vibrations, and similar cross sections were obtained for shallow bound and virtual states.
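The similarity of the last result is what zero-range arguments lead one to expect (standard scattering theory, added here for context): a shallow bound state and a virtual state correspond to S-matrix poles at k = +iκ₀ and k = −iκ₀ respectively, and to leading order the low-energy s-wave cross section depends only on κ₀, not on the sign of the pole:

```latex
\sigma(k) = 4\pi |f(k)|^2 \approx \frac{4\pi}{k^2 + \kappa_0^2}
```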

Relevance: 100.00%

Abstract:

The assessment of routing protocols for mobile wireless networks is a difficult task because of the networks' dynamic behavior and the absence of benchmarks. However, some of these networks, such as intermittent wireless sensor networks, periodic or cyclic networks, and some delay-tolerant networks (DTNs), have more predictable dynamics, as the temporal variations in the network topology can be considered deterministic, which may make them easier to study. Recently, a graph-theoretic model, the evolving graph, was proposed to help capture the dynamic behavior of such networks, in view of the construction of least-cost routing and other algorithms. The algorithms and insights obtained through this model are theoretically very efficient and intriguing. However, there has been no study of the use of these theoretical results in practical situations. Therefore, the objective of our work is to analyze the applicability of evolving graph theory to the construction of efficient routing protocols in realistic scenarios. In this paper, we use the NS2 network simulator to first implement an evolving-graph-based routing protocol, and then use it as a benchmark when comparing the four major ad hoc routing protocols (AODV, DSR, OLSR and DSDV). Interestingly, our experiments show that evolving graphs have the potential to be an effective and powerful tool in the development and analysis of algorithms for dynamic networks, at least those with predictable dynamics. In order to make this model widely applicable, however, some practical issues, such as adaptive algorithms, still have to be addressed and incorporated into the model. We also discuss such issues in this paper, as a result of our experience.
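The core routing primitive the evolving graph model supports is the foremost journey (earliest arrival over time-labeled edges). A minimal sketch of such a computation on a toy schedule; the protocol implemented in the paper is more involved:

```python
import heapq

def earliest_arrival(adj, source, t0=0):
    """Foremost-journey computation on an evolving graph.
    adj[u] = list of (v, departure_time, arrival_time) time-labeled edges."""
    best = {source: t0}
    pq = [(t0, source)]
    while pq:
        t, u = heapq.heappop(pq)
        if t > best.get(u, float("inf")):
            continue
        for v, dep, arr in adj[u]:
            # an edge is usable only if we reach u no later than its departure
            if dep >= t and arr < best.get(v, float("inf")):
                best[v] = arr
                heapq.heappush(pq, (arr, v))
    return best  # earliest arrival time at every reachable node

# toy schedule: each edge departs and arrives at fixed times
g = {"a": [("b", 1, 2), ("c", 0, 5)], "b": [("c", 3, 4)], "c": []}
print(earliest_arrival(g, "a"))  # {'a': 0, 'b': 2, 'c': 4}
```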

Relevance: 100.00%

Abstract:

The introduction of High Speed Downlink Packet Access (HSDPA), a new technology in Release 5 of the 3GPP specifications, raises the question of its performance capabilities. HSDPA is a promising technology offering theoretical rates of up to 14.4 Mbit/s. The main objective of this thesis is to discuss the system-level performance of HSDPA. The exploration focuses mainly on the Packet Scheduler, because it is the central entity of the HSDPA design. Due to its function, the Packet Scheduler has a direct impact on HSDPA system performance. It also determines end-user performance, and more specifically the relative performance between the users in a cell. The thesis analyzes several packet scheduling algorithms that can optimize the trade-off between system capacity and end-user performance for the traffic classes targeted in this thesis. The performance evaluation of the algorithms in the HSDPA system is carried out using computer-aided simulations under realistic conditions, in order to obtain precise results on the algorithms' efficiency. The simulation of the HSDPA system and the algorithms is implemented in C/C++.
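One classic scheduler in exactly this capacity-versus-fairness trade-off space is Proportional Fair, which in each TTI serves the user with the highest ratio of instantaneous achievable rate to smoothed average throughput. A minimal sketch, illustrative only; the thesis evaluates its own set of algorithms:

```python
def proportional_fair(users, tti_window=1000):
    """users: dict name -> (instantaneous_rate, average_throughput)."""
    # pick the user with the best rate relative to what it has received so far
    chosen = max(users, key=lambda u: users[u][0] / max(users[u][1], 1e-9))
    # update the exponentially smoothed averages (only the served user gets rate)
    for u, (r, avg) in users.items():
        served = r if u == chosen else 0.0
        users[u] = (r, (1 - 1 / tti_window) * avg + served / tti_window)
    return chosen

state = {"ue1": (3.6, 1.0), "ue2": (1.8, 0.2)}
print(proportional_fair(state))  # ue2: worse channel, but far below its fair share
```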

Relevance: 100.00%

Abstract:

Combinatorial optimization problems are one of the most important problem types in operational research. Heuristic and metaheuristic algorithms are widely applied to find good solutions. However, a common problem is that these algorithms do not guarantee optimality and, hence, many solutions to real-world OR problems come with uncertainty about their quality. The main aim of this thesis is to investigate the usability of statistical bounds for evaluating the quality of heuristic solutions to large combinatorial problems. The contributions of this thesis are both methodological and empirical. From a methodological point of view, the usefulness of statistical bounds on p-median problems is thoroughly investigated. The statistical bounds perform well in providing informative quality assessments under appropriate parameter settings, and they outperform the commonly used Lagrangian bounds. The statistical bounds are also shown to be comparable with deterministic bounds on quadratic assignment problems. As to the empirical research: environmental pollution has become a worldwide problem, and transportation can cause a great amount of pollution. A new method for calculating and comparing the CO₂ emissions of online and brick-and-mortar retailing is proposed, leading to the conclusion that online retailing has significantly lower CO₂ emissions. Another problem is that the Swedish regional division is under revision, and the effect of borders on public-service accessibility concerns both residents and politicians. The analysis shows that borders hinder the optimal location of public services, so the highest achievable economic and social utility may not be attained.
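The statistical-bounds idea rests on extreme value theory: the objective values of independent heuristic runs of a minimisation problem behave like samples from a distribution bounded below by the unknown optimum, often modelled as Weibull with the optimum as its location parameter. A naive illustration with hypothetical run values; the thesis's estimators and confidence intervals are more careful than this plain MLE fit:

```python
import numpy as np
from scipy.stats import weibull_min

# objective values from m independent heuristic runs (hypothetical numbers)
values = np.array([1023.4, 1019.8, 1031.2, 1020.5, 1025.1,
                   1018.9, 1027.3, 1022.0, 1019.2, 1024.6])

# Fit a three-parameter Weibull; the location parameter plays the role of the
# unknown optimum, since the run minima are bounded below by it.
shape, loc, scale = weibull_min.fit(values)
print(f"best found    : {values.min():.1f}")
print(f"estimated opt : {loc:.1f}  (statistical point estimate)")
print(f"gap estimate  : {values.min() - loc:.1f}")
```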

Relevance: 100.00%

Abstract:

Background: The sensitivity to microenvironmental changes varies among animals and may be under genetic control. It is essential to take this element into account when aiming at breeding robust farm animals. Here, linear mixed models with genetic effects in the residual variance part of the model can be used. Such models have previously been fitted using EM and MCMC algorithms. Results: We propose the use of double hierarchical generalized linear models (DHGLM), where the squared residuals are assumed to be gamma distributed and the residual variance is fitted using a generalized linear model. The algorithm iterates between two sets of mixed model equations, one on the level of observations and one on the level of variances. The method was validated using simulations and also by re-analyzing a data set on pig litter size that was previously analyzed using a Bayesian approach. The pig litter size data contained 10,060 records from 4,149 sows. The DHGLM was implemented using the ASReml software and the algorithm converged within three minutes on a Linux server. The estimates were similar to those previously obtained using Bayesian methodology, especially the variance components in the residual variance part of the model. Conclusions: We have shown that variance components in the residual variance part of a linear mixed model can be estimated using a DHGLM approach. The method enables analyses of animal models with large numbers of observations. An important future development of the DHGLM methodology is to include the genetic correlation between the random effects in the mean and residual variance parts of the model as a parameter of the DHGLM.
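To make the two-level iteration concrete, here is a deliberately simplified fixed-effects sketch: alternate between weighted least squares for the mean model and a log-linear fit of the squared residuals for the variance model. The random effects, the gamma-GLM weighting and the ASReml machinery of the paper are all omitted; the log-regression of squared residuals merely stands in for the gamma GLM with log link.

```python
import numpy as np

def dhglm_iterate(y, X, Z, n_iter=20):
    """Toy double-hierarchical iteration.
    X: design matrix of the mean model; Z: design matrix of the variance model."""
    log_var = np.zeros(len(y))
    for _ in range(n_iter):
        w = np.exp(-log_var)                    # observation weights = 1 / sigma_i^2
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (w * y))   # weighted least squares
        r2 = (y - X @ beta) ** 2                # squared residuals
        # variance model: regress log squared residuals on Z (gamma-GLM stand-in)
        gamma_coef = np.linalg.lstsq(Z, np.log(r2 + 1e-12), rcond=None)[0]
        log_var = Z @ gamma_coef
    return beta, gamma_coef
```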

Relevance: 100.00%

Abstract:

The ever-increasing spurt in digital crimes such as image manipulation, image tampering, signature forgery, image forgery and illegal transactions has intensified the demand to combat these forms of criminal activity. In this direction, biometrics, the computer-based validation of a person's identity, is becoming more and more essential, particularly for high-security systems. The essence of biometrics is the measurement of a person's physiological or behavioral characteristics, which enables authentication of a person's identity. Biometric-based authentication is also becoming increasingly important in computer-based applications because the amount of sensitive data stored in such systems is growing. The new demands on biometric systems are robustness, high recognition rates, the capability to handle imprecision and uncertainties of a non-statistical kind, and great flexibility. It is exactly here that soft computing techniques come into play. The main aim of this write-up is to present a pragmatic view of the applications of soft computing techniques in biometrics and to analyze their impact. It is found that soft computing has already made inroads, whether as individual methods or in combination. Applications of varieties of neural networks top the list, followed by fuzzy logic and evolutionary algorithms. In a nutshell, soft computing paradigms are used for biometric tasks such as feature extraction, dimensionality reduction, pattern identification, pattern mapping and the like.

Relevance: 100.00%

Abstract:

Expressing contractual agreements electronically potentially allows agents to automatically perform the functions surrounding contract use: establishment, fulfilment, renegotiation, etc. For such automation to be used for real business concerns, there needs to be a high level of trust in the agent-based system. While there has been much research on simulating trust between agents, there are areas where such trust is harder to establish. In particular, contract proposals may come from parties that an agent has had no prior interaction with, and in competitive business-to-business environments little reputation information may be available. In human practice, trust in a proposed contract is determined in part from the content of the proposal itself and from the similarity of that content to prior contracts executed with varying degrees of success. In this paper, we argue that such analysis is also appropriate in automated systems, and that to provide it we need systems that record salient details of prior contract use and algorithms for assessing proposals on their content. We use provenance technology to provide the former, and detail algorithms for measuring contract success and similarity for the latter, applying them to an aerospace case study.
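As an illustration of content-based assessment (our sketch, not the paper's algorithms): similarity could be measured by term overlap between contract clauses, and a proposal scored by the similarity-weighted success of prior contracts, with success values derived from provenance records of their execution.

```python
def jaccard(a: set, b: set) -> float:
    """Term-overlap similarity between two sets of clause terms."""
    return len(a & b) / len(a | b) if a | b else 0.0

def assess_proposal(proposal_terms, history):
    """history: list of (clause_terms, success in [0, 1]) for prior contracts,
    where success would be derived from recorded provenance of contract use."""
    scored = [(jaccard(proposal_terms, terms), success) for terms, success in history]
    total_sim = sum(sim for sim, _ in scored)
    if total_sim == 0:
        return None  # no comparable prior contracts
    # similarity-weighted expected success of the proposed contract
    return sum(sim * succ for sim, succ in scored) / total_sim

history = [({"delivery", "penalty", "warranty"}, 0.9),
           ({"delivery", "arbitration"}, 0.4)]
print(assess_proposal({"delivery", "penalty"}, history))  # ~0.73
```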