974 results for Simulated annealing algorithm
Abstract:
The care for a patient with ulcerative colitis (UC) remains challenging despite the fact that morbidity and mortality rates have been considerably reduced during the last 30 years. The traditional management with intravenous corticosteroids was modified by the introduction of ciclosporin and infliximab. In this review, we focus on the treatment of patients with moderate to severe UC. Four typical clinical scenarios are defined and discussed in detail. The treatment recommendations are based on current literature, published guidelines and reviews, and were discussed at a consensus meeting of Swiss experts in the field. Comprehensive treatment algorithms were developed, aimed for daily clinical practice.
Abstract:
In this paper, we propose a methodology to determine the most efficient and least costly way to perform crew pairing optimization. We develop an optimization algorithm, implemented in the Java programming language on the open-source Eclipse IDE, to solve crew scheduling problems.
Abstract:
Background: With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, τ, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where τ can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called τ-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or Binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as τ grows. Results: In this paper we extend Poisson τ-leap methods to a general class of Runge-Kutta (RK) τ-leap methods. We show that with the proper selection of the coefficients, the variance of the extended τ-leap can be well-behaved, leading to significantly larger step sizes. Conclusions: The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original τ-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
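As a concrete illustration of the basic Poisson τ-leap update that the paper builds on (not the authors' RK extension), here is a minimal sketch for a single decay channel A → ∅, with hypothetical rate and step size:

```python
import numpy as np

def tau_leap_decay(x0, c, tau, t_end, seed=0):
    """Basic Poisson tau-leap for a single decay channel A -> 0.

    In each step of fixed size tau, the number of firings is drawn from
    a Poisson distribution whose mean is the propensity c*x times tau,
    so only one Poisson draw is needed per time step.
    """
    rng = np.random.default_rng(seed)
    x, t = x0, 0.0
    while t < t_end - 1e-12:
        k = rng.poisson(c * x * tau)  # number of firings in [t, t + tau)
        x = max(x - k, 0)             # clamp: molecule counts cannot go negative
        t += tau
    return x

# The exact process has mean x0 * exp(-c * t_end) ~ 3679 for these values;
# for small tau the leap result should land in the same neighborhood.
final = tau_leap_decay(x0=10_000, c=0.5, tau=0.01, t_end=2.0)
```

Note the trade-off the abstract describes: a larger tau means fewer Poisson evaluations but a growing error in the variance.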
Abstract:
Descriptors based on Molecular Interaction Fields (MIF) are highly suitable for drug discovery, but their size (thousands of variables) often limits their application in practice. Here we describe a simple and fast computational method that extracts from a MIF a handful of highly informative points (hot spots) which summarize the most relevant information. The method was specifically developed for drug discovery, is fast, and does not require human supervision, being suitable for its application on very large series of compounds. The quality of the results has been tested by running the method on the ligand structure of a large number of ligand-receptor complexes and then comparing the position of the selected hot spots with actual atoms of the receptor. As an additional test, the hot spots obtained with the novel method were used to obtain GRIND-like molecular descriptors which were compared with the original GRIND. In both cases the results show that the novel method is highly suitable for describing ligand-receptor interactions and compares favorably with other state-of-the-art methods.
Abstract:
This paper describes a maximum likelihood method using historical weather data to estimate a parametric model of daily precipitation and maximum and minimum air temperatures. Parameter estimates are reported for Brookings, SD, and Boone, IA, to illustrate the procedure. The use of this parametric model to generate stochastic time series of daily weather is then summarized. A soil temperature model is described that determines daily average, maximum, and minimum soil temperatures based on air temperatures and precipitation, following a lagged process due to soil heat storage and other factors.
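The paper's fitted model is not reproduced here, but the general shape of such a stochastic daily weather generator can be sketched with hypothetical parameters: a two-state Markov chain for precipitation occurrence and an AR(1) process for daily temperature:

```python
import numpy as np

def generate_daily_weather(n_days, p_wet_after_dry=0.3, p_wet_after_wet=0.6,
                           t_mean=10.0, rho=0.7, sigma=3.0, seed=0):
    """Toy daily weather generator (illustrative parameters, not the
    estimates reported for Brookings, SD, or Boone, IA).

    Precipitation occurrence: two-state Markov chain on {dry, wet}.
    Temperature: AR(1) process fluctuating around t_mean.
    """
    rng = np.random.default_rng(seed)
    wet = np.zeros(n_days, dtype=bool)
    temp = np.full(n_days, t_mean)
    for d in range(1, n_days):
        p = p_wet_after_wet if wet[d - 1] else p_wet_after_dry
        wet[d] = rng.random() < p
        temp[d] = t_mean + rho * (temp[d - 1] - t_mean) + rng.normal(0.0, sigma)
    return wet, temp

wet, temp = generate_daily_weather(365)
```

A soil temperature layer like the one described would then lag and smooth this air temperature series.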
Abstract:
A systolic array to implement lattice-reduction-aided linear detection is proposed for a MIMO receiver. The lattice reduction algorithm and the ensuing linear detections are operated in the same array, which can be hardware-efficient. The all-swap lattice reduction algorithm (ASLR) is considered for the systolic design. ASLR is a variant of the LLL algorithm, which processes all lattice basis vectors within one iteration. Lattice-reduction-aided linear detection based on the ASLR and LLL algorithms shows very similar bit-error-rate performance, while ASLR is more time-efficient in the systolic array, especially for systems with a large number of antennas.
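For intuition, the two-dimensional special case of LLL-style lattice reduction (Lagrange-Gauss reduction) can be sketched as follows; the ASLR variant above, which swaps all basis vector pairs per iteration across the systolic array, is not reproduced here:

```python
import numpy as np

def lagrange_reduce(b1, b2):
    """Lagrange-Gauss reduction of a 2-D lattice basis: repeatedly
    size-reduce the longer vector against the shorter one and swap,
    until the basis is ordered by norm (the 2-D case of LLL)."""
    b1, b2 = np.asarray(b1, float), np.asarray(b2, float)
    if b1 @ b1 > b2 @ b2:
        b1, b2 = b2, b1
    while True:
        mu = round((b2 @ b1) / (b1 @ b1))  # nearest-integer projection coefficient
        b2 = b2 - mu * b1                  # size-reduction step
        if b2 @ b2 >= b1 @ b1:             # ordered again: basis is reduced
            return b1, b2
        b1, b2 = b2, b1                    # swap and repeat

# A skewed basis of the integer lattice Z^2 reduces to the unit vectors.
r1, r2 = lagrange_reduce([1, 0], [3, 1])
```

In a receiver, detecting on the reduced basis conditions the effective channel matrix, which is what improves linear-detection error rates.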
Abstract:
From a managerial point of view, the more efficient, simple, and parameter-free (ESP) an algorithm is, the more likely it will be used in practice for solving real-life problems. Following this principle, an ESP algorithm for solving the Permutation Flowshop Sequencing Problem (PFSP) is proposed in this article. Using an Iterated Local Search (ILS) framework, the so-called ILS-ESP algorithm is able to compete in performance with other well-known ILS-based approaches, which are considered among the most efficient algorithms for the PFSP. However, while other similar approaches still employ several parameters that can affect their performance if not properly chosen, our algorithm does not require any particular fine-tuning process, since it uses basic "common sense" rules for the local search, perturbation, and acceptance criterion stages of the ILS metaheuristic. Our approach defines a new operator for the ILS perturbation process, a new acceptance criterion based on extremely simple and transparent rules, and a biased randomization process of the initial solution to randomly generate different alternative initial solutions of similar quality, which is attained by applying a biased randomization to a classical PFSP heuristic. This diversification of the initial solution aims at avoiding poorly designed starting points and thus allows the methodology to take advantage of current trends in parallel and distributed computing. A set of extensive tests, based on literature benchmarks, has been carried out in order to validate our algorithm and compare it against other approaches. These tests show that our parameter-free algorithm is able to compete with state-of-the-art metaheuristics for the PFSP. The experiments also show that, when using parallel computing, it is possible to improve on the top ILS-based metaheuristic simply by incorporating into it our biased randomization process with a high-quality pseudo-random number generator.
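The ILS loop itself (local search, perturbation, acceptance) can be sketched generically; the toy objective, neighborhoods, and improvements-only acceptance rule below are illustrative stand-ins, not the ILS-ESP operators described in the abstract:

```python
import random

def ils_min(cost, n, iters=100, seed=1):
    """Generic Iterated Local Search over permutations of range(n)."""
    rng = random.Random(seed)

    def local_search(p):
        # first-improvement descent over pairwise swaps
        p, c = p[:], cost(p)
        improved = True
        while improved:
            improved = False
            for i in range(n - 1):
                for j in range(i + 1, n):
                    p[i], p[j] = p[j], p[i]
                    cj = cost(p)
                    if cj < c:
                        c, improved = cj, True
                    else:
                        p[i], p[j] = p[j], p[i]  # undo non-improving swap
        return p, c

    start = list(range(n))
    rng.shuffle(start)
    best, best_c = local_search(start)
    for _ in range(iters):
        cand = best[:]
        i, j = rng.sample(range(n), 2)
        cand.insert(j, cand.pop(i))        # perturbation: one random insertion move
        cand, cand_c = local_search(cand)
        if cand_c < best_c:                # acceptance: improvements only
            best, best_c = cand, cand_c
    return best, best_c

# Toy objective: number of out-of-order pairs (inversions);
# the unique optimum is the identity permutation with cost 0.
def inversions(p):
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

perm, c = ils_min(inversions, n=8)
```

For the PFSP, `cost` would instead evaluate the makespan of a job permutation, and the biased-randomized heuristic would supply the starting solution.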
Abstract:
In this paper we study the relevance of multiple kernel learning (MKL) for the automatic selection of time series inputs. Recently, MKL has gained great attention in the machine learning community due to its flexibility in modelling complex patterns and performing feature selection. In general, MKL constructs the kernel as a weighted linear combination of basis kernels, exploiting different sources of information. An efficient algorithm wrapping a Support Vector Regression model for optimizing the MKL weights, named SimpleMKL, is used for the analysis. In this sense, MKL performs feature selection by discarding inputs/kernels with low or null weights. The proposed approach is tested with simulated linear and nonlinear time series (AutoRegressive, Henon and Lorenz series).
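The core MKL construction, a weighted linear combination of basis Gram matrices with simplex-constrained weights, is easy to sketch; the two toy kernels below are hypothetical stand-ins for kernels built on candidate time series inputs, and SimpleMKL's weight optimization itself is not reproduced:

```python
import numpy as np

def combined_kernel(kernels, weights):
    """Weighted linear combination of basis Gram matrices, as in MKL.

    Weights are constrained to the simplex (non-negative, summing to one);
    a zero weight effectively discards the corresponding input/kernel,
    which is how MKL performs feature selection.
    """
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0)
    return sum(wi * K for wi, K in zip(w, kernels))

# Two toy basis kernels, each built on a different candidate input.
x = np.linspace(0.0, 1.0, 5)
K1 = np.outer(x, x)                          # linear kernel on input 1
K2 = np.exp(-(x[:, None] - x[None, :]) ** 2) # RBF kernel on input 2
K = combined_kernel([K1, K2], [0.0, 1.0])    # weight 0 discards input 1
```

In the paper's setting, each basis kernel would correspond to one candidate lagged input of the series, so the learned weights directly indicate which lags matter.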
Abstract:
OBJECTIVES: (1) To evaluate the changes in surface roughness and gloss after simulated toothbrushing of 9 composite materials and 2 ceramic materials in relation to brushing time and load in vitro; (2) to assess the relationship between surface gloss and surface roughness. METHODS: Eight flat specimens of composite materials (microfilled: Adoro, Filtek Supreme, Heliomolar; microhybrid: Four Seasons, Tetric EvoCeram; hybrid: Compoglass F, Targis, Tetric Ceram; macrohybrid: Grandio), two ceramic materials (IPS d.SIGN and IPS Empress polished) were fabricated according to the manufacturer's instructions and optimally polished with up to 4000 grit SiC. The specimens were subjected to a toothbrushing (TB) simulation device (Willytec) with rotating movements, toothpaste slurry and at three different loads (100g/250g/350g). At hourly intervals from 1h to 10h TB, mean surface roughness Ra was measured with an optical sensor and the surface gloss (Gl) with a glossmeter. Statistical analysis was performed for log-transformed Ra data applying two-way ANOVA to evaluate the interaction between load and material and load and brushing time. RESULTS: There was a significant interaction between material and load as well as between load and brushing time (p<0.0001). The microhybrid and hybrid materials demonstrated more surface deterioration with higher loads, whereas with the microfilled resins Heliomolar and Adoro it was vice versa. For ceramic materials, no or little deterioration was observed over time and independent of the load. The ceramic materials and 3 of the composite materials (roughness) showed no further deterioration after 5h of toothbrushing. Mean surface gloss was the parameter which discriminated best between the materials, followed by mean surface roughness Ra. There was a strong correlation between surface gloss and surface roughness for all the materials except the ceramics. 
The evaluation of the deterioration curves of individual specimens revealed a more or less synchronous course, hinting at specific external conditions rather than showing the true variability in relation to the tested material. SIGNIFICANCE: The surface roughness and gloss of dental materials change with brushing time and load, and thus result in different material rankings. Apart from Grandio, the hybrid composite resins were more prone to surface changes than the microfilled composites. The deterioration potential of a composite material can be quickly assessed by measuring surface gloss. For this purpose, a brushing time of 10h (=72,000 strokes) is needed. In further comparative studies, specimens of different materials should be tested in one series to estimate the true variability.
Abstract:
Accurate estimates of water losses by evaporation from shallow water tables are important for hydrological, agricultural, and climatic purposes. An experiment was conducted in a weighing lysimeter to characterize the diurnal dynamics of evaporation under natural conditions. Sampling revealed a completely dry surface sand layer after 5 days of evaporation. Its thickness was <1 cm early in the morning, increasing to reach 4-5 cm in the evening. This evidence points out fundamental limitations of the approaches that assume hydraulic connectivity from the water table up to the surface, as well as those that suppose monotonic drying when unsteady conditions prevail. The computed vapor phase diffusion rates from the apparent drying front based on Fick's law failed to reproduce the measured cumulative evaporation during the sampling day. We propose that two processes rule natural evaporation resulting from daily fluctuations of climatic variables: (i) evaporation of water, stored during nighttime due to redistribution and vapor condensation, directly into the atmosphere from the soil surface during the early morning hours, which could be simulated using a mass transfer approach, and (ii) subsurface evaporation limited by Fickian diffusion, afterward. For the conditions prevailing during the sampling day, the amount of water stored in the vicinity of the soil surface was 0.3 mm and was depleted before 11:00. Combining evaporation from the surface before 11:00 and subsurface evaporation limited by Fickian diffusion after that time, the agreement between the estimated and measured cumulative evaporation was significantly improved.
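The Fickian subsurface-diffusion estimate mentioned above amounts to a simple flux law; the sketch below uses hypothetical parameter values, not those measured in the lysimeter experiment:

```python
def fickian_evaporation_rate(d_vapor, c_front, c_air, depth):
    """Steady-state Fickian diffusion flux of water vapor from a drying
    front located `depth` metres below the surface:

        flux = D * (c_front - c_air) / depth   [kg m^-2 s^-1]
    """
    return d_vapor * (c_front - c_air) / depth

# Hypothetical numbers: effective vapor diffusivity 1e-5 m^2/s,
# vapor density 0.02 kg/m^3 at the front, 0.01 kg/m^3 in the air,
# drying front 0.04 m (4 cm) deep, as in the evening thickness above.
flux = fickian_evaporation_rate(1e-5, 0.02, 0.01, 0.04)  # kg m^-2 s^-1
mm_per_day = flux * 86400  # 1 kg/m^2 of water corresponds to 1 mm depth
```

The deepening of the dry layer through the day increases `depth` and therefore throttles the diffusive flux, which is one reason a purely Fickian model underestimates the morning evaporation burst.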
Abstract:
In sexual assault cases, autosomal DNA analysis of gynecological swabs is a challenge, as the presence of a large quantity of female material may prevent the detection of the male DNA. A solution to this problem is differential DNA extraction, but as there are different protocols, it was decided to test their efficiency on simulated casework samples. Four difficult samples were sent to the nine Swiss laboratories active in forensic genetics. They used their routine protocols to separate the epithelial cell fraction, enriched with the non-sperm DNA, from the sperm fraction. DNA extracts were then sent to the organizing laboratory for analysis. Estimates of the male to female DNA ratio without differential DNA extraction ranged from 1:38 to 1:339, depending on the semen used to prepare the samples. After differential DNA extraction, most of the ratios ranged from 1:12 to 9:1, allowing the detection of the male DNA. Compared to direct DNA extraction, cell separation resulted in losses of 94-98% of the male DNA. As expected, more male DNA was generally present in the sperm fraction than in the epithelial cell fraction. However, for about 30% of the samples, the reverse trend was observed. The recovery of male and female DNA varied widely between laboratories. An experimental design similar to the one used in this study may help with local protocol testing and improvement.
Abstract:
Previous studies have found evidence of a self-serving bias in bargaining and dispute resolution. We use experimental data to test for this effect in a simulated labor relationship. We find a consistent discrepancy between employer beliefs and employee actions that can only be attributed to self-serving biases. This discrepancy is evident through stated beliefs, revealed satisfaction, and actual actions. We present evidence and discuss implications.
Abstract:
The standard one-machine scheduling problem consists in scheduling a set of jobs on one machine, which can handle only one job at a time, minimizing the maximum lateness. Each job is available for processing at its release date, requires a known processing time, and after processing is finished, it is delivered after a certain time. There can also exist precedence constraints between pairs of jobs, requiring that the first job be completed before the second job can start. An extension of this problem consists in assigning a time interval between the processing of the jobs associated with the precedence constraints, known as finish-start time-lags. In the presence of these constraints, the problem is NP-hard even if preemption is allowed. In this work, we consider a special case of the one-machine preemptive scheduling problem with time-lags, where the time-lags have a chain form, and propose a polynomial algorithm to solve it. The algorithm consists of a polynomial number of calls to the preemptive version of the Longest Tail Heuristic. One application of the method is to obtain lower bounds for NP-hard one-machine and job-shop scheduling problems. We present some computational results of this application, followed by some conclusions.
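The preemptive Longest Tail rule used as a subroutine can be sketched as follows: at each moment the released job with the largest tail (delivery time) runs, preempted whenever a newly released job has a larger tail. This yields the optimal maximum delivery completion time for the preemptive problem without time-lags; the chain-form time-lag handling from the paper is not reproduced here, and the instance is hypothetical:

```python
import heapq

def preemptive_longest_tail(jobs):
    """Preemptive longest-tail (Jackson) rule on one machine.

    jobs: (release, processing, tail) triples.  Returns max(C_j + q_j),
    the optimal maximum delivery completion time for the preemptive
    one-machine problem without time-lags.
    """
    events = sorted(jobs)                 # jobs ordered by release date
    ready, n = [], len(jobs)              # max-heap on tail via negation
    t, i, done, value = 0, 0, 0, 0
    while done < n:
        while i < n and events[i][0] <= t:
            _, p, q = events[i]           # job becomes available at time t
            heapq.heappush(ready, (-q, p))
            i += 1
        if not ready:                     # machine idles until next release
            t = events[i][0]
            continue
        neg_q, p = heapq.heappop(ready)   # largest-tail job runs
        horizon = events[i][0] if i < n else float("inf")
        run = min(p, horizon - t)         # run until finished or next release
        t += run
        if run < p:
            heapq.heappush(ready, (neg_q, p - run))  # preempted: push remainder
        else:
            value = max(value, t - neg_q)            # completion C_j + tail q_j
            done += 1
    return value

# Hypothetical instance: job 2 forces the bound r + p + q = 1 + 2 + 5 = 8.
obj = preemptive_longest_tail([(0, 3, 2), (1, 2, 5), (4, 2, 0)])
```

Because the rule is optimal for the preemptive relaxation, its value is a valid lower bound for the non-preemptive problem, which is how such subroutines feed job-shop bounds.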
Abstract:
While papers such as Akerlof and Yellen (1990) and Rabin (1993) argue that psychological considerations such as fairness and reciprocity are important in individual decision-making, there is little explicit empirical evidence of reciprocal altruism in economic environments. This paper tests whether attribution of volition in choosing a wage has a significant effect on subsequent costly effort provision. An experiment was conducted in which subjects are first randomly divided into groups of employers and employees. Wages were selected and employees asked to choose an effort level, where increased effort is costly to the employee, but highly beneficial to the employer. The wage-determination process was common knowledge and wages were chosen either by the employer or by an external process. There is evidence for both distributional concerns and reciprocal altruism. The slope of the effort/wage profile is clearly positive in all cases, but is significantly higher when wages are chosen by the employer, offering support for the hypothesis of reciprocity. There are implications for models of utility and a critique of some current models is presented.