960 results for Improved sequential algebraic algorithm


Relevance:

30.00%

Publisher:

Abstract:

This paper presents a new type of genetic algorithm for the set covering problem. It differs from previous evolutionary approaches first because it is an indirect algorithm, i.e., the actual solutions are found by an external decoder function. The genetic algorithm itself provides this decoder with permutations of the solution variables and other parameters. Second, it will be shown that results can be further improved by adding another indirect optimisation layer. The decoder does not directly seek out low-cost solutions but instead aims for good, exploitable solutions, which are then post-optimised by another hill-climbing algorithm. Although seemingly more complicated, we will show that this three-stage approach has advantages in terms of solution quality, speed and adaptability to new types of problems over more direct approaches. Extensive computational results are presented and compared with the latest evolutionary and other heuristic approaches on the same data instances.
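To make the indirect scheme concrete, below is a minimal sketch of a permutation decoder for set covering; the data structures, greedy repair rule and toy instance are illustrative assumptions, not the authors' implementation (which also passes further parameters to the decoder and adds a hill-climbing post-optimisation stage).

import random

def decode(permutation, costs, rows_covered_by):
    """Greedy permutation decoder (sketch): scan columns in the order given
    by the GA chromosome and keep any column that still covers an uncovered row."""
    uncovered = set().union(*rows_covered_by)      # all rows that must be covered
    solution, total_cost = [], 0.0
    for col in permutation:
        if uncovered & rows_covered_by[col]:       # column is still useful
            solution.append(col)
            total_cost += costs[col]
            uncovered -= rows_covered_by[col]
        if not uncovered:
            break
    return solution, total_cost

# Toy instance: 4 columns covering rows 0-4; the GA would evolve the permutation.
costs = [3.0, 2.0, 4.0, 1.0]
rows_covered_by = [{0, 1}, {1, 2, 3}, {3, 4}, {4}]
chromosome = list(range(len(costs)))
random.shuffle(chromosome)
print(decode(chromosome, costs, rows_covered_by))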

Relevance:

30.00%

Publisher:

Abstract:

The aim of this thesis is to review and augment the theory and methods of optimal experimental design. In Chapter 1 the scene is set by considering the possible aims of an experimenter prior to an experiment, the statistical methods one might use to achieve those aims and how experimental design might aid this procedure. It is indicated that, given a criterion for design, a priori optimal design will only be possible in certain instances and, otherwise, some form of sequential procedure would seem to be indicated. In Chapter 2 an exact experimental design problem is formulated mathematically and is compared with its continuous analogue. Motivation is provided for the solution of this continuous problem, and the remainder of the chapter concerns this problem. A necessary and sufficient condition for optimality of a design measure is given. Problems which might arise in testing this condition are discussed, in particular with respect to possible non-differentiability of the criterion function at the design being tested. Several examples are given of optimal designs which may be found analytically and which illustrate the points discussed earlier in the chapter. In Chapter 3 numerical methods of solution of the continuous optimal design problem are reviewed. A new algorithm is presented with illustrations of how it should be used in practice. It is shown that, for reasonably large sample sizes, continuously optimal designs may be well approximated by an exact design. In situations where this is not satisfactory, algorithms for improving this design are reviewed. Chapter 4 consists of a discussion of sequentially designed experiments, with regard both to the underlying philosophies of statistical inference and to the application of its methods. In Chapter 5 we constructively criticise previous suggestions for fully sequential design procedures. Alternative suggestions are made, along with conjectures as to how these might improve performance. Chapter 6 presents a simulation study, the aim of which is to investigate the conjectures of Chapter 5. The results of this study provide empirical support for these conjectures. In Chapter 7 examples are analysed; these suggest aids to sequential experimentation by means of reducing the dimension of the design space and the possibility of experimenting semi-sequentially. Further examples are considered which stress the importance of the use of prior information in situations of this type. Finally we consider the design of experiments when semi-sequential experimentation is mandatory because batches of observations must be taken at the same time. In Chapter 8 we look at some of the assumptions which have been made and indicate what may go wrong when these assumptions no longer hold.
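The optimality condition itself is not reproduced in the abstract; for the classical D-optimality criterion, the standard necessary and sufficient condition (the Kiefer-Wolfowitz General Equivalence Theorem) is the usual reference point. With information matrix M(\xi) = \int_{\mathcal{X}} f(x) f(x)^{\mathsf{T}} \, \xi(\mathrm{d}x) and standardised variance d(x, \xi) = f(x)^{\mathsf{T}} M(\xi)^{-1} f(x), it states that

\xi^{*} \text{ is D-optimal} \iff \max_{x \in \mathcal{X}} d(x, \xi^{*}) = p,

where p is the number of model parameters. This is the textbook form, not necessarily the exact criterion or condition treated in the thesis.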

Relevance:

30.00%

Publisher:

Abstract:

Natural language processing has achieved great success in a wide range of applications, producing both commercial language services and open-source language tools. However, most methods take a static or batch approach, assuming that the model has all the information it needs and makes a one-time prediction. In this dissertation, we study dynamic problems where the input comes in a sequence instead of all at once, and the output must be produced while the input is arriving. In these problems, predictions are often made based only on partial information. We see this dynamic setting in many real-time, interactive applications. These problems usually involve a trade-off between the amount of input received (cost) and the quality of the output prediction (accuracy). Therefore, the evaluation considers both objectives (e.g., plotting a Pareto curve). Our goal is to develop a formal understanding of sequential prediction and decision-making problems in natural language processing and to propose efficient solutions. Toward this end, we present meta-algorithms that take an existing batch model and produce a dynamic model to handle sequential inputs and outputs. We build our framework upon the theory of Markov Decision Processes (MDPs), which allows learning to trade off competing objectives in a principled way. The main machine learning techniques we use are from imitation learning and reinforcement learning, and we advance current techniques to tackle problems arising in our settings. We evaluate our algorithm on a variety of applications, including dependency parsing, machine translation, and question answering. We show that our approach achieves a better cost-accuracy trade-off than the batch approach and heuristic-based decision-making approaches. We first propose a general framework for cost-sensitive prediction, where different parts of the input come at different costs. We formulate a decision-making process that selects pieces of the input sequentially, and the selection is adaptive to each instance. Our approach is evaluated on both standard classification tasks and a structured prediction task (dependency parsing). We show that it achieves similar prediction quality to methods that use all the input, while incurring a much smaller cost. Next, we extend the framework to problems where the input is revealed incrementally in a fixed order. We study two applications: simultaneous machine translation and quiz bowl (incremental text classification). We discuss challenges in this setting and show that adding domain knowledge eases the decision-making problem. A central theme throughout the chapters is an MDP formulation of a challenging problem with sequential input/output and trade-off decisions, accompanied by a learning algorithm that solves the MDP.
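As a minimal illustration of the sequential setting, the sketch below wraps a batch model with a simple confidence-threshold stopping rule, i.e. the kind of heuristic decision-making policy the dissertation compares against rather than its learned MDP policy; the Prediction type and the batch_model callable are placeholders.

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Prediction:          # placeholder output type
    label: str
    confidence: float

def run_incremental(tokens: List[str],
                    batch_model: Callable[[List[str]], Prediction],
                    threshold: float = 0.9) -> Tuple[Prediction, int]:
    """Feed the input one token at a time and stop (PREDICT) as soon as the
    batch model is confident enough, otherwise WAIT for more input.  The
    returned token count is the 'cost' side of the cost-accuracy trade-off."""
    if not tokens:
        raise ValueError("empty input")
    for t in range(1, len(tokens) + 1):
        pred = batch_model(tokens[:t])     # re-run the batch model on the prefix
        if pred.confidence >= threshold:
            return pred, t
    return pred, len(tokens)               # fell back to reading everything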

Relevance:

30.00%

Publisher:

Abstract:

The work of this thesis is concerned with fitting Hypo-exponential and Erlang phase-type distributions for modeling real-life processes with non-exponential service times. There exist situations where exponential distributions cannot properly explain the distribution of service time. This thesis presents the application of two traditional statistical estimation techniques to approximate the service distributions of processes with a coefficient of variation less than one. It also presents an algorithm to fit a Hypo-exponential distribution in complex situations that cannot be handled properly with the traditional estimation techniques. The results show the effect of varying the sample size and other parameters on the efficiency of the estimation techniques by comparing their respective outputs. Furthermore, it is checked how accurately the proposed algorithm approximates a given distribution.
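As a simple illustration of the kind of fit involved, one traditional moment-matching rule for an Erlang distribution with target mean and coefficient of variation below one is sketched here; this is a textbook rule, not the thesis's estimation techniques or its proposed algorithm.

def fit_erlang(mean: float, cv: float):
    """Moment-matching sketch: pick the number of exponential phases k so that
    the Erlang's coefficient of variation (1/sqrt(k)) matches the target cv < 1,
    then set the common phase rate from the mean."""
    if not 0 < cv < 1:
        raise ValueError("Erlang moment matching needs 0 < cv < 1")
    k = max(1, round(1.0 / cv ** 2))   # Erlang has CV^2 = 1/k
    rate = k / mean                    # sum of k Exp(rate) phases has the given mean
    return k, rate

print(fit_erlang(mean=10.0, cv=0.5))   # -> (4, 0.4)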

Relevance:

30.00%

Publisher:

Abstract:

We consider the task of collaborative recommendation of photo-taking locations. We use datasets of geotagged photos. We map their locations to a location grid using a geohashing algorithm, resulting in a user x location implicit feedback matrix. Our improvements relative to previous work are twofold. First, we create virtual ratings by spreading users' preferences to neighbouring grid locations. This makes the assumption that users have some preference for locations close to the ones in which they take their photos. These virtual ratings help overcome the discrete nature of the geohashing. Second, we normalize the implicit frequency-based ratings to a 1-5 scale using a method that has been found to be useful in music recommendation algorithms. We demonstrate the advantages of our approach with new experiments that show large increases in hit rate and related metrics.
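A minimal sketch of the two steps described above is given below; the neighbour map, spread weight and log scaling are assumptions for illustration, and the paper's exact spreading and normalization rules may differ.

import math
from collections import defaultdict

def add_virtual_ratings(counts, neighbours, spread=0.5):
    """Spread each user's photo counts into neighbouring grid cells
    ('virtual ratings'); `counts` maps (user, geohash_cell) -> photo count and
    `neighbours` maps a cell to its adjacent cells."""
    out = defaultdict(float)
    for (user, cell), n in counts.items():
        out[(user, cell)] += n
        for nb in neighbours.get(cell, []):
            out[(user, nb)] += spread * n
    return out

def normalise_1_to_5(counts):
    """Map implicit frequencies onto a 1-5 rating scale per user
    (log-scaled relative to the user's most-visited cell)."""
    top = defaultdict(float)
    for (user, cell), n in counts.items():
        top[user] = max(top[user], n)
    return {(user, cell): 1 + 4 * math.log1p(n) / (math.log1p(top[user]) or 1.0)
            for (user, cell), n in counts.items()}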

Relevance:

30.00%

Publisher:

Abstract:

Resuscitation and stabilization are key issues in Intensive Care Burn Units, and early survival predictions help to decide the best clinical action during these phases. Current burn survival scores focus on clinical variables such as age or the body surface area. However, the evolution of other parameters (e.g. diuresis or fluid balance) during the first days is also valuable knowledge. In this work we suggest a methodology and propose a Temporal Data Mining algorithm to estimate the survival condition from the patient's evolution. Experiments conducted on 480 patients show the improvement in survival prediction.

Relevance:

20.00%

Publisher:

Abstract:

The present paper describes a novel, simple and reliable differential pulse voltammetric method for determining amitriptyline (AMT) in pharmaceutical formulations. It has been reported by many authors that this antidepressant is electrochemically inactive at carbon electrodes. However, the procedure proposed herein consists of electrochemically oxidizing AMT at an unmodified carbon nanotube paste electrode in the presence of 0.1 mol L(-1) sulfuric acid used as the electrolyte. At this concentration, the acid facilitated AMT electroxidation through a one-electron transfer at 1.33 V vs. Ag/AgCl, as observed by the increase in peak current. Under optimized conditions (modulation time 5 ms, scan rate 90 mV s(-1), and pulse amplitude 120 mV), a linear calibration curve was constructed in the range of 0.0-30.0 μmol L(-1), with a correlation coefficient of 0.9991 and a limit of detection of 1.61 μmol L(-1). The procedure was successfully validated for intra- and inter-day precision and accuracy. Moreover, its feasibility was assessed through the analysis of commercial pharmaceutical formulations, and it was compared to the UV-vis spectrophotometric method used as the standard analytical technique recommended by the Brazilian Pharmacopoeia.
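For reference, with a linear calibration I_p = m C + b relating peak current to concentration, a limit of detection such as the one quoted above is conventionally estimated as

\mathrm{LOD} = \frac{3 s_b}{m},

where s_b is the standard deviation of the blank (or of the intercept) and m is the calibration slope; the 3σ criterion is assumed here, since the abstract does not state which convention was used.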

Relevance:

20.00%

Publisher:

Abstract:

Lipidic mixtures present a particular phase-change profile that is highly affected by their unique crystalline structure. However, classical solid-liquid equilibrium (SLE) thermodynamic modeling approaches, which assume the solid phase to be a pure component, sometimes fail to describe the phase behavior correctly, and this inability increases with the complexity of the system. To overcome some of these problems, this study describes a new procedure to depict the SLE of fatty binary mixtures presenting solid solutions, namely the Crystal-T algorithm. Considering the non-ideality of both the liquid and solid phases, this algorithm determines the temperatures at which the first and the last crystal of the mixture melt. The evaluation is focused on experimental data measured and reported in this work for systems composed of triacylglycerols and fatty alcohols. The liquidus and solidus lines of the SLE phase diagrams were described using excess Gibbs energy based equations and the group-contribution UNIFAC model for the calculation of the activity coefficients of both the liquid and solid phases. Very low deviations between theoretical and experimental data evidenced the strength of the algorithm, contributing to the enlargement of the scope of SLE modeling.
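The liquidus and solidus description mentioned above rests on the standard solid-liquid equilibrium relation for each component i, written here in its common simplified form with the heat-capacity term neglected (the textbook expression, not necessarily the exact working equation of the Crystal-T algorithm):

\ln \frac{x_i^{L} \gamma_i^{L}}{x_i^{S} \gamma_i^{S}} = \frac{\Delta h_{\mathrm{fus},i}}{R} \left( \frac{1}{T_{\mathrm{fus},i}} - \frac{1}{T} \right),

where x_i and \gamma_i are the mole fractions and activity coefficients in the liquid (L) and solid (S) phases, \Delta h_{\mathrm{fus},i} and T_{\mathrm{fus},i} are the enthalpy and temperature of fusion, and the activity coefficients are obtained from the excess Gibbs energy model (UNIFAC in this work).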

Relevance:

20.00%

Publisher:

Abstract:

Retinal pigment epithelium (RPE) cells, along with tight junction (TJ) proteins, constitute the outer blood-retinal barrier (BRB). Contradictory findings suggest a role for the outer BRB in the pathogenesis of diabetic retinopathy (DR). The aim of this study was to investigate whether the mechanisms involved in these alterations are sensitive to nitrosative stress, and whether cocoa or epicatechin (EC) protects against this damage under diabetic (DM) milieu conditions. Cells of a human RPE line (ARPE-19) were exposed to high-glucose (HG) conditions for 24 hours in the presence or absence of cocoa powder containing 0.5% or 60.5% polyphenol (low-polyphenol cocoa [LPC] and high-polyphenol cocoa [HPC], respectively). Exposure to HG decreased claudin-1 and occludin TJ expression and increased extracellular matrix (ECM) accumulation, whereas levels of TNF-α and inducible nitric oxide synthase (iNOS) were upregulated, accompanied by increased nitric oxide levels. This nitrosative stress resulted in S-nitrosylation of caveolin-1 (CAV-1), which in turn increased CAV-1 traffic and its interactions with claudin-1 and occludin. This cascade was inhibited by treatment with HPC or EC through δ-opioid receptor (DOR) binding and stimulation, thereby decreasing TNF-α-induced iNOS upregulation and CAV-1 endocytosis. The TJ functions were restored, leading to prevention of paracellular permeability, restoration of the resistance of the ARPE-19 monolayer, and decreased ECM accumulation. The detrimental effects on TJs in ARPE-19 cells exposed to the DM milieu occur through a CAV-1 S-nitrosylation-dependent endocytosis mechanism. High-polyphenol cocoa or EC exerts protective effects through DOR stimulation.

Relevance:

20.00%

Publisher:

Abstract:

The present paper describes the synthesis of a molecularly imprinted polymer, poly(methacrylic acid)/silica, and reports its feasibility, with the desired adsorption capacity and selectivity, for cholesterol extraction. Two imprinted hybrid materials were synthesized at different methacrylic acid (MAA)/tetraethoxysilane (TEOS) molar ratios (6:1 and 1:5) and characterized by FT-IR, TGA, SEM and textural data. Cholesterol adsorption on the hybrid materials took place preferentially in apolar solvent medium, especially in chloroform. From the kinetic data, the equilibrium time was reached quickly, at 12 and 20 min for the polymers synthesized at MAA/TEOS molar ratios of 6:1 and 1:5, respectively. The pseudo-second-order model provided the best fit for cholesterol adsorption on the polymers, confirming the chemical nature of the adsorption process, while the dual-site Langmuir-Freundlich equation presented the best fit to the experimental data, suggesting the existence of two kinds of adsorption sites on both polymers. The maximum adsorption capacities obtained for the polymers synthesized at MAA/TEOS molar ratios of 6:1 and 1:5 were 214.8 and 166.4 mg g(-1), respectively. The isotherm data also indicated a higher adsorption capacity for both imprinted polymers relative to the corresponding non-imprinted polymers. Nevertheless, taking into account the retention parameters and the selectivity for cholesterol in the presence of structurally analogous compounds (5-α-cholestane and 7-dehydrocholesterol), the polymer synthesized at the MAA/TEOS molar ratio of 6:1 was much more selective for cholesterol than the one prepared at the ratio of 1:5, suggesting that selective binding sites ascribed to the carboxyl group from MAA play a central role in the imprinting effect created on the MIP.
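For reference, the two models named above take their usual textbook forms (conventional symbols, not necessarily those used in the paper). The pseudo-second-order kinetic model is

\frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e},

where q_t and q_e are the amounts adsorbed at time t and at equilibrium and k_2 is the rate constant, and the dual-site Langmuir-Freundlich isotherm is

q_e = \sum_{j=1}^{2} \frac{q_{\mathrm{max},j} (K_j C_e)^{n_j}}{1 + (K_j C_e)^{n_j}},

with one term per class of binding site, C_e the equilibrium concentration, and K_j, n_j and q_{\mathrm{max},j} the affinity, heterogeneity and capacity parameters of site class j.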

Relevance:

20.00%

Publisher:

Abstract:

A base-cutter, represented by a four-bar mechanism, was developed using the AutoCAD program. The normal reaction force of the profile at the contact point was determined through dynamic analysis, with the dynamic equilibrium equations based on the Newton-Euler laws. The linkage was subjected to an optimization technique that considered the peak value of the soil reaction force as the objective function to be minimized, while the link lengths and the spring constant varied over a specified range. The Sequential Quadratic Programming (SQP) algorithm was implemented in the Matlab computational environment. Results were very encouraging: the maximum value of the normal reaction force was reduced from 4,250.33 to 237.13 N, making the floating process much less disturbing to the soil and to the sugarcane ratoon. Later, other variables were incorporated into the optimized mechanism and a new optimization process was carried out.
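A minimal sketch of this kind of constrained minimization is shown below, using SciPy's SLSQP solver (a sequential quadratic programming method) in place of the Matlab routine mentioned in the abstract; the surrogate objective, design vector and bounds are purely illustrative placeholders for the Newton-Euler dynamic model.

import numpy as np
from scipy.optimize import minimize

def peak_soil_reaction(x):
    """Placeholder for the dynamic model: given the design vector
    x = [l1, l2, l3, l4, k_spring], return the peak normal soil reaction
    force over one pass.  The real objective comes from the Newton-Euler
    analysis of the four-bar linkage; this quadratic surrogate only lets
    the script run end to end."""
    l1, l2, l3, l4, k = x
    return (l1 - 0.3) ** 2 + (l2 - 0.8) ** 2 + (l3 - 0.7) ** 2 + (l4 - 0.4) ** 2 + 0.001 * k

x0 = np.array([0.25, 0.90, 0.60, 0.45, 5000.0])                     # initial design
bounds = [(0.1, 0.5), (0.5, 1.2), (0.4, 1.0), (0.2, 0.6), (1000.0, 20000.0)]

# SLSQP is a sequential quadratic programming method, analogous in spirit
# to the Matlab SQP routine referred to in the abstract.
result = minimize(peak_soil_reaction, x0, method="SLSQP", bounds=bounds)
print(result.x, result.fun)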

Relevance:

20.00%

Publisher:

Abstract:

PURPOSE: To compare the Full Threshold (FT) and SITA Standard (SS) strategies in glaucomatous patients undergoing automated perimetry for the first time. METHODS: Thirty-one glaucomatous patients who had never undergone perimetry underwent automated perimetry (Humphrey, program 30-2) with both FT and SS on the same day, with an interval of at least 15 minutes. The order of the examinations was randomized, and only one eye per patient was analyzed. Three analyses were performed: a) all the examinations, regardless of the order of application; b) only the first examinations; c) only the second examinations. In order to calculate the sensitivity of both strategies, the following criteria were used to define abnormality: glaucoma hemifield test (GHT) outside normal limits, pattern standard deviation (PSD) <5%, or a cluster of 3 adjacent points with p<5% in the pattern deviation probability plot. RESULTS: When the results of all examinations were analyzed regardless of the order in which they were performed, the number of depressed points with p<0.5% in the pattern deviation probability map was significantly greater with SS (p=0.037), and the sensitivities were 87.1% for SS and 77.4% for FT (p=0.506). When only the first examinations were compared, there were no statistically significant differences in the number of depressed points, but the sensitivity of SS (100%) was significantly greater than that obtained with FT (70.6%) (p=0.048). When only the second examinations were compared, there were no statistically significant differences in either the number of depressed points or the sensitivities of SS (76.5%) and FT (85.7%) (p=0.664). CONCLUSION: SS may have a higher sensitivity than FT in glaucomatous patients undergoing automated perimetry for the first time. However, this difference tends to disappear in subsequent examinations.

Relevance:

20.00%

Publisher:

Abstract:

Universidade Estadual de Campinas. Faculdade de Educação Física

Relevance:

20.00%

Publisher:

Abstract:

PURPOSE: To develop a computer simulation of ablation to produce customized contact lenses for correcting high-order aberrations. METHODS: Using real data from a patient with keratoconus, measured with a wavefront aberrometer equipped with a Hartmann-Shack sensor, the contact lens thicknesses that compensate for these aberrations were determined, as well as the number of pulses required to ablate the lenses specifically for this patient. RESULTS: The correction maps are presented and the pulse counts were calculated, using beams 0.5 mm wide with an ablation depth of 0.3 µm. CONCLUSIONS: The simulated results were promising, but still need to be refined so that the "real" ablation system can achieve the desired precision.
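A minimal sketch of the pulse-count calculation implied by these numbers is shown below; treating each beam spot independently is a simplifying assumption, not the study's simulation.

import numpy as np

ABLATION_DEPTH_UM = 0.3   # depth removed per pulse, from the abstract

def pulses_needed(thickness_um: np.ndarray) -> np.ndarray:
    """Number of pulses per grid point: required lens thickness divided by the
    per-pulse ablation depth, rounded up (each 0.5 mm beam spot is treated
    independently, which is a simplification)."""
    return np.ceil(thickness_um / ABLATION_DEPTH_UM).astype(int)

# Toy 3x3 thickness map in micrometres.
print(pulses_needed(np.array([[1.2, 0.9, 0.0],
                              [2.4, 3.0, 0.3],
                              [0.6, 1.5, 0.9]])))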