964 results for Non-convex optimization
Abstract:
This thesis illustrates and discusses two activities related to websites, namely localization and search engine optimization (SEO). The latter is an activity aimed at enabling websites to achieve a better ranking on search engine results pages and thus become more visible to users. Since SEO involves various interventions on websites, some of which require manipulating HTML code, it is often regarded as a strictly technical (IT) activity. The aim of this thesis, therefore, is to show how translators can use their linguistic skills not only for website localization but also for search engine optimization. To demonstrate the applicability of these techniques, the website of "Il Palio di San Donato" was used as a practical example: a site managed by the Municipality of Cividale del Friuli and devoted to describing the town's historical re-enactment of the same name. The thesis consists of four chapters. The first chapter introduces the theoretical principles underlying website localization, SEO, writing for the web, and translation for the tourism sector. The second chapter describes the Palio di San Donato website, examining in particular its structure and contents. The third chapter is devoted to describing the localization project carried out on the website under examination. Finally, the fourth chapter contains a brief commentary on the linguistic, cultural, and technological issues encountered during the translation process, together with a list of SEO strategies applied to five pages of the website, selected so as to illustrate the largest possible number of SEO interventions that translators can carry out.
Abstract:
The hERG voltage-gated potassium channel mediates the cardiac I(Kr) current, which is crucial for the duration of the cardiac action potential. Undesired block of the channel by certain drugs may prolong the QT interval and increase the risk of malignant ventricular arrhythmias. Although the molecular determinants of hERG block have been intensively studied, not much is known about its stereoselectivity. Levo-(S)-bupivacaine was the first drug reported to have a higher affinity to block hERG than its enantiomer. This study strives to understand the principles underlying the stereoselectivity of bupivacaine block with the help of mutagenesis analyses and molecular modeling simulations. Electrophysiological measurements of mutated hERG channels allowed for the identification of residues involved in bupivacaine binding and stereoselectivity. Docking and molecular mechanics simulations for both enantiomers of bupivacaine and terfenadine (a non-stereoselective blocker) were performed inside an open-state model of the hERG channel. The predicted binding modes enabled a clear depiction of ligand-protein interactions. Estimated binding affinities for both enantiomers were consistent with electrophysiological measurements. A similar computational procedure was applied to bupivacaine enantiomers towards two mutated hERG channels (Tyr652Ala and Phe656Ala). This study confirmed, at the molecular level, that bupivacaine stereoselectively binds the hERG channel. These results help to lay the foundation for structural guidelines to optimize the cardiotoxic profile of drug candidates in silico.
Abstract:
Umbilical cord blood (UCB) is a source of hematopoietic stem cells that was initially used exclusively for the hematopoietic reconstitution of pediatric patients. It is now suggested for use in adults as well, a fact that increases the pressure to obtain units with high cellularity. The optimization of UCB processing is therefore a priority.
Abstract:
Marshall's (1970) lemma is an analytical result which implies root-n-consistency of the distribution function corresponding to the Grenander (1956) estimator of a non-decreasing probability density. The present paper derives analogous results for the setting of convex densities on [0, ∞).
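For context, a minimal sketch of the object Marshall's lemma concerns: the least concave majorant (LCM) of the empirical distribution function, whose left-hand slope gives the Grenander-type estimator of a monotone density. The implementation below is illustrative only; the function names and the exponential test sample are assumptions, not taken from the paper, which treats the convex-density analogue.

```python
# Illustrative sketch: least concave majorant of the empirical CDF.
import numpy as np

def lcm_of_ecdf(sample):
    """Least concave majorant of the empirical CDF, evaluated at the order statistics."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    # Points (0, 0), (x_(1), 1/n), ..., (x_(n), 1) for a sample supported on [0, inf).
    xs = np.concatenate(([0.0], x))
    ys = np.arange(n + 1) / n
    # Upper convex hull via a monotone stack: keep only vertices where the slope decreases.
    hull = [0]
    for i in range(1, n + 1):
        while len(hull) >= 2:
            a, b = hull[-2], hull[-1]
            slope_ab = (ys[b] - ys[a]) / (xs[b] - xs[a])
            slope_bi = (ys[i] - ys[b]) / (xs[i] - xs[b])
            if slope_bi >= slope_ab:   # b lies on or below the chord, so drop it
                hull.pop()
            else:
                break
        hull.append(i)
    # Interpolate the hull back onto all order statistics.
    return xs, np.interp(xs, xs[hull], ys[hull])

# Example: the LCM dominates the empirical CDF and, by Marshall's lemma,
# is at least as close to the true concave CDF in the sup norm.
xs, F_hat = lcm_of_ecdf(np.random.default_rng(0).exponential(size=200))
```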
Abstract:
Particulate matter (PM) emissions standards set by the US Environmental Protection Agency (EPA) have become increasingly stringent over the years. The EPA regulation for PM in heavy-duty diesel engines was reduced to 0.01 g/bhp-hr for the year 2010. Heavy-duty diesel engines make use of an aftertreatment filtration device, the Diesel Particulate Filter (DPF). DPFs are highly efficient in filtering PM (known as soot) and are an integral part of the 2010 heavy-duty diesel aftertreatment system. PM accumulates in the DPF as the exhaust gas flows through it and must be removed periodically by oxidation for the filter to function efficiently. This oxidation process is also known as regeneration. There are two types of regeneration processes: active regeneration (oxidation of PM by external means) and passive oxidation (oxidation of PM by internal means). Active regeneration typically occurs at high temperatures, about 500-600 °C, which is much higher than normal diesel exhaust temperatures. The exhaust temperature therefore has to be raised with the help of external devices such as a Diesel Oxidation Catalyst (DOC) or a fuel burner; the O2 then oxidizes the PM, producing CO2 as the oxidation product. In passive oxidation, one route of regeneration is the use of NO2, which oxidizes the PM, producing NO and CO2 as oxidation products. The passive oxidation process occurs at lower temperatures (200-400 °C) than active regeneration. DPF substrate walls are generally washcoated with catalyst material, which is observed to increase the rate of PM oxidation. The goal of this research is to develop a simple mathematical model to simulate PM depletion during the active regeneration process in a DPF (catalyzed and non-catalyzed). A simple, zero-dimensional kinetic model was developed in MATLAB. Experimental data required for calibration were obtained from active regeneration experiments performed on PM-loaded mini DPFs in an automated flow reactor. The DPFs were loaded with PM from the exhaust of a commercial heavy-duty diesel engine. The model was calibrated to the data obtained from the active regeneration experiments, and numerical gradient-based optimization techniques were used to estimate the kinetic parameters of the model.
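As a rough illustration of the modelling approach described above, the sketch below implements a zero-dimensional PM-oxidation model with a single global Arrhenius rate and fits its kinetic parameters to mass-versus-time data with a gradient-based optimizer. The rate expression, parameter names, and synthetic data are assumptions for illustration; the thesis's calibrated MATLAB model is not reproduced here.

```python
# Illustrative sketch: global Arrhenius kinetics for PM oxidation, calibrated
# by gradient-based least squares.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

R = 8.314  # J/(mol K)

def pm_mass(params, t, T, y_o2, m0):
    """Integrate dm/dt = -A * exp(-Ea/(R*T)) * y_O2 * m for retained PM mass m(t)."""
    A, Ea = params
    rate = lambda _t, m: -A * np.exp(-Ea / (R * T)) * y_o2 * m
    sol = solve_ivp(rate, (t[0], t[-1]), [m0], t_eval=t)
    return sol.y[0]

def objective(params, t, m_meas, T, y_o2):
    """Sum-of-squares misfit between modelled and measured PM mass."""
    return np.sum((pm_mass(params, t, T, y_o2, m_meas[0]) - m_meas) ** 2)

# Hypothetical reactor data: PM mass (g) vs. time (s) at 600 °C and 10% O2.
t = np.linspace(0.0, 1800.0, 10)
m_meas = 4.0 * np.exp(-t / 900.0)
res = minimize(objective, x0=[1e4, 1.2e5], args=(t, m_meas, 873.15, 0.10),
               method="L-BFGS-B", bounds=[(1e-3, 1e9), (5e4, 2.5e5)])
A_fit, Ea_fit = res.x
```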
Abstract:
Synthetic oligonucleotides and peptides have found wide application in industry and academic research labs. There are ~60 peptide drugs on the market and over 500 under development. The global annual sale of peptide drugs in 2010 was estimated to be $13 billion. There are three oligonucleotide-based drugs on the market; among them, the newly FDA-approved Kynamro was predicted to reach $100 million in annual sales. The annual sale of oligonucleotides to academic labs was estimated to be $700 million. Both bio-oligomers are mostly synthesized on automated synthesizers using solid-phase synthesis technology, in which nucleoside or amino acid monomers are added sequentially until the desired full-length sequence is reached. The additions cannot be complete, which generates truncated, undesired failure sequences. For almost all applications, these impurities must be removed. The most widely used method is HPLC. However, the method is slow, expensive, labor-intensive, not amenable to automation, difficult to scale up, and unsuitable for high-throughput purification. It requires a large capital investment and consumes large volumes of harmful solvents. Purification costs are estimated to be more than 50% of total production costs. Other methods for bio-oligomer purification also have drawbacks and are less favored than HPLC for most applications. To overcome the problems of known biopolymer purification technologies, we have developed two non-chromatographic purification methods: (1) catching failure sequences by polymerization, and (2) catching full-length sequences by polymerization. In the first method, a polymerizable group is attached to the failure sequences of the bio-oligomers during automated synthesis; purification is achieved by simply polymerizing the failure sequences into an insoluble gel and extracting the full-length sequences. In the second method, a polymerizable group is attached to the full-length sequences, which are then incorporated into a polymer; impurities are removed by washing, and the pure product is cleaved from the polymer. These methods do not need chromatography, and the drawbacks of HPLC no longer apply. Using them, purification is achieved by simple manipulations such as shaking and extraction. They are therefore suitable for large-scale purification of oligonucleotide and peptide drugs, and also ideal for high-throughput purification, for which there is currently high demand from research projects involving total gene synthesis. This dissertation presents the development of these techniques in detail. Chapter 1 introduces oligodeoxynucleotides (ODNs) and their synthesis and purification. Chapter 2 describes detailed studies of using the catching-failure-sequences-by-polymerization method to purify ODNs. Chapter 3 describes the further optimization of this ODN purification technology to the level of practical use. Chapter 4 presents the catching-full-length-sequences-by-polymerization method for ODN purification using an acid-cleavable linker. Chapter 5 introduces peptides and their synthesis and purification. Chapter 6 describes studies using the catching-full-length-sequences-by-polymerization method for peptide purification.
Abstract:
Heuristic optimization algorithms are of great importance for reaching solutions to various real-world problems. These algorithms have a wide range of applications, including cost reduction, artificial intelligence, and medicine. By "cost" we mean, for instance, the value of a function of several independent variables. Often, when dealing with engineering problems, we want to minimize the value of such a function in order to achieve an optimum, or to maximize another parameter that increases as the cost (the value of this function) decreases. Heuristic cost-reduction algorithms work by finding the values of the independent variables for which the value of the function (the "cost") is minimum. There is an abundance of heuristic cost-reduction algorithms to choose from. We start with a discussion of various optimization algorithms, such as memetic algorithms, force-directed placement, and evolution-based algorithms. Following this initial discussion, we examine the workings of three algorithms and implement them in MATLAB. The focus of this report is to provide detailed information on the workings of three different heuristic optimization algorithms and to conclude with a comparative study of their performance when implemented in MATLAB. The three algorithms considered are the non-adaptive simulated annealing algorithm, the adaptive simulated annealing algorithm, and the random-restart hill climbing algorithm. These algorithms are heuristic in nature: the solutions they reach may not be the best of all possible solutions, but they provide a means of quickly obtaining a reasonably good solution without taking an indefinite amount of time.
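To make the discussion concrete, here is a minimal sketch of one of the three heuristics named above, non-adaptive simulated annealing with a fixed geometric cooling schedule. The step size, temperature schedule, and test function are illustrative assumptions rather than the report's MATLAB settings.

```python
# Illustrative sketch: non-adaptive simulated annealing on a generic cost function.
import math
import random

def simulated_annealing(cost, x0, step=0.5, t0=1.0, alpha=0.995, iters=20000):
    x, fx = list(x0), cost(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        # Propose a random perturbation of one coordinate.
        cand = list(x)
        i = random.randrange(len(cand))
        cand[i] += random.uniform(-step, step)
        fc = cost(cand)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= alpha  # geometric (non-adaptive) cooling
    return best, fbest

# Example: minimize a simple quadratic bowl in two variables.
print(simulated_annealing(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2, [0.0, 0.0]))
```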
Abstract:
BACKGROUND Vitamin D deficiency is prevalent in HIV-infected individuals, and vitamin D supplementation is proposed as part of standard care. This study aimed to characterize the kinetics of 25(OH)D in a cohort of HIV-infected individuals of European ancestry in order to better define the influence of genetic and non-genetic factors on 25(OH)D levels. These data were used to optimize vitamin D supplementation so as to reach therapeutic targets. METHODS 1,397 25(OH)D plasma levels and relevant clinical information were collected from 664 participants during routine medical follow-up visits. Participants were genotyped for 7 SNPs in 4 genes known to be associated with 25(OH)D levels. 25(OH)D concentrations were analyzed using a population pharmacokinetic approach. The percentage of individuals with 25(OH)D concentrations within the recommended range of 20-40 ng/ml during 12 months of follow-up was evaluated by simulation for several dosage regimens. RESULTS A one-compartment model with linear absorption and elimination, integrating endogenous baseline plasma concentrations, was used to describe 25(OH)D pharmacokinetics. Covariate analyses confirmed the effect of seasonality, body mass index, smoking habits, the analytical method, darunavir/r, and the genetic variant in GC (rs2282679) on 25(OH)D concentrations. 11% of the interindividual variability in 25(OH)D levels was explained by seasonality and other non-genetic covariates, and 1% by genetics. The optimal supplementation for severely vitamin D-deficient patients was 300,000 IU twice per year. CONCLUSIONS This analysis identified factors associated with 25(OH)D plasma levels in HIV-infected individuals. Improvements to the dosage regimen and timing of vitamin D supplementation are proposed on the basis of these results.
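A minimal sketch of the structural model named in the results, a one-compartment model with first-order (linear) absorption and elimination superimposed on an endogenous baseline concentration, is given below. The parameter values and variable names are illustrative assumptions, not the population estimates from the study.

```python
# Illustrative sketch: one-compartment model with first-order absorption and
# elimination plus an endogenous baseline 25(OH)D concentration.
import numpy as np

def conc_25ohd(t, dose, ka, ke, v_f, baseline):
    """Plasma 25(OH)D after a single oral dose at t = 0 (Bateman function + baseline)."""
    return baseline + dose * ka / (v_f * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

# Hypothetical example: 300,000 IU bolus with slow elimination and an 18 ng/ml baseline.
t = np.linspace(0, 180, 181)                      # days
c = conc_25ohd(t, dose=300_000, ka=0.5, ke=0.02,  # rate constants in 1/day (assumed)
               v_f=5_000, baseline=18.0)          # apparent volume in IU per (ng/ml), assumed
```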
Abstract:
We explore a generalisation of the Lévy fractional Brownian field on the Euclidean space based on replacing the Euclidean norm with another norm. A characterisation result for admissible norms yields a complete description of all self-similar Gaussian random fields with stationary increments. Several integral representations of the introduced random fields are derived. In a similar vein, several non-Euclidean variants of the fractional Poisson field are introduced and it is shown that they share the covariance structure with the fractional Brownian field and converge to it. The shape parameters of the Poisson and Brownian variants are related by convex geometry transforms, namely the radial pth mean body and the polar projection transforms.
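For reference, the covariance structure being generalised is the standard one for the Lévy fractional Brownian field, shown below with the Euclidean norm replaced by another norm as in the abstract; the exact admissibility conditions on the norm are those characterised in the paper.

```latex
% Covariance of the (generalised) fractional Brownian field, where H is the
% self-similarity (Hurst) index and \|\cdot\| is the chosen admissible norm.
\[
  \operatorname{Cov}\bigl(X(s), X(t)\bigr)
  = \tfrac{1}{2}\left( \|s\|^{2H} + \|t\|^{2H} - \|s - t\|^{2H} \right),
  \qquad s, t \in \mathbb{R}^d .
\]
```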
Abstract:
In a partially ordered semigroup with the duality (or polarity) transform, it is possible to define a generalisation of continued fractions. General sufficient conditions for convergence of continued fractions are provided. Two particular applications concern the cases of convex sets with the Minkowski addition and the polarity transform and the family of non-negative convex functions with the Legendre–Fenchel and Artstein-Avidan–Milman transforms.
Abstract:
This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that becomes more effective as the number of processors grows. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value at the point and the minimum distance from the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers, from which many candidate points are generated by random perturbations. Based on the surrogate approximation, the best candidate point for each of the P centers is then selected for expensive evaluation, with simultaneous computation on P processors. Centers that previously did not generate good solutions are made tabu for a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
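A minimal sketch of the selection step described above, assuming nothing beyond the abstract: previously evaluated points are ranked by non-dominated sorting on the two stated objectives (expensive function value, to be minimised, and minimum distance to other evaluated points, to be maximised), and P centers are drawn from the best fronts. Function names, the within-front tie-break, and the random data are illustrative assumptions, not the authors' SOP implementation.

```python
# Illustrative sketch: two-objective non-dominated sorting for center selection.
import numpy as np

def select_centers(X, f, P):
    """X: (n, d) evaluated points, f: (n,) expensive values. Returns indices of P centers."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    min_dist = d.min(axis=1)                       # second objective (to be maximised)
    remaining = set(range(len(f)))
    order = []
    while remaining and len(order) < P:
        # Current non-dominated front: no other remaining point is at least as good
        # in both objectives and strictly better in one.
        front = [i for i in remaining
                 if not any(f[j] <= f[i] and min_dist[j] >= min_dist[i]
                            and (f[j] < f[i] or min_dist[j] > min_dist[i])
                            for j in remaining if j != i)]
        # Within a front, prefer lower expensive-function values (assumed tie-break).
        front.sort(key=lambda i: f[i])
        order.extend(front)
        remaining -= set(front)
    return order[:P]

# Example with random data: pick 4 centers from 30 evaluated points in 5 dimensions.
rng = np.random.default_rng(1)
X = rng.random((30, 5)); f = rng.random(30)
print(select_centers(X, f, P=4))
```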
Abstract:
Hospitals, like all organizations, have both a mission and a finite supply of resources with which to accomplish that mission. Because the inventory of therapeutic drugs is among the more expensive resources a hospital needs to achieve its mission, a conceptual model of "structure plus process equals outcome" posits that adequate emphasis should be placed on optimizing the organization's investment in this important structural resource to provide the highest-quality outcomes. Emphasis should therefore be placed on the optimization of pharmacy inventory, because lowering the financial investment in drug inventory and its associated costs increases productive efficiency, a key element of quality. In this study, a post-intervention analysis of a hospital pharmacy inventory management technology implementation at The University of Texas M.D. Anderson Cancer Center was conducted to determine whether an intervention that reduced the hospital's financial investment in pharmaceutical inventory provided an opportunity to incrementally optimize the organization's mix of structural resources, thereby improving quality of care. The results suggest that hospital pharmacies currently lacking technology to support automated purchasing logistics and perpetual, real-time inventory management for drugs may achieve measurable benefits from the careful implementation of such technology, enabling the hospital to lower its investment in on-hand inventory and, potentially, to reduce overall purchasing expenditures. The importance of these savings to the hospital, and potentially to the patient, should not be underestimated, given their ability to generate funding for previously unfunded public health programs or to provide financial relief to patients in the form of lower drug costs in the current climate of escalating healthcare costs and tightening reimbursements.
Abstract:
Swarm colonies exhibit social habits: working together in a group to reach a predefined goal is a social behaviour that occurs in nature. Linear optimization problems have been approached by different techniques based on natural models. In particular, Particle Swarm Optimization is a meta-heuristic search technique that has proven effective when dealing with complex optimization problems. This paper presents and develops a new method based on different penalty strategies to solve complex problems. It focuses on the training process of neural networks, the constraints, and the selection of the parameters needed to ensure successful results and to avoid the most common obstacles when searching for optimal solutions.
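In the spirit of the approach outlined in the abstract, the following is a minimal particle swarm optimisation loop applied to a penalised objective; the test function, constraint, penalty weight, and swarm settings are illustrative assumptions rather than the paper's configuration.

```python
# Illustrative sketch: particle swarm optimisation with a quadratic penalty term.
import numpy as np

def pso_penalty(dim=2, particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, mu=100.0, seed=0):
    rng = np.random.default_rng(seed)

    def cost(x):                       # objective: simple quadratic bowl (assumed)
        return np.sum((x - 2.0) ** 2)

    def penalised(x):                  # penalise violation of the constraint sum(x) <= 3
        return cost(x) + mu * max(0.0, np.sum(x) - 3.0) ** 2

    X = rng.uniform(-5, 5, (particles, dim))       # positions
    V = np.zeros_like(X)                           # velocities
    pbest = X.copy()
    pbest_f = np.array([penalised(x) for x in X])
    g = pbest[np.argmin(pbest_f)].copy()           # global best

    for _ in range(iters):
        r1, r2 = rng.random((particles, dim)), rng.random((particles, dim))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
        X = X + V
        f = np.array([penalised(x) for x in X])
        better = f < pbest_f
        pbest[better], pbest_f[better] = X[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, penalised(g)

print(pso_penalty())
```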
Abstract:
Non-failure analysis aims at inferring that predicate calls in a program will never fail. This type of information has many applications in functional/logic programming. It is essential for determining lower bounds on the computational cost of calls, useful in the context of program parallelization, instrumental in partial evaluation and other program transformations, and has also been used in query optimization. In this paper, we recast the non-failure analysis proposed by Debray et al. as an abstract interpretation, which not only allows us to investigate it within a standard and well-understood theoretical framework, but also has several practical advantages. It allows us to incorporate non-failure analysis into a standard, generic abstract interpretation engine. The analysis thus benefits from the fixpoint propagation algorithm, which leads to improved information propagation. The analysis also takes advantage of the multi-variance of the generic engine, so that it is now able to infer separate non-failure information for different call patterns. Moreover, the implementation is simpler, and it allows non-failure and covering analyses to be performed alongside other analyses, such as those for modes and types, in the same framework. Finally, besides the precision improvements and the additional simplicity, our implementation (in the Ciao/CiaoPP multiparadigm programming system) also shows better efficiency.