856 results for Algorithmic Solution
Abstract:
This paper presents an algorithmic solution for the management of related text objects, integrating algorithms for their extraction from paper or electronic sources and for their storage and processing in a relational database. The algorithms developed for data extraction and analysis make it possible to find specific features of, and relations between, the text objects in the database. The algorithmic solution is applied to data from the field of phytopharmacy in Bulgaria. It can be used as a tool and methodology for other subject areas with complex relationships between text objects.
Abstract:
This presentation explains how we move from a problem definition to an algorithmic solution using simple tools such as noun-verb analysis. It also looks at how we might judge the quality of a solution through coupling, cohesion, and generalisation.
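To make the idea concrete, a minimal sketch of noun-verb analysis: nouns in a problem statement become candidate classes and verbs become candidate methods. The problem statement, class names, and method names below are all hypothetical, invented purely for illustration.

```python
# Hypothetical noun-verb analysis: nouns in a problem statement suggest
# candidate classes, verbs suggest candidate operations on them.
problem = "The librarian lends a book to a member and records the loan date."

nouns = ["librarian", "book", "member", "loan"]   # candidate classes
verbs = ["lends", "records"]                      # candidate methods

class Loan:
    """Candidate class derived from the noun 'loan'."""
    def __init__(self, book, member, date):
        self.book, self.member, self.date = book, member, date

class Librarian:
    """Candidate class derived from the noun 'librarian'."""
    def lend(self, book, member, date):           # from the verb 'lends'
        return Loan(book, member, date)           # 'records' the loan

loan = Librarian().lend("Dune", "Ana", "2024-05-01")
print(loan.book)  # Dune
```

The design qualities mentioned in the abstract would then be judged on this skeleton: `Librarian.lend` touching only `Loan`'s constructor keeps coupling low, and grouping loan data in one class keeps cohesion high.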
Abstract:
The Graphics Processing Unit (GPU) is present in almost every modern personal computer. Despite their special-purpose design, GPUs have been increasingly used for general computations, with very good results. Hence, there is a growing effort in the community to integrate these devices seamlessly into everyday computing. However, to fully exploit the potential of a system comprising GPUs and CPUs, these devices should be presented to the programmer as a single platform. The efficient combination of CPU and GPU power is highly dependent on each device's characteristics, resulting in platform-specific applications that cannot be ported to different systems. Moreover, the most efficient work balance among devices depends strongly on the computation to be performed and the respective data sizes. In this work, we propose a solution for heterogeneous environments based on the abstraction level provided by algorithmic skeletons. Our goal is to take full advantage of all CPU and GPU devices present in a system, without the need for different kernel implementations or explicit work distribution. To that end, we extended Marrow, an algorithmic skeleton framework for multi-GPUs, to support CPU computations and efficiently balance the workload between devices. Our approach is based on an offline training execution that identifies the ideal work balance and platform configurations for a given application and input data size. The evaluation of this work shows that combining CPU and GPU devices can significantly boost the performance of our benchmarks in the tested environments when compared to GPU-only executions.
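The offline-training idea can be sketched in a few lines: time each device on a sample workload, derive the share of work each should receive, then split inputs accordingly. This is a generic illustration, not Marrow's actual mechanism; the function names and the plain-Python stand-ins for CPU and GPU kernels are assumptions.

```python
import time

def train_split(cpu_fn, gpu_fn, sample):
    """Offline 'training' sketch: time each device on a sample workload
    and give the faster device a proportionally larger share of work."""
    t0 = time.perf_counter(); cpu_fn(sample); t_cpu = time.perf_counter() - t0
    t0 = time.perf_counter(); gpu_fn(sample); t_gpu = time.perf_counter() - t0
    return t_gpu / (t_cpu + t_gpu)  # CPU share of the work

def run_balanced(cpu_fn, gpu_fn, data, cpu_share):
    """Split the input once, run each partition on its device, concatenate."""
    k = int(len(data) * cpu_share)
    return cpu_fn(data[:k]) + gpu_fn(data[k:])

# Stand-ins for real kernels; in practice these would be a CPU thread pool
# and a GPU kernel launch behind the same skeleton interface.
cpu_kernel = lambda xs: [x * x for x in xs]
gpu_kernel = lambda xs: [x * x for x in xs]

share = train_split(cpu_kernel, gpu_kernel, list(range(1000)))
result = run_balanced(cpu_kernel, gpu_kernel, list(range(8)), 0.5)
print(result)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

In the paper's setting the training run would be performed once per application and input size, and the resulting share stored as a platform configuration.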
Abstract:
For any vacuum initial data set, we define a local, non-negative scalar quantity which vanishes at every point of the data hypersurface if and only if the data are Kerr initial data. Our scalar quantity depends only on the quantities used to construct the vacuum initial data set, namely the Riemannian metric defined on the initial data hypersurface and a symmetric tensor which plays the role of the second fundamental form of the embedded initial data hypersurface. The dependence is algorithmic in the sense that, given the initial data, one can compute the scalar quantity by algebraic and differential manipulations, making it suitable for implementation in a numerical code. The scalar could also be useful in studies of the non-linear stability of the Kerr solution, because it measures the deviation of a vacuum initial data set from Kerr initial data in a local and algorithmic way.
Abstract:
This thesis studies the use of heuristic algorithms in a number of combinatorial problems that occur in various resource constrained environments. Such problems occur, for example, in manufacturing, where a restricted number of resources (tools, machines, feeder slots) are needed to perform some operations. Many of these problems turn out to be computationally intractable, and heuristic algorithms are used to provide efficient, yet sub-optimal solutions. The main goal of the present study is to build upon existing methods to create new heuristics that provide improved solutions for some of these problems. All of these problems occur in practice, and one of the motivations of our study was the request for improvements from industrial sources. We approach three different resource constrained problems. The first is the tool switching and loading problem, which occurs especially in the assembly of printed circuit boards. This problem has to be solved when an efficient, yet small primary storage is used to access resources (tools) from a less efficient (but unlimited) secondary storage area. We study various forms of the problem and provide improved heuristics for its solution. Second, the nozzle assignment problem is concerned with selecting a suitable set of vacuum nozzles for the arms of a robotic assembly machine. This turns out to be a specialized MINMAX resource-allocation formulation of the apportionment problem, and it can be solved efficiently and optimally. We construct an exact algorithm specialized for nozzle selection and provide a proof of its optimality. Third, the problem of feeder assignment and component tape construction occurs when electronic components are inserted and certain component types cause tape movement delays that can significantly impact the efficiency of printed circuit board assembly. Here, careful selection of component slots in the feeder improves the tape movement speed.
We provide a formal proof that this problem is of the same complexity as the turnpike problem (a well studied geometric optimization problem), and provide a heuristic algorithm for this problem.
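For a fixed job sequence, the loading side of the tool switching problem is classically handled with the Keep-Tool-Needed-Soonest (KTNS) policy: when the magazine is full, evict the loaded tool whose next use lies furthest in the future. This is a standard textbook policy shown as a sketch, not necessarily the heuristic developed in the thesis; it assumes the magazine capacity is at least as large as any single job's tool requirement.

```python
def ktns(jobs, capacity):
    """Keep-Tool-Needed-Soonest sketch: given a fixed job sequence (each
    job a set of required tools) and a magazine capacity, count the tool
    switches incurred (initial loading is free). Assumes every job needs
    at most `capacity` tools."""
    magazine, switches = set(), 0
    for i, job in enumerate(jobs):
        for tool in job:
            if tool not in magazine:
                if len(magazine) >= capacity:
                    # Evict the loaded tool (not needed by this job)
                    # whose next use is furthest in the future.
                    def next_use(t):
                        for j in range(i + 1, len(jobs)):
                            if t in jobs[j]:
                                return j
                        return len(jobs)  # never needed again
                    evict = max(magazine - job, key=next_use)
                    magazine.remove(evict)
                    switches += 1
                magazine.add(tool)
    return switches

jobs = [{1, 2}, {2, 3}, {1, 3}, {1, 4}]
print(ktns(jobs, capacity=2))  # 3
```

The harder part of the problem, and the focus of heuristics like those in the thesis, is choosing the job sequence itself so that a policy like KTNS incurs as few switches as possible.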
Abstract:
The management of resources (equipment, work crews, and others) should be taken into account when designing any feasible plan for the service network design problem. However, research addressing resource management together with service network design remains limited. This thesis aims to fill that gap by examining service network design problems that take resource management into account. To this end, the thesis comprises three studies on network design. The first study considers the capacitated multi-commodity fixed cost network design problem with design-balance constraints (DBCMND). The capacitated multi-commodity structure of the DBCMND, together with its design-balance constraints, makes it appear as a subproblem in many problems related to service network design, hence the interest of studying the DBCMND in the context of this thesis. We propose a new approach to solving this problem, combining tabu search, path relinking, and a procedure that intensifies the search in a particular region of the solution space. First, tabu search identifies good feasible solutions. Path relinking is then used to increase the number of feasible solutions. The solutions found by these two metaheuristics identify a subset of arcs that are likely to be open or closed in an optimal solution. The status of these arcs is then fixed according to the value that predominates in the previously found solutions. Finally, we use the power of a mixed-integer programming solver to intensify the search on the problem restricted by the fixed open/closed status of certain arcs. Tests show that this approach can find good solutions to large-scale problems within reasonable time. This research is published in the Journal of Heuristics. The second study introduces resource management into service network design by explicitly taking into account the finite number of vehicles used at each terminal for the transport of goods. A solution approach combining slope scaling, column generation, and heuristics based on a cycle formulation is proposed. Column generation solves a linear relaxation of the network design problem, generating columns that are then used by the slope-scaling procedure. Slope scaling solves a linear approximation of the network design problem, so a heuristic is used to convert the slope-scaling solutions into feasible solutions for the original problem. The algorithm ends with a perturbation procedure that improves the feasible solutions. Tests show that the proposed algorithm can find good solutions to the service network design problem with a fixed number of resources at each terminal. The results of this research will be published in Transportation Science. The third study broadens our treatment of resource management by considering the purchase or leasing of new resources as well as the repositioning of existing ones. We make the following assumptions: one unit of resource is needed to operate a service, each resource must return to its home terminal, there is a fixed number of resources at each terminal, and the length of each resource's circuit is limited. We consider the following resource management alternatives: 1) repositioning resources between terminals to respond to changes in demand, 2) purchasing and/or leasing new resources and distributing them among terminals, 3) outsourcing certain services. We present an integrated formulation combining resource management decisions with service network design decisions. We also present a matheuristic solution method combining slope scaling and column generation. We discuss the performance of this solution method and analyze the impact of the various resource management decisions in the context of service network design. This study will be presented at the XII International Symposium On Locational Decision, held jointly with the XXI Meeting of EURO Working Group on Locational Analysis, Naples/Capri (Italy), 2014. In summary, three studies are considered in this thesis. The first proposes a new solution method for the capacitated multi-commodity fixed cost network design problem with design-balance constraints: a matheuristic combining tabu search, path relinking, and exact optimization. In the second study, we present a new service network design model that takes into account a finite number of resources at each terminal, and propose an advanced cycle-formulation-based matheuristic combining slope scaling, column generation, heuristics, and exact optimization. Finally, we study resource allocation in service network design, introducing formulations that model the repositioning, acquisition, and leasing of resources, and the outsourcing of certain services. To this end, a slope-scaling solution framework built on a cycle formulation is proposed, incorporating column generation and a heuristic. The methods proposed in these three studies have demonstrated their ability to find good solutions.
Abstract:
Recently, Branzei, Dimitrov, and Tijs (2003) introduced cooperative interval-valued games. Among other insights, the notion of an interval core was coined and proposed as a solution concept for interval-valued games. In this paper we present a general mathematical programming algorithm which can be applied to find an element in the interval core. As an example, we discuss lot sizing with uncertain demand to provide an application for interval-valued games and to demonstrate how interval core elements can be computed. We also reveal that pitfalls exist if interval core elements are computed in a straightforward manner by considering the interval borders separately.
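To fix ideas, a small sketch of checking a candidate interval allocation against an interval game. This uses one common (but not the only) reading of the interval core: efficiency and coalitional rationality must hold on both interval borders simultaneously, and each player's allocation must be a valid interval. The two-player game below is invented for illustration and is not from the paper.

```python
from itertools import chain, combinations

def coalitions(players):
    """All non-empty coalitions of the player set."""
    return chain.from_iterable(
        combinations(players, r) for r in range(1, len(players) + 1))

def in_interval_core(x, v, players, tol=1e-9):
    """Check a candidate interval allocation x[i] = (lo, hi) against an
    interval game v[S] = (lo, hi). Hedged sketch of one reading of the
    interval core: both borders must satisfy efficiency and coalitional
    rationality, and each x[i] must be a valid interval."""
    grand = tuple(players)
    for b in (0, 1):  # 0 = lower border, 1 = upper border
        if abs(sum(x[i][b] for i in players) - v[grand][b]) > tol:
            return False  # efficiency fails on this border
        for S in coalitions(players):
            if sum(x[i][b] for i in S) < v[S][b] - tol:
                return False  # coalition S blocks on this border
    return all(x[i][0] <= x[i][1] for i in players)

players = (1, 2)
v = {(1,): (0, 1), (2,): (0, 1), (1, 2): (4, 6)}
print(in_interval_core({1: (2, 3), 2: (2, 3)}, v, players))  # True
print(in_interval_core({1: (2, 2), 2: (2, 2)}, v, players))  # False
```

The second candidate fails because its upper border only distributes 4 while the grand coalition's upper value is 6, which hints at the paper's warning: treating the two border games in isolation can produce pairs that are individually fine but do not combine into a valid interval core element.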
Abstract:
When it comes to designing a structure, architects and engineers want to join forces to create and build the most beautiful and efficient building. From finding new shapes and forms to optimizing stability and resistance, there is a constant link to be made between the two professions. In architecture, there has always been a particular interest in creating new structural shapes and typologies, inspired by many different fields, nature among them. In engineering, the pursuit of the optimum has always dictated the way structures are conceived and designed; this mindset led, through successive studies, to the current best practices in construction. At a certain point, however, both disciplines were limited by traditional manufacturing constraints. Over the last decades, much technological progress has been made, making it possible to go beyond today's manufacturing constraints. With the emergence of Wire-and-Arc Additive Manufacturing (WAAM) combined with Algorithmic-Aided Design (AAD), architects and engineers are offered new opportunities to merge architectural beauty and structural efficiency. Both technologies allow unusual and complex structural shapes to be explored and built, in addition to reducing costs and environmental impacts. Through this study, the author makes use of the aforementioned technologies and assesses their potential, first to design an aesthetically appealing tree-like column, and second to propose a new type of standardized and optimized sandwich cross-section to the construction industry. Parametric algorithms to model the dendriform column and the new sandwich cross-section are developed and presented in detail. A draft catalog of the latter, and methods to establish it, are then proposed and discussed. Finally, the buckling behavior of the cross-section is assessed considering both standard steel and WAAM material properties.
Abstract:
Induced cardiac arrest during heart surgery is a common procedure that allows the surgeon to operate in a field free of blood and movement. Using an isolated rat heart model, the authors compare a new cardioplegic solution containing histidine-tryptophan-glutamate (group 2) with the histidine-tryptophan-alphacetoglutarate solution (group 1) routinely used by some cardiac surgeons. The aim was to assess caspase, IL-8, and KI-67 in isolated rat hearts using immunohistochemistry. Twenty male Wistar rats were anesthetized and heparinized. The chest was opened, cardiectomy was performed, and 40 ml/kg of the appropriate cardioplegic solution was infused. The hearts were kept for 2 hours at 4 ºC in the same solution and thereafter placed in a Langendorff apparatus for 30 minutes with Ringer-Locke solution. Immunohistochemical analyses of caspase, IL-8, and KI-67 were performed. The concentration of caspase was lower in group 2 and that of KI-67 higher in group 2, both P<0.05. There was no statistical difference in IL-8 values between the groups. The histidine-tryptophan-glutamate solution performed better than the histidine-tryptophan-alphacetoglutarate solution because it reduced caspase (apoptosis), increased KI-67 (cell proliferation), and showed no difference in IL-8 levels compared to group 1. This suggests that the histidine-tryptophan-glutamate solution was more efficient than the histidine-tryptophan-alphacetoglutarate solution for the preservation of rat cardiomyocytes.
Abstract:
TiO2 and TiO2/WO3 electrodes, irradiated by a solar simulator in configurations for heterogeneous photocatalysis (HP) and electrochemically assisted HP (EHP), were used to remediate aqueous solutions containing 10 mg L⁻¹ (34 μmol L⁻¹) of 17-α-ethinylestradiol (EE2), the active component of most oral contraceptives. The photocatalysts consisted of 4.5 μm thick porous films of TiO2 and TiO2/WO3 (W/Ti molar ratio of 12%) deposited on transparent electrodes from aqueous suspensions of TiO2 particles and WO3 precursors, followed by thermal treatment at 450 °C. First, an energy diagram was assembled from photoelectrochemical and UV-Vis absorption spectroscopy data; it revealed that EE2 could be directly oxidized by the photogenerated holes at the semiconductor surfaces, considering the relative HOMO level of EE2 and the semiconductor valence band edges. Also, for the irradiated hybrid photocatalyst, electrons in TiO2 should be transferred to the WO3 conduction band, while holes move toward the TiO2 valence band, improving charge separation. The remediated EE2 solutions were analyzed by fluorescence, HPLC, and total organic carbon measurements. As expected from the energy diagram, both photocatalysts promoted EE2 oxidation in the HP configuration; after 4 h, the EE2 concentration decayed to 6.2 mg L⁻¹ (35% removal) with irradiated TiO2, while the TiO2/WO3 electrode achieved 45% removal. A higher performance was obtained in the EHP systems, in which a Pt wire was introduced as counter-electrode and the photoelectrodes were biased at +0.7 V; the EE2 removal then corresponded to 48% and 54% for TiO2 and TiO2/WO3, respectively. The hybrid TiO2/WO3 electrode, compared to the TiO2 electrode, exhibited enhanced sunlight harvesting and improved separation of photogenerated charge carriers, resulting in higher performance for removing this contaminant of emerging concern from aqueous solution.
Abstract:
The human mitochondrial Hsp70, also called mortalin, is of considerable importance for mitochondrial biogenesis and the correct functioning of the cell machinery. In the mitochondrial matrix, mortalin acts in the import and folding of nucleus-encoded proteins. The in vivo deregulation of mortalin expression and/or function has been correlated with age-related diseases and certain cancers, due to its interaction with the p53 protein. In spite of its critical biological roles, structural and functional studies on mortalin have been limited by its insoluble recombinant production. This study provides the first report of the production of folded and soluble recombinant mortalin, obtained by co-expression with the human Hsp70-escort protein 1, although it likely remains prone to self-association. The monomeric fraction of mortalin presented a slightly elongated shape and a basal ATPase activity higher than that of its cytoplasmic counterpart Hsp70-1A, suggesting that it was obtained in a functional state. Through small-angle X-ray scattering, we assessed a low-resolution structural model of monomeric mortalin characterized by an elongated shape. This model adequately accommodated high-resolution structures of Hsp70 domains, indicating its quality. We also observed that mortalin interacts with adenosine nucleotides with high affinity. Thermally induced unfolding experiments indicated that mortalin is formed by at least two domains, that the transition is sensitive to the presence of adenosine nucleotides, and that this process depends on the presence of Mg2+ ions. Interestingly, the thermally induced unfolding assays of mortalin suggested an aggregation/association event that was not observed for human Hsp70-1A, which may explain its natural tendency for in vivo aggregation. Our study may contribute to the structural understanding of mortalin as well as to its recombinant production for antitumor compound screening.
Abstract:
Accelerated stability tests are indicated to assess, within a short time, the degree of chemical degradation that may affect an active substance, either alone or in a formulation, under normal storage conditions. The method is based on increased stress conditions that accelerate the rate of chemical degradation. Based on the equation of the straight line obtained as a function of the reaction order (at 50 and 70 ºC) and using the Arrhenius equation, the reaction rate was calculated for a temperature of 20 ºC (normal storage conditions). This model of accelerated stability testing makes it possible to predict the chemical stability of any active substance at any given moment, as long as a method to quantify the substance is available. As an example of the applicability of the Arrhenius equation in accelerated stability tests, a 2.5% sodium hypochlorite solution was analyzed because of its chemical instability. Iodometric titration was used to quantify free residual chlorine in the solutions. Based on data obtained by keeping this solution at 50 and 70 ºC, using the Arrhenius equation and considering 2.0% free residual chlorine as the minimum acceptable threshold, the shelf-life was 166 days at 20 ºC. The model, however, makes it possible to calculate the shelf-life at any other temperature.
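The extrapolation described above can be sketched numerically: fit the activation energy from rate constants at the two stress temperatures, extrapolate the rate constant to 20 ºC via the Arrhenius equation, and compute the time for the concentration to reach the acceptance limit. The rate constants below are hypothetical (the paper's actual data are not reproduced here), and first-order decay is assumed for illustration.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_k(k1, T1, k2, T2, T):
    """Extrapolate a rate constant to temperature T (kelvin) from rate
    constants k1, k2 measured at T1, T2, via ln k = ln A - Ea/(R*T)."""
    Ea = R * math.log(k2 / k1) / (1 / T1 - 1 / T2)  # activation energy
    A = k1 * math.exp(Ea / (R * T1))                # pre-exponential factor
    return A * math.exp(-Ea / (R * T))

# Hypothetical first-order rate constants (1/day) at 50 and 70 °C.
k50, k70 = 0.05, 0.25
k20 = arrhenius_k(k50, 323.15, k70, 343.15, 293.15)

# Shelf-life: time for free residual chlorine to fall from 2.5% to the
# 2.0% limit, assuming first-order decay C(t) = C0 * exp(-k*t).
t_shelf = math.log(2.5 / 2.0) / k20
print(f"k(20 °C) = {k20:.4g} per day, shelf-life = {t_shelf:.0f} days")
```

With the paper's measured rate constants and the reaction order it determined, the same two steps reproduce its 166-day figure; the zero- or second-order variants only change the closed-form expression for `t_shelf`.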
Abstract:
We have considered a Bose gas in an anisotropic potential. Applying the Gross-Pitaevskii equation (GPE) for a confined dilute atomic gas, we have used the methods of optimized perturbation theory and self-similar root approximants to obtain an analytical formula for the critical number of particles as a function of the anisotropy parameter of the potential. The spectrum of the GPE is also discussed.
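For reference, the standard form of the GPE for a dilute gas in an axially symmetric harmonic trap (the conventional textbook expressions; the abstract does not spell out its notation, so the symbols here are the usual ones):

```latex
i\hbar\,\partial_t \psi
  = \left[-\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r}) + g\,|\psi|^2\right]\psi,
\qquad
V(\mathbf{r}) = \frac{m\,\omega_\perp^2}{2}\left(x^2 + y^2 + \lambda^2 z^2\right),
\qquad
g = \frac{4\pi\hbar^2 a_s}{m},
```

where $a_s$ is the s-wave scattering length and $\lambda = \omega_z/\omega_\perp$ is the anisotropy parameter on which the critical particle number depends (for attractive interactions, $a_s < 0$, the condensate collapses above that critical number).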
Abstract:
Vials filled with Fricke solution were doped with increasing concentrations of Photogem®, a photosensitizer used in photodynamic therapy. These vials were then irradiated with low-energy X-rays at doses ranging from 5 to 20 Gy. A conventional Fricke solution was also irradiated with the same doses. The ferric ion concentration of the irradiated Fricke and doped-Fricke solutions was measured in a spectrophotometer at 220 to 340 nm. The results showed an enhancement in the response of the doped-Fricke solution that was proportional to the concentration of the photosensitizer. The use of this procedure for studying the radiosensitizing properties of photosensitizers, based on the production of free radicals, is also discussed.
Abstract:
It is proven that the field equations of a previously studied metric nonsymmetric theory of gravitation do not admit any non-singular stationary solution which represents a field of non-vanishing total mass and non-vanishing total fermionic charge.