995 results for Optimization software
Abstract:
For some drugs, pharmacokinetic variability in drug levels is a major determinant of treatment success, since concentrations outside the therapeutic range may lead to inefficacy, toxic reactions or treatment discontinuation. This is true for most antiretroviral drugs, which exhibit high inter-patient variability in their pharmacokinetics that has been only partially explained by genetic and non-genetic factors. The population pharmacokinetic approach is a very useful tool for describing the dose-concentration relationship, quantifying variability in the target patient population and identifying influencing factors. It can thus be used to make predictions and to optimize dosage adjustments through Bayesian therapeutic drug monitoring (TDM). This approach has been used to characterize the pharmacokinetics of nevirapine (NVP) in 137 HIV-positive patients followed within the frame of a TDM program. Among the tested covariates, body weight, co-administration of a cytochrome P450 (CYP) 3A4 inducer or of boosted atazanavir, as well as elevated aspartate transaminases, showed an effect on NVP elimination. In addition, a genetic polymorphism in CYP2B6 was associated with reduced NVP clearance. Altogether, these factors could explain 26% of NVP variability. Model-based simulations were used to compare the adequacy of different dosage regimens with respect to the therapeutic target associated with treatment efficacy. In conclusion, the population approach is very useful for characterizing the pharmacokinetic profile of drugs in a population of interest. The quantification and identification of the sources of variability constitute a rational approach to making optimal dosage decisions for certain drugs administered chronically.
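To illustrate how such a covariate model translates into concentration predictions, here is a minimal sketch of a one-compartment oral model in which clearance scales with body weight and is reduced for a slow-metabolizing CYP2B6 genotype. All parameter values and effect sizes are hypothetical placeholders, not the published NVP model.

```python
import numpy as np

def nvp_concentration(t_h, dose_mg=200.0, weight_kg=70.0, cyp2b6_slow=False,
                      cl_typ=3.0, v_typ=100.0, ka=1.6):
    """Concentration-time profile for a one-compartment model with first-order
    absorption. Parameter values and covariate effects are illustrative only."""
    # Allometric scaling of clearance by body weight; reduced CL for slow CYP2B6 metabolizers
    cl = cl_typ * (weight_kg / 70.0) ** 0.75
    if cyp2b6_slow:
        cl *= 0.7                      # hypothetical 30% reduction in clearance
    ke = cl / v_typ                    # elimination rate constant (1/h)
    return (dose_mg * ka / (v_typ * (ka - ke))) * (np.exp(-ke * t_h) - np.exp(-ka * t_h))

# Predicted trough concentration 12 h after a 200 mg dose in a slow metabolizer
print(nvp_concentration(np.array([12.0]), cyp2b6_slow=True))
```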
Abstract:
This paper examines the statistical analysis of social reciprocity at the group, dyadic, and individual levels. Given that testing statistical hypotheses regarding social reciprocity can also be of interest, a statistical procedure based on Monte Carlo sampling has been developed and implemented in R in order to allow social researchers to describe groups and make statistical decisions.
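The general Monte Carlo logic can be sketched as follows, using a toy group-level reciprocity index and a simple null model that reallocates each actor's interactions at random among partners; the actual indices and null hypotheses implemented in the R procedure may differ.

```python
import numpy as np

def reciprocity_index(x):
    """Toy group-level reciprocity index: share of interactions that are reciprocated.
    (Illustrative only; the paper's own statistics may differ.)"""
    sym = np.minimum(x, x.T)            # reciprocated part of each dyad
    return sym.sum() / x.sum()

def monte_carlo_p_value(x, n_sim=10000, seed=0):
    """Compare the observed index against sociomatrices simulated under a null model
    that redistributes each actor's outgoing interactions at random among partners."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = x.shape[0]
    observed = reciprocity_index(x)
    count = 0
    for _ in range(n_sim):
        sim = np.zeros_like(x)
        for i in range(n):
            partners = [j for j in range(n) if j != i]
            # Multinomial reallocation of actor i's total outgoing interactions
            sim[i, partners] = rng.multinomial(int(x[i].sum()), [1 / (n - 1)] * (n - 1))
        if reciprocity_index(sim) >= observed:
            count += 1
    return (count + 1) / (n_sim + 1)
```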
Abstract:
A headspace solid-phase microextraction (HS-SPME) procedure was developed for the profiling of traces present in 3,4-methylenedioxymethamphetamine (MDMA). Traces were first extracted by HS-SPME and then analyzed by gas chromatography-mass spectrometry (GC-MS). The HS-SPME conditions were optimized by varying the extraction parameters. Optimal results were obtained when 40 mg of crushed MDMA sample was heated at 80 °C for 15 min, followed by extraction at 80 °C for 15 min with a polydimethylsiloxane/divinylbenzene coated fibre. A total of 31 compounds were identified as traces related to MDMA synthesis, namely precursors, intermediates or by-products. In addition, some fatty acids used as tabletting materials and caffeine used as an adulterant were also detected. The use of a restricted set of 10 target compounds was also proposed for developing a screening tool for clustering samples with close profiles. A total of 114 seizures were analyzed using an SPME auto-sampler (MultiPurpose Sampler MPS2), purchased from Gerstel GmbH & Co. (Germany) and coupled to the GC-MS. The data were handled using various pre-treatment methods, followed by the study of similarities between sample pairs based on the Pearson correlation. The results show that HS-SPME, coupled with a suitable statistical method, is a powerful tool for distinguishing between specimens coming from the same seizure and specimens coming from different seizures. This information can be used by law enforcement personnel to map the ecstasy distribution network as well as clandestine tablet manufacturing.
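The comparison step can be illustrated with a short sketch that applies a simple pre-treatment to the target-compound peak areas and then computes the Pearson correlation between all sample pairs; the specific pre-treatment methods evaluated in the study are not reproduced here.

```python
import numpy as np

def profile_similarity(profiles):
    """Pairwise Pearson correlation between samples described by the peak areas of
    target compounds. The pre-treatment shown here (log transform and unit-norm
    scaling) is illustrative; the study compared several pre-treatment methods."""
    x = np.log1p(np.asarray(profiles, dtype=float))        # damp dominant peaks
    x = x / np.linalg.norm(x, axis=1, keepdims=True)       # normalise each sample
    return np.corrcoef(x)                                   # sample-by-sample correlations

# Two samples from the same seizure should correlate more strongly than an unrelated one
sims = profile_similarity([[120, 30, 0, 5], [115, 28, 1, 6], [10, 200, 40, 0]])
print(np.round(sims, 3))
```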
Abstract:
We evaluate the performance of different optimization techniques developed in the context of optical flow computation with different variational models. In particular, building on truncated Newton (TN) methods, which have been an effective approach for large-scale unconstrained optimization, we develop the use of efficient multilevel schemes for computing the optical flow. More precisely, we compare the performance of a standard unidirectional multilevel algorithm, called multiresolution optimization (MR/OPT), with a bidirectional multilevel algorithm, called full multigrid optimization (FMG/OPT). The FMG/OPT algorithm treats the coarse-grid correction as an optimization search direction and then scales it using a line search. Experimental results on different image sequences using four models of optical flow computation show that the FMG/OPT algorithm outperforms both the TN and MR/OPT algorithms in terms of computational work and the quality of the optical flow estimation.
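The core idea of the coarse-grid correction can be sketched as follows, assuming user-supplied restriction and prolongation operators between grids and generic fine- and coarse-level objectives; this is a generic sketch of the scheme, not the authors' optical-flow implementation.

```python
import numpy as np
from scipy.optimize import minimize, line_search

def mgopt_step(f_fine, grad_fine, x, restrict, prolong, f_coarse, grad_coarse):
    """One coarse-grid correction step in the spirit of FMG/OPT: solve a shifted coarse
    problem, prolong the resulting correction, and use it as a line-search direction."""
    x_c = restrict(x)
    # First-order consistency shift so the coarse model matches the restricted fine gradient
    v = grad_coarse(x_c) - restrict(grad_fine(x))
    shifted = lambda y: f_coarse(y) - v @ y
    shifted_grad = lambda y: grad_coarse(y) - v
    y_c = minimize(shifted, x_c, jac=shifted_grad, method="L-BFGS-B").x
    d = prolong(y_c - x_c)                       # coarse correction as search direction
    alpha = line_search(f_fine, grad_fine, x, d)[0]
    return x + (alpha if alpha is not None else 0.0) * d
```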
Abstract:
The book presents the state of the art in machine learning algorithms (artificial neural networks of different architectures, support vector machines, etc.) as applied to the classification and mapping of spatially distributed environmental data. Basic geostatistical algorithms are presented as well. New trends in machine learning and their application to spatial data are discussed, and real case studies based on environmental and pollution data are presented. The book provides a CD-ROM with the Machine Learning Office software, including sample data sets, that will allow both students and researchers to put the concepts quickly into practice.
Abstract:
Translator training involves the use of procedures and tools that allow students to become familiar with professional contexts. Specialised free software includes professional-quality tools and procedures that are accessible to academic institutions and to distance students working from home. Real projects that use free software and collaborative translation (crowdsourcing) constitute indispensable resources in translator training.
Abstract:
Driven by the need to differentiate themselves and face the competition, companies have committed to developing operations that deliver value to the customer, which is why many of them have seen lean tools as an opportunity to improve their operations. This improvement involves reducing money, people, large equipment, inventory and space, with two aims: eliminating waste and reducing variability. To achieve the company's strategic objectives, it is essential that they be aligned with middle-management plans and, in turn, with the work performed by employees, to ensure that every person is aligned in the same direction at the same time. This is the philosophy of strategic planning. One of the objectives of this project is therefore to develop a tool that makes it easier to set out the company's objectives and communicate them to all levels of the organization. Starting from those objectives, and taking as a reference the need to reduce inventories in the supply chain, a study of the production of a wind turbine control component will be carried out in order to level that production and reduce its finished-goods inventory. The specific objectives in this part are to reduce inventory by 28%, level production by reducing variability from 31% to 24%, maintain a maximum stock of 24 units while guaranteeing supply under variable demand, increase inventory turnover by 10%, and establish an action plan to reduce lead time by 40-50%. All of this will be made possible by drawing the present- and future-state value stream maps to eliminate waste and create continuous flow, and by sizing a supermarket that keeps stock at an optimal level.
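As an illustration of the kind of supermarket sizing calculation mentioned above, the following sketch applies a generic rule (demand during the replenishment lead time plus a safety margin); the figures and the exact method used in the project are not reproduced here.

```python
import math

def supermarket_size(avg_daily_demand, replenishment_lead_time_days,
                     safety_factor=0.15, container_size=1):
    """Generic finished-goods supermarket sizing rule (illustrative only):
    stock = demand during the replenishment lead time, plus a safety margin."""
    units = avg_daily_demand * replenishment_lead_time_days * (1 + safety_factor)
    kanbans = math.ceil(units / container_size)   # number of circulating kanban cards
    return math.ceil(units), kanbans

# Example: 3 units/day demand, 6-day replenishment loop, 15% safety margin
print(supermarket_size(3, 6, safety_factor=0.15, container_size=1))
```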
Abstract:
In today's digital era, the Internet is part of our lives and has brought changes to a globalised society. Some of these changes enable new ways of relating to one another and of managing knowledge, giving meaning to the term we now understand as the network society. Our surroundings are therefore full of global collaborative actions that foster communication and the sharing of information of many kinds, with the aim of learning and staying constantly informed. Schools in particular cannot stay on the sidelines, since they must prepare students for this society. These changes in society pose major challenges for schools that cannot be tackled from the classroom alone. Schools need to adapt to a model compatible with the network society, and a networked-school model is therefore proposed, with an organisational structure compatible with the era we are immersed in. Collaboration networks in schools make it possible to exchange information and bring value to education with the aim of educational improvement. In this sense, schools must have characteristics that allow them to be flexible and to adapt to the agents and organisations around them. But the current structure of a school is rigid, and this evolution therefore represents one of the greatest challenges for the education system. Along these lines, vocational training centres show a trend towards collaborative models with local businesses, among other agents, and it is on this point that this project focuses its research: more precisely, on the creation of a collaboration network with the agent selected by the school. ICT plays an essential role and must be put at the service of the problem outlined above in order to help solve it. In this sense, designing the artefact with free software is appropriate, since it brings multiple benefits for this goal; the most important, in my view, is its link with the philosophy of sharing knowledge, which guarantees symbiosis with the collaborative network, and this is why the research topic is relevant for the school. As mentioned above, ICT can help foster the collaborative network, but it is not only the ICT artefact produced in this project that must meet characteristics such as flexibility; it is also critical that the school and the agents in the network internalise the collaborative culture in their actions, with the required involvement and commitment. As one can imagine, this cultural change is not a simple task and presents problems. To mitigate them and foster a networked culture, specific processes are needed to embed it as far as possible. For this, the combination of systemic innovation and design-based research in education are appropriate methodologies. Throughout this process we will therefore investigate how collaboration networks and free software make it possible to adapt the school to its environment, and how they can help the school strengthen vocational training and guarantee the durability of the actions, so that the knowledge and the collaboration network itself endure for educational improvement.
Abstract:
This project seeks to analyze, design and implement a new telephony solution for the Centro Social de Oficiales of the Policía Nacional, considering the option of migrating to a VoIP system based on free software with Asterisk. Accordingly, current technologies must be evaluated with the aim of providing new features in the telephone service at low implementation, operation and maintenance costs.
Abstract:
As a result of forensic investigations of problems across Iowa, a research study was developed aimed at providing solutions to the identified problems through better management and optimization of the available pavement geotechnical materials and through ground improvement, soil reinforcement, and other soil treatment techniques. The overall goal was pursued through simple laboratory experiments, such as particle size analysis, plasticity tests, compaction tests, permeability tests, and strength tests. A review of the problems suggested three areas of study: pavement cracking due to improper management of pavement geotechnical materials, permeability of mixed subgrade soils, and settlement of soil above pipes due to improper compaction of the backfill. This resulted in the following three areas of study: (1) the optimization and management of earthwork materials through general soil mixing of various select and unsuitable soils, with a specific example of optimization of materials in earthwork construction by soil mixing; (2) an investigation of the saturated permeability of compacted glacial till in relation to validation and prediction with the Enhanced Integrated Climatic Model (EICM); and (3) a field investigation and numerical modeling of culvert settlement. For each area of study, a literature review was conducted, research data were collected and analyzed, and important findings and conclusions were drawn. It was found that optimum mixtures of select and unsuitable soils can be defined that allow the use of unsuitable materials in embankment and subgrade locations. An improved model of saturated hydraulic conductivity was proposed for use with glacial soils from Iowa. The use of proper trench backfill compaction or the use of flowable mortar will reduce the potential for developing a bump above culverts.
Abstract:
This manual describes how to use the Iowa Bridge Backwater software. It also documents the methods and equations used for the calculations. The main body describes how to use the software and the appendices cover technical aspects. The Bridge Backwater software performs 5 main tasks: Design Discharge Estimation; Stream Rating Curves; Floodway Encroachment; Bridge Backwater; and Bridge Scour. The intent of this program is to provide a simplified method for analysis of bridge backwater for rural structures located in areas with low flood damage potential. The software is written in Microsoft Visual Basic 6.0. It will run under Windows 95 or newer versions (i.e. Windows 98, NT, 2000, XP and later).
Abstract:
The objective of this work was to develop a genetic transformation system for tropical maize genotypes via particle bombardment of immature zygotic embryos. Particle bombardment was carried out using a genetic construct with the bar and uidA genes under the control of the CaMV35S promoter. The best conditions for transforming the tropical maize inbred lines L3 and L1345 were obtained when immature embryos were cultivated, prior to bombardment, at higher osmolarity for 4 hours and bombarded at a helium gas acceleration pressure of 1,100 psi, with two shots per plate and a microcarrier flying distance of 6.6 cm. Transformation frequencies obtained under these conditions ranged from 0.9 to 2.31%. Integration of the foreign genes into the genome of maize plants was confirmed by Southern blot analysis as well as by bar and uidA gene expression. The maize genetic transformation protocol developed in this work may improve the efficiency of producing new transgenic tropical maize lines expressing desirable agronomic characteristics.
Abstract:
Nowadays, information technologies are used in every area of business, from management systems (ERPs) to document management and information analysis with Business Intelligence systems; they can even become a whole new platform that provides companies with new sales channels, as is the case with the Internet. The motivation for this final degree project (TFC) arises from our client's initial need to start expanding through a new sales channel in order to reach new markets and diversify its customer base. Given the current characteristics of information technologies and the Internet, the two form a perfect pairing for defining this TFC, which covers all the aspects needed to arrive at a final product: a real-estate web portal adapted to the requirements demanded by today's Internet users.
Abstract:
This thesis is devoted to the analysis, modelling and visualisation of spatially referenced environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence concerned in particular with the development of techniques and algorithms that allow a machine to learn from data. In this thesis, machine learning algorithms are adapted to be applied to environmental data and to spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modelling tools. They can solve classification, regression and probability density modelling problems in high-dimensional spaces composed of spatially referenced informative variables ("geo-features") in addition to geographical coordinates. Moreover, they are ideal for implementation as decision-support tools for environmental questions ranging from pattern recognition to modelling and prediction, including automatic mapping. Their efficiency is comparable to geostatistical models in the space of geographical coordinates, but they are indispensable for high-dimensional data that include geo-features. The most important and most popular machine learning algorithms are presented theoretically and implemented as software for the environmental sciences. The main algorithms described are the MultiLayer Perceptron (MLP), the best-known algorithm in artificial intelligence, General Regression Neural Networks (GRNN), Probabilistic Neural Networks (PNN), Self-Organized Maps (SOM), Gaussian Mixture Models (GMM), Radial Basis Function Networks (RBF) and Mixture Density Networks (MDN). This range of algorithms covers varied tasks such as classification, regression and probability density estimation. Exploratory Data Analysis (EDA) is the first step of any data analysis. In this thesis, the concepts of Exploratory Spatial Data Analysis (ESDA) are treated both according to the traditional geostatistical approach, based on experimental variography, and according to machine learning principles. Experimental variography, which studies the relationships between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and makes it possible to detect the presence of spatial patterns describable by a statistic. The machine learning approach to ESDA is presented through the application of the k-nearest neighbours method, which is very simple and has excellent interpretation and visualisation properties. An important part of the thesis deals with a topical subject: the automatic mapping of spatial data. General regression neural networks are proposed to solve this task efficiently.
The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, for which the GRNN significantly outperformed all the other methods, particularly in emergency situations. The thesis is composed of four chapters: theory, applications, software tools and guided examples. An important part of the work consists of a collection of software tools: Machine Learning Office. This software collection has been developed over the last 15 years and has been used to teach numerous courses, including international workshops in China, France, Italy, Ireland and Switzerland, as well as in fundamental and applied research projects. The case studies considered cover a wide spectrum of real low- and high-dimensional geo-environmental problems, such as air, soil and water pollution by radioactive products and heavy metals, the classification of soil types and hydrogeological units, uncertainty mapping for decision support and the assessment of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualisation have also been developed, with care taken to create a user-friendly, easy-to-use interface.
Machine Learning for geospatial data: algorithms, software tools and case studies
Abstract: The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense machine learning can be considered as a subfield of artificial intelligence. It is mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions for classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to be implemented as predictive engines in decision support systems, for the purposes of environmental data mining including pattern recognition, modeling and predictions as well as automatic data mapping. They have competitive efficiency with respect to geostatistical models in low-dimensional geographical spaces but are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail: from the theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis.
In this thesis the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, such as experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations which helps to understand the presence of spatial patterns, at least as described by two-point statistics. A machine learning approach to ESDA is presented by applying the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a topical subject, namely the automatic mapping of geospatial data. General regression neural networks (GRNN) are proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where the GRNN model significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters and has the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools - Machine Learning Office. The Machine Learning Office tools were developed over the last 15 years and have been used for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for carrying out fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslides, avalanches) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
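To make the automatic-mapping idea concrete, the following is a minimal sketch of a GRNN predictor in the Nadaraya-Watson form used for spatial interpolation; the kernel bandwidth would in practice be tuned by cross-validation, and this is not the Machine Learning Office implementation from the thesis.

```python
import numpy as np

def grnn_predict(train_xy, train_z, query_xy, sigma=1.0):
    """General Regression Neural Network (Nadaraya-Watson kernel regression):
    each prediction is a Gaussian-kernel weighted average of the training values.
    `sigma` (the kernel bandwidth) is the only free parameter and is normally
    selected by cross-validation."""
    train_xy, query_xy = np.asarray(train_xy), np.asarray(query_xy)
    # Squared distances between every query point and every training point
    d2 = ((query_xy[:, None, :] - train_xy[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ np.asarray(train_z)) / w.sum(axis=1)

# Interpolate a value at (0.5, 0.5) from three scattered observations
print(grnn_predict([[0, 0], [1, 0], [0, 1]], [1.0, 2.0, 3.0], [[0.5, 0.5]], sigma=0.5))
```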