869 results for Constraint based modeling


Relevance:

80.00%

Publisher:

Abstract:

Image-based modeling of tumor growth combines methods from cancer simulation and medical imaging. In this context, we present a novel approach to adapt a healthy brain atlas to MR images of tumor patients. In order to establish correspondence between a healthy atlas and a pathologic patient image, tumor growth modeling is employed in combination with registration algorithms. In a first step, the tumor is grown in the atlas based on a new multi-scale, multi-physics model including growth simulation from the cellular level up to the biomechanical level, accounting for cell proliferation and tissue deformations. Large-scale deformations are handled with an Eulerian approach for finite element computations, which can operate directly on the image voxel mesh. Subsequently, dense correspondence between the modified atlas and the patient image is established using nonrigid registration. The method offers opportunities for atlas-based segmentation of tumor-bearing brain images as well as for improved patient-specific simulation and prognosis of tumor progression.
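
For illustration, a minimal sketch of the final nonrigid registration step is given below. It assumes SimpleITK with a B-spline transform and a mutual-information metric; the toolkit, parameters, and file names are assumptions for the sketch, not taken from the paper.

```python
# Minimal sketch of the nonrigid (B-spline) registration step, assuming SimpleITK;
# metric, optimizer settings, and file names are illustrative, not from the paper.
import SimpleITK as sitk

fixed = sitk.ReadImage("patient_t1.nii.gz", sitk.sitkFloat32)           # hypothetical patient MRI
moving = sitk.ReadImage("tumor_grown_atlas.nii.gz", sitk.sitkFloat32)   # atlas after tumor growth simulation

# Coarse B-spline transform defined over the fixed (patient) image domain.
transform = sitk.BSplineTransformInitializer(fixed, transformDomainMeshSize=[8, 8, 8])

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5, numberOfIterations=100)
reg.SetInitialTransform(transform, inPlace=False)

final_transform = reg.Execute(fixed, moving)

# Resample the modified atlas into patient space to obtain dense correspondence.
warped_atlas = sitk.Resample(moving, fixed, final_transform, sitk.sitkLinear, 0.0)
```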

Relevance:

80.00%

Publisher:

Abstract:

Results of studies of the static and dynamic dielectric properties of rod-like 4-n-octyloxy-4'-cyanobiphenyl (8OCB), with isotropic (I)–nematic (N)–smectic A (SmA)–crystal (Cr) mesomorphism, are presented, combined with measurements of the low-frequency nonlinear dielectric effect and heat capacity. The analysis is supported by derivative-based and distortion-sensitive transformations of the experimental data. Evidence for the I–N and N–SmA pretransitional anomalies, indicating the influence of tricritical behavior, is shown. It has also been found that neither the N phase nor the SmA phase is uniform, and hallmarks of fluid–fluid crossovers can be detected. The dynamics, tested via the evolution of the primary relaxation time, is clearly non-Arrhenius and is described by τ(T) = τ_c (T − T_C)^(−φ). In the immediate vicinity of the I–N transition a novel anomaly has been found: Δτ ∝ 1/(T − T*), where T* is the temperature of the virtual continuous transition and Δτ is the excess over the 'background behavior'. Experimental results are confronted with comprehensive Landau–de Gennes theory-based modeling.
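
As a hedged illustration of the kind of fit implied by the relation above, the sketch below fits τ(T) = τ_c (T − T_C)^(−φ) to relaxation-time data with scipy; the data values are invented placeholders, not measurements from the paper.

```python
# Illustrative fit of the critical-like relaxation-time relation tau(T) = tau_c * (T - T_C)**(-phi).
# The data points below are invented placeholders, not measurements from the paper.
import numpy as np
from scipy.optimize import curve_fit

def tau_model(T, tau_c, T_C, phi):
    return tau_c * (T - T_C) ** (-phi)

T_data = np.array([341.0, 343.0, 345.0, 348.0, 352.0])          # K, hypothetical
tau_data = np.array([2.1e-7, 1.3e-7, 9.5e-8, 6.8e-8, 4.9e-8])   # s, hypothetical

popt, pcov = curve_fit(tau_model, T_data, tau_data, p0=(1e-6, 339.0, 1.0))
tau_c, T_C, phi = popt
print(f"tau_c = {tau_c:.3e} s, T_C = {T_C:.2f} K, phi = {phi:.2f}")
```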

Relevance:

80.00%

Publisher:

Abstract:

It is expected that climate change will have significant impacts on ecosystems. Most model projections agree that the ocean will experience stronger stratification and a reduced nutrient supply from deep waters. These changes will likely affect marine phytoplankton communities and will thus impact the higher trophic levels of the oceanic food web. The potential consequences of future climate change for marine microbial communities can be investigated and predicted only with the help of mathematical models. Here we present the application of a model that describes aggregate properties of marine phytoplankton communities and captures the effects of a changing environment on their composition and adaptive capacity. Specifically, the model describes the phytoplankton community in terms of total biomass, mean cell size, and functional diversity. The model is applied to two contrasting regions of the Atlantic Ocean (tropical and temperate) and is tested under two emission scenarios: SRES A2, or "business as usual," and SRES B1, or "local utopia." We find that all three macroecological properties will decline during the next century in both regions, although this effect will be more pronounced in the temperate region. Consistent with previous model predictions, our results show that a simple trait-based modeling framework is a valuable tool for investigating how phytoplankton communities may reorganize under a changing climate.
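
To make the aggregate (trait-based) approach concrete, here is a generic moment-closure sketch in that spirit: the community is tracked by total biomass, mean cell size, and trait variance (a diversity proxy). All functional forms, forcing, and parameters are illustrative assumptions, not the authors' model.

```python
# Generic moment-closure sketch of a trait-based aggregate phytoplankton model:
# state = total biomass P, community mean (log) cell size mu, trait variance V.
# Functional forms, forcing, and parameters are illustrative, not the authors' model.
import numpy as np
from scipy.integrate import solve_ivp

r_max, w, m = 1.0, 1.0, 0.1          # max growth rate (1/d), niche width, mortality (1/d)

def s_opt(t):
    # Optimal cell size drifts downward as nutrient supply weakens (illustrative forcing).
    return 2.0 - 0.5 * (t / 365.0)

def growth(s, t):
    return r_max * np.exp(-(s - s_opt(t)) ** 2 / (2 * w ** 2))

def d_growth(s, t, eps=1e-4):
    return (growth(s + eps, t) - growth(s - eps, t)) / (2 * eps)

def dd_growth(s, t, eps=1e-4):
    return (growth(s + eps, t) - 2 * growth(s, t) + growth(s - eps, t)) / eps ** 2

def rhs(t, y):
    P, mu, V = y
    dP = (growth(mu, t) + 0.5 * V * dd_growth(mu, t) - m) * P - 0.05 * P ** 2  # density-limited biomass
    dmu = V * d_growth(mu, t)                       # mean trait follows the fitness gradient
    dV = V ** 2 * dd_growth(mu, t) + 1e-3           # selection erodes variance; small input keeps it > 0
    return [dP, dmu, dV]

sol = solve_ivp(rhs, (0.0, 365.0), [1.0, 2.0, 0.2], max_step=1.0)
print(sol.y[:, -1])   # biomass, mean size, trait variance after one year
```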

Relevance:

80.00%

Publisher:

Abstract:

Image-based modeling is a popular approach to perform patient-specific biomechanical simulations. Accurate modeling is critical for orthopedic applications such as evaluating implant designs and surgical planning. It has been shown that bone strength can be estimated from bone mineral density (BMD) and trabecular bone architecture. However, these findings cannot be directly and fully transferred to patient-specific modeling, since only BMD can be derived from clinical CT. Therefore, the objective of this study was to propose a method to predict the trabecular bone structure using a µCT atlas and an image registration technique. The approach was evaluated on femurs and patellae under physiological loading. The displacement and ultimate force for femurs loaded in the stance position were predicted with errors of 2.5% and 3.7%, respectively, while predictions obtained with an isotropic material resulted in errors of 7.3% and 6.9%. Similar results were obtained for the patella, where the strain predicted using the registration approach resulted in an improved mean squared error compared with the isotropic model. We conclude that the registration of anisotropic information from a single template bone enables more accurate patient-specific simulations from clinical image datasets than an isotropic model.
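
For context, image-based bone FE models commonly map CT density to material properties with a density-modulus power law and then modulate stiffness by trabecular orientation. The sketch below shows that generic idea; the coefficients and the linear fabric scaling are placeholders, not the relations calibrated in this study.

```python
# Illustrative mapping from CT-derived bone density to element material properties using a
# generic power law E = a * rho**b; coefficients and the fabric-based anisotropic scaling
# are placeholders, not the relations used in this study.
import numpy as np

a, b = 6850.0, 1.49          # hypothetical coefficients (E in MPa, density in g/cm^3)

def youngs_modulus(rho_app):
    """Isotropic Young's modulus from apparent density (g/cm^3)."""
    return a * np.power(rho_app, b)

def anisotropic_moduli(rho_app, fabric_eigenvalues):
    """Scale the isotropic modulus along principal fabric directions mapped from a
    template micro-CT (illustrative linear scaling)."""
    E_iso = youngs_modulus(rho_app)
    scale = np.asarray(fabric_eigenvalues) / np.mean(fabric_eigenvalues)
    return E_iso * scale

rho = np.array([0.3, 0.6, 1.2])                   # element densities, hypothetical
print(youngs_modulus(rho))                        # isotropic assignment
print(anisotropic_moduli(0.6, [1.4, 0.9, 0.7]))   # with registered trabecular orientation
```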

Relevance:

80.00%

Publisher:

Abstract:

Basalts drilled from the East Pacific Rise, OCP Ridge, and Siqueiros fracture zone during Leg 54 are texturally diverse. Dolerites are equigranular at Sites 422 and 428 and porphyritic, with phenocrysts of plagioclase (An69-73) and Ca-rich clinopyroxene (Ca42Mg48Fe10), at Site 427. The East Pacific Rise lavas and some of those from the OCP Ridge are fine-grained and porphyritic. The majority of the large crystals are clustered skeletal glomerocrysts of plagioclase (An64-77), together with olivine (Fo80-87), Ca-rich clinopyroxene, or both. Euhedral phenocrysts of plagioclase, together with olivine, Ca-rich clinopyroxene, and in some cases Cr-Al spinel, occur in most of the fine-grained lavas. These phenocrysts are small (maximum dimension <1 mm in all but one sample), sparse (combined modal amount <1% in all samples), and distinct from the megacrysts which characterize many ocean-floor lavas. In two East Pacific Rise lavas, zoned plagioclase (An83 cores) is the sole phenocryst phase. In other porphyritic lavas from all the main East Pacific Rise and OCP Ridge units drilled during Leg 54, the plagioclase phenocrysts contain cores of bytownite (An79-87) surrounded by more-sodic feldspar (An67-77). Core/rim relationships vary from continuous normal zoning, through discontinuous zoning, to extensive resorption of the calcic cores in some samples. The compositions of the plagioclase calcic cores are systematically related to those of the glomerophyric plagioclase and olivine in the lavas containing them. Furthermore, only one compositional population of calcic cores occurs in each rock. The possible causes of these relationships are far from clear. Magma mixing, although superficially applicable, is inconsistent with important aspects of the phenocryst mineralogy of these particular lavas. A more satisfactory model to explain both phenocryst zoning and rapid glomerocryst growth immediately before extrusion may be constructed by postulating an influx of water into the upwelling magmas within Layer 3 of the oceanic crust beneath the East Pacific Rise, and subsequent loss of part of this water during effervescence within feeder dykes between Layer 3 and the ocean floor. It is shown that this model is fully consistent with published data on water and carbon dioxide contents and ratios in the pillow-margin glasses, vesicles, and phenocryst inclusions of ocean-floor basalts. The evidence for the precipitation of plagioclase-dominated crystalline assemblages from these magmas in the upper part of Layer 3 is concordant with recent geophysically based modeling of the structure of the East Pacific Rise. Calcium-rich clinopyroxenes in dolerites from the OCP Ridge and Siqueiros fracture zone show radial, oscillatory, and sector zoning. In Sample 428A-5-2 (Piece 5a), the compositional trends resulting from this zoning closely resemble those of the pyroxenes in some lunar lavas. The controls on the crystallization of interstitial pigeonite - epitaxial upon augite - in this rock are discussed. Both sector zoning of the augite and nucleation of pigeonite within microvolumes of magma with a low Ca/(Mg + Fe) ratio appear to be important factors.

Relevance:

80.00%

Publisher:

Abstract:

Nowadays, with the ongoing and rapid evolution of information technology and computing devices, large volumes of data are continuously collected and stored in different domains and through various real-world applications. Extracting useful knowledge from such a huge amount of data usually cannot be performed manually, and requires the use of adequate machine learning and data mining techniques. Classification is one of the most important techniques and has been successfully applied to several areas. Roughly speaking, classification consists of two main steps: first, learn a classification model or classifier from available training data, and second, classify the new incoming unseen data instances using the learned classifier.
Classification is supervised when all class values are present in the training data (i.e., fully labeled data), semi-supervised when only some class values are known (i.e., partially labeled data), and unsupervised when all class values are missing from the training data (i.e., unlabeled data). In addition, besides this taxonomy, the classification problem can be categorized as uni-dimensional or multi-dimensional, depending on the number of class variables (one or more, respectively), or as stationary or streaming, depending on the characteristics of the data and the underlying rate of change. Throughout this thesis, we deal with the classification problem under three different settings, namely, supervised multi-dimensional stationary classification, semi-supervised uni-dimensional streaming classification, and supervised multi-dimensional streaming classification. To accomplish this task, we mainly use Bayesian network classifiers as models. The first contribution, addressing the supervised multi-dimensional stationary classification problem, consists of two new methods for learning multi-dimensional Bayesian network classifiers from stationary data. They are proposed from two different points of view. The first method, named CB-MBC, is based on a wrapper greedy forward selection approach, while the second one, named MB-MBC, is a filter constraint-based approach based on Markov blankets. Both methods are applied to two important real-world problems, namely, the prediction of human immunodeficiency virus type 1 (HIV-1) reverse transcriptase and protease inhibitors, and the prediction of the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39). The experimental study includes comparisons of CB-MBC and MB-MBC against state-of-the-art multi-dimensional classification methods, as well as against commonly used methods for solving the Parkinson's disease prediction problem, namely, multinomial logistic regression, ordinary least squares, and censored least absolute deviations. For both considered case studies, results are promising in terms of classification accuracy as well as regarding the analysis of the learned MBC graphical structures, which identify known and novel interactions among variables. The second contribution, addressing the semi-supervised uni-dimensional streaming classification problem, consists of a novel method (CPL-DS) for classifying partially labeled data streams. Data streams differ from stationary data sets in their very rapid generation process and their concept-drifting aspect. That is, the learned concepts and/or the underlying distribution are likely changing and evolving over time, which makes the current classification model out of date and requires it to be updated. CPL-DS uses the Kullback-Leibler divergence and a bootstrapping method to quantify and detect three possible kinds of drift: feature, conditional, or dual. Then, if any occurs, a new classification model is learned using the expectation-maximization algorithm; otherwise, the current classification model is kept unchanged. CPL-DS is general, as it can be applied to several classification models. Using two different models, namely, the naive Bayes classifier and logistic regression, CPL-DS is tested with synthetic data streams and applied to the real-world problem of malware detection, where newly received files must be continuously classified as malware or goodware.
Experimental results show that our approach is effective for detecting different kinds of drift from partially labeled data streams, while also achieving good classification performance. Finally, the third contribution, addressing the supervised multi-dimensional streaming classification problem, consists of two adaptive methods, namely, Locally Adaptive-MB-MBC (LA-MB-MBC) and Globally Adaptive-MB-MBC (GA-MB-MBC). Both methods monitor concept drift over time using the average log-likelihood score and the Page-Hinkley test. Then, if a drift is detected, LA-MB-MBC adapts the current multi-dimensional Bayesian network classifier locally around each changed node, whereas GA-MB-MBC learns a new multi-dimensional Bayesian network classifier from scratch. An experimental study carried out using synthetic multi-dimensional data streams shows the merits of both proposed adaptive methods.
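
As a hedged illustration of the drift-monitoring idea (not the thesis code), the sketch below implements a generic Page-Hinkley test that flags a sustained drop in a monitored score such as the average log-likelihood; the thresholds and synthetic scores are illustrative.

```python
# Generic Page-Hinkley monitor that flags a sustained decrease in a score stream
# (e.g., per-batch average log-likelihood). Thresholds delta and lam are illustrative;
# this is a sketch of the monitoring idea, not the thesis implementation.
class PageHinkley:
    def __init__(self, delta=0.005, lam=50.0):
        self.delta = delta      # tolerated magnitude of change
        self.lam = lam          # detection threshold
        self.mean = 0.0         # running mean of the score
        self.n = 0
        self.cum = 0.0          # cumulative deviation m_t
        self.min_cum = 0.0      # minimum of m_t seen so far

    def update(self, x):
        """Feed one score; return True when a downward drift is signalled."""
        self.n += 1
        self.mean += (x - self.mean) / self.n
        self.cum += self.mean - x - self.delta    # grows when x falls below the running mean
        self.min_cum = min(self.min_cum, self.cum)
        return self.cum - self.min_cum > self.lam

# Usage sketch with synthetic scores: stable around -1.0, then dropping to -3.0.
import random
random.seed(0)
ph = PageHinkley(delta=0.01, lam=5.0)
scores = [random.gauss(-1.0, 0.1) for _ in range(200)] + [random.gauss(-3.0, 0.1) for _ in range(50)]
for t, s in enumerate(scores):
    if ph.update(s):
        print("drift signalled at step", t)
        break
```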

Relevance:

80.00%

Publisher:

Abstract:

Testing is nowadays the most widely used technique to validate software and assess its quality. It is integrated into all practical software development methodologies and plays a crucial role in the success of any software project. From the smallest units of code to the most complex components, their integration into a software system, and later deployment, all pieces of a software product must be tested thoroughly before it can be released. The main limitation of software testing is that it remains a mostly manual task, representing a large fraction of the total development cost. In this scenario, test automation is paramount to alleviate such high costs. Test case generation (TCG) is the process of automatically generating test inputs that achieve high coverage of the system under test. Among a wide variety of approaches to TCG, this thesis focuses on structural (white-box) TCG, where one of the most successful enabling techniques is symbolic execution. In symbolic execution, the program under test is executed with its input arguments being symbolic expressions rather than concrete values. This thesis relies on a previously developed constraint-based TCG framework for imperative object-oriented programs (e.g., Java), in which the imperative program under test is first translated into an equivalent constraint logic program, which is then symbolically executed by relying on the standard evaluation mechanisms of Constraint Logic Programming (CLP), extended with special treatment for dynamically allocated data structures. Improving the scalability and efficiency of symbolic execution constitutes a major challenge. It is well known that symbolic execution quickly becomes impractical due to the large number of paths that must be explored and the size of the constraints that must be handled. Moreover, symbolic execution-based TCG tends to produce an unnecessarily large number of test cases when applied to medium or large programs. The contributions of this dissertation can be summarized as follows. (1) A compositional approach to CLP-based TCG is developed which overcomes inter-procedural path explosion by separately analyzing each component (method) of a program under test, storing the results as method summaries and incrementally reusing them to obtain whole-program results. A similar compositional strategy that relies on program specialization is also developed for the state-of-the-art symbolic execution tool Symbolic PathFinder (SPF).
(2) Resource-driven TCG is proposed as a methodology to use resource consumption information to drive symbolic execution towards those parts of the program under test that comply with a user-provided resource policy, avoiding the exploration of those parts of the program that violate such policy. (3) A generic methodology to guide symbolic execution towards the most interesting parts of a program is proposed, which uses abstractions as oracles to steer symbolic execution through those parts of the program under test that interest the programmer/tester most. (4) A new heap-constraint solver is proposed, which efficiently handles heap-related constraints and aliasing of references during symbolic execution and greatly outperforms the state-of-the-art standard technique known as lazy initialization. (5) All techniques above have been implemented in the PET system (and some of them in the SPF tool). Experimental evaluation has confirmed that they considerably help towards a more scalable and efficient symbolic execution and TCG.
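
A hedged, toy illustration of symbolic-execution-based test case generation is sketched below. It uses Z3 for constraint solving rather than the CLP machinery described above, and the function under test and its path conditions are invented for the example.

```python
# Toy illustration of symbolic-execution-based test case generation: enumerate the path
# conditions of a small invented function and ask a solver for one input per feasible path.
# Z3 is used here for constraint solving instead of the CLP machinery described above.
from z3 import Int, Solver, And, Not, sat

x = Int("x")

# Path conditions of:  def f(x): return "A" if x > 10 else ("B" if x % 2 == 0 else "C")
paths = {
    "A": [x > 10],
    "B": [Not(x > 10), x % 2 == 0],
    "C": [Not(x > 10), Not(x % 2 == 0)],
}

for label, conds in paths.items():
    s = Solver()
    s.add(And(*conds))
    if s.check() == sat:
        print(f"path {label}: test input x = {s.model()[x]}")
    else:
        print(f"path {label}: infeasible")
```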

Relevance:

80.00%

Publisher:

Abstract:

In the SESAR Step 2 concept of operations, an RBT is available and visible to all actors, making it possible to conceive an operating method, based on Collaborative Decision Making processes, that differs from the current ATM system. Currently there is a need to describe in more detail the mechanisms by which actors (ATC, Network Management, Flight Crew, airports and Airline Operation Centre) will negotiate revisions to the RBT. This paper introduces a negotiation model, which uses constraint-based programming applied to a mediator to facilitate the negotiation process in a SWIM-enabled environment. Three processes for modelling the negotiation are explained, and a preliminary reasoning-agent algorithm modelled as a constraint satisfaction problem is presented. The computational capability of the model is evaluated in the conclusion.
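
To make the mediator idea concrete, here is a hedged toy sketch of an RBT revision posed as a constraint satisfaction/optimization problem in OR-tools CP-SAT; the actors' constraints, bounds, and costs are invented for illustration and are not the paper's model.

```python
# Toy constraint-satisfaction sketch of an RBT revision negotiated through a mediator:
# choose a revised departure delay and flight level satisfying constraints contributed
# by Network Management, the airport, and the Airline Operation Centre. All bounds and
# costs below are invented for illustration, not taken from the paper.
from ortools.sat.python import cp_model

model = cp_model.CpModel()
delay = model.NewIntVar(0, 120, "departure_delay_min")   # minutes of delay in the revised RBT
level = model.NewIntVar(280, 400, "flight_level")        # flight level of the revised RBT

# Network Management: the congested sector is avoided either by delaying or by flying lower.
use_delay = model.NewBoolVar("use_delay")
model.Add(delay >= 20).OnlyEnforceIf(use_delay)
model.Add(level <= 320).OnlyEnforceIf(use_delay.Not())

# Airport: night curfew caps the acceptable delay.
model.Add(delay <= 45)

# Airline Operation Centre: prefers little delay and a level close to the filed FL360.
diff = model.NewIntVar(-120, 120, "level_minus_360")
model.Add(diff == level - 360)
level_dev = model.NewIntVar(0, 120, "level_deviation")
model.AddAbsEquality(level_dev, diff)
model.Minimize(3 * delay + level_dev)

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print("revised RBT: delay =", solver.Value(delay), "min, FL =", solver.Value(level))
```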

Relevance:

80.00%

Publisher:

Abstract:

We investigate optimal strategies to defend valuable goods against the attacks of a thief. Given the value of the goods and the probability of success for the thief, we look for the strategy that assures the largest benefit to each player irrespective of the strategy of his opponent. Two complementary approaches are used: agent-based modeling and game theory. It is shown that the compromise between the value of the goods and the probability of success defines the mixed Nash equilibrium of the game, which is compared with the results of the agent-based simulations and discussed in terms of the system parameters.
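
For readers unfamiliar with the game-theoretic side, the sketch below computes the mixed Nash equilibrium of a 2x2 attacker-defender game via the standard indifference condition; the payoff matrices are invented placeholders, not the paper's parametrization.

```python
# Toy sketch: mixed Nash equilibrium of a 2x2 attacker-defender game by the indifference
# condition (each player randomizes so that the opponent is indifferent between pure actions).
# The payoff matrices are invented placeholders, not the paper's parametrization.
import numpy as np

# Rows: defender guards / does not guard. Columns: thief attacks / stays away.
D = np.array([[ 4.0, 5.0],     # defender payoffs
              [-6.0, 6.0]])
T = np.array([[-3.0, 0.0],     # thief payoffs
              [ 8.0, 0.0]])

# Defender guards with probability p chosen so the thief is indifferent between columns:
#   p*T[0,0] + (1-p)*T[1,0] = p*T[0,1] + (1-p)*T[1,1]
p = (T[1, 1] - T[1, 0]) / (T[0, 0] - T[1, 0] - T[0, 1] + T[1, 1])
# Thief attacks with probability q chosen so the defender is indifferent between rows:
q = (D[1, 1] - D[0, 1]) / (D[0, 0] - D[0, 1] - D[1, 0] + D[1, 1])

print(f"defender guards with p = {p:.2f}, thief attacks with q = {q:.2f}")
```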

Relevance:

80.00%

Publisher:

Abstract:

This paper analyzes issues which appear when supporting pruning operators in tabled LP. A version of the once/1 control predicate tailored for tabled predicates is presented, and an implementation analyzed and evaluated. Using once/1 with answer-on-demand strategies makes it possible to avoid computing unneeded solutions for problems which can benefit from tabled LP but in which only a single solution is needed, such as model checking and planning. The proposed version of once/1 is also directly applicable to the efficient implementation of other optimizations, such as early completion, cut-fail loops (to, e.g., prune at the top level), if-then-else, and constraint-based branch-and-bound optimization. Although once/1 still presents open issues such as dependencies of tabled solutions on program history, our experimental evaluation confirms that it provides an arbitrarily large efficiency improvement in several application areas.

Relevance:

80.00%

Publisher:

Abstract:

The discovery of hyperthermophilic microorganisms and the analysis of hyperthermostable enzymes has established the fact that multisubunit enzymes can survive for prolonged periods at temperatures above 100°C. We have carried out homology-based modeling and direct structure comparison on the hexameric glutamate dehydrogenases from the hyperthermophiles Pyrococcus furiosus and Thermococcus litoralis whose optimal growth temperatures are 100°C and 88°C, respectively, to determine key stabilizing features. These enzymes, which are 87% homologous, differ 16-fold in thermal stability at 104°C. We observed that an intersubunit ion-pair network was substantially reduced in the less stable enzyme from T. litoralis, and two residues were then altered to restore these interactions. The single mutations both had adverse effects on the thermostability of the protein. However, with both mutations in place, we observed a fourfold improvement of stability at 104°C over the wild-type enzyme. The catalytic properties of the enzymes were unaffected by the mutations. These results suggest that extensive ion-pair networks may provide a general strategy for manipulating enzyme thermostability of multisubunit enzymes. However, this study emphasizes the importance of the exact local environment of a residue in determining its effects on stability.

Relevance:

80.00%

Publisher:

Abstract:

Introduction: Most actions to promote leisure-time physical activity in populations have shown small or null effect sizes, or inconsistent results. Addressing the problem from a systems perspective may be one way to overcome this mismatch. Objective: To develop an agent-based model to investigate how population patterns of leisure-time physical activity in adults form and evolve from the interaction between psychological attributes of individuals and attributes of the built and social environments in which they live. Methods: The modeling process comprised three stages: development of a conceptual map, based on a literature review and consultation with experts; creation and verification of the model algorithm; and parameterization and consistency and sensitivity analyses. The results of the literature review were consolidated and reported according to the search domains (psychological aspects, social environment, and built environment). The quantitative results of the expert consultation were described using frequencies, and the content of the open-ended responses was analyzed and compiled by the author of this thesis. The model algorithm was created in NetLogo, version 5.2.1, following a verification protocol to ensure that the algorithm was implemented accurately. In the consistency and sensitivity analyses, the Vargha-Delaney A test, partial rank correlation coefficients, boxplots, and line and scatter plots were used. Results: The elements of the conceptual map were defined as the person's intention, the behavior of close contacts and of the community, and the perceived quality of, access to, and activities available at the places where leisure-time physical activity can be practiced. The model represents a hypothetical community containing two types of agents: people and places where leisure-time physical activity can be practiced. People interact with each other and with the built environment, generating population-level temporal trends in leisure-time physical activity and intention. The sensitivity analyses indicated that the temporal trends of leisure-time physical activity and intention are highly sensitive to the influence of a person's current behavior on their future intention, to the size of the person's perception radius, and to the proportion of places where leisure-time physical activity can be practiced. Final considerations: The conceptual map and the agent-based model proved adequate for investigating how population patterns of leisure-time physical activity in adults form and evolve. In the model, the influence of a person's behavior on their intention, the size of the person's perception radius, and the proportion of places where leisure-time physical activity can be practiced are important determinants of how population patterns of leisure-time physical activity among adults form and evolve.
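
As a hedged illustration of this kind of agent-based model (a simplified re-implementation sketch in Python, not the NetLogo model itself), the code below lets people update intention from their own behavior, from active neighbors within a perception radius, and from perceived access to activity venues; all rules and parameter values are assumptions.

```python
# Simplified sketch of an agent-based model of leisure-time physical activity:
# people update intention from their own behavior and from active people and activity
# venues within a perception radius. Rules and parameters are illustrative assumptions,
# not those of the NetLogo model described above.
import random

random.seed(1)
SIZE, N_PEOPLE, N_PLACES = 50, 300, 25
RADIUS = 5                      # perception radius
SELF_INFLUENCE = 0.3            # weight of own current behavior on future intention

class Person:
    def __init__(self):
        self.x, self.y = random.uniform(0, SIZE), random.uniform(0, SIZE)
        self.intention = random.random()
        self.active = False

places = [(random.uniform(0, SIZE), random.uniform(0, SIZE)) for _ in range(N_PLACES)]
people = [Person() for _ in range(N_PEOPLE)]

def near(ax, ay, bx, by):
    return (ax - bx) ** 2 + (ay - by) ** 2 <= RADIUS ** 2

for step in range(52):          # one year of weekly steps
    for p in people:
        neighbours = [q for q in people if q is not p and near(p.x, p.y, q.x, q.y)]
        social = sum(q.active for q in neighbours) / len(neighbours) if neighbours else 0.0
        access = 1.0 if any(near(p.x, p.y, px, py) for px, py in places) else 0.0
        # Intention blends own behavior, neighbours' behavior, and perceived access.
        p.intention = (SELF_INFLUENCE * p.active
                       + (1 - SELF_INFLUENCE) * (0.5 * social + 0.3 * access + 0.2 * p.intention))
        p.active = random.random() < p.intention

print("share active after one year:", sum(p.active for p in people) / N_PEOPLE)
```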

Relevance:

80.00%

Publisher:

Abstract:

Fibre overlay is a cost-effective technique to alleviate wavelength blocking in some links of a wavelength-routed optical network by increasing the number of wavelengths in those links. In this letter, we investigate the effects of overlaying fibre in an all-optical network (AON) based on the GÉANT2 topology. The constraint-based routing and wavelength assignment (CB-RWA) algorithm locates where cost-efficient upgrades should be implemented. Through numerical examples, we demonstrate that the network capacity improves by 25 per cent by overlaying fibre on 10 per cent of the links, and by 12 per cent by providing hop-reduction links comprising 2 per cent of the links. For the upgraded network, we also show the impact of dynamic traffic allocation on the blocking probability. Copyright © 2010 John Wiley & Sons, Ltd.
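
A hedged toy sketch of the wavelength-continuity constraint that such RWA algorithms must respect is given below (first-fit assignment on a fixed route); the route, wavelength count, and occupancy data are invented and this is not the CB-RWA algorithm itself.

```python
# Toy first-fit wavelength assignment under the wavelength-continuity constraint:
# a lightpath needs the same free wavelength on every link of its route. The route,
# number of wavelengths, and occupancy are invented for illustration (not CB-RWA).
N_WAVELENGTHS = 8

# Occupied wavelength indices per link of a hypothetical route A-B-C-D.
occupied = {
    ("A", "B"): {0, 1, 3},
    ("B", "C"): {1, 2},
    ("C", "D"): {0, 2, 3},
}

def first_fit(route_links):
    """Return the lowest wavelength free on all links of the route, or None (blocking)."""
    for w in range(N_WAVELENGTHS):
        if all(w not in occupied[link] for link in route_links):
            return w
    return None

route = [("A", "B"), ("B", "C"), ("C", "D")]
w = first_fit(route)
print("blocked" if w is None else f"assign wavelength {w}")   # wavelength 4 here
```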


Relevance:

80.00%

Publisher:

Abstract:

Industry practitioners are seeking to create optimal logistics networks through more efficient decision-making, leading to a shift of power from a centralized position to a more decentralized approach. This has led researchers to explore, with vigor, the application of agent-based modeling (ABM) in supply chains and, more recently, its impact on decision-making. This paper investigates the reasons for the shift to decentralized decision-making and its impact on supply chains. Effective decentralization of decision-making with ABM and hybrid modeling is investigated, observing the methods and the potential for achieving optimality.