939 results for PARTITIONING


Abstract:

The application of pervasive computing is extending from field-specific settings to everyday use. The Internet of Things (IoT) is the most prominent example of its application and of its intrinsic complexity compared with classical application development. The main characteristic that differentiates pervasive computing from other forms of computing lies in the use of contextual information. Classical applications either use no contextual information at all or use only a small part of it, integrated in an ad hoc fashion through an application-specific implementation. This one-off treatment stems from the difficulty of sharing context across applications. In fact, what counts as contextual information depends on the type of application: for an image editor, the image is the information and its metadata, such as the time of the shot or the camera settings, are the context, whereas for a file system the image together with the camera settings is the information, and the context is the metadata external to the file, such as the modification date or the last-access timestamp. This means that contextual information is hard to share, and a communication middleware that explicitly supports context decidedly eases application development in pervasive computing.

However, the use of context should not be mandatory; otherwise the middleware would be reduced to a context middleware and would no longer be compatible with applications that do not use context. SilboPS, our implementation of content-based publish/subscribe inspired by SIENA [11, 9], solves this problem by adding two new elements to the paradigm: the Context and the Context Function. The Context represents the contextual information attached to the message being sent, or that required by the subscriber in order to receive notifications, whereas the Context Function is evaluated over the publisher's and the subscriber's contexts to decide whether the current message and context are useful to the subscriber. This decouples the context-management logic from the context-function logic, increasing the flexibility of communication across different applications. Since the default context is empty, context-aware and classical applications can use the same SilboPS, resolving the syntactic mismatch between the two categories. A semantic mismatch may nevertheless remain, since it depends on how each application interprets the data and cannot be resolved by an agnostic third party.

The IoT environment brings not only context challenges but also scaling challenges. The number of sensors, the volume of data they produce and the number of applications that could be interested in harvesting those data keep growing. Today's response to that need is cloud computing, but it requires applications that can not only scale but do so elastically [22]. Unfortunately, there is no distributed-system slicing primitive that supports internal state partitioning [33] together with hot swapping, and current cloud systems such as OpenStack or OpenNebula do not provide elastic monitoring out of the box. This means there is a two-sided problem: 1) how to scale an application elastically, and 2) how to monitor that application to know when it should scale in or out.

E-SilboPS is the elastic version of SilboPS. It fits the monitoring problem naturally thanks to its content-based publish/subscribe nature and, unlike other solutions [5], it scales efficiently, meeting workload demand without overprovisioning or underprovisioning resources. It is also based on a newly designed algorithm that shows how to add elasticity to an application under different state constraints: stateless, isolated stateful with external coordination, and shared stateful with general coordination. Its evaluation shows that remarkable speedups can be achieved, with the network layer as the main limiting factor: the calculated efficiency (see Figure 5.8) shows how each configuration performs with respect to adjacent configurations. This provides insight into the current trend of the whole system, in order to predict whether the next configuration would offset its cost with the resulting gain in notification throughput. Particular attention has been paid to evaluating same-cost deployments, to determine which one is best for a given workload. Finally, the overhead introduced by the different configurations has been estimated in order to identify the primary factor limiting throughput. This helps to determine the intrinsic sequential part and base overhead [26] of an optimal versus a suboptimal deployment. Depending on the type of workload, the estimate can be as low as 10% at a local optimum or as high as 60% when a configuration is overprovisioned for the workload at hand. This Karp-Flatt metric estimate is important for the management system because it indicates the direction (scale in or out) in which the deployment must be changed to improve performance, instead of simply applying a scale-out policy.
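SilboPS's actual interface is not shown in this abstract; the sketch below is a minimal Python illustration of the Context/Context Function idea described above, with all names (Subscription, deliver, context_fn and so on) hypothetical. The point is that the content filter and the context check are evaluated independently, and an empty default context with an always-true function keeps classical subscribers working unchanged.

```python
# Hypothetical sketch of content-based matching extended with a context
# function, in the spirit of the Context / Context Function idea above.
# Names and structure are illustrative, not the SilboPS API.

def content_match(notification: dict, filter_: dict) -> bool:
    """Classic content-based match: every filter constraint must hold."""
    return all(
        attr in notification and predicate(notification[attr])
        for attr, predicate in filter_.items()
    )

def deliver(notification: dict, pub_ctx: dict, subscription) -> bool:
    """Deliver only when the content filter matches AND the context
    function accepts the publisher/subscriber context pair."""
    return (content_match(notification, subscription.filter)
            and subscription.context_fn(pub_ctx, subscription.context))

class Subscription:
    def __init__(self, filter_, context=None, context_fn=None):
        self.filter = filter_
        # An empty default context plus an always-true function keeps
        # classical (non-context-aware) subscribers compatible.
        self.context = context or {}
        self.context_fn = context_fn or (lambda pub, sub: True)

# Usage: a subscriber only interested in high temperature readings taken
# in its own room; the room is context, carried apart from the content.
sub = Subscription(
    filter_={"type": lambda v: v == "temperature",
             "value": lambda v: v > 30},
    context={"room": "A-101"},
    context_fn=lambda pub, sub: pub.get("room") == sub.get("room"),
)
print(deliver({"type": "temperature", "value": 42},
              {"room": "A-101"}, sub))   # True
```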

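For reference, the Karp-Flatt metric named above is the experimentally determined serial fraction. The snippet below applies its standard textbook definition; the thesis' own measurement procedure is not shown in this abstract.

```python
def karp_flatt(speedup: float, p: int) -> float:
    """Experimentally determined serial fraction (Karp-Flatt metric):
    e = (1/psi - 1/p) / (1 - 1/p), where psi is the measured speedup
    on p processors. An e that grows with p signals overhead (e.g. the
    network layer) rather than inherent serial work."""
    assert p > 1, "needs at least two processors"
    return (1.0 / speedup - 1.0 / p) / (1.0 - 1.0 / p)

# Example: scaling a deployment to 4 servers but only measuring a 2.5x
# speedup gives e = (1/2.5 - 1/4) / (1 - 1/4) = 0.2, i.e. an estimated
# 20% sequential-plus-overhead fraction.
print(karp_flatt(2.5, 4))  # ~0.2
```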
Abstract:

Martensitic transformation (MT), in a narrow sense, is defined as a change of the crystal structure to form a coherent phase, or multi-variant domain structures, out of a parent phase with the same composition, by small shuffles or cooperative movements of atoms. Over the past century, MTs have been discovered in materials ranging from steels to shape memory alloys, ceramics and smart materials. They lead to remarkable properties such as high strength, shape memory/superelasticity effects or ferroic functionalities including piezoelectricity, electro- and magnetostriction, etc. Various theories/models have been developed, in synergy with the development of solid-state physics, to understand why MTs can generate such rich microstructures and give rise to intriguing properties.

Among the well-established theories, the Phenomenological Theory of Martensitic Crystallography (PTMC) predicts the habit plane and the orientation relationship between austenite and martensite. The reinterpretation of the PTMC theory within a continuum-mechanics framework (CM-PTMC) explains the formation of the multivariant domain structures, while the Landau theory with inertial dynamics unravels the physical origins of precursors and other dynamic behaviors. Crystal lattice dynamics unveils the acoustic softening of the lattice strain waves leading to the weak first-order displacive transformation. Though differing in statics or dynamics owing to their origins in different branches of physics (e.g. continuum mechanics or crystal lattice dynamics), these theories should be inherently connected with each other and show certain elements in common within a unified perspective of physics. However, the physical connections and distinctions among the theories/models have not been addressed yet, although they are critical to further improving the models of MTs and to developing integrated models for more complex displacive-diffusive coupled transformations.

This thesis therefore started with two objectives. The first was to reveal the physical connections and distinctions among the models of MT by means of detailed theoretical analyses and numerical simulations. The second was to expand the Landau model to be able to study MTs in polycrystals, in the case of displacive-diffusive coupled transformations, and in the presence of dislocations.

Starting with a comprehensive review, the physical kernels of the current models of MTs are presented. Their ability to predict MTs is clarified by means of theoretical analyses and simulations of the microstructure evolution of cubic-to-tetragonal and cubic-to-trigonal MTs in 3D. This analysis reveals that the Landau model with irreducible representation of the transformation strain is equivalent to the CM-PTMC theory and the microelasticity model in predicting the static features of MTs, but provides a better interpretation of the dynamic behaviors. However, applications of the Landau model in structural materials are limited due to its complexity. Thus, the first result of this thesis is the development of a nonlinear Landau model with irreducible representation of strains and inertial dynamics for polycrystals. The simulation demonstrates that the updated model is physically consistent with CM-PTMC in statics, and also predicts a classical 'C-shaped' phase diagram of martensitic nucleation modes activated by the combination of quenching temperature and applied stress interplaying with the Landau transformation energy.

Next, the Landau model of MT is integrated with a quantitative diffusional transformation model to elucidate the atomic relaxation and short-range diffusion of elements during the MT in steel. The model for displacive-diffusive transformations includes the effects of grain-boundary relaxation for heterogeneous nucleation and the spatio-temporal evolution of diffusion potentials and chemical mobilities, by coupling with a CALPHAD-type thermo-kinetic calculation engine and database. The model is applied to study the microstructure evolution of polycrystalline carbon steels processed by Quenching and Partitioning (Q&P) in 2D. The simulated mixed microstructure and composition distribution are compared with available experimental data. The results show the important role played by the differences in diffusion mobility between austenite and martensite in the partitioning of carbon in steels.

Finally, a multi-field model is proposed by incorporating a coarse-grained dislocation model into the developed Landau model to account for the morphological differences between steels and shape memory alloys with the same symmetry breaking. Dislocation nucleation, the formation of 'butterfly' martensite and the redistribution of carbon after tempering are well represented in 2D simulations of the microstructure evolution of representative steels. With these simulations we demonstrate that including dislocations accounts for the experimental observations of rough twin boundaries, retained austenite within martensite, etc. in steels. Thus, based on the integrated model and the in-house codes developed in this thesis, a preliminary multi-field, multiscale modeling tool has been built. The new tool couples thermodynamics and continuum mechanics at the macroscale with diffusion kinetics and phase-field/Landau models at the mesoscale, and also includes the essentials of crystallography and crystal lattice dynamics at the microscale.
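The thesis' actual Landau functional is not reproduced in this abstract. As background, the generic one-order-parameter 2-4-6 Landau expansion below is the textbook starting point for weak first-order displacive transformations of the kind discussed; its form and coefficients are illustrative assumptions, not the thesis model.

```latex
% Generic 2-4-6 Landau expansion in a scalar strain order parameter \eta
% (textbook form, not the thesis functional):
F(\eta, T) \;=\; \tfrac{a}{2}\,(T - T_0)\,\eta^{2}
            \;-\; \tfrac{b}{4}\,\eta^{4}
            \;+\; \tfrac{c}{6}\,\eta^{6},
\qquad a,\, b,\, c > 0 .
% Because the quartic term is negative, austenite (\eta = 0) and
% martensite (\eta \neq 0) coexist as local minima over a range of
% temperatures, making the transformation first order; adding an
% inertial (kinetic) term \tfrac{\rho}{2}\,\dot{\eta}^{2} yields a
% "Landau theory with inertial dynamics" of the kind referred to above.
```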

Abstract:

Fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. To produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which designers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the platform's slower clock frequencies and less efficient area utilization with respect to ASICs. As FPGAs become commonly used for scientific computation, designs grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization-noise modeling and word-length optimization methodologies.

In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on interval extensions have made it possible to obtain accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) methodology in order to model systems that contain control-flow structures. Our methodology generates the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the statistical moments of the system from these partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by as little as 0.04% from the simulation-based reference values.

A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals of the system, introduces the noise sources for each group separately, and finally combines the results. In this way the number of noise sources is kept under control at all times and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, in order to keep the results as accurate as possible.

This thesis also covers the development of word-length optimization methodologies based on Monte Carlo simulations that run in reasonable times. We present two novel techniques that explore the reduction of execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is then used to guide the optimization stage. Second, the incremental method revolves around the fact that, although we strictly need to maintain a given confidence interval for the final results of the search, we can use more relaxed confidence levels, which imply considerably fewer samples per simulation, in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small/medium-sized problems.

Finally, this book introduces HOPLITE, an automated, flexible and modular quantization framework that includes the implementation of the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new methodologies for system modeling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions can be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
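Affine arithmetic, which underlies the MAA models discussed above, represents each signal as x0 + Σ xi·εi with noise symbols εi ∈ [−1, 1]; shared symbols preserve correlation between signals, which is exactly what plain interval arithmetic loses. The following is a minimal illustrative sketch, not the thesis' statistical MAA implementation.

```python
import itertools

# Minimal affine-arithmetic sketch: a value is center + sum(coef * eps_i)
# with eps_i in [-1, 1]. Shared noise symbols track correlation, which is
# what lets such models follow quantization noise through a datapath.
_fresh = itertools.count()  # generator of fresh noise-symbol ids

class Affine:
    def __init__(self, center, terms=None):
        self.center = center
        self.terms = dict(terms or {})   # noise symbol id -> coefficient

    @classmethod
    def from_interval(cls, lo, hi):
        mid, rad = (lo + hi) / 2.0, (hi - lo) / 2.0
        return cls(mid, {next(_fresh): rad})

    def __add__(self, other):
        t = dict(self.terms)
        for k, v in other.terms.items():
            t[k] = t.get(k, 0.0) + v
        return Affine(self.center + other.center, t)

    def __sub__(self, other):
        t = dict(self.terms)
        for k, v in other.terms.items():
            t[k] = t.get(k, 0.0) - v
        return Affine(self.center - other.center, t)

    def interval(self):
        rad = sum(abs(c) for c in self.terms.values())
        return (self.center - rad, self.center + rad)

x = Affine.from_interval(1.0, 3.0)
print((x - x).interval())   # (0.0, 0.0): correlation preserved
# Plain interval arithmetic would have returned (-2.0, 2.0) here.
```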

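The saving behind the incremental method above can be made concrete with the usual normal-approximation sample-size bound: for a fixed confidence-interval half-width, the required number of Monte Carlo samples grows with the square of the z-score of the confidence level. The numbers below are illustrative, not the thesis' actual schedule.

```python
from statistics import NormalDist

def samples_needed(confidence: float, sigma: float, eps: float) -> int:
    """Samples for a two-sided CI of half-width eps on a mean with
    standard deviation sigma: n ~ (z * sigma / eps)^2 under the usual
    normal approximation."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2.0)  # two-sided z-score
    return int((z * sigma / eps) ** 2) + 1

sigma, eps = 1.0, 0.01
for conf in (0.80, 0.95, 0.99):
    print(conf, samples_needed(conf, sigma, eps))
# 0.80 -> ~16424, 0.95 -> ~38415, 0.99 -> ~66349: early search iterations
# run at relaxed confidence need well under half the samples of a 99% run.
```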
Abstract:

The important role of furin in the proteolytic activation of many pathogenic molecules has made this endoprotease a target for the development of potent and selective antiproteolytic agents. Here, we demonstrate the utility of the protein-based inhibitor α1-antitrypsin Portland (α1-PDX) as an antipathogenic agent that can be used prophylactically to block furin-dependent cell killing by Pseudomonas exotoxin A. Biochemical analysis of the specificity of a bacterially expressed His- and FLAG-tagged α1-PDX (α1-PDX/hf) revealed the selectivity of the α1-PDX/hf reactive site loop for furin (Ki, 600 pM) but not for other proprotein convertase family members or other unrelated endoproteases. Kinetic studies show that α1-PDX/hf inhibits furin by a slow tight-binding mechanism characteristic of serpin molecules and functions as a suicide substrate inhibitor. Once bound to furin’s active site, α1-PDX/hf partitions with equal probability to undergo proteolysis by furin at the C-terminal side of the reactive center -Arg355-Ile-Pro-Arg358-↓ or to form a kinetically trapped SDS-stable complex with the enzyme. This partitioning between the complex-forming and proteolytic pathways contributes to the ability of α1-PDX/hf to differentially inhibit members of the proprotein convertase family. Finally, we propose a structural model of the α1-PDX-reactive site loop that explains the high degree of enzyme selectivity of this serpin and which can be used to generate small molecule furin inhibitors.
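The equal partitioning described above has a standard quantitative reading for suicide-substrate inhibitors; the branched-pathway bookkeeping below is the usual serpin formalism, not an equation taken from this paper.

```latex
% Usual serpin suicide-substrate branching (illustrative notation): the
% acyl-enzyme intermediate either proceeds to the kinetically trapped,
% SDS-stable complex (k_complex) or turns over to cleaved serpin plus
% free enzyme (k_cleavage). The stoichiometry of inhibition is then
SI \;=\; 1 \;+\; \frac{k_{\mathrm{cleavage}}}{k_{\mathrm{complex}}} .
% Partitioning "with equal probability" (k_cleavage = k_complex), as
% reported above for alpha1-PDX/hf and furin, corresponds to SI = 2:
% on average two inhibitor molecules are consumed per furin inhibited.
```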

Abstract:

Kinesin is a dimeric motor protein that transports organelles in a stepwise manner toward the plus-end of microtubules by converting the energy of ATP hydrolysis into mechanical work. External forces can influence the behavior of kinesin, and force-velocity curves have shown that the motor will slow down and eventually stall under opposing loads of ≈5 pN. Using an in vitro motility assay in conjunction with a high-resolution optical trapping microscope, we have examined the behavior of individual kinesin molecules under two previously unexplored loading regimes: super-stall loads (>5 pN) and forward (plus-end directed) loads. Whereas some theories of kinesin function predict a reversal of directionality under high loads, we found that kinesin does not walk backwards under loads of up to 13 pN, probably because of an irreversible transition in the mechanical cycle. We also found that this cycle can be significantly accelerated by forward loads under a wide range of ATP concentrations. Finally, we noted an increase in kinesin’s rate of dissociation from the microtubule with increasing load, which is consistent with a load dependent partitioning between two recently described kinetic pathways: a coordinated-head pathway (which leads to stepping) and an independent-head pathway (which is static).
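A common way to rationalize such load effects is a Bell-type model in which force tilts the energy landscape along the direction of motion; the form below is a generic sketch for interpreting the forward-load acceleration, not the authors' analysis.

```latex
% Generic Bell-type load dependence of a mechanochemical rate
% (illustrative, not the authors' model): for a transition with
% characteristic distance d along the microtubule axis, a load F
% applied in the direction of motion gives
k(F) \;=\; k_{0}\,\exp\!\left(\frac{F\,d}{k_{B}T}\right) .
% Forward (plus-end directed) loads, F > 0, accelerate the cycle, as
% observed; a step whose reverse rate stays negligible at any
% accessible F would explain why no backward walking is seen up to 13 pN.
```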

Abstract:

By means of optical pumping with laser light it is possible to enhance the nuclear spin polarization of gaseous xenon by four to five orders of magnitude. The enhanced polarization has allowed advances in nuclear magnetic resonance (NMR) spectroscopy and magnetic resonance imaging (MRI), including polarization transfer to molecules and imaging of lungs and other void spaces. A critical issue for such applications is the delivery of xenon to the sample while maintaining the polarization. Described herein is an efficient method for the introduction of laser-polarized xenon into systems of biological and medical interest for the purpose of obtaining highly enhanced NMR/MRI signals. Using this method, we have made the first observation of the time-resolved process of xenon penetrating the red blood cells in fresh human blood—the xenon residence time constant in the red blood cells was measured to be 20.4 ± 2 ms. The potential of certain biologically compatible solvents for delivery of laser-polarized xenon to tissues for NMR/MRI is discussed in light of their respective relaxation and partitioning properties.
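The time-resolved uptake described above is naturally summarized by first-order exchange kinetics; the expression below is an assumed minimal form consistent with the reported time constant, not an equation quoted from the paper.

```latex
% Minimal first-order uptake picture (assumed form): laser-polarized
% xenon dissolved in plasma enters the red blood cells, so the RBC
% signal grows as
S_{\mathrm{RBC}}(t) \;=\; S_{\infty}\,\bigl(1 - e^{-t/\tau}\bigr),
\qquad \tau = 20.4 \pm 2\ \mathrm{ms},
% with \tau the residence time constant reported above; relaxation of
% the enhanced polarization is neglected on this ~20 ms timescale.
```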

Abstract:

Budding and vesiculation of erythrocyte membranes occurs by a process involving an uncoupling of the membrane skeleton from the lipid bilayer. Vesicle formation provides an important means whereby protein sorting and trafficking can occur. To understand the mechanism of sorting at the molecular level, we have developed a micropipette technique to quantify the redistribution of fluorescently labeled erythrocyte membrane components during mechanically induced membrane deformation and vesiculation. Our previous studies indicated that the spectrin-based membrane skeleton deforms elastically, producing a constant density gradient during deformation. Our current studies showed that during vesiculation the skeleton did not fragment but rather retracted to the cell body, resulting in a vesicle completely depleted of skeleton. These local changes in skeletal density regulated the sorting of nonskeletal membrane components. Highly mobile membrane components, phosphatidylethanolamine- and glycosylphosphatidylinositol-linked CD59 with no specific skeletal association were enriched in the vesicle. In contrast, two components with known specific skeletal association, band 3 and glycophorin A, were differentially depleted in vesicles. Increasing the skeletal association of glycophorin A by liganding its extrafacial domain reduced the fraction partitioning to the vesicle. We conclude that this technique of bilayer/skeleton uncoupling provides a means with which to study protein sorting driven by changes in local skeletal density. Moreover, it is the interaction of particular membrane components with the spectrin-based skeleton that determines molecular partitioning during protein sorting.

Abstract:

Golgi membranes in Drosophila embryos and tissue culture cells are found as discrete units dispersed in the cytoplasm. We provide evidence that Golgi membranes do not undergo any dramatic change in their organization during the rapid mitotic divisions of the nuclei in the syncytial embryo or during cell division postcellularization. By contrast, in Drosophila tissue culture cells, the Golgi membranes undergo complete fragmentation during mitosis. Our studies show that the mechanism of Golgi partitioning during cell division is cell type-specific.

Abstract:

Partitioning of the mammalian Golgi apparatus during cell division involves disassembly at M-phase. Despite the importance of the disassembly/reassembly pathway in Golgi biogenesis, it remains unclear whether mitotic Golgi breakdown in vivo proceeds by direct vesiculation or involves fusion with the endoplasmic reticulum (ER). To test whether mitotic Golgi is fused with the ER, we compared the distribution of ER and Golgi proteins in interphase and mitotic HeLa cells by immunofluorescence microscopy, velocity gradient fractionation, and density gradient fractionation. While mitotic ER appeared to be a fine reticulum excluded from the region containing the spindle-pole body, mitotic Golgi appeared to be dispersed small vesicles that penetrated the area containing spindle microtubules. After cell disruption, M-phase Golgi was recovered in two size classes. The major breakdown product, accounting for at least 75% of the Golgi, was a population of 60-nm vesicles that were completely separated from the ER using velocity gradient separation. The minor breakdown product was a larger, more heterogeneously sized, membrane population. Double-label fluorescence analysis of these membranes indicated that this portion of mitotic Golgi also lacked detectable ER marker proteins. Therefore we conclude that the ER and Golgi remain distinct at M-phase in HeLa cells. To test whether the 60-nm vesicles might form from the ER at M-phase as the result of a two-step vesiculation pathway involving ER–Golgi fusion followed by Golgi vesicle budding, mitotic cells were generated with fused ER and Golgi by brefeldin A treatment. Upon brefeldin A removal, Golgi vesicles did not emerge from the ER. In contrast, the Golgi readily reformed from similarly treated interphase cells. We conclude that Golgi-derived vesicles remain distinct from the ER in mitotic HeLa cells, and that mitotic cells lack the capacity of interphase cells for Golgi reemergence from the ER. These experiments suggest that mitotic Golgi breakdown proceeds by direct vesiculation independent of the ER.

Abstract:

The volumic rearrangement of both chromosomes and immunolabeled upstream binding factor in entire well-preserved mitotic cells was studied by confocal microscopy. By using high-quality three-dimensional visualization and tomography, it was possible to investigate interactively the volumic organization of chromosome sets and to focus on their internal characteristics. More particularly, this study demonstrates the nonrandom positioning of metaphase chromosomes bearing nucleolar organizer regions as revealed by their positive upstream binding factor immunolabeling. During the complex morphogenesis of the progeny nuclei from anaphase to late telophase, the equal partitioning of the nucleolar organizer regions is demonstrated by quantification, and their typical nonrandom central positioning within the chromosome sets is revealed.

Abstract:

Hormonal activation of Gs, the stimulatory regulator of adenylyl cyclase, promotes dissociation of αs from Gβγ, accelerates removal of covalently attached palmitate from the Gα subunit, and triggers release of a fraction of αs from the plasma membrane into the cytosol. To elucidate relations among these three events, we assessed biochemical effects in vitro of attached palmitate on recombinant αs prepared from Sf9 cells. In comparison to the unpalmitoylated protein (obtained from cytosol of Sf9 cells, treated with a palmitoyl esterase, or expressed as a mutant protein lacking the site for palmitoylation), palmitoylated αs (from Sf9 membranes, 50% palmitoylated) was more hydrophobic, as indicated by partitioning into TX-114, and bound βγ with 5-fold higher affinity. βγ protected GDP-bound αs, but not αs·GTP[γS], from depalmitoylation by a recombinant esterase. We conclude that βγ binding and palmitoylation reciprocally potentiate each other in promoting membrane attachment of αs and that dissociation of αs·GTP from βγ is likely to mediate receptor-induced αs depalmitoylation and translocation of the protein to cytosol in intact cells.

Abstract:

GroEL is an allosteric protein that facilitates protein folding in an ATP-dependent manner. Herein, the relationship between cooperative ATP binding by GroEL and the kinetics of GroE-assisted folding of two substrates with different GroES dependence, mouse dihydrofolate reductase (mDHFR) and mitochondrial malate dehydrogenase, is examined by using cooperativity mutants of GroEL. Strong intra-ring positive cooperativity in ATP binding by GroEL decreases the rate of GroEL-assisted mDHFR folding owing to a slow rate of the ATP-induced transition from the protein-acceptor state to the protein-release state. Inter-ring negative cooperativity in ATP binding by GroEL is found to affect the kinetic partitioning of mDHFR, but not of mitochondrial malate dehydrogenase, between folding in solution and folding in the cavity underneath GroES. Our results show that protein folding by this “two-stroke motor” is coupled to cooperative ATP binding.

Abstract:

I attempt to reconcile apparently conflicting factors and mechanisms that have been proposed to determine the rate constant for two-state folding of small proteins, on the basis of general features of the structures of transition states. Φ-Value analysis implies a transition state for folding that resembles an expanded and distorted native structure, which is built around an extended nucleus. The nucleus is composed predominantly of elements of partly or well-formed native secondary structure that are stabilized by local and long-range tertiary interactions. These long-range interactions give rise to connecting loops, frequently containing the native loops that are poorly structured. I derive an equation that relates differences in the contact order of a protein to changes in the length of linking loops, which, in turn, is directly related to the unfavorable free energy of the loops in the transition state. Kinetic data on loop extension mutants of CI2 and α-spectrin SH3 domain fit the equation qualitatively. The rate of folding depends primarily on the interactions that directly stabilize the nucleus, especially those in native-like secondary structure and those resulting from the entropy loss from the connecting loops, which vary with contact order. This partitioning of energy accounts for the success of some algorithms that predict folding rates, because they use these principles either explicitly or implicitly. The extended nucleus model thus unifies the observations of rate depending on both stability and topology.
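The derived equation itself is not reproduced in the abstract; for reference, the two standard ingredients it connects are the relative contact order and a polymer loop-closure entropy, given below in their usual textbook forms (assumed here, not quoted from the paper).

```latex
% Relative contact order of a protein with L residues and N native
% contacts, \Delta S_{ij} being the sequence separation of contact (i,j):
CO \;=\; \frac{1}{L\,N}\,\sum_{\mathrm{contacts}} \Delta S_{ij} .
% Generic loop-closure (Jacobson--Stockmayer-type) free-energy penalty
% for an unstructured connecting loop of n residues in the transition
% state, with c ~ 3/2 for an ideal Gaussian chain:
\Delta G_{\mathrm{loop}} \;\approx\; c\,k_{B}T\,\ln n .
% Lengthening a linking loop raises both the contact order and this
% entropic penalty together, slowing folding, which is the qualitative
% trend the CI2 and alpha-spectrin SH3 loop-extension data fit.
```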

Abstract:

In this study, we compared the transport of newly synthesized cholesterol with that of influenza virus hemagglutinin (HA) from the endoplasmic reticulum to the plasma membrane. The arrival of cholesterol on the cell surface was monitored by cyclodextrin removal, and HA transport was monitored by surface trypsinization and endoglycosidase H digestion. We found that disassembly of the Golgi complex by brefeldin A treatment resulted in partial inhibition of cholesterol transport while completely blocking HA transport. Further, microtubule depolymerization by nocodazole inhibited cholesterol and HA transport to a similar extent. When the partitioning of cholesterol into lipid rafts was analyzed, we found that newly synthesized cholesterol began to associate with low-density detergent-resistant membranes rapidly after synthesis, before it was detectable on the cell surface, and its raft association increased further upon chasing. When cholesterol transport was blocked by using 15°C incubation, the association of newly synthesized cholesterol with low-density detergent-insoluble membranes was decreased and cholesterol accumulated in a fraction with intermediate density. Our results provide evidence for the partial contribution of the Golgi complex to the transport of newly synthesized cholesterol to the cell surface and suggest that detergent-resistant membranes are involved in the process.

Abstract:

People homozygous for mutations in the Niemann-Pick type C1 (NPC1) gene have physiological defects, including excess accumulation of intracellular cholesterol and other lipids, that lead to drastic neural and liver degeneration. The NPC1 multipass transmembrane protein is resident in late endosomes and lysosomes, but its functions are unknown. We find that organelles containing functional NPC1-fluorescent protein fusions undergo dramatic movements, some in association with extending strands of endoplasmic reticulum. In NPC1 mutant cells the NPC1-bearing organelles that normally move at high speed between perinuclear regions and the periphery of the cell are largely absent. Pulse-chase experiments with dialkylindocarbocyanine low-density lipoprotein showed that NPC1 organelles function late in the endocytic pathway; NPC1 protein may aid the partitioning of endocytic and lysosomal compartments. The close connection between NPC1 and the drug U18666A, which causes NPC1-like organelle defects, was established by rescuing drug-treated cells with overproduced NPC1. U18666A inhibits outward movements of NPC1 organelles, trapping membranes and cholesterol in perinuclear organelles similar to those in NPC1 mutant cells, even when cells are grown in lipoprotein-depleted serum. We conclude that NPC1 protein promotes the creation and/or movement of particular late endosomes, which rapidly transport materials to and from the cell periphery.