872 results for Agent-based methodologies


Relevance: 30.00%

Abstract:

The Internet is evolving toward the so-called Live Web (or Evented Web). In this new stage, a multitude of social data streams are put at the service of users, who no longer browse static web pages but interact with applications that offer personalized, context-aware content. Each user interacts daily with many applications that produce notifications and alerts; every user is thus a potential source of events and often feels overwhelmed, unable to process all of that information on demand. To deal with this overload, many automation tools have emerged, ranging from inbox managers and social media notification managers to complex CRMs and smart-home hubs. Their downside is that, while they solve common problems, they cannot adapt to the needs of each individual user. Task Automation Services (TAS) entered the scene from 2012 onwards to overcome this limitation. They can be seen as a new, user-centric model of mash-up technology for combining social streams, services and connected devices: end users are empowered to interconnect those streams, sensors and Internet-connected devices however they want, designing the automations that fit their needs. The approach has been widely adopted by users, and the number of platforms offering TAS has grown rapidly as a consequence.
Being a novel research field, this thesis aims to shed light on it by presenting the main characteristics of Task Automation Services, describing their components, and identifying the fundamental dimensions that define and classify them. The thesis coins the term Task Automation Service (TAS), provides a formal definition of these services and their components (called channels), and proposes a TAS reference architecture. There is also a lack of tools for describing automation services and automation rules. In this regard, the thesis proposes a common model, formalized as the EWE (Evented WEb) ontology. This model makes it possible to compare and align channels and automations from different TASs, which is a considerable contribution to interoperability and to the portability of user automations across platforms; given its semantic nature, it also allows automations to include and reason over elements from external sources such as Linked Open Data. Based on this model, a dataset of channels and automations was built, harvesting data from the web sites of existing TASs. As a final step toward a common model for describing TASs, an algorithm was designed to learn ontologies automatically from the dataset, easing the discovery of new channels across different TASs and reducing the maintenance cost of the model, which is updated semi-automatically.
In summary, the main contributions of this thesis are: i) surveying the state of the art on task automation and coining the term Task Automation Service; ii) providing a semantic common model (the EWE ontology) for describing TAS components and automations; iii) populating a categorized dataset of TAS components, used to learn ontologies of particular domains from the TAS perspective; and iv) designing an agent architecture for assisting users in setting up automations, one that is aware of their context and acts accordingly.
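A minimal sketch may help picture the kind of description the semantic model enables: a channel exposing an event, a channel exposing an action, and a user rule that wires them together. The namespace and the class/property names used below (Channel, Event, Action, Rule, generatedBy, providedBy, triggeredBy, performs) are illustrative assumptions, not necessarily the published EWE terms.

```python
# Illustrative sketch of an EWE-style description of a TAS automation.
# Class/property names and the namespace URI are assumptions for illustration.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

EWE = Namespace("http://example.org/ewe#")   # placeholder namespace
EX = Namespace("http://example.org/data#")

g = Graph()
g.bind("ewe", EWE)
g.bind("ex", EX)

# A weather channel that emits a "rain forecast" event
g.add((EX.WeatherChannel, RDF.type, EWE.Channel))
g.add((EX.RainForecast, RDF.type, EWE.Event))
g.add((EX.RainForecast, EWE.generatedBy, EX.WeatherChannel))

# A messaging channel that offers a "send notification" action
g.add((EX.MessagingChannel, RDF.type, EWE.Channel))
g.add((EX.SendNotification, RDF.type, EWE.Action))
g.add((EX.SendNotification, EWE.providedBy, EX.MessagingChannel))

# An end-user automation: "if rain is forecast, notify me"
g.add((EX.RainAlertRule, RDF.type, EWE.Rule))
g.add((EX.RainAlertRule, EWE.triggeredBy, EX.RainForecast))
g.add((EX.RainAlertRule, EWE.performs, EX.SendNotification))
g.add((EX.RainAlertRule, RDFS.label, Literal("Notify me when rain is forecast")))

print(g.serialize(format="turtle"))
```

Once rules from different platforms are expressed in one vocabulary like this, comparing, porting, or enriching them with Linked Open Data reduces to ordinary RDF queries.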

Relevance: 30.00%

Abstract:

The purpose of this thesis is to present a methodology for the dynamic small-signal stability and performance analysis of dc-distributed power systems built from commercial power modules. A simple method is introduced that states the least conservative robust-stability margins as a single number. This index is computed at every interface of the system and can be combined into a global index that characterizes the stability of the complete system, so that different dc-distributed systems can be compared in terms of robustness. Interconnecting dc-dc converters with each other and with the required EMI filters can give rise to undesired impedance-based interactions that degrade converter performance and may even lead to instability. In most cases these systems are built from commercial modules whose internal structure is unknown, so the analyses presented in this thesis are based on frequency responses that can be measured from the input and output terminals of the converters.
Using the measured input and output impedances of the system elements, a sensitivity function is constructed that provides the stability margins of the different interfaces. The thesis uses the maximum peak criterion (MPC), the maximum value of this sensitivity function, to express the stability margins as a single number. Once the stability of every interface has been evaluated individually, the resulting indices are combined into a single figure with which the robustness of different systems can be compared. Source- and load-side interactions of dc-dc converters are also analyzed, and analytical expressions are derived that describe in detail the couplings generated in the system. The theoretical analyses are validated experimentally throughout the thesis. It is also shown that robust stability is ensured only if the impedances used in the sensitivity function, i.e. the impedance-based minor-loop gain, are determined exactly at the input or output of the subsystem under analysis. In addition, the thesis presents a complete set of impedance-type internal parameters, together with their analytical expressions, that allow the interactions within the system to be fully explained. These expressions can be obtained from analytical transfer functions when the internal structure is known, or from measured frequency responses or parameters identified from the converter's time-domain response. With the methodologies presented in this thesis, the stability and performance of systems composed essentially of dc-dc converters and filters with unknown internal structure can be predicted. The prediction rests on an index that condenses the least conservative stability margins and yields an indicator of the overall stability of the system, allowing different distributed power architectures to be compared in terms of stability.
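A minimal sketch of the criterion described above, assuming the standard impedance-ratio formulation of the minor-loop gain; the impedance values below are invented for illustration and are not measurements from the thesis.

```python
# Sketch: maximum peak criterion (MPC) at one source/load interface
# computed from measured (here: synthetic) impedance frequency responses.
import numpy as np

def mpc_margin(Zo, Zi):
    """Sensitivity peak at one interface.

    Zo, Zi : complex ndarrays with the source output impedance and the load
             input impedance on the same frequency grid.
    The minor-loop gain is T = Zo/Zi, the sensitivity is S = 1/(1 + T), and
    max|S| condenses the stability margin into a single number (a lower
    peak means larger gain and phase margins).
    """
    S = 1.0 / (1.0 + Zo / Zi)
    return float(np.max(np.abs(S)))

# Synthetic example: an R-L source output impedance feeding a parallel-RC
# load input impedance (all component values are assumed).
f = np.logspace(1, 5, 500)
w = 2j * np.pi * f
Zo = 0.05 + w * 1e-6                    # 50 mOhm + 1 uH
Zi = 2.0 / (1.0 + w * 2.0 * 100e-6)     # 2 Ohm in parallel with 100 uF
print(f"MPC at this interface: {mpc_margin(Zo, Zi):.3f}")
```

Repeating this calculation at every interface and combining the resulting peaks is the kind of per-interface/global indexing the abstract describes.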

Relevance: 30.00%

Abstract:

This work is based on the Lean Construction philosophy, analysing its situation in the building sector in the international and Spanish contexts and answering the following questions: 1. How did Lean Construction emerge? 2. What are its activities, functions and aims? 3. Are there regulations on Lean Construction in other countries? 4. Is there demand for Lean Construction in Spain? 5. Are there regulations on Lean Construction in Spain? 6. How should Lean Construction be regulated in Spain? 7. What is the relationship between Lean Construction and Project & Construction Management? 8. How should Lean Construction be regulated in Spain considering its relationship with Project & Construction Management? These questions are answered in detail in this work; the answers are summarized below.

1. Lean Construction emerged in August 1992, when the Finnish researcher Lauri Koskela published at Stanford University Technical Report No. 72, "Application of the New Production Philosophy to Construction". A year later Dr. Koskela invited a group of construction specialists to the first workshop on the subject in Finland, giving rise to the International Group for Lean Construction (IGLC), which has spread the philosophy to the United States, Europe, the Americas, Asia, Oceania and Africa. Lean Construction is a system based on the Lean Production approach developed in Japan by Toyota Motors from the 1950s onwards, a system that allowed its plants to produce units more efficiently than the American industry, with fewer resources, in less time and with fewer manufacturing errors.

2. Lean Construction seeks to maximize value and reduce project waste by generating efficient coordination among those involved, managing a project as a production system, strengthening collaboration among project participants, training and empowering them, and fostering a culture of change. Its purpose is to develop a construction process with no accidents and no damage to equipment, facilities, the environment or the community, carried out in accordance with contractual requirements, without defects, on schedule, within budgeted costs, and with a clear focus on eliminating or reducing waste, that is, activities that do not generate value. The Last Planner System is a Lean Construction system which, by its very nature, protects planning and therefore helps to maximize value and minimize waste, substantially improving safety and health systems. Lean Construction began as a concept focused on the execution of works and was later applied to every stage of a project; it now covers the complete development of a project, from the initial idea to completion and start-up, considering the project's entire life cycle. It is a management philosophy, a set of working methodologies and a business culture oriented toward the efficiency of processes and flows. The philosophy is spreading worldwide and its scope is growing, increasingly influencing the contractual management of projects. Its first evolution was the Lean Project Delivery System, the global concept for project development; later came Target Value Design, collaborative design to meet target costs and required value, and Integrated Project Delivery, associated with integrated relational (collaborative) contract systems as opposed to conventional contracts.

3. No specific regulation of Lean Construction was found in other countries; in other words, there is no agent with the specific title of "Lean Construction Specialist" or similar. It is therefore an additional agent in a building project whose functions and duties may overlap with those of the Project Manager, Construction Manager, Contract Manager or Safety Manager, among others. However, private collaborative Integrated Project Delivery contract forms do exist and could serve as first references for future regulation.

4. A demand for Lean Construction in Spain was verified during this work; although its use is still incipient, interest in the subject grows every day.

5. There is no regulation of Lean Construction in Spain.

6. One of the fundamental objectives of this thesis is to regulate this figure when it acts in a project, defining a Building Agent structure in accordance with the Spanish Building Act (Ley de Ordenación de la Edificación, LOE), so that it can be incorporated into Spanish legislation and protected from possible civil liabilities. In Spain there is case law (judgments of the Spanish civil courts) based on the LOE that acquits or convicts building agents described by the courts as "construction managers" or similar. In the future, therefore, the courts could impose joint and several liability on the Lean Construction specialist and other project agents, depending on their actions and on how the Lean Project Delivery System, Target Value Design and Integrated Project Delivery are implemented. Moreover, the Lean Construction specialist's scope of action may cover design management, construction management, contract management, or the integral management of the whole building project, the latter in line with ISO 21500:2012 (UNE-ISO 21500:2013), Guidance on project management. Accordingly, one or more building agents should be incorporated into the LOE with functions and responsibilities matching the Lean Construction specialist's levels of action. The creation of the following agents is proposed: Design Manager, Construction Manager and Contract Manager, whose definitions are developed in this work. These figures are defined in general terms, since any Project Manager or DIPE, BIM (Building Information Modeling) manager or similar can act as one or several of them. The creation of the agent "Lean Construction Manager" is also proposed, an agent who takes on the roles of the Design Manager, Construction Manager and Contract Manager with a focus on Lean Production principles.

7. Using ISO 21500, the thesis shows that the two systems are complementary, so projects can adopt both approaches and make them compatible. A project run under Project & Construction Management can rely on Lean Construction tools and techniques to ensure the elimination or reduction of waste, that is, activities that do not generate value, by designing the production system, the design system or the contract system.

8. The building agent "Lean Construction Specialist" (or similar) and the agent "Project & Construction Management Specialist" (DIPE) should be properly incorporated into the LOE according to their functions and responsibilities, since the case law that acquits or convicts is based on that Act. One of the fundamental objectives of this thesis is to regulate the figure of the Lean Construction specialist when acting simultaneously with the DIPE, and to define a Building Agent structure under the LOE, thereby protecting it from possible joint and several liabilities. This research shows that the definition of the DIPE building agent under the LOE proposed in the doctoral thesis of Dr. Manuel Soler Severino is compatible with the new definitions proposed here; the DIPE agent may take on the roles of the different managers proposed in this thesis if he or she specializes in those areas or, where appropriate, may recommend that they be engaged.

Relevance: 30.00%

Abstract:

Fixed-point arithmetic is one of the most common design choices for systems with tight area, power or throughput constraints. To produce implementations in which cost is minimized without degrading the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which designers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from fixed-point arithmetic, since it compensates for their lower clock frequencies and less efficient hardware utilization compared with ASICs. As FPGAs become common in scientific computing, designs grow larger and more complex, to the point where they can no longer be handled efficiently by current signal and quantization-noise modelling and word-length optimization techniques. This thesis explores several aspects of the quantization problem and presents new methodologies for each of them.
Techniques based on interval extensions have yielded very accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) technique in order to model systems that contain control-flow structures. Our methodology generates the different execution paths automatically, determines the regions of the input domain that exercise each of them, and extracts the statistical moments of the system from these partial results. We use this technique to estimate both the dynamic range and the round-off noise of systems with such control-flow structures, and we show the accuracy of the approach, which in some case studies with non-linear operators deviates by only 0.04% from the simulation-based reference values. A known drawback of interval-extension techniques is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals of the system, introduces the noise sources for each group separately, and finally combines the results. In this way the number of active noise sources is kept under control at all times and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, in order to keep the results as accurate as possible.
This thesis also addresses the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We present two new techniques that reduce execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is then used to guide the optimization. Second, the incremental method exploits the fact that, although a given confidence interval must be guaranteed for the final results of the search, more relaxed confidence levels, and therefore considerably fewer simulation runs, can be used in the early stages of the search, when we are still far from the optimized solutions. With these two approaches we show that the execution time of classical greedy search algorithms can be reduced by factors of up to 240x for small and medium-sized problems.
Finally, this thesis introduces HOPLITE, an automated, flexible and modular quantization framework that implements the above techniques and is publicly available. Its aim is to offer developers and researchers a common environment in which to prototype and verify new quantization methodologies easily. We describe its workflow, justify the design decisions taken, explain its public API and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions can be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
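The kind of classical greedy search that the interpolative and incremental methods accelerate can be sketched as follows. This is a generic illustration, not HOPLITE code: estimate_noise stands in for a Monte-Carlo (or analytical) quantization-noise estimator and cost for an area model, both assumed.

```python
# Sketch of a greedy word-length search: start from the widest (safe)
# assignment and remove one bit at a time, always taking the move that
# saves the most cost while still meeting the quantization-noise budget.
def greedy_wordlengths(signals, estimate_noise, cost, noise_budget,
                       wl_min=4, wl_max=32):
    wl = {s: wl_max for s in signals}
    while True:
        best = None
        for s in signals:
            if wl[s] <= wl_min:
                continue
            trial = dict(wl, **{s: wl[s] - 1})      # shave one bit off signal s
            if estimate_noise(trial) <= noise_budget:
                saving = cost(wl) - cost(trial)
                if best is None or saving > best[0]:
                    best = (saving, s)
        if best is None:                             # no feasible move left
            return wl
        wl[best[1]] -= 1

# Toy usage: each signal contributes 2**-wl to the output noise power and
# the cost is simply the total number of bits.
signals = ["x", "y", "acc"]
noise = lambda wl: sum(2.0 ** -wl[s] for s in signals)
bits = lambda wl: sum(wl.values())
print(greedy_wordlengths(signals, noise, bits, noise_budget=1e-3))
```

Every call to estimate_noise in the inner loop is where Monte-Carlo cost accumulates, which is why replacing it with an interpolated sensitivity or running it with relaxed confidence in the early iterations pays off.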

Relevance: 30.00%

Abstract:

This paper presents a completely autonomous solution for the Indoor Challenge of the 2013 International Micro Air Vehicle Competition (IMAV 2013). Our proposal is a multi-robot system with no centralized coordination whose robotic agents share their position estimates. The capability of each agent to navigate while avoiding collisions is a consequence of the resulting emergent behavior. Each agent consists of a ground station running an instance of the proposed architecture that communicates over WiFi with an AR Drone 2.0 quadrotor. Visual markers are employed to sense and map obstacles and to improve the pose estimation based on Inertial Measurement Unit (IMU) and ground optical-flow data. Based on our architecture, each robotic agent can navigate avoiding obstacles and the other members of the multi-robot system. The solution is demonstrated, and the achieved navigation performance is evaluated, by means of experimental flights. This work also analyzes the capabilities of the presented solution in simulated flights of the IMAV 2013 Indoor Challenge. The CVG UPM team was awarded the First Prize in the Indoor Autonomy Challenge of the IMAV 2013 competition.
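The emergent avoidance behavior can be pictured with a very small sketch. This is not the paper's controller: the potential-field formulation, gains and radii below are illustrative assumptions; it only shows how shared position estimates can feed each agent's local velocity command.

```python
# Sketch: one agent's velocity command from goal attraction plus repulsion
# from teammates' shared positions and mapped obstacles (assumed gains).
import numpy as np

def avoidance_velocity(own_pos, goal, others, obstacles,
                       v_max=0.5, k_goal=0.8, k_rep=1.2, safe_radius=1.5):
    own_pos, goal = np.asarray(own_pos, float), np.asarray(goal, float)
    v = k_goal * (goal - own_pos)                  # attraction toward the goal
    for p in list(others) + list(obstacles):
        d = own_pos - np.asarray(p, float)
        dist = np.linalg.norm(d)
        if 1e-6 < dist < safe_radius:              # repulsion inside safety radius
            v += k_rep * (1.0 / dist - 1.0 / safe_radius) * d / dist
    speed = np.linalg.norm(v)
    return v if speed <= v_max else v * (v_max / speed)

# One control step for an agent at (0,0) heading to (5,0) with a teammate nearby.
print(avoidance_velocity((0, 0), (5, 0), others=[(1.0, 0.2)], obstacles=[]))
```

Because every agent runs the same local rule on the broadcast estimates, collision-free navigation emerges without any central coordinator, which is the property the paper's architecture relies on.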

Relevance: 30.00%

Abstract:

The important role of furin in the proteolytic activation of many pathogenic molecules has made this endoprotease a target for the development of potent and selective antiproteolytic agents. Here, we demonstrate the utility of the protein-based inhibitor α1-antitrypsin Portland (α1-PDX) as an antipathogenic agent that can be used prophylactically to block furin-dependent cell killing by Pseudomonas exotoxin A. Biochemical analysis of the specificity of a bacterially expressed His- and FLAG-tagged α1-PDX (α1-PDX/hf) revealed the selectivity of the α1-PDX/hf reactive site loop for furin (Ki, 600 pM) but not for other proprotein convertase family members or other unrelated endoproteases. Kinetic studies show that α1-PDX/hf inhibits furin by a slow tight-binding mechanism characteristic of serpin molecules and functions as a suicide substrate inhibitor. Once bound to furin’s active site, α1-PDX/hf partitions with equal probability to undergo proteolysis by furin at the C-terminal side of the reactive center -Arg355-Ile-Pro-Arg358-↓ or to form a kinetically trapped SDS-stable complex with the enzyme. This partitioning between the complex-forming and proteolytic pathways contributes to the ability of α1-PDX/hf to differentially inhibit members of the proprotein convertase family. Finally, we propose a structural model of the α1-PDX-reactive site loop that explains the high degree of enzyme selectivity of this serpin and which can be used to generate small molecule furin inhibitors.
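A back-of-the-envelope sketch of the partitioning argument above, using only generic suicide-substrate stoichiometry (no rate constants from the paper): if each active-site encounter ends as a stable complex with probability p and as cleavage-and-release otherwise, then on average 1/p inhibitor molecules are consumed per enzyme inactivated, i.e. about two for the reported p ≈ 0.5.

```python
# Stoichiometric (rate-free) view of suicide-substrate partitioning.
def inhibitors_per_inactivation(p_complex):
    """Expected serpin molecules consumed per enzyme trapped, when each
    encounter forms a stable complex with probability p_complex and ends
    in cleavage/release otherwise."""
    return 1.0 / p_complex

def enzyme_trapped(E0, I0, p_complex=0.5):
    """Estimate of how much enzyme ends up in the SDS-stable complex when
    I0 inhibitor is supplied to E0 enzyme (inhibitor-limited, no kinetics)."""
    return min(E0, p_complex * I0)

print(inhibitors_per_inactivation(0.5))   # -> 2.0 serpins per furin trapped
print(enzyme_trapped(E0=1.0, I0=2.0))     # -> 1.0: full trapping at a 2:1 ratio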

Relevance: 30.00%

Abstract:

We present an approach for evaluating the efficacy of combination antitumor agent schedules that accounts for order and timing of drug administration. Our model-based approach compares in vivo tumor volume data over a time course and offers a quantitative definition for additivity of drug effects, relative to which synergism and antagonism are interpreted. We begin by fitting data from individual mice receiving at most one drug to a differential equation tumor growth/drug effect model and combine individual parameter estimates to obtain population statistics. Using two null hypotheses: (i) combination therapy is consistent with additivity or (ii) combination therapy is equivalent to treating with the more effective single agent alone, we compute predicted tumor growth trajectories and their distribution for combination treated animals. We illustrate this approach by comparing entire observed and expected tumor volume trajectories for a data set in which HER-2/neu-overexpressing MCF-7 human breast cancer xenografts are treated with a humanized, anti-HER-2 monoclonal antibody (rhuMAb HER-2), doxorubicin, or one of five proposed combination therapy schedules.
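The modelling idea can be sketched with a generic exponential-growth/log-kill ODE; the paper's actual model, fitted parameters and dosing schedules are not reproduced here, and every number below is illustrative. Under the additivity null hypothesis the two drug-effect terms simply add.

```python
# Sketch: tumor volume under dV/dt = (growth - sum_i e_i * c_i(t)) * V,
# integrated with forward Euler; additivity = summed drug-effect terms.
import numpy as np

def simulate_volume(days, growth, effects, schedules, v0=100.0, dt=0.01):
    """effects[i]: kill-rate coefficient fitted for drug i alone (assumed);
    schedules[i]: function returning the dose intensity of drug i at time t."""
    v, out = v0, []
    for t in np.arange(0.0, days, dt):
        kill = sum(e * c(t) for e, c in zip(effects, schedules))
        v += dt * (growth - kill) * v
        out.append(v)
    return np.array(out)

# Illustrative pulses: drug A on days 0-3, drug B on days 7-10.
drug_a = lambda t: 1.0 if 0.0 <= t < 3.0 else 0.0
drug_b = lambda t: 1.0 if 7.0 <= t < 10.0 else 0.0

best_single = simulate_volume(21, 0.10, [0.25], [drug_a])
additive = simulate_volume(21, 0.10, [0.25, 0.20], [drug_a, drug_b])
print(f"day-21 volume, best single agent: {best_single[-1]:.0f}")
print(f"day-21 volume, additive A+B:      {additive[-1]:.0f}")
```

Observed combination trajectories falling below the additive prediction would then be read as synergy, and those above it as antagonism, mirroring the two null hypotheses described above.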

Relevance: 30.00%

Abstract:

An artificial DNA bending agent has been designed to assess helix flexibility over regions as small as a protein binding site. Bending was obtained by linking a pair of 15-base-long triple helix forming oligonucleotides (TFOs) by an adjustable polymeric linker. By design, DNA bending was introduced into the double helix within a 10-bp spacer region positioned between the two sites of 15-base triple helix formation. The existence of this bend has been confirmed by circular permutation and phase-sensitive electrophoresis, and the directionality of the bend has been determined as a compression of the minor helix groove. The magnitude of the resulting duplex bend was found to depend on the length of the polymeric linker in a fashion consistent with a simple geometric model. The data suggested that a 50-70 degree bend was achieved by binding of the TFO chimera with the shortest linker span (18 rotatable bonds). Equilibrium analysis showed that, relative to a chimera which did not bend the duplex, the stability of the triple helix possessing a 50-70 degree bend was reduced by less than 1 kcal/mol with respect to the unbent complex. Based upon this small difference, it is proposed that duplex DNA may be much more flexible with respect to minor groove compression than previously assumed. It is shown that this unusual flexibility is consistent with recent quantitation of protein-induced minor groove bending.
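The quoted penalty of less than 1 kcal/mol can be put in perspective with a one-line thermodynamic conversion (a generic relation, K_unbent/K_bent = exp(ΔΔG/RT); the numbers are not data from the paper): at 37 °C it corresponds to less than about a five-fold drop in the triplex binding constant.

```python
# Convert a free-energy penalty into the implied ratio of binding constants.
import math

R = 1.987e-3        # kcal / (mol K)
T = 310.15          # 37 C in kelvin

def fold_change(ddg_kcal):
    """Ratio K_unbent / K_bent implied by a destabilization of ddg_kcal."""
    return math.exp(ddg_kcal / (R * T))

print(f"{fold_change(1.0):.1f}x")   # ~5.1x for 1.0 kcal/mol
print(f"{fold_change(0.5):.1f}x")   # ~2.2x for 0.5 kcal/mol
```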

Relevance: 30.00%

Abstract:

Detection of loss of heterozygosity (LOH) by comparison of normal and tumor genotypes using PCR-based microsatellite loci provides considerable advantages over traditional Southern blotting-based approaches. However, current methodologies are limited by several factors, including the numbers of loci that can be evaluated for LOH in a single experiment, the discrimination of true alleles versus "stutter bands," and the use of radionucleotides in detecting PCR products. Here we describe methods for high throughput simultaneous assessment of LOH at multiple loci in human tumors; these methods rely on the detection of amplified microsatellite loci by fluorescence-based DNA sequencing technology. Data generated by this approach are processed by several computer software programs that enable the automated linear quantitation and calculation of allelic ratios, allowing rapid ascertainment of LOH. As a test of this approach, genotypes at a series of loci on chromosome 4 were determined for 58 carcinomas of the uterine cervix. The results underscore the efficacy, sensitivity, and remarkable reproducibility of this approach to LOH detection and provide subchromosomal localization of two regions of chromosome 4 commonly altered in cervical tumors.
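A minimal sketch of the allelic-ratio computation described above; the 0.5 cutoff and the peak-area inputs are assumptions for illustration, not the paper's exact software, but the normalization of the tumor ratio by the matched normal ratio is the standard way such calls are made.

```python
# Sketch: score LOH at one microsatellite locus from fluorescence peak areas.
def loh_call(normal_peaks, tumor_peaks, cutoff=0.5):
    """normal_peaks / tumor_peaks: (allele1_area, allele2_area) measured at
    the same locus in matched normal and tumor DNA."""
    n1, n2 = normal_peaks
    t1, t2 = tumor_peaks
    if min(n1, n2) == 0:
        return None                        # homozygous locus: not informative
    normalized = (t1 / t2) / (n1 / n2)     # corrects per-allele amplification bias
    ratio = min(normalized, 1.0 / normalized)
    return ratio < cutoff, ratio           # (LOH call, allelic-imbalance ratio)

print(loh_call(normal_peaks=(1000, 950), tumor_peaks=(980, 310)))
```

Running this per locus across a panel of markers, as the paper does for chromosome 4 in 58 cervical carcinomas, turns LOH scoring into a simple batch computation over the genotyping output.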

Relevance: 30.00%

Abstract:

Leukotriene A4 (LTA4) hydrolase [(7E,9E,11Z,14Z)-(5S,6S)-5,6-epoxyicosa-7,9,11,14-tetraenoate hydrolase; EC 3.3.2.6] is a bifunctional zinc metalloenzyme that catalyzes the final step in the biosynthesis of the potent chemotactic agent leukotriene B4 (LTB4). LTA4 hydrolase/aminopeptidase is suicide inactivated during catalysis via an apparently mechanism-based irreversible binding of LTA4 to the protein in a 1:1 stoichiometry. Previously, we have identified a henicosapeptide, encompassing residues Leu-365 to Lys-385 in human LTA4 hydrolase, which contains a site involved in the covalent binding of LTA4 to the native enzyme. To investigate the role of Tyr-378, a potential candidate for this binding site, we exchanged Tyr for Phe or Gln in two separate mutants. In addition, each of two adjacent and potentially reactive residues, Ser-379 and Ser-380, was exchanged for Ala. The mutated enzymes were expressed as (His)6-tagged fusion proteins in Escherichia coli, purified to apparent homogeneity, and characterized. Enzyme activity determinations and differential peptide mapping, before and after repeated exposure to LTA4, revealed that the wild-type enzyme and the mutants [S379A] and [S380A]LTA4 hydrolase were equally susceptible to suicide inactivation, whereas the mutants in position 378 were no longer inactivated or covalently modified by LTA4. Furthermore, in [Y378F]LTA4 hydrolase, the value of kcat for epoxide hydrolysis was increased 2.5-fold over that of the wild-type enzyme. Thus, by a single-point mutation in LTA4 hydrolase, catalysis and covalent modification/inactivation have been dissociated, yielding an enzyme with increased turnover and resistance to mechanism-based inactivation.

Relevance: 30.00%

Abstract:

We describe here a simple and easily manipulable Escherichia coli-based genetic system that permits us to identify bacterial gene products that modulate the sensitivity of bacteria to tumoricidal agents, such as DMP 840, a bisnaphthalimide drug. To the extent that the action of these agents is conserved, these studies may expand our understanding of how the agents work in mammalian cells. The approach, briefly, is to use a library of E. coli genes that are overexpressed in a high-copy-number vector to select bacterial clones that are resistant to the cytotoxic effects of drugs. A tolC bacterial mutant is used to maximize permeability of cells to hydrophobic organic molecules. By using DMP 840 to model the system, we have identified two genes, designated mdaA and mdaB, that impart resistance to DMP 840 when they are expressed at elevated levels. mdaB maps to E. coli map coordinate 66, is located between the parE and parC genes, and encodes a protein of 22 kDa. mdaA maps to E. coli map coordinate 18, is located adjacent to the glutaredoxin (grx) gene, and encodes a protein of 24 kDa. Specific and regulatable overproduction of both of these proteins correlates with DMP 840 resistance. Overproduction of the MdaB protein also imparts resistance to two mammalian topoisomerase inhibitors, Adriamycin and etoposide. In contrast, overproduction of the MdaA protein produces resistance only to Adriamycin. Based on its drug-resistance properties and its location between genes that encode the two subunits of the bacterial topoisomerase IV, we suggest that mdaB acts by modulating topoisomerase IV activity. The location of the mdaA gene adjacent to grx suggests that it acts by a drug detoxification mechanism.

Relevance: 30.00%

Abstract:

Leukotriene A4 (LTA4) hydrolase [(7E,9E,11Z,14Z)-(5S,6S)-5,6-epoxyicosa-7,9,11,14-tetraenoate hydrolase; EC 3.3.2.6] is a bifunctional zinc metalloenzyme which converts LTA4 into the chemotactic agent leukotriene B4 (LTB4). Suicide inactivation, a typical feature of LTA4 hydrolase/aminopeptidase, occurs via an irreversible, apparently mechanism-based, covalent binding of LTA4 to the protein in a 1:1 stoichiometry. Differential lysine-specific peptide mapping of unmodified and suicide-inactivated LTA4 hydrolase has been used to identify a henicosapeptide, encompassing the amino acid residues 365-385 of human LTA4 hydrolase, which is involved in the binding of LTA4, LTA4 methyl ester, and LTA4 ethyl ester to the native enzyme. A modified form of this peptide, generated by lysine-specific digestion of LTA4 hydrolase inactivated by LTA4 ethyl ester, could be isolated for complete Edman degradation. The sequence analysis revealed a gap at position 14, which shows that binding of the leukotriene epoxide had occurred via Tyr-378 in LTA4 hydrolase. Inactivation of the epoxide hydrolase and the aminopeptidase activity was accompanied by a proportionate modification of the peptide. Furthermore, both enzyme inactivation and peptide modification could be prevented by preincubation of LTA4 hydrolase with the competitive inhibitor bestatin, which demonstrates that the henicosapeptide contains functional elements of the active site(s). It may now be possible to clarify the molecular mechanisms underlying suicide inactivation and epoxide hydrolysis by site-directed mutagenesis combined with structural analysis of the lipid molecule covalently bound to the peptide.

Relevance: 30.00%

Abstract:

In Brazil, the National Water Resources Policy establishes charging for water use as an economic instrument of water resources management. Given this character, charging should aim to: rationalize the use of the resource on the basis of its scarcity; recognize water as a good with economic value, reflecting the environmental costs arising from its use; and reduce conflicts among uses, inducing an allocation that takes demand management and the priorities of society into account. Beyond these goals, as an instrument of a policy whose first stated objective is "to ensure to present and future generations the necessary availability of water, at quality standards appropriate to the respective uses", charging must be implemented in a way that leads the user agent to steer its behaviour toward environmental sustainability. On these foundations, this work applies a model of charging for water use whose basic principle is the maintenance of environmental quality, measured by the adequate management of water scarcity, with economic rationalization and financial feasibility serving that goal. This precedence of the environment over economic aspects is intended to disqualify arguments according to which the impacts arising from water uses can be corrected indefinitely through financial investment in infrastructure. To admit that development has such power is to suppose, mistakenly, that the economy constrains the environment rather than the other way around. This observation exposes the problem with most charging proposals that value water on the basis of waste-treatment and hydraulic-works costs. However elaborate such pricing formulas may be, even to the point of making it more expensive, under a correctly defined environmental standard, to withdraw water or discharge pollutants than to rationalize use, the price of water cannot be based on factors whose "sustainability" may run out in the short term, depending on the pace of economic growth. The sustainability of water resources will only be the basis of charging for water use if the amount charged makes that use progressively harder as the resource becomes scarce, and not merely when the costs of measures to mitigate that scarcity become very high. The charging model proposed in this work therefore seeks to ensure that an economic agent that is exhausting the environment cannot afford to pay for that degradation, effectively supporting the water-use-rights (outorga) policy in observing the carrying capacity of the environment.
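Purely as an illustration of the principle (the abstract gives no formula; the functional form and all numbers below are invented), a scarcity-indexed charge could make the unit price rise steeply as committed withdrawals approach the basin's sustainable availability, so that scarcity itself, rather than mitigation cost, drives the price.

```python
# Illustrative sketch of a scarcity-indexed water charge (assumed formula).
def water_price(withdrawn, available, base_price=0.01, exponent=3.0):
    """Unit price (currency per m3) as a function of the scarcity index.

    withdrawn : total volume already granted/committed in the basin
    available : sustainable availability (carrying capacity) of the basin
    The price diverges as the scarcity index approaches 1, making new grants
    prohibitively expensive near the carrying capacity.
    """
    scarcity = min(withdrawn / available, 0.999)
    return base_price / (1.0 - scarcity) ** exponent

for used in (0.2, 0.5, 0.8, 0.95):
    print(f"scarcity {used:.0%}: {water_price(used, 1.0):.2f} per m3")
```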

Relevance: 30.00%

Abstract:

This research investigates the social context of the development of Brazilian scientific production in accounting, defending the thesis that, in the process of disseminating their investigations, agents are prioritizing productivist and quantitative aspects and, consequently, relegating to second place the qualitative and epistemological concern [critical vigilance] with that production. Grounded in Pierre Bourdieu's field theory, the study relates academic socialization, the habitus of the agents embedded in the field, the distribution of scientific capital in the accounting area, and the epistemological characteristics of the area's scientific publications, in order to obtain evidence on the problem raised. It is a survey carried out through semi-structured interviews with a sample of 9 respondents and a documentary study of a sample of 43 articles; the data were analysed using content analysis. Drawing on Bourdieu (2004, 2008, 2009, 2011, 2013), evidence was found that the theories, concepts, methodologies, techniques and other choices made by researchers in the accounting field are, most of the time, no more than strategic manoeuvres aimed at conquering, reinforcing, securing or overthrowing the monopoly of scientific authority, with a view to obtaining greater symbolic power in the field. Regarding the habitus of the agents belonging to the accounting scientific field, a tendency toward productivism was found, a consequence of the demands of the bodies that regulate accounting research (CAPES) and of the symbolic struggles waged in the field to obtain scientific authority. As for academic socialization, the presence of productivist conduct is reinforced through the stricto sensu graduate programmes, which pass on to the agents the rules of the scientific game, schooling them in how to publish a large number of papers in little time and at lower cost. The epistemological analyses made it possible to triangulate the two previous constructs, in order to validate them, and revealed a preference for topics involving accounting aimed at external users and accounting procedures aimed at the financial market, favouring the use of secondary data through documentary research. In methodological terms, positivist studies, with some empiricist traits, were found to be unanimous, showing a lack of innovation in terms of research guided by alternative methodological approaches and a reliance on econometric models to explain the observed reality without theory to ground and explain those models. Finally, the distribution of symbolic capital in the field showed that no individual agent stands out with greater scientific capital but that, institutionally, FEA/USP occupies that prominent position. It can therefore be concluded that the accounting scientific field remains stagnant and without major theoretical changes, owing to productivism and to the symbolic struggles within the field; facts which, in a certain way, have motivated the creation of a kind of "magic recipe for publishing", a legitimized and institutionalized "ideal format" that is difficult to change unless a scientific revolution changes the existing paradigm.

Relevance: 30.00%

Abstract:

Tagging of RNases, such as the ribotoxin α-sarcin, with the variable domains of antibodies directed against surface antigens that are selectively expressed on tumor cells endows their cytotoxic action with cellular specificity. A recombinant single-chain immunotoxin based on the ribotoxin α-sarcin (IMTXA33αS), produced in the generally-regarded-as-safe (GRAS) yeast Pichia pastoris, has recently been described as a promising candidate for the treatment of colorectal cancer cells expressing the glycoprotein A33 (GPA33) antigen, owing to its highly specific and effective cytotoxic action against targeted cells in in vitro assays. Here we report the in vivo antitumor effectiveness of this immunotoxin in nude mice bearing GPA33-positive human colon cancer xenografts. Two independent sets of assays were performed, each including three experimental groups: control (PBS) and treatment with two different doses of immunotoxin (50 or 100 μg/injection) (n = 8). Intraperitoneal administration of IMTXA33αS resulted in significant dose-dependent tumor growth inhibition. In addition, the remaining tumors excised from immunotoxin-treated mice showed absence of the GPA33 antigen and a clear inhibition of angiogenesis and proliferative capacity. No signs of immunotoxin-induced pathological changes were observed in tissue specimens. Overall, these results show efficient and selective cytotoxic action on tumor xenografts, combined with the lack of severe side effects, suggesting that IMTXA33αS is a potential therapeutic agent against colorectal cancer.