250 results for Scratch


Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND Chronic itch with secondary scratch lesions such as prurigo has a major impact on quality of life. Due to its relapsing nature and often unknown origin, its treatment is challenging. OBJECTIVE We sought to demonstrate that alitretinoin can be an efficacious and well-tolerated treatment in a patient suffering from chronic itch with concomitant prurigo and psoriatic lesions. METHODS Case report. RESULTS After 1 month of alitretinoin treatment (30 mg daily), itch as well as prurigo and psoriasis lesions decreased markedly. Three cycles of alitretinoin were administered, as each cessation of treatment led to relapse of the symptoms after 6-8 weeks. Tapering the alitretinoin dose (30 mg every second day) after the third cycle maintained the effects for over 18 months. CONCLUSION Treatment of refractory prurigo with alitretinoin might be an efficacious alternative to standard therapies. In case of relapse, retreatment with alitretinoin reinduces a further long-lasting response.

Relevance:

10.00%

Publisher:

Abstract:

Acute hemorrhagic edema of young children is a rare leukocytoclastic vasculitis that has been reported exclusively in small retrospective case series, case reports, or quizzes. Considering that retrospective experience deserves confirmation in at least one observational prospective study, we present our experience with 16 children (12 boys and 4 girls, 5-28 months of age) affected by acute hemorrhagic edema. The patients were in good general condition, with low-grade or even absent fever. They presented with non-itching, red to purpuric targetoid lesions that did not change location within hours, with non-pitting and sometimes tender indurative swelling, and without mucous membrane involvement or scratch marks. Signs of articular, abdominal, or kidney involvement were absent. Antinuclear and antineutrophil cytoplasmic autoantibodies were never detected. The cases were managed symptomatically as outpatients and fully resolved within 4 weeks or less. No recurrence or familial clustering was noted. CONCLUSION This is the first prospective evaluation of acute hemorrhagic edema. Our findings emphasize its distinctive tetrad: a well-appearing child; targetoid lesions that do not change location within hours; non-pitting, sometimes tender edema; and complete resolution without recurrence. What is known: • Acute hemorrhagic edema of young children is considered a benign vasculitis. • There have been ≈100 cases reported in small retrospective case series. What is new: • The first prospective evaluation of this condition emphasizes its features: febrile prodrome; well-appearing child; targetoid lesions not changing location within hours; non-pitting, sometimes tender indurative edema; absent extracutaneous involvement; resolution within 3 weeks. • Antineutrophil cytoplasmic autoantibodies do not play a pathogenic role.

Relevance:

10.00%

Publisher:

Abstract:

Recent data show that the percentage of time spent preparing food has decreased during the past few years, and little is known about how much time people spend grocery shopping. Pre-prepared food is often higher in calories and fat than food prepared at home from scratch. It has been suggested that, because of the higher energy and total fat levels, increased consumption of pre-prepared foods compared to home-cooked meals can lead to weight gain, which in turn can lead to obesity. Nevertheless, to date no study has examined this relationship. The purpose of this study was to determine (i) the association between adult body mass index (BMI) and the time spent preparing meals, and (ii) the association between adult BMI and time spent shopping for food. Data on food habits and body size were collected with a self-report survey of ethnically diverse adults between the ages of 17 and 70 at a large university. The survey was used to recruit people to participate in nutrition or appetite studies. Among other data, the survey collected demographics (gender, race/ethnicity), minutes per week spent preparing meals, and minutes per week spent grocery shopping. Height and weight were self-reported and used to calculate BMI. The study population consisted of 689 subjects, of whom 276 were male and 413 were female. The mean age was 23.5 years, with a median age of 21 years. The fraction of subjects with BMI below 24.9 was 65%, between 25 and 29.9 was 26%, and 30 or greater was 9%. Analysis of variance was used to examine associations between food preparation time and BMI, as sketched below. The results showed no statistically significant association of healthy weight, overweight, or obesity with either food preparation time or grocery shopping time. Of those in the sample who reported preparing food, the mean food preparation time per week for the healthy-weight, overweight, and obese groups was 12.8, 12.3, and 11.6 minutes, respectively. Similarly, the mean weekly grocery shopping time for the healthy-weight, overweight, and obese groups was 60.3 minutes per week (8.6 min/day), 61.4 minutes (8.8 min/day), and 57.3 minutes (8.2 min/day), respectively. Since this study was conducted on a university campus, most of the sample is assumed to have been students, a percentage of whom might have been using campus meal plans and would therefore have reported little meal preparation or grocery shopping time. Further research should examine the relationships between meal preparation time and time spent shopping for food in a sample more representative of the general public. In addition, most people spent very little time preparing food; health promotion programs for this population therefore need to focus on strategies for preparing quick meals or eating in restaurants/cafeterias.
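
For illustration, the group comparison described above can be reproduced with a one-way analysis of variance; this is a hedged sketch, assuming the survey records are available as (BMI, minutes-per-week) pairs, with hypothetical names throughout:

```python
# Minimal one-way ANOVA sketch for the comparison described above.
# Record layout and function names are hypothetical placeholders.
from scipy.stats import f_oneway

def bmi_group(bmi):
    """Classify BMI into the three groups used in the study."""
    if bmi < 25:
        return "healthy"
    if bmi < 30:
        return "overweight"
    return "obese"

def prep_time_anova(records):
    """records: iterable of (bmi, minutes_per_week) tuples."""
    groups = {"healthy": [], "overweight": [], "obese": []}
    for bmi, prep_min in records:
        groups[bmi_group(bmi)].append(prep_min)
    f, p = f_oneway(groups["healthy"], groups["overweight"], groups["obese"])
    return f, p  # p > 0.05 would match the study's null finding

# Example: f, p = prep_time_anova([(22.1, 15), (27.4, 10), (31.0, 12)])
```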

Relevance:

10.00%

Publisher:

Abstract:

Modern FPGAs with run-time reconfiguration allow the implementation of complex systems that offer the flexibility of software-based solutions combined with the performance of hardware. This combination of characteristics, together with the development of new specific methodologies, makes it feasible to reach new points of the system design space, so embedded systems built on these platforms are acquiring more and more importance. However, the practical exploitation of this technique in fields that have traditionally relied on resource-restricted embedded systems is mainly limited by strict power consumption requirements, cost, and the high dependence of DPR techniques on the specific features of the underlying device technology. In this work we tackle these problems by designing a reconfigurable platform based on the low-cost, low-power Spartan-6 FPGA family. The full process of developing the platform from scratch is detailed in the paper. In addition, the implementation of the reconfiguration mechanism, with two profiles, is reported. The first profile is a low-area, low-speed reconfiguration engine based mainly on software functions running on the embedded processor, while the other is a hardware version of the same engine, implemented in the FPGA logic. This reconfiguration hardware block was originally designed for the Virtex-5 family, and its porting process is also described in this work, addressing the interoperability problem among different families.
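
As a purely illustrative sketch of the software-profile engine described above (not the paper's implementation), the following streams a partial bitstream to the configuration port word by word; the `write_icap` callback, the 16-bit word size (Spartan-6 exposes a 16-bit ICAP data port), and the padding policy are all assumptions:

```python
# Hedged sketch of a software reconfiguration engine: stream a partial
# bitstream to the ICAP word by word. write_icap() is a hypothetical
# callback standing in for the platform-specific ICAP access.
import struct

WORD_BYTES = 2  # assumed 16-bit ICAP data port (Spartan-6 style)

def reconfigure(bitstream_path, write_icap):
    """Feed a partial bitstream to the ICAP through write_icap(word)."""
    with open(bitstream_path, "rb") as f:
        data = f.read()
    if len(data) % WORD_BYTES:
        data += b"\xff" * (WORD_BYTES - len(data) % WORD_BYTES)  # assumed pad
    for off in range(0, len(data), WORD_BYTES):
        (word,) = struct.unpack(">H", data[off:off + WORD_BYTES])
        write_icap(word)  # blocking write; the hardware profile would DMA this

# Usage: reconfigure("partial_region1.bit", my_icap_driver.write)  # hypothetical
```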

Relevance:

10.00%

Publisher:

Abstract:

Global analyzers traditionally read and analyze the entire program at once, in a nonincremental way. However, there are many situations which are not well suited to this simple model and which instead require reanalysis of certain parts of a program which has already been analyzed. In these cases, it appears inefficient to perform the analysis of the program again from scratch, as needs to be done with current systems. We describe how the fixed-point algorithms used in current generic analysis engines for (constraint) logic programming languages can be extended to support incremental analysis. The possible changes to a program are classified into three types: addition, deletion, and arbitrary change. For each one of these, we provide one or more algorithms for identifying the parts of the analysis that must be recomputed and for performing the actual recomputation. The potential benefits and drawbacks of these algorithms are discussed. Finally, we present some experimental results obtained with an implementation of the algorithms in the PLAI generic abstract interpretation framework. The results show significant benefits when using the proposed incremental analysis algorithms.
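
The incremental strategy outlined above can be pictured with a generic worklist fixpoint that, after a change, re-seeds only the affected nodes instead of starting from scratch; a minimal sketch under assumed monotone transfer functions, not the PLAI algorithms themselves:

```python
# Illustrative worklist fixpoint with incremental re-seeding (not PLAI itself).
# Assumes transfer functions are monotone over a finite abstract domain.
def fixpoint(deps, transfer, values, worklist):
    """deps[n] = nodes whose value depends on n; transfer(n, values) -> value."""
    pending = list(worklist)
    while pending:
        n = pending.pop()
        new = transfer(n, values)
        if new != values.get(n):
            values[n] = new
            pending.extend(deps.get(n, ()))  # only dependents are revisited
    return values

# Full analysis:        fixpoint(deps, transfer, {}, all_nodes)
# Incremental addition: fixpoint(deps, transfer, old_values, [changed_node])
# Deletion / arbitrary change would additionally reset the values reachable
# from the changed node before re-running, as the abstract describes.
```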

Relevance:

10.00%

Publisher:

Abstract:

In 1999 Spain approved the Building Construction Law (LOE) to regulate the construction sector, which suffered from some shortcomings from the legal point of view. The LOE has now been in force for 12 years, changing the Spanish construction world under the influence of internationalization. The LOE regulates the different agents involved in building construction: the project designer, the construction director, the developer, the builder, the director of execution of the construction (an agent that exists only in Spain, similar to the construction engineer abroad), the control entities, and the users. However, it lacks the figure of the Project Manager, who acts by delegation of the developer, helping to organize, direct and manage the process. This figure currently operates in a market and under contracts that are not legally regulated in Spain, so its role should be defined and its regulation established in the LOE. The Spanish rendering of "Project Manager" is owed to Professor Rafael de Heredia in his book Dirección Integrada de Proyecto (Integrated Project Management), as the agent acting on behalf of the developer's organization and assuming control of the project. There already exist in Spain AEDIP (the Spanish association of integrated construction project management, which comprises the major Project Management companies in Spain) and MeDIP (Master in Integrated Construction Project Management, the largest and most advanced course of study in construction project management at the Universidad Politécnica de Madrid, also taught in Argentina). Integrated Project Management applied to the construction process is a methodological technique that helps organize, control and manage the developer's resources throughout the building process. When resources are limited, which is the usual situation, managing them efficiently becomes very important; in construction they are not merely limited but scarce, so comprehensive control and monitoring of them becomes not only important but crucial. The alternative of starting from scratch with a specialized team that follows the process directly, ensuring that scarce resources are used in the best possible way, requires a specific methodology (the DIP manual, breakdown structures such as the EDR and EDP, risk management and control, design management, etc.). This is the methodology used by Project Managers to ensure that the initial objectives of the developers or investors are met and that all the actors in the process, from the designer to the construction company, keep the aim of the project in mind, so that their particular interests do not prevail over the interests of the project.
Among the agents listed in the building process, the Project Manager, or DIPE (Integrated Director of the Building Process, a name proposed for possible incorporation into the LOE), is currently not listed as such in the LOE; it is an agent of the building process that is not regulated from the legal point of view and carries no legal obligations, unlike the project, the builder, the construction director, etc., which the law requires. Only clients who already know the value of these services hire a DIPE, there being no legal obligation to do so; the market is thus delivering its verdict on this new figure: if it were not necessary, it would not be hired and would eventually disappear from the building process. The aim of this article is to regulate this role and to incorporate the name DIPE into the Spanish Building Construction Law (LOE).

Relevance:

10.00%

Publisher:

Abstract:

Temperature is a first-class design concern in modern integrated circuits. The important increase in power densities associated with recent technology evolutions has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature impacts several circuit parameters such as speed, cooling budgets, reliability, power consumption, etc. In order to fight against these negative effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip relying on the information of a monitoring system that provides run-time thermal information of the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis. This thesis approaches the matter of on-chip temperature monitoring from different perspectives and levels, providing solutions to some of the most important issues.
The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based upon a mechanism that obtains a pulse whose width depends on the variation of the leakage currents with temperature. In a nutshell, a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the width of the pulse displays an exponential dependence on temperature, the conversion into a digital word is realized by means of a logarithmic counter that performs both the time-to-digital conversion and the linearization of the output. The structure resulting from this combination of elements is implemented in a 0.35 µm technology and is characterized by very reduced area, 10,250 µm², and power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous works at the time it was first published and, by the time of publication of this thesis, they still outperform all previous implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity even without calibration: it displays a 3σ error of 1.97 °C, appropriate for dealing with DTM applications. As explained, the sensor is completely compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it especially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip.
The exacerbated process fluctuations carried along with recent technology nodes jeopardize the linearity characteristics of the first sensor. In order to overcome these problems, a new temperature-inferring technique is proposed. In this case, we also rely on the thermal dependencies of the leakage currents that are used to discharge a floating node, but now the result comes from the ratio of two different measurements, in one of which a characteristic of the discharging transistor (the gate voltage) is altered. This ratio proves to be very robust against process variations and displays a more than sufficient linearity with temperature: a 3σ error of 1.17 °C, considering process variations and performing two-point calibration. The implementation of the sensing part based on this new technique implies several design issues, such as the generation of a process-variation-independent voltage reference, that are analyzed in depth in the thesis. In order to perform the time-to-digital conversion, we employ the same digitization structure as the former sensor. A completely new standard cell library targeting low area and power overhead is built from scratch to implement the digitization part. Putting all the pieces together, we achieve a complete sensor system characterized by ultra-low energy per conversion (48-640 pJ) and a tiny area of 0.0016 mm²; this figure outperforms all previous works. To prove this statement, we perform a thorough comparison with over 40 works from the scientific literature.
Moving up to the system level, the third contribution is centered on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works in the literature target maximizing the accuracy of the system with the minimum number of monitors. In contrast, we introduce new quality metrics apart from just the number of sensors: we also consider the power consumption, the sampling frequency, the interconnection costs, and the possibility of choosing among different types of monitors. The model is introduced in a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power and interconnection constraints, and a collection of monitor types; the algorithm yields the selected type of monitor, the number of monitors, their positions, and the optimum sampling rate. We test the algorithm with the Alpha 21364 processor under several constraint configurations to prove its validity. Compared to other previous works in the literature, the modeling presented here is the most complete. Finally, the last contribution targets the network level: given an allocated set of temperature monitors, we focus on solving the problem of connecting them in a way that is efficient in area and power. Our first proposal is the introduction of a new interconnection hierarchy level, the threshing level, between the monitors and the traditional peripheral buses, which applies data selectivity to reduce the amount of information sent to the central controller. The idea behind this new level is that in this kind of network most data are useless, because from the controller's viewpoint only a small amount of data (normally just the extreme values) is of interest. To cover the new interconnection level, we propose a single-wire monitoring network based on a time-domain signaling scheme that significantly reduces both the switching activity over the wire and the power consumption of the network. This scheme codes the information in the time domain and allows a straightforward extraction of an ordered list of values from the maximum to the minimum. If the scheme is applied to monitors that employ time-to-digital conversion, digitization resources can be shared in both time and space, producing an important saving in area and power consumption. Two prototypes of complete monitoring systems are presented; they significantly outperform previous works in terms of area and, especially, power consumption.
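
The read-out chain of the first sensor (exponential pulse width, logarithmic counter) can be illustrated with a toy numerical model; the constants t0, T0 and the clock frequency below are hypothetical fit parameters, not measured values from the thesis:

```python
# Toy model of the first sensor's read-out: exponential pulse width,
# logarithmic counter output. t0, T0 and f_clk are assumed constants.
import math

T0 = 12.0     # degrees C, assumed exponential slope of the leakage
t0 = 3.2e-3   # s, assumed pulse width at the reference temperature
T_REF = 25.0  # degrees C, assumed reference point

def pulse_width(temp_c):
    """Discharge time falls exponentially as leakage grows with temperature."""
    return t0 * math.exp(-(temp_c - T_REF) / T0)

def log_counter(width_s, f_clk=1e6):
    """Logarithmic counter: time-to-digital conversion plus linearization."""
    return math.log2(width_s * f_clk)

def temperature(code, f_clk=1e6):
    """Invert the read-out chain: the digital code is linear in temperature."""
    width = 2.0 ** code / f_clk
    return T_REF - T0 * math.log(width / t0)

# The chain round-trips: code is linear in temperature by construction.
assert abs(temperature(log_counter(pulse_width(60.0))) - 60.0) < 1e-9
```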

Relevance:

10.00%

Publisher:

Abstract:

Global analyzers traditionally read and analyze the entire program at once, in a non-incremental way. However, there are many situations which are not well suited to this simple model and which instead require reanalysis of certain parts of a program which has already been analyzed. In these cases, it appears inefficient to perform the analysis of the program again from scratch, as needs to be done with current systems. We describe how the fixpoint algorithms in current generic analysis engines can be extended to support incremental analysis. The possible changes to a program are classified into three types: addition, deletion, and arbitrary change. For each one of these, we provide one or more algorithms for identifying the parts of the analysis that must be recomputed and for performing the actual recomputation. The potential benefits and drawbacks of these algorithms are discussed. Finally, we present some experimental results obtained with an implementation of the algorithms in the PLAI generic abstract interpretation framework. The results show significant benefits when using the proposed incremental analysis algorithms.

Relevance:

10.00%

Publisher:

Abstract:

On numerous occasions throughout history, the imagination of creators has gone well beyond the technical possibilities of their time. Many new ideas have required a long period to materialize as built reality, until technological and industrial development had reached a sufficient degree of maturity. In the field of architecture, these technical limitations have gradually narrowed, leading to the current situation in which any formal approach can be represented graphically and analyzed from a structural point of view; the barrier that historically existed in the treatment of form has thus been overcome. This doctoral thesis examines how the formulation of the Finite Element Method in the 1950s and of Bézier curves in the 1960s, together with the subsequent spread of personal computers and the associated software (mainly CAD and FEM) in architecture and engineering offices from the 1990s onward, enabled the development of any architectural proposal, however complex, causing a genuine revolution at the formal level in the world of architecture, especially in the field of singular or iconic buildings. This process is analyzed through eight buildings, four before and four after the disappearance of the above-mentioned barrier, symbolically placed in the 1980s: the Frontón Recoletos in Madrid, the Seagram Building in New York, Habitat '67 in Montreal, the Sydney Opera House, the Guggenheim Museum Bilbao, the Victoria & Albert Museum extension in London, the "Meiso no Mori" crematorium in Gifu, and the new CCTV headquarters in Beijing.
Among them, the Sydney Opera House, designed by the Danish architect Jørn Utzon, condenses many of the main aspects investigated here regarding the influence that representation methods and structural analysis exert on the design and construction of architectural works. For this reason, and because it is considered a global architecture milestone, it is selected as the case study. The general idea of the building, which dates from 1956, belongs to a period immediately preceding the scientific and technological development mentioned above. This lack of design tools adequate to the formal complexity of the proposal conditioned the project's progress enormously, delaying it dramatically and multiplying its budget to the point that the Danish architect himself was removed from the works before their completion. Furthermore, the roof structure that was finally built differs greatly from Utzon's initial vision: where he had imagined thin concrete shells floating over the landscape, a heavier structure was built, consisting of prestressed concrete ribs with significantly larger sections. The geometry also had to be modified considerably with respect to the initial proposal. If this building were to be built today, the course of events would surely follow a very different path.
Given this assumption, the following questions arise: Would it be possible to perform a structural analysis of the shell roof Utzon initially proposed in the competition with the tools available nowadays? Would that proposal be structurally feasible? The following pages aim to answer these questions, highlighting the impact that personal computers and the associated software have had on the way buildings are conceived and built. Variants of the laminar solution proposed in the competition phase have also been analyzed; through them, complying as far as possible with the suggestions that Ove Arup and his team made to Jørn Utzon throughout the lengthy design process, the overall behavior of the building's structure is improved. Finally, we have started from scratch and explored, from a contemporary perspective, possible methodological approaches to finding structural solutions compatible with the geometry Utzon originally proposed for the Sydney Opera House roofs, a proposal that was never built (nor even analyzed), considering the technological, scientific and industrial means currently available.
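
Of the two enabling tools mentioned, Bézier curves are simple enough to sketch: de Casteljau's algorithm evaluates a curve of any degree by repeated linear interpolation, which is what made free-form geometry tractable in CAD software. A minimal illustration (the control points are arbitrary):

```python
# De Casteljau evaluation of a Bezier curve: the basic free-form
# primitive that CAD systems made available to architects.
def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1]."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        pts = [
            tuple((1 - t) * a + t * b for a, b in zip(p, q))
            for p, q in zip(pts, pts[1:])
        ]  # repeated linear interpolation between consecutive points
    return pts[0]

# A cubic arc loosely evoking a shell rib profile (arbitrary points):
arc = [(0.0, 0.0), (1.0, 2.5), (3.0, 2.5), (4.0, 0.0)]
print([de_casteljau(arc, t / 4) for t in range(5)])
```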

Relevance:

10.00%

Publisher:

Abstract:

Nowadays, technological advances in the Intelligent Transportation Systems (ITS) field have led to the development of several advanced driver assistance systems (ADAS), improving the experience and safety of the passengers, especially the driver. Most of these systems are designed to warn the driver about risk situations, such as involuntary lane departure or the proximity of obstacles on the road. However, other ADAS go a step further and are able to cooperate with the driver in the control of the vehicle, or even relieve the driver of some tedious tasks. Examples of this kind of system are the Electronic Stability Program (ESP), the Anti-lock Braking System (ABS), Cruise Control (CC), and the recently commercialized assisted parking systems. Within this line of development, the next step is the removal of the human driver, developing systems able to drive a vehicle autonomously and with a performance superior to that of a human driver.
First of all, this dissertation presents a control architecture for the automation of vehicles. It is made up of several hardware and software components, grouped according to their main function. The design of this architecture builds on the previous work of the AUTOPIA Program, although notable improvements have been made regarding the efficiency, robustness and scalability of the system. Also remarkable is the development of a localization algorithm based on particle swarms, whose performance is similar to that of the well-known particle filters. It is conceived as a method for filtering and fusing the information obtained from the different sensors on board the vehicle, including a GPS (Global Positioning System) receiver, inertial measurement units (IMU), and data taken directly from the sensors fitted by the manufacturer, such as wheel speed and steering wheel position. With this method the localization problem, indispensable for the development of autonomous driving systems, is properly solved. The research then studies the viability of applying learning and adaptation techniques to the design of controllers for the vehicle. As a starting point, the Q-learning reinforcement learning method is used to generate a lateral fuzzy controller from scratch, without any prior knowledge, as sketched below. Subsequently, an on-line tuning method is presented for adapting the longitudinal control to unpredictable disturbances of the environment, such as changes in road slope, wheel friction, or the occupants' weight. Finally, the results obtained during an autonomous driving experiment on real roads are presented; it was carried out in June 2012, from the town of San Lorenzo de El Escorial to the facilities of the Center for Automation and Robotics (CAR) in Arganda del Rey. The main goal of this demonstration was to validate the performance, robustness and capability of the proposed architecture to deal with the problem of autonomous driving under conditions much more realistic than those that can be achieved on closed test tracks.
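
The tabular Q-learning update underlying the lateral-controller experiment is standard; a generic sketch follows, in which the steering action set and the state discretization are hypothetical placeholders, not AUTOPIA's actual design:

```python
# Generic tabular Q-learning, as used conceptually to learn a lateral
# controller from scratch. Actions and state encoding are hypothetical.
import random
from collections import defaultdict

ACTIONS = [-1.0, -0.5, 0.0, 0.5, 1.0]  # assumed steering corrections

def choose_action(Q, state, epsilon=0.1):
    """Epsilon-greedy exploration over the discrete action set."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.95):
    """Standard tabular Q-learning backup."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next
                                   - Q[(state, action)])

Q = defaultdict(float)  # state: e.g. discretized lateral + heading error
```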

Relevance:

10.00%

Publisher:

Abstract:

Nowadays, with the ongoing and rapid evolution of information technology and computing devices, large volumes of data are continuously collected and stored in different domains and through various real-world applications. Extracting useful knowledge from such a huge amount of data usually cannot be performed manually and requires adequate machine learning and data mining techniques. Classification is one of the most important such techniques and has been successfully applied to several areas. Roughly speaking, classification consists of two main steps: first, learning a classification model, or classifier, from available training data and, second, classifying new incoming unseen data instances using the learned classifier. Classification is supervised when all the class values are present in the training data (fully labeled data), semi-supervised when only some class values are known (partially labeled data), and unsupervised when all the class values are missing from the training data (unlabeled data). In addition, besides this taxonomy, the classification problem can be categorized as uni-dimensional or multi-dimensional depending on the number of class variables (one or more, respectively), and as stationary or streaming depending on the characteristics of the data and the rate of change underlying it. Throughout this thesis we deal with the classification problem under three different settings, namely, supervised multi-dimensional stationary classification, semi-supervised uni-dimensional streaming classification, and supervised multi-dimensional streaming classification. To accomplish this task, we basically use Bayesian network classifiers as models.
The first contribution, addressing the supervised multi-dimensional stationary classification problem, consists of two new methods for learning multi-dimensional Bayesian network classifiers from stationary data. They are proposed from two different points of view. The first method, named CB-MBC, is based on a wrapper greedy forward selection approach, while the second, named MB-MBC, is a filter constraint-based approach relying on Markov blankets. Both methods are applied to two important real-world problems: the prediction of human immunodeficiency virus type 1 (HIV-1) reverse transcriptase and protease inhibitors, and the prediction of the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39). The experimental study includes comparisons of CB-MBC and MB-MBC against state-of-the-art multi-dimensional classification methods, as well as against methods commonly used to solve the Parkinson's disease prediction problem, namely, multinomial logistic regression, ordinary least squares, and censored least absolute deviations. For both case studies, the results are promising in terms of classification accuracy, as well as regarding the analysis of the learned graphical structures, which identify known and novel interactions among variables.
The second contribution, addressing the semi-supervised uni-dimensional streaming classification problem, consists of a novel method (CPL-DS) for classifying partially labeled data streams. Data streams differ from stationary data sets in their highly rapid generation process and their concept-drifting aspect: the learned concepts and/or the underlying distribution are likely to change and evolve over time, which makes the current classification model obsolete and in need of updating. CPL-DS uses the Kullback-Leibler divergence and bootstrapping to quantify and detect three possible kinds of drift: feature, conditional, or dual. Then, if any drift is detected, a new classification model is learned using the expectation-maximization algorithm; otherwise, the current classification model is kept unchanged. CPL-DS is general, as it can be applied to several classification models. Using two different models, the naive Bayes classifier and logistic regression, CPL-DS is tested with synthetic data streams and applied to the real-world problem of malware detection, in which newly received files must be continuously classified as malware or goodware. Experimental results show that our approach is effective for detecting different kinds of drift from partially labeled data streams and achieves good classification performance.
Finally, the third contribution, addressing the supervised multi-dimensional streaming classification problem, consists of two adaptive methods, namely, Locally Adaptive-MB-MBC (LA-MB-MBC) and Globally Adaptive-MB-MBC (GA-MB-MBC). Both methods monitor concept drift over time using the average log-likelihood score and the Page-Hinkley test, sketched below. Then, if a drift is detected, LA-MB-MBC adapts the current multi-dimensional Bayesian network classifier locally around each changed node, whereas GA-MB-MBC learns a new multi-dimensional Bayesian network classifier from scratch. An experimental study carried out using synthetic multi-dimensional data streams shows the merits of both proposed adaptive methods.
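
The Page-Hinkley test used by both adaptive methods is compact enough to sketch; the thresholds and the assumption that drift shows up as a drop in the average log-likelihood are choices of this illustration, not the thesis parameters:

```python
# Minimal Page-Hinkley drift detector over a stream of scores (here,
# average log-likelihoods). delta and lambda_ are assumed thresholds.
class PageHinkley:
    def __init__(self, delta=0.005, lambda_=50.0):
        self.delta, self.lambda_ = delta, lambda_
        self.n, self.mean = 0, 0.0
        self.cum, self.min_cum = 0.0, 0.0

    def add(self, x):
        """Feed one score; return True when a downward drift is signalled."""
        self.n += 1
        self.mean += (x - self.mean) / self.n   # running mean of the scores
        self.cum += self.mean - x - self.delta  # a drop in x inflates cum
        self.min_cum = min(self.min_cum, self.cum)
        return self.cum - self.min_cum > self.lambda_

# detector = PageHinkley(); drift = detector.add(avg_loglik_of_batch)
```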

Relevance:

10.00%

Publisher:

Abstract:

The traditional power grid is just a one-way supplier that gets no feedback about the energy delivered, the tariffs that could be most suitable for customers, the shifting daily electricity needs of a facility, etc. It is therefore only natural that efforts are being invested in improving power grid behavior and turning it into a Smart Grid. To this end, however, several components have to be either upgraded or created from scratch. Among the new components required, middleware appears as a critical one, for it must abstract away the diversity of the devices used for power transmission (smart meters, embedded systems, etc.) and provide the application layer with a homogeneous interface to power production and consumption management data that could not be provided before. Additionally, middleware is expected to guarantee that updates to the current metering infrastructure (changes in service or hardware availability), or any added legacy measuring appliance, are acknowledged in any future request. Finally, semantic features are of major importance for tackling scalability and interoperability issues. This paper presents a survey of the most prominent middleware architectures for Smart Grids, along with an evaluation of their features and of their strong points and weaknesses.
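
The abstraction role attributed to middleware above is, at its core, an adapter layer; a minimal sketch of the idea, in which every class and method name is hypothetical rather than taken from any surveyed middleware:

```python
# Hypothetical adapter layer: heterogeneous meters behind one interface.
from abc import ABC, abstractmethod

class MeterAdapter(ABC):
    """Homogeneous interface the application layer programs against."""
    @abstractmethod
    def read_consumption_kwh(self) -> float: ...

class SmartMeterAdapter(MeterAdapter):
    def __init__(self, driver):
        self.driver = driver  # vendor-specific driver object, assumed
    def read_consumption_kwh(self):
        return self.driver.poll()  # hypothetical vendor call

class LegacyMeterAdapter(MeterAdapter):
    def __init__(self, pulse_counter, kwh_per_pulse):
        self.counter, self.k = pulse_counter, kwh_per_pulse
    def read_consumption_kwh(self):
        return self.counter() * self.k  # pulses converted to kWh

def total_consumption(meters):
    """Application code sees only the homogeneous interface."""
    return sum(m.read_consumption_kwh() for m in meters)
```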

Relevance:

10.00%

Publisher:

Abstract:

Nowadays we are experiencing the rise of Cloud Computing, and every day more major IT companies are betting heavily on these kinds of services. On one hand, some of them offer services, such as Amazon with its IaaS (Infrastructure as a Service) system, Amazon Web Services (AWS); on the other hand, some of them use these services, as is the case in this project, in which Telefónica I+D uses the services provided by AWS in its projects. Due to this growth in the use of distributed applications, it is important to consider the roles of the developers and system administrators who have to work on and maintain all the remote machines of one or several projects from a single local machine. Helping to perform these tasks in the most convenient and automatic way possible is the main goal of this project.
Specifically, the goal of this project is the design and implementation of a software solution that improves productivity in the development and deployment of applications on a set of remote machines from a single local machine, building on an earlier proof of concept that tested the most basic functionality of the libraries used to develop the tool. Throughout this project, the different alternatives on the market that offer at least part of the solution to the problems addressed have been studied, although the company's requirements indicated that the tool should be implemented from scratch. The initial proof of concept was then thoroughly studied and, with the knowledge acquired on the subject, improved to fulfill the stated goals. After the complete development and implementation of the tool, possible future lines of work are suggested.
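
As an illustration of the kind of local-to-remote fan-out such a tool automates, here is a minimal sketch using the widely available paramiko SSH library; the host list, user, key path and service name are placeholders, and this is not Telefónica I+D's implementation:

```python
# Sketch of running one command on several remote machines from a
# single local machine. Hosts, user and key path are placeholders.
import os
import paramiko

HOSTS = ["ec2-host-1", "ec2-host-2"]  # hypothetical AWS instances

def run_everywhere(command, user="ec2-user", key_path="~/.ssh/id_rsa"):
    results = {}
    for host in HOSTS:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username=user,
                       key_filename=os.path.expanduser(key_path))
        _, stdout, stderr = client.exec_command(command)
        results[host] = (stdout.read().decode(), stderr.read().decode())
        client.close()
    return results

# run_everywhere("sudo systemctl restart my-service")  # hypothetical service
```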

Relevance:

10.00%

Publisher:

Abstract:

An important part of human intelligence, both historically and operationally, is our ability to communicate. We learn how to communicate, and maintain our communicative skills, in a society of communicators, a highly effective way to reach and maintain proficiency in this complex skill. Principles that might allow artificial agents to learn language this way are incompletely known at present: the multi-dimensional nature of socio-communicative skills is beyond every machine learning framework proposed so far. Our work begins to address the challenge of observation-based machine learning of natural language and communication. Our framework can learn complex communicative skills with minimal up-front knowledge. The system learns by incrementally producing predictive models of causal relationships in observed data, guided by goal inference and reasoning using forward-inverse models. We present results from two experiments where our S1 agent learns human communication by observing two humans interacting in a real-time TV-style interview, using multimodal communicative gestures and situated language to talk about the recycling of various materials and objects. S1 can learn complex multimodal language and multimodal communicative acts, with a vocabulary of 100 words forming natural sentences with relatively complex sentence structure, including manual deictic reference and anaphora. S1 is seeded only with high-level information about the goals of the interviewer and interviewee, and a small ontology; no grammar or other information is provided to S1 a priori. The agent learns the pragmatics, semantics, and syntax of complex spoken utterances and gestures from scratch, by observing the humans compare and contrast the cost and pollution related to recycling aluminum cans, glass bottles, newspaper, plastic, and wood. After 20 hours of observation, S1 can perform an unscripted TV interview with a human, in the same style, without making mistakes.

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a novel tablet-based end-user interface for industrial robot programming, called Hammer. This application makes it easier to program tasks for industrial robots such as polishing, milling, or grinding. It is based on the Scratch programming language, but specifically designed and created for Android OS. It is a visual programming concept that allows operators who are not skilled programmers to create programs. The application also allows the tasks to be monitored while they are being executed, by overlaying real-time information through augmented reality. The application includes a teach pendant screen that can be customized according to the operator's needs at any moment.