986 results for rapid evolution


Relevance: 60.00%

Publisher:

Abstract:

Nowadays, with the ongoing and rapid evolution of information technology and computing devices, large volumes of data are continuously collected and stored in many domains and across a variety of real-world applications. Extracting useful knowledge from such huge amounts of data usually cannot be done manually and requires adequate machine learning and data mining techniques. Classification is one of the most important of these techniques and has been successfully applied in several areas. Roughly speaking, classification consists of two main steps: first, learning a classification model, or classifier, from available training data; and second, classifying new incoming, unseen data instances with the learned classifier. Classification is supervised when all class values are present in the training data (fully labeled data), semi-supervised when only some class values are known (partially labeled data), and unsupervised when all class values are missing from the training data (unlabeled data). Besides this taxonomy, a classification problem can be categorized as uni-dimensional or multi-dimensional depending on the number of class variables (one or more, respectively), and as stationary or streaming depending on the characteristics of the data and the rate of change underlying it. Throughout this thesis we address the classification problem in three different settings: supervised multi-dimensional stationary classification, semi-supervised uni-dimensional streaming classification, and supervised multi-dimensional streaming classification. To accomplish this, we mainly use Bayesian network classifiers as models.

The first contribution, addressing the supervised multi-dimensional stationary classification problem, consists of two new methods for learning multi-dimensional Bayesian network classifiers (MBCs) from stationary data, proposed from two different points of view. The first method, named CB-MBC, is based on a wrapper greedy forward-selection approach, while the second, named MB-MBC, is a filter constraint-based approach built on Markov blankets. Both methods are applied to two important real-world problems: predicting human immunodeficiency virus type 1 (HIV-1) reverse transcriptase and protease inhibitors, and predicting the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39). The experimental study compares CB-MBC and MB-MBC against state-of-the-art multi-dimensional classification methods, as well as against methods commonly used for the Parkinson's disease prediction problem, namely multinomial logistic regression, ordinary least squares, and censored least absolute deviations. In both case studies the results are promising in terms of classification accuracy, and the analysis of the learned MBC graphical structures identifies both known and novel interactions among variables.

The second contribution, addressing the semi-supervised uni-dimensional streaming classification problem, is a novel method, CPL-DS, for classifying partially labeled data streams. Data streams differ from stationary data sets in their highly rapid generation process and their concept-drifting nature: the learned concepts and/or the underlying distribution are likely to change and evolve over time, rendering the current classification model out of date and in need of updating. CPL-DS uses the Kullback-Leibler divergence together with bootstrapping to quantify and detect three possible kinds of drift: feature, conditional, or dual. If any drift is detected, a new classification model is learned with the expectation-maximization (EM) algorithm; otherwise, the current model is kept unchanged. CPL-DS is general in that it can be applied to several classification models. Using two different models, the naive Bayes classifier and logistic regression, CPL-DS is tested on synthetic data streams and applied to the real-world problem of malware detection, where newly received files must be continuously classified as malware or goodware. Experimental results show that the approach is effective at detecting different kinds of drift from partially labeled data streams while maintaining good classification performance.

Finally, the third contribution, addressing the supervised multi-dimensional streaming classification problem, consists of two adaptive methods: Locally Adaptive-MB-MBC (LA-MB-MBC) and Globally Adaptive-MB-MBC (GA-MB-MBC). Both monitor concept drift over time using the average log-likelihood score and the Page-Hinkley test. When a drift is detected, LA-MB-MBC adapts the current multi-dimensional Bayesian network classifier locally around each changed node, whereas GA-MB-MBC learns a new multi-dimensional Bayesian network classifier from scratch. An experimental study on synthetic multi-dimensional data streams shows the merits of both proposed adaptive methods.
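The Page-Hinkley test used for drift monitoring in the third contribution can be sketched in a few lines. This is a minimal illustration of the standard test, not the thesis's implementation: the `delta` (tolerated change) and `lam` (alarm threshold) values are illustrative, and to flag a drop in the average log-likelihood one would feed the negated score so that the drop shows up as an increase.

```python
class PageHinkley:
    """Page-Hinkley test for detecting an increase in a signal's mean.

    A minimal sketch, assuming illustrative `delta` and `lam` values.
    Feed the negated average log-likelihood so that a likelihood drop
    appears as an increase in the monitored signal.
    """

    def __init__(self, delta=0.005, lam=20.0):
        self.delta, self.lam = delta, lam
        self.n = 0
        self.mean = 0.0    # running mean of the signal
        self.m = 0.0       # cumulative deviation from the running mean
        self.m_min = 0.0   # smallest cumulative deviation seen so far

    def update(self, x):
        """Feed one observation; return True if drift is signalled."""
        self.n += 1
        self.mean += (x - self.mean) / self.n
        self.m += x - self.mean - self.delta
        self.m_min = min(self.m_min, self.m)
        return self.m - self.m_min > self.lam
```

Fed a stream whose mean jumps from 0.0 to 1.0, the detector stays silent before the jump and fires some steps after it, once the cumulative deviation exceeds the threshold.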

Relevance: 60.00%

Publisher:

Abstract:

The rapid evolution experienced in recent years by Internet technologies has stimulated the proliferation of heterogeneous software resources in most scientific disciplines, especially in bioinformatics. In most cases, the current trend is to publish these resources as services freely available over the Internet, using technologies and design patterns defined for the implementation of Service-Oriented Architectures (SOA). Combining multiple services within the same workflow opens the possibility of creating more complex and useful applications. Integrating such services raises great challenges, from both a theoretical and a practical point of view, such as locating and accessing the available resources or orchestrating them. This PhD thesis deals with the problem of identifying, locating, classifying, and accessing the informatics resources available over the Internet. To this end, a generic model has been defined for building indexes of software resources from information extracted automatically from articles in the scientific literature specialized in an area. The model consists of six phases, ranging from the selection of data sources to access to the indexes created, covering the identification, extraction, classification, and curation of the information related to the resources.

To verify the viability, suitability, and efficiency of the proposed model, it has been evaluated in two different scientific domains, Bioinformatics and Medical Informatics, producing two resource indexes named BioInformatics Resource Inventory (BIRI) and electronic-Medical Informatics Repository of Resources (e-MIR2), respectively. The results and evaluation of these systems are presented throughout this PhD thesis and have led to several scientific publications in JCR journals and international conferences. The potential impact and utility of this thesis could be of great relevance since, thanks to the generality of the proposed model, it could be applied to any scientific discipline. Some of the most relevant future research lines derived from this work are outlined in the final chapter.
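The six-phase model described above can be sketched as a simple pipeline. Everything below is hypothetical: the phase functions, the toy heuristics inside them, and the sample article records are invented for illustration and are not the thesis's implementation (BLAST is used only as a familiar example of a bioinformatics resource).

```python
from functools import reduce

# Toy corpus: article records whose text may mention software resources.
ARTICLES = [
    {"title": "A", "text": "We present BLAST, a server for sequence search."},
    {"title": "B", "text": "No tools are described here."},
]

def select_sources(arts):   # Phase 1: select data sources with usable text
    return [a for a in arts if a["text"]]

def identify(arts):         # Phase 2: identify resource mentions (toy rule)
    return [a for a in arts if "server" in a["text"] or "database" in a["text"]]

def extract(arts):          # Phase 3: extract a name per mention (toy rule)
    return [{"name": a["text"].split(",")[0].split()[-1], "source": a["title"]}
            for a in arts]

def classify(recs):         # Phase 4: assign a category (toy: all "server")
    return [dict(r, category="server") for r in recs]

def curate(recs):           # Phase 5: curate by dropping duplicate names
    seen, out = set(), []
    for r in recs:
        if r["name"] not in seen:
            seen.add(r["name"])
            out.append(r)
    return out

def publish(recs):          # Phase 6: expose a name -> record index
    return {r["name"]: r for r in recs}

# Run the six phases in order over the corpus.
phases = [select_sources, identify, extract, classify, curate, publish]
index = reduce(lambda data, phase: phase(data), phases, ARTICLES)
```

Each phase maps a list of records to the next representation, so phases can be swapped or refined independently, which mirrors the model's claim of generality across disciplines.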

Relevance: 60.00%

Publisher:

Abstract:

It is where the building meets the terrain that the ground as a physical reality becomes an architectural quality. This thesis studies the ground plane, offering a critical review of architecture as a mechanism of design thinking. The analysis starts from the 1960s, when the break with the inheritance of the Modern Movement first became evident. Architecture then reached a turning point, and entirely new methodological attitudes toward the ground began to emerge: the classic meeting actions of placing, raising, or burying were gradually replaced by more complex operations such as folding, inclining, or hollowing out. Framed by the meetings, and mismatches, between the architectural object and the terrain, the ground is analyzed as an architectural strategy, seeking to show how its manipulation can be an effective tool for establishing specific relationships with the place. The capacity of the ground, as an architectural element, to explore and modify the characteristics of each environment makes this surface an efficient means of contextualization. The manipulation of the ground proposed here thus operates by transcoding the specific elements of each place, acting as an architectural strategy that relates the building to its context by modifying the formal particularities of that plane. Against the tendency to reduce architectural expression to a simple autonomous formal appearance, the manipulation of the ground plane is proposed as a mechanism for rooting the building in its place, emphasizing the earthbound condition of architecture: the ground plane is what binds the building, through gravity, to the earth's crust.

The aim is to highlight the mediating character of the ground in architecture, establishing common elements between different realities and enhancing the value of the ground as a tool that can transcode the environment, transforming its data into concrete architectural elements. In this process of translating information, the ground passes from being a passive object to an operative one, becoming an active part of the actions exerted upon it. The thesis also seeks to demonstrate that the key to the rapid evolution the ground has undergone as an architectural strategy in recent years owes much to the expansion of the ground into other arts, such as sculpture and, more specifically, land art. New disciplines then arise that propose an understanding of place in projects, developing an integral vision of the natural world and turning it into a living, connective tissue that relates the activities it supports. In Utzon and his geological platforms we also find the precursor of the importance later given to the ground plane in architecture: he initiated a certain critical attitude that advanced toward a more expressive architecture requiring new mechanisms to relate it to the ground supporting it, proposing with his platforms an infrastructural transformation of the ground. With his transcultural interpretation of archetypal Mayan, Chinese, and Japanese spatial structures, he enriched the architectural panorama, giving more value to the context, which came to be understood in a more complex way. Architectural projects have often become fertile territories of speculation on which to build architectural theory.

Within this context, four ground strategies are analyzed through the study of four architectural positions that are highly significant from the point of view of manipulating the ground plane and that construct an interesting design methodology with which to operate. The case studies are the Yokohama Passenger Terminal (1996-2002) by FOA, the Casa da Música (1999-2005) by OMA in Porto, the Jewish Memorial (1998-2005) in Berlin by Peter Eisenman, and the MAXXI Museum (1998-2009) by Zaha Hadid in Rome. Uncovering the rules, references, and methodologies each proposes reveals the principal positions regarding the project and its relationship with the place. The proposals presented here address a new way of understanding the ground, one that moved architecture toward new modes of meeting the terrain. They also make it possible to establish the main architectural contributions of the ground as an architectural strategy, contributions that have led to its reformulation. They open new ways of approaching architecture based on movement and functional flexibility, or on the superposition of flows of information and circulation; they propose new paths that blur the figure against the background; and they reinforce the idea of the ground as an infrastructural platform, already put forward by Utzon. In short, the thesis proposes exploring the surface of the ground as the most revealing element of the emergent forms of space.

Relevance: 60.00%

Publisher:

Abstract:

El atrio incorporado en los edificios ha sido un recurso espacial que tempranamente se difundió a nivel global, siendo adoptado por las distintas arquitecturas locales en gran parte del mundo. Su masificación estuvo favorecida primero por la rápida evolución de la tecnología del acero y el vidrio, a partir del siglo XIX, y en segundo termino por el posterior desarrollo del hormigón armado. Otro aspecto que explica tal aceptación en la arquitectura contemporánea, es de orden social y radica en la llamativa cavidad del espacio describiendo grandes dimensiones y favoreciendo con ello, el desarrollo de una multiplicidad de usos en su interior que antes eran impensados. Al interior del atrio, la luz natural es clave en las múltiples vivencias que alberga y sea tal vez la condición ambiental más valorada, ya que entrega una sensación de bienestar al conectarnos visualmente con el ambiente natural. Por esta razón de acuerdo al método hipotético deductivo, se evaluaron los efectos de la configuración geométrica, la cubierta y la orientación en el desempeño de la iluminación natural en la planta baja, a partir un modelo extraído desde el inventario de los edificios atrio construidos en Santiago de Chile, en los últimos 30 años que fue desarrollado en el capitulo 2. El análisis cuantitativo de los edificios inventariados se elaboró en el capítulo 3, considerando las dimensiones de los atrios. Simultáneamente fueron clasificados los aspectos constructivos, los materiales y las características del ambiente interior de cada edificio. En esta etapa además, fueron identificadas las variables de estudio de las proporciones geométricas de la cavidad del atrio con los coeficientes de aspecto de las proporciones, en planta (PAR), en corte (SAR) y de la cavidad según (WI), (AR) y (RI). Del análisis de todos estos parámetros se extrajo el modelo de prueba. 
El enfoque del estudio del capítulo 4 fue la iluminación natural, se revisaron los conceptos y el comportamiento en el atrio, a partir de un modelo físico construido a escala para registro de la iluminancia bajo cielo soleado de la ciudad. Más adelante se construyó el modelo en ambiente virtual, relacionando las variables determinadas por la geometría de la cavidad y el cerramiento superior; examinándose de esta manera distintas transparencias, proporciones de apertura, en definitiva se evaluó un progresivo cerramiento de las aberturas, verificando el ingreso de la luz y disponibilidad a nivel de piso con la finalidad, de proveer lineamientos útiles en una primera etapa del diseño arquitectónico. Para el análisis de la iluminación natural se revisaron diferentes métodos de cálculo con el propósito de evaluar los niveles de iluminancia en un plano horizontal al interior del atrio. El primero de ellos fue el Factor de Luz Día (FLD) que corresponde, a la proporción de la iluminancia en un punto de evaluación interior respecto, la cantidad proveniente del exterior bajo cielo nublado, a partir de la cual se obtuvo resultados que revelaron la alta luminosidad del cielo nublado de la ciudad. Además fueron evaluadas las recientes métricas dinámicas que dan cuenta, de la cantidad de horas en las cuales de acuerdo a los extensos registros meteorológico de la ciudad, permitieron obtener el porcentajes de horas dentro de las cuales se cumplió el estándar de iluminancia requerido, llamado autonomía lumínica (DA) o mejor aún se permanece dentro de un rango de comodidad visual en el interior del atrio referido a la iluminancia diurna útil (UDI). En el Capítulo 5 se exponen los criterios aplicados al modelo de estudio y cada una de las variantes de análisis, además se profundizó en los antecedentes y procedencia de las fuentes de los registros climáticos utilizados en las simulaciones llevadas a cabo en el programa Daysim operado por Radiance. 
Que permitieron evaluar el desempeño lumínico y la precisión, de cada uno de los resultados para comprobar la disponibilidad de iluminación natural a través de una matriz. En una etapa posterior se discutieron los resultados, mediante la comparación de los datos logrados según cada una de las metodologías de simulación aplicada. Finalmente se expusieron las conclusiones y futuras lineas de trabajo, las primeras respecto el dominio del atrio de cuatro caras, la incidencia del control de cerramiento de la cubierta y la relación establecida con la altura; indicando en lo específico que las mediciones de iluminancia bajo el cielo soleado de verano, permitieron aclarar, el uso de la herramienta de simulación y método basado en el clima local, que debido a su reciente desarrollo, orienta a futuras líneas de trabajo profundizando en la evaluación dinámica de la iluminancia contrastado con monitorización de casos. ABSTRACT Atriums incorporated into buildings have been a spatial resource that quickly spread throughout the globe, being adopted by several local architecture methods in several places. Their widespread increase was highly favored, in the first place, with the rapid evolution of steel and glass technologies since the nineteen century, and, in second place, by the following development of reinforced concrete. Another issue that explains this success into contemporary architecture is associated with the social approach, and it resides in the impressive cavity that describes vast dimensions, allowing the development of multiple uses in its interior that had never been considered before. Inside the atrium, daylight it is a key element in the many experiences that involves and it is possibly the most relevant environmental factor, since it radiates a feeling of well-being by uniting us visually with the natural environment. 
It is because of this reason that, following the hypothetical deductive method, the effects in the performance of daylight on the floor plan were evaluated considering the geometric configuration, the deck and orientation factors. This study was based in a model withdrawn from the inventory of atrium buildings that were constructed in Santiago de Chile during the past thirty years, which will be explained later in chapter 2. The quantitative analysis of the inventory of those buildings was elaborated in chapter 3, considering the dimensions of the atriums. Simultaneously, several features such as construction aspects, materials and environmental qualities were identified inside of each building. At this stage, it were identified the variables of the geometric proportions of the atrium’s cavity with the plan aspect ratio of proportions in first plan (PAR), in section (SAR) and cavity according to well index (WI), aspect ratio (AR) and room index (RI). An experimental model was obtained from the analysis of all the mentioned parameters. The main focus of the study developed in chapter 4 is daylight. The atrium’s concept and behavior were analyzed from a physical model built under scale to register the illuminances under clear, sunny sky of the city. Later on, this physical model was built in a virtual environment, connecting the variables determined by the geometry of the cavity and the superior enclosure, allowing the examination of transparencies and opening proportions. To summarize, this stage consisted on evaluating a progressive enclosure of the openings, checking the access of natural light and its availability at the atrium floor, in an effort to provide useful guidelines during the first stage of the architectural design. For the analysis of natural lighting, several calculations methods were used in order to determine the levels of illuminances in a horizontal plane inside of the atrium. 
The first of these methods is the Daylight Factor (DF), defined as the ratio of the illuminance at an interior evaluation point to the simultaneous exterior illuminance under an overcast sky. Results determined that the overcast sky of the city has high levels of luminosity. In addition, the more recent dynamic metrics were evaluated, which reflect the number of hours in which, according to the meteorological records of the city's climate, a target illuminance is met – a measure called Daylight Autonomy (DA). This is complemented by the proportion of hours in which the results stay within the range of visual comfort inside the atrium, referred to as Useful Daylight Illuminance (UDI). Chapter 5 presents the criteria applied to the study model and to each of the variants of the analysis. Moreover, the climate records used for the simulations – carried out in the Daysim program, which is driven by Radiance – are detailed. These simulations allowed the observation of the daylight performance and the accuracy of each of the results, confirming the availability of natural light through a matrix. In a later stage, the results were discussed by comparing the data collected with each of the simulation methods used. Finally, conclusions and further lines of work are presented, covering the behaviour of the four-sided atrium and the effect of controlling the roof enclosure. Specifically, the measurements of daylight under the clear, sunny summer sky helped clarify the use of the simulation tool and the climate-based method. This method opens new lines of work deepening the dynamic analysis of daylight contrasted with the monitoring of built cases.
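As a rough illustration of the three metrics discussed above (DF, DA and UDI), the following sketch computes them from hourly illuminance values. The 300 lx autonomy threshold and the 100–2000 lx comfort band are common conventions assumed here for illustration, not values taken from the thesis itself.

```python
def daylight_factor(e_interior, e_exterior):
    """Daylight Factor: interior/exterior illuminance ratio under an
    overcast sky, expressed as a percentage."""
    return 100.0 * e_interior / e_exterior

def daylight_autonomy(hourly_lux, threshold=300.0):
    """Daylight Autonomy: fraction of hours meeting the target illuminance."""
    return sum(1 for e in hourly_lux if e >= threshold) / len(hourly_lux)

def useful_daylight_illuminance(hourly_lux, low=100.0, high=2000.0):
    """Useful Daylight Illuminance: fraction of hours inside the
    visually comfortable range (neither too dim nor glare-prone)."""
    return sum(1 for e in hourly_lux if low <= e <= high) / len(hourly_lux)
```

In a climate-based workflow such as the one described, the hourly illuminance list would come from an annual Daysim/Radiance simulation rather than a handful of sample values.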

Relevância:

60.00%

Publicador:

Resumo:

Spain took up the technique of reinforced concrete more than two decades after France or Germany. By 1890, reinforced concrete structures of considerable size and complexity were already being built in Europe. In Spain it was not until 1893 that the first reinforced concrete work appeared: a simple open-air tank in Puigverd (Lérida), executed by the military engineer Francesc Macià under a Monier patent. In 1898, under the guidance of Hennebique, construction began on the first two buildings with reinforced concrete structures in Spain. They were two isolated works, with designs imported from France, but they were necessary to introduce the material definitively. In parallel, in Paris, most of the pavilions of the 1900 Universal Exposition were being built in reinforced concrete. At the turn of the century, reinforced concrete construction had already reached design and technical maturity in Europe. Despite the late start, the works executed show that in a short period, between 1901 and 1906, Spain reached practically the same technical and constructive level as the other countries that had pioneered the use of reinforced concrete. The development and implantation of a construction technique is not a linear process, and many factors intervene. Patents were of great importance in the initial development of reinforced concrete: they offered a product that worked. The first reinforced concrete structures were not calculated and built according to regulations; they were bought. And the result of that "purchase" was, in most cases, satisfactory. Patents sold structural systems whose performance was corroborated by the experience and expertise of their inventors.
This research starts from the hypothesis that the patents on cement and reinforced concrete filed in Spain between 1884 and 1906 were one of the factors that gave Spanish technicians and companies a solid constructive expertise in the use of reinforced concrete. This work studies the process of introduction of reinforced concrete in Spain from a fundamentally technical perspective, incorporating patents as one of the constructive reasons that explain its rapid evolution and generalisation in a brief period of time: 1901-1906. Within this process, one of the figures considered fundamental in the early years of reinforced concrete in Spain, the engineer Juan Manuel de Zafra y Estevan, is contextualised and analysed. This thesis analyses reinforced concrete patents from a statistical and a constructive point of view. From both perspectives the starting hypothesis of this research is verified, concluding that patents were one of the constructive reasons for the evolution of reinforced concrete in Spain and for its rapid implantation. ABSTRACT Spain incorporated the reinforced concrete technique more than two decades after France and Germany. In central Europe reinforced concrete structures of considerable size and complexity were being built in 1890, while in Spain it was not until 1893 that the first work, a simple open-air water tank, was implemented in Puigverd (Lleida) by the military engineer Francesc Macià with a Monier patent. In 1898 the construction of the first two buildings with a reinforced concrete structure in Spain started, under the guidance of Hennebique. They were two isolated cases with projects imported from France, but they played a key role in definitively introducing the material to Spain. In parallel, in Paris, most of the pavilions of the 1900 World Expo were being built in reinforced concrete.
At the turn of the century reinforced concrete buildings had reached maturity both as a technology and as a design practice. Despite the late adoption of the material, the works carried out in the very short period between 1901 and 1906 clearly show that Spain reached practically the same technical and constructive level as the other pioneering countries in the use of reinforced concrete. The development and implementation of a constructive technique is never a linear process; there are many factors involved. Patents offered a product that worked. Initial reinforced concrete structures were not calculated and built according to regulations; they were bought. And this purchase was in most cases satisfactory for the required use. Patents sold structural systems whose performance was supported by the experience and expertise of their inventors. The hypothesis of this research is that the cement and concrete patents registered in Spain between 1884 and 1906 were one of the factors that provided Spanish technicians and companies with a solid constructive expertise in the use of reinforced concrete. This investigation studies the introduction of reinforced concrete to Spain from a predominantly technical perspective, incorporating patents as one of the constructive reasons for its rapid evolution and spread in such a short period of time: 1901-1906. Along the way, the role of the engineer J. M. de Zafra, generally considered a key agent in the initial years of reinforced concrete in Spain, is contextualized and analyzed. This dissertation analyzes reinforced concrete patents from a statistical and constructive point of view. From both perspectives the hypothesis of this research is verified, concluding that patents were one of the constructive reasons for the development of reinforced concrete in Spain.

Relevância:

60.00%

Publicador:

Resumo:

Plasmodium falciparum, the agent of malignant malaria, is one of mankind’s most severe scourges. Efforts to develop preventive vaccines or remedial drugs are handicapped by the parasite’s rapid evolution of drug resistance and protective antigens. We examine 25 DNA sequences of the gene coding for the highly polymorphic antigenic circumsporozoite protein. We observe total absence of silent nucleotide variation in the two nonrepeated regions of the gene. We propose that this absence reflects a recent origin (within several thousand years) of the world populations of P. falciparum from a single individual; the amino acid polymorphisms observed in these nonrepeat regions would result from strong natural selection. Analysis of these polymorphisms indicates that: (i) the incidence of recombination events does not increase with nucleotide distance; (ii) the strength of linkage disequilibrium between nucleotides is also independent of distance; and (iii) haplotypes in the two nonrepeat regions are correlated with one another, but not with the central repeat region they span. We propose two hypotheses: (i) variation in the highly polymorphic central repeat region arises by mitotic intragenic recombination, and (ii) the population structure of P. falciparum is clonal—a state of affairs that persists in spite of the necessary stage of physiological sexuality that the parasite must sustain in the mosquito vector to complete its life cycle.

Relevância:

60.00%

Publicador:

Resumo:

The English language and the Internet, both separately and taken together, are nowadays well acknowledged as powerful forces which influence and affect the lexico-grammatical characteristics of other languages world-wide. In fact, many authors like Crystal (2004) have pointed out the emergence of the so-called Netspeak, that is, the language used on the Net or World Wide Web; as Crystal himself (2004: 19) puts it, 'a type of language displaying features that are unique to the Internet [...] arising out of its character as a medium which is electronic, global and interactive'. This 'language', however, may be understood in two different ways: either as an adaptation of the English language proper to Internet requirements and purposes, or as a new, rapidly changing and developing language resulting from the rapid evolution or adaptation to Internet requirements of almost all world languages, for which English is a trendsetter. If the second and probably most plausible interpretation is adopted, three salient features of 'Netspeak' stand out: (a) the rapid expansion of all its new linguistic developments thanks to the Internet itself, which may lead to the generalisation and widespread acceptance of new words, coinages or meanings hundreds of times faster than was the case with the printed media; (b) as noted above, the visible influence of English, the most prevalent language on the Internet; and, consequently, (c) a tendency of this new language to reduce the 'distance' between English and other languages, as well as the ignorance of the former by speakers of other languages, since the 'Netspeak' version of the latter adopts grammatical, syntactic and lexical features of English. Thus, linguistic differences may even disappear when code-switching and/or borrowing occurs, as whole fragments of English appear in other language contexts.
As a consequence of the new situation, an ideal context appears for interlanguage or multilingual word formation to thrive: puns, blends, compounds and word creativity in general find in the web the ideal place to gain rapid acceptance world-wide, as a result of fashion, coincidence, or sheer merit of the new linguistic proposals.

Relevância:

60.00%

Publicador:

Resumo:

Master's thesis in Epidemiology, Universidade de Lisboa, Faculdade de Medicina, 2015

Relevância:

60.00%

Publicador:

Resumo:

There has been much recent interest in the origin of silicic magmas at spreading centres away from any possible influence of continental crust. Here we present major and trace element data for 29 glasses (and 55 whole-rocks) sampled from a 40 km segment of the South East Rift in the Manus Basin that span the full compositional continuum from basalt to rhyolite (50-75 wt % SiO2). The glass data are accompanied by Sr-Nd-Pb, O and U-Th-Ra isotope data for selected samples. These overlap the ranges for published data from this part of the Manus Basin. Limited increases in Cl/K ratios with increasing SiO2, La-SiO2 and Yb-SiO2 relationships, and the oxygen isotope data rule out models in which the more silicic lavas result from partial melting of altered oceanic crust or altered oceanic gabbros. Rather, the data form a coherent array that is suggestive of closed-system fractional crystallization and this is well simulated by MELTS models run at 0.2 GPa and QFM (quartz-fayalite-magnetite buffer) with 1 wt % H2O, using a parental magma chosen from the basaltic glasses. Although some assimilation of altered oceanic crust or gabbro cannot be completely ruled out, there is no evidence that this plays an important role in the origin of the silicic lavas. The U-series disequilibria are dominated by 238U and 226Ra excesses that limit the timescale of differentiation to less than a few millennia. Overall, the data point to rapid evolution in relatively small magma lenses located near the base of thick oceanic crust; we speculate that this was coupled with relatively low rates of basaltic recharge. A similar model may be applicable to the generation of silicic magmas elsewhere in the ocean basins.

Relevância:

60.00%

Publicador:

Resumo:

Alignments of homologous genomic sequences are widely used to identify functional genetic elements and study their evolution. Most studies tacitly equate homology of functional elements with sequence homology. This assumption is violated by the phenomenon of turnover, in which functionally equivalent elements reside at locations that are nonorthologous at the sequence level. Turnover has been demonstrated previously for transcription-factor-binding sites. Here, we show that transcription start sites of equivalent genes do not always reside at equivalent locations in the human and mouse genomes. We also identify two types of partial turnover, illustrating evolutionary pathways that could lead to complete turnover. These findings suggest that the signals encoding transcription start sites are highly flexible and evolvable, and have cautionary implications for the use of sequence-level conservation to detect gene regulatory elements.

Relevância:

60.00%

Publicador:

Resumo:

The aim of this thesis is to examine the approach of the Parti Communiste Français (from 1956 to 1982) to the emergence of new strata of salaried 'intellectual workers' (technicians, engineers, low to middle managers in industry and commerce, scientific researchers, teachers etc), paralleled by the gradual diminution of the traditional industrial working class which forms the core of the Party's support base. This examination is carried out in the context of the debate in France (initiated in the 1950s by social theorists of the Left) on the class membership and role of these strata. The reason for the emergence of such a debate is that in a society given to both a rapid evolution of its social structure and an increased polarisation between Left and Right, a precise knowledge of the objective and subjective determinations of the new strata would enable parties of the Left to make proper distinctions between potential allies and adversaries. The thesis posits the view that the PCF has failed to make correct distinctions between its potential allies and adversaries and has thus pursued unsuccessful alliance strategies. The thesis contributes towards a scientifically based understanding of one of the reasons governing the PCF's steady decline since the 1950s.

Relevância:

60.00%

Publicador:

Resumo:

Various nondestructive testing (NDT) technologies for construction and performance monitoring have been studied for decades. Recently, the rapid evolution of wireless sensor network (WSN) technologies has enabled the development of sensors that can be embedded in concrete to monitor the structural health of infrastructure. Such sensors can be buried inside concrete, where they can collect and report valuable volumetric data related to the health of a structure during and/or after construction. A wireless embedded sensor monitoring system is also a promising solution for decreasing the high installation and maintenance costs of conventional wire-based monitoring systems. Wireless monitoring sensors need to operate for a long time, but sensor batteries have a finite lifetime. Therefore, in order to enable a long operational life for wireless sensors, novel wireless powering methods, which can charge the sensors' rechargeable batteries wirelessly, need to be developed. The optimization of RF wireless powering of sensors embedded in concrete is studied here. First, our analytical results focus on calculating the transmission loss and propagation loss of electromagnetic waves penetrating into plain concrete at different humidity conditions for various frequencies. This analysis leads to the identification of an optimum frequency range within 20-80 MHz that is validated through full-wave electromagnetic simulations. Second, the effects of various reinforcing bar configurations on the efficiency of wireless powering are investigated. Specifically, the effects of the following factors are studied: rebar type, rebar period, rebar radius, depth inside the concrete, and offset placement. This analysis leads to the identification of the 902-928 MHz ISM band as the optimum power transmission frequency range for sensors embedded in reinforced concrete, since antennas working in this band are less sensitive to the effects of varying humidity as well as rebar configurations.
Finally, optimized rectennas are designed for receiving and/or harvesting power in order to charge the rechargeable batteries of the embedded sensors. Such optimized wireless powering systems exhibit significantly higher efficiencies than conventional RF wireless powering systems for sensors embedded in plain or reinforced concrete.
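The frequency trade-off underlying the abstract above – electromagnetic waves attenuate more in concrete as frequency and moisture-driven conductivity rise – can be illustrated with the standard plane-wave attenuation constant for a lossy dielectric. The permittivity and conductivity values used in the test are generic illustrative figures for concrete, not the humidity-dependent Debye parameters used in the study.

```python
import math

def attenuation_db_per_m(freq_hz, eps_r, sigma):
    """Plane-wave attenuation in a lossy dielectric such as concrete,
    from the standard expression
    alpha = omega * sqrt((mu*eps/2) * (sqrt(1 + (sigma/(omega*eps))**2) - 1)),
    converted from nepers/m to dB/m."""
    eps0 = 8.854e-12          # vacuum permittivity, F/m
    mu0 = 4e-7 * math.pi      # vacuum permeability, H/m (non-magnetic medium)
    omega = 2.0 * math.pi * freq_hz
    eps = eps_r * eps0
    loss_tangent = sigma / (omega * eps)
    alpha = omega * math.sqrt(
        mu0 * eps / 2.0 * (math.sqrt(1.0 + loss_tangent ** 2) - 1.0)
    )                         # nepers per metre
    return 8.686 * alpha      # decibels per metre
```

Sweeping this function over frequency shows loss growing with frequency and with conductivity (i.e. with humidity), which is consistent with the low-band (20-80 MHz) optimum the analysis identifies for plain concrete.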

Relevância:

60.00%

Publicador:

Resumo:

A wide range of non-destructive testing (NDT) methods for monitoring the health of concrete structures has been studied for several years. The recent rapid evolution of wireless sensor network (WSN) technologies has resulted in the development of sensing elements that can be embedded in concrete to monitor the health of infrastructure and to collect and report valuable related data. Such a monitoring system can potentially decrease the long installation time and high maintenance cost associated with wired monitoring systems. The monitoring sensors need to operate for a long period of time, but sensor batteries have a finite life span; hence, novel wireless powering methods must be devised. The optimization of wireless power transfer via Strongly Coupled Magnetic Resonance (SCMR) to sensors embedded in concrete is studied here. First, we analytically derive the optimal geometric parameters for the transmission of power in air. This leads to the identification of the local and global optimization parameters and conditions, which were validated through electromagnetic simulations. Second, the optimum conditions were employed in a model for the propagation of energy through plain and reinforced concrete at different humidity conditions and frequencies, using an extended Debye model. This analysis leads to the conclusion, also validated through electromagnetic simulations, that SCMR can be used to efficiently power sensors in plain and reinforced concrete at different humidity levels and depths. The optimization of wireless power transmission via SCMR to Wearable and Implantable Medical Devices (WIMDs) is also explored. The optimum conditions from the analysis were used in a model for the propagation of energy through different human tissues. Electromagnetic simulations show that SCMR can be used to efficiently transfer power to sensors in human tissue without overheating, an important constraint since excessive power might result in overheating of the tissue.
Standard SCMR is sensitive to misalignment; both 2-loop and 3-loop SCMR designs with misalignment-insensitive performance are presented. Power transfer efficiencies above 50% were achieved over the complete misalignment range of 0°-90°, dramatically better than typical SCMR, whose efficiencies fall below 10% in extreme misalignment topologies.
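As background for the efficiency figures quoted above, here is a minimal sketch of the textbook maximum efficiency of a two-coil resonant inductive link, based on the standard figure of merit U = k·sqrt(Q1·Q2). It is a generic model, not the dissertation's specific 2-/3-loop misalignment-insensitive design.

```python
import math

def link_efficiency(k, q1, q2):
    """Maximum power-transfer efficiency of a two-coil resonant inductive
    link: eta = U^2 / (1 + sqrt(1 + U^2))^2 with U = k * sqrt(Q1 * Q2)."""
    u = k * math.sqrt(q1 * q2)
    return u ** 2 / (1.0 + math.sqrt(1.0 + u ** 2)) ** 2
```

With high-Q resonators (Q on the order of 100), even a weak coupling of k ≈ 0.1 yields efficiencies above 80%, which is why SCMR works at a distance; as misalignment reduces the effective k, the efficiency collapses, consistent with the sub-10% extreme-misalignment figures cited for standard SCMR.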

Relevância:

60.00%

Publicador:

Resumo:

We are barely eight years from the 21st century, and information gains more prominence and importance by the day; the accelerating pace of technological change and scientific discovery make the librarian a transmitter of innovation and communication who must operate in a competitive world, where he or she needs to be assertive, dynamic and capable of absorbing this entire technological and scientific accumulation in order to survive as a professional in the future. If we look back five years, we realise that Library Science is one of the disciplines that has evolved the most in terms related to automated information management. Words such as scanners, videodisc, optical character recognition, CD-ROM, CD-I (Compact Disc Interactive), etc., now form part of the library-science vocabulary incorporated by the professionals who work in the complicated world of information.

Relevância:

60.00%

Publicador:

Resumo:

This paper discusses an ongoing creative and conceptual collaboration between three authors, in which poetry has been approached as a way of exploring how lived experience and language are being transformed by the rapid evolution of virtual reality and its lexicon. We recognise, via Bakhtin, that language is always shared, in-use and redolent with multiple meanings. We acknowledge that we have written within a metaphorical space where we, as avatars of ourselves, use word processing software loaded with its own metaphors of page and print. The poems we have collaborated on have interrupted the increasing invisibility of metaphors such as ‘cloud’ and ‘screen’ as applied to technology, by working in the disjunction between metaphor and what it describes. We now reflect on the collaborative process and on the influence of technology on our practice, whilst maintaining a collaborative strategy. The paper explores the poetics of longing (Stewart) and Baudrillard’s simulacra and argues that concerns over remembering the real and the effects of nostalgia are offset by the generative potential of collaborative writing and its surprising forms of heteroglossia, which have exciting possibilities for creative practice.