933 results for Analog-to-digital converter (ADC)
Abstract:
Identity is a recurrent research interest in current sociolinguistics and it is also of primary interest in digital discourse studies. Identity construction is closely related to stance and style (Eckert 2008; Jaffe 2009), which are fundamental concepts for understanding language use and its social meanings in the case of social media users from Malaga. As the specific social meanings of a set of dialect features constitute a style, this style and the social (and technological) context in which the variants are used determine the meanings that are actually associated with each variant. Hence, every variant has its own indexical field covering any number of potential meanings. The Spanish spoken in Malaga, like Andalusian Spanish in general, was often considered in the past an incorrect, low-prestige variety of Spanish, strongly associated with the poor, rural, backward South of Spain. This southern Spanish variety is easily recognised by its innovative phonetic features, which diverge from the national standard. This study examines several of these phonetic dialect features, which users from Malaga purposefully employ (in a textualised form) on social media for identity construction. This identity construction is analysed through interactional and ethnographic methods: a perception task and an imitation task served as key data and were supplemented by answers to a series of open questions. Further data stem from visual, multimodal elements (e.g. images, photos, videos) posted by users from the city of Malaga. The program TAMS Analyzer was used for data codification and analysis. Results show that certain features that are considered rural and old-fashioned in spoken language acquire new meanings on social media, namely those of urbanity and fashion. Moreover, these features, when used online, are associated with hipsters. That is, the “cool” social media index the “coolness” of the dialect features in question and, thus, mediatisation makes their indexical fields even more multi-layered and dynamic. Social media users from Malaga performatively employ these stylised dialect features to project a hipster identity and certain related stances.
Abstract:
This paper describes the development of an Advanced Speech Communication System for Deaf People and its field evaluation in a real application domain: the renewal of a driver's license. The system is composed of two modules. The first is a Spanish into Spanish Sign Language (LSE: Lengua de Signos Española) translation module made up of a speech recognizer, a natural language translator (for converting a word sequence into a sequence of signs), and a 3D avatar animation module (for playing back the signs). The second module is a spoken Spanish generator from sign-writing, composed of a visual interface (for specifying a sequence of signs), a language translator (for generating the sequence of words in Spanish), and finally a text-to-speech converter. For language translation, the system integrates three technologies: an example-based strategy, a rule-based translation method and a statistical translator. This paper also includes a detailed description of the evaluation carried out at the Local Traffic Office in the city of Toledo (Spain), involving real government employees and deaf people. This evaluation includes objective measurements from the system and subjective information from questionnaires. Finally, the paper reports an analysis of the main problems and a discussion of possible solutions.
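The abstract names three translation technologies but gives no implementation detail. As a reading aid only, the following minimal Python sketch shows one plausible, purely hypothetical arrangement of those approaches (a cascade with a confidence threshold); the example sentences, lexicon entries, gloss names and threshold are invented and are not taken from the paper's system.

```python
# Purely illustrative sketch (not the paper's actual code) of a cascade that
# tries an example-based match first, then a simple rule-based mapping, and
# finally a (stubbed) statistical translator. All data below are invented.

import difflib

# Tiny hypothetical example memory: Spanish sentence -> sequence of LSE glosses.
EXAMPLES = {
    "buenos dias": ["SALUDAR"],
    "necesito renovar el carnet de conducir": ["YO", "CARNET_CONDUCIR", "RENOVAR"],
}

# Minimal hypothetical rule lexicon for a word-by-word fallback.
LEXICON = {"renovar": "RENOVAR", "carnet": "CARNET_CONDUCIR", "foto": "FOTO"}


def example_based(sentence, threshold=0.85):
    """Return the signs of the closest stored example if it is similar enough."""
    match = difflib.get_close_matches(sentence, list(EXAMPLES), n=1, cutoff=threshold)
    return EXAMPLES[match[0]] if match else None


def rule_based(sentence):
    """Very naive word-to-sign mapping; real rules would handle reordering, etc."""
    signs = [LEXICON[w] for w in sentence.split() if w in LEXICON]
    return signs or None


def statistical(sentence):
    """Placeholder for a trained statistical (or neural) translation model."""
    return ["DESCONOCIDO"]


def translate_to_signs(sentence):
    sentence = sentence.lower().strip()
    for strategy in (example_based, rule_based, statistical):
        signs = strategy(sentence)
        if signs:
            return signs


if __name__ == "__main__":
    print(translate_to_signs("Necesito renovar el carnet de conducir"))
```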
Abstract:
Current nanometer technologies are subject to several adverse effects that seriously impact the yield and performance of integrated circuits, such as within-die parameter uncertainty, varying workload conditions, aging and temperature. Monitoring, calibration and dynamic adaptation have emerged as promising solutions to these issues, and many kinds of monitors have been presented recently. In this scenario, where systems with hundreds of monitors of different types have been proposed, light-weight monitoring networks have become essential. In this work we present a light-weight network architecture based on sharing the digitization resources of nodes that require time-to-digital conversion. Our proposal employs a single-wire interface, shared among all the nodes in the network, and quantizes the time domain to perform access multiplexing and transmit the information. It achieves a 16% improvement in area and power consumption compared to traditional approaches.
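As an illustration of the idea described above (not the paper's implementation), the sketch below models in Python a shared single wire on which each node encodes its digitized value as a delay, so the controller recovers the readings, already ordered, from pulse arrival times. The time quantum, node values and class names are assumptions made for the example.

```python
# Minimal behavioural sketch of time-domain access multiplexing over a single
# shared wire: each node pulses the wire after a delay proportional to its
# value, and the controller reads back an ordered list of (node, value) pairs.

from dataclasses import dataclass

T_LSB = 1e-9  # assumed time quantum (1 ns per code), illustrative only


@dataclass
class MonitorNode:
    node_id: int
    value: int  # digitized measurement, e.g. the output of a time-to-digital stage

    def pulse_time(self) -> float:
        # The node pulls the shared wire after a delay proportional to its value.
        return self.value * T_LSB


def controller_readout(nodes):
    """Observe pulse arrival times on the shared wire.

    Because delay encodes magnitude, events arrive already sorted from the
    smallest to the largest value; the controller can stop early if it only
    cares about the extreme values.
    """
    events = sorted((n.pulse_time(), n.node_id, n.value) for n in nodes)
    return [(node_id, value, t) for t, node_id, value in events]


if __name__ == "__main__":
    sensors = [MonitorNode(0, 730), MonitorNode(1, 512), MonitorNode(2, 901)]
    for node_id, value, t in controller_readout(sensors):
        print(f"node {node_id}: code {value} observed at {t * 1e9:.0f} ns")
```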
Abstract:
La instalación de Infraestructuras Comunes de Telecomunicación (IICCTT) en el interior de las edificaciones para el acceso a los servicios de telecomunicación facilitó la incorporación a las viviendas de las nuevas tecnologías de forma económica y transparente para los usuarios. Actualmente, todos los edificios de nueva construcción deben presentar un proyecto ICT firmado por un Ingeniero Técnico de Telecomunicación de la especialidad correspondiente o un Ingeniero de Telecomunicación. La legislación que las regula afecta a todo tipo de viviendas con independencia del poder adquisitivo del comprador, y contribuye de manera decisiva a que disminuyan a corto y medio plazo las desigualdades sociales en lo relativo al acceso a servicios de telecomunicación tales como telefonía, Internet, telecomunicación por cable, radiodifusión sonora y televisión analógica, digital, terrenal o por satélite, etc.. Desde 1997, el Colegio Oficial de Ingenieros de Telecomunicación junto con otras organizaciones públicas y privadas ha participado en la elaboración de la normativa aplicable a las Infraestructuras Comunes de Telecomunicación, dando lugar al actual decreto, el Real Decreto 346/2011, de 11 de Marzo. El propósito general de este proyecto es diseñar una red Wi-Fi a partir de las canalizaciones e instalaciones del proyecto ICT de un conjunto de viviendas unifamiliares, para que todas ellas dispongan de conexión a internet de forma inalámbrica. Para llevar a cabo este diseño, se ha realizado un estudio de las características del estándar IEEE 802.11, conocido como Wi-Fi, analizando las posibilidades de comunicación inalámbrica que ofrece, así como las limitaciones que presenta en la actualidad. Se ha analizado el proyecto ICT del conjunto de viviendas, estudiando la viabilidad de utilizar sus instalaciones para implementar la red Wi-Fi, añadiendo tanto las canalizaciones como los dispositivos comerciales necesarios para llevar a cabo dicha implementación. Además, se ha estudiado la posibilidad de integrar la red Wi-Fi utilizando el cableado de televisión de la propia ICT. Por último, se ha estudiado la gran importancia que al Hogar Digital se da en el Real Decreto 346/2011, de 11 de marzo, por el que se aprueba el Reglamento regulador de las Infraestructuras Comunes de Telecomunicaciones para el acceso a los servicios de telecomunicación en el interior de las edificaciones, presentando los aspectos fundamentales que se persiguen con la domotización de la vivienda como mejora de vida de sus habitantes. Abstract The installation of Telecommunications Common Infrastructures (TCIs, in Spanish Infraestructuras Comunes de Telecomunicación –IICCTT-) in the buildings, in order to gain access to telecommunications services, facilitated the incorporation into the houses of new technologies in an economical and transparent way for users. Nowadays, every new construction building must have a TCI project signed by a Telecommunications Engineer or a Technical Telecommunications Engineer with the appropriate specialization. The legislation that regulates TCIs affects every kind of houses, independently of the buyer´s purchasing power, and contributes decisively to decrease in short and medium terms the social inequalities concerning the access to the telecommunication services, such as telephony, Internet, wired telecommunications, audible broadcasting and digital, analogical, land, satellite television, etc.. 
Since 1997, the Official College of Telecommunications Engineers, together with other public and private organizations, has participated in drawing up the regulations applicable to TCIs, giving rise to the current decree, Royal Decree 346/2011 of 11 March. The general purpose of this project is to design a Wi-Fi network based on the ducting and installations of the TCI project of a housing development, so that every house is provided with a wireless connection to the Internet. To carry out this design, the characteristics of the IEEE 802.11 standard, known as Wi-Fi, have been studied, analyzing the wireless-communication possibilities that it offers as well as the constraints that it currently presents. The TCI project has been analyzed, studying the feasibility of using its installations to implement the Wi-Fi network and adding the ducting and commercial devices required for that implementation. In addition, the possibility of integrating the Wi-Fi network over the television cabling of the TCI itself has been investigated. Finally, the study examines the great importance given to the Digital Home in Royal Decree 346/2011 of 11 March, which approves the Regulations governing Telecommunications Common Infrastructures for access to telecommunication services inside buildings, presenting the essential goals pursued with home automation as a way to improve the quality of life of the inhabitants.
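As a purely illustrative aside, not taken from this project, the coverage limitations of 802.11 mentioned above are usually quantified with a simple link budget. The sketch below uses the standard free-space path-loss formula; the transmit power, antenna gains, wall losses and receiver sensitivity are assumed example values.

```python
# Back-of-the-envelope 802.11 link-budget check (illustrative values only).

import math


def free_space_path_loss_db(distance_m, freq_hz):
    """Free-space path loss: FSPL(dB) = 20*log10(d) + 20*log10(f) - 147.55."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55


def received_power_dbm(tx_dbm, tx_gain_dbi, rx_gain_dbi, distance_m,
                       freq_hz=2.4e9, wall_loss_db=0.0):
    # Received power = transmit power + antenna gains - path loss - wall losses.
    return (tx_dbm + tx_gain_dbi + rx_gain_dbi
            - free_space_path_loss_db(distance_m, freq_hz) - wall_loss_db)


if __name__ == "__main__":
    # Example: 20 dBm access point, 2 dBi antennas, 25 m away, two walls (~5 dB each).
    rx = received_power_dbm(20, 2, 2, 25, wall_loss_db=10)
    sensitivity = -70  # assumed receiver sensitivity for a mid-range 802.11 rate
    print(f"received ~= {rx:.1f} dBm, margin ~= {rx - sensitivity:.1f} dB")
```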
Abstract:
La temperatura es una preocupación que juega un papel protagonista en el diseño de circuitos integrados modernos. El importante aumento de las densidades de potencia que conllevan las últimas generaciones tecnológicas ha producido la aparición de gradientes térmicos y puntos calientes durante el funcionamiento normal de los chips. La temperatura tiene un impacto negativo en varios parámetros del circuito integrado como el retardo de las puertas, los gastos de disipación de calor, la fiabilidad, el consumo de energía, etc. Con el fin de luchar contra estos efectos nocivos, la técnicas de gestión dinámica de la temperatura (DTM) adaptan el comportamiento del chip en función en la información que proporciona un sistema de monitorización que mide en tiempo de ejecución la información térmica de la superficie del dado. El campo de la monitorización de la temperatura en el chip ha llamado la atención de la comunidad científica en los últimos años y es el objeto de estudio de esta tesis. Esta tesis aborda la temática de control de la temperatura en el chip desde diferentes perspectivas y niveles, ofreciendo soluciones a algunos de los temas más importantes. Los niveles físico y circuital se cubren con el diseño y la caracterización de dos nuevos sensores de temperatura especialmente diseñados para los propósitos de las técnicas DTM. El primer sensor está basado en un mecanismo que obtiene un pulso de anchura variable dependiente de la relación de las corrientes de fuga con la temperatura. De manera resumida, se carga un nodo del circuito y posteriormente se deja flotando de tal manera que se descarga a través de las corrientes de fugas de un transistor; el tiempo de descarga del nodo es la anchura del pulso. Dado que la anchura del pulso muestra una dependencia exponencial con la temperatura, la conversión a una palabra digital se realiza por medio de un contador logarítmico que realiza tanto la conversión tiempo a digital como la linealización de la salida. La estructura resultante de esta combinación de elementos se implementa en una tecnología de 0,35 _m. El sensor ocupa un área muy reducida, 10.250 nm2, y consume muy poca energía, 1.05-65.5nW a 5 muestras/s, estas cifras superaron todos los trabajos previos en el momento en que se publicó por primera vez y en el momento de la publicación de esta tesis, superan a todas las implementaciones anteriores fabricadas en el mismo nodo tecnológico. En cuanto a la precisión, el sensor ofrece una buena linealidad, incluso sin calibrar; se obtiene un error 3_ de 1,97oC, adecuado para tratar con las aplicaciones de DTM. Como se ha explicado, el sensor es completamente compatible con los procesos de fabricación CMOS, este hecho, junto con sus valores reducidos de área y consumo, lo hacen especialmente adecuado para la integración en un sistema de monitorización de DTM con un conjunto de monitores empotrados distribuidos a través del chip. Las crecientes incertidumbres de proceso asociadas a los últimos nodos tecnológicos comprometen las características de linealidad de nuestra primera propuesta de sensor. Con el objetivo de superar estos problemas, proponemos una nueva técnica para obtener la temperatura. La nueva técnica también está basada en las dependencias térmicas de las corrientes de fuga que se utilizan para descargar un nodo flotante. La novedad es que ahora la medida viene dada por el cociente de dos medidas diferentes, en una de las cuales se altera una característica del transistor de descarga |la tensión de puerta. 
Este cociente resulta ser muy robusto frente a variaciones de proceso y, además, la linealidad obtenida cumple ampliamente los requisitos impuestos por las políticas DTM |error 3_ de 1,17oC considerando variaciones del proceso y calibrando en dos puntos. La implementación de la parte sensora de esta nueva técnica implica varias consideraciones de diseño, tales como la generación de una referencia de tensión independiente de variaciones de proceso, que se analizan en profundidad en la tesis. Para la conversión tiempo-a-digital, se emplea la misma estructura de digitalización que en el primer sensor. Para la implementación física de la parte de digitalización, se ha construido una biblioteca de células estándar completamente nueva orientada a la reducción de área y consumo. El sensor resultante de la unión de todos los bloques se caracteriza por una energía por muestra ultra baja (48-640 pJ) y un área diminuta de 0,0016 mm2, esta cifra mejora todos los trabajos previos. Para probar esta afirmación, se realiza una comparación exhaustiva con más de 40 propuestas de sensores en la literatura científica. Subiendo el nivel de abstracción al sistema, la tercera contribución se centra en el modelado de un sistema de monitorización que consiste de un conjunto de sensores distribuidos por la superficie del chip. Todos los trabajos anteriores de la literatura tienen como objetivo maximizar la precisión del sistema con el mínimo número de monitores. Como novedad, en nuestra propuesta se introducen nuevos parámetros de calidad aparte del número de sensores, también se considera el consumo de energía, la frecuencia de muestreo, los costes de interconexión y la posibilidad de elegir diferentes tipos de monitores. El modelo se introduce en un algoritmo de recocido simulado que recibe la información térmica de un sistema, sus propiedades físicas, limitaciones de área, potencia e interconexión y una colección de tipos de monitor; el algoritmo proporciona el tipo seleccionado de monitor, el número de monitores, su posición y la velocidad de muestreo _optima. Para probar la validez del algoritmo, se presentan varios casos de estudio para el procesador Alpha 21364 considerando distintas restricciones. En comparación con otros trabajos previos en la literatura, el modelo que aquí se presenta es el más completo. Finalmente, la última contribución se dirige al nivel de red, partiendo de un conjunto de monitores de temperatura de posiciones conocidas, nos concentramos en resolver el problema de la conexión de los sensores de una forma eficiente en área y consumo. Nuestra primera propuesta en este campo es la introducción de un nuevo nivel en la jerarquía de interconexión, el nivel de trillado (o threshing en inglés), entre los monitores y los buses tradicionales de periféricos. En este nuevo nivel se aplica selectividad de datos para reducir la cantidad de información que se envía al controlador central. La idea detrás de este nuevo nivel es que en este tipo de redes la mayoría de los datos es inútil, porque desde el punto de vista del controlador sólo una pequeña cantidad de datos |normalmente sólo los valores extremos| es de interés. Para cubrir el nuevo nivel, proponemos una red de monitorización mono-conexión que se basa en un esquema de señalización en el dominio de tiempo. Este esquema reduce significativamente tanto la actividad de conmutación sobre la conexión como el consumo de energía de la red. Otra ventaja de este esquema es que los datos de los monitores llegan directamente ordenados al controlador. 
Si este tipo de señalización se aplica a sensores que realizan conversión tiempo-a-digital, se puede obtener compartición de recursos de digitalización tanto en tiempo como en espacio, lo que supone un importante ahorro de área y consumo. Finalmente, se presentan dos prototipos de sistemas de monitorización completos que de manera significativa superan las características de trabajos anteriores en términos de área y, especialmente, consumo de energía.

Abstract

Temperature is a first-class design concern in modern integrated circuits. The important increase in power densities associated with recent technology evolutions has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature impacts several circuit parameters such as speed, cooling budgets, reliability, power consumption, etc. In order to fight against these negative effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip relying on the information of a monitoring system that provides run-time thermal information of the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis. This thesis approaches the matter of on-chip temperature monitoring from different perspectives and levels, providing solutions to some of the most important issues. The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based upon a mechanism that obtains a pulse whose width varies with the dependence of the leakage currents on temperature. In a nutshell, a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the width of the pulse displays an exponential dependence on temperature, the conversion into a digital word is realized by means of a logarithmic counter that performs both the time-to-digital conversion and the linearization of the output. The structure resulting from this combination of elements is implemented in a 0.35 µm technology and is characterized by a very reduced area, 10,250 nm², and power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous works by the time it was first published and still, by the time of the publication of this thesis, they outperform all previous implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity; even without calibration it displays a 3σ error of 1.97 °C, appropriate for DTM applications. As explained, the sensor is completely compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it especially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip. The exacerbated process fluctuations that come with recent technology nodes jeopardize the linearity characteristics of the first sensor. In order to overcome these problems, a new temperature-inferring technique is proposed. In this case, we also rely on the thermal dependencies of the leakage currents that are used to discharge a floating node, but now the result comes from the ratio of two different measures, in one of which we alter a characteristic of the discharging transistor (the gate voltage).
This ratio proves to be very robust against process variations and displays a more than sufficient linearity with temperature (a 3σ error of 1.17 °C considering process variations and performing two-point calibration). The implementation of the sensing part based on this new technique raises several issues, such as the generation of a process-variation-independent voltage reference, that are analyzed in depth in the thesis. In order to perform the time-to-digital conversion, we employ the same digitization structure that the former sensor used. A completely new standard cell library targeting low area and power overhead is built from scratch to implement the digitization part. Putting all the pieces together, we achieve a complete sensor system that is characterized by an ultra-low energy per conversion of 48-640 pJ and an area of 0.0016 mm²; this figure outperforms all previous works. To prove this statement, we perform a thorough comparison with over 40 works from the scientific literature. Moving up to the system level, the third contribution is centered on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works from the literature target maximizing the accuracy of the system with the minimum number of monitors. In contrast, we introduce new metrics of quality apart from just the number of sensors; we consider the power consumption, the sampling frequency, the interconnection costs and the possibility of choosing among different types of monitors. The model is introduced in a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power and interconnection constraints and a collection of monitor types; the algorithm yields the selected type of monitor, the number of monitors, their position and the optimum sampling rate. We test the algorithm with the Alpha 21364 processor under several constraint configurations to prove its validity. When compared to other previous works in the literature, the modeling presented here is the most complete. Finally, the last contribution targets the networking level: given an allocated set of temperature monitors, we focus on solving the problem of connecting them in a way that is efficient from the area and power perspectives. Our first proposal in this area is the introduction of a new interconnection hierarchy level, the threshing level, in between the monitors and the traditional peripheral buses, which applies data selectivity to reduce the amount of information that is sent to the central controller. The idea behind this new level is that in this kind of network most data are useless, because from the controller viewpoint just a small amount of data (normally the extreme values) is of interest. To cover the new interconnection level, we propose a single-wire monitoring network based on a time-domain signaling scheme that significantly reduces both the switching activity over the wire and the power consumption of the network. This scheme codes the information in the time domain and directly yields an ordered list of values from the maximum to the minimum. If the scheme is applied to monitors that employ time-to-digital conversion (TDC), digitization resource sharing is achieved, producing an important saving in area and power consumption. Two prototypes of complete monitoring systems are presented; they significantly outperform previous works in terms of area and, especially, power consumption.
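By way of illustration only, the sketch below shows the general shape of a simulated-annealing search like the one the abstract describes for allocating on-chip thermal monitors. The thermal map, the cost function (nearest-sensor reconstruction error) and all parameters are invented for the example; the thesis's actual model additionally weighs monitor type, sampling rate, power and interconnect cost.

```python
# Simplified, illustrative sketch of simulated annealing for placing a fixed
# number of thermal monitors so that the worst reconstruction error over a
# (hypothetical) thermal map is minimized.

import math
import random

random.seed(1)

# Hypothetical 16x16 thermal map of the die (arbitrary units) with one hot spot.
N = 16
thermal_map = [[40 + 30 * math.exp(-((x - 11) ** 2 + (y - 4) ** 2) / 18.0)
                for x in range(N)] for y in range(N)]


def reconstruction_error(sensors):
    """Max error when every cell takes the temperature of its nearest sensor."""
    worst = 0.0
    for y in range(N):
        for x in range(N):
            sx, sy = min(sensors, key=lambda s: (s[0] - x) ** 2 + (s[1] - y) ** 2)
            worst = max(worst, abs(thermal_map[y][x] - thermal_map[sy][sx]))
    return worst


def anneal(num_sensors=4, steps=3000, t0=5.0, alpha=0.999):
    current = [(random.randrange(N), random.randrange(N)) for _ in range(num_sensors)]
    cost = reconstruction_error(current)
    best, best_cost, temp = list(current), cost, t0
    for _ in range(steps):
        # Propose moving one randomly chosen sensor to a random position.
        cand = list(current)
        cand[random.randrange(num_sensors)] = (random.randrange(N), random.randrange(N))
        c = reconstruction_error(cand)
        # Accept improvements always, worse moves with Boltzmann probability.
        if c < cost or random.random() < math.exp((cost - c) / temp):
            current, cost = cand, c
            if cost < best_cost:
                best, best_cost = list(current), cost
        temp *= alpha  # cool down
    return best, best_cost


if __name__ == "__main__":
    placement, err = anneal()
    print("sensor positions:", placement, "max error:", round(err, 2))
```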
Abstract:
This paper presents the SAILSE Project (Sistema Avanzado de Información en Lengua de Signos Española - Spanish Sign Language Advanced Information System). This project aims to develop an interactive system for facilitating communication between a hearing person and a deaf person. The first step has been the linguistic study, including a sentence collection, its translation into LSE (Lengua de Signos Española - Spanish Sign Language), and sign generation. After this analysis, the paper describes the interactive system, which integrates an avatar to represent the signs, a text-to-speech converter and several translation technologies. Finally, this paper presents the evaluation set-up carried out with deaf people and the main conclusions extracted from it.
Abstract:
The proposal highlights certain design strategies and a case study that can link the material urban space to emerging digital realms. The composite nature of urban spaces (material/digital) is understood as an opportunity to reconfigure public urban spaces without high-cost, difficult-to-apply interventions and, furthermore, to reactivate them by inserting dynamic, interactive and playful conditions that engage people and re-establish their relations to the cities. The structuring of coexisting and interconnected material and digital aspects in public urban spaces is proposed through the implementation of hybridization processes. Hybrid spaces can fascinate and provoke the public, and especially younger people, to get involved and interact with physical aspects of urban public spaces as well as digital representations or interpretations of those. Digital game design in urban public spaces can be understood as a tool that allows architects to understand and configure hybrids of material and digital conceptions and to project them all in one, as an inseparable totality. Digital technologies have for a long time now intervened in our perception of traditional dipoles such as subject-environment. Architects, especially in the past, have been responsible for material mediations and tangible interfaces that permit subjects to relate to their physical environments in a controlled and regulated manner; but nowadays architects are compelled to embody in design the transition that is happening in all aspects of everyday life, that is, from material to digital realities. In addition, the disjunctive relation of material and digital realms is ceding, and architects are now faced with the challenge posed by the merging of both in a single, all-inclusive reality. The case study is a design project for a game implemented simultaneously in a specific urban space and on the internet. This project, developed in the spring-semester course New Media in Architecture at the Department of Architecture, Democritus University of Thrace, Greece, is situated in the city of Xanthi. Composite cities can use design strategies and technological tools to configure augmented and appealing urban spaces that articulate and connect different realms in a single engaging reality.
Abstract:
Los retos y oportunidades a los que se enfrentan las organizaciones y administraciones de las primeras décadas del siglo XXI se caracterizan por una serie de fuerzas perturbadoras como la globalización, el avance de las tecnologías emergentes y el desequilibrio económico, que están actuando como impulsores de la transformación del mercado. La acción conjunta de estos factores está obligando a todas las empresas industriales a tener que trabajar con mayores y más exigentes niveles de productividad planteándose continuamente como mejorar y lograr satisfacer los requerimientos de los clientes. De esta situación surge la necesidad de volver a plantearse de nuevo ¿quién es el cliente?, ¿qué valora el cliente? y ¿cómo se pueden generan beneficios sostenibles? La aplicación de esta reflexión a la industria naval militar marca los objetivos a los que esta tesis doctoral busca dar respuesta. El primer objetivo, de carácter general, consiste en la definición de un modelo de negocio sostenible para la industria naval militar del 2025 que se adapte a los requisitos del cliente y al nuevo escenario político, económico, social, tecnológico y ambiental que rodea esta industria. El segundo objetivo, consecuencia del modelo general, trata de desarrollar una metodología para ejecutar programas de apoyo al ciclo de vida del “buque militar”. La investigación se estructura en cuatro partes: en la primera se justifica, por un lado, la necesidad del cambio de modelo y por otro se identifican los factores estructurantes para la definición del modelo. La segunda parte revisa la literatura existente sobre uno de los aspectos básicos para el nuevo modelo, el concepto Producto-Servicio. La tercera parte se centra totalmente en la industria naval militar estudiando los aspectos concretos del sector y, en base al trabajo de campo realizado, se identifican los puntos que más valoran las Marinas de Guerra y como estas gestionan al buque militar durante todo su ciclo de vida. Por último se presentan los principios del modelo propuesto y se desarrollan los pilares básicos para la ejecución de proyectos de Apoyo al Ciclo de Vida (ACV). Como resultado de la investigación, el modelo propuesto para la industria naval militar se fundamenta en once principios: 1. El buque militar (producto de alto valor añadido) debe ser diseñado y construido en un astillero del país que desarrolla el programa de defensa. 2. El diseño tiene que estar orientado al valor para el cliente, es decir, se tiene que diseñar el buque militar para que cumpla su misión, eficaz y eficientemente, durante toda su vida operativa, asegurando la seguridad del buque y de las personas y protegiendo el medio ambiente de acuerdo con las regulaciones vigentes. 3. La empresa debe suministrar soluciones integrales de apoyo al ciclo de vida al producto. 4. Desarrollar y mantener las capacidades de integración de sistemas complejos para todo el ciclo de vida del buque militar. 5. Incorporar las tecnologías digitales al producto, a los procesos, a las personas y al propio modelo de negocio. 6. Desarrollar planes de actuación con el cliente domestico a largo plazo. 
Estos planes tienen que estar basados en tres premisas: (i) deben incluir el ciclo de vida completo, desde la fase de investigación y desarrollo hasta la retirada del buque del servicio; (ii) la demanda debe ser sofisticada, es decir las exigencias del cliente, tanto desde la óptica de producto como de eficiencia, “tiran” del contratista y (iii) permitir el mantenimiento del nivel tecnológico y de las capacidades industriales de la compañía a futuro y posicionarla para que pueda competir en el mercado de exportación. 7. Impulsar el sector militar de exportación mediante una mayor actividad comercial a nivel internacional. 8. Fomentar la multilocalización ya que representa una oportunidad de crecimiento y favorece la exportación posibilitando el suministro de soluciones integrales en el país destino. 9. Reforzar la diplomacia institucional como palanca para la exportación. 10. Potenciar el liderazgo tecnológico tanto en producto como en procesos con políticas activas de I + D+ i. 11. Reforzar la capacidad de financiación con soluciones innovadoras. El segundo objetivo de esta tesis se centra en el desarrollo de soluciones integrales de Apoyo al Ciclo de Vida (ACV). La metodología planteada trata de minimizar la brecha entre capacidades y necesidades a lo largo de la vida operativa del barco. Es decir, el objetivo principal de los programas de ACV es que la unidad conserve durante toda su vida operativa, en términos relativos a las tecnologías existentes, las capacidades equivalentes a las que tendrá cuando entre en servicio. Los ejes de actuación para conseguir que un programa de Apoyo al Ciclo de Vida cumpla su objetivo son: el diseño orientado al valor, la ingeniería de Apoyo al Ciclo de Vida, los proyectos de refresco de tecnología, el mantenimiento Inteligente y los contratos basados en prestaciones. ABSTRACT On the first decades of the 21st century, organizations and administrations face challenges and come across opportunities threatened by a number of disruptive forces such as globalization, the ever-changing emerging technologies and the economic imbalances acting as drivers of the market transformation. This combination of factors is forcing all industrial companies to have more and higher demanding productivity levels, while bearing always in mind how to improve and meet the customer’s requirements. In this situation, we need to question ourselves again: Who is the customer? What does the customer value? And how can we deliver sustainable economic benefits? Considering this matter in a military naval industry framework sets the goals that this thesis intends to achieve. The first general goal is the definition of a new sustainable business model for the 2025 naval industry, adapted to the customer requirements and the new political, economic, social, technological and environmental scenario. And the second goal that arises as a consequence of the general model develops a methodology to implement “warship” through life support programs. The research is divided in four parts: the first one justifies, on the one hand, the need to change the existing model and, on the other, identifies the model structural factors. On the second part, current literature regarding one of the key issues on the new model (the Product-Service concept) is reviewed. Based on field research, the third part focuses entirely on military shipbuilding, analyzing specific key aspects of this field and identifying which of them are valued the most by Navies and how they manage through life cycles of warships. 
Finally, the foundation of the proposed model is presented, and the basic grounds for implementing a Through Life Support (TLS) program are developed. As a result of this research, the proposed model for the naval industry is based on eleven (11) key principles: 1. The warship (a high-added-value product) must be designed and built in a shipyard in the country developing the defense program. 2. Design must be customer-value oriented, i.e. the warship must be designed to effectively fulfill its mission throughout its operational life, ensuring the safety of the ship and its people and protecting the environment in accordance with current regulations. 3. The industry has to provide integrated Through Life Support solutions. 4. Develop and maintain integrated complex-systems capabilities for the entire warship life cycle. 5. Incorporate digital technologies into the product, the processes, the people and the business model itself. 6. Develop long-term action plans with the domestic customer. These plans must be based on three premises: (i) the complete life cycle must be included, from the research and development stage through to the ship's disposal; (ii) customer demand has to be sophisticated, i.e. customer requirements, both from the efficiency and the product perspective, "pull" the contractor; and (iii) the technological level and manufacturing capabilities of the company must be maintained into the future and a competitive position on the export market has to be achieved. 7. Promote the military exporting sector through increased international business. 8. Develop contractor multi-location, as it entails an opportunity for growth and promotes export opportunities by providing integrated solutions in the customer's country. 9. Strengthen institutional diplomacy as a lever for export. 10. Promote technological leadership in both product and processes with active R&D&I (Research, Development and Innovation) policies. 11. Strengthen financing capacity through innovative solutions. The second goal of this thesis is focused on developing integrated Through Life Support (TLS) solutions. The proposed methodology tries to minimize the gap between needs and capabilities throughout the ship's operational life. That is, the main TLS program objective is to keep the ship's performance and capabilities throughout its operational life equivalent, relative to the technologies available at each moment, to those it had when it entered service. The main actions to fulfill the TLS program objectives are: value-oriented design, TLS engineering, technology-refresh projects, intelligent maintenance and performance-based contracts.
Abstract:
Uno de los temas más importantes dentro del debate contemporáneo, es el que se refiere a la sostenibilidad a largo plazo de la sociedad tal y como la entendemos hoy. El ser humano está recuperando la sensibilidad perdida que le concebía como una pieza más dentro del ciclo natural de la vida. Por fin hemos entendido que no podemos ser auto suficientes e independientes del entorno natural que nos rodea. Más allá del respeto y del cuidado, está abierta la puerta del conocimiento infinito que nos brinda la naturaleza a todos los niveles y a todas las escalas. Dentro de la disciplina arquitectónica han existido ejemplos como Antoni Gaudí o Frei Otto que han referenciado su obra en el mundo Natural, encontrando en él las estrategias y bases para el diseño arquitectónico. Sin embargo han sido una minoría dentro del enorme elenco de arquitectos defensores del ángulo recto. En las últimas décadas, la tendencia está cambiando. No nos referimos tanto a la sensibilidad creciente por conseguir una mayor eficiencia energética que ha llevado a una puesta en valor de la arquitectura vernácula, trasladando su sabiduría a las estrategias bioclimáticas. Nos referimos a un caso específico dentro del amplio abanico de formas arquitectónicas que han aparecido gracias a la incorporación de las herramientas computacionales en el diseño y la producción. Las arquitecturas que nos interesan son las que aprovechan estas técnicas para analizar e interpretar las estrategias complejas y altamente eficientes que encontramos en la naturaleza, y trasladarlas a la disciplina arquitectónica. Esta tendencia que se enmarca dentro de la Biomímesis o Biomimética es conocida con el nombre de Bioarquitectura. La presente tesis trata de morfología y sobre todo de morfogénesis. El término morfología se refiere al estudio de una forma concreta que nos permite entender un caso específico, nuestro foco de atención se centra sin embargo en la morfogénesis, es decir, en el estudio de los procesos de generación de esas formas, para poder reproducir patrones y generar abanicos de casos adaptables y reconfigurables. El hecho de estudiar la forma no quiere decir que ésta sea una tesis “formalista” con la connotación peyorativa y gestual que se le suele atribuir a este término. La investigación concibe el concepto de forma como lo hace el mundo natural: forma como síntesis de eficiencia. No hay ninguna forma natural gratuita, que no cumpla una función determinada y que no se desarrolle con el mínimo material y gaste la mínima energía posible. Este afán por encontrar la “forma eficaz” es lo que nos hace traspasar la frontera de la arquitectura formalista. El camino de investigación morfológica se traza, como el título de la tesis indica, siguiendo el hilo conductor concreto de los radiolarios. Estos microorganismos unicelulares poseen unos esqueletos tan complejos que para poder entender su morfología es necesario establecer un amplio recorrido que abarca más de 4.000 años de conocimiento humano. Desde el descubrimiento de los sólidos platónicos, poliedros que configuran muchas de las formas globales de estos esqueletos; hasta la aplicación de los algoritmos generativos, que permiten entender y reproducir los patrones de comportamiento que existen detrás de los sistemas de compactación y teselación irregular de los esqueletos radiolarios. 
La tesis no pretende plantear el problema desde un punto de vista biológico, ni paleontológico, aunque inevitablemente en el primer capítulo se realiza un análisis referenciado del estado del conocimiento científico actual. Sí se analizan en mayor profundidad cuestiones morfológicas y se tratan los diferentes posicionamientos desde los cuales estos microorganismos han servido de referencia en la disciplina arquitectónica. Además encontramos necesario analizar otros patrones naturales que comparten estrategias generativas con los esqueletos radiolarios. Como ya hemos apuntado, en el segundo capítulo se aborda un recorrido desde las geometrías más básicas a las más complejas, que tienen relación con las estrategias de generación de las formas detectadas en los microorganismos. A su vez, el análisis de estas geometrías se intercala con ejemplos de aplicaciones dentro de la arquitectura, el diseño y el arte. Finalizando con un cronograma que sintetiza y relaciona las tres vías de investigación abordadas: natural, geométrica y arquitectónica. Tras los dos capítulos centrales, el capítulo final recapitula las estrategias analizadas y aplica el conocimiento adquirido en la tesis, mediante la realización de diferentes prototipos que abarcan desde el dibujo analítico tradicional, a la fabricación digital y el diseño paramétrico, pasando por modelos analógicos de escayola, barras metálicas, resina, silicona, látex, etc. ABSTRACT One of the most important issues in the contemporary debate, is the one concerning the long-term sustainability of society as we understand it today. The human being is recovering the lost sensitivity that conceived us as part of the natural cycle of life. We have finally understood that we cannot be self-sufficient and independent of the natural environment which surrounds us. Beyond respect and care, we’ll find that the gateway to the infinite knowledge that nature provides us at all levels and at all scales is open. Within the architectural discipline, there have been remarkable examples such as Antoni Gaudí or Frei Otto who have inspired their work in the natural world. Both, found in nature the strategies and basis of their architectural designs. However, they have been a minority within the huge cast of architects defenders of the right angle. In recent decades, the trend is changing. We are not referring to the growing sensitivity in trying to achieve energy efficiency that has led to an enhancement of vernacular architecture, transferring its wisdom to bioclimatic strategies. We refer to a specific case within the wide range of architectural forms that have appeared thanks to the integration of computer tools in both design and production processes. We are interested in architectures that exploit these techniques to analyse and interpret the complex and highly efficient strategies found in nature, and shift them to the discipline of architecture. This trend, which is being implemented in the framework of the Biomimicry or biomimetics, is called Bioarchitecture. This thesis deals with morphology and more specifically with morphogenesis. Morphology is the study of a concrete form that allows us to understand a specific case. However, our focus is centered in morphogenesis or, in other words, the study of the processes of generation of these forms, in order to replicate patterns and generate a range of adaptable and reconfigurable cases. 
The fact that we study shapes does not mean that this is a "formalistic" thesis, with the pejorative connotation often attributed to that term. This study conceives the concept of shape as nature does: as a synthesis of efficiency. There is no meaningless form in nature; forms and shapes in nature play a particular role and are developed with minimum material and minimum energy consumption. This quest to find the efficient shape is what makes us go beyond formalistic architecture. The path of morphological investigation is traced, as the title of the thesis suggests, following the thread of radiolaria. These single-cell microorganisms possess such complex skeletons that, to be able to understand their morphology, we must establish a broad itinerary spanning more than 4,000 years of human knowledge: from the discovery of the Platonic solids, polyhedra that configure many of the global shapes of these skeletons, to the application of generative algorithms, which allow us to understand and recreate the behavioural patterns behind the compaction and irregular tessellation systems of radiolarian skeletons. The thesis does not aim to approach the problem from a biological or paleontological standpoint, although the first chapter inevitably offers a referenced analysis of the current state of scientific knowledge. Morphological questions are analysed in greater depth, together with the different positions from which these microorganisms have served as a reference in the architectural discipline. In addition, we find it necessary to analyse other natural patterns that share generative strategies with radiolarian skeletons. As noted above, the second chapter traces an itinerary from the most basic geometries to the more complex ones, which are related in this chapter to the generative strategies of the shapes found in the microorganisms. At the same time, the analysis of these geometries is interleaved with examples of applications in the fields of architecture, design and the arts, and the chapter closes with a time chart that synthesizes and relates the three lines of investigation addressed: natural, geometrical and architectural. After the two central chapters, the final chapter summarises the strategies analysed and applies the knowledge acquired throughout the thesis through the realization of different prototypes, ranging from traditional analytical drawing to digital fabrication and parametric design, passing through analogue models in plaster, metal bars, resin, silicone, latex, etc.
Abstract:
Con la llegada de la era de la información, viendo esta era como la necesidad de informatizar, registrar y tratar una gran cantidad de datos mediante la tecnología, se está dando el paso de diversos procesos burocráticos a medios tecnológicos, cambiando el papel por los datos almacenados en las computadoras. El DNI electrónico permite a un individuo identificarse mediante un dispositivo donde se almacenan sus datos para poder identificarse unívocamente en aquellos trámites que antaño costaban largos procesos burocráticos en papel. Sabemos que las aplicaciones software son aquellos módulos formados por un conjunto de programas y rutinas que permiten a los diferentes tipos de computadores realizar tareas de manera parcial o totalmente automática. Por ello este proyecto demuestra todo el proceso de creación de un módulo software que, como comentamos en el primer párrafo, permitiría sobrellevar otros tantos procesos burocráticos, como sería la petición del DNI y su posterior escritura a mano en distintas situaciones. Todo ello orientado desde un estricto análisis desde el punto de vista de la ingeniería del software.

ABSTRACT

Since we are in the information era, understood as the need to computerize, record and process large amounts of data by technological means, many bureaucratic processes are moving from paper to data stored in computers. The electronic DNI allows citizens to identify themselves unequivocally in procedures that used to require tedious, paper-heavy processes. Software applications are modules made up of a set of programs and routines that allow computers to perform partially or fully automated tasks. That is why this project shows the full process of creating a software module which, as stated above, would relieve many other bureaucratic steps, such as requesting the DNI number and later writing it down by hand in different situations. All of this is approached through a strict analysis from the software engineering point of view.
Abstract:
Natural ribozymes require metal ion cofactors that aid both in structural folding and in chemical catalysis. In contrast, many protein enzymes produce dramatic rate enhancements using only the chemical groups that are supplied by their constituent amino acids. This fact is widely viewed as the most important feature that makes protein a superior polymer for the construction of biological catalysts. Herein we report the in vitro selection of a catalytic DNA that uses histidine as an active component for an RNA cleavage reaction. An optimized deoxyribozyme from this selection requires l-histidine or a closely related analog to catalyze RNA phosphoester cleavage, producing a rate enhancement of ≈1-million-fold over the rate of substrate cleavage in the absence of enzyme. Kinetic analysis indicates that a DNA–histidine complex may perform a reaction that is analogous to the first step of the proposed catalytic mechanism of RNase A, in which the imidazole group of histidine serves as a general base catalyst. Similarly, ribozymes of the “RNA world” may have used amino acids and other small organic cofactors to expand their otherwise limited catalytic potential.
Abstract:
Cholesterol transport is an essential process in all multicellular organisms. In this study we applied two recently developed approaches to investigate the distribution and molecular mechanisms of cholesterol transport in Caenorhabditis elegans. The distribution of cholesterol in living worms was studied by imaging its fluorescent analog, dehydroergosterol, which we applied to the animals by feeding. Dehydroergosterol accumulates primarily in the pharynx, nerve ring, excretory gland cell, and gut of L1–L3 larvae. Later, the bulk of dehydroergosterol accumulates in oocytes and spermatozoa. Males display exceptionally strong labeling of spermatids, which suggests a possible role for cholesterol in sperm development. In a complementary approach, we used a photoactivatable cholesterol analog to identify cholesterol-binding proteins in C. elegans. Three major and several minor proteins were found specifically cross-linked to photocholesterol after UV irradiation. The major proteins were identified as vitellogenins. rme-2 mutants, which lack the vitellogenin receptor, fail to accumulate dehydroergosterol in oocytes and embryos and instead accumulate dehydroergosterol in the body cavity along with vitellogenin. Thus, uptake of cholesterol by C. elegans oocytes occurs via an endocytotic pathway involving yolk proteins. The pathway is a likely evolutionary ancestor of mammalian cholesterol transport.
Abstract:
The phosphorylation-dependent mechanisms regulating activation of the human neutrophil respiratory-burst enzyme, NADPH oxidase, have not been elucidated. We have shown that phosphatidic acid (PA) and diacylglycerol (DG), products of phospholipase activation, synergize to activate NADPH oxidase in a cell-free system. We now report that activation by PA plus DG involves protein kinase activity, unlike other cell-free system activators. NADPH oxidase activation by PA plus DG is reduced approximately 70% by several protein kinase inhibitors [1-(5-isoquinolinesulfonyl)piperazine, staurosporine, GF-109203X]. Similarly, depletion of ATP by dialysis reduces PA plus DG-mediated NADPH oxidase activation by approximately 70%. Addition of ATP, but not a nonhydrolyzable ATP analog, to the dialyzed system restores activation levels to normal. In contrast, these treatments have little effect on NADPH oxidase activation by arachidonic acid or SDS plus DG. PA plus DG induces the phosphorylation of a number of endogenous proteins. Phosphorylation is largely mediated by PA, not DG. A predominant substrate is p47-phox, a phosphoprotein component of NADPH oxidase. Phosphorylation of p47-phox precedes activation of NADPH oxidase and is markedly reduced by the protein kinase inhibitors. In contrast, arachidonic acid alone or SDS plus DG is a poor activator of protein phosphorylation in the cell-free system. Thus, PA induces activation of one or more protein kinases that regulate NADPH oxidase activation in a cell-free system. This cell-free system will be useful for identifying a functionally important PA-activated protein kinase(s) and for dissecting the phosphorylation-dependent mechanisms responsible for NADPH oxidase activation.
Abstract:
The research performed in this thesis studies the growth and evolution of advertising investment in digital media compared to traditional printed media, in particular daily newspapers. By analysing advertising inserts in a printed newspaper and its online edition, the study's objective was to confirm that digital media are consolidating as an advertising platform. Throughout this study we used different quantitative and qualitative tools to test the starting hypothesis of this thesis. We used a combined methodology that included: a) joint analysis of the advertising content of the printed newspaper "El Mundo" and the online newspaper "elmundo.es"; b) analysis of the state of the art in advertising investment; c) in-depth interviews with different media professionals, including advertisers, publicists, media planners and researchers. A comparative study between the printed newspaper "El Mundo" and the online newspaper "elmundo.es" supported the content analysis in a realistic situation; therefore a detailed study of advertising inserts in both the printed and online versions of "El Mundo" was performed. The main goal of this work was to show how advertising investment is moving from traditional printed media to digital media. The results of this study reveal that investment in digital media is increasing while investment in traditional media is decreasing. At the same time, new reading habits along with technological innovation attract readers to digital platforms. As a consequence, traditional printed newspapers have an immediate need to somehow engage online-newspaper readers with printed platforms. Hence, many advertisers already invest large sums of money in digital media, with marketing strategies and plans focused on online advertising campaigns on the Internet. It is also true that in recent years printed and online media stopped "fighting", which has opened the possibility of joint marketing and communication strategies between the two. This research has different implications. One of them is that print media appears to be a business model that needs to be reinvented, adapting itself to the media's current situation in order to remain profitable and not disappear. On the other hand, digital media must have a well-defined strategy with concrete and viable goals for online investment. Finally, it is important that newspapers take the new readers into account, that is: new readers mostly use digital platforms, so the interaction between readers and newspapers is very different from what it used to be when only printed newspapers existed. Media must go and be where the new reader is.
Abstract:
Illegitimate adolescent pregnancy creates a variety of problems, beginning with the difficult decision about whether or not to terminate the pregnancy. If the pregnancy is carried to term, choices follow regarding marriage or single parenthood and keeping or relinquishing the child. All of these choices involve consequences for the adolescent, many of them negative ones. This paper examines the problem of out-of-wedlock teen pregnancy and its possible psychological sources. It also introduces a method for analyzing the psychology of unwed teen pregnancy and childbearing and reviews the literature on the subject by this method.