900 results for end-to-side
Abstract:
The subject of the Spanish Civil War cannot be fully understood without studying both the social and political climate that preceded the armed conflict and the process by which this momentous historical event was wound up. The victorious side claimed that the end of military operations put an end to the war, but for most Spaniards the war did not end then. For many years the use of force continued, unilaterally, in the form of a bloody internal repression and a long exile of the defeated. Other consequences were the moral and psychological suffering inflicted, the cultural damage caused, and the banishment and its repercussions on millions of households. The economic and social losses that weighed on the general welfare must also be counted. To appreciate the way the Franco regime chose to wind up the Spanish Civil War, it is illustrative to detail some aspects of the approach put into practice, whose failure reveals the meanness with which the dictatorial regime acted.
Abstract:
Silicoflagellate assemblages of ODP Leg 104 Neogene sequences are the basis of an interpretation of changes in the Neogene paleoenvironment of the Norwegian Sea. Fluctuations in the percentages of temperature- and nutrient-sensitive taxonomic groups document major changes in sea-surface conditions. A brief, but distinct, cooling event occurred at 18.0-17.5 Ma which resulted in the disappearance of Naviculopsis. Following this early Miocene cooling a long period of increasing surface-water temperature occurred, leading up to a thermal high in the early middle Miocene (14.0 Ma). The early late Miocene (10.0-9.0 Ma) was distinctly cooler than the middle Miocene, but warmer than the remainder of the Neogene. Conditions between 13.0 and 10.0 Ma are unrecorded because of a regional hiatus, which is the earliest evidence for an end to the more temperate and stable conditions of the early to middle middle Miocene. A major plunge in temperatures occurred between 8.5 and 7.4 Ma, and during the remainder of the late Miocene and Pliocene, from 7.4 to 2.65 Ma, subpolar conditions prevailed. Silicoflagellates disappeared, except for sporadic occurrences, at 2.64 Ma with the beginning of dominant glacial sedimentation. Biogenic opal is absent in sediments younger than 0.76 Ma, indicating the dominance of glacial conditions with extensive sea ice.
Abstract:
This study tests the hypothesis that the late Miocene to early Pliocene constriction and closure of the Central American Seaway (CAS), connecting the tropical Atlantic and the East Equatorial Pacific (EEP), caused a decrease in productivity in the Caribbean, due to decreased coastal upwelling and an end to the connection with high-productivity tropical Pacific waters. The present study compared paleoceanographic proxies for the interval between 8.3 and 2.5 Ma in 47 samples from south Caribbean ODP Site 999 with published data on EEP DSDP Site 503. Proxies for Site 999 include the relative abundance of benthic foraminiferal species representing bottom current velocity and the flux of organic matter to the sea floor, the ratio of infaunal/epifaunal benthic foraminiferal species and benthic foraminifer accumulation rates (BFARs). In addition, we calculated % resistant planktic foraminifer species and used the previously published % sand fraction and benthic carbon isotope values from Site 999. During early shoaling of the Isthmus (8.3-7.9 Ma) the Caribbean was under mesotrophic conditions, with little ventilation of bottom waters and low current velocity. The pre-closure interval (7.6-4.2 Ma) saw enhanced seasonal input of phytodetritus with even more reduced ventilation, and enhanced dissolution between 6.8 and 4.8 Ma. During the post-closure interval (4.2-2.5 Ma) in the Caribbean, paleoproductivity decreased, current velocity was reduced, and ventilation improved, while the seasonality of phytodetrital input was reduced dramatically, coinciding with the establishment of the Atlantic-Pacific salinity contrast at 4.2 Ma. Our data support the hypothesis that late Miocene constriction of the CAS at 7.9 Ma and its closure at 4.2 Ma caused a gradual decrease in paleoproductivity in the Caribbean, consistent with decreased current velocity and seasonality of the phytodetrital input.
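For readers unfamiliar with the BFAR proxy, accumulation rates of this kind are conventionally derived from census counts combined with the physical properties of the sediment. A generic formulation (stated here as background; the exact parameterization applied at Site 999 may differ) is:

\[
\mathrm{BFAR}\;\left[\tfrac{\text{specimens}}{\mathrm{cm^{2}\,kyr}}\right] \;=\; N_{\mathrm{benthic}}\;\left[\tfrac{\text{specimens}}{\mathrm{g}}\right]\times \mathrm{DBD}\;\left[\tfrac{\mathrm{g}}{\mathrm{cm^{3}}}\right]\times \mathrm{LSR}\;\left[\tfrac{\mathrm{cm}}{\mathrm{kyr}}\right]
\]

where N_benthic is the number of benthic foraminifers per gram of dry sediment, DBD the dry bulk density and LSR the linear sedimentation rate.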
Abstract:
Introduction: Today, many countries, whether developed or developing, are trying to promote decentralization. According to Manor, quoting Nickson’s argument, decentralization stems from the necessity to strengthen local governments as a proxy of civil society in order to fill the yawning gap between the state and civil society (Manor [1999]: 30). With the end of the Cold War following the collapse of the Soviet Union rendering the cause of the “leadership of the central government to counter communism” meaningless, Manor points out, it has become increasingly difficult to respond flexibly to changes in society under a centralized system. What benefits, then, can be expected from decentralization? Litvack, Ahmad and Bird cite four points: attainment of allocative efficiency in the face of different local preferences for local public goods; improvement of government competitiveness; realization of good governance; and enhancement of the legitimacy and sustainability of heterogeneous national states (Litvack, Ahmad & Bird [1998]: 5). All of these contribute to reducing the economic and social costs of a central government unable to respond to changes in society and to enhancing the efficiency of state administration through the delegation of authority to local governments. Why did Indonesia attempt decentralization? As Maryanov recognizes, the reasons for the implementation of decentralization in Indonesia have never been explicitly presented (Maryanov [1958]: 17). But there was strong momentum toward building a democratic state in Indonesia at the time of independence, and as indicated by the provisions of Article 18 of the 1945 Constitution, there was a tendency in Indonesia from the beginning to debate decentralization in association with democratization. That said, the debate about democratization was fairly abstract, and its main points were to ease tensions, quiet complaints, satisfy political forces and thus stabilize the process of government (Maryanov [1958]: 26-27). What triggered decentralization in Indonesia in earnest, of course, was the collapse of the Soeharto regime in May 1998. The Soeharto regime, regarded as the epitome of the centralization of power, had become incapable of effectively dealing with problems in the administration of the state and in development administration. Moreover, the post-Soeharto era of “reform (reformasi)” demanded the complete erasure of the Soeharto image, and the counterpoint to the centralization of power was decentralization. The Soeharto regime that ruled Indonesia for 32 years was established in 1966 under the banner of “anti-communism.” The end of the Cold War structure in the late 1980s undermined the legitimacy of the centralization of power to counter communism claimed by the Soeharto regime. The factor for decentralization cited by Manor is applicable here. Decentralization can be interpreted to mean not only the reversal of a centralized system of government due to its inability to respond to changes in society, as Manor points out, but also the participation of local governments in the process of nation state building through a more positive transfer of power (democratic decentralization) and in a coordinated pursuit, together with the central government, of a new shape of the state. However, it is also true that a variety of problems have been emerging in the process of implementing decentralization in Indonesia.
This paper discusses the relationship between decentralization and the formation of the nation state with an awareness of the problems and issues described above. Section 1 retraces the history of decentralization by examining laws and regulations for local administration and how they were actually implemented, or not. Section 2 focuses on the relationships among the central government, local governments, foreign companies and other actors in the contest over the distribution of profits from the exploitation of natural resources, and examines how the ulterior motives of these actors and the amplification of mutual mistrust spawned intense conflicts that, in extreme cases, grew into separatist and independence movements. Section 3 considers the merits and demerits, at this stage, of the decentralization implemented since 2001 and sheds light on the significance of decentralization in terms of nation state building. Finally, Section 4 attempts to review decentralization as an “opportunity to learn by doing” for the central and local governments in the process of nation state building. In the context of decentralization in Indonesia, deconcentration (dekonsentrasi), decentralization (desentralisasi) and support assignments (tugas pembantuan; medebewind, a Dutch word, was used previously) are defined as follows. Dekonsentrasi means that the central government puts a local office of its own, or an outpost agency, in charge of implementing one of its services without delegating the administrative authority over that service. The outpost agency carries out the service as instructed by the central government. A head of a local government, when acting for the central government, gets involved in the process of dekonsentrasi. Desentralisasi, meanwhile, occurs when the central government cedes the administrative authority over a particular service to local governments. Under desentralisasi, local governments can undertake the particular service at their own discretion, and the central government, after the delegation of authority, cannot interfere with how local governments handle that service. Tugas pembantuan occurs when the central government makes local governments or villages, or local governments make villages, undertake a particular service. In this case, the central government, or the local governments, provides the necessary funding, equipment and materials, and officials of local governments and villages undertake the service under the supervision and guidance of the central or local governments. Tugas pembantuan is maintained until local governments and villages become capable of undertaking that particular service on their own.
Abstract:
On September 3, 1954, Chinese artillery began shelling Quemoy (Jinmen), one of the Kuomintang-held offshore islands, setting off the first Taiwan Strait Crisis. This paper focuses on the crisis and analyzes the following three questions: (1) What was the policy the U.S. took towards the Republic of China (R.O.C.), especially towards the offshore islands, to try to end the Taiwan Strait Crisis? (2) What were the intentions of the U.S. government in trying to end the Taiwan Strait Crisis? And (3) how should the U.S. policy towards the R.O.C. that led to solving the Taiwan Strait Crisis be positioned in the history of Sino-American relations? Through analysis of these questions, this study concludes that the position the U.S. took to bring an end to the crisis, one which prevented China from “liberating Taiwan” and the Kuomintang from “attacking the mainland,” brought about the existence of a de facto “two-China” situation.
Abstract:
Las limitaciones de las tecnologías de red actuales, identificadas en la Agencia de Proyectos de Investigación Avanzados para la Defensa (DARPA) durante 1995, han originado recientemente una propuesta de modelo de red denominado Redes Activas. En este modelo, los nodos proporcionan un entorno de ejecución sobre el que se ejecuta el código asociado a cada paquete. El objetivo es disponer de una tecnología de red que permita que nuevos servicios de red sean desarrollados e instalados rápidamente sin modificar los nodos de la red. Un servicio de red que se puede beneficiar de esta tecnología es la transmisión de datos en multipunto con diferentes grados de fiabilidad. Las propuestas actuales de servicios de multipunto fiable proporcionan una solución específica para cada clase de aplicaciones, y los protocolos existentes extremo a extremo sufren de limitaciones técnicas relacionadas con una fiabilidad limitada, y con la ausencia de mecanismos de control de congestión efectivos. Esta tesis realiza propuestas originales conducentes a solucionar parte de las limitaciones actuales en el ámbito de Redes Activas y multipunto fiable con control de congestión. En primer lugar, se especificará un servicio genérico de multipunto fiable que, basándose en los requisitos de una serie de aplicaciones consideradas relevantes, proporcione diferentes clases de sesiones y grados de fiabilidad. Partiendo de la definición del servicio genérico especificado, se diseñará un protocolo de comunicaciones sobre la tecnología de Redes Activas que proporcione dicho servicio. El protocolo diseñado estará dotado de un mecanismo de control de congestión para que la fuente ajuste dinámicamente el tráfico inyectado a las condiciones de carga de la red. En esta tesis se pretende también profundizar en el estudio y análisis de la tecnología de Redes Activas, experimentando con dicha tecnología para proporcionar una realimentación a sus diseñadores. Dicha experimentación se realizará en tres ámbitos: el de los servicios y protocolos que puede soportar, el del modelo y arquitectura de las Redes Activas y el de las plataformas de ejecución disponibles. Como aportación adicional de este trabajo, se validarán los objetivos anteriores mediante una implementación piloto de las entidades de protocolo y de su interfaz de servicio sobre uno de los entornos de ejecución disponibles.
Abstract: The limitations of current networking technologies, identified by the Defense Advanced Research Projects Agency (DARPA) during 1995, have led to the recent proposal of a new network model called Active Networks. In this model, the nodes provide an execution environment over which the code used to process each packet is executed. The objective is a network technology that allows new network services to be designed and deployed quickly without requiring the modification of the network nodes. One network service that could benefit from this technology is the transmission of multicast data with different degrees of loss tolerance. The current proposals for reliable multicast services provide specific solutions for each application class, and existing end-to-end protocols suffer from technical drawbacks related to limited reliability and the lack of an effective congestion control mechanism. This thesis contains original proposals that aim to solve part of the current drawbacks in the scope of Active Networks and reliable multicast with congestion control. Firstly, a generic reliable multicast network service will be specified.
This service will be designed from the requirements of a relevant set of applications, and will provide different session classes and different types of reliability. Then, a network protocol based on Active Network technology will be designed such that it provides the specified network service. This protocol will incorporate a congestion control mechanism capable of performing an automatic adjustment of the traffic injected by the source to the available network capacity. This thesis will also contribute to a deeper study and analysis of Active Network technology, by experimenting with the technology in order to provide feedback to its designers. This experimentation will be done attending to three different scopes: support of Active Network for services and protocols, Active Network model and architecture, and currently available Active Network execution environments. As an additional contribution of this work, the previous objectives will be validated through a prototype implementation of the protocol entities and the service interface based on one of the current execution environments.
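As a rough illustration of the kind of source-side congestion control the protocol is described as incorporating, the sketch below shows a generic AIMD-style rate adjustment driven by receiver feedback. It is a minimal sketch under assumed names (RateController, on_feedback) and parameter values, not the mechanism actually specified in the thesis.

```python
class RateController:
    """Generic AIMD-style source rate adjustment (illustrative only).

    The source increases its sending rate additively while no congestion
    is reported, and backs off multiplicatively when receivers (or active
    nodes) signal congestion.
    """

    def __init__(self, initial_rate_kbps=64.0, min_rate_kbps=16.0,
                 max_rate_kbps=2048.0, increase_step_kbps=16.0,
                 decrease_factor=0.5):
        self.rate = initial_rate_kbps
        self.min_rate = min_rate_kbps
        self.max_rate = max_rate_kbps
        self.step = increase_step_kbps
        self.beta = decrease_factor

    def on_feedback(self, congestion_signalled: bool) -> float:
        """Update and return the sending rate after one feedback round."""
        if congestion_signalled:
            self.rate = max(self.min_rate, self.rate * self.beta)
        else:
            self.rate = min(self.max_rate, self.rate + self.step)
        return self.rate


if __name__ == "__main__":
    ctrl = RateController()
    # Simulated feedback rounds: congestion appears in rounds 3 and 4.
    for rnd, congested in enumerate([False, False, False, True, True, False]):
        print(f"round {rnd}: rate = {ctrl.on_feedback(congested):.1f} kbps")
```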
Abstract:
Distributed real-time embedded systems are becoming increasingly important to society. More demands will be made on them and greater reliance will be placed on the delivery of their services. A relevant subset of them is high-integrity or hard real-time systems, where failure can cause loss of life, environmental harm, or significant financial loss. Additionally, the evolution of communication networks and paradigms, as well as the need for greater processing power and fault tolerance, motivated the interconnection of electronic devices; many of these communication links can transfer data at high speed. The concept of distributed systems emerged as systems where different parts are executed on several nodes that interact with each other via a communication network. Java’s popularity, facilities and platform independence have made it an interesting language for the real-time and embedded community. This was the motivation for the development of RTSJ (Real-Time Specification for Java), which is a language extension intended to allow the development of real-time systems. The use of Java in the development of high-integrity systems requires strict development and testing techniques. However, RTSJ includes a number of language features that are forbidden in such systems. In the context of the HIJA project, the HRTJ (Hard Real-Time Java) profile was developed to define a robust subset of the language that is amenable to static analysis for high-integrity system certification. Currently, a specification under the Java community process (JSR-302) is being developed. Its purpose is to define those capabilities needed to create safety-critical applications with Java technology, called Safety Critical Java (SCJ). However, neither RTSJ nor its profiles provide facilities to develop distributed real-time applications. This is an important issue, as most current and future systems will be distributed. The Distributed RTSJ (DRTSJ) Expert Group was created under the Java community process (JSR-50) in order to define appropriate abstractions to overcome this problem. Currently there is no formal specification. The aim of this thesis is to develop a communication middleware that is suitable for the development of distributed hard real-time systems in Java, based on the integration between the RMI (Remote Method Invocation) model and the HRTJ profile. It has been designed and implemented keeping in mind the main requirements, such as predictability and reliability of the timing behavior and of the resource usage. The design starts with the definition of a computational model which identifies, among other things: the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to implement synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates the resource allocation from the execution itself by defining two phases and a specific threading mechanism that guarantees a suitable timing behavior. It also includes mechanisms to monitor the functional and the timing behavior. It provides independence from the network protocol by defining a network interface and modules. The JRMP protocol was modified to include two phases, non-functional parameters, and message size optimizations.
Although serialization is one of the fundamental operations to ensure proper data transmission, current implementations are not suitable for hard real-time systems and there are no alternatives. This thesis proposes a predictable serialization that introduces a new compiler to generate optimized code according to the computational model. The proposed solution has the advantage of allowing us to schedule the communications and to adjust the memory usage at compilation time. In order to validate the design and the implementation, a demanding validation process was carried out with emphasis on the functional behavior, the memory usage, the processor usage (the end-to-end response time and the response time in each functional block) and the network usage (real consumption compared with the calculated consumption). The results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests show that the design and the prototype are reliable for industrial applications with strict timing requirements.
Los sistemas empotrados y distribuidos de tiempo real son cada vez más importantes para la sociedad. Su demanda aumenta y cada vez más dependemos de los servicios que proporcionan. Los sistemas de alta integridad constituyen un subconjunto de gran importancia. Se caracterizan porque un fallo en su funcionamiento puede causar pérdida de vidas humanas, daños en el medio ambiente o cuantiosas pérdidas económicas. La necesidad de satisfacer requisitos temporales estrictos hace más complejo su desarrollo. Mientras los sistemas empotrados se sigan expandiendo en nuestra sociedad, es necesario garantizar un coste de desarrollo ajustado mediante el uso de técnicas adecuadas en su diseño, mantenimiento y certificación. En concreto, se requiere una tecnología flexible e independiente del hardware. La evolución de las redes y paradigmas de comunicación, así como la necesidad de mayor potencia de cómputo y de tolerancia a fallos, ha motivado la interconexión de dispositivos electrónicos. Los mecanismos de comunicación permiten la transferencia de datos con alta velocidad de transmisión. En este contexto, el concepto de sistema distribuido ha emergido como sistemas donde sus componentes se ejecutan en varios nodos en paralelo y que interactúan entre ellos mediante redes de comunicaciones. Un concepto interesante son los sistemas de tiempo real neutrales respecto a la plataforma de ejecución. Se caracterizan por la falta de conocimiento de esta plataforma durante su diseño. Esta propiedad es relevante, porque conviene que se ejecuten en la mayor variedad de arquitecturas, tienen una vida media mayor de diez años y el lugar donde se ejecutan puede variar. El lenguaje de programación Java es una buena base para el desarrollo de este tipo de sistemas. Por este motivo se ha creado RTSJ (Real-Time Specification for Java), que es una extensión del lenguaje para permitir el desarrollo de sistemas de tiempo real. Sin embargo, RTSJ no proporciona facilidades para el desarrollo de aplicaciones distribuidas de tiempo real. Es una limitación importante dado que la mayoría de los actuales y futuros sistemas serán distribuidos. El grupo DRTSJ (Distributed RTSJ) fue creado bajo el proceso de la comunidad de Java (JSR-50) con el fin de definir las abstracciones que aborden dicha limitación, pero en la actualidad aún no existe una especificación formal.
El objetivo de esta tesis es desarrollar un middleware de comunicaciones para el desarrollo de sistemas distribuidos de tiempo real en Java, basado en la integración entre el modelo de RMI (Remote Method Invocation) y el perfil HRTJ. Ha sido diseñado e implementado teniendo en cuenta los requisitos principales, como la predecibilidad y la confiabilidad del comportamiento temporal y el uso de recursos. El diseño parte de la definición de un modelo computacional el cual identifica entre otras cosas: el modelo de comunicaciones, los protocolos de red subyacentes más adecuados, el modelo de análisis, y un subconjunto de Java para sistemas de tiempo real crítico. En el diseño, las referencias remotas son el medio básico para construcción de aplicaciones distribuidas las cuales son asociadas a todos los parámetros no funcionales y los recursos necesarios para la ejecución de invocaciones remotas síncronas o asíncronas con atributos de tiempo real. El middleware propuesto separa la asignación de recursos de la propia ejecución definiendo dos fases y un mecanismo de hebras especifico que garantiza un comportamiento temporal adecuado. Además se ha incluido mecanismos para supervisar el comportamiento funcional y temporal. Se ha buscado independencia del protocolo de red definiendo una interfaz de red y módulos específicos. También se ha modificado el protocolo JRMP para incluir diferentes fases, parámetros no funcionales y optimizaciones de los tamaños de los mensajes. Aunque la serialización es una de las operaciones fundamentales para asegurar la adecuada transmisión de datos, las actuales implementaciones no son adecuadas para sistemas críticos y no hay alternativas. Este trabajo propone una serialización predecible que ha implicado el desarrollo de un nuevo compilador para la generación de código optimizado acorde al modelo computacional. La solución propuesta tiene la ventaja que en tiempo de compilación nos permite planificar las comunicaciones y ajustar el uso de memoria. Con el objetivo de validar el diseño e implementación se ha llevado a cabo un exigente proceso de validación con énfasis en: el comportamiento funcional, el uso de memoria, el uso del procesador (tiempo de respuesta de extremo a extremo y en cada uno de los bloques funcionales) y el uso de la red (consumo real conforme al estimado). Los buenos resultados obtenidos en una aplicación industrial desarrollada por Thales Avionics (un sistema de gestión de vuelo) y en las pruebas exhaustivas han demostrado que el diseño y el prototipo son fiables para aplicaciones industriales con estrictos requisitos temporales.
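The separation between resource allocation and execution that the middleware describes can be pictured with a small language-neutral sketch, written here in Python purely for brevity; the middleware itself is Java/RMI-based, and every name below (RemoteReference, bind, invoke, the parameters) is hypothetical, not the thesis API.

```python
import time

class RemoteReference:
    """Illustrative remote reference carrying non-functional parameters.

    Phase 1 (bind): resources for the invocation (buffers, a server
    thread, network bandwidth) are reserved up front.
    Phase 2 (invoke): the call runs using only the resources reserved in
    phase 1, so its timing behaviour stays predictable.
    """

    def __init__(self, endpoint, deadline_ms, max_message_bytes):
        self.endpoint = endpoint
        self.deadline_ms = deadline_ms
        self.max_message_bytes = max_message_bytes
        self._bound = False

    def bind(self):
        # Phase 1: admission test and resource reservation (sketched).
        self._bound = True

    def invoke(self, payload: bytes) -> bytes:
        # Phase 2: execution within the reserved budget.
        if not self._bound:
            raise RuntimeError("invoke() called before bind()")
        if len(payload) > self.max_message_bytes:
            raise ValueError("payload exceeds reserved message size")
        start = time.monotonic()
        result = payload  # placeholder for the actual remote call
        elapsed_ms = (time.monotonic() - start) * 1000.0
        if elapsed_ms > self.deadline_ms:
            raise TimeoutError("end-to-end deadline missed")
        return result


if __name__ == "__main__":
    ref = RemoteReference("fms-server", deadline_ms=10.0, max_message_bytes=512)
    ref.bind()               # resource allocation phase
    print(ref.invoke(b"x"))  # execution phase
```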
Abstract:
The emerging use of real-time 3D-based multimedia applications imposes strict quality of service (QoS) requirements on both access and core networks. These requirements and their impact on providing end-to-end 3D videoconferencing services have been studied within the Spanish-funded VISION project, where different scenarios were implemented showing an agile stereoscopic video call that might be offered to the general public in the near future. In view of the requirements, we designed an integrated access and core converged network architecture which provides the requested QoS to end-to-end IP sessions. Novel functional blocks are proposed to control core optical networks, the functionality of the standard ones is redefined, and the signaling is improved to better meet the requirements of future multimedia services. An experimental test-bed to assess the feasibility of the solution was also deployed. In this test-bed, the set-up and release of end-to-end sessions meeting specific QoS requirements are shown, and the impact of QoS degradation in terms of user-perceived quality degradation is quantified. In addition, scalability results show that the proposed signaling architecture is able to cope with a large number of requests while introducing almost negligible delay.
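As a simplified illustration of checking whether an end-to-end session can meet its QoS targets across concatenated access and core segments, the sketch below sums per-segment delays and combines per-segment losses before admitting a session. All names and figures are hypothetical; they are not taken from the VISION test-bed.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    name: str
    delay_ms: float   # one-way delay contribution of this segment
    loss: float       # packet loss probability on this segment (0..1)

def admit_session(segments, max_delay_ms, max_loss):
    """Return True if the concatenated path meets the QoS targets."""
    total_delay = sum(s.delay_ms for s in segments)
    # Losses on independent segments combine as 1 - prod(1 - p_i).
    survival = 1.0
    for s in segments:
        survival *= (1.0 - s.loss)
    total_loss = 1.0 - survival
    return total_delay <= max_delay_ms and total_loss <= max_loss

if __name__ == "__main__":
    path = [Segment("access-A", 8.0, 0.001),
            Segment("core-optical", 12.0, 0.0001),
            Segment("access-B", 9.0, 0.001)]
    # Hypothetical targets for an interactive stereoscopic video call.
    print(admit_session(path, max_delay_ms=150.0, max_loss=0.01))
```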
Abstract:
Purpose: In this work, we present the analysis, design and optimization of an experimental device recently developed in the UK, called the 'GP' Thrombus Aspiration Device (GPTAD). This device has been designed to remove blood clots without the need to make contact with the clot itself, thereby potentially reducing the risk of problems such as downstream embolisation. Method: To obtain the minimum pressure necessary to extract the clot and to optimize the device, we have simulated the performance of the GPTAD, analysing the effects of resistances, compliances and inertances. We model a range of diameters for the GPTAD, considering different forces of adhesion of the blood clot to the artery wall and different lengths of blood clot. In each case we determine the optimum pressure required to extract the blood clot from the artery using the GPTAD, which is attached at its proximal end to a suction pump. Result: We then compare the results of our mathematical modelling to measurements made in the laboratory using plastic tube models of arteries of comparable diameter. We use abattoir porcine blood clots that are extracted using the GPTAD. The suction pressures required for such clot extraction in the plastic tube models compare favourably with those predicted by the mathematical modelling. Discussion & Conclusion: We conclude therefore that mathematical modelling is a useful technique for predicting the performance of the GPTAD and may potentially be used in optimising the design of the device.
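The resistance-compliance-inertance description used in this kind of modelling is the usual hydraulic analogue of an electrical RLC circuit. In a generic lumped-parameter form (an illustration of the approach, not the authors' exact equations), the pressure drop driving a flow Q(t) through the catheter-clot system can be written as:

\[
\Delta P(t) = R\,Q(t) + L\,\frac{dQ(t)}{dt} + \frac{1}{C}\int_{0}^{t} Q(\tau)\,d\tau
\]

where R is the viscous resistance, L the inertance of the blood column and C the compliance of the vessel and tubing; the minimum suction pressure must additionally overcome the adhesion force holding the clot to the arterial wall.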
Abstract:
En el presente proyecto se realiza un estudio para la construcción de una cabecera de televisión por cable. Se trata de un proyecto puramente teórico en el que se especifica cada una de las partes que forman una cabecera de televisión y cómo funciona cada una de ellas. En un principio, se sitúa la cabecera de televisión dentro de una plataforma general de transmisión, para indicar sus funciones. Posteriormente, se analizan las distintas tecnologías que implementan esta transmisión y los estándares DVB que las rigen, como son DVB-C y DVB-C2 para las transmisiones por cable propiamente dichas y DVB-IPTV para las transmisiones por IP, para elegir cuál de las opciones es la más acertada y adaptar la cabecera de televisión a la misma. En cuanto al desarrollo teórico de la cabecera, se estudia el proceso que sigue la señal dentro de la misma, desde la recepción de los canales hasta el envío de los mismos hacia los hogares de los distintos usuarios, pasando previamente por las etapas de codificación y multiplexación. Además, se especifican los equipos necesarios para el correcto funcionamiento de cada una de las etapas. En la recepción, se reciben los canales por cada uno de los medios posibles (satélite, cable, TDT y estudio), que son demodulados y decodificados por el receptor. A continuación, son codificados (en este proyecto en MPEG-2 o H.264) para posteriormente ser multiplexados. En la etapa de multiplexación, se forma una trama Transport Stream por cada canal, compuesta por su flujo de video, audio y datos. Estos datos son una serie de tablas (SI y PSI) que guían al set-top-box del usuario en la decodificación de los programas (tablas PSI) y que proporcionan información de cada uno de los mismos y del sistema (tablas SI). Con estas últimas el decodificador forma la EPG. Posteriormente, se realiza una segunda multiplexación, de forma que se incluyen múltiples programas en una sola trama Transport Stream (MPTS). Estos MPTS son los flujos que les son enviados a cada uno de los usuarios. El mecanismo de transmisión es de dos tipos en función del contenido y los destinatarios: multicast o unicast. Por último, se especifica el funcionamiento básico de un sistema de acceso condicional, así como su estructura, el cual es imprescindible en todas las cabeceras para asegurar que cada usuario solo visualiza los contenidos contratados.
This project presents a study for the construction of a cable television head-end. It is a purely theoretical project that specifies each of the parts that make up a television head-end and how each of them works. First, the television head-end is placed within a general transmission platform in order to indicate its functions. Then, the different technologies that implement this transmission and the DVB standards that govern them are analyzed, from the standards for cable transmission proper (DVB-C and DVB-C2) to the standard for IP transmission (DVB-IPTV), in order to choose the most suitable option and adapt the television head-end to it. Regarding the theoretical development of the head-end, the path followed by the signal inside it is studied, from the reception of the channels to their delivery to the homes of the different users, passing first through the coding and multiplexing stages. In addition, the equipment required for the correct operation of each stage is specified.
At reception, the channels are received from each of the possible sources (satellite, cable, DTT and studio), and are demodulated and decoded by the receiver. They are then encoded (in this project, in MPEG-2 or H.264) before being multiplexed. In the multiplexing stage, a Transport Stream is formed for each channel, composed of its video, audio and data streams. The data consist of a series of tables (SI and PSI): the PSI tables guide the user's set-top-box in decoding the programs, and the SI tables provide information about each program and about the system; from the latter the decoder builds the EPG. A second multiplexing is then performed, so that multiple programs are included in a single Transport Stream (MPTS). These MPTS are the streams that are sent to the users. Two types of transmission are used depending on the content and the recipients: unicast (VoD channels) and multicast (live channels). Finally, the basic operation and structure of a conditional access system are specified; such a system is indispensable in every head-end to ensure that each user views only the content they have contracted.
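To make the single-program to multi-program multiplexing step concrete, the sketch below interleaves 188-byte TS packets from several single-program streams into one MPTS in simple round-robin order. It is a toy illustration only: a real head-end multiplexer also remaps PIDs, regenerates the PSI/SI tables and respects bit-rate budgets.

```python
from itertools import zip_longest

TS_PACKET_SIZE = 188  # bytes, fixed by the MPEG-2 Transport Stream format

def split_packets(spts: bytes):
    """Split a single-program Transport Stream into 188-byte packets."""
    return [spts[i:i + TS_PACKET_SIZE]
            for i in range(0, len(spts), TS_PACKET_SIZE)]

def build_mpts(spts_list):
    """Round-robin interleave packets of several SPTS into one MPTS.

    Toy model: PID remapping and PAT/PMT/SI regeneration, which a real
    multiplexer must perform, are deliberately omitted.
    """
    packet_lists = [split_packets(s) for s in spts_list]
    mpts = bytearray()
    for group in zip_longest(*packet_lists):
        for packet in group:
            if packet:
                mpts.extend(packet)
    return bytes(mpts)

if __name__ == "__main__":
    # Two dummy programs of 3 and 2 packets respectively (sync byte 0x47).
    prog1 = bytes([0x47]) * (TS_PACKET_SIZE * 3)
    prog2 = bytes([0x47]) * (TS_PACKET_SIZE * 2)
    print(len(build_mpts([prog1, prog2])) // TS_PACKET_SIZE, "packets in MPTS")
```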
Abstract:
La termografía infrarroja (TI) es una técnica no invasiva y de bajo coste que permite, con el simple acto de tomar una fotografía, el registro sin contacto de la energía que irradia el cuerpo humano (Akimov & Son’kin, 2011, Merla et al., 2005, Ng et al., 2009, Costello et al., 2012, Hildebrandt et al., 2010). Esta técnica comenzó a utilizarse en el ámbito médico en los años 60, pero debido a los malos resultados como herramienta diagnóstica y la falta de protocolos estandarizados (Head & Elliot, 2002), ésta se dejó de utilizar en detrimento de otras técnicas más precisas a nivel diagnóstico. No obstante, las mejoras tecnológicas de la TI en los últimos años han hecho posible un resurgimiento de la misma (Jiang et al., 2005, Vainer et al., 2005, Cheng et al., 2009, Spalding et al., 2011, Skala et al., 2012), abriendo el camino a nuevas aplicaciones no sólo centradas en el uso diagnóstico. Entre las nuevas aplicaciones, destacamos las que se desarrollan en el ámbito de la actividad física y el deporte, donde recientemente se ha demostrado que los nuevos avances con imágenes de alta resolución pueden proporcionar información muy interesante sobre el complejo sistema de termorregulación humana (Hildebrandt et al., 2010). Entre las nuevas aplicaciones destacan: la cuantificación de la asimilación de la carga de trabajo físico (Čoh & Širok, 2007), la valoración de la condición física (Chudecka et al., 2010, 2012, Akimov et al., 2009, 2011, Merla et al., 2010), la prevención y seguimiento de lesiones (Hildebrandt et al., 2010, 2012, Badža et al., 2012, Gómez Carmona, 2012) e incluso la detección de agujetas (Al-Nakhli et al., 2012). Bajo estas circunstancias, se acusa cada vez más la necesidad de ampliar el conocimiento sobre los factores que influyen en la aplicación de la TI en los seres humanos, así como la descripción de la respuesta de la temperatura de la piel (TP) en condiciones normales, y bajo la influencia de los diferentes tipos de ejercicio. Por consiguiente, este estudio presenta en una primera parte una revisión bibliográfica sobre los factores que afectan al uso de la TI en los seres humanos y una propuesta de clasificación de los mismos. Hemos analizado la fiabilidad del software Termotracker, así como su reproducibilidad de la temperatura de la piel en sujetos jóvenes, sanos y con normopeso. Finalmente, se analizó la respuesta térmica de la piel antes de un entrenamiento de resistencia, velocidad y fuerza, inmediatamente después y durante un período de recuperación de 8 horas. En cuanto a la revisión bibliográfica, hemos propuesto una clasificación para organizar los factores en tres grupos principales: los factores ambientales, individuales y técnicos. El análisis y descripción de estas influencias deben representar la base de nuevas investigaciones con el fin de utilizar la TI en las mejores condiciones. En cuanto a la reproducibilidad, los resultados mostraron valores excelentes para imágenes consecutivas, aunque la reproducibilidad de la TP disminuyó ligeramente con imágenes separadas por 24 horas, sobre todo en las zonas con valores más fríos (es decir, zonas distales y articulaciones). Las asimetrías térmicas (que normalmente se utilizan para seguir la evolución de zonas sobrecargadas o lesionadas) también mostraron excelentes resultados pero, en este caso, con mejores valores para las articulaciones y el zonas centrales (es decir, rodillas, tobillos, dorsales y pectorales) que las Zonas de Interés (ZDI) con valores medios más calientes (como los muslos e isquiotibiales). 
Los resultados de fiabilidad del software Termotracker fueron excelentes en todas las condiciones y parámetros. En el caso del estudio sobre los efectos de los entrenamientos de velocidad, resistencia y fuerza en la TP, los resultados muestran respuestas específicas según el tipo de entrenamiento, zona de interés, el momento de la evaluación y la función de las zonas analizadas. Los resultados mostraron que la mayoría de las ZDI musculares se mantuvieron significativamente más calientes 8 horas después del entrenamiento, lo que indica que el efecto del ejercicio sobre la TP perdura por lo menos 8 horas en la mayoría de zonas analizadas. La TI podría ser útil para cuantificar la asimilación y recuperación física después de una carga física de trabajo. Estos resultados podrían ser muy útiles para entender mejor el complejo sistema de termorregulación humano, y por lo tanto, para utilizar la TI de una manera más objetiva, precisa y profesional con visos a mejorar las nuevas aplicaciones termográficas en el sector de la actividad física y el deporte.
Infrared Thermography (IRT) is a safe, non-invasive and low-cost technique that allows the rapid and non-contact recording of the irradiated energy released from the body (Akimov & Son’kin, 2011; Merla et al., 2005; Ng et al., 2009; Costello et al., 2012; Hildebrandt et al., 2010). It has been used since the early 1960s, but due to poor results as a diagnostic tool and a lack of methodological standards and quality assurance (Head et al., 2002), it was abandoned by the medical field. Nevertheless, the technological improvements of IRT in recent years have made a resurgence of this technique possible (Jiang et al., 2005; Vainer et al., 2005; Cheng et al., 2009; Spalding et al., 2011; Skala et al., 2012), paving the way for new applications not focused solely on diagnostic uses. Among the new applications, we highlight those in the physical activity and sport fields, where it has recently been proven that high-resolution thermal images can provide interesting information about the complex thermoregulation system of the body (Hildebrandt et al., 2010), information that can be used for: training workload quantification (Čoh & Širok, 2007), fitness and performance assessment (Chudecka et al., 2010, 2012; Akimov et al., 2009, 2011; Merla et al., 2010; Arfaoui et al., 2012), prevention and monitoring of injuries (Hildebrandt et al., 2010, 2012; Badža et al., 2012; Gómez Carmona, 2012) and even the detection of Delayed Onset Muscle Soreness (DOMS) (Al-Nakhli et al., 2012). In this context, there is a clear need to broaden the knowledge about the factors influencing the application of IRT on humans, and to better explore and describe the thermal response of Skin Temperature (Tsk) under normal conditions and under the influence of different types of exercise. Consequently, this study presents a literature review of the factors affecting the application of IRT on human beings and a classification proposal for them. We analysed the reliability of the software Termotracker®, as well as the reproducibility of Tsk measurements on young, healthy and normal-weight subjects. Finally, we examined the Tsk thermal response before an endurance, speed and strength training session, immediately after, and during an 8-hour recovery period. Concerning the literature review, we proposed a classification that organises the factors into three main groups: environmental, individual and technical factors.
Thus, better exploring and describing these influencing factors should form the basis of further investigations in order to use IRT in the best and most appropriate conditions and to improve its accuracy and results. Regarding the reproducibility results, the outcomes showed excellent values for consecutive images, but the reproducibility of Tsk decreased slightly with time, above all in the colder Regions of Interest (ROI) (i.e. distal and joint areas). The side-to-side differences (ΔT) (normally used to follow the evolution of injured or overloaded ROI) also showed highly accurate results, but in this case with better values for joints and central ROI (i.e. knee, ankle, dorsal and pectoral areas) than for the hottest muscle ROI (such as the thigh or hamstrings). The reliability results of the IRT software Termotracker® were excellent in all conditions and parameters. In the part of the study about the effects of aerobic, speed and strength training on Tsk, the results demonstrated specific responses depending on the type of training, the ROI, the moment of the assessment and the function of the considered ROI. The results showed that most muscular ROI remained significantly warmer 8 hours after the training, indicating that the effect of exercise on Tsk lasts at least 8 hours in most ROI, and that IRT could help to quantify the recovery status of the athlete as a workload assimilation indicator. These results could be very useful for better understanding the complex skin thermoregulation behaviour and, therefore, for using IRT in a more objective, accurate and professional way to improve the new IRT applications in the physical activity and sport sector.
Abstract:
En la última década ha aumentado en gran medida el interés por las redes móviles Ad Hoc. La naturaleza dinámica y sin infraestructura de estas redes, exige un nuevo conjunto de algoritmos y estrategias para proporcionar un servicio de comunicación fiable extremo a extremo. En el contexto de las redes móviles Ad Hoc, el encaminamiento surge como una de las áreas más interesantes para transmitir información desde una fuente hasta un destino, con la calidad de servicio de extremo a extremo. Debido a las restricciones inherentes a las redes móviles, los modelos de encaminamiento tradicionales sobre los que se fundamentan las redes fijas, no son aplicables a las redes móviles Ad Hoc. Como resultado, el encaminamiento en redes móviles Ad Hoc ha gozado de una gran atención durante los últimos años. Esto ha llevado al acrecentamiento de numerosos protocolos de encaminamiento, tratando de cubrir con cada uno de ellos las necesidades de los diferentes tipos de escenarios. En consecuencia, se hace imprescindible estudiar el comportamiento de estos protocolos bajo configuraciones de red variadas, con el fin de ofrecer un mejor encaminamiento respecto a los existentes. El presente trabajo de investigación muestra precisamente una solución de encaminamiento en las redes móviles Ad Hoc. Dicha solución se basa en el mejoramiento de un algoritmo de agrupamiento y la creación de un modelo de encaminamiento; es decir, un modelo que involucra la optimización de un protocolo de enrutamiento apoyado de un mecanismo de agrupación. El algoritmo mejorado, denominado GMWCA (Group Management Weighted Clustering Algorithm) y basado en el WCA (Weighted Clustering Algorithm), permite calcular el mejor número y tamaño de grupos en la red. Con esta mejora se evitan constantes reagrupaciones y que los jefes de clústeres tengan más tiempo de vida intra-clúster y por ende una estabilidad en la comunicación inter-clúster. En la tesis se detallan las ventajas de nuestro algoritmo en relación a otras propuestas bajo WCA. El protocolo de enrutamiento Ad Hoc propuesto, denominado QoS Group Cluster Based Routing Protocol (QoSG-CBRP), utiliza como estrategia el empleo de clúster y jerarquías apoyada en el algoritmo de agrupamiento. Cada clúster tiene un jefe de clúster (JC), quien administra la información de enrutamiento y la envía al destino cuando esta fuera de su área de cobertura. Para evitar que haya constantes reagrupamientos y llamados al algoritmo de agrupamiento se consideró agregarle un jefe de cluster de soporte (JCS), el que asume las funciones del JC, siempre y cuando este haya roto el enlace con los otros nodos comunes del clúster por razones de alejamiento o por desgaste de batería. Matemáticamente y a nivel de algoritmo se han demostrado las mejoras del modelo propuesto, el cual ha involucrado el mejoramiento a nivel de algoritmo de clustering y del protocolo de enrutamiento. El protocolo QoSG-CBRP, se ha implementado en la herramienta de simulación Network Simulator 2 (NS2), con la finalidad de ser comparado con el protocolo de enrutamiento jerárquico Cluster Based Routing Protocol (CBRP) y con un protocolo de enrutamiento Ad Hoc reactivo denominado Ad Hoc On Demand Distance Vector Routing (AODV). Estos protocolos fueron elegidos por ser los que mejor comportamiento presentaron dentro de sus categorías. 
Además de ofrecer un panorama general de los actuales protocolos de encaminamiento en redes Ad Hoc, este proyecto presenta un procedimiento integral para el análisis de capacidades de la propuesta del nuevo protocolo con respecto a otros, sobre redes que tienen un alto número de nodos. Estas prestaciones se miden en base al concepto de eficiencia de encaminamiento bajo parámetros de calidad de servicio (QoS), permitiendo establecer el camino más corto posible entre un nodo origen y un nodo destino. Con ese fin se han realizado simulaciones con diversos escenarios para responder a los objetivos de la tesis. Las conclusiones derivadas del análisis de los resultados permiten evaluar cualitativamente las capacidades que presenta el protocolo dentro del modelo propuesto, al mismo tiempo que avizora un atractivo panorama en líneas futuras de investigación.
ABSTRACT In the past decade, interest in mobile Ad Hoc networks has greatly increased. The dynamic, infrastructure-less nature of these networks requires a new set of algorithms and strategies to provide a reliable end-to-end communication service. In the context of mobile Ad Hoc networks, routing emerges as one of the most interesting areas for transmitting information from a source to a destination with end-to-end quality of service. Due to the constraints of mobile networks, traditional routing models that are based on fixed networks are not applicable to mobile Ad Hoc networks. As a result, routing in mobile Ad Hoc networks has received great attention in recent years. This has led to the proliferation of numerous routing protocols, each of them trying to cover the needs of different types of scenarios. Consequently, it is essential to study the behavior of these protocols under various network configurations in order to provide a better routing scheme. The present research shows precisely such a routing solution for mobile Ad Hoc networks. This solution is based on the improvement of a clustering algorithm and the creation of a routing model, i.e. a model that involves optimizing a routing protocol with the support of a grouping mechanism. The improved algorithm, called GMWCA (Group Management Weighted Clustering Algorithm) and based on the WCA (Weighted Clustering Algorithm), makes it possible to calculate the best number and size of groups in the network. With this enhancement, constant regroupings are prevented, cluster heads have longer intra-cluster lifetimes, and inter-cluster communication therefore becomes more stable. The thesis details the advantages of our algorithm in relation to other proposals based on WCA. The proposed Ad Hoc routing protocol, called QoS Group Cluster Based Routing Protocol (QoSG-CBRP), uses clusters and hierarchies supported by the clustering algorithm as its strategy. Each cluster has a cluster head (JC), which manages the routing information and forwards it towards the destination when the destination is outside its coverage area. To avoid constant regroupings and calls to the clustering algorithm, a support cluster head (JCS) was added; the JCS assumes the functions of the JC whenever the JC has lost its links with the other ordinary nodes of the cluster because it has moved away or its battery has worn down. Mathematically and at the algorithmic level, the improvements of the proposed model have been demonstrated; this has involved improvements both to the clustering algorithm and to the routing protocol.
The QoSG-CBRP protocol has been implemented in the simulation tool Network Simulator 2 (NS2) in order to be compared with the hierarchical routing protocol Cluster Based Routing Protocol (CBRP) and with the reactive Ad Hoc routing protocol Ad Hoc On Demand Distance Vector Routing (AODV). These protocols were chosen because they showed the best performance within their respective categories. In addition to providing an overview of existing routing protocols in Ad Hoc networks, this project presents a comprehensive procedure for analysing the capabilities of the proposed protocol with respect to others on networks with a high number of nodes. These capabilities are measured based on the concept of routing efficiency under quality of service (QoS) parameters, thus allowing the shortest possible path to be established between a source node and a destination node. To meet the objectives of the thesis, simulations have been performed with different scenarios. The conclusions derived from the analysis of the results make it possible to assess qualitatively the capabilities of the protocol within the proposed model, while also pointing to an attractive outlook for future lines of research.
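For context on the clustering side, the original WCA on which GMWCA builds elects cluster heads by a combined weight of degree difference, distance to neighbours, mobility and accumulated service time. A minimal sketch of that combined-weight computation is shown below; the GMWCA refinements described in the thesis are not reproduced here, and the weights and node fields are illustrative.

```python
import math

def wca_weight(node, ideal_degree=3,
               w1=0.7, w2=0.2, w3=0.05, w4=0.05):
    """Combined weight of one node under the classic WCA.

    node is a dict with:
      'position'   : (x, y) coordinates of the node itself
      'neighbours' : list of (x, y) positions of its one-hop neighbours
      'speed'      : average speed (mobility term)
      'head_time'  : cumulative time already spent as cluster head
    A lower weight means a better cluster-head candidate.
    """
    x, y = node['position']
    degree = len(node['neighbours'])
    delta = abs(degree - ideal_degree)                      # degree difference
    dist_sum = sum(math.dist((x, y), p) for p in node['neighbours'])
    return (w1 * delta + w2 * dist_sum
            + w3 * node['speed'] + w4 * node['head_time'])

if __name__ == "__main__":
    candidate = {'position': (0.0, 0.0),
                 'neighbours': [(1.0, 0.0), (0.0, 2.0), (2.0, 2.0)],
                 'speed': 1.5, 'head_time': 12.0}
    print(f"WCA weight: {wca_weight(candidate):.2f}")
```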
Abstract:
La Responsabilidad Social Corporativa (RSC) sigue constituyendo en la actualidad un área de estudio de elevado interés tanto para la comunidad académica como para los negocios en general. A pesar del gran número de investigaciones realizadas en las pasadas décadas sobre los distintos aspectos que la caracterizan, y la definición generalizada de políticas relacionadas en las compañías más importantes, existen todavía algunos asuntos clave sobre los que se plantean interrogantes fundamentales. La complejidad asociada al constructo RSC y su carácter intrínsecamente dinámico explican en parte esta afirmación. En su aplicación práctica, las dudas sobre la RSC se enfocan hoy en día hacia su implantación con carácter permanente en el día a día de las organizaciones, la relevancia estratégica de las principales iniciativas, o la posibilidad de obtención de beneficios a medio y largo plazo. Se observa de esta forma la traslación de los debates principales hacia las consecuencias más estratégicas de dichas políticas, influenciados por prestigiosos estudios académicos en los que se caracteriza la denominada RSC Estratégica (RSCE), y por las principales organizaciones de certificación de memorias anuales de RSC y sostenibilidad. En este contexto se sitúa el objeto principal de esta investigación, consistente en el diseño de un modelo de implantación de RSCE que permita no sólo identificar los factores más importantes a tener en consideración para su éxito, sino para caracterizar las potenciales formas de creación de valor que pueden surgir de la aplicación del mismo. Se argumenta la elección del tema por considerarse que los asuntos asociados a la RSC no están lo suficientemente explorados desde la visión estratégica más actual, y por constituir la creación de valor el objetivo más crítico dentro de los procesos directivos de planificación estratégica. De esta forma, se utilizan dos metodologías para destacar qué factores son esenciales en la implantación de la RSCE, con qué fines las compañías aplican esas políticas, y qué resultados obtienen como consecuencia: análisis comparativo de casos de estudio y análisis estadístico cuantitativo. Los casos de estudio analizan en profundidad políticas globales de RSCE bajo diferentes puntos de vista, para derivar conclusiones sobre los factores que facilitan u obstaculizan su implantación permanente en las organizaciones. Su desarrollo se estructura en torno a un marco conceptual de referencia obtenido a través de la revisión bibliográfica específica, y se complementa con la información primaria y secundaria de investigación. Por su parte, el análisis cuantitativo se desarrolla mediante tres técnicas exploratorias: estadística descriptiva, regresión múltiple y análisis de componentes principales. Su aplicación combinada va a posibilitar el contraste de aspectos destacados en los análisis de casos, así como la configuración final del modelo de implantación, y la expresión numérica de la creación de valor a través de la RSCE en función de las dimensiones estratégicas consideradas. 
En consecuencia, los resultados de la tesis se estructuran alrededor de tres preguntas de investigación: ¿cómo se están produciendo y qué caracterización presentan los beneficios que resultan como consecuencia de la implantación de la RSCE en los procesos de planificación estratégica de las compañías?, ¿qué factores esenciales y característicos de la RSCE pueden resultar críticos en los procesos de implantación y futuro desarrollo?, y ¿qué importancia puede tener en el medio y largo plazo el poder de decisión de compra de los consumidores y usuarios finales en la implantación y desarrollo de políticas de RSCE?
ABSTRACT Corporate Social Responsibility (CSR) remains a study area of high interest today for both the academic community and business in general. Despite the large number of investigations into various aspects of CSR in past decades, and its generalized consideration by the world’s most important companies, there are still some key issues and fundamental questions to resolve. The complexity associated with the CSR construct and its inherently dynamic character partly explain this statement. In its practical application, doubts about CSR today concern its permanent implementation in normal business activities, the strategic relevance of related policies, and the possibility of making profits in the medium and long term. The main debates have thus shifted towards the more strategic consequences of these policies, influenced by prestigious academic studies that characterize the so-called Strategic CSR (SCSR), and by leading certification agencies of CSR and sustainability reports. In this context, the main purpose of this investigation is to design a model of SCSR implementation that makes it possible not only to identify the most important factors to consider for SCSR success, but also to characterize the potential forms of value creation that can arise from its application. This research approach was selected because it is believed that important issues associated with CSR have not been sufficiently explored from the perspective of the current strategic vision, and because value creation constitutes the most critical objective within strategic planning processes. Thus, two methods are used to highlight which factors are essential in SCSR implementation processes, the ends to which companies apply these policies, and the kind of results that they expect. These methods are: comparative analysis of case studies and quantitative statistical analysis. The case studies discuss global SCSR policies in depth from different perspectives to draw conclusions about the factors that facilitate or hinder their permanent implementation in organizations. Their development is structured around a conceptual framework obtained through a review of the specific literature, and is complemented by primary and secondary research information. The quantitative analysis, in turn, is developed by means of three exploratory techniques: descriptive statistics, multiple regression and principal component analysis. Their combined application makes it possible to contrast aspects highlighted in the case analyses, to configure the final implementation model, and to express numerically the value created by SCSR as a function of the strategic dimensions considered by companies.
Finally, the results of the thesis are structured around three research questions: what benefits result from the implementation of SCSR policies in companies’ strategic planning processes, and how are they produced and characterized?; which essential SCSR factors are potentially critical in the implementation and future development of companies’ processes?; and how decisive, in the medium and long term, will the purchasing decision power of consumers and end users be for the implementation and development of SCSR policies?
Abstract:
The technical improvement and new applications of Infrared Thermography (IRT) with healthy subjects should be accompanied by results on the reproducibility of IRT measurements in different population groups. In addition, there is a remarkable need for a larger supply of software to analyze IRT images of human beings. Therefore, the objectives of this study were: firstly, to investigate the reproducibility of skin temperature (Tsk) on overweight and obese subjects using IRT in different Regions of Interest (ROI), moments and side-to-side differences (ΔT); and secondly, to check the reliability of a new software tool called Termotracker®, specialized in the analysis of IRT images of human beings. Methods: 22 overweight and obese males (11) and females (11) (age: 41.51±7.76 years; height: 1.65±0.09 m; weight: 82.41±11.81 kg; BMI: 30.17±2.58 kg/m²) were assessed in two consecutive thermograms (5 seconds in between) by the same observer, using an infrared camera (FLIR T335, Sweden) to obtain 4 IRT images of the whole body. 11 ROI were selected using Termotracker® to analyze its reproducibility and reliability through Intra-class Correlation Coefficient (ICC) and Coefficient of Variation (CV) values. Results: The reproducibility of the side-to-side differences (ΔT) between two consecutive thermograms was very high in all ROI (mean ICC = 0.989), and excellent between two computers (mean ICC = 0.998). The reliability of the software was very high in all ROI (mean ICC = 0.999). Intra-examiner reliability when analysing the same subjects in two consecutive thermograms was also very high (mean ICC = 0.997). CV values of the different ROI were around 2%. Conclusions: Skin temperature on overweight subjects showed excellent reproducibility for consecutive thermograms. The reproducibility of thermal asymmetries (ΔT) was also good, but it was influenced by several factors that should be further investigated. Termotracker® achieved excellent reliability results and is a reliable and objective software tool for analysing IRT images of human beings.
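For readers who want to reproduce this kind of reproducibility analysis, the sketch below computes a two-way random-effects, absolute-agreement, single-measurement ICC (Shrout & Fleiss ICC(2,1)) and a mean within-subject coefficient of variation from a subjects-by-measurements matrix. It is a generic implementation run on synthetic numbers, not the Termotracker® output.

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.

    data: array of shape (n_subjects, k_measurements), e.g. the same ROI
    measured on two consecutive thermograms.
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)
    col_means = data.mean(axis=0)
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between subjects
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between measurements
    sse = ((data - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                         # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def cv_percent(data):
    """Mean within-subject coefficient of variation, in percent."""
    data = np.asarray(data, dtype=float)
    return float((data.std(axis=1, ddof=1) / data.mean(axis=1)).mean() * 100)

if __name__ == "__main__":
    # Synthetic skin temperatures (degrees C) for 5 subjects, 2 thermograms each.
    temps = [[31.2, 31.3], [30.8, 30.9], [32.1, 32.0], [31.5, 31.6], [30.4, 30.5]]
    print(f"ICC(2,1) = {icc_2_1(temps):.3f}, CV = {cv_percent(temps):.2f}%")
```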
Abstract:
In this paper, the main challenges associated with the migration process towards LTE will be assessed. These challenges comprise, among others, the following key topics: Reliability, Availability, Maintainability and Safety (RAMS) requirements, end-to-end Quality of Service (QoS) requirements, system performance in high-speed scenarios, communication system deployment strategy, and system backward compatibility, as well as the future system features for delivering railway services. The practical evaluation of LTE system capabilities and performance in High Speed Railway (HSR) scenarios requires the development of an LTE demonstrator and an LTE system-level simulator. Under this scope, the authors have developed an RF LTE demonstrator as well as an LTE system-level simulator, which will provide valuable information for assessing LTE performance and suitability in real HSR scenarios. This work is being developed under the framework of a research project to evaluate the feasibility of LTE as the new railway communication system. The companies and universities involved in this project are: Technical University of Madrid (UPM), Alcatel Lucent Spain, ADIF (Spanish Railway Infrastructure Manager), Metro de Madrid, AT4 Wireless, the University of A Coruña (UDC) and the University of Málaga (UMA).