242 results for TDD-LTE
Abstract:
The intense technological evolution that our society has experienced over the last few decades means that new technologies are continuously being developed which improve both the quality and the security of the service; this is the case of 4G. Today, in Spain, the fourth generation of mobile communications is led by LTE, while LTE-Advanced has only been rolled out in the country's main cities over the last few months. For this reason, it was considered interesting to plan a network for an area that, so far, has no LTE-Advanced coverage. The nature of the terrain must also be taken into account, since it differs from the urban environment found in the main cities that already have LTE-Advanced, such as Madrid, Barcelona or Valencia. Studying this semi-rural area is of great interest because one of the goals of the fourth generation is to bring high-quality internet access to places that optical fibre cannot reach, such as these semi-rural zones. To add further interest to the study, the 800 MHz band was chosen for the network deployment. This band, previously used for digital terrestrial television broadcasting, has recently been released for mobile communications through the so-called Digital Dividend. LTE-Advanced is starting to be deployed in this band, although it will not be in real use until November 2015, so at the moment 4G networks are using the 2.6 GHz band. Using the 800 MHz band will bring improvements for both users and operators, which are discussed throughout the project. The planning goes through several optimisation and expansion phases in which both the radio part and the capacity are analysed. Signals such as RSRP, RSSI and RSRQ are analysed, and for the capacity analysis a set of users, suitably distributed across the whole area, is defined so that the capacity of the network can be studied in detail. Finally, several tests are carried out that demonstrate how important MIMO technology is in both LTE and LTE-Advanced.
ABSTRACT. Nowadays, our society is experiencing an intense pace of technological evolution which drives the constant development of new technologies. In the network planning area, these new technologies are focused on improving both the quality and the security of service, with the recent deployment of 4G technologies in our networks. This project focuses on Spain, where the fourth generation of mobile communications is led by LTE, because LTE-Advanced has so far been deployed only in the largest cities. The goal of this project is to plan, deploy and simulate an LTE-Advanced network in an area that has not yet been covered. Furthermore, the nature of the terrain where the network will be deployed is taken into account, as it differs from the urban areas of the major cities that already have LTE-Advanced, such as Madrid, Barcelona and Valencia. The study of these semi-rural areas is extremely important because one of the main objectives of fourth generation technologies is to bring high-speed internet access to places that cannot be reached through other technologies, such as optical fiber. In order to adjust to actual needs, the project was developed for the 800 MHz band. Those frequencies used to be assigned to digital terrestrial TV, but they have recently been released through the 2015 Digital Dividend for use with mobile communications. That is why LTE-Advanced in Spain is starting to be deployed at those frequencies. Despite the release of the 800 MHz band, its use is not permitted until November 2015, so 4G networks are currently using the 2.6 GHz band. The use of the 800 MHz band will lead to advantages and improvements for users and operators, which are detailed throughout the project. Each step of the planning of the 4G network is described, and the optimization and expansion of the network are analysed on the basis of radio and capacity criteria. RSRP, RSSI and RSRQ signals are analysed and a network capacity analysis is carried out. Finally, several tests are performed to show the importance of MIMO in LTE and LTE-Advanced.
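Since the abstract analyses RSRP, RSSI and RSRQ together, a small worked example may help fix the relationship between them. The 3GPP definition RSRQ = N·RSRP/RSSI (with N the number of resource blocks over which RSSI is measured) gives, in logarithmic form, RSRQ_dB = 10·log10(N) + RSRP_dBm − RSSI_dBm. The Python sketch below applies it; the function name and the example values are illustrative and not taken from the project.

```python
import math

def rsrq_db(rsrp_dbm: float, rssi_dbm: float, n_rb: int = 50) -> float:
    """Approximate RSRQ (dB) from RSRP (dBm) and carrier RSSI (dBm).

    Uses the 3GPP relation RSRQ = N * RSRP / RSSI expressed in dB:
    RSRQ_dB = 10*log10(N) + RSRP_dBm - RSSI_dBm, where n_rb is the number
    of resource blocks over which RSSI is measured (50 for a 10 MHz carrier).
    """
    return 10 * math.log10(n_rb) + rsrp_dbm - rssi_dbm

# Illustrative values only, e.g. a moderately loaded cell:
print(rsrq_db(rsrp_dbm=-100.0, rssi_dbm=-72.0))  # ≈ -11.0 dB
```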
Abstract:
There is no doubt that the railway is one of the symbols of humanity's technological and social progress, as its unstoppable advance since the first third of the 19th century attests. Nevertheless, throughout much of its history it has been somewhat reluctant to embrace certain technologies, which has earned it a reputation for being conservative. In recent years, however, coinciding with the massive boom of high-speed trains, metros and trams, many technologies have been making their way into the railway world. The one that concerns us here is one of those that has provided it with the greatest added value (and will probably continue to do so in the future): mobile communications. The current use of this type of technology in the railway environment can be described as incipient or, to follow the nomenclature of public mobile communications, second generation. GSM-R on high-speed lines is a case (although ultimately a success story) that perfectly defines the state of the art of mobile communications in this environment: it provided great added value at the cost of a major standardisation effort, and it represented an important step forward in the reliability of this type of system, although it has severe capacity limitations and serious scalability problems. Everything suggests that the successor to GSM-R will have to be on the market by 2025. In any case, the philosophy of creating niche products, which are extraordinarily expensive, should be abandoned in favour of the open philosophies of public communication networks. This is where LTE, the latest star of this family of standards, can contribute a great deal of value. The underlying idea behind this Thesis is that LTE can be a technology that brings great value to the current (and probably future) needs of the railway sector, not only on high-speed lines and trains, but also on so-called conventional lines and in metros and trams. Since this field is still far from being fully studied, the problems of electromagnetic propagation in different railway environments, such as metro tunnels and the influence of train structures, have been explored; in this respect, fairly exhaustive measurements were taken in both environments. In addition, since multi-antenna systems are one of the fundamental pillars of modern communication systems, the feasibility of this technology for implementing a train-to-wayside communication system in a tunnel has been verified experimentally. As a result of these measurements, the existence of certain physical phenomena that can reduce the efficiency of this type of system has also been confirmed. Thirdly, since one of the great challenges of high-speed lines arises from the high speed at which the trains travel, the influence of this parameter on the overall efficiency of a complete mobile communications network has been explored, with particular emphasis on the aspects related to mobility management (handovers). Finally, to close the Thesis, an attempt has been made to identify the future communication services that will bring the most value to railway operations, as well as the requirements they will impose on mobile communication networks.
For the cases listed above (propagation, multi-antenna systems, mobility and future challenges), the contributions already published in international journals and conferences are provided, together with those submitted for review.
ABSTRACT There is almost no doubt that railways are one of the symbols of the technological and social progress of humanity. However, for most of their history railways have been somewhat reluctant to embrace new technologies, gaining a reputation for being conservative. In the last years, together with the massive boom of high-speed lines, subways and trams all over the world, some technologies have broken through this conservative resistance. The one which concerns us now is one of the most value-adding (both today and in the future): mobile communications. The state of the art of these technologies in the railway field could be described as incipient or, following the mobile communications notation, 'second generation'. GSM-R, the best example of mobile communications in railways, is a success story that shows perfectly the state of the art of this field: it provided a noticeable added value but also required a great standardization effort; it also meant a huge step forward in the reliability of these systems, but it still faces scalability issues and capacity problems. It looks more than feasible that by 2025 the alternative to GSM-R should already be available. In any case, the vision here should be to forget about expensive niche products and embrace open standards, as public mobile communications do. The main idea behind this Thesis is that LTE could be a technology that provides a lot of added value to the needs of the railways of today and the future, and not only to high-speed lines, but also to the so-called conventional rail, subways and tramways. Because even today propagation in tunnels and the influence of car bodies are far from being fully studied, we measured EM propagation in these two environments in a very exhaustive way. Also, multi-antenna systems are one of the basic foundations of modern communication systems, so we experimentally verified the feasibility of using such a system in a train-to-wayside link in a tunnel. Moreover, from the measurements carried out we proved the existence of some physical phenomena that could imply a decrease in the performance of these multi-antenna systems. In third place, we explored the influence of high speed on the overall performance of the network from the mobility-management point of view; this high-speed movement is one of the most relevant challenges for mobile communication networks, and the emphasis was placed on the mobility aspects of radio resource management. Finally, the Thesis closes with an identification of the future communication services that could provide the greatest added value to railways, together with the requirements they imply for mobile communication networks. For all the scenarios depicted before (propagation, multi-antenna systems, mobility and future challenges) we provide contributions already published (or submitted for review, or still in progress) in journals and international conferences.
Abstract:
Determining with good accuracy the position of a mobile terminal when it is inside an indoor environment (shopping centres, office buildings, airports, stations, tunnels, etc.) is the cornerstone on which a large number of applications and services rest. Many of these services are already available in outdoor environments, although indoor environments lend themselves to other services specific to them. Their number, however, could be significantly higher than it currently is if a costly infrastructure were not needed to carry out the positioning with the accuracy required by each hypothetical service, or, equally, if that infrastructure could serve other purposes besides positioning. Being able to use the same infrastructure for other purposes would mean that it might already be present at the different locations, because it had previously been deployed for those other uses, or it would make its deployment easier, because the cost of that operation would offer a greater return on usability for whoever carries it out. Wireless radio-frequency communication technologies that are already in use for voice and data communications (mobile, WLAN, etc.) meet the above requirement and would therefore facilitate the growth of positioning-based applications and services if they could be used for this purpose. However, determining the position with the appropriate level of accuracy using these technologies is a major challenge today. This work aims to make significant advances in this field. First, it studies the main positioning algorithms and auxiliary techniques applicable to indoor environments. The review focuses on those suitable both for the latest-generation mobile technologies and for WLAN environments, in order to highlight the advantages and disadvantages of each algorithm, with their applicability to 3G and 4G mobile networks (especially LTE femtocells and small cells) and to the WLAN environment as the final motivation, and always bearing in mind that the ultimate goal is to use them indoors. The main conclusion of this review is that the triangulation techniques commonly used for outdoor localisation are useless in indoor environments, owing to adverse effects characteristic of this type of environment such as the loss of line of sight or the multiple paths taken by the signal. Radio fingerprinting methods, which compare the signal strength values received by a mobile terminal at the time of positioning against the values recorded in a radio power map built during an initial calibration phase, emerge as the best option for indoor scenarios. However, these systems are also affected by other problems, such as the considerable work required to set them up and the variability of the channel.
To address these problems, this work presents two original contributions for improving fingerprinting-based systems. The first describes a method for determining, in a simple way, the basic characteristics of the system in terms of the number of samples needed to build the reference radio fingerprint map, together with the minimum number of radio-frequency emitters that will have to be deployed; all of this starting from initial requirements on the positioning error and accuracy sought, combined with the dimensions and physical layout of the environment. In this way, initial guidelines are established for dimensioning the system, and the negative effects on the cost or the overall performance of the system caused by an inefficient deployment of the radio-frequency emitters and of the fingerprint capture points are counteracted. The second contribution increases the real-time accuracy of the system thanks to a technique for automatically recalibrating the radio power map. This technique uses the measurements continuously reported by a few static reference points, strategically distributed throughout the environment, to recalculate and update the power values recorded in the radio map. An additional operational benefit of this technique is that it extends the period during which the system remains reliably usable, reducing how often the complete radio power map has to be captured again. The improvements described above are directly applicable to indoor positioning mechanisms based on the wireless voice and data communication infrastructure. From there, they extend to location services (knowing where one is), monitoring (third parties knowing that location) and tracking (monitoring prolonged over time), since all of these rely on correct positioning for adequate performance.
ABSTRACT To find the position where a mobile terminal is located with good accuracy, when it is immersed in an indoor environment (shopping centers, office buildings, airports, stations, tunnels, etc.), is the cornerstone on which a large number of applications and services are supported. Many of these services are already available in outdoor environments, although indoor environments are suitable for other services that are specific to them. That number, however, could be significantly higher than it is now if an expensive infrastructure were not required to perform the positioning service with adequate precision for each one of the hypothetical services, or, equally, if that infrastructure could have other uses beyond those associated with positioning. The usability of the same infrastructure for purposes other than positioning could give the opportunity of having it already available in the different locations, because it was previously deployed for those other uses, or facilitate its deployment, because the cost of that operation would offer a higher return on usability for the deployer.
Wireless technologies based on radio communications, already in use for voice and data communications (mobile, WLAN, etc.), meet the requirement of additional usability and could therefore facilitate the growth of applications and services based on positioning, if they can be used for it. However, determining the position with the appropriate degree of accuracy using these technologies is a major challenge today. This work provides significant advances in this field. First, a study of the main algorithms and auxiliary techniques related to indoor positioning is carried out. The review focuses on those that are suitable for both latest-generation mobile technologies and WLAN environments. The aim is to highlight the advantages and disadvantages of each of these algorithms, with their applicability both to 3G and 4G mobile networks (especially LTE femtocells and small cells) and to the WLAN world as the final motivation, and always bearing in mind that the final aim is to use them in indoor environments. The main conclusion of that review is that triangulation techniques, commonly used for localization in outdoor environments, are useless in indoor environments due to adverse effects of such environments, such as the lack of line of sight or multipath propagation. Fingerprinting methods, based on the comparison of the Received Signal Strength values measured by the mobile phone with a radio map of RSSI values recorded during the calibration phase, arise as the best methods for indoor scenarios. However, these systems are also affected by other problems, for example the considerable amount of work needed to get the system ready, and the variability of the channel. To address them, this work presents two original contributions to improve systems based on fingerprinting methods. The first of these contributions describes a method to find, in a simple way, the basic characteristics of the system in terms of the number of samples needed to create the reference fingerprint radio map, together with the minimum number of radio frequency emitters that need to be deployed; both of them derived from initial requirements on the positioning error and accuracy, combined with the data corresponding to the dimensions and physical reality of the environment. Thus, initial guidelines for dimensioning the system are put in place, and the negative effects on the cost or on the performance of the whole system, due to an inefficient deployment of the radio frequency emitters and of the radio map capture points, are minimized. The second contribution increases the resulting accuracy of the system when working in real time, thanks to a technique of automatic recalibration of the power measurements stored in the radio map. This technique takes into account the continuous measurements reported by a few static reference points, strategically distributed in the environment, to recalculate and update the measurements stored in the radio map. An additional benefit of this technique at the operational level is the extension of the period during which the system remains reliable, decreasing how often the full radio map has to be recaptured.
The above-mentioned improvements are directly applicable to improving indoor positioning mechanisms based on the voice and data wireless communications infrastructure. From there, the improvement is also extensible and applicable to location services (personal knowledge of the place where one is), monitoring (knowledge of that place by third parties) and tracking (monitoring prolonged over time), as all of them rely on correct positioning for proper performance.
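The fingerprinting approach described above boils down to matching a live signal-strength sample against a calibration-phase radio map. As a minimal illustration (not the thesis's actual method), the sketch below performs a weighted k-nearest-neighbour match in signal space; the radio-map coordinates, emitter names, sample values and distance metric are invented for the example.

```python
import math

# Radio map built during the calibration phase: position -> RSSI (dBm) per emitter.
# Coordinates, emitter names and values are purely illustrative.
RADIO_MAP = {
    (0.0, 0.0): {"AP1": -45, "AP2": -70, "AP3": -80},
    (5.0, 0.0): {"AP1": -60, "AP2": -55, "AP3": -75},
    (0.0, 5.0): {"AP1": -55, "AP2": -75, "AP3": -60},
    (5.0, 5.0): {"AP1": -70, "AP2": -60, "AP3": -50},
}

def estimate_position(sample, k=3):
    """Weighted k-nearest-neighbour position estimate in signal space."""
    def signal_distance(fingerprint):
        common = set(sample) & set(fingerprint)
        return math.sqrt(sum((sample[e] - fingerprint[e]) ** 2 for e in common))

    # The k calibration points whose fingerprints best match the live sample.
    neighbours = sorted(RADIO_MAP.items(), key=lambda item: signal_distance(item[1]))[:k]
    weights = [1.0 / (signal_distance(fp) + 1e-6) for _, fp in neighbours]
    total = sum(weights)
    x = sum(w * pos[0] for (pos, _), w in zip(neighbours, weights)) / total
    y = sum(w * pos[1] for (pos, _), w in zip(neighbours, weights)) / total
    return x, y

# A live measurement reported by the terminal (illustrative values).
print(estimate_position({"AP1": -58, "AP2": -57, "AP3": -72}))
```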
Abstract:
The aim of this study was to investigate the effects of two experimental models of hypercaloric diets on anxiety-related behaviour, learning and memory processes, and metabolic alterations. The animals were divided into six experimental groups according to nutritional condition: 1) Control (C); 2) Cafeteria Diet (DC); 3) High-Fat Diet (DH); 4) AIN-93 Control (C/AIN-93); 5) AIN-93 Cafeteria Diet (DC/AIN-93); and 6) AIN-93 High-Fat Diet (DH/AIN-93). The groups were then subdivided into two independent groups according to the task to which they were submitted. Body weight was recorded weekly until 98 days of age; the weights of the liver, the heart and the retroperitoneal and epididymal adipose tissue were measured, and glucose, triglycerides, TGO (AST) and TGP (ALT) were assayed in serum, along with total fat, total cholesterol and triglycerides in the liver. The tests used were the Elevated T-Maze (LTE), the Light/Dark Box and the Morris Water Maze (LAM). Body weight data, the behavioural data from the LAM and the LTE, the weights of the tissues extracted on the day of sacrifice and the biochemical analyses were submitted to an analysis of variance (ANOVA), followed, where appropriate, by the Newman-Keuls multiple comparison test (p < 0.05). The behavioural data from the light/dark test were analysed with Student's t-test (p < 0.05). Animals fed the high-fat diet showed greater weight and weight gain than control and cafeteria-diet animals, whether fed pellet chow or the AIN-93 diet. DH1, DC1, DH1 AIN-93, DH2 AIN-93 and DH2 animals showed higher weight on the day of sacrifice. DH1, DH1 AIN-93, DH2 and DH2 AIN-93 animals showed greater accumulation of retroperitoneal and epididymal adipose tissue. DH1 AIN-93 and DC2 AIN-93 animals showed higher glucose levels. C2, DH2 and DC2 animals showed higher triglyceride levels. DH1 and C1 animals showed lower TGO values, while C2 and C2 AIN-93 animals showed higher TGO levels. C1, DH1, C2 and DH2 animals showed higher TGP levels. DH1 AIN-93, DH1, DH2 and DH2 AIN-93 animals showed higher total liver fat. DH1 AIN-93 and DH2 animals showed higher liver cholesterol levels. DH1, DC1, DH2 and DH2 AIN-93 animals showed higher liver triglyceride levels. Regarding food intake, DH animals showed higher caloric and lipid intake than C and DC animals, whether fed pellet chow or the AIN-93 diet. In the LTE, no differences were found in avoidance or escape. DC1, DH1 and DH1 AIN-93 animals showed lower anxiety levels, as indicated by the data from the light/dark box test. DC2 AIN-93 animals performed worse on a memory task. The data obtained in this study show that the diets used were able to cause weight gain, adipose tissue accumulation, metabolic alterations and reduced anxiety in the animals, as well as worse performance on a memory task in one of the nutritional groups.
Abstract:
Context. There is growing evidence that a treatment of binarity amongst OB stars is essential for a full theory of stellar evolution. However, the binary properties of massive stars – frequency, mass ratio & orbital separation – are still poorly constrained. Aims. In order to address this shortcoming we have undertaken a multi-epoch spectroscopic study of the stellar population of the young massive cluster Westerlund 1. In this paper we present an investigation into the nature of the dusty Wolf-Rayet star and candidate binary W239. Methods. To accomplish this we have utilised our spectroscopic data in conjunction with multi-year optical and near-IR photometric observations in order to search for binary signatures. Comparison of these data to synthetic non-LTE model atmosphere spectra was used to derive the fundamental properties of the WC9 primary. Results. We found W239 to have an orbital period of only ~5.05 days, making it one of the most compact WC binaries yet identified. Analysis of the long-term near-IR lightcurve reveals a significant flare between 2004 and 2006. We interpret this as evidence for a third massive stellar component in the system in a long-period (>6 yr), eccentric orbit, with dust production occurring at periastron leading to the flare. The presence of a near-IR excess characteristic of hot (~1300 K) dust at every epoch is consistent with the expectation that the subset of persistent dust-forming WC stars are short-period (<1 yr) binaries, although confirmation will require further observations. Non-LTE model atmosphere analysis of the spectrum reveals the physical properties of the WC9 component to be fully consistent with other Galactic examples. Conclusions. The simultaneous presence of both short-period Wolf-Rayet binaries and cool hypergiants within Wd 1 provides compelling evidence for a bifurcation in the post-main-sequence evolution of massive stars due to binarity. Short-period O+OB binaries will evolve directly to the Wolf-Rayet phase, either due to an episode of binary-mediated mass loss – likely via case A mass transfer or a contact configuration – or via chemically homogeneous evolution. Conversely, long-period binaries and single stars will instead undergo a red loop across the HR diagram via a cool hypergiant phase. Future analysis of the full spectroscopic dataset for Wd 1 will constrain the proportion of massive stars experiencing each pathway, hence quantifying the importance of binarity in massive stellar evolution up to and beyond supernova and the resultant production of relativistic remnants.
Abstract:
Aims. Despite their importance to a number of astrophysical fields, the lifecycles of very massive stars are still poorly defined. In order to address this shortcoming, we present a detailed quantitative study of the physical properties of four early-B hypergiants (BHGs) of spectral type B1-4 Ia+: Cyg OB2 #12, ζ1 Sco, HD 190603 and BP Cru. These are combined with an analysis of their long-term spectroscopic and photometric behaviour in order to determine their evolutionary status. Methods. Quantitative analysis of UV–radio photometric and spectroscopic datasets was undertaken with a non-LTE model atmosphere code in order to derive physical parameters for comparison with apparently closely related objects, such as B supergiants (BSGs) and luminous blue variables (LBVs), and theoretical evolutionary predictions. Results. The long-term photometric and spectroscopic datasets compiled for the early-B HGs revealed that they are remarkably stable over long periods (≥40 yrs), with the possible exception of ζ1 Sco prior to the 20th century, in contrast to the typical excursions that characterise LBVs. Quantitative analysis of ζ1 Sco, HD 190603 and BP Cru yielded physical properties intermediate between BSGs and LBVs; we therefore suggest that BHGs are the immediate descendants and progenitors (respectively) of such stars, for initial masses in the range ~30−60 M⊙. Comparison of the properties of ζ1 Sco with the stellar population of its host cluster/association NGC 6231/Sco OB1 provides further support for such an evolutionary scenario. In contrast, while the wind properties of Cyg OB2 #12 are consistent with this hypothesis, the combination of extreme luminosity and spectroscopic mass (~110 M⊙) and comparatively low temperature means it cannot be accommodated in such a scheme. Likewise, despite its co-location with several LBVs above the Humphreys-Davidson (HD) limit, the lack of long-term variability and its unevolved chemistry apparently exclude such an identification. Since such massive stars are not expected to evolve to such cool temperatures, instead traversing an O4-6Ia → O4-6Ia+ → WN7-9ha pathway, the properties of Cyg OB2 #12 are therefore difficult to understand under current evolutionary paradigms. Finally, we note that, as with AG Car in its cool phase, despite exceeding the HD limit, the properties of Cyg OB2 #12 imply that it lies below the Eddington limit – thus we conclude that the HD limit does not define a region of the HR diagram inherently inimical to the presence of massive stars.
Abstract:
Context. The first soft gamma-ray repeater was discovered over three decades ago, and was subsequently identified as a magnetar, a class of highly magnetised neutron star. It has been hypothesised that these stars power some of the brightest supernovae known, and that they may form the central engines of some long-duration gamma-ray bursts. However, there is currently no consensus on the formation channel(s) of these objects. Aims. The presence of a magnetar in the starburst cluster Westerlund 1 implies a progenitor with a mass ≥40 M⊙, which favours its formation in a binary that was disrupted at supernova. To test this hypothesis we conducted a search for the putative pre-SN companion. Methods. This was accomplished via a radial velocity survey to identify high-velocity runaways, with subsequent non-LTE model atmosphere analysis of the resultant candidate, Wd1-5. Results. Wd1-5 closely resembles the primaries in the short-period binaries Wd1-13 and 44, suggesting a similar evolutionary history, although it currently appears single. It is overluminous for its spectroscopic mass and we find evidence of He- and N-enrichment, O-depletion and, critically, C-enrichment; a combination of properties that is difficult to explain under single-star evolutionary paradigms. We infer a pre-SN history for Wd1-5 which supposes an initial close binary comprising two stars of comparable (~41 M⊙ + 35 M⊙) masses. Efficient mass transfer from the initially more massive component leads to the mass-gainer evolving more rapidly, initiating luminous blue variable/common envelope evolution. Reverse, wind-driven mass transfer during its subsequent WC Wolf-Rayet phase leads to the carbon pollution of Wd1-5, before a Type Ibc supernova disrupts the binary system. Under the assumption of a physical association between Wd1-5 and J1647-45, the secondary is identified as the magnetar progenitor; its common envelope evolutionary phase prevents spin-down of its core prior to SN, and the seed magnetic field for the magnetar forms either in this phase or during the earlier episode of mass transfer in which it was spun up. Conclusions. Our results suggest that binarity is a key ingredient in the formation of at least a subset of magnetars, by preventing spin-down via core-coupling and potentially generating a seed magnetic field. The apparent formation of a magnetar in a Type Ibc supernova is consistent with recent suggestions that superluminous Type Ibc supernovae are powered by the rapid spin-down of these objects.
Abstract:
Mode of access: Internet.
Abstract:
Vol. 2 of the second part has the imprint: Berlin, W. de Gruyter.
Abstract:
--Vol. I. Religious polemics. Prologue by lte. Felix Romero; notes by Angel Pola. --Vol. II. Political writings. Prologue by Angel Pola. --Vol. III. Letters and sciences. "En peregrinacion, de Pomoca a Tepeji del rio", by Angel Pola and Aurelio J. Venegas; prologue by Dr. Porfirio Parra.
Abstract:
In the Ventrobasal (VB) thalamus, astrocytes are known to elicit NMDA-receptor mediated slow inward currents (SICs) spontaneously in neurons. Fluorescence imaging of astrocytes and patch clamp recordings from the thalamocortical (TC) neurons in the VB of 6-23 day old Wistar rats were performed. TC neurons exhibit spontaneous SICs at low frequencies (~0.0015 Hz) that were inhibited by the NMDA-receptor antagonist D-AP5 (50 µM) and were insensitive to TTX (1 µM), suggesting a non-neuronal origin. The effect of corticothalamic (CT) and sensory (Sen) afferent stimulation on astrocyte signalling was assessed by varying stimulus parameters. Moderate synaptic stimulation elicited astrocytic Ca2+ increases, but did not affect the incidence of spontaneous SICs. Prolonged synaptic stimulation induced a 265% increase in SIC frequency. This increase lasted over one hour after the cessation of synaptic stimulation, so revealing a Long Term Enhancement (LTE) of astrocyte-neuron signalling. LTE induction required group I mGluR activation. LTE SICs targeted NMDA-receptors located at extrasynaptic sites. LTE showed a developmental profile: from weeks 1-3, the SIC frequency was increased by an average of 50%, 240% and 750%, respectively. Prolonged exposure to glutamate (200 µM) increased spontaneous SIC frequency by 1800%. This “chemical” form of LTE was prevented by the broad-spectrum excitatory amino acid transporter (EAAT) inhibitor TBOA (300 µM), suggesting that glutamate uptake was a critical factor. My results therefore show complex glutamatergic signalling interactions between astrocytes and neurons. Furthermore, two previously unrecognised mechanisms of enhancing SIC frequency are described. The synaptically induced LTE represents a form of non-synaptic plasticity and a glial “memory” of previous synaptic activity, whilst enhancement after prolonged glutamate exposure may represent a pathological glial signalling mechanism.
Abstract:
Wireless Mesh Networks (WMNs) have emerged as a key technology for the next generation of wireless networking. Instead of being another type of ad-hoc networking, WMNs diversify the capabilities of ad-hoc networks. Several protocols that work over WMNs include IEEE 802.11a/b/g, 802.15, 802.16 and LTE-Advanced. To bring about a high throughput under varying conditions, these protocols have to adapt their transmission rate. In this paper, we have proposed a scheme to improve channel conditions by performing rate adaptation along with multiple packet transmission using packet loss and physical layer condition. Dynamic monitoring, multiple packet transmission and adaptation to changes in channel quality by adjusting the packet transmission rates according to certain optimization criteria provided greater throughput. The key feature of the proposed method is the combination of the following two factors: 1) detection of intrinsic channel conditions by measuring the fluctuation of noise to signal ratio via the standard deviation, and 2) the detection of packet loss induced through congestion. We have shown that the use of such techniques in a WMN can significantly improve performance in terms of the packet sending rate. The effectiveness of the proposed method was demonstrated in a simulated wireless network testbed via packet-level simulation.
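To make the combination of the two factors more concrete, here is a hedged sketch of the kind of decision rule the abstract describes: back off when packet loss coincides with stable channel measurements (suggesting congestion), probe upward when the channel is stable and loss-free, and hold the rate when the measurements fluctuate strongly (suggesting channel noise rather than congestion). The window size, threshold and rate table are invented for illustration; the paper's actual algorithm may differ.

```python
from collections import deque
from statistics import stdev

class RateAdapter:
    """Illustrative rate controller mixing channel-fluctuation and loss cues."""

    RATES_MBPS = [6, 12, 24, 36, 48, 54]          # candidate transmission rates

    def __init__(self, window: int = 20, fluct_threshold: float = 3.0):
        self.snr_window = deque(maxlen=window)     # recent SNR samples (dB)
        self.fluct_threshold = fluct_threshold     # std-dev split: noisy vs stable
        self.rate_idx = 2                          # start at a middle rate

    def update(self, snr_db: float, packet_lost: bool) -> int:
        self.snr_window.append(snr_db)
        fluctuating = (len(self.snr_window) > 2 and
                       stdev(self.snr_window) > self.fluct_threshold)

        if packet_lost and not fluctuating:
            # Stable channel + loss: treat as congestion, back off.
            self.rate_idx = max(0, self.rate_idx - 1)
        elif not packet_lost and not fluctuating:
            # Stable channel + no loss: probe a higher rate.
            self.rate_idx = min(len(self.RATES_MBPS) - 1, self.rate_idx + 1)
        # Fluctuating channel: hold the current rate rather than reacting to noise.
        return self.RATES_MBPS[self.rate_idx]

adapter = RateAdapter()
print(adapter.update(snr_db=22.0, packet_lost=False))   # e.g. 36 (Mbps)
```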
Abstract:
The concern over the quality of delivering video streaming services in mobile wireless networks is addressed in this work. A framework that enhances the Quality of Experience (QoE) of end users through a quality-driven resource allocation scheme is proposed. An objective no-reference quality metric, Pause Intensity (PI), plays a key role here: it is adopted to derive a resource allocation algorithm for video streaming. The framework is examined in the context of 3GPP Long Term Evolution (LTE) systems. The requirements and structure of the proposed PI-based framework are discussed, and results are compared with existing scheduling methods on fairness, efficiency and correlation (between the required and allocated data rates). Furthermore, it is shown that the proposed framework can produce a trade-off between the three parameters through the QoE-aware resource allocation process.
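The idea of quality-driven resource allocation can be illustrated with a toy allocator that hands each resource block to the user whose QoE target is furthest from being met, weighted by how useful the block is to that user. This is only a sketch under assumed inputs (invented user names, required rates and a simple deficit rule); it is not the PI-based algorithm of the paper, which is derived formally from the Pause Intensity metric.

```python
def qoe_driven_allocation(users, n_blocks):
    """Toy QoE-driven allocator (illustrative, not the paper's algorithm).

    Each resource block goes to the user with the largest shortfall between
    the data rate its quality target requires and the rate allocated so far,
    scaled by how much that user would gain from the block (its channel rate).

    users: list of dicts with keys
        'name'          - identifier
        'required_rate' - rate (Mbps) needed to keep the quality target (e.g. a PI bound)
        'block_rate'    - achievable rate (Mbps) on one block for this user
    """
    allocated = {u["name"]: 0.0 for u in users}
    for _ in range(n_blocks):
        # Priority: remaining deficit weighted by what this block is worth to the user.
        def priority(u):
            deficit = max(u["required_rate"] - allocated[u["name"]], 0.0)
            return deficit * u["block_rate"]
        winner = max(users, key=priority)
        allocated[winner["name"]] += winner["block_rate"]
    return allocated

# Illustrative users: required rates stand in for their QoE (Pause Intensity) targets.
users = [
    {"name": "A", "required_rate": 2.0, "block_rate": 0.4},
    {"name": "B", "required_rate": 1.0, "block_rate": 0.6},
    {"name": "C", "required_rate": 3.0, "block_rate": 0.2},
]
print(qoe_driven_allocation(users, n_blocks=10))
```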
Abstract:
This research is focused on the optimisation of resource utilisation in wireless mobile networks with consideration of the users' experienced quality of video streaming services. The study specifically considers the new generation of mobile communication networks, i.e. 4G-LTE, as the main research context. The background study provides an overview of the main properties of the relevant technologies investigated. These include video streaming protocols and networks, video service quality assessment methods, the infrastructure and related functionalities of LTE, and resource allocation algorithms in mobile communication systems. A mathematical model based on an objective, no-reference quality assessment metric for video streaming, namely Pause Intensity, is developed in this work for the evaluation of the continuity of streaming services. The analytical model is verified by extensive simulation and subjective testing on the joint impairment effects of pause duration and pause frequency. Various types of video content and different levels of impairment were used in the validation tests. It has been shown that Pause Intensity is closely correlated with the subjective quality measurement in terms of the Mean Opinion Score, and this correlation property is content-independent. Based on the Pause Intensity metric, an optimised resource allocation approach is proposed for the given user requirements, communication system specifications and network performance. This approach concerns both system efficiency and fairness when establishing appropriate resource allocation algorithms, together with consideration of the correlation between the required and allocated data rates per user. Pause Intensity plays a key role here, representing the required level of Quality of Experience (QoE) to ensure the best balance between system efficiency and fairness. The 3GPP Long Term Evolution (LTE) system is used as the main application environment where the proposed research framework is examined, and the results are compared with existing scheduling methods on the achievable fairness, efficiency and correlation. Adaptive video streaming technologies are also investigated and combined with our initiatives on determining the distribution of QoE performance across the network. The resulting scheduling process is controlled through the prioritisation of users by considering the perceived quality of the services they receive. Meanwhile, a trade-off between fairness and efficiency is maintained through an online adjustment of the scheduler's parameters. Furthermore, Pause Intensity is applied to act as a regulator to realise the rate adaptation function during the end user's playback of the adaptive streaming service. The adaptive rates under various channel conditions and the shape of the QoE distribution amongst the users for different scheduling policies have been demonstrated in the context of LTE. Finally, the work on interworking between the mobile communication system at the macro-cell level and the different deployments of WiFi technologies throughout the macro-cell is presented. A QoE-driven approach is proposed to analyse the offloading mechanism of the user's data (e.g. video traffic) while the new rate distribution algorithm reshapes the network capacity across the macro-cell. The scheduling policy derived is used to regulate the performance of the resource allocation across the fairness-efficiency spectrum.
The associated offloading mechanism can properly control the number of users within the coverage of the macro-cell base station and of each of the WiFi access points involved. The performance of non-seamless, user-controlled mobile traffic offloading (through mobile WiFi devices) has been evaluated and compared with that of standard operator-controlled WiFi hotspots.
Abstract:
2002 Mathematics Subject Classification: 62F35, 62F15.