Abstract:
The study aim was to determine whether using automated side loader (ASL) trucks in higher proportions than other types of trucks for residential waste collection results in lower injury rates (from all causes). The primary hypothesis was that the risk of injury was lower for workers who operate ASL trucks than for workers who operate other types of trucks used in residential waste collection. To test this hypothesis, data were collected from one of the nation's largest companies in the solid waste management industry. Different local operating units (i.e., facilities) in the company used different types of trucks to varying degrees, which created a special opportunity to examine refuse collection injuries and illnesses and the risk reduction potential of ASL trucks.

The study design was ecological and analyzed end-of-year data provided by the company for calendar year 2007. During 2007, a total of 345 facilities provided residential services; each facility represented one observation.

The dependent variable, injury and illness rate, was defined as a facility's total case incidence rate (TCIR) recorded in accordance with federal OSHA requirements for 2007. The TCIR is the rate of total recordable injury and illness cases per 100 full-time workers. The independent variable, percent of ASL trucks, was calculated by dividing the number of ASL trucks by the total number of residential trucks at each facility.

Multiple linear regression models were estimated for the impact of the percent of ASL trucks on TCIR per facility. Adjusted analyses included three covariates: median number of hours worked per week for residential workers; median number of months of work experience for residential workers; and median age of residential workers. All analyses were performed with the statistical software Stata IC (version 11.0).

The analyses used three approaches to classifying the exposure, percent of ASL trucks. The first approach included two levels of exposure: (1) 0% and (2) >0 to <100%. The second approach included three levels of exposure: (1) 0%, (2) ≥1 to <100%, and (3) 100%. The third approach included six levels of exposure to improve detection of a dose-response relationship: (1) 0%, (2) 1 to <25%, (3) 25 to <50%, (4) 50 to <75%, (5) 75 to <100%, and (6) 100%. None of the relationships between injury and illness rate and percent ASL truck exposure levels was statistically significant (p < 0.05), even after adjustment for all three covariates.

In summary, the present study suggests some risk reduction impact of ASL trucks, but the effect was not statistically significant. The covariates showed varied yet more modest associations with the injury and illness rate, and again none of these relationships was statistically significant (p < 0.05). As an ecological study, the present study has the limitations inherent in such designs and warrants replication in an individual-level cohort design; stronger conclusions are not warranted.
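To make the modelling approach concrete, here is a minimal Python sketch of the kind of adjusted, facility-level regression described above (the study itself used Stata IC 11.0). The data file, column names and the three-level exposure coding are illustrative assumptions, not the study's actual code or data.

```python
# Sketch of the facility-level regression described in the abstract (not the
# study's actual Stata code). File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

facilities = pd.read_csv("facilities_2007.csv")  # one row per facility

# Exposure: percent of residential trucks that are ASL, binned into the
# three-level classification (0%, 1-<100%, 100%) used in the second approach.
facilities["asl_pct"] = 100 * facilities["asl_trucks"] / facilities["residential_trucks"]
facilities["asl_level"] = pd.cut(
    facilities["asl_pct"],
    bins=[-0.1, 0, 99.999, 100],
    labels=["0%", "1-<100%", "100%"],
)

# Adjusted model: TCIR regressed on exposure level plus the three covariates.
model = smf.ols(
    "tcir ~ C(asl_level) + median_hours_week + median_months_experience + median_age",
    data=facilities,
).fit()
print(model.summary())
```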
Abstract:
The Persian Gulf, situated in the arid climate region of the northern hemisphere, shows special conditions in its hydrochemistry. High evaporation, the lack of large rivers, and the exclusion of deep water from the Indian Ocean govern the nutrient cycle. At 28 stations in the deeper part of the Persian Gulf (Iranian side), in the Strait of Hormuz, and in the Gulf of Oman, determinations of dissolved oxygen, dissolved inorganic phosphate, silicate, and pH were carried out. The distribution of these components is shown on 4 selected transverse profiles for phosphate and dissolved oxygen and on 1 longitudinal profile for phosphate, silicate, oxygen, and pH, and the inflow and outflow are characterized. It is also pointed out that the nutrients are depleted on their way into the Persian Gulf and that they are temporarily replenished from a layer at about 100 m depth in the Indian Ocean. A horizontal map of the phosphate distribution in the surface and 30 m layers gives an indication of biological activity. A diagram in which nitrogen components are plotted against phosphate shows that nitrate is a limiting factor for productivity. O2/PO4-P and PO4-P/S diagrams enable the different water bodies and mixed layers to be characterized.
Abstract:
In central Antarctica, drainage today, and earlier back to the Paleozoic, radiates from the Gamburtsev Subglacial Mountains (GSM). Proximal to the GSM, beyond the Permian-Triassic fluvial sandstones in the Prince Charles Mountains (PCM), are Cretaceous, Eocene, and Pleistocene sediments in Prydz Bay (ODP 741, 1166, and 1167) and pre-Holocene sediment in AM04 beneath the Amery Ice Shelf. We analysed detrital zircons for U-Pb ages, Hf-isotope compositions, and trace elements to determine the age, rock type, source of the host magma, and "crustal" model age (T_DM^C). These samples, together with others downslope from the GSM and the Vostok Subglacial Highlands (VSH), define major clusters of detrital zircons interpreted as coming from (1) 700-460 Ma mafic granitoids and alkaline rocks, εHf +9 to -28, signifying derivation 2.5 to 1.3 Ga from fertile and recycled crust, and (2) 1200-900 Ma mafic granitoids and alkaline rocks, εHf +11 to -28, signifying derivation 1.8 to 1.3 Ga from fertile and recycled crust. Minor clusters extend to 3350 Ma. Similar detrital zircons in Permian-Triassic, Ordovician, Cambrian, and Neoproterozoic sandstones located along the Paleo-Pacific margin of East Antarctica and southeast Australia, further downslope from central Antarctica, reflect the upslope GSM-VSH nucleus of the central Antarctic provenance as a complex of 1200-900 Ma (Grenville) mafic granitoids and alkaline rocks and older rocks embedded in 700-460 Ma (Pan-Gondwanaland) fold belts. The wider central Antarctic provenance (CAP) is tentatively divided into a central sector with negative εHf in its 1200-900 Ma rocks, bounded on either side by sectors with positive εHf. The high ground of the GSM-VSH in the Permian, and later to the present day, is attributed to crustal shortening by far-field stress during the 320 Ma mid-Carboniferous collision of Gondwanaland and Laurussia. Earlier uplifts, in the ~500 Ma Cambrian, possibly followed the 700-500 Ma assembly of Gondwanaland and, in the Neoproterozoic, the 1000-900 Ma collisional events in the Eastern Ghats-Rayner Province at the end of the 1300-1000 Ma assembly of Rodinia.
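For readers unfamiliar with the εHf notation used above, the sketch below shows its standard definition: the deviation of a zircon's measured 176Hf/177Hf from the chondritic uniform reservoir (CHUR), in parts per 10^4. The CHUR reference value (following Bouvier et al., 2008) and the sample ratio are illustrative assumptions; an age-corrected εHf(t), as normally reported for detrital zircons, additionally requires the measured 176Lu/177Hf and the crystallization age.

```python
# Hedged illustration of the epsilon-Hf notation: deviation of a measured
# 176Hf/177Hf from the chondritic reservoir (CHUR) in parts per 10^4.
# The CHUR value is an assumed reference; the sample ratio is made up.

CHUR_176HF_177HF = 0.282785  # present-day chondritic 176Hf/177Hf (assumed)

def epsilon_hf(hf_ratio_sample: float, hf_ratio_chur: float = CHUR_176HF_177HF) -> float:
    """epsilon-Hf = (sample/CHUR - 1) * 10^4; strongly negative values imply an evolved, recycled crustal source."""
    return (hf_ratio_sample / hf_ratio_chur - 1.0) * 1.0e4

print(round(epsilon_hf(0.282000), 1))  # about -27.8, i.e. old recycled crust
```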
Abstract:
Late Cenozoic benthic foraminiferal faunas from the Caribbean Deep Sea Drilling Project (DSDP) Site 502 (3052 m) and East Pacific DSDP Site 503 (3572 m) were analyzed to interpret bottom-water masses and paleoceanographic changes occurring as the Isthmus of Panama emerged. Major changes during the past 7 Myr occur at 6.7-6.2, 3.4, 2.0, and 1.1 Ma in the Caribbean and 6.7-6.4, 4.0-3.2, 2.1, 1.4, and 0.7 Ma in the Pacific. Prior to 6.7 Ma, benthic foraminiferal faunas at both sites indicate the presence of Antarctic Bottom Water (AABW). After 6.7 Ma benthic foraminiferal faunas indicate a shift to warmer water masses: North Atlantic Deep Water (NADW) in the Caribbean and Pacific Deep Water (PDW) in the Pacific. Flow of NADW may have continued across the rising sill between the Caribbean and Pacific until 5.6 Ma when the Pacific benthic foraminiferal faunas suggest a decrease in bottom-water temperatures. After 5.6 Ma deep-water to intermediate-water flow across the sill appears to have stopped as the bottom-water masses on either side of the sill diverge. The second change recorded by benthic foraminiferal faunas occurs at 3.4 Ma in the Caribbean and 4.0-3.2 Ma in the Pacific. At this time the Caribbean is flooded with cold AABW, which is either gradually warmed or is replaced by Glacial Bottom Water (GBW) at 2.0 Ma and by NADW at 1.1 Ma. These changes are related to global climatic events and to the depth of the sill between the Caribbean and Atlantic rather than the rising Isthmus of Panama. Benthic foraminiferal faunas at East Pacific Site 503 indicate a gradual change from cold PDW to warmer PDW between 4.0 and 3.2 Ma. The PDW is replaced by the warmer, poorly oxygenated PIW at 2.1 Ma. Although the PDW affects the faunas during colder intervals between 1.4 and 0.7 Ma, the PIW remains the principal bottom-water mass in the Guatemala Basin of the East Pacific.
Abstract:
The study of glacier fronts combines different geomatics measurement techniques, such as classic surveying with a total station or theodolite, GNSS (Global Navigation Satellite System) techniques, laser scanning, and photogrammetry (aerial or terrestrial). Measurement by direct methods (classical surveying and GNSS) is useful and fast when the glacier front is easily accessible, but it is practically impossible for glacier fronts that end in the sea (tidewater glaciers). This paper presents a methodology that combines photogrammetric methods with other techniques to survey the front of Johnsons Glacier, which is inaccessible. The images of the front were obtained with a non-metric digital camera; georeferencing to a global coordinate system is performed by measuring GNSS control points in accessible areas at the sides of the glacier front and by applying direct intersection methods, with theodolite measurements, to inaccessible points on the front. The resulting observations were used to study the temporal evolution (1957-2014) of the position of the Johnsons Glacier front and of the fronts of the Argentina, Las Palmas and Sally Rocks lobes (Hurd Glacier).
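As a hedged illustration of the direct (forward) intersection step mentioned above, the sketch below computes the planimetric position of an inaccessible point from two occupied stations with known coordinates and measured horizontal azimuths. Station coordinates and angles are made-up values, not data from the Johnsons Glacier survey.

```python
# Forward intersection of two horizontal rays (azimuths clockwise from grid
# north) observed from two known stations; illustrative values only.
import math
import numpy as np

def forward_intersection(station_a, azimuth_a_deg, station_b, azimuth_b_deg):
    """Return the 2D intersection point of the two observed rays."""
    a, b = np.array(station_a, float), np.array(station_b, float)
    da = np.array([math.sin(math.radians(azimuth_a_deg)),
                   math.cos(math.radians(azimuth_a_deg))])
    db = np.array([math.sin(math.radians(azimuth_b_deg)),
                   math.cos(math.radians(azimuth_b_deg))])
    # Solve a + t*da = b + s*db for the ray parameters t and s.
    t, s = np.linalg.solve(np.column_stack([da, -db]), b - a)
    return a + t * da

p = forward_intersection((1000.0, 2000.0), 45.0, (1400.0, 2000.0), 315.0)
print(p)  # intersected point, in the same grid coordinates as the stations
```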
Abstract:
Recently, broadcast 3D video content has reached households with the first generation of 3DTV. However, few studies have analyzed the Quality of Experience (QoE) perceived by end-users in this scenario. This paper studies the impact of transmission errors in 3DTV, considering that the video is delivered in side-by-side format over a conventional packet-based network. For this purpose, a novel evaluation methodology based on standard single-stimulus methods, designed to keep the viewing conditions as close as possible to those of the home environment, has been proposed. The effects of packet losses on monoscopic and stereoscopic videos are compared using the results of subjective assessment tests. Other aspects of the 3D content were also measured, such as naturalness, sense of presence and visual fatigue. The results show that, although the final perceived QoE is acceptable, some errors cause important binocular rivalry and, therefore, substantial visual discomfort.
Abstract:
We present and evaluate a compiler from Prolog (and extensions) to JavaScript which makes it possible to use (constraint) logic programming to develop the client side of web applications while being compliant with current industry standards. Targeting JavaScript makes (C)LP programs executable in virtually every modern computing device with no additional software requirements from the point of view of the user. In turn, the use of a very high-level language facilitates the development of high-quality, complex software. The compiler is a back end of the Ciao system and supports most of its features, including its module system and its rich language extension mechanism based on packages. We present an overview of the compilation process and a detailed description of the run-time system, including the support for modular compilation into separate JavaScript code. We demonstrate the maturity of the compiler by testing it with complex code such as a CLP(FD) library written in Prolog with attributed variables. Finally, we validate our proposal by measuring the performance of some LP and CLP(FD) benchmarks running on top of major JavaScript engines.
Abstract:
The future Internet is expected to be composed of a mesh of interoperable web services accessible from all over the web. This approach has not yet caught on since global user-service interaction is still an open issue. This paper states one vision with regard to next-generation front-end Web 2.0 technology that will enable integrated access to services, contents and things in the future Internet. In this paper, we illustrate how front-ends that wrap traditional services and resources can be tailored to the needs of end users, converting end users into prosumers (creators and consumers of service-based applications). To do this, we propose an architecture that end users without programming skills can use to create front-ends, consult catalogues of resources tailored to their needs, easily integrate and coordinate front-ends and create composite applications to orchestrate services in their back-end. The paper includes a case study illustrating that current user-centred web development tools are at a very early stage of evolution. We provide statistical data on how the proposed architecture improves these tools. This paper is based on research conducted by the Service Front End (SFE) Open Alliance initiative.
Abstract:
A 3-year Project financed by the European Commission is aimed at developing a universal system to de-orbit satellites at their end of life, as a fundamental contribution to limit the increase of debris in the Space environment. The operational system involves a conductive tape tether left bare to establish anodic contact with the ambient plasma as a giant Langmuir probe. The Project will size the three disparate dimensions of a tape for a selected de-orbit mission and determine scaling laws to allow system design for a general mission. Starting at the second year, mission selection is carried out while developing numerical codes to implement control laws on tether dynamics in/off the orbital plane; performing numerical simulations and plasma chamber measurements on tether-plasma interaction; and completing design of subsystems: electron-ejecting plasma contactor, power module, interface elements, deployment mechanism, and tether-tape/end-mass. This will be followed by subsystems manufacturing and by current-collection, free-fall, and hypervelocity impact tests.
Abstract:
This project is based on the technologies used for object detection and recognition, especially of leaves and chromosomes. The document contains the typical parts of a scientific paper: an Abstract, an Introduction, chapters covering the research area, future work, conclusions and the references used in its elaboration. The Abstract describes what this paper covers, namely the technologies employed for pattern detection and recognition of leaves and chromosomes and the existing work on cataloguing these objects. The Introduction explains the meanings of detection and recognition. This is necessary because many papers confuse these terms, especially those dealing with chromosomes. Detecting an object means gathering the parts of the image that are useful and eliminating the useless parts; in short, detection amounts to recognizing the object's borders. Recognition, in contrast, is the process by which the computer or machine states what kind of object it is handling. Afterwards we present a compilation of the technologies most used for object detection in general. There are two main groups in this category: those based on image derivatives and those based on ASIFT points. The methods based on image derivatives have in common that the image is processed by convolving it with a previously created matrix. This is done to detect borders in the images, which are changes in pixel intensity. Within these technologies there are two groups (compared in the sketch after this paragraph): gradient-based methods, which search for maxima and minima of pixel intensity because they use only the first derivative, and Laplacian-based methods, which search for zeros of the second derivative of the intensity. The choice between them depends on the level of detail wanted in the final result: gradient-based methods consume fewer resources and less time because they involve fewer operations, but the quality is worse; Laplacian-based methods need more time and resources because they require more operations, but they give a much better quality result. After explaining the derivative-based methods, we review the different algorithms available for both groups. The other large group of technologies for object recognition is based on ASIFT points, which rely on six image parameters and compare one image with another taking these parameters into account. The disadvantage of these methods, for our future purposes, is that they are only valid for one single object: if we want to recognize two different leaves, even of the same species, we cannot recognize them with this method. It is still important to mention this type of technology because we are discussing recognition methods in general. At the end of the chapter there is a comparison of the pros and cons of all the technologies employed, first separately and then all together, in the light of our purposes. The chapter on recognition techniques is not very extensive because, even though there are general steps for object recognition, every object to be recognized requires its own method; there is therefore no general method that can be specified in that chapter.
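The sketch below illustrates, under stated assumptions, the two derivative-based families compared in that chapter: a gradient (first-derivative, Sobel) operator versus a Laplacian (second-derivative) operator applied to a grey-scale image. The file name and parameters are placeholders; this is not the project's own code.

```python
# Illustrative comparison of gradient-based and Laplacian-based edge
# detection; "leaf.png" is a placeholder file name.
import cv2
import numpy as np

gray = cv2.imread("leaf.png", cv2.IMREAD_GRAYSCALE)

# Gradient-based: first derivatives in x and y; edges are intensity extrema.
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
gradient_magnitude = np.hypot(gx, gy)

# Laplacian-based: second derivative; edges are zero-crossings. More detail,
# but more computation and more noise sensitivity, as discussed above.
laplacian = cv2.Laplacian(cv2.GaussianBlur(gray, (5, 5), 0), cv2.CV_64F)

cv2.imwrite("edges_gradient.png", cv2.convertScaleAbs(gradient_magnitude))
cv2.imwrite("edges_laplacian.png", cv2.convertScaleAbs(laplacian))
```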
We then move on to computer-based leaf detection techniques, using the derivative-based approach explained above. The next step is to turn the leaf into several parameters; depending on the document consulted, there are more or fewer of them. Some papers recommend dividing the leaf into 3 main features (shape, dent and vein) and, by doing mathematical operations with them, deriving up to 16 secondary features. Another proposal divides the leaf into 5 main features (diameter, physiological length, physiological width, area and perimeter) and from those extracts 12 secondary features. This second alternative is the most widely used, so it is taken as the reference. Moving on to leaf recognition, we rely on a paper that provides source code which, after clicking on both ends of the leaf, automatically reports the species to which the leaf belongs; the only requirement is a database. In the tests reported in that document, the authors claim 90.312% accuracy over 320 tests in total (32 plants in the database and 10 tests per species). The next chapter deals with chromosome detection, where the metaphase plate, in which the chromosomes are disorganized, must be converted into the karyotype plate, the usual view of the 23 chromosomes ordered by number. There are two types of techniques for this step: the skeletonization process and angle sweeping. Skeletonization consists of suppressing the interior pixels of the chromosome to keep only its outline (a minimal sketch appears at the end of this abstract); it is similar to the methods based on image derivatives, but the difference is that it does not detect the borders but the interior of the chromosome. The second technique consists of sweeping angles from the beginning of the chromosome and, taking into account that a single chromosome cannot bend by more than an angle X, detecting the various regions of the chromosome. Once the karyotype plate is defined, we continue with chromosome recognition. For this there is a technique based on the banding pattern (grey-scale bands) that makes each chromosome unique: the program detects the longitudinal axis of the chromosome and reconstructs the band profiles, after which the computer is able to recognize the chromosome. Concerning future work, we generally have two independent techniques that do not combine detection and recognition, so our main focus would be to prepare a program that brings both together. On the leaf side we have seen that detection and recognition are linked, as both share the option of dividing the leaf into 5 main features; the work to be done is to create an algorithm that links both methods, since the leaf-recognition program requires clicking on both ends of the leaf and is therefore not automatic. On the chromosome side, we should create an algorithm that searches for the beginning of the chromosome and then starts sweeping angles, later passing the parameters to the program that searches for the band profiles. Finally, the summary explains why this type of research is needed: with global warming, many species (animals and plants) are beginning to go extinct, which is why a large database gathering all possible species is needed. For recognizing an animal species, we only need to have its 23 chromosomes.
For recognizing a plant there are several options, but the easiest way to input it into a computer is to scan a leaf of the plant.
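As a small illustration of the skeletonization step described above for chromosome detection, the sketch below thins a binarized chromosome image to a one-pixel-wide skeleton with scikit-image. The input file name and the use of Otsu thresholding are assumptions for the example, not the method of any cited paper.

```python
# Skeletonization sketch: reduce a binarized chromosome blob to its
# one-pixel-wide medial line. "metaphase.png" is a placeholder file name.
import cv2
from skimage.morphology import skeletonize

gray = cv2.imread("metaphase.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
skeleton = skeletonize(binary > 0)   # boolean image: True along the medial axis
cv2.imwrite("skeleton.png", (skeleton * 255).astype("uint8"))
```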
Abstract:
Since the Digital Agenda for Europe released the Europe 2020 flagship, Member States have been looking for ways to fulfil their agreed commitments to fast and ultrafast internet deployment. However, Europe is not a homogeneous reality. The economic, geographic, social and demographic features of each country make it a highly diverse region in which to develop best practices for Next Generation Access Network (NGAN) deployments. There are special concerns about NGAN deployments for "the final third", referring to the last 25% of the country's population, who usually live in rural areas. This paper assesses, through a techno-economic analysis, the access cost of providing over 30 Mbps broadband to the final third of Spain's population, in municipalities classified into area types referred to as geotypes. Fixed and mobile technologies are compared in order to determine the most cost-effective technology for each geotype. The demographic limit for fixed networks (cable, fibre and copper) is also discussed. The assessment focuses on the supply side, and the results show the access network cost only. The research completes a previously published assessment (Techno-economic analysis of next generation access networks roll-out. The case of platform competition, regulation and public policy in Spain) by including an LTE scenario. The LTE scenario is dimensioned to provide 30 Mbps (best effort) broadband, considering a network take-up of 25%. The Rocket techno-economic model is used to assess the deployment over a ten-year study period. Nevertheless, the deployment must start in 2014 and be completed by 2020 in order to fulfil the Digital Agenda's goals. The feasibility of the deployment is defined as the ability to recoup the investment by the end of the study period. This ability is highly related to network take-up and, therefore, to service adoption. For clarity and simplicity, the deployment in each geotype is compared with the cost of deployment in the Urban geotype and with the expected broadband penetration rates. Discussing the cost-effective deployments for each geotype, while addressing the Digital Agenda's goals regarding fast and ultrafast internet, is the main purpose of this paper. At the end of last year, the independent Spanish regulatory agency released its report on broadband coverage in Spain for the first half of 2013. This document stated that 59% and 52% of Spain's population was already covered by NGANs capable of providing 30 Mbps and 100 Mbps broadband, respectively. HFC, with 47% population coverage, and FTTH, with 14%, were considered 100 Mbps-capable NGANs, while VDSL, with 12% of the population covered, was the only NGAN considered for the 30 Mbps segment. Although not an NGAN, the 99% population coverage of HSPA networks was also noted in the report. Since mobile operators are also required to provide 30 Mbps broadband to 90% of the population in rural areas by the end of 2020, mobile networks will play a significant role in achieving the 30 Mbps goal in Spain's final third. The assessment indicates the cost of deployment per cumulative household coverage for 4 different NGANs: FTTH, HFC, VDSL and LTE. The research shows that an investment ranging from €2,700 million (VDSL) to €5,400 million (HFC) will be needed to cover the first half of the population with any of the fixed technologies assessed. The results also indicate that at least €3,000 million will be required to cover these areas with the least expensive technology (LTE).
However, if we consider the throughput that fixed networks can provide and the achievement of the Digital Agenda's objectives, fixed network deployments are recommended for up to 90% of the population. Fibre and cable deployments could cover up to a maximum of 88% of the Spanish population cost-efficiently. As there are some concerns about service adoption, we recommend VDSL and mobile network deployments for the final third of the population. Although LTE provides the most economical roll-out, VDSL could also cost-efficiently provide 50 Mbps to the 75% to 90% population segment. For this population segment, facility-based competition between VDSL providers and LTE providers should be encouraged. For 90% to 98.5% of the Spanish population, LTE deployment is the most appropriate. Since customers in less populated municipalities are more sensitive to the cost of the service, we consider that a single network deployment could be most appropriate there. Finally, it has become clear that it is not possible to deliver 30 Mbps to the final 1.5% of the population cost-efficiently, and adoption predictions are not optimistic either. As there are other broadband alternatives able to deliver up to 20 Mbps, in the authors' opinion it is not necessary to cover these extreme rural areas, where public financing would be required.
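A toy sketch of the feasibility criterion used in the assessment (recouping the access investment by the end of the ten-year study period) follows. All figures (CAPEX per home passed, ARPU, margin, discount rate) are hypothetical placeholders; the paper's Rocket techno-economic model is considerably more detailed.

```python
# Toy feasibility check: do discounted per-home cash flows cover the per-home
# access CAPEX within the study period? All parameter values are made up.
def payback_reached(capex_per_home_passed, take_up, arpu_month, margin,
                    years=10, discount_rate=0.08):
    """Return True if the discounted cash flows recoup the CAPEX per home passed."""
    annual_cash = take_up * arpu_month * 12 * margin      # per home passed, per year
    npv = sum(annual_cash / (1 + discount_rate) ** t for t in range(1, years + 1))
    return npv >= capex_per_home_passed

# Example: a dense geotype versus a sparse rural geotype, same service assumptions.
print(payback_reached(capex_per_home_passed=250, take_up=0.25, arpu_month=30, margin=0.5))   # True
print(payback_reached(capex_per_home_passed=2500, take_up=0.25, arpu_month=30, margin=0.5))  # False
```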
Abstract:
MIMO techniques increase wireless channel performance by decreasing the BER and increasing the channel throughput, and are consequently included in current mobile communication standards. MIMO techniques are based on exploiting the multipath present in wireless communications and applying appropriate signal processing techniques. The singular value decomposition (SVD) is a popular signal processing technique which, based on perfect channel state information (PCSI) at both the transmitter and receiver sides, removes inter-antenna interference and improves channel performance. Nevertheless, the proximity of the multiple antennas at each front-end produces the so-called antenna correlation effect, due to the similarity of the various physical paths. As a consequence, antenna correlation degrades MIMO channel performance. This investigation focuses on the analysis of a MIMO channel under transmitter-side antenna correlation. First, the antenna correlation is analyzed and characterized by correlation coefficients. The analysis describes the relation between antenna correlation and the appearance of predominant layers, which significantly affect channel performance. Then, SVD-based pre- and post-processing is applied to remove inter-antenna interference. Finally, bit- and power-allocation strategies are applied to reach the best performance. The resulting BER reveals that the antenna correlation effect diminishes channel performance and that not all MIMO layers necessarily have to be activated to obtain the best performance.
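The following numpy sketch illustrates the SVD-based processing and the "predominant layer" effect described above: precoding with V and combining with U^H diagonalize the channel into parallel layers whose gains are the singular values, and a transmitter-side-correlated channel (modelled here with a simple exponential/Kronecker correlation and illustrative parameters) spreads those singular values so that only a few layers remain strong.

```python
# SVD-based MIMO processing sketch; channel model and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_tx = n_rx = 4

def tx_correlation(rho, n):
    """Exponential transmit-correlation matrix R[i, j] = rho**|i - j|."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

H_iid = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
H_corr = H_iid @ np.linalg.cholesky(tx_correlation(0.9, n_tx)).conj().T  # tx-side correlation

for name, H in [("uncorrelated", H_iid), ("tx-correlated", H_corr)]:
    U, s, Vh = np.linalg.svd(H)
    x = Vh.conj().T @ rng.standard_normal(n_tx)   # precode the layer symbols with V
    y = U.conj().T @ (H @ x)                      # combine the received signal with U^H
    # After pre-/post-processing the effective channel is diag(s): no
    # inter-antenna interference; small singular values mark weak layers.
    print(name, np.round(s, 2))
```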
Abstract:
Advanced glycation end products (AGEs) are thought to contribute to the abnormal lipoprotein profiles and increased risk of cardiovascular disease of patients with diabetes and renal failure, in part by preventing apolipoprotein B (apoB)-mediated cellular uptake of low density lipoproteins (LDL) by LDL receptors (LDLr). It has been proposed that AGE modification at one site in apoB, almost 1,800 residues from the putative apoB LDLr-binding domain, may be sufficient to induce an apoB conformational change that prevents binding to the LDLr. To further explore this hypothesis, we used 29 anti-human apoB mAbs to identify other potential sites on apoB that may be modified by in vitro advanced glycation of LDL. Glycation of LDL caused a time-dependent decrease in its ability to bind to the LDLr and in the immunoreactivity of six distinct apoB epitopes, including two that flank the apoB LDLr-binding domain. ApoB appears to be modified at multiple sites by these criteria, as the loss of glycation-sensitive epitopes was detected on both native glycated LDL and denatured, delipidated glycated apoB. Moreover, residues directly within the putative apoB LDLr-binding site are not apparently modified in glycated LDL. We propose that the inability of LDL modified by AGEs to bind to the LDLr is caused by modification of residues adjacent to the putative LDLr-binding site that were undetected by previous immunochemical studies. AGE modification either eliminates the direct participation of the residues in LDLr binding or indirectly alters the conformation of the apoB LDLr-binding site.
Abstract:
In the yeast Saccharomyces cerevisiae, microtubules are organized by the spindle pole body (SPB), which is embedded in the nuclear envelope. Microtubule organization requires the γ-tubulin complex containing the γ-tubulin Tub4p, Spc98p, and Spc97p. The Tub4p complex is associated with cytoplasmic and nuclear substructures of the SPB, which organize the cytoplasmic and nuclear microtubules. Here we present evidence that the Tub4p complex assembles in the cytoplasm and then either binds to the cytoplasmic side of the SPB or is imported into the nucleus followed by binding to the nuclear side of the SPB. Nuclear import of the Tub4p complex is mediated by the essential nuclear localization sequence of Spc98p. Our studies also indicate that Spc98p in the Tub4p complex is phosphorylated at the nuclear, but not at the cytoplasmic, side of the SPB. This phosphorylation is cell cycle dependent and occurs after SPB duplication and nucleation of microtubules by the new SPB and therefore may have a role in mitotic spindle function. In addition, activation of the mitotic checkpoint stimulates Spc98p phosphorylation. The kinase Mps1p, which functions in SPB duplication and mitotic checkpoint control, seems to be involved in Spc98p phosphorylation. Our results also suggest that the nuclear and cytoplasmic Tub4p complexes are regulated differently.