109 results for OSI


Relevance: 10.00%

Abstract:

BACKGROUND: Shoulder pain in nursing professionals can limit daily and occupational activities and interfere with quality of life. OBJECTIVE: To compare the effect of two physical therapy programmes, differentiated by proprioceptive exercises, in nursing workers with rotator cuff disorders, according to indicators of quality of life, job satisfaction and pain intensity. METHOD: This was an experimental, randomized, prospective, comparative study with quantitative data analysis. Data were collected from June 2010 to July 2011 using a sociodemographic and occupational questionnaire, the Western Ontario Rotator Cuff Index (WORC), the job satisfaction scale of the Occupational Stress Indicator, and a visual numeric scale for pain intensity. After randomization, the subjects were allocated to two groups. Group 1 (control) received stretching and strengthening exercises and cryotherapy. Group 2 (experimental) received the same exercises as Group 1 plus proprioceptive exercises. Data were analysed with the Statistical Package for the Social Sciences, version 16.0 for Windows. RESULTS: After the physical therapy treatments, pain improved significantly in both groups and quality of life improved in the workers of Group 2. Job satisfaction indicators did not change in either group. CONCLUSIONS: Proprioceptive exercises were important in the treatment of musculoskeletal disorders; however, the results do not allow us to infer that they are more effective than the other treatment, since there was no significant difference between the groups. Clinical trial registered at ClinicalTrials.gov, NCT01465932.

Relevance: 10.00%

Abstract:

Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters is the first step in discriminating whether a given seismic event is natural or not. If a specific event is considered suspicious by the majority of the State Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicentre of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is expected to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test: high-quality seismological systems are thought to be capable of detecting and locating very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques. The first, known as the double-difference joint hypocentre determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method have a high relative accuracy, although the absolute location of the whole cluster remains uncertain; we eliminate this problem by introducing a priori information, namely the known location of a selected event. The second technique concerns reliable estimates of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques we used cross-correlation of digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between waveforms of a pair of events at the same station, at the global scale, and on the similarity between waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary tests of the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using data recorded by the IMS. At first the algorithm was applied to the differences among the original arrival times of the P phases, so cross-correlation was not used. We found that the pronounced geometric scatter seen in the standard locations (namely those produced by the analysts of the International Data Center (IDC) of the CTBT Organization, taken as our reference) was considerably reduced by the application of our technique. This is what we expected, since the methodology was applied to a sequence of events whose hypocentres can be assumed to be genuinely close, belonging to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times are removed or at least reduced. The introduction of cross-correlation did not bring evident improvements: the two sets of locations (with and without the cross-correlation technique) are very similar to each other, which suggests that cross-correlation did not substantially improve the precision of the manual pickings. Probably the pickings reported by the IDC are good enough to make the random picking error less important than the systematic error on travel times. A further explanation for the limited benefit of cross-correlation is that the events in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller) and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to cross-correlation, we performed a signal interpolation in order to improve the time resolution. The resulting algorithm was applied to data collected during an experiment carried out in Israel between 1998 and 1999. The results pointed out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in poor SNR conditions). Another remarkable point of our procedure is that it does not require much time to process the data, so the user can check the results immediately. During a field survey, this feature makes a quasi real-time check possible, allowing immediate optimization of the array geometry if suggested by the results at an early stage.
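
The waveform cross-correlation step mentioned above can be illustrated with a small, self-contained sketch; it is not the thesis code, and the sampling rate, trace contents and use of NumPy/SciPy are assumptions made only for illustration. The sketch estimates the relative delay between two digitized traces from the peak of their cross-correlation function.

```python
# Minimal illustrative sketch (not the thesis implementation): estimate the
# relative delay between two waveforms from the peak of their cross-correlation.
import numpy as np
from scipy.signal import correlate

def relative_delay(trace_a, trace_b, sampling_rate_hz):
    """Delay (s) of trace_b with respect to trace_a (positive = b arrives later)."""
    a = (trace_a - trace_a.mean()) / trace_a.std()
    b = (trace_b - trace_b.mean()) / trace_b.std()
    cc = correlate(a, b, mode="full")               # cross-correlation function
    delay_samples = (len(b) - 1) - np.argmax(cc)    # lag of the correlation peak
    return delay_samples / sampling_rate_hz

# Synthetic check: two identical pulses, the second delayed by 0.05 s, sampled at 100 Hz.
t = np.arange(0.0, 5.0, 0.01)
pulse_a = np.exp(-((t - 2.00) ** 2) / 0.01)
pulse_b = np.exp(-((t - 2.05) ** 2) / 0.01)
print(relative_delay(pulse_a, pulse_b, 100.0))      # approximately 0.05
```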

Relevance: 10.00%

Abstract:

A multidisciplinary study was carried out on the Late Quaternary-Holocene subsurface deposits of two Mediterranean coastal areas: the Arno coastal plain (Northern Tyrrhenian Sea) and the Modern Po Delta (Northern Adriatic Sea). Detailed facies analyses, including sedimentological and micropalaeontological (benthic foraminifers and ostracods) investigations, were performed on nine continuously cored boreholes of variable depth (ca. 30 m to 100 m). Six cores were located in the Arno coastal plain and three cores in the Modern Po Delta. To provide an accurate chronological framework, twenty-four organic-rich samples were collected along the fossil successions for radiocarbon dating (AMS 14C). In order to reconstruct the depositional and palaeoenvironmental evolution of the study areas, core data were combined with selected well logs, provided by local companies, along several stratigraphic sections. These sections revealed the presence of a transgressive-regressive (T-R) sequence, composed of continental, coastal and shallow-marine deposits dated to the Late Pleistocene-Holocene period, beneath the Arno coastal plain and the Modern Po Delta. Above the alluvial deposits attributed to the last glacial period, the post-glacial transgressive succession (TST) consists of back-barrier, transgressive barrier and inner shelf deposits. The peak of transgression (MFS) took place around the Late-Middle Holocene transition and was identified by subtle micropalaeontological indicators within undifferentiated fine-grained deposits. Upward, a thick prograding succession (HST) records the turnaround to regressive conditions that led to a rapid delta progradation in both study areas. In particular, the outbuilding of the modern Po Delta coincides with mud-belt formation during the late HST (ca. 600 cal yr BP), as evidenced by a fossil microfauna similar to the foraminiferal assemblage observed in the present Northern Adriatic mud-belt. A complex interaction between allocyclic and autocyclic factors controlled facies evolution during the highstand period. The presence of local parameters and the absence of a predominant factor prevent us from discerning or quantifying the consequences of the complex relationships between climate and deltaic evolution. By contrast, transgressive sedimentation seems to be mainly controlled by two key allocyclic factors, sea-level rise and climate variability, which minimized the effects of local parameters on coastal palaeoenvironments. The TST depositional architecture recorded in both study areas reflects the well-known millennial-scale variability of the sea-level rising trend and of climate during the Late glacial-Holocene period. Repeated phases of backswamp development and infilling by crevasse processes (parasequences) were recorded in the subsurface of the Modern Po Delta during the early stages of transgression (ca. 11,000-9,500 cal yr BP). In the Arno coastal plain, the presence of a deep incised-valley system, probably formed at the OSI 3/2 transition, led to the development of a thick (ca. 35-40 m) transgressive succession composed of coastal plain, bay-head delta and estuarine deposits dated to the Last glacial-Early Holocene period. Within the transgressive valley fill, high-resolution facies analyses allowed the identification and lateral tracing of three parasequences of millennial duration. The parasequences, ca. 8-12 m thick, are bounded by flooding surfaces and show a typical internal shallowing-upward trend evidenced by subtle micropalaeontological investigations. The vertical stacking pattern of the parasequences shows a close affinity with the step-like sea-level rise that occurred between 14,000 and 8,000 cal yr BP. Episodes of rapid sea-level rise and the subsequent stillstand phases were paralleled by changes in climatic conditions, as suggested by pollen analyses performed on a core drilled in the proximal section of the Arno palaeovalley (pollen analyses by Dr. Marianna Ricci Lucchi). Rapid shifts to warmer climate conditions accompanied episodes of rapid sea-level rise; by contrast, stillstand phases occurred during temporarily colder climate conditions. For the first time, the palaeoclimatic signature of high-frequency depositional cycles is clearly documented. Moreover, two of the three "regressive" pulsations, recorded at the top of the parasequences by episodes of partial estuary infilling in the proximal and central portions of the Arno palaeovalley, may be correlated with the most important cold events of the post-glacial period: the Younger Dryas and the 8,200 cal yr BP event. The stratigraphic and palaeoclimatic data of the Arno coastal plain and the Po Delta were compared with those reported in the worldwide literature for the most important deltaic and coastal systems. The depositional architecture of the transgressive successions reflects the strong influence of millennial-scale eustatic and climatic variability on worldwide coastal sedimentation during the Late glacial-Holocene period (ca. 14,000-7,000 cal yr BP). The most complete and accurate record of high-frequency eustatic and climatic events is usually found within the transgressive successions of very high-accommodation settings, such as incised-valley systems, where exceptionally thick packages of Late glacial-Early Holocene deposits are preserved.

Relevance: 10.00%

Abstract:

The work is divided into three macro-areas. The first is a theoretical analysis of how intrusions work, which software is used to carry them out, and how to protect against them (using the devices generically known as firewalls). The second macro-area analyses an intrusion that took place from the outside towards sensitive servers on a LAN. This analysis is conducted on the files captured by the two network interfaces configured in promiscuous mode on a probe placed in the LAN; two interfaces are used so that the probe can attach to two LAN segments with different subnet masks. The attack is analysed with several tools, and this effectively constitutes a third part of the work: the captured files are examined first with software that analyses full-content data, such as Wireshark, then with software that analyses session data, handled with Argus, and finally with software for statistical data, handled with Ntop. The penultimate chapter, before the conclusions, covers the installation of Nagios and its configuration for monitoring, via plugins, the remaining disk space on a remote agent machine and the MySQL and DNS services. Naturally, Nagios can be configured to monitor any type of service offered on the network.
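
As a rough illustration of the kind of session-level summary that tools such as Argus produce from a capture, the sketch below groups the packets of a pcap file by source, destination and destination port; the file name and the use of Scapy are assumptions made for illustration only and are not part of the set-up described above.

```python
# Illustrative sketch only (assumed file name and library): summarize a capture
# into per-flow packet counts, similar in spirit to Argus session data.
from collections import Counter
from scapy.all import rdpcap, IP, TCP

packets = rdpcap("capture.pcap")        # hypothetical capture from the probe
flows = Counter()
for pkt in packets:
    if IP in pkt and TCP in pkt:
        flows[(pkt[IP].src, pkt[IP].dst, pkt[TCP].dport)] += 1

for (src, dst, dport), count in flows.most_common(10):
    print(f"{src} -> {dst}:{dport}  {count} packets")
```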

Relevance: 10.00%

Abstract:

Foods that provide medical and health benefits or play a role in disease risk prevention are termed functional foods. Their functionality derives from bioactive compounds, extranutritional constituents present in small quantities in food. Bioactive components include a range of chemical compounds with varying structures, such as carotenoids, flavonoids, plant sterols, omega-3 (n-3) fatty acids, allyl and diallyl sulfides, indoles (benzopyrroles), and phenolic acids. Increasing consumer interest in natural bioactive compounds has brought about a rise in demand for these kinds of compounds and, in parallel, a growing number of scientific studies take these substances as their main topic. The principal aim of this PhD research project was the study of different bioactive and toxic compounds in several natural matrices. To achieve this goal, chromatographic, spectroscopic and sensory analyses were performed. This manuscript reports the main results obtained in six activities, briefly summarized as follows:
• SECTION I: the influence of conventional packaging on lipid oxidation of pasta was evaluated in egg spaghetti.
• SECTION II: the effect of storage at different temperatures on virgin olive oil was monitored by peroxide value, fatty acid activity, the OSI test and sensory analysis.
• SECTION III: the glucosinolate and phenolic content of 37 rocket salad accessions was evaluated, comparing the species Eruca sativa and Diplotaxis tenuifolia; sensory analysis and the influence of the phenolic and glucosinolate composition on the sensory attributes of rocket salads were also studied.
• SECTION IV: ten buckwheat honeys were characterised on the basis of their pollen, physicochemical, phenolic and volatile composition.
• SECTION V: the polyphenolic fraction (anthocyanins and other polar compounds), the antioxidant capacity and the anti-hyperlipidemic action of the aqueous extract of Hibiscus sabdariffa were investigated.
• SECTION VI: a normal-phase high-pressure liquid chromatography–fluorescence detection method for the quantitation of flavanols and procyanidins in cocoa powder and chocolate samples was optimized.

Relevance: 10.00%

Abstract:

This thesis aims to understand and evaluate whether the SDN paradigm, explained in Chapter 1, can be used effectively to implement systems for the protection and security of a network of any size. In addition to introducing the SDN paradigm and its basic components, the fundamental OpenFlow protocol, used for the management of the various components, is introduced. To reach this goal, some preliminary steps were taken. First of all, we studied what SDN is. It introduces a potential innovation in the use of the network: the combination of a global view of the whole network and its programmability makes traffic management a rather complicated process at the application level, but with a very good result in terms of flexibility. The changes that SDN introduces to the network architecture must be evaluated to ensure that network security is maintained. Software Defined Networks (as we will see in the first chapters) are able to interact across all layers of the ISO/OSI model, and this characteristic can create problems. In today's networks, when operating in a "confined" environment, it is easy both to predict what could happen and to trace the less easily detectable events; when several layers are managed, however, the situation becomes much more complex, because there are more factors to handle, the variability of the possible cases grows strongly, and it also becomes harder to distinguish legitimate cases from illegitimate ones. On the basis of these complications, we asked whether SDN has security issues and how it could be used for security. To answer this question, we reviewed the literature on the subject, indicating in Chapter 3 some of the solutions that have been studied. We then described the tools used for the creation and management of these networks (Chapter 4) and finally (Chapter 5) implemented a case study to understand which problems have to be faced in practice. All the steps identified are then described in detail, and some conclusions are drawn on the basis of the experience gained.
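
Purely as an illustration of how an OpenFlow controller can enforce a simple security policy, the sketch below shows a Ryu application (a framework chosen here only for the example; it is not necessarily the one used in the thesis) that installs a rule dropping IPv4 traffic from a hypothetical blacklisted address on every switch that connects.

```python
# Illustrative sketch only: a Ryu controller application that installs an
# OpenFlow 1.3 rule dropping IPv4 traffic from a hypothetical blacklisted host.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class DropBlacklistedSource(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]
    BLACKLIST = ["10.0.0.66"]   # hypothetical address, for illustration only

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        parser = datapath.ofproto_parser
        for src in self.BLACKLIST:
            match = parser.OFPMatch(eth_type=0x0800, ipv4_src=src)
            # A flow entry with no instructions drops matching packets.
            mod = parser.OFPFlowMod(datapath=datapath, priority=100,
                                    match=match, instructions=[])
            datapath.send_msg(mod)
```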

Relevance: 10.00%

Abstract:

This document examines the new possibilities offered to operators in the telecommunications network world by the Network Functions Virtualization, Cloud Computing and Software Defined Networking paradigms: new approaches that allow the creation of dynamic and highly programmable networks without sacrificing too much on the performance side. The final aim is to evaluate whether, with such an approach, chains of network services can be set up dynamically and whether the final performance reflects what is theorized by these paradigms. All of this is evaluated in the search for an effective solution to the problem of the ossification of the Internet: network appliances, known as middle-boxes, entail high costs, vendor lock-in and static networks, making it impossible for providers to develop new services. The case study is based precisely on a network that implements these new paradigms: two different topologies are considered, one at layer L2 of the OSI model (the data link layer) and one at layer L3 (the network layer). The measurements carried out show that the theorized potential is decidedly interesting and innovative, opening up endless possibilities for the future development of this field.

Relevance: 10.00%

Abstract:

This document addresses the novelties and advantages introduced in the telecommunications network world by the Software Defined Networking and Network Functions Virtualization paradigms, first covering the theoretical aspects and then applying the concepts in practice through progressively more complex case studies. These innovations represent an evolution of the architecture of networks designed for many users connected to the resources they offer, and therefore find application above all in the emerging Cloud Computing environment, producing highly dynamic and programmable networks through the virtualization of the network services required to optimize resource usage. The motivation for this work is the search for solutions to the problems of staticity and of dependence on the vendors of intermediate nodes in the Internet, the main obstacles to the development of Cloud architectures. The main objective of the study presented in this document is to evaluate the actual convenience of applying these paradigms to the creation of networks, checking that the promised increase in autonomy and dynamism is fulfilled. This goal is pursued through the implementation of both the SDN and NFV paradigms in experiments carried out on networks at layers L2 and L3 of the OSI model. The result obtained from these case studies is an interesting confirmation of the advantages presented during the theoretical study of the innovations under analysis, making them a possible future solution to the current problems of networks.

Relevance: 10.00%

Abstract:

We investigate the long-time dynamics of a strong glass former, SiO2, below the glass transition temperature by averaging single-particle trajectories over time windows that comprise roughly 100 particle oscillations. The structure on this coarse-grained time scale is very well defined in terms of coordination numbers, allowing us to identify ill-coordinated atoms, which are called defects in the following. The most numerous defects are O-O neighbors, whose lifetimes are comparable to the equilibration time at low temperature. On the other hand, SiO and OSi defects are very rare and short-lived. The lifetime of defects is found to be strongly temperature dependent, consistent with activated processes. Single-particle jumps give rise to local structural rearrangements. We show that in SiO2 these structural rearrangements are coupled to the creation or annihilation of defects, giving rise to very strong correlations between jumping atoms and defects.
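
A rough sketch of the coordination-number criterion used above to flag ill-coordinated atoms is given below; the 2.0 Å cutoff, the cubic periodic box, the array names and the expected Si coordination of 4 are illustrative assumptions, not the values used in the study.

```python
# Illustrative sketch (assumed parameters): count O neighbours of each Si atom
# within a distance cutoff, flagging atoms whose coordination deviates from 4.
import numpy as np

def si_coordination(si_pos, o_pos, box_length, cutoff=2.0):
    """si_pos: (Nsi, 3) and o_pos: (No, 3) coordinates in a cubic periodic box."""
    d = si_pos[:, None, :] - o_pos[None, :, :]
    d -= box_length * np.round(d / box_length)       # minimum-image convention
    dist = np.linalg.norm(d, axis=-1)
    coordination = (dist < cutoff).sum(axis=1)        # O neighbours per Si atom
    defects = np.where(coordination != 4)[0]          # ill-coordinated Si atoms
    return coordination, defects
```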

Relevance: 10.00%

Abstract:

The awakening of national consciousness in Bohemia went hand in hand with an anxiety about national disappearance. In this context, recourse to Pan-Slavism was for the Czechs a way to encourage themselves through the idea of belonging to a great Slavic world, and the Slavic Congress organized in Prague in 1848 was an attempt to realize this ideal. The Congress was a failure from the political point of view, but it did have some socio-cultural repercussions: notably, it served as a pretext for the advancement of women's issues in Bohemia. It is indeed in the wake of the Congress that Honorata z Wiśniowskich Zapová, a Polish woman who settled in Prague after her marriage to a Czech intellectual, founded, under the guise of collaboration among all Slavic women, the first women's association, as well as a (very short-lived) Czech-Polish institute where Czech as well as Polish girls could receive a quality education in their mother tongue. Honorata was undoubtedly the source of the wind of Polonophilia that seemed to blow over the Czech emancipation movement in the second half of the nineteenth century. In particular, Karolina Světlá showed in her Memoirs great appreciation for Honorata's efforts in matters of emancipation and education, and explicitly took up the challenge launched by the latter by founding another women's association and inaugurating a school for underprivileged girls. But the tribute Světlá paid to Honorata is even more evident in her literary work, where Poland and the Polish woman (who often bears Honorata's features) play a significant role (see, for example, her short novel Sisters or her story A Few Days in the Life of a Prague Dandy). Světlá was probably the Czech feminist writer who, in her activities and in her work, relied most strongly on the Polish woman as a model for the Czech woman. However, she was not alone: in general, it was characteristic of the Czech feminist movement of the second half of the nineteenth century to look to the Polish woman and to Poland as a point of comparison and as a goal to be achieved.

Relevance: 10.00%

Abstract:

Analysis for micromolar concentrations of nitrate and nitrite, nitrite, phosphate, silicate and ammonia was undertaken on a SEAL Analytical UK Ltd AA3 segmented flow autoanalyser following methods described by Kirkwood (1996). Samples were drawn from Niskin bottles on the CTD into 15 ml polycarbonate centrifuge tubes and kept refrigerated at approximately 4 °C until analysis, which generally commenced within 30 minutes. Overall 23 runs with 597 samples were analysed: a total of 502 CTD samples, 69 underway samples and 26 from other sources. An artificial seawater matrix (ASW) of 40 g/litre sodium chloride was used as the inter-sample wash and standard matrix. The nutrient-free status of this solution was checked by running Ocean Scientific International (OSI) low nutrient seawater (LNS) on every run. A single set of mixed standards was made up by diluting 5 mM solutions made from weighed dried salts in 1 litre of ASW into plastic 250 ml volumetric flasks that had been cleaned by washing in MilliQ water (MQ). Data processing was undertaken using SEAL Analytical UK Ltd proprietary software (AACE 6.07) and was performed within a few hours of the run being finished. The sample time was 60 seconds and the wash time was 30 seconds. The lines were washed daily with wash solutions specific to each chemistry, comprising MQ, MQ and SDS, MQ and Triton-X, or MQ and Brij-35. Three times during the cruise the phosphate and silicate channels were washed with a weak sodium hypochlorite solution.
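
As a purely hypothetical worked example of the dilution arithmetic behind such working standards (the 10 µM target concentration is an assumption for illustration; the actual working concentrations are not stated above), the volume of 5 mM stock required for a 250 ml flask follows from C1·V1 = C2·V2:

```python
# Hypothetical worked example (target concentration assumed, not stated above).
stock_uM = 5000.0     # 5 mM stock expressed in micromolar
target_uM = 10.0      # assumed working-standard concentration
flask_ml = 250.0      # volumetric flask size
stock_ml = target_uM * flask_ml / stock_uM
print(f"Pipette {stock_ml:.2f} ml of stock into the {flask_ml:.0f} ml flask")  # 0.50 ml
```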

Relevance: 10.00%

Abstract:

This final-year project was carried out under the supervision and approval of the company BT S.A.U. The document is intended as a basic manual introducing the reader to the elements, tools and procedures needed to carry out an optical fibre deployment project in Spain. It is organised into the following topics. Topic 1 explains access networks and the different FTTx topologies needed to reach the customer, together with the technologies and elements used for the connection. Topic 2 explains the Spanish regulation applicable to the deployment of access networks: the OBA regulation, covering co-location in Telefónica's exchanges, and the MARCO regulation, covering the sharing of Telefónica's infrastructure with other operators. Topic 3 explains the NEON tool, needed to communicate with Telefónica when requesting the shared use of its infrastructure, and includes a section on municipal legislation for the deployment of fibre networks. Topic 4 presents the field-work procedure: cable-laying procedures, civil works and, finally, the measurements used to verify the link. Topic 5 walks through an example of a real deployment, from the feasibility study to the final laying of the cable. Topic 6 presents conclusions and future work. The project is approached from the point of view of the infrastructure (physical layer of the OSI model).

Relevance: 10.00%

Abstract:

Telecommunications infrastructure forms the physical layer over which communications are transmitted. In the OSI model, the physical layer converts the frame received from the data link layer into a series of bits that are sent through the corresponding transmission medium towards the destination system, freeing the upper layer from the functions imposed by the particular nature of the transmission medium in use. To this end, it defines the mechanical, electrical and functional characteristics of the interconnection to the physical medium and also establishes an interface with its upper layer (the data link layer). Depending on the medium and transmission mode, the network topology, the type of line coding and configuration, and the type of communication required, different equipment is needed, and the communications infrastructure changes accordingly. The complexity of communications networks (a multitude of services to a multitude of destinations) makes managing the physical (infrastructure) layer a difficult challenge for telecommunications managers in companies and public bodies, since the correct administration of telecommunications infrastructure is a key factor in guaranteeing quality of service, optimising provisioning times for customers and minimising network unavailability when incidents occur. Although different tools exist for telecommunications management, most of these solutions cover the physical layer only in a limited way, leaving managers with a multitude of more or less manual approaches to understand what is happening in their network at the physical level and, perhaps more seriously, without the ability to react quickly when an incident appears. Solving this problem requires end-to-end management of circuits and of all their intermediate connections; that is, a methodology must be introduced that models the communications network so that it can be represented in an information system, which in turn supports the management of the physical circuits and their associated infrastructure. Accordingly, the first part of the project describes the type of telecommunications infrastructure to be managed, surveys current network management solutions and analyses the strategies being considered to enable management of the physical layer. The second part is devoted to defining a methodology for representing the physical layer in an information system, so as to provide organisations with a complete solution for the effective management of their telecommunications infrastructure. The third part focuses on a real pilot implementation of this methodology for a specific communications network project, in order to demonstrate the capabilities of the proposed solution.
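
A minimal sketch of what "representing the physical layer in an information system" can look like is given below; the class and field names are invented for illustration and do not correspond to the methodology defined in the thesis.

```python
# Purely illustrative sketch (hypothetical names): a minimal data model for a
# physical circuit, so that every intermediate connection can be stored and
# the end-to-end path queried from an IT system.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Connection:
    site: str        # building or exchange where the connection sits
    equipment: str   # patch panel, ODF, switch, ...
    port: str

@dataclass
class Circuit:
    circuit_id: str
    a_end: str
    z_end: str
    connections: List[Connection] = field(default_factory=list)

    def path(self) -> str:
        hops = " -> ".join(f"{c.site}/{c.equipment}:{c.port}" for c in self.connections)
        return f"{self.a_end} -> {hops} -> {self.z_end}"
```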

Relevance: 10.00%

Abstract:

This thesis proposes a network management model that incorporates, as part of its definition, the management and mass dissemination of information over non-proprietary networks such as the Internet. The model solves one of the most common problems for any application, whether management-oriented or user-oriented, by providing a simple, intuitive and standardised mechanism for transferring large volumes of information, regardless of its location or of the access protocols. Some of the most notable characteristics of the model can be summarised in the following ideas. Building tools according to the model allows application developers to focus on the application logic, instead of having to solve, usually in an ad hoc fashion, the problem of transferring large volumes of information. The model has been conceived to be compatible with existing standards, taking as its basis the framework defined by ISO/IEC for OSI systems. So that information is disseminated in a scalable way and with the smallest possible impact on the communications network, a dissemination method based on collaborative and multicast techniques is proposed. The main application areas of the model are: management systems such as network backup, business continuity and high availability systems, or computer network maintenance systems; user applications such as eBusiness applications, file sharing systems or multimedia systems; and, in general, any type of application that involves the transfer of large volumes of information over wide area networks such as the Internet. The main results of the research are: a network management model, an information dissemination mechanism embodied in a transport protocol and a user protocol, and a library that implements the proposed protocols. To validate the proposal, a real test scenario was built and a prototype of a complete node recovery system was implemented, based on the model and using the libraries created. The scalability, load and transfer-time tests carried out on the prototype verified the validity and suitability of the proposal, showing that it scales with the size of the managed system, is simple to deploy and is compatible with existing systems.
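
As an illustration of the multicast building block on which a collaborative dissemination scheme can rest (not the transport or user protocol defined in the thesis), the sketch below sends a single UDP datagram to an example multicast group; the group address, port and payload are placeholders.

```python
# Illustrative building block only (placeholder group, port and payload):
# send a UDP datagram to a multicast group, the primitive on which a
# collaborative mass-dissemination scheme can be built.
import socket

GROUP, PORT = "239.1.1.1", 5007   # example administratively scoped group

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)  # limit scope
sock.sendto(b"block-0001: payload chunk", (GROUP, PORT))
```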

Relevance: 10.00%

Abstract:

This study was part of a larger scoping review and environmental scan conducted for Veterans Affairs Canada on the effects of operational stress injuries (OSIs) on the mental health and wellbeing of Veterans’ families. This paper focuses broadly on the relationships between combat (and/or deployment more generally), OSIs (primarily post-traumatic stress disorder (PTSD)), and the family. Based on the scoping review, the paper finds that existing research investigates the impacts of a Veteran’s OSI on the family, but also how various aspects of the family (such as family functioning, family support, etc.) can impact a Veteran living with an OSI.