905 results for Ethernet (Local area network system)
Abstract:
This project comprises the complete design of a local DTT (digital terrestrial television) distribution network using SFN (Single Frequency Network) broadcasting. This type of broadcasting delivers the television services on a single frequency over a coverage area, whether local or regional, exploiting signal reflections in the interference zones; it thus avoids assigning a different frequency to each of the transmission centres that make up a coverage area. For the distribution network, an IP network with multicast delivery was chosen, since this is the prevailing technology today: analogue distribution is now obsolete, as it consumes far more resources and is therefore much more costly to implement. The document is divided into four chapters. The first chapter gives a theoretical introduction to SFN distribution networks, focusing on the calculation of delays, a fundamental point in the design of such networks. It continues with basic notions of IP networks and the multicast protocol on which the signal transport is based. Chapter two focuses on the design of the network, from the production centres, where the programmes to be broadcast are generated, through the multiplexing centre, where the head-end that assembles the multiplex is located, to the various transmission centres that cover the whole required coverage area. The equipment and the design of the different centres in the network (production, multiplexing and transmission) are described, and the signal delay calculation required in this type of network is performed. Chapter three describes the configuration of the network, both at the equipment level and in the IP addressing plan of the whole network, separating the service network from the management network for greater reliability and efficiency. The document closes with a description of network management: a set of tools that provide real-time monitoring of the entire system, making it possible to anticipate and prevent incidents that could degrade the service delivered to the end user.
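The multicast IP distribution described above can be illustrated with a short Python sketch of a sender that pushes MPEG-TS packets to a multicast group. The group address, port and TTL below are illustrative placeholders, not values from the project's addressing plan:

```python
import socket
import struct

# Hypothetical multicast parameters (not from the project's IP plan).
MCAST_GROUP = "239.1.1.1"
MCAST_PORT = 5004

def make_multicast_sender(ttl: int = 8) -> socket.socket:
    """Create a UDP socket configured for multicast transmission.

    The TTL bounds how many router hops the stream may cross, limiting
    the multicast scope to the service network.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    # IP_MULTICAST_TTL expects a single packed byte.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL,
                    struct.pack("b", ttl))
    return sock

def send_ts_packet(sock: socket.socket, payload: bytes) -> int:
    """Send one transport-stream chunk to the multicast group."""
    return sock.sendto(payload, (MCAST_GROUP, MCAST_PORT))

sender = make_multicast_sender(ttl=4)
# An MPEG-TS packet is 188 bytes long and starts with the sync byte 0x47.
ts_packet = bytes([0x47]) + bytes(187)
# send_ts_packet(sender, ts_packet)  # would transmit the packet on a live network
sender.close()
```

With multicast, every transmission centre that joins the group receives the same stream, so the head-end sends the multiplex once regardless of how many SFN sites subscribe.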
Abstract:
The phase equilibria in the Fe-Zn-O system in the range 900–1580 °C in air have been experimentally studied using equilibration and quenching techniques. The compositions of the phases at equilibrium were determined using electron probe X-ray microanalysis (EPMA). The ferrous and ferric bulk iron concentrations were measured by wet chemical analysis using the ammonium metavanadate technique. X-ray powder diffraction analysis (XRD) was used to characterise the phases. Iron oxide dissolved in zincite was found to be present principally in the ferric form. The XRD analysis and the composition measurements both indicate that zincite is the only phase stable in the ZnO-rich area in the range of conditions investigated. The solubility of the iron oxide in zincite increases rapidly at temperatures above 1200 °C; the morphology of the zincite crystals also changes sharply between 1200 and 1300 °C from rounded to plate-like crystals. The plate-like zincite forms a refractory network, the type of microstructure beneficial to Imperial Smelting Process (ISP) sinter performance. The software program FactSage with a thermodynamically optimised database was used to predict phase equilibria in the Fe-Zn-O system.
Abstract:
The development of a distributed information measurement and control system for optical spectral research on particle-beam and plasma objects, and for laboratory classes at the Physics and Engineering Department of Petrozavodsk State University, is described. At the hardware level the system is a complex of automated workplaces joined into a computer network. The key element of the system is the communication server, which supports multi-user mode, distributes resources among clients, monitors the system and provides secure access. The other system components are the equipment servers (CAMAC and GPIB servers, a server for access to MCS-196 microcontrollers, and others) and the client programs that carry out data acquisition, accumulation and processing, as well as management of the course of the experiment. The network interface designed by the authors is also discussed in this work. The interface connects measuring and actuating devices to the distributed information measurement and control system via Ethernet, allowing experimental parameters to be controlled through digital devices and monitored by polling analog and digital sensors. The device firmware is written in assembly language and includes libraries for forming Ethernet, IP, TCP and UDP packets.
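The equipment-server/client split described above can be sketched in a few lines of Python: a server owning a device answers read requests from data-acquisition clients over TCP. The register names, command syntax and values are invented for illustration and do not reflect the actual CAMAC/GPIB protocols:

```python
import socket
import threading

# Hypothetical register map of a lab device exposed by an equipment server.
REGISTERS = {"temp": "293.5", "pressure": "101.2"}

def equipment_server(listener: socket.socket) -> None:
    """Serve one client: reply to 'READ <name>' requests, stop on 'QUIT'."""
    conn, _ = listener.accept()
    with conn:
        for line in conn.makefile("r"):
            cmd = line.strip().split()
            if not cmd or cmd[0] == "QUIT":
                break
            if cmd[0] == "READ" and len(cmd) > 1 and cmd[1] in REGISTERS:
                conn.sendall((REGISTERS[cmd[1]] + "\n").encode())
            else:
                conn.sendall(b"ERROR\n")

# Bind to an ephemeral loopback port and serve a single client in a thread.
listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]
threading.Thread(target=equipment_server, args=(listener,), daemon=True).start()

# Client side: poll one sensor, as the data-acquisition programs would.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"READ temp\nQUIT\n")
reply = client.makefile("r").readline().strip()
client.close()
listener.close()
```

In the real system the communication server would sit between clients and such equipment servers, multiplexing requests and enforcing access control.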
Abstract:
How do local homeland security organizations respond to catastrophic events such as hurricanes and acts of terrorism? Among the most important aspects of this response is these organizations' ability to adapt to the uncertain nature of these "focusing events" (Birkland 1997). They are often behind the curve, seeing response as a linear process, when in fact it is a complex, multifaceted process that requires understanding the interactions between the fiscal pressures facing local governments, the institutional pressures of working within a new regulatory framework, and the political pressures of bringing together different levels of government with different perspectives and agendas. This dissertation traces the factors affecting the individuals and institutions planning for, preparing for, responding to, and recovering from natural and man-made disasters. Using social network analysis, my study analyzes the interactions between the individuals and institutions that respond to these "focusing events." In practice, it is the combination of budgetary, institutional, and political pressures or constraints interacting with each other that resembles a Complex Adaptive System (CAS). To investigate this system, my study evaluates the evolution of two separate sets of organizations composed of first responders (fire chiefs, emergency management coordinators) and community volunteers organized in the state of Florida over the last fifteen years. Using a social network analysis approach, my dissertation analyzes the interactions between Citizen Corps Councils (CCCs) and Community Emergency Response Teams (CERTs) in the state of Florida from 1996 to 2011. It is the pattern of interconnections that occurs over time that is the focus of this study. The social network analysis revealed an increase in the number and density of connections between these organizations over the last fifteen years. The analysis also exposed the underlying patterns in these connections: as the networks became more complex, they also became more decentralized, though not in any uniform manner. The present study brings to light a story of how communities have adapted to the ever-changing circumstances that are a sine qua non of natural and man-made disasters.
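The density measure behind a finding like "increasing amount and density of connections" can be sketched directly. The organization names and ties below are hypothetical stand-ins; the dissertation's actual CCC/CERT data is not reproduced here:

```python
# Fixed roster of organizations observed across all snapshots (hypothetical).
ORGS = ["CERT_Miami", "FireChiefs_Dade", "CCC_Broward", "EMC_Broward"]

# Undirected ties observed in three illustrative snapshot years.
snapshots = {
    1996: [("CERT_Miami", "FireChiefs_Dade")],
    2003: [("CERT_Miami", "FireChiefs_Dade"),
           ("CERT_Miami", "CCC_Broward"),
           ("CCC_Broward", "EMC_Broward")],
    2011: [("CERT_Miami", "FireChiefs_Dade"),
           ("CERT_Miami", "CCC_Broward"),
           ("CCC_Broward", "EMC_Broward"),
           ("EMC_Broward", "FireChiefs_Dade"),
           ("CERT_Miami", "EMC_Broward")],
}

def density(edges, n_nodes):
    """Undirected network density: observed ties / possible ties."""
    possible = n_nodes * (n_nodes - 1) / 2
    return len(edges) / possible

densities = {year: round(density(ties, len(ORGS)), 3)
             for year, ties in snapshots.items()}
```

Here density rises across snapshots, mirroring the kind of growth the study reports; a full analysis would also track centralization to capture the decentralization trend.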
Abstract:
Annual Average Daily Traffic (AADT) is a critical input to many transportation analyses. By definition, AADT is the average 24-hour volume at a highway location over a full year. Traditionally, AADT is estimated using a mix of permanent and temporary traffic counts. Because field collection of traffic counts is expensive, it is usually done only for the major roads, leaving most local roads without any AADT information. However, AADTs are needed for local roads in many applications. For example, AADTs are used by state Departments of Transportation (DOTs) to calculate the crash rates of all local roads in order to identify the top five percent of hazardous locations for annual reporting to the U.S. DOT. This dissertation develops a new method for estimating AADTs for local roads using travel demand modeling. A major component of the new method is a parcel-level trip generation model that estimates the trips generated by each parcel, using tax parcel data together with the trip generation rates and equations provided by the ITE Trip Generation Report. The generated trips are then distributed to existing traffic count sites using a parcel-level gravity-model trip distribution. The all-or-nothing assignment method is then used to assign the trips onto the roadway network to estimate the final AADTs. The entire process was implemented in the Cube demand modeling system with extensive spatial data processing in ArcGIS. To evaluate the performance of the new method, data from several study areas in Broward County, Florida, were used. The estimated AADTs were compared with those from two existing methods, using actual traffic counts as the ground truth. The results show that the new method performs better than both existing methods. One limitation of the new method is that it relies on Cube, which limits the number of zones to 32,000; a study area exceeding this limit must be partitioned into smaller areas. Because AADT estimates for roads near the boundary areas were found to be less accurate, further research could examine the best way to partition a study area to minimize this impact.
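The parcel-to-count-site distribution step can be sketched with a standard gravity model: each parcel's generated trips are split across count sites in proportion to an inverse-power distance impedance. The parcels, sites, distances and the impedance exponent below are made-up illustrations, not the dissertation's calibrated values:

```python
# Trips generated per parcel, e.g. from ITE trip generation rates (illustrative).
parcels = {"P1": 100.0, "P2": 60.0}
sites = ["S1", "S2"]                     # existing traffic count sites
dist = {("P1", "S1"): 1.0, ("P1", "S2"): 2.0,
        ("P2", "S1"): 3.0, ("P2", "S2"): 1.0}

def impedance(d, beta=2.0):
    """Inverse-power friction factor; beta is an assumed calibration parameter."""
    return d ** -beta

def distribute(parcels, sites, dist):
    """Split each parcel's trips across sites in proportion to impedance."""
    trips = {s: 0.0 for s in sites}
    for p, produced in parcels.items():
        weights = {s: impedance(dist[(p, s)]) for s in sites}
        total = sum(weights.values())
        for s in sites:
            # Conservation: every parcel's production is fully distributed.
            trips[s] += produced * weights[s] / total
    return trips

trips = distribute(parcels, sites, dist)
```

Because each parcel's row of weights is normalized, total distributed trips equal total generated trips; an all-or-nothing assignment would then load these flows onto shortest paths in the network.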
Abstract:
Existing studies that question the role of planning as a state institution and ask whose interests it serves, together with those disputing the merits of collaborative planning, are all essentially concerned with the broader issue of power in society. Although there have been various attempts to highlight the distorting effects of power, the research emphasis to date has been on the operation of power within the formal structures that constitute the planning system. As a result, relatively little attention has been paid to the informal strategies or tactics that powerful actors can use to further their own interests. This article seeks to address this gap by identifying the informal strategies used by the holders of power to bypass the formal structures of the planning system, and by highlighting how these procedures are to a large extent systematic and (almost) institutionalised in a shadow planning system. The methodology consists of a series of semi-structured qualitative interviews with 20 urban planners working across four planning authorities within the Greater Dublin Area, Ireland. Empirical findings are offered that highlight the importance of economic power in the emergence of what essentially constitutes a shadow planning system. More broadly, the findings suggest that much more cognisance of the structural relations that govern how power is distributed in society is required, and that ‘light touch’ approaches that focus exclusively on participation and deliberation need to be replaced with more radical solutions that look towards the redistribution of economic power between stakeholders.
Abstract:
The development of new learning models has been of great importance throughout recent years, with a focus on creating advances in the area of deep learning. Deep learning was first noted in 2006 and has since become a major area of research in a number of disciplines. This paper delves into the area of deep learning to present its current limitations and to propose a new idea for a fully integrated deep and dynamic probabilistic system. The new model will be applicable to a vast number of areas, initially focusing on applications in medical image analysis, with the overall goal of utilising this approach for prediction purposes in computer-based medical systems.
Abstract:
This thesis is a study of the recent complex spatial changes in Namibia and Tanzania and of local communities’ capacity to cope with, adapt to and transform the unpredictability involved in these processes. I scrutinise the concept of resilience and its potential application to explaining the development of local communities in Southern Africa when facing various social, economic and environmental changes. My research is based on three distinct but overlapping research questions: what are the main spatial changes and their impact on the study areas in Namibia and Tanzania? What are the adaptation, transformation and resilience processes of the studied local communities in Namibia and Tanzania? How are innovation systems developed, and what is their impact on the resilience of the studied local communities in Namibia and Tanzania? I use four ethnographic case studies concerning environmental change, global tourism and innovation system development in Namibia and Tanzania, together with mixed-methodological approaches, to study these issues. The results of my empirical investigation demonstrate that the spatial changes in the localities within Namibia and Tanzania are unique, loose assemblages, a result of the complex, multisided, relational and evolutionary development of human and non-human elements that do not necessarily have linear causalities. Several changes co-exist and are interconnected, though uncertain and unstructured, and, together with the multiple stressors related to poverty, have made communities more vulnerable to different changes. The communities’ adaptation and transformation measures have been mostly reactive, based on contingency and post hoc learning. Despite the various anticipation techniques, coping measures, adaptive learning and self-organisation processes occurring in the localities, the local communities are constrained by their uneven power relationships within the larger assemblages.
Thus, communities’ own opportunities to increase their resilience are limited without changing the relations within these multiform entities. Therefore, larger cooperation models are needed, such as an innovation system based on the interactions of different actors, which requires collaboration among and input from a diverse set of stakeholders to combine different sources of knowledge, innovation and learning. Accordingly, both Namibia and Tanzania are developing an innovation system as a key policy for fostering transformation towards knowledge-based societies. Finally, the development of an innovation system requires novel bottom-up approaches to increase the resilience of local communities and to embed the system in them. Therefore, innovation policies in Namibia have emphasised the role of indigenous knowledge, and Tanzania has established the Living Lab network.
Abstract:
Stochastic methods based on time-series modeling combined with geostatistics can be useful tools to describe the variability of water-table levels in time and space and to account for uncertainty. Monitoring water-level networks can give information about the dynamics of the aquifer domain in both dimensions. Time-series modeling is an elegant way to treat monitoring data without the complexity of physically based mechanistic models. Time-series model predictions can be interpolated spatially, with the spatial differences in water-table dynamics determined by the spatial variation in the system properties and the temporal variation driven by the dynamics of the inputs into the system. An integration of stochastic methods is presented, based on time-series modeling and geostatistics, as a framework to predict water levels for decision making in groundwater management and land-use planning. The methodology is applied to a case study in an outcrop area of the Guarani Aquifer System (GAS), located in the southeastern part of Brazil. Communication of results in a clear and understandable form, via simulated scenarios, is discussed as an alternative when translating scientific knowledge into applications of stochastic hydrogeology in large aquifers with limited monitoring network coverage such as the GAS.
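The two-step idea above, a time-series model per monitoring well followed by spatial interpolation of the predictions, can be sketched with the simplest member of each family: an AR(1) forecast and inverse-distance weighting. The well locations, levels and AR coefficient are illustrative, not GAS data, and the actual study would use richer transfer-function models and geostatistical (kriging) interpolation:

```python
# Hypothetical monitoring wells: location and a short water-level series (m).
wells = {
    "W1": {"xy": (0.0, 0.0), "levels": [10.0, 10.4, 10.2, 10.6]},
    "W2": {"xy": (4.0, 0.0), "levels": [8.0, 8.1, 8.3, 8.2]},
}

def ar1_forecast(levels, phi=0.8):
    """One-step-ahead AR(1) forecast around the series mean (phi assumed)."""
    mean = sum(levels) / len(levels)
    return mean + phi * (levels[-1] - mean)

def idw(point, samples, power=2.0):
    """Inverse-distance-weighted interpolation of (xy, value) samples."""
    num = den = 0.0
    for (x, y), v in samples:
        d2 = (point[0] - x) ** 2 + (point[1] - y) ** 2
        if d2 == 0.0:
            return v          # exactly at a sample location
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den

# Step 1: temporal prediction at each well; step 2: spatial interpolation.
forecasts = [(w["xy"], ar1_forecast(w["levels"])) for w in wells.values()]
level_mid = idw((2.0, 0.0), forecasts)   # unmonitored point between the wells
```

The interpolated value inherits the temporal dynamics from the per-well models, which is the core of the framework described above.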
Abstract:
BACKGROUND: Spontaneously hypertensive rats develop left ventricular hypertrophy, increased blood pressure and increased blood pressure variability, which are important determinants of heart damage, as is activation of the renin-angiotensin system. AIMS: To investigate the effects of the time-course of hypertension on (1) hemodynamic and autonomic patterns (blood pressure, blood pressure variability, heart rate); (2) left ventricular hypertrophy; and (3) the local and systemic renin-angiotensin system of spontaneously hypertensive rats. METHODS: Male spontaneously hypertensive rats were randomized into two groups: young (n=13) and adult (n=12). Hemodynamic signals (blood pressure, heart rate), blood pressure variability (BPV) and spectral analysis of the autonomic components of blood pressure were analyzed. Left ventricular hypertrophy was measured by the ratio of LV mass to body weight (mg/g), by myocyte diameter (μm) and by relative fibrosis area (RFA, %). ACE and ACE2 activities were measured by fluorometry (UF/min), and plasma renin activity (PRA) was assessed by radioimmunoassay (ng/mL/h). Cardiac gene expression of Agt, Ace and Ace2 was quantified by RT-PCR (AU). RESULTS: The time-course of hypertension in spontaneously hypertensive rats increased BPV and reduced the alpha index in adult animals. Adult rats showed increases in left ventricular hypertrophy and in RFA. Compared with young spontaneously hypertensive rats, adult rats had lower cardiac ACE and ACE2 activities and higher levels of PRA. No change was observed in the gene expression of renin-angiotensin system components. CONCLUSIONS: The observed autonomic dysfunction and modulation of renin-angiotensin system activity are contributing factors to end-organ damage in hypertension and could be interacting. Our findings suggest that the management of hypertensive disease must start before blood pressure reaches its highest stable levels and the consequent end-organ damage becomes established.
Abstract:
A hydrodynamic characterization of the Itapocu River and Barra Velha lagoon estuarine system was carried out with the objective of evaluating how the current regime in this area is affected by astronomical and meteorological tides and by river discharge. Meteorological, water level, and current velocity and direction data were gathered hourly during a twenty-day period, from 22 July until 10 August 2004. Current meters were positioned at the inlet, at the entrances of the north and south lagoons and at the lower estuary of the river, along with a tide gauge. The estuarine system showed distinct current behavior among the different sectors of the estuary, responding to the different forcings. The strongest currents were observed at the inlet, while the weakest were observed at the northern lagoon, a location with little dynamic activity. The overall flow was ebb-dominated, in response to the fluvial discharge, even during a local wind-driven set-up event.
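A finding like "ebb-dominated flow" typically comes from splitting the along-channel current record into a tidal oscillation and a residual (mean) flow; a negative (seaward) residual indicates ebb dominance driven by river discharge. The series below is synthetic, built only to illustrate the decomposition, not measured data from the survey:

```python
import math

HOURS = 480                 # twenty days of hourly data, as in the survey
M2_PERIOD = 12.42           # principal lunar semidiurnal tide (hours)

# Synthetic along-channel velocity (m/s): a 0.5 m/s tidal oscillation riding
# on an assumed -0.15 m/s seaward river-driven flow (positive = flood).
record = [0.5 * math.sin(2 * math.pi * t / M2_PERIOD) - 0.15
          for t in range(HOURS)]

# The subtidal (residual) flow is the record mean: the tide averages out
# over many cycles, leaving the discharge-driven component.
residual = sum(record) / len(record)
ebb_dominated = residual < 0.0
```

With real data the tidal part would be removed by harmonic analysis or low-pass filtering rather than a plain mean, but the interpretation of the residual sign is the same.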
Abstract:
The design of a lateral line for drip irrigation requires accurate evaluation of head losses not only in the pipe but in the emitters as well. A procedure was developed to determine localized head losses within the emitters through the formulation of a mathematical model that accounts for the obstruction caused by the insertion point. These localized losses can be significant compared with the total head losses within the system, due to the large number of emitters typically installed along the lateral line. An experiment was carried out in which the flow characteristics were altered to create Reynolds numbers (R) from 7,480 to 32,597, providing turbulent flow and a maximum velocity of 2.0 m s⁻¹. The geometry of the emitter was determined by an optical projector and sensor. An equation was formulated to facilitate the localized head loss calculation using the geometric characteristics of the emitter (emitter length, obstruction ratio, and contraction coefficient). The mathematical model was tested using laboratory measurements on four emitters. The local head loss was accurately estimated for the Uniram (difference of +13.6%) and Drip Net (difference of +7.7%) emitters, while appreciable deviations were found for the Twin Plus (−21.8%) and Tiran (+50%) emitters. The head loss estimated by the model was sensitive to variations in the obstruction area of the emitter. However, the variations in the local head loss did not result in significant variations in the maximum length of the lateral lines. In general, for all the analyzed emitters, a 50% increase in the local head loss resulted in less than an 8% reduction in the maximum lateral length.
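The paper's fitted equation is not reproduced in the abstract; as an illustration of the kind of relation involved, the sketch below uses the classical Borda-Carnot local loss for a sudden contraction, with the obstruction ratio and contraction coefficient playing the roles described above. The pipe diameter, obstruction ratio and contraction coefficient are hypothetical values:

```python
G = 9.81  # gravitational acceleration, m/s^2

def reynolds(velocity, diameter, nu=1.0e-6):
    """Reynolds number for water at ~20 degC (kinematic viscosity nu in m^2/s)."""
    return velocity * diameter / nu

def local_head_loss(velocity, obstruction_ratio, cc=0.62):
    """Borda-Carnot local loss at the emitter insertion, in metres of water.

    velocity: mean pipe velocity (m/s)
    obstruction_ratio: constricted area / pipe area (assumed definition)
    cc: contraction coefficient (assumed value)
    """
    v_constricted = velocity / obstruction_ratio   # continuity through the throat
    k = (1.0 / cc - 1.0) ** 2                      # sudden-contraction loss coefficient
    return k * v_constricted ** 2 / (2.0 * G)

# A hypothetical 16 mm lateral at the paper's maximum velocity of 2.0 m/s:
re = reynolds(2.0, 0.016)
h_local = local_head_loss(2.0, obstruction_ratio=0.8)
```

Note how the loss scales with the square of the constricted velocity: shrinking the obstruction ratio raises the loss rapidly, matching the reported sensitivity to the obstruction area.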
Abstract:
The VISTA near-infrared survey of the Magellanic System (VMC) will provide deep YJKs photometry reaching stars at the oldest turn-off point throughout the Magellanic Clouds (MCs). As part of the preparation for the survey, we aim to assess the accuracy in the star formation history (SFH) that can be expected from VMC data, in particular for the Large Magellanic Cloud (LMC). To this aim, we first simulate VMC images containing not only the LMC stellar populations but also the foreground Milky Way (MW) stars and background galaxies. The simulations cover the whole range of density of LMC field stars. We then perform aperture photometry on these simulated images, assess the expected levels of photometric errors and incompleteness, and apply the classical technique of SFH recovery based on the reconstruction of colour-magnitude diagrams (CMDs) via the minimisation of a chi-squared-like statistic. We verify that the foreground MW stars are accurately recovered by the minimisation algorithms, whereas the background galaxies can be largely eliminated from the CMD analysis due to their particular colours and morphologies. We then evaluate the expected errors in the recovered star formation rate as a function of stellar age, SFR(t), starting from models with a known age-metallicity relation (AMR). It turns out that, for a given sky area, the random errors for ages older than ~0.4 Gyr seem to be independent of the crowding. This can be explained by a counterbalancing effect between the loss of stars from a decrease in the completeness and the gain of stars from an increase in the stellar density. For a spatial resolution of ~0.1 deg², the random errors in SFR(t) will be below 20% for this wide range of ages. On the other hand, due to the lower stellar statistics for stars younger than ~0.4 Gyr, the outer LMC regions will require larger areas to achieve the same level of accuracy in the SFR(t).
If we consider the AMR as unknown, the SFH-recovery algorithm is able to accurately recover the input AMR, at the price of an increase in the random errors in the SFR(t) by a factor of about 2.5. Experiments of SFH recovery performed for varying distance modulus and reddening indicate that these parameters can be determined with (relative) accuracies of Δ(m−M)₀ ~ 0.02 mag and ΔE(B−V) ~ 0.01 mag for each individual field over the LMC. The propagation of these errors into the SFR(t) implies systematic errors below 30%. This level of accuracy in the SFR(t) can reveal significant imprints of the dynamical evolution of this unique and nearby stellar system, as well as possible signatures of the past interaction between the MCs and the MW.
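The SFH-recovery technique above compares binned colour-magnitude diagrams (Hess diagrams) of data and model via a chi-squared-like statistic. A toy sketch of the core idea: two "partial model" Hess diagrams (counts per CMD cell for a young and an old population, with all values invented) are combined with trial star-formation weights, and the pair minimising the statistic is kept:

```python
# Invented Hess diagrams over 4 CMD cells (real ones have thousands of cells).
observed = [30.0, 22.0, 14.0, 6.0]        # "data" counts per cell
partial_young = [2.0, 1.0, 0.5, 0.1]      # counts per unit SFR, young ages
partial_old = [0.5, 1.0, 1.0, 0.5]        # counts per unit SFR, old ages

def chi2(model, data):
    """Poisson-motivated chi-squared statistic summed over CMD cells."""
    return sum((d - m) ** 2 / max(m, 1e-9) for d, m in zip(data, model))

def best_weights(grid):
    """Brute-force grid search over the two star-formation weights."""
    best = None
    for a in grid:
        for b in grid:
            model = [a * y + b * o for y, o in zip(partial_young, partial_old)]
            stat = chi2(model, observed)
            if best is None or stat < best[0]:
                best = (stat, a, b)
    return best

grid = [i * 0.5 for i in range(41)]        # trial SFR weights 0..20, step 0.5
stat, sfr_young, sfr_old = best_weights(grid)
```

Real codes solve the same minimisation over many partial models (one per age-metallicity bin) with dedicated algorithms rather than a grid search, but the statistic being minimised has this shape.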
Abstract:
We describe an estimation technique for biomass burning emissions in South America based on a combination of remote-sensing fire products and field observations: the Brazilian Biomass Burning Emission Model (3BEM). For each fire pixel detected by remote sensing, the mass of the emitted tracer is calculated based on field observations of fire properties related to the type of vegetation burning. The burnt area is estimated from the instantaneous fire size retrieved by remote sensing, when available, or from the statistical properties of burn scars. The sources are then distributed spatially and temporally and assimilated daily by the Coupled Aerosol and Tracer Transport model to the Brazilian developments on the Regional Atmospheric Modeling System (CATT-BRAMS) in order to forecast the associated tracer concentrations. Three other biomass burning inventories, including GFEDv2 and EDGAR, are used simultaneously to compare the emission strengths in terms of the resulting tracer distributions. We also assess the effect of using the daily time resolution of fire emissions by including runs with monthly averaged emissions. We evaluate the performance of the model with the different emission estimation techniques by comparing the model results with direct measurements of carbon monoxide, both near-surface and airborne, as well as with remote-sensing-derived products. The model results obtained using the 3BEM methodology introduced in this paper show relatively good agreement with the direct measurements and the MOPITT data product, suggesting the reliability of the model at local to regional scales.
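The per-pixel emission calculation described above follows the classic Seiler and Crutzen formulation: emitted mass = burnt area × fuel load × combustion factor × emission factor. The vegetation parameters below are illustrative placeholders, not 3BEM's actual field-derived values:

```python
VEGETATION = {
    # fuel load (kg/m^2), combustion factor (-), CO emission factor (g/kg);
    # all values are hypothetical round numbers for illustration.
    "tropical_forest": {"fuel": 10.0, "comb": 0.40, "ef_co": 104.0},
    "savanna":         {"fuel": 0.7,  "comb": 0.95, "ef_co": 65.0},
}

def co_emission_kg(burnt_area_m2: float, veg_type: str) -> float:
    """CO mass (kg) emitted by one fire pixel of the given vegetation type."""
    v = VEGETATION[veg_type]
    dry_matter_kg = burnt_area_m2 * v["fuel"] * v["comb"]   # biomass consumed
    return dry_matter_kg * v["ef_co"] / 1000.0              # g/kg -> kg

# A 1-km fire pixel assumed 20% burnt (2e5 m^2) in savanna vegetation:
emission = co_emission_kg(2.0e5, "savanna")
```

Repeating this for every detected fire pixel and every tracer, then gridding the results in space and time, yields the daily source fields the transport model assimilates.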
Abstract:
Sedimentary organic matter (SOM) is a good tool for evaluating the environment in which sediments are deposited. We determined the elemental and C- and N-isotopic compositions of 211 sub-surface sediment samples from 13 cores (18 to 46 cm long) collected in the Cananeia-Iguape estuarine-lagoonal system. The aim of this research is to evaluate the environmental variations of this tropical coastal micro-tidal system over the last decades through the SOM distribution. The studied parameters show differences between the cores located in the northern (sandy-silt sediments) and southern (sand and silty-sand) portions. The whole area presents a mixed organic matter origin signature (local mangrove plants: < −25.6‰ PDB; phytoplankton δ¹³C values: −19.4‰ PDB). The northern cores, which experienced higher sediment deposition rates (1.46 cm yr⁻¹), are more homogeneous, presenting lower δ¹³C (< −25.2‰ PDB) and higher C/N values (in general > 14), directly related to the terrestrial input from the Ribeira de Iguape River (24,000 km² basin). The southern portion presents lower sedimentation rates (0.38 cm yr⁻¹) and is associated with a small river basin (1,340 km²), presenting δ¹³C values of −25.0 to −23.0‰ PDB and C/N ratios of 11 to 15. In general, the elemental contents in the cores may be considered low to medium (< 2.0% C, < 0.1% N) compared with similar environments. Although a greater marine influence is observed in the southern portion of the system, the majority of the cores present a marked increase in continental deposition, most likely related to the strong silting process the area has been subjected to since the 1850s, when an artificial channel was built directly linking the Ribeira River to the estuarine-lagoonal system.
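The terrestrial-versus-marine contrast described above is commonly quantified with a two-end-member isotope mixing model. The end-member values below follow the figures quoted in the text (mangrove ≈ −25.6‰, phytoplankton ≈ −19.4‰ PDB); the sample values are illustrative:

```python
# End-member delta-13C signatures from the text (permil PDB).
D13C_TERRESTRIAL = -25.6   # local mangrove plants
D13C_MARINE = -19.4        # phytoplankton

def terrestrial_fraction(d13c_sample: float) -> float:
    """Fraction of terrestrially derived organic matter in a sediment sample,
    from linear two-end-member mixing of the delta-13C signal."""
    f = (d13c_sample - D13C_MARINE) / (D13C_TERRESTRIAL - D13C_MARINE)
    return min(1.0, max(0.0, f))   # clamp to the physically meaningful range

# A northern-core sample at -25.2 permil is almost purely terrestrial:
f_north = terrestrial_fraction(-25.2)
# A southern-core sample at -23.0 permil shows a stronger marine admixture:
f_south = terrestrial_fraction(-23.0)
```

This simple balance reproduces the qualitative pattern in the abstract: the northern cores sit near the terrestrial end member, while the southern cores fall between the two sources.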