952 results for Wide-Area Measurements


Relevance:

80.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

80.00%

Publisher:

Abstract:

Geophysical methods are widely applied in environmental characterization and monitoring studies. The resistivity method in particular has a wide range of applications and is effective in studies of solid waste landfills. The present work proposes geophysical monitoring of the Cordeirópolis city controlled landfill and analyzes the relationships between variation in the electrical resistivity parameter, the residence time of the solid waste in the landfill, the rainfall in the region, and the organic matter biodegradation processes. The landfill studied has no system to control the products generated by the decomposition of organic matter in the waste, such as a sealing blanket or leachate and gas drains. The results show that the electrical resistivity parameter was effective in monitoring the landfill contamination plume.

Relevance:

80.00%

Publisher:

Abstract:

In this article, we tackle the issue of youth and drugs as something linked to biopower and biopolitics, both concepts developed by Michel Foucault. Youth and drugs are taken up and analyzed in situations involving the management of crime linked to risks and deviations from the law, abuse, and dependence. The youth of the past, irreverent, courageous, healthy, idealistic, and eager to change the world for the better, are now strongly associated with violence, dangerous activities, moral and social risks, drug addiction, criminality, and other negative images. To deal with these young people, the tolerance and small punishments of yore are no longer enough. Young people emerge as a segment of the population subject to various actions and programs. Drugs are now seen as matters of security and public health. There is a shifting and repositioning in the discourse about the young: from minor, drugged, and criminal to lawbreaker, user, and drug addict. The change is subtle, but it represents a modulation in the devices of social control. Beyond seeking the consent of the young to get rid of drugs, there is a push to create a wide area of monitoring of their behavior through the activation of community protection networks. The belief that the young are more impressionable and vulnerable, and that acting on the cause of the problem or reducing risk is the most efficient way of management, takes responsibility away from the personal and family sphere and transfers it to the State, contributing to the increasing control of young people nowadays.

Relevance:

80.00%

Publisher:

Abstract:

A transparent (wide-area) wavelength-routed optical network may be constructed by using wavelength cross-connect switches connected together by fiber to form an arbitrary mesh structure. The network is accessed through electronic stations that are attached to some of these cross-connects. These wavelength cross-connect switches have the property that they may configure themselves into unspecified states. Each input port of a switch is always connected to some output port of the switch whether or not such a connection is required for the purpose of information transfer. Due to the presence of these unspecified states, there exists the possibility of setting up unintended all-optical cycles in the network (viz., a loop with no terminating electronics in it). If such a cycle contains amplifiers [e.g., Erbium-Doped Fiber Amplifiers (EDFAs)], there exists the possibility that the net loop gain is greater than the net loop loss. The amplified spontaneous emission (ASE) noise from amplifiers can build up in such a feedback loop to saturate the amplifiers and result in oscillations of the ASE noise in the loop. Such all-optical cycles as defined above (hereafter referred to as "white" cycles) must be eliminated from an optical network in order for the network to perform any useful operation. Furthermore, for the realistic case in which the wavelength cross-connects introduce signal crosstalk, there is a possibility of having closed cycles with oscillating crosstalk signals. We examine algorithms that set up new transparent optical connections upon request while avoiding the creation of such cycles in the network. These algorithms attempt to find a route for a connection and then (in a post-processing fashion) configure switches such that any white cycles that might get created are automatically eliminated. In addition, our call-set-up algorithms can avoid the possibility of crosstalk cycles.
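The core check behind such algorithms can be illustrated with a small sketch (this is an illustration, not the paper's actual call-set-up algorithm): model the configured switch connections as a directed graph and search for a closed loop that never passes through a terminating electronic station, i.e., a "white" cycle as defined above. All node names are invented for the example.

```python
def has_white_cycle(adj, stations):
    """Return True if the configured connections contain a closed all-optical
    loop that never passes through a terminating electronic station."""
    # Restrict the graph to transparent (non-terminating) nodes and edges.
    transparent = {u: [v for v in vs if v not in stations]
                   for u, vs in adj.items() if u not in stations}
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {u: WHITE for u in transparent}

    def dfs(u):
        color[u] = GRAY
        for v in transparent.get(u, []):
            if color.get(v, BLACK) == GRAY:        # back edge: a cycle closes
                return True
            if color.get(v, BLACK) == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False

    return any(color[u] == WHITE and dfs(u) for u in transparent)
```

Placing a terminating station anywhere on the loop breaks the cycle, which is exactly why the post-processing step only has to reconfigure switches until no station-free loop remains.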

Relevance:

80.00%

Publisher:

Abstract:

Recently, there has been growing interest in developing optical fiber networks to support the increasing bandwidth demands of multimedia applications, such as video conferencing and World Wide Web browsing. One technique for accessing the huge bandwidth available in an optical fiber is wavelength-division multiplexing (WDM). Under WDM, the optical fiber bandwidth is divided into a number of nonoverlapping wavelength bands, each of which may be accessed at peak electronic rates by an end user. By utilizing WDM in optical networks, we can achieve link capacities on the order of 50 THz. The success of WDM networks depends heavily on the available optical device technology. This paper is intended as a tutorial on some of the optical device issues in WDM networks. It discusses the basic principles of optical transmission in fiber and reviews the current state of the art in optical device technology. It introduces some of the basic components in WDM networks, discusses various implementations of these components, and provides insights into their capabilities and limitations. Then, this paper demonstrates how various optical components can be incorporated into WDM optical networks for both local and wide-area applications. Last, the paper provides a brief review of experimental WDM networks that have been implemented.
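The way WDM divides the fiber bandwidth into nonoverlapping wavelength bands can be made concrete with back-of-the-envelope arithmetic (the figures below, a roughly 4.4 THz usable C-band, 50 GHz channel spacing, and 10 Gb/s per channel, are assumptions for the example, not values from the paper):

```python
# Assumed figures for illustration only.
band_hz = 4.4e12         # usable C-band width, ~4.4 THz
spacing_hz = 50e9        # ITU-style channel spacing, 50 GHz
per_channel_gbps = 10    # peak electronic rate accessible per wavelength

channels = int(band_hz // spacing_hz)            # 88 wavelength channels
capacity_gbps = channels * per_channel_gbps      # 880 Gb/s aggregate
```

Each end user runs at the modest per-channel electronic rate, while the fiber as a whole carries the aggregate of all channels.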

Relevance:

80.00%

Publisher:

Abstract:

Computer and telecommunication networks are changing the world dramatically and will continue to do so in the foreseeable future. The Internet, primarily based on packet switches, provides very flexible data services such as e-mail and access to the World Wide Web. The Internet is a variable-delay, variable-bandwidth network that provided no guarantee on quality of service (QoS) in its initial phase. New services are being added to the pure data delivery framework of yesterday. Such high demands on capacity could lead to a "bandwidth crunch" at the core wide-area network, resulting in degradation of service quality. Fortunately, technological innovations have emerged which can provide relief to the end user and overcome the Internet's well-known delay and bandwidth limitations. At the physical layer, a major overhaul of existing networks has been envisaged, from electronic media (e.g., twisted pair and cable) to optical fibers, in wide-area, metropolitan-area, and even local-area settings. In order to exploit the immense bandwidth potential of optical fiber, interesting multiplexing techniques have been developed over the years.

Relevance:

80.00%

Publisher:

Abstract:

The bandwidth requirements of the Internet are increasing every day, and newer, more bandwidth-hungry applications are emerging on the horizon. Wavelength division multiplexing (WDM) is the next step towards leveraging the capabilities of the optical fiber, especially for wide-area backbone networks. The ability to switch a signal at intermediate nodes in a WDM network based on its wavelength is known as wavelength-routing. One of the greatest advantages of wavelength-routed WDM is the ability to create a virtual topology different from the physical topology of the underlying network. This virtual topology can be reconfigured when necessary to improve performance. We discuss previous work on virtual topology design and also discuss and propose different reconfiguration algorithms applicable under different scenarios.
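A minimal sketch of the virtual-topology idea, under simplifying assumptions (bidirectional fibers, wavelength continuity along each lightpath, no wavelength conversion); the topology, routes, and function names are invented for illustration:

```python
def build_virtual(physical, lightpaths):
    """Route each lightpath over physical fibers and assign it one wavelength,
    enforcing wavelength continuity and distinct wavelengths per fiber.
    Returns {(src, dst): wavelength}; each lightpath is one virtual-topology hop."""
    fibers = {frozenset(e) for e in physical}
    wl_in_use = {}   # fiber -> set of wavelengths already assigned on it
    virtual = {}
    for (src, dst), route in lightpaths.items():
        hops = [frozenset((route[i], route[i + 1])) for i in range(len(route) - 1)]
        if not all(h in fibers for h in hops):
            raise ValueError("route must follow existing physical fibers")
        wl = 0   # lowest wavelength free on *every* hop (wavelength continuity)
        while any(wl in wl_in_use.get(h, set()) for h in hops):
            wl += 1
        for h in hops:
            wl_in_use.setdefault(h, set()).add(wl)
        virtual[(src, dst)] = wl
    return virtual
```

Reconfiguration then amounts to recomputing the lightpath set for a new traffic pattern and migrating wavelengths, which is what the algorithms discussed above address.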

Relevance:

80.00%

Publisher:

Abstract:

To open this Third Vertebrate Pest Conference is a real privilege. It is a pleasure to welcome all of you in attendance, and I know there are others who would like to be meeting with us, but, for one reason or another, cannot be. However, we can serve them by taking back the results of discussion and by making available the printed transactions of what is said here. It has been the interest and demand for the proceedings of the two previous conferences which, along with personal contacts many of you have with the sponsoring committee, have gauged the need for continuing these meetings. The National Pest Control Association officers who printed the 1962 proceedings still are supplying copies of that conference. Two reprintings of the 1964 conference have been necessary and repeat orders from several universities indicate that those proceedings have become textbooks for special classes. When Dr. Howard mentioned in opening the first Conference in 1962 that publication of those papers would make a valuable handbook of animal control, he was prophetic, indeed. We are pleased that this has happened, but not surprised, since to many of us in this specialized field, the conferences have provided a unique opportunity to meet colleagues with similar interests, to exchange information on control techniques and to be informed by research workers of problem solving investigations as well as to hear of promising basic research. The development of research is a two-way street and we think these conferences also identify areas of inadequate knowledge, thereby stimulating needed research.
We have represented here a number of types of specialists—animal ecologists, public health and transmissible disease experts, control methods specialists, public agency administration and enforcement staffs, agricultural extension people, manufacturing and sale industry representatives, commercial pest control operators, and others—and in addition to improving communications among these professional groups an equally important purpose of these conferences is to improve understanding between them and the general public. Within the term general public are many individuals and also organizations dedicated to appreciation and protection of certain animal forms or animal life in general. Proper concepts of vertebrate pest control do not conflict with such views. It is worth repeating for the record the definition of "vertebrate pest" which has been stated at our previous conferences. "A vertebrate pest is any native or introduced, wild or feral, non-human species of vertebrate animal that is currently troublesome locally or over a wide area to one or more persons either by being a general nuisance, a health hazard or by destroying food or natural resources. In other words, vertebrate pest status is not an inherent quality or fixed classification but is a circumstantial relationship to man's interests." I believe progress has been made in reducing the misunderstanding and emotion with which vertebrate pest control was formerly treated whenever a necessity for control was stated. If this is true, I likewise believe it is deserved, because control methods and programs have progressed. Control no longer refers only to population reductions by lethal means. We have learned something of alternate control approaches and the necessity for studying the total environment; where reduction of pest animal numbers is the required solution to a problem situation we have a wider choice of more selective, safe and efficient materials.
Although increased attention has been given to control methods research, when we take a close look at the severity of animal damage to so many facets of our economy, particularly to agricultural production and public health, we realize it still is pitifully small and slow. The tremendous acceleration of the world's food and health requirements seems to demand expediting vertebrate pest control to effectively neutralize the enormous impact of animal damage to vital resources. The efforts we are making here at problem delineation, idea communication and exchange of methodology could well serve as both nucleus and rough model for a broader application elsewhere. I know we all hope this Third Conference will advance these general objectives, and I think there is no doubt of its value in increasing our own scope of information.

Relevance:

80.00%

Publisher:

Abstract:

BACKGROUND AND PURPOSE: DON (dysthyroid optic neuropathy), a serious complication of GO (Graves orbitopathy), is frequently difficult to diagnose clinically in its early stages because of confounding signs and symptoms of congestive orbitopathy. We evaluated the ability of square area measurements of orbital apex crowding, calculated with MDCT, to detect DON. MATERIALS AND METHODS: Fifty-six patients with GO were studied prospectively with complete neuro-ophthalmologic examination and MDCT scanning. Square measurements were taken from coronal sections 12 mm, 18 mm, and 24 mm from the interzygomatic line. The ratio between the extraocular muscle area and the orbital bone area was used as a crowding index (CI). Intracranial fat prolapse through the superior orbital fissure was recorded as present or absent. Severity of optic nerve crowding was also subjectively graded on coronal images. Orbits were divided into 2 groups (with or without clinical evidence of DON) and compared. RESULTS: Ninety-five orbits (36 with and 59 without DON) were studied. The CIs at all 3 levels and the subjective crowding score were significantly greater in orbits with DON (P<.001). No significant difference was observed regarding intracranial fat prolapse (P=.105). The area under the ROC curves was 0.91, 0.93, and 0.87 for CIs at 12, 18, and 24 mm, respectively. The best performance was at 18 mm, where a cutoff value of 57.5% corresponded to 91.7% sensitivity, 89.8% specificity, and an odds ratio of 97.2 for detecting DON. A significant correlation (P<.001) between the CIs and visual field defects was observed. CONCLUSIONS: Orbital CIs based on area measurements were found to predict DON more reliably than subjective grading of orbital crowding or intracranial fat prolapse.
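The crowding index described above is a simple ratio, so its computation and the 57.5% cutoff reported for the 18 mm level can be sketched directly (function names and the sample areas are illustrative, not from the study):

```python
def crowding_index(muscle_area_mm2, orbit_area_mm2):
    """CI as a percentage: extraocular muscle area over orbital bone area."""
    return 100.0 * muscle_area_mm2 / orbit_area_mm2

def suggests_don(ci_percent, cutoff=57.5):
    """Apply the 57.5% cutoff the abstract reports for the 18 mm coronal level."""
    return ci_percent > cutoff
```

An orbit with 60 mm² of muscle area in a 100 mm² bony orbit has CI = 60%, which falls above the cutoff and would therefore be flagged for DON.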

Relevance:

80.00%

Publisher:

Abstract:

The work was divided into three macro-areas. The first concerns a theoretical analysis of how intrusions work, which software is used to carry them out, and how to protect against them (using the devices generically known as firewalls). The second macro-area analyzes an intrusion from the outside against sensitive servers on a LAN. This analysis is conducted on the files captured by the two network interfaces configured in promiscuous mode on a probe present in the LAN. There are two interfaces so that the probe can attach to two LAN segments with two different subnet masks. The attack is analyzed with various software tools. A third part of the work can in fact be identified: the part where the files captured by the two interfaces are analyzed, first with software for full-content data, such as Wireshark, then with software for session data, handled with Argus, and finally statistical data, handled with Ntop. The penultimate chapter, the one before the conclusions, covers the installation of Nagios and its configuration for monitoring, through plugins, the remaining disk space on a remote agent machine and the MySQL and DNS services. Naturally, Nagios can be configured to monitor any type of service offered on the network.
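A minimal sketch of what such a Nagios service definition might look like (the host name is hypothetical; check_nrpe and check_disk are the standard NRPE plugin names, and the disk thresholds live in the agent's nrpe.cfg, not here):

```cfg
define service {
    use                  generic-service
    host_name            agent-remote        ; hypothetical remote agent machine
    service_description  Disk Space
    ; NRPE runs the check_disk plugin on the agent side
    check_command        check_nrpe!check_disk
}
```

Analogous `define service` blocks pointing at MySQL and DNS checks cover the other two monitored services mentioned above.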

Relevance:

80.00%

Publisher:

Abstract:

The future hydrogen demand is expected to increase, both in existing industries (including upgrading of fossil fuels and ammonia production) and in new technologies like fuel cells. Nowadays, hydrogen is obtained predominantly by steam reforming of methane, but it is well known that hydrocarbon-based routes cause environmental problems, and the market also depends on the availability of this finite resource, which is being rapidly depleted. Therefore, alternative processes using renewable sources like wind, solar energy, and biomass are now being considered for the production of hydrogen. One such alternative is the so-called "steam-iron process", which consists in the reduction of a metal oxide by a hydrogen-containing feedstock, such as ethanol, after which the reduced material is reoxidized with water to produce "clean" hydrogen (water splitting). Thermochemical cycles of this kind have been studied before, but several recent factors, namely the development of more active catalysts, the flexibility of the feedstock (including renewable bio-alcohols), and the fact that hydrogen purification could be avoided, have significantly increased interest in this research topic. To increase our understanding of the reactions that govern the steam-iron route to hydrogen, it is necessary to go down to the molecular level. Spectroscopic methods are an important tool for extracting information that could help in the development of more efficient materials and processes. In this research, ethanol was chosen as the reducing fuel, and the main goal was to study its interaction with different catalysts having a similar structure (spinels), in order to correlate their composition with the mechanism of the anaerobic oxidation of ethanol, which is the first step of the steam-iron cycle.

To accomplish this, diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) was used to study the surface composition of the catalysts during the adsorption of ethanol and its transformation during the temperature program. Furthermore, mass spectrometry was used to monitor the desorbed products. The set of studied materials includes Cu, Co, and Ni ferrites, which were also characterized by means of X-ray diffraction, surface area measurements, Raman spectroscopy, and temperature-programmed reduction.
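Schematically, the two steps of the cycle correspond to the classic steam-iron redox pair, written here for the iron/magnetite couple with H2 as a stand-in reductant; with ethanol as the fuel the reduction step also yields carbon oxides and water, and intermediate oxides such as FeO may take part:

```latex
% Step 1: reduction of the oxide by the fuel (shown with H2 for simplicity)
\mathrm{Fe_3O_4 + 4\,H_2 \;\rightarrow\; 3\,Fe + 4\,H_2O}
% Step 2: reoxidation with steam (water splitting), releasing clean hydrogen
\mathrm{3\,Fe + 4\,H_2O \;\rightarrow\; Fe_3O_4 + 4\,H_2}
```

Because only step 2 produces the hydrogen stream, carbon-containing by-products of step 1 never mix with it, which is why purification can be avoided.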

Relevance:

80.00%

Publisher:

Abstract:

Due to the high price of natural oil and the harmful effects of its usage, such as the increase in greenhouse gas emissions, industry has focused on the search for sustainable raw materials for the production of chemicals. Ethanol, produced by fermentation of sugars, is one of the more interesting renewable materials for chemical manufacturing. There are numerous applications for the conversion of ethanol into commodity chemicals. In particular, the production of 1,3-butadiene from ethanol using multifunctional catalysts is attractive. With 25% of world rubber manufacturing utilizing 1,3-butadiene, there is an exigent need for its sustainable production. In this research, the one-step conversion of ethanol to 1,3-butadiene was studied. According to the literature, the mechanisms proposed to explain how ethanol transforms into butadiene require both acid and basic sites, but there is still much debate on this topic. Thus, the aim of this research work is a better understanding of the reaction pathways, with all the possible intermediates and products that lead to the formation of butadiene from ethanol. Of particular interest are catalysts based on different Mg/Si ratios, in comparison to bare magnesia and silica oxides, in order to identify a good combination of acid/basic sites for the adsorption and conversion of ethanol. Spectroscopic techniques are important for extracting information that could be helpful for understanding the processes at the molecular level. Diffuse reflectance infrared spectroscopy coupled to mass spectrometry (DRIFT-MS) was used to study the surface composition of the catalysts during the adsorption of ethanol and its transformation during the temperature program, while mass spectrometry was used to monitor the desorbed products.

The set of studied materials includes MgO, Mg/Si=0.1, Mg/Si=2, Mg/Si=3, Mg/Si=9, and SiO2, which were also characterized by means of surface area measurements.
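For reference, the commonly cited overall stoichiometry of the one-step (Lebedev-type) ethanol-to-butadiene conversion, which any proposed sequence of intermediates must add up to:

```latex
\mathrm{2\,C_2H_5OH \;\rightarrow\; CH_2{=}CH{-}CH{=}CH_2 + 2\,H_2O + H_2}
```

The net loss of two waters and one dihydrogen per butadiene is why dehydrogenation (basic sites) and dehydration (acid sites) steps must both be present on the catalyst.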

Relevance:

80.00%

Publisher:

Abstract:

OBJECTIVE: A previous study of radiofrequency neurotomy of the articular branches of the obturator nerve for hip joint pain produced modest results. Based on an anatomical and radiological study, we sought to define a potentially more effective radiofrequency method. DESIGN: Ten cadavers were studied, four of them bilaterally. The obturator nerve and its articular branches were marked by wires. Their radiological relationship to the bone structures on fluoroscopy was imaged and analyzed. A magnetic resonance imaging (MRI) study was undertaken on 20 patients to determine the structures that would be encountered by the radiofrequency electrode during different possible percutaneous approaches. RESULTS: The articular branches of the obturator nerve vary in location over a wide area. The previously described method of denervating the hip joint did not take this variation into account. Moreover, it approached the nerves perpendicularly. Because optimal coagulation requires electrodes to lie parallel to the nerves, a perpendicular approach probably produced only a minimal lesion. In addition, MRI demonstrated that a perpendicular approach is likely to puncture femoral vessels. Vessel puncture can be avoided if an oblique pass is used. Such an approach minimizes the angle between the target nerves and the electrode, and increases the likelihood of the nerve being captured by the lesion made. Multiple lesions need to be made in order to accommodate the variability in location of the articular nerves. CONCLUSIONS: The method that we described has the potential to produce complete and reliable nerve coagulation. Moreover, it minimizes the risk of penetrating the great vessels. The efficacy of this approach should be tested in clinical trials.

Relevance:

80.00%

Publisher:

Abstract:

In the current market system, power systems are operated at higher loads for economic reasons. Power system stability becomes a genuine concern in such operating conditions. In case of failure of any large component, the system may become stressed. Such events may start cascading failures, which may lead to blackouts. One of the main causes of the major recorded blackout events has been the unavailability of system-wide information. Synchrophasor technology has the capability to provide system-wide real-time information. Phasor Measurement Units (PMUs) are the basic building block of this technology; they provide Global Positioning System (GPS) time-stamped voltage and current phasor values along with the frequency. It is assumed that synchrophasor data for all buses are available and thus that the whole system is fully observable. This information can be used to initiate islanding, or system separation, to avoid blackouts. A system separation strategy using synchrophasor data has been developed to answer the three main aspects of system separation. (1) When to separate: One-class support vector machines (OC-SVM) are primarily used for anomaly detection; here, OC-SVM was used to detect wide-area instability. OC-SVM has been tested on different stable and unstable cases, and it was found that OC-SVM is capable of detecting wide-area instability and thus of answering the question of when the system should be separated. (2) Where to separate: The agglomerative clustering technique was used to find groups of coherent buses. The lines connecting different groups of coherent buses form the separation surface. The rate of change of the bus voltage phase angles was used as the input to this technique. This technique has the potential to identify exactly the lines to be tripped for system separation.

(3) What to do after separation: Load shedding approximately equal to the sum of the power flows along the candidate system separation lines should be initiated before tripping these lines. Therefore, it is recommended that load shedding be performed before tripping the lines for system separation.
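The "where to separate" step can be sketched with a toy coherency grouping: a one-dimensional single-linkage pass over the rate of change of bus phase angles, followed by picking the lines that cross group boundaries. The thesis's agglomerative clustering is more general; the bus names, rates, and tolerance here are invented for illustration.

```python
def coherent_groups(rates, tol=0.5):
    """Single-linkage grouping of buses by rate of change of voltage phase
    angle (rad/s): a bus joins the current group when its rate is within
    `tol` of the group's largest member, otherwise it starts a new group."""
    buses = sorted(rates, key=lambda b: rates[b])
    groups = [[buses[0]]]
    for b in buses[1:]:
        if rates[b] - rates[groups[-1][-1]] < tol:
            groups[-1].append(b)
        else:
            groups.append([b])
    return groups

def separation_lines(lines, groups):
    """Lines whose endpoints lie in different coherent groups form the
    candidate separation surface (the lines to trip)."""
    gid = {b: i for i, g in enumerate(groups) for b in g}
    return [(u, v) for u, v in lines if gid[u] != gid[v]]
```

With two buses drifting together at ~0 rad/s and two accelerating near 2 rad/s, the single tie line between the two groups is identified as the cut; shedding load equal to its pre-trip flow then precedes the actual tripping, per step (3).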