970 results for Nets (Geodesy)
Abstract:
This exploratory thesis presents an analysis of the electronic waste generated by the mobile telephony industry, evaluating the evolution of telecommunications networks and the behavior of the global and Brazilian mobile phone markets. It examines the substances present in mobile handsets that can harm the environment and human health, and how devices are discarded at the end of their life cycle. It analyzes the new European regulation on waste electrical and electronic equipment (WEEE), how it has influenced the strategies of mobile phone manufacturers, and how a Brazilian national industry for handset recycling, able to compete globally, could be created; several models that could be implemented in Brazil are presented. Bill 203/91 on solid waste is discussed, along with proposals submitted to it that would be worth retaining in order to create a Brazilian recycling market capable of global competition, taking advantage of the European regulation to obtain a competitive edge.
Abstract:
The last decade, marked by the vertiginous growth of the worldwide computer network, brought radical changes to the use of information and communication. Internet use in the business world has been widely studied; however, there is little research on the academic use of this technology, especially in institutions of technological education. In this context, this research analyzed Internet use at a technological education institution in Brazil, the Centro Federal de Educação Tecnológica do Rio Grande do Norte (CEFET/RN), characterizing the usage patterns of these Information Technology (IT) tools and studying the factors that determine this use. To reach these objectives, a survey was conducted, with data collected through a questionnaire applied to 150 teachers, who answered a set of closed, scaled questions. The quantitative data were then analyzed qualitatively, yielding significant results on usage patterns and on the factors that influence the use of these Internet technologies: age range, level of exposure to computers, area of academic training, area of activity, and academic title all exert significant influence on professors' academic use of the Internet.
Abstract:
Networks have re-emerged as inter-organizational arrangements that offer their members better opportunities for technological modernization, market insertion, and managerial improvement, while at the same time permitting the exchange of experiences, information, and knowledge, so that members can strengthen themselves and compete on equal terms, softening the harmful effects of unrestrained competition. This threat can be minimized by organizing in networks and by other inter-organizational forms and strategic alliances. This work therefore examines the context of cooperative competition in networks, seeking criteria for the analysis of this organizational phenomenon. Based on a case study of the ITCPs network and on the literature surveyed, it proposes criteria and indicators for the future creation of an evaluation model (Factor Net) to guide the design of sustainable, supportive, competitive, and cooperative organizational forms.
Abstract:
The usual programs for load flow calculation were, in general, developed to simulate electric energy transmission, subtransmission, and distribution systems. However, the mathematical methods and algorithms behind these formulations were based mostly on the characteristics of transmission systems, which were the main focus of engineers and researchers. The physical characteristics of transmission systems, though, are quite different from those of distribution systems. In transmission systems, voltage levels are high and the lines are generally very long, so the capacitive and inductive effects that appear in the system have considerable influence on the quantities of interest and must be taken into account. Moreover, transmission loads have a macro nature, such as cities, neighborhoods, or large industries; these loads are practically balanced, which reduces the need for three-phase load flow methodologies. Distribution systems, on the other hand, present different characteristics: voltage levels are low compared with transmission, which nearly annuls the capacitive effects of the lines, and the loads are transformers whose secondaries feed small, often single-phase consumers, so the probability of finding an unbalanced circuit is high. The use of three-phase methodologies therefore becomes important. Besides, equipment such as voltage regulators, which simultaneously use the concepts of phase and line voltage in their operation, require a three-phase methodology to allow the simulation of their real behavior. For these reasons, a method for three-phase load flow calculation was first developed in this work in order to simulate the steady-state behavior of distribution systems.
To achieve this goal, the Power Summation Algorithm was used as the basis for developing the three-phase method. This algorithm has been widely tested and approved by researchers and engineers for the simulation of radial electric energy distribution systems, mainly in single-phase representation. In our formulation, lines are modeled as three-phase circuits, considering the magnetic coupling between phases, while the earth effect is accounted for through the Carson reduction. It is important to point out that, although loads are normally connected to transformer secondaries, the hypothesis of star- or delta-connected loads on the primary circuit was also considered. To simulate voltage regulators, a new model was developed that allows various configurations to be simulated according to their real operation. Finally, the representation of switches with current measurement at various points of the feeder was considered: the loads are adjusted during the iterative process so that the current at each switch converges to the measured value specified in the input data. In a second stage of the work, sensitivity parameters were derived from the described load flow, with the objective of supporting subsequent optimization processes. These parameters are obtained by calculating the partial derivatives of one variable with respect to another, in general voltages, losses, and reactive powers. After the calculation of the sensitivity parameters is described, the Gradient Method is presented, which uses these parameters to optimize an objective function defined for each type of study: the first refers to the reduction of technical losses in a medium-voltage feeder through the installation of capacitor banks; the second to the correction of the voltage profile through the installation of capacitor banks or voltage regulators.
For loss reduction, the objective function is the sum of the losses in all parts of the system. For voltage profile correction, the objective function is the sum of the squared voltage deviations at each node with respect to the rated voltage. At the end of the work, results of applying the described methods to several feeders are presented, to give insight into their performance and accuracy.
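The single-phase Power Summation sweep on which the three-phase method builds can be sketched in a few lines. This is a minimal illustration, not the thesis's implementation: the backward sweep accumulates downstream loads plus branch losses, the forward sweep updates voltage magnitudes with an approximate drop formula, and all branch impedances and loads are invented per-unit values.

```python
# Minimal single-phase Power Summation load flow for a radial feeder (sketch).
# Bus 0 is the source; branch i connects bus i to bus i+1, and p_load[i]/q_load[i]
# is the load at bus i+1. All quantities are in per unit; data are illustrative.

def power_summation(r, x, p_load, q_load, v_source=1.0, tol=1e-8, max_iter=50):
    n = len(r)                      # number of branches (= number of load buses)
    v = [v_source] * (n + 1)        # voltage magnitude at each bus
    for _ in range(max_iter):
        # Backward sweep: power entering each branch = downstream loads + losses.
        p = [0.0] * n
        q = [0.0] * n
        for i in range(n - 1, -1, -1):
            p[i] = p_load[i] + (p[i + 1] if i + 1 < n else 0.0)
            q[i] = q_load[i] + (q[i + 1] if i + 1 < n else 0.0)
            s2 = (p[i] ** 2 + q[i] ** 2) / v[i + 1] ** 2   # |I|^2 estimate
            p[i] += r[i] * s2                              # add branch losses
            q[i] += x[i] * s2
        # Forward sweep: update voltage magnitudes from the source outward,
        # using the approximate drop  V_j = V_i - (R*P + X*Q)/V_i.
        v_old = v[:]
        for i in range(n):
            v[i + 1] = v[i] - (r[i] * p[i] + x[i] * q[i]) / v[i]
        if max(abs(a - b) for a, b in zip(v, v_old)) < tol:
            return v
    return v
```

For a two-branch feeder with moderate loads, the sweeps converge in a few iterations and the voltage magnitudes decrease monotonically from the source, as expected of a radial feeder.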
Abstract:
The monitoring of patients in hospitals is usually done either manually or semi-automatically, with members of the healthcare team constantly visiting patients to ascertain their health condition. This procedure, however, compromises the quality of the monitoring, since the shortage of physical and human resources in hospitals tends to overwhelm the healthcare team, preventing them from visiting patients with adequate frequency. Given this, many works in the literature propose alternatives to improve this monitoring through the use of wireless networks. In these works, the network is intended only for the traffic generated by medical sensors, and cannot be allocated to transmit data from applications on the user stations present in the hospital. In hospital automation environments this is a drawback, since the data generated by such applications can be directly related to the patient monitoring being conducted. This thesis therefore defines Wi-Bio, a communication protocol for the establishment of IEEE 802.11 networks for patient monitoring, capable of enabling harmonious coexistence between the traffic generated by medical sensors and by user stations. The formal specification and verification of Wi-Bio were carried out through the design and analysis of Petri net models, and its validation was performed through simulations with the Network Simulator 2 (NS2) tool. The NS2 simulations were designed to portray a real patient monitoring environment corresponding to a floor of the nursing wards sector of the University Hospital Onofre Lopes (HUOL), located in Natal, Rio Grande do Norte.
Moreover, in order to verify the feasibility of Wi-Bio with respect to the wireless network standards prevailing in the market, the test scenario was also simulated under a perspective in which the network elements used the HCCA access mechanism described in the IEEE 802.11e amendment. The results confirmed the validity of the designed Petri nets and showed that Wi-Bio, in addition to outperforming HCCA on most of the items analyzed, was also able to promote efficient integration between the data generated by medical sensors and by user applications on the same wireless network.
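The token-game semantics underlying Petri net models like those used to specify Wi-Bio can be sketched briefly. The places and transitions below (a sensor being polled for medium access) are invented for illustration and are not the thesis's actual models.

```python
# A toy place/transition Petri net with integer token-count markings.
# A transition is enabled when each input place holds enough tokens; firing it
# consumes input tokens and produces output tokens.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place name -> token count
        self.transitions = {}             # name -> (input arcs, output arcs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n
```

Formal verification tools explore the reachable markings of such a net to prove properties like absence of deadlock; the class above only plays the token game one firing at a time.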
Abstract:
Recent years have seen increasing acceptance and adoption of parallel processing, both for high-performance scientific computing and for general-purpose applications. This acceptance has been favored mainly by the development of massively parallel processing (MPP) environments and of distributed computing. A point common to distributed systems and MPP architectures is the notion of message passing, which allows communication between processes. A message-passing environment consists basically of a communication library that acts as an extension of programming languages such as C, C++, and Fortran, allowing the development of parallel applications. In the development of parallel applications, a fundamental aspect is the analysis of their performance. Several metrics can be used in this analysis: execution time, efficiency in the use of the processing elements, and scalability of the application with respect to the increase in the number of processors or in the size of the problem instance. Establishing models or mechanisms that allow this analysis can be quite complicated, given the parameters and degrees of freedom involved in the implementation of a parallel application. One alternative is the use of tools for the collection and visualization of performance data, which allow the user to identify bottlenecks and sources of inefficiency in an application. Efficient visualization requires identifying and collecting data on the execution of the application, a stage called instrumentation.
This work first presents a study of the main techniques used for collecting performance data, followed by a detailed analysis of the main available tools that can be used on parallel architectures of the Beowulf cluster type, running Linux on the x86 platform, with communication libraries based on MPI (Message Passing Interface) implementations such as LAM and MPICH. This analysis is validated on parallel applications that train perceptron neural networks using back-propagation. The conclusions show the potential and ease of use of the analyzed tools.
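The instrumentation stage described above can be given a minimal flavor with a timing decorator that records wall-clock time per call. Real tracing tools for MPI programs intercept communication events rather than wrapping functions, but the principle of collecting per-event data during execution is the same; all names here are illustrative.

```python
# Minimal instrumentation sketch: record wall-clock duration of each call
# to a decorated function, keyed by function name.
import collections
import functools
import time

timings = collections.defaultdict(list)   # function name -> list of durations (s)

def instrument(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            timings[fn.__name__].append(time.perf_counter() - start)
    return wrapper
```

After a run, the collected durations can be aggregated or visualized to spot bottlenecks, which is exactly the role the surveyed tools play for message-passing applications.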
Abstract:
Ethernet technology dominates the local area network market. However, it has not established itself in industrial automation settings, where the requirements demand determinism and real-time performance. Many solutions have been proposed to address the problem of non-determinism, based mainly on TDMA (Time Division Multiple Access), Token Passing, and Master-Slave schemes. This research carries out performance measurements that compare the behavior of Ethernet networks when data is transmitted over the UDP and raw Ethernet protocols, as well as over three different Ethernet technologies. The objective is to identify which of the analyzed protocols and Ethernet technologies offers more satisfactory support for industrial automation networks and distributed real-time applications.
Abstract:
This study evaluated the efficacy of the herbicide diquat in controlling Eichhornia crassipes and Brachiaria subquadripara plants under reservoir conditions. The experimental plots, of 650 m², were delimited and held in position by fishing nets along the margins of the Salto Grande reservoir, Americana-SP, Brazil. The treatments tested were the herbicide diquat in the Reward formulation at 960 g a.i. ha-1 (two applications of 480 g at a 15-day interval), 960 g a.i. ha-1 (single application), and 1,920 g a.i. ha-1 (two applications of 960 g at a 15-day interval), plus an untreated control. To monitor diquat concentrations, water samples were collected before herbicide application and 1, 2, 4, 7, 14, and 28 days after application (DAA), in the experimental area and upstream and downstream of the dam. The efficacy of diquat was assessed through visual control evaluations at 1, 2, 4, 7, 14, 21, and 28 DAA and two days after the second application. Diquat was effective in controlling E. crassipes plants, regardless of dose and application management, while its effect on B. subquadripara plants was temporary.
Abstract:
The ionospheric effect is one of the major errors in GPS data processing over long baselines. Because the ionosphere is a dispersive medium, its influence on the GPS signal can be computed with the ionosphere-free linear combination of the L1 and L2 observables, which requires dual-frequency receivers. With single-frequency receivers, ionospheric effects are either neglected or reduced by means of a model. In this paper, an alternative for single-frequency users is proposed. It applies multiresolution analysis (MRA), using wavelets, to the double-difference observations to remove short- and medium-scale ionospheric variations and disturbances, as well as some minor tropospheric effects. Experiments were carried out over three baseline lengths, from 50 to 450 km, and the results provided by the proposed method were better than those from dual-frequency receivers. The horizontal root mean square error was about 0.28 m (1 sigma).
Abstract:
GPS precise point positioning (PPP) can provide high-precision 3-D coordinates. Combined pseudorange and carrier phase observables, precise ephemerides and satellite clock corrections, together with data from dual-frequency receivers, are the key factors for providing such levels of precision (a few centimeters). In general, results obtained from PPP are referred to an arbitrary reference frame, realized from a previous free network adjustment in which satellite state vectors, station coordinates, and other biases are estimated together. In order to obtain consistent results, the coordinates have to be transformed to the relevant reference frame, for which the appropriate daily transformation parameters must be available. Furthermore, the coordinates have to be mapped to a chosen reference epoch; if a velocity field is not available, an appropriate model, such as NNR-NUVEL-1A, has to be used. The quality of the results provided by this approach was evaluated using data from the Brazilian Network for Continuous Monitoring of the Global Positioning System (RBMC), processed with the GIPSY-OASIS II software. The results were compared to SIRGAS 1995.4 and ITRF2000 and reached precisions better than 2 cm. A description of the fundamentals of the PPP approach and its application to the integration of regional GPS networks with the ITRF is the main purpose of this paper.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Ionospheric scintillations are caused by time-varying electron density irregularities in the ionosphere, occurring more often at equatorial and high latitudes. This paper focuses exclusively on experiments undertaken in Europe, at geographic latitudes between approximately 50°N and 80°N, where a network of GPS receivers capable of monitoring Total Electron Content and ionospheric scintillation parameters was deployed. The widely used ionospheric scintillation indices S4 and sigma(phi) represent a practical measure of the intensity of amplitude and phase scintillation affecting GNSS receivers. However, they do not provide sufficient information regarding the actual tracking errors that degrade GNSS receiver performance. Suitable receiver tracking models, sensitive to ionospheric scintillation, allow the computation of the variance of the output error of the receiver PLL (Phase Locked Loop) and DLL (Delay Locked Loop), which expresses the quality of the range measurements used by the receiver to calculate user position. The ability of such models to incorporate phase and amplitude scintillation effects into the variance of these tracking errors underpins our proposed method of applying relative weights to measurements from different satellites. This gives the least squares stochastic model used for position computation a more realistic representation than the otherwise 'equal weights' model. For pseudorange processing, relative weights were computed so that a 'scintillation-mitigated' solution could be performed and compared to the (non-mitigated) 'equal weights' solution. An improvement between 17 and 38% in height accuracy was achieved when an epoch-by-epoch differential solution was computed over baselines ranging from 1 to 750 km.
The method was then compared with alternative approaches that can be used to improve the least squares stochastic model, such as weighting according to satellite elevation angle or by the inverse of the square of the standard deviation of the code/carrier divergence (sigma CCDiv). The influence of multipath effects on the proposed mitigation approach is also discussed. With the use of high-rate scintillation data in addition to the scintillation indices, a carrier phase based mitigated solution was also implemented and compared with the conventional solution. During a period of high phase scintillation it was observed that problems related to ambiguity resolution can be reduced by the use of the proposed mitigated solution.
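The re-weighting idea can be sketched with a tiny weighted least squares solver: each satellite's measurement enters with weight 1/sigma², where sigma² would come from the PLL/DLL tracking error models under scintillation. The design matrix and sigmas below are invented; a real position solution would use the linearized pseudorange geometry and more unknowns.

```python
# Weighted least squares sketch for a 2-unknown linear model y_i = A_i · x,
# with per-measurement weights w_i = 1 / sigma_i^2, solved via the normal
# equations N x = b, where N = A^T W A and b = A^T W y.

def weighted_lsq(A, y, sigmas):
    """Solve min sum_i w_i (y_i - A_i · x)^2 for x = (x0, x1)."""
    w = [1.0 / s**2 for s in sigmas]
    n00 = sum(wi * a[0] * a[0] for wi, a in zip(w, A))
    n01 = sum(wi * a[0] * a[1] for wi, a in zip(w, A))
    n11 = sum(wi * a[1] * a[1] for wi, a in zip(w, A))
    b0 = sum(wi * a[0] * yi for wi, a, yi in zip(w, A, y))
    b1 = sum(wi * a[1] * yi for wi, a, yi in zip(w, A, y))
    det = n00 * n11 - n01 * n01
    return ((n11 * b0 - n01 * b1) / det, (n00 * b1 - n01 * b0) / det)
```

With consistent data the weights do not change the solution; their effect appears when measurements disagree, where down-weighted (scintillation-affected) satellites pull the estimate less, which is the mitigation mechanism the paper exploits.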
Abstract:
The Global Positioning System (GPS) transmits signals on two frequencies, which allows the first-order ionospheric effect to be corrected using the ionosphere-free combination. However, the second- and third-order ionospheric effects, which combined may cause errors of the order of centimeters in GPS measurements, still remain. In this paper, the second- and third-order ionospheric effects were investigated and taken into account in GPS data processing in the Brazilian region. GPS data with and without these corrections were processed with the relative and precise point positioning (PPP) approaches, using the Bernese V5.0 software and the GPSPPP software from NRCan (Natural Resources Canada), respectively. The second- and third-order corrections were applied to the GPS data using in-house software that reads a RINEX file, applies the corrections to the GPS observables, and creates a corrected RINEX file. For the relative processing case, a Brazilian network with long baselines was processed in daily solutions over a period of approximately one year. For the PPP case, data collected by the IGS FORT station from 2001 to 2006 were processed, and a seasonal analysis was carried out, showing semi-annual and annual variations in the vertical component. In addition, a geographical variation analysis of the PPP results for the Brazilian region confirmed that equatorial regions are more affected by the second- and third-order ionospheric effects than other regions.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The Global Positioning System (GPS) transmits its signals on two frequencies, which allows the first-order ionospheric effects to be eliminated mathematically through the ionosphere-free linear combination. However, the second- and third-order effects remain, and they can introduce errors of the order of centimeters in GPS measurements. These effects are usually neglected in GPS data processing. The first-, second-, and third-order ionospheric effects are directly proportional to the TEC present in the ionosphere, but the second- and third-order effects also involve the Earth's magnetic field and the maximum electron density, respectively. In this article, the second- and third-order ionospheric effects are investigated and taken into account in the processing of GPS data in the Brazilian region for positioning purposes. The mathematical models associated with these effects are presented, along with the transformations involving the Earth's magnetic field and the use of TEC obtained from Global Ionosphere Maps or computed from GPS pseudorange observations. The GPS data were processed considering the static and kinematic relative methods and precise point positioning (PPP). The second- and third-order effects were analyzed for periods of high and low ionospheric activity. The results showed that neglecting these effects in precise point positioning and in relative positioning over long baselines can introduce variations of the order of a few millimeters in station coordinates, as well as diurnal height variations of the order of centimeters.