998 results for Internet (Computer networks) - Statistical methods
Abstract:
The Ourém aquifer is situated within the Ourém Syncline and the Lusitanian Basin. A conceptual model of the Ourém aquifer is proposed that takes into account the elevation of the base and top, and the thickness, of the geological formation that constitutes it. A parallel is drawn between the Members of the Figueira da Foz Formation and the hydrogeological characteristics of the aquifer. The exploitation regime is evaluated with robust statistical methods, from which it is concluded that unregulated water abstraction has led to a steady decline in piezometric levels, reaching 7 cm/month in some areas, regardless of annual rainfall in recent years. A monitoring campaign identified NW-SE as the preferential flow direction and the NW area of the Ourém aquifer as the preferential recharge area. The aquifer's boundary conditions are analysed qualitatively. ABSTRACT: A three-dimensional conceptual model of the Ourém aquifer is defined, considering its top and bottom. The thickness of the Figueira da Foz geological formation was calculated. A parallel between the Members of the Figueira da Foz Formation and the hydrogeological characteristics of the aquifer is established. A robust statistical analysis concludes that unregulated water abstraction from the aquifer has led to a constant decrease in piezometric levels. In some areas the decrease reaches 7 cm/month, independently of the annual rainfall. A piezometric monitoring campaign identifies the NW-SE direction as the preferential flow direction of the aquifer and the NW area as the preferred recharge area. The aquifer boundary conditions are qualitatively evaluated.
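As a hedged illustration of the kind of robust trend estimate this abstract refers to, the sketch below fits a Theil-Sen slope, one standard robust estimator (not necessarily the one used in the thesis), to a synthetic monthly piezometric series using SciPy; all values are invented for the example.

```python
import numpy as np
from scipy.stats import theilslopes

# Synthetic monthly piezometric levels (m) with a downward trend on the order
# of the reported -7 cm/month; the series is illustrative only.
rng = np.random.default_rng(42)
months = np.arange(120)
levels = 85.0 - 0.07 * months + rng.normal(0.0, 0.4, months.size)

# Theil-Sen estimator: the median of all pairwise slopes, robust to
# occasional anomalous readings (e.g., measurements taken while pumping).
slope, intercept, lo, hi = theilslopes(levels, months, 0.95)
print(f"trend: {slope * 100:.1f} cm/month "
      f"(95% CI {lo * 100:.1f} to {hi * 100:.1f})")
```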
Abstract:
Climate risk management requires information about future states of climate variables, usually represented by cumulative distribution functions (CDF, P(Y≤y)) or by their complementary functions (P(Y>y)), known as exceedance probability functions (EPF). A variety of statistical methods has been used to estimate EPFs, including multiple linear regression models, logistic regression and non-parametric methods (MAIA et al., 2007; LO et al., 2008). Although it seems intuitive that the uncertainty associated with EPF estimates is fundamental for decision makers, this type of information is rarely provided. Statistical forecasting models based on historical series of the variable of interest (rainfall, temperature) and on predictors derived from ocean and atmosphere states (climate indices such as sea surface temperature, SST; the Southern Oscillation Index, SOI; El Niño/Southern Oscillation, ENSO) are promising alternatives to support decision making at local and regional scales. The use of such indicators makes it possible to incorporate pattern shifts resulting from climate change into statistical models that rely on historical information. In this work, we show how the Cox Regression Model (CRM; COX, 1972), traditionally used to model failure times in medical and social science research, can be of great use for the probabilistic assessment of climate risks, even for variables that do not represent failure times, such as rainfall, crop yield and profit, among others. The CRM can be used to evaluate the influence of predictors (climate indices) on risks of interest (represented by the EPFs), to estimate EPFs for specific combinations of predictors along with their associated uncertainties, and to provide information on relative risks, of great value to decision makers. We present two case studies in which the Cox Model was used to investigate: a) the effect of the SOI and of an index derived from Pacific SST on the onset of the rainy season in Cairns (Australia); and b) the influence of the Niño 3.4 index, derived from SST states in the Equatorial Pacific, on accumulated rainfall from March to June in Limoeiro do Norte (Ceará, Brazil). The purpose of presenting these studies is merely didactic, to demonstrate the potential of the proposed method as a decision-support tool.
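A minimal sketch of this idea, assuming the Python lifelines library and invented column names and data (rainfall as the "duration" variable, nino34 and soi as climate-index predictors): the Cox model's survival function is exactly the exceedance probability P(Y>y).

```python
import pandas as pd
from lifelines import CoxPHFitter

# Invented example: accumulated rainfall (mm) plays the role of the "failure
# time" Y, climate indices nino34 and soi are predictors, and observed = 1
# marks fully observed (non-censored) records. All names/values are illustrative.
df = pd.DataFrame({
    "rainfall": [120.0, 340.5, 210.2, 95.3, 410.8, 180.6, 260.1, 150.4],
    "nino34":   [0.8, -0.5, 1.2, 0.3, -1.1, 0.6, -0.2, 0.9],
    "soi":      [-4.2, 7.1, -8.3, 1.5, 9.8, -2.0, 3.3, -6.1],
    "observed": [1, 1, 1, 1, 1, 1, 1, 1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="rainfall", event_col="observed")

# Relative risks (hazard ratios) and their uncertainty, per climate index
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])

# Exceedance probability function P(Y > y) for one predictor combination:
# the "survival function" of rainfall given Niño 3.4 = 1.0 and SOI = -5.0
profile = pd.DataFrame({"nino34": [1.0], "soi": [-5.0]})
print(cph.predict_survival_function(profile).head())
```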
Abstract:
Statistical methods known as survival analysis are commonly used in medicine, the social sciences and engineering, in studies where the response variable of interest is the time until an event occurs (death, divorce, equipment failure). These methods allow the estimation of curves called survival functions, which represent the probability that an event occurs at a time greater than t (Prob Y>t), for different values of t (KALBFLEISCH and PRENTICE, 1980; ALLISON, 1995; COLOSIMO and GIOLO, 2006). In agricultural research, information on phenological events measured on a temporal scale (e.g., time to flowering, time to harvest) is fundamental for efficient crop management. Despite its widespread use in the areas cited above, survival analysis is still little used in phenological studies (GIENAPP; HEMERIK; VISSER, 2005). Survival analysis offers a number of advantages over traditional approaches based on the mean duration of phenological stages, among them: a) it allows comparison of the pattern of occurrence of the phenological event of interest (flowering, ripening) over time; b) it makes it possible to estimate the probability of events occurring within specific intervals, which is important for planning management or marketing activities; c) it provides information on percentiles (e.g., the date by which 50% of plants have flowered, the date by which 90% of bunches have reached harvest point); d) it allows the effect of treatments on these measures to be assessed; and e) it requires neither homogeneity of variances nor normality assumptions. In this work we present and discuss the use of non-parametric survival analysis methods in fruit-tree phenology studies, using as an example a study on the effect of different mineral and organic fertilizers on phenological aspects of the banana plant.
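A minimal non-parametric sketch in that spirit, assuming the lifelines library and invented banana-flowering data (column and treatment names are illustrative): a Kaplan-Meier fit per treatment gives the flowering percentiles, and a log-rank test compares the flowering patterns over time.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical phenology data: days from planting to flowering for two
# fertilizer treatments; flowered = 0 marks plants still unflowered at the
# end of observation (right-censored). Values are illustrative only.
data = pd.DataFrame({
    "days":      [210, 225, 198, 240, 260, 205, 190, 230, 250, 215],
    "flowered":  [1, 1, 1, 1, 0, 1, 1, 1, 0, 1],
    "treatment": ["organic"] * 5 + ["mineral"] * 5,
})

kmf = KaplanMeierFitter()
for name, group in data.groupby("treatment"):
    kmf.fit(group["days"], event_observed=group["flowered"], label=name)
    # Date by which 50% of the plants have flowered (median "survival" time)
    print(name, "median time to flowering:", kmf.median_survival_time_)

# Non-parametric comparison of the two flowering patterns over time
org = data[data["treatment"] == "organic"]
mnr = data[data["treatment"] == "mineral"]
result = logrank_test(org["days"], mnr["days"],
                      event_observed_A=org["flowered"],
                      event_observed_B=mnr["flowered"])
print("log-rank p-value:", result.p_value)
```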
Abstract:
Many experiments have been analyzed with inappropriate statistical methods. The uncritical use of these methods, without due care or without considering other possibilities, can diminish the value of the discussion, the conclusions and the research itself. There is a wide range of possible statistical approaches to research data, each serving a particular purpose; the statistical procedure must therefore be chosen judiciously. If the objective of a study is to estimate the magnitude of an effect, then the analysis used must estimate it: in this case it is not enough merely to state which results differed significantly. Likewise, if the objective of the research is to determine a specific point, then the analysis must do so; in this case it is not sufficient simply to examine the behavior of the data. The choice of a regression model is a judgment in which the suitability to the phenomenon under study, the mathematical fit obtained and its applicability must all be considered. The properties of the chosen model must be justifiable, both logically and biologically. The analysis must therefore be sensible, logical and appropriate to the questions one seeks to answer.
Abstract:
Annexes: p. 154-188
Abstract:
Forecasting is the basis for making strategic, tactical and operational business decisions. In financial economics, several techniques have been used to predict the behavior of assets over the past decades. There are thus several methods to assist in the task of time series forecasting; however, conventional modeling techniques, such as statistical models and those based on theoretical mathematical models, have produced unsatisfactory predictions, increasing the number of studies on more advanced prediction methods. Among these, Artificial Neural Networks (ANN) are a relatively new and promising method for business forecasting that has attracted much interest in the financial community and has been used successfully in a wide variety of financial modeling applications, in many cases proving superior to statistical ARIMA-GARCH models. In this context, this study aimed to examine whether ANNs are a more appropriate method for predicting the behavior of capital market indices than traditional methods of time series analysis. For this purpose we developed a quantitative study based on financial-economic indices and built two feedforward, supervised-learning ANN models, whose structures consisted of 20 inputs, 90 neurons in one hidden layer and one output (the Ibovespa). These models used backpropagation, a tangent-sigmoid activation function in the hidden layer and a linear output function. Since the aim was to analyze the suitability of Artificial Neural Networks for forecasting the Ibovespa, we compared their results against a GARCH(1,1) time series predictive model. Once both methods (ANN and GARCH) were applied, we analyzed the results by comparing the forecasts with the historical data and by studying the forecast errors through the MSE, RMSE, MAE, standard deviation, Theil's U and forecast encompassing tests. The models developed by means of ANNs had lower MSE, RMSE and MAE than the GARCH(1,1) model, and Theil's U indicated that the three models have smaller errors than a naïve forecast. Although the ANN based on returns had lower precision-indicator values than the ANN based on prices, the forecast encompassing test rejected the hypothesis that one model is better than the other, indicating that the ANN models have a similar level of accuracy. It was concluded that, for the data series studied, the ANN models provide a more appropriate Ibovespa forecast than traditional time series models, represented by the GARCH model.
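A hedged sketch of the two benchmarked model families, assuming scikit-learn and the arch package, with synthetic returns standing in for the Ibovespa series; the abstract's 20-90-1 architecture and tanh (tangent-sigmoid) activation are reproduced, everything else is illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error
from arch import arch_model

# Synthetic daily returns stand in for the Ibovespa; values are illustrative.
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.015, 1000)

# Sliding window: 20 past returns predict the next one (20 inputs, 1 output)
window = 20
X = np.array([returns[i:i + window] for i in range(len(returns) - window)])
y = returns[window:]
split = int(0.8 * len(y))

ann = MLPRegressor(hidden_layer_sizes=(90,), activation="tanh",
                   solver="adam", max_iter=2000, random_state=0)
ann.fit(X[:split], y[:split])
pred = ann.predict(X[split:])

mse = mean_squared_error(y[split:], pred)
print("ANN MSE:", mse, "RMSE:", np.sqrt(mse),
      "MAE:", mean_absolute_error(y[split:], pred))

# GARCH(1,1) benchmark fitted on the same training span (returns in percent,
# as the arch package recommends for numerical stability)
garch = arch_model(100 * returns[:split + window], vol="Garch", p=1, q=1)
res = garch.fit(disp="off")
print(res.params)  # mu, omega, alpha[1], beta[1]
```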
Abstract:
The reconfiguration of a distribution network is a change in its topology, aiming to provide specific operating conditions of the network by changing the status of its switches. It can be performed regardless of any system anomaly. Service restoration is a particular case of reconfiguration and should be performed whenever there is a network failure or whenever one or more sections of a feeder have been taken out of service for maintenance. In such cases, loads supplied through line sections downstream of the portions removed for maintenance may be supplied by closing switches to other feeders. With classical reconfiguration methods, several switching operations may be required beyond those needed to perform the service restoration, including switching feeders in the same substation or in substations that have no direct connection to the faulted feeder. These operations can cause discomfort, losses and dissatisfaction among consumers, as well as a negative reputation for the energy company. The purpose of this thesis is to develop a heuristic for the reconfiguration of a distribution network upon the occurrence of a failure, switching only the feeders directly involved with the failed segment. The switching applied relates exclusively to isolating the failed sections and bars and to supplying electricity to the islands generated by the fault, with a significant reduction in the number of load flow computations, owing to the use of sensitivity parameters to estimate voltages and currents on the bars and lines of the feeders directly involved with the failed segment. A comparison between this process and classical methods is performed on different test networks from the literature on network reconfiguration.
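A topology-only sketch of the restoration step described above, assuming networkx and a toy five-bar network: it isolates the failed section and closes a tie switch to pick up the de-energized island. It deliberately ignores the thesis's sensitivity-based voltage/current estimates and all electrical constraints; names and topology are invented.

```python
import networkx as nx

# Toy distribution network: nodes are bars, edges are line sections with a
# switch state. Topology and names are illustrative, not the thesis test cases.
G = nx.Graph()
G.add_edge("sub", "b1", closed=True)
G.add_edge("b1", "b2", closed=True)
G.add_edge("b2", "b3", closed=True)      # section that will fail
G.add_edge("b3", "b4", closed=True)
G.add_edge("b4", "sub", closed=False)    # normally open tie switch

def restore(graph, failed_edge, source="sub"):
    """Isolate the failed section, then close a tie switch that reconnects
    the de-energized island, restricted to edges adjacent to the failure."""
    u, v = failed_edge
    graph[u][v]["closed"] = False  # open the switch around the failed section
    live = nx.Graph((a, b) for a, b, d in graph.edges(data=True) if d["closed"])
    live.add_nodes_from(graph.nodes)
    energized = nx.node_connected_component(live, source)
    for a, b, d in graph.edges(data=True):
        if not d["closed"] and {a, b} != {u, v}:
            # close a tie switch only if it picks up a de-energized island
            if (a in energized) != (b in energized):
                d["closed"] = True
                return (a, b)
    return None

print("closed tie switch:", restore(G, ("b2", "b3")))
```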
Abstract:
Graduate Program in Computer Science - IBILCE
Abstract:
In daily activity, today's society constantly interacts through electronic devices and telecommunication services, such as the telephone, e-mail, banking transactions and online social networks. Without knowing it, we massively leave traces of our activity in the databases of service providers. These new data sources have the dimensions needed to observe patterns of human behavior at large scales. As a result, there has been a recent and unprecedented explosion of studies of social systems, driven by data analysis and computational processes. In this thesis we develop computational and mathematical methods to analyze social systems through the combined study of data derived from human activity and the theory of complex networks. Our goal is to characterize and understand the emergent systems of social interactions in the new technological spaces, such as the social network Twitter and mobile telephony. We analyze these systems by building complex networks and time series, studying their structure, functioning and evolution over time. We also investigate the nature of the observed patterns through the mechanisms that govern the interactions between individuals, and we measure the impact of critical events on the system's behavior. To this end, we have proposed models that explain the global structures and the emergent dynamics with which information flows through the system. For the studies of the social network Twitter, we based our analyses on specific conversations, such as political protests, major events and electoral processes. From the conversation messages, we identified the participating users and built networks of interactions among them. Specifically, we built one network to represent who receives whose messages and another to represent who propagates whose messages. In general, we found that these structures have complex properties, such as explosive growth and scale-free degree distributions. Based on the topology of these networks, we identified three types of users who determine the flow of information according to their activity and influence. To measure users' influence on the conversations, we introduced a new measure called user efficiency. Efficiency is defined as the number of retransmissions obtained per message sent, and it measures the effect that individual efforts have on the collective reaction. We observed that the distribution of this property is ubiquitous across several Twitter conversations, regardless of their size or context. We therefore suggest that there is universality in the relationship between individual efforts and collective reactions on Twitter. To explain the factors that determine the emergence of the efficiency distribution, we developed a computational model that simulates the propagation of messages in the Twitter social network, based on the independent-cascade mechanism. This model allows us to measure the effect that both the topology of the underlying social network and the way users send messages have on the efficiency distribution.
The results indicate that the emergence of a select group of highly efficient users depends on the heterogeneity of the underlying network and not on individual behavior. We also developed techniques to infer the degree of political polarization in social networks. We propose a methodology to estimate opinions in social networks and to measure the degree of polarization in the opinions obtained. We designed a model in which we study the effect that the opinion of a small group of influential users, called the elite, has on the opinions of the majority of users. The model yields a distribution of opinions over which we measure the degree of polarization. We applied our methodology to measure polarization in message-diffusion networks during a Twitter conversation in a politically polarized society. The results show a high correspondence with offline data. With this study, we demonstrated that the proposed methodology can determine different degrees of polarization depending on the structure of the network. Finally, we studied human behavior using mobile phone data. On the one hand, we characterized the impact that natural disasters, such as floods, have on collective behavior. We found that communication patterns are abruptly altered in the areas affected by the catastrophe, which shows that the impact on the region could be measured almost in real time, without deploying efforts on the ground. On the other hand, we studied human activity and mobility patterns to characterize the interactions between regions of a developing country. We found that the networks of calls and human trajectories have community structures associated with regions and urban centers. In summary, we have shown that it is possible to understand complex social processes through the analysis of human activity data and the theory of complex networks. Throughout the thesis, we have verified that social phenomena such as influence, political polarization and reaction to critical events are reflected in the structural and dynamic patterns of the networks built from data on conversations in online social networks or mobile telephony.
ABSTRACT: During daily routines, we are constantly interacting with electronic devices and telecommunication services. Unconsciously, we are massively leaving traces of our activity in the service providers' databases. These new data sources have the dimensions required to enable the observation of human behavioral patterns at large scales. As a result, there has been an unprecedented explosion of data-driven social research. In this thesis, we develop computational and mathematical methods to analyze social systems by means of the combined study of human activity data and the theory of complex networks. Our goal is to characterize and understand the systems emerging from human interactions in the new technological spaces, such as the online social network Twitter and mobile phones. We analyze systems by means of the construction of complex networks and time series, studying their structure, functioning and temporal evolution. We also investigate the nature of the observed patterns, by means of the mechanisms that rule the interactions among individuals, as well as the impact of critical events on the system's behavior. For this purpose, we have proposed models that explain the global structures and the emergent dynamics of information flow in the system. In the studies of the online social network Twitter, we based our analysis on specific conversations, such as political protests, important announcements and electoral processes. From the messages related to the conversations, we identify the participating users and build networks of interactions among them. We specifically build one network to represent who-receives-whose-messages and another to represent who-propagates-whose-messages. In general, we have found that these structures have complex properties, such as explosive growth and scale-free degree distributions. Based on the topological properties of these networks, we have identified three types of user behavior that determine the information flow dynamics through their influence. In order to measure the users' influence on the conversations, we have introduced a new measure called user efficiency. It is defined as the number of retransmissions obtained per message posted, and it measures the effect of individual activity on the collective reactions. We have observed that the probability distribution of this property is ubiquitous across several Twitter conversations, regardless of their dimension or social context. Therefore, we suggest that there is a universal behavior in the relationship between individual efforts and collective reactions on Twitter. In order to explain the factors that determine the user efficiency distribution, we have developed a computational model to simulate the diffusion of messages on Twitter, based on the mechanism of independent cascades. This model allows us to measure the impact of the underlying network topology, as well as of the way users post messages, on the emergent efficiency distribution. The results indicate that the emergence of an exclusive group of highly efficient users depends upon the heterogeneity of the underlying network rather than on individual behavior. Moreover, we have also developed techniques to infer the degree of polarization in social networks. We propose a methodology to estimate opinions in social networks and to measure the degree of polarization in the obtained opinions. We have designed a model to study the effects of the opinions of a small group of influential users, called the elite, on the opinions of the majority of users. The model results in an opinion distribution on which we measure the degree of polarization. We apply our methodology to measure the polarization on graphs built from the message diffusion process, during a conversation on Twitter in a politically polarized society. The results are in very good agreement with offline and contextual data. With this study, we have shown that our methodology is capable of detecting several degrees of polarization depending on the structure of the networks. Finally, we have also inferred human behavior from mobile phone data. On the one hand, we have characterized the impact of natural disasters, like flooding, on collective behavior. We found that the communication patterns are abruptly altered in the areas affected by the catastrophe. Therefore, we demonstrate that we could measure the impact of the disaster on the region almost in real time and without needing to deploy further efforts on the ground. On the other hand, we have studied human activity and mobility patterns in order to characterize regional interactions in a developing country.
We found that the call and trajectory networks present community structure associated with regional and urban areas. In summary, we have shown that it is possible to understand complex social processes by analyzing human activity data with the theory of complex networks. Throughout the thesis, we have demonstrated that social phenomena, like influence, polarization and reaction to critical events, are reflected in the structural and dynamical patterns of the networks constructed from data on conversations in online social networks and mobile phones.
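As a toy illustration of the user-efficiency measure defined in this abstract (retransmissions obtained per message posted), assuming an invented conversation log of (author, retweets obtained) pairs:

```python
from collections import defaultdict

# Hypothetical Twitter conversation log: one (author, retweets_obtained)
# entry per message posted. Authors and counts are illustrative only.
messages = [
    ("alice", 12), ("alice", 3), ("bob", 0), ("bob", 1),
    ("carol", 250), ("dave", 0), ("dave", 0), ("dave", 5),
]

sent = defaultdict(int)
retransmissions = defaultdict(int)
for author, retweets in messages:
    sent[author] += 1
    retransmissions[author] += retweets

# User efficiency = retransmissions obtained per message sent
efficiency = {u: retransmissions[u] / sent[u] for u in sent}
for user, eff in sorted(efficiency.items(), key=lambda kv: -kv[1]):
    print(f"{user}: {eff:.2f} retransmissions per message")
```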
Abstract:
An important problem faced by the oil industry is the distribution of multiple oil products through pipelines. Distribution is done in a network composed of refineries (source nodes), storage parks (intermediate nodes) and terminals (demand nodes), interconnected by a set of pipelines transporting oil and derivatives between adjacent areas. Constraints related to storage limits, delivery time, source availability, and sending and receiving limits, among others, must be satisfied. Some researchers deal with this problem from a discrete viewpoint in which the flow in the network is seen as the sending of batches. Usually, there is no separation device between batches of different products, and the losses due to interfaces may be significant. Minimizing delivery time is a typical objective adopted by engineers when scheduling product sending in pipeline networks. However, costs incurred due to losses at interfaces cannot be disregarded. The cost also depends on pumping expenses, which are mostly due to the cost of electricity. Since the industrial electricity tariff varies over the day, pumping at different time periods has different costs. This work presents an experimental investigation of computational methods designed to deal with the problem of distributing oil derivatives in networks considering three minimization objectives simultaneously: delivery time, losses due to interfaces and electricity cost. The problem is NP-hard and is addressed with hybrid evolutionary algorithms. Hybridizations are mainly focused on Transgenetic Algorithms and classical multi-objective evolutionary algorithm architectures such as MOEA/D, NSGA2 and SPEA2. Three architectures, named MOTA/D, NSTA and SPETA, are applied to the problem. An experimental study compares the algorithms on thirty test cases. To analyse the results, Pareto-compliant quality indicators are used, and the significance of the results is evaluated with non-parametric statistical tests.
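A hedged sketch of the three-objective setup, assuming the pymoo library and a toy continuous relaxation whose objectives only loosely mimic the thesis's delivery time, interface losses and electricity cost; it uses plain NSGA2, not the hybrid Transgenetic architectures.

```python
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize

# Toy 3-objective stand-in for the pipeline scheduling problem: each decision
# variable is a continuous relaxation of how much of a batch is sent early.
# The objective functions are illustrative, not the thesis formulation.
class PipelineToy(ElementwiseProblem):
    def __init__(self):
        super().__init__(n_var=5, n_obj=3, xl=0.0, xu=1.0)

    def _evaluate(self, x, out, *args, **kwargs):
        delivery_time = np.sum(1.0 - x)               # sending later delays delivery
        interface_loss = np.sum(np.abs(np.diff(x)))   # product changes create interfaces
        energy_cost = np.sum(x * np.linspace(1.0, 2.0, x.size))  # peak-tariff pumping
        out["F"] = [delivery_time, interface_loss, energy_cost]

res = minimize(PipelineToy(), NSGA2(pop_size=50), ("n_gen", 100),
               seed=1, verbose=False)
print(res.F[:5])  # a sample of the Pareto front approximation
```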
Abstract:
OBJECTIVE: To assess the influence of the internet on the academic and scientific activities of the Brazilian public health community. METHODS: Descriptive study based on the opinions of 237 faculty members affiliated with master's and doctoral graduate programs in public health in Brazil in 2001. Data were collected through a self-administered questionnaire delivered via the web and by post. Statistical analysis was performed using proportions, means and standard deviations. RESULTS: Internet use was reported by 94.9% (225) of the community, with e-mail (92.0%) and the web (55.6%) being the resources most used on a daily basis. The influence of the internet on communication among faculty, especially for the development of collaborative research, was significant (73.8%). Of the faculty, 5.1% reported not using the internet, citing lack of motivation, lack of time and the ease of obtaining the material they need from colleagues. CONCLUSIONS: The results showed that the internet influences faculty work and affects the scientific communication cycle, especially in the speed of information retrieval. There was a tendency to single out communication among faculty as the stage that has changed most since the advent of the internet in the Brazilian academic-scientific world.
Abstract:
Final Master's Project for obtaining the degree of Master in Electronic and Telecommunications Engineering
Abstract:
Doctoral thesis in Educational Sciences