852 results for Internet-centric Systems in Hydroinformatics
Abstract:
HydroShare is an online, collaborative system being developed for the open sharing of hydrologic data and models. The goal of HydroShare is to enable scientists to easily discover and access hydrologic data and models, retrieve them to their desktop, or perform analyses in a distributed computing environment that may include grid, cloud, or high-performance computing instances as necessary. Scientists may also publish outcomes (data, results, or models) into HydroShare, using the system as a collaboration platform for sharing data, models, and analyses. HydroShare is expanding the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated, creating new capabilities to share models and model components, and taking advantage of emerging social media functionality to enhance information about, and collaboration around, hydrologic data and models. One of the fundamental concepts in HydroShare is that of a Resource. All content is represented using a Resource Data Model that separates system and science metadata and has elements common to all resources as well as elements specific to the resource types HydroShare will support. These types will include the data types used in the hydrology community as well as models and workflows, which require metadata describing execution functionality. The HydroShare web interface and social media functions are being developed using the Drupal content management system. A geospatial visualization and analysis component enables searching, visualizing, and analyzing geographic datasets. The integrated Rule-Oriented Data System (iRODS) is used to manage federated data content and perform rule-based background actions on data and model resources, including parsing to generate metadata catalog information and the execution of models and workflows. This presentation will introduce the HydroShare functionality developed to date, describe key elements of the Resource Data Model, and outline the roadmap for future development.
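To make the metadata separation concrete, the following is a minimal sketch of such a resource structure; all class and field names are illustrative assumptions and do not reproduce HydroShare's actual schema.

```python
# Illustrative sketch only: the real Resource Data Model is richer; these
# names are assumptions for exposition, not HydroShare's schema.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SystemMetadata:
    # Elements managed by the system, common to all resources.
    resource_id: str
    owner: str
    created: str             # ISO 8601 timestamp
    access: str = "private"  # e.g. "private", "public", "published"

@dataclass
class ScienceMetadata:
    # Descriptive elements supplied by the scientist.
    title: str
    abstract: str
    keywords: List[str] = field(default_factory=list)

@dataclass
class Resource:
    # Separation of system and science metadata, plus an open slot for
    # elements specific to a resource type (e.g. model execution metadata).
    system: SystemMetadata
    science: ScienceMetadata
    type_specific: Dict[str, str] = field(default_factory=dict)

model_resource = Resource(
    system=SystemMetadata("res-001", "alice", "2014-03-01T12:00:00Z"),
    science=ScienceMetadata("Example watershed model", "An example model."),
    type_specific={"execution_command": "./run_model.sh", "cpu_count": "16"},
)
```

The point of the `type_specific` slot is that resource types such as models can carry extra elements, here execution metadata, without changing the core that is common to all resources.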
Abstract:
Existing distributed hydrologic models are complex and computationally demanding, which limits their use as rapid-forecasting policy-decision tools or even as classroom educational tools. In addition, platform dependence, rigid input/output data structures, and the lack of dynamic data interaction with pluggable software components inside existing proprietary frameworks restrict these models to specialized user groups. RWater is a web-based hydrologic analysis and modeling framework that utilizes the widely used R software within the HUBzero cyberinfrastructure of Purdue University. RWater is designed as an integrated framework for distributed hydrologic simulation, along with subsequent parameter optimization and visualization schemes. RWater provides a platform-independent web-based interface, flexible data integration capacity, grid-based simulations, and user extensibility. RWater uses RStudio to simulate hydrologic processes on raster-based data obtained through conventional GIS pre-processing. The program integrates the Shuffled Complex Evolution (SCE) algorithm for parameter optimization. Moreover, RWater enables users to produce descriptive statistics and visualizations of the outputs at different temporal resolutions. The applicability of RWater is demonstrated through application to two watersheds in Indiana for multiple rainfall events.
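For intuition, the following is a simplified, illustrative sketch of an SCE-style search; the SCE-UA algorithm actually used for calibration additionally includes contraction and mutation steps, probabilistic sub-complex selection, and convergence tests, all omitted here.

```python
# Simplified SCE-style parameter search: evolve complexes with reflection
# steps, then shuffle. Illustrative only, not the full SCE-UA algorithm.
import numpy as np

def sce_optimize(objective, bounds, n_complexes=4, pts_per_complex=5,
                 n_shuffles=20, rng=None):
    rng = rng or np.random.default_rng(0)
    lo, hi = np.array(bounds, dtype=float).T
    n = n_complexes * pts_per_complex
    pop = lo + rng.random((n, lo.size)) * (hi - lo)      # random start
    cost = np.apply_along_axis(objective, 1, pop)
    for _ in range(n_shuffles):
        order = np.argsort(cost)                          # best first
        pop, cost = pop[order], cost[order]
        for c in range(n_complexes):
            idx = np.arange(c, n, n_complexes)            # deal into complexes
            cx, cc = pop[idx], cost[idx]
            worst = np.argmax(cc)
            centroid = np.delete(cx, worst, axis=0).mean(axis=0)
            trial = np.clip(2 * centroid - cx[worst], lo, hi)  # reflection
            t_cost = objective(trial)
            if t_cost < cc[worst]:                        # accept if better
                pop[idx[worst]], cost[idx[worst]] = trial, t_cost
    best = np.argmin(cost)
    return pop[best], cost[best]

# Toy usage: calibrate two parameters against a quadratic "error surface".
best_params, best_cost = sce_optimize(lambda p: ((p - 0.3) ** 2).sum(),
                                      bounds=[(0, 1), (0, 1)])
```

In a calibration setting, the objective would be an error measure between simulated and observed streamflow rather than the toy quadratic above.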
Abstract:
Information Centric Networking (ICN), an emerging paradigm for the Future Internet, initially focused on bandwidth savings in wired networks, but it also has significant potential to support communication in mobile wireless networks as well as opportunistic network scenarios, where end systems have spontaneous but time-limited contact to exchange data. This chapter explains why ICN has an important role in mobile and opportunistic networks by identifying several challenges in mobile and opportunistic Information-Centric Networks and discussing appropriate solutions for them. In particular, it discusses the issues of receiver and source mobility; source mobility needs special attention. Solutions based on routing protocol extensions, indirection, and separation of name resolution and data transfer are discussed. Moreover, the chapter presents solutions for problems in opportunistic Information-Centric Networks. Among those are mechanisms for efficient content discovery in neighbour nodes, resume mechanisms to recover from intermittent connectivity disruptions, a novel agent delegation mechanism to offload content discovery and delivery to mobile agent nodes, and the exploitation of overhearing to populate the routing tables of mobile nodes. Some preliminary performance evaluation results of these mechanisms are provided.
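As a toy illustration of one of the source-mobility solutions mentioned above, separating name resolution from data transfer, the sketch below keeps a rendezvous mapping from name prefixes to a source's current locator, so a moving source only re-registers; the names and locator format are invented for exposition, not a specific ICN API.

```python
# Minimal sketch: a rendezvous service maps content name prefixes to the
# source's current locator, decoupling names from network attachment points.

class RendezvousService:
    def __init__(self):
        self._locators = {}          # name prefix -> current locator

    def register(self, prefix, locator):
        # Called by a (possibly mobile) source after each attachment change.
        self._locators[prefix] = locator

    def resolve(self, name):
        # Longest-prefix match, as ICN forwarding operates on name prefixes.
        for prefix in sorted(self._locators, key=len, reverse=True):
            if name.startswith(prefix):
                return self._locators[prefix]
        return None

rv = RendezvousService()
rv.register("/video/lectures", "wlan0@gateway-A")
rv.register("/video/lectures", "lte0@gateway-B")   # source moved: re-register
assert rv.resolve("/video/lectures/icn-intro/seg0") == "lte0@gateway-B"
```

The benefit is that source movement updates one rendezvous entry instead of propagating routing-table changes through the network.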
Abstract:
Despite the potential for e-commerce growth in Latin America, studies investigating the factors that influence consumers’ Internet purchasing behavior are very limited. This research addresses this limitation with a consumer-centric study in Chile using the Theory of Reasoned Action. The study examines Chilean consumers’ beliefs, perceptions of risk, and subjective norms about continued purchasing on the Internet. Findings show that consumers’ attitude towards purchasing on the Internet is an influential factor in intentions to continue Internet purchasing. Additionally, compatibility and result demonstrability influence attitudes towards this behavior. The study contributes to the important area of technology post-adoption behavior.
Abstract:
This article argues that the concept of national media systems, and the comparative study of media systems, institutions, and practices, retains relevance in an era of media globalization and technological convergence. It considers various critiques of ‘media systems’ theories, such as those that view the concept of ‘system’ as a legacy of an outdated positivism and those that argue that media globalization is weakening the relevance of nation-states in structuring the field of media cultures and practices. It argues for the continuing centrality of nation-states to media processes, and the ongoing significance of the national space in an age of media globalization, with reference to case studies of Internet policies in China, Brazil, and Australia. These studies indicate that nation-states remain critical actors in media governance and that domestic actors largely shape the central dynamics of media policies, even where media technologies and platforms enable global flows of media content.
Abstract:
This is the fourth TAProViz workshop, run at the 13th International Conference on Business Process Management (BPM). The intention this year is to consolidate the results of the previous successful workshops by further developing this important topic and identifying the key research topics of interest to the BPM visualization community. Towards this goal, the workshop topics were extended to human-computer interaction and related domains. Submitted papers were evaluated by at least three program committee members, in a double-blind manner, on the basis of significance, originality, technical quality, and exposition. Three full papers and one position paper were accepted for presentation at the workshop. In addition, we invited a keynote speaker, Jakob Pinggera, a postdoctoral researcher at the Business Process Management Research Cluster at the University of Innsbruck, Austria.
Abstract:
The use of laptops and the Internet has produced the technological conditions under which instructors and students can take advantage of the diversity of online information, communication, collaboration, and sharing with others. The integration of Internet services into teaching practices can drive thematic, social, and digital improvement for the agents involved. There are many benefits to using a Learning Management System (LMS) such as Moodle to support lectures in higher education. We also consider its implications for student support and online interaction, leading educational agents to combine different learning environments in which face-to-face instruction is blended with computer-mediated instruction (blended learning), increasing the possibilities for better quality and quantity of human communication in a learning context. In general, learning management systems contain synchronous and asynchronous communication tools, management features, and assessment utilities. These assessment utilities allow lecturers to systematize basic assessment tasks: assessments can be delivered immediately to the student and, upon completion, returned at once with grades and detailed feedback. Learning management systems can therefore also be used for assessment purposes in higher education.
Abstract:
This paper examines how implant technology can be used either to increase the range of human abilities or to diminish the effects of a neural illness, such as Parkinson's disease. The key element is the need for a clear interface linking the human brain directly with a computer. The area of interest here is implant technology, particularly where a connection is made between technology and the human brain and/or nervous system. Pilot tests and experimentation are invariably carried out a priori to investigate the eventual possibilities before human subjects are themselves involved. Some of the more pertinent animal studies are discussed here. The paper goes on to describe human experimentation, in particular that carried out by the author himself, which led to him receiving a neural implant linking his nervous system bi-directionally with the Internet. With this in place, neural signals were transmitted to various technological devices to control them directly. In particular, feedback to the brain was obtained from the fingertips of a robot hand and from ultrasonic (extra) sensory input. A view is taken of the prospects for the future, both in the near term as a therapeutic device and in the long term as a form of enhancement.
Abstract:
As a highly urbanized and flood-prone region, Flanders has experienced multiple floods causing significant damage in the past. In response to the floods of 1998 and 2002, the Flemish Environment Agency, responsible for managing 1 400 km of unnavigable rivers, started setting up a real-time flood forecasting system in 2003. Currently the system covers almost 2 000 km of unnavigable rivers, for which flood forecasts are accessible online (www.waterinfo.be). The forecasting system comprises more than 1 000 hydrologic and 50 hydrodynamic models, which are supplied with radar rainfall, rainfall forecasts, and on-site observations. Forecasts for the next 2 days are generated hourly, while 10-day forecasts are generated twice a day. Additionally, twice-daily simulations based on percentile rainfall forecasts (from EPS predictions) provide uncertainty bands for the latter. Each flood forecast thus uses the most recent rainfall predictions and observations available, while longer-term uncertainty is taken into account. The flood forecasting system produces high-resolution dynamic flood maps and graphs at about 200 river gauges and more than 3 000 forecast points. A customized emergency response system generates phone calls and text messages to a team of hydrologists, initiating a pro-active response to prevent upcoming flood damage. The flood forecasting system of the Flemish Environment Agency is constantly evolving and has proven to be an indispensable tool in flood crisis management. This was clearly the case during the November 2010 floods, when the agency issued a press release 2 days in advance, allowing water managers, emergency services, and civilians to take measures.
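The following is a minimal sketch of two elements described above: deriving uncertainty bands from ensemble (EPS-driven) simulations and triggering a pro-active alert when the band crosses a flood threshold. The percentile choices, threshold, and notification hook are illustrative assumptions, not the agency's actual configuration.

```python
# Sketch: uncertainty bands from ensemble simulations plus a threshold alert.
import numpy as np

def uncertainty_bands(ensemble_levels, lower_pct=10, upper_pct=90):
    """ensemble_levels: array (n_members, n_timesteps) of simulated levels."""
    lower = np.percentile(ensemble_levels, lower_pct, axis=0)
    median = np.percentile(ensemble_levels, 50, axis=0)
    upper = np.percentile(ensemble_levels, upper_pct, axis=0)
    return lower, median, upper

def check_alert(upper_band, flood_threshold, notify):
    # Alert as soon as the upper band exceeds the gauge's flood threshold.
    exceed = np.nonzero(upper_band > flood_threshold)[0]
    if exceed.size:
        notify(f"Upper forecast band exceeds threshold at step {exceed[0]}")

# Toy usage with synthetic members for one gauge over a 10-day hourly horizon.
rng = np.random.default_rng(42)
members = 1.2 + 0.4 * rng.random((50, 240)) * np.linspace(0, 2, 240)
lo, med, hi = uncertainty_bands(members)
check_alert(hi, flood_threshold=1.9, notify=print)
```

In the operational system the notification step would feed the emergency response chain (phone calls and text messages) rather than printing to a console.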
Abstract:
Internet access via wireless networks has grown considerably in recent years. However, these networks are vulnerable to security problems, especially denial-of-service attacks. Intrusion Detection Systems (IDS) are widely used to improve network security, but comparing the several existing approaches is not a trivial task. This paper proposes building a dataset for evaluating IDS in wireless environments. The data were captured in a real, operating network. We conducted tests using traditional IDS and achieved good results, demonstrating the effectiveness of the proposed approach.
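As an illustration of the evaluation step: once captured traffic is labeled (normal vs. attack), IDS verdicts can be scored with standard detection metrics. The sketch below is generic and assumes nothing about the dataset's actual format or the IDS under test.

```python
# Score an IDS's verdicts against ground-truth labels per flow/packet.

def detection_metrics(labels, verdicts):
    """labels/verdicts: sequences of 'attack' or 'normal'."""
    tp = sum(l == "attack" and v == "attack" for l, v in zip(labels, verdicts))
    fp = sum(l == "normal" and v == "attack" for l, v in zip(labels, verdicts))
    fn = sum(l == "attack" and v == "normal" for l, v in zip(labels, verdicts))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0   # detection rate
    return {"precision": precision, "recall": recall, "false_alarms": fp}

ground_truth = ["normal", "attack", "attack", "normal", "attack"]
ids_output   = ["normal", "attack", "normal", "attack", "attack"]
print(detection_metrics(ground_truth, ids_output))
# -> precision ~0.667, recall ~0.667, false_alarms 1
```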
Abstract:
Technological advances in hardware, software, and IP networks such as the Internet or peer-to-peer file-sharing systems are threatening the music business. The result has been an increasing number of illegal copies available online as well as offline. With the emergence of digital rights management systems (DRMS), the music industry seems to have found an appropriate tool to simultaneously fight piracy and monetize its assets. Although these systems are very powerful and include multiple technologies to prevent piracy, it is as yet unknown to what extent such systems are currently being used by content providers. We provide empirical analyses, results, and conclusions related to digital rights management systems and the protection of digital content in the music industry. The analysis shows that most content providers protect their digital content through a variety of technologies such as passwords or encryption. However, each protection technology has its own specific goal, and not all prevent piracy. The majority of the respondents are satisfied with their current protection but want to reinforce it in the future, for fear of increasing piracy. Surprisingly, although encryption is seen as the core DRM technology, only a few companies currently use it. Finally, half of the respondents do not believe in the success of DRMS and their ability to reduce piracy.
Abstract:
Information-centric networking (ICN) is a new communication paradigm that has been proposed to cope with drawbacks of host-based communication protocols, namely scalability and security. In this thesis, we base our work on Named Data Networking (NDN), a popular ICN architecture, and investigate NDN in the context of wireless and mobile ad hoc networks.

In a first part, we focus on NDN efficiency (and potential improvements) in wireless environments by investigating NDN in wireless one-hop communication, i.e., without any routing protocols. A basic requirement to initiate information-centric communication is knowledge of existing and available content names. Therefore, we develop three opportunistic content discovery algorithms and evaluate them in diverse scenarios for different node densities and content distributions. After content names are known, requesters can retrieve content opportunistically from any neighbour node that provides the content. However, in case of short contact times to content sources, content retrieval may be disrupted. Therefore, we develop a requester application that keeps meta information on disrupted content retrievals and enables resume operations when a new content source has been found. Besides message efficiency, we also evaluate the power consumption of information-centric broadcast and unicast communication. Based on our findings, we develop two mechanisms to increase the efficiency of information-centric wireless one-hop communication. The first approach, called Dynamic Unicast (DU), avoids broadcast communication whenever possible, since broadcast transmissions result in more duplicate Data transmissions, lower data rates, and higher energy consumption on mobile nodes that are not interested in overheard Data, compared to unicast communication. Hence, DU uses broadcast communication only until a content source has been found and then retrieves content directly via unicast from the same source. The second approach, called RC-NDN, targets the efficiency of wireless broadcast communication by reducing the number of duplicate Data transmissions. In particular, RC-NDN is a Data encoding scheme for content sources that increases diversity in wireless broadcast transmissions such that multiple concurrent requesters can profit from each other's (overheard) message transmissions.

If requesters and content sources are not within one-hop distance of each other, requests need to be forwarded via multi-hop routing. Therefore, in a second part of this thesis, we investigate information-centric wireless multi-hop communication. First, we consider multi-hop broadcast communication in the context of rather static community networks. We introduce the concept of preferred forwarders, which relay Interest messages slightly faster than non-preferred forwarders to reduce redundant duplicate message transmissions. While this approach works well in static networks, the performance may degrade in mobile networks if preferred forwarders regularly move away. Thus, to enable routing in mobile ad hoc networks, we extend DU for multi-hop communication. Compared to one-hop communication, multi-hop DU requires efficient path update mechanisms (since multi-hop paths may expire quickly) and new forwarding strategies to maintain NDN benefits (request aggregation and caching) such that only a few messages need to be transmitted over the entire end-to-end path, even in the case of multiple concurrent requesters.
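A minimal sketch of the Dynamic Unicast decision logic described above follows; the send/receive hooks and bookkeeping are illustrative assumptions rather than the thesis implementation.

```python
# Sketch: broadcast Interests only until a content source answers, then
# address subsequent Interests to that source directly via unicast.

class DynamicUnicastRequester:
    def __init__(self, send_broadcast, send_unicast):
        self.send_broadcast = send_broadcast   # f(interest_name)
        self.send_unicast = send_unicast       # f(interest_name, source_addr)
        self.known_sources = {}                # name prefix -> source address

    def request(self, name, prefix):
        source = self.known_sources.get(prefix)
        if source is None:
            # Discovery phase: broadcast is unavoidable until a source is found.
            self.send_broadcast(name)
        else:
            # Retrieval phase: unicast avoids duplicate Data transmissions and
            # overhearing costs at uninterested neighbours.
            self.send_unicast(name, source)

    def on_data(self, prefix, source_addr):
        # Remember the answering source for subsequent Interests.
        self.known_sources[prefix] = source_addr

    def on_timeout(self, prefix):
        self.known_sources.pop(prefix, None)   # source moved away: rediscover

requester = DynamicUnicastRequester(
    send_broadcast=lambda name: print("BCAST", name),
    send_unicast=lambda name, src: print("UCAST", name, "->", src),
)
requester.request("/sensor/temp/1", prefix="/sensor/temp")   # broadcast
requester.on_data("/sensor/temp", source_addr="node-7")
requester.request("/sensor/temp/2", prefix="/sensor/temp")   # unicast
```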
To perform quick retransmission in case of collisions or other transmission errors, we implement and evaluate retransmission timers from related work and compare them to CCNTimer, a new algorithm that enables shorter content retrieval times in information-centric wireless multi-hop communication. Yet, in case of intermittent connectivity between requesters and content sources, multi-hop routing protocols may not work because they require continuous end-to-end paths. Therefore, we present agent-based content retrieval (ACR) for delay-tolerant networks. In ACR, requester nodes can delegate content retrieval to mobile agent nodes, which move closer to content sources, retrieve content, and return it to requesters. Thus, ACR exploits the mobility of agent nodes to retrieve content from remote locations. To enable delay-tolerant communication via agents, retrieved content needs to be stored persistently such that requesters can verify its authenticity via original publisher signatures. To achieve this, we develop a persistent caching concept that maintains received popular content in repositories and deletes unpopular content when free space is required. Since our persistent caching concept can complement regular short-term caching in the content store, it can also be used for network caching to store popular delay-tolerant content at edge routers (to reduce network traffic and improve network performance) while real-time traffic can still be maintained and served from the content store.
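The persistent caching concept can be illustrated with a small popularity-based repository sketch; the hit-count popularity proxy and capacity policy below are simplifying assumptions, not the exact replacement policy of the thesis.

```python
# Sketch: keep popular verified content in a repository and evict the least
# popular items when space is needed.

class PersistentCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}   # name -> (content, publisher_signature)
        self.hits = {}    # name -> request count (popularity proxy)

    def insert(self, name, content, signature):
        while len(self.store) >= self.capacity:
            # Evict the least popular entry to make room.
            victim = min(self.store, key=lambda n: self.hits.get(n, 0))
            del self.store[victim]
            self.hits.pop(victim, None)
        self.store[name] = (content, signature)

    def get(self, name):
        if name in self.store:
            self.hits[name] = self.hits.get(name, 0) + 1
            content, signature = self.store[name]
            # The signature stays with the content so requesters can verify
            # authenticity against the original publisher.
            return content, signature
        return None

repo = PersistentCache(capacity=100)
repo.insert("/videos/popular-talk", content=b"...", signature=b"publisher-sig")
```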
Abstract:
Computing is becoming the fifth utility (alongside gas, water, electricity, and telephony), partly owing to the impact of cloud computing on most organizations. This kind of computing is being used by an ever-growing range of systems, including critical systems. This has an impact on the internal complexity and the reliability of the organization's systems, both those used internally and those offered to clients. This work investigates the use of cloud computing by critical systems, focusing on the dependencies and especially on the reliability of these systems. Some examples of its use are presented and, although its adoption in critical systems is not yet widespread, its potential impact is outlined. The aim of this work is first to define a model that can quantitatively represent reliability interdependencies for organizations that use these systems, and then to apply this model to a critical system in the healthcare domain and present the results. The concepts of "macro-dependability" and "micro-dependability" are introduced in the model to define interdependence and to analyse the reliability of systems that depend on other systems. ABSTRACT With the increasing utilization of Internet services and cloud computing by most organizations (both private and public), it is clear that computing is becoming the 5th utility (along with water, electricity, telephony, and gas). These technologies are used for almost all types of systems, and their number is increasing, including Critical Infrastructure (CI) systems. Even if Critical Infrastructure systems appear not to rely directly on cloud services, there may be hidden inter-dependencies. This is true even for private cloud computing, which seems more secure and reliable. Critical systems may in some cases have begun with a clear and simple design, but they have evolved, as described by Egan, into "rafted" networks. Because they are usually controlled by one or a few organizations, their dependencies can be understood even when they are complex systems. The organization oversees and manages changes. These CI systems have been affected by the introduction of new ICT models such as global communications, PCs, and the Internet. Virtualization took longer to be adopted by critical systems because of their strategic nature, but once these technologies had been proven in other areas they were eventually adopted as well, for reasons such as cost. A new technology model, called cloud computing, is now emerging; it builds on earlier technologies (virtualization, distributed and utility computing, web and software services) offered in new ways. Organizations are migrating more services to the cloud; this will have an impact on their internal complexity and on the reliability of the systems they offer to the organization itself and to their clients. This added complexity and the associated risks to reliability are not always recognized. Likewise, when two or more CI systems interact, the risks of one can affect the rest, so the risks are shared. This work investigates the use of cloud computing by critical systems, focusing on the dependencies and reliability of these systems. Some examples are presented together with the associated risks. A framework is introduced for analysing the dependability and resilience of a system that relies on cloud services, and how to improve them.
As part of the framework, the concepts of micro- and macro-dependability are introduced to explain the internal and external dependability on services supplied by an external cloud. A pharmacovigilance system model has been used for framework validation.
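As a minimal quantitative illustration of combining micro-dependability (internal components) and macro-dependability (external cloud services), the sketch below composes availabilities in series; the component names and figures are invented for illustration and are not results from the pharmacovigilance validation.

```python
# Sketch: a critical system depending in series on internal components
# (micro) and external cloud services (macro); the composite availability
# is the product of the parts.

def series_availability(availabilities):
    result = 1.0
    for a in availabilities:
        result *= a
    return result

micro = {"application": 0.9995, "database": 0.9990}           # internal parts
macro = {"cloud_storage": 0.9950, "cloud_messaging": 0.9990}  # external clouds

overall = series_availability(list(micro.values()) + list(macro.values()))
print(f"composite availability: {overall:.4f}")   # ~0.9925
```

The series form makes the hidden-dependency point quantitative: a single external cloud service with modest availability dominates the composite figure, however reliable the internal components are.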