4 results for personal information management model

in CUNY Academic Works


Relevance:

100.00%

Abstract:

The Short-term Water Information and Forecasting Tools (SWIFT) is a suite of tools for flood and short-term streamflow forecasting, consisting of a collection of hydrologic model components and utilities. Catchments are modelled using conceptual subareas and a node-link structure for channel routing. The tools comprise modules for calibration, model state updating, output error correction, ensemble runs and data assimilation. Given the combinatorial nature of the modelling experiments and the sub-daily time steps typically used for simulations, the volume of model configurations and time series data is substantial, and managing it is not trivial. SWIFT is currently used mostly for research purposes but has also been used operationally, with intersecting but significantly different requirements. Early versions of SWIFT used mostly ad hoc text files handled via Fortran code, with limited use of netCDF for time series data. The configuration and data handling modules have since been redesigned. The model configuration now follows a design in which the data model is decoupled from the on-disk persistence mechanism. For research purposes the preferred on-disk format is JSON, which leverages the numerous software libraries available in a variety of languages, while retaining the legacy option of custom tab-separated text formats where that remains the researcher's preferred access arrangement. By decoupling the data model from data persistence, it becomes much easier to substitute, for instance, a relational database to provide stricter provenance and audit-trail capabilities in an operational flood forecasting context. For the time series data, given the volume and required throughput, text-based formats are usually inadequate; a schema derived from the CF conventions has been designed to handle time series efficiently for SWIFT.
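To make the decoupling concrete, here is a minimal sketch in Python, assuming a simple key-value model configuration. The class names (ConfigStore, JsonConfigStore, TsvConfigStore) and the example parameters are hypothetical illustrations of the pattern, not SWIFT's actual API:

import json
from abc import ABC, abstractmethod
from typing import Any, Dict

class ConfigStore(ABC):
    """Abstract persistence backend: the in-memory data model never
    depends on the on-disk format."""

    @abstractmethod
    def save(self, config: Dict[str, Any], path: str) -> None: ...

    @abstractmethod
    def load(self, path: str) -> Dict[str, Any]: ...

class JsonConfigStore(ConfigStore):
    """Preferred research format: readable from many languages."""

    def save(self, config: Dict[str, Any], path: str) -> None:
        with open(path, "w") as f:
            json.dump(config, f, indent=2)

    def load(self, path: str) -> Dict[str, Any]:
        with open(path) as f:
            return json.load(f)

class TsvConfigStore(ConfigStore):
    """Legacy option: flat tab-separated key-value pairs."""

    def save(self, config: Dict[str, Any], path: str) -> None:
        with open(path, "w") as f:
            for key, value in config.items():
                f.write(f"{key}\t{value}\n")

    def load(self, path: str) -> Dict[str, Any]:
        with open(path) as f:
            return dict(line.rstrip("\n").split("\t", 1) for line in f)

# The configuration itself is plain data; swapping the persistence
# mechanism (JSON, TSV, or a relational backend) requires no change
# to the data model. Parameter names here are invented for the sketch.
subarea = {"subarea_id": "node_12", "model": "GR4J", "x1": 350.0}
store: ConfigStore = JsonConfigStore()  # or TsvConfigStore()
store.save(subarea, "subarea_12.json")
print(store.load("subarea_12.json"))

A relational backend would implement the same two-method interface, which is what makes the stricter provenance and audit-trail option a drop-in substitution rather than a redesign.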

Relevance:

100.00%

Abstract:

Recently, two international standards organizations, ISO and OGC, have undertaken standardization work for GIS. Current standardization work for providing interoperability among GIS databases focuses on the design of open interfaces, but it has not considered procedures and methods for designing river geospatial data. Consequently, river geospatial data has its own model, and when data are shared through open interfaces among heterogeneous GIS databases, differences between models result in the loss of information. In this study, a plan was suggested both to respond to these changes in the information environment and to provide a future Smart River-based river information service, by assessing the current state of the river geospatial data model and improving and redesigning the database. Accordingly, primary and foreign keys, which distinguish attribute information and entity linkages, were redefined to increase usability. The construction of attribute-information tables and the entity-relationship diagram were redefined to redesign the linkages among tables from the perspective of a river standard database. In addition, this study was undertaken to expand the current supplier-oriented operating system into a demand-oriented one by establishing efficient management of river-related information and a utilization system capable of adapting to changes in the river management paradigm.
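As a concrete illustration of redefining primary and foreign keys so that attribute information stays linked to its river entity, here is a minimal sketch using Python's built-in sqlite3 module. All table and column names are invented for this sketch and are not taken from the actual river standard database:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

# Entity table: the primary key gives each river a stable identifier.
conn.execute("""
    CREATE TABLE river (
        river_id TEXT PRIMARY KEY,
        name     TEXT NOT NULL
    )
""")

# Attribute table: the foreign key ties each facility to its river,
# so the linkage survives exchange between heterogeneous databases.
conn.execute("""
    CREATE TABLE facility (
        facility_id TEXT PRIMARY KEY,
        river_id    TEXT NOT NULL REFERENCES river(river_id),
        kind        TEXT,   -- e.g. levee, weir, gauge
        station_km  REAL    -- chainage along the river
    )
""")

conn.execute("INSERT INTO river VALUES ('R001', 'Han River')")
conn.execute("INSERT INTO facility VALUES ('F010', 'R001', 'gauge', 12.5)")

# Joining through the redefined keys recovers entity-attribute linkages.
for row in conn.execute("""
    SELECT r.name, f.kind, f.station_km
    FROM facility f JOIN river r ON r.river_id = f.river_id
"""):
    print(row)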

Relevance:

100.00%

Abstract:

In the field of operational water management, Model Predictive Control (MPC) has gained popularity owing to its versatility and flexibility. The MPC controller, which takes predictions, time delays and uncertainties into account, can be designed for multi-objective management problems and for large-scale systems. Nonetheless, a critical obstacle in MPC is the large computational burden that arises when a large-scale system is considered or a long prediction horizon is involved. To address this problem, we use an adaptive prediction accuracy (APA) approach that can reduce the computational burden by almost half. The proposed MPC-APA scheme is tested on the northern Dutch water system, which comprises Lake IJssel, Lake Marker, the River IJssel and the North Sea Canal. The simulation results show that the MPC-APA scheme reduces computation time to a large extent and that a flood protection problem over longer prediction horizons can be solved effectively.
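The abstract does not spell out the APA mechanism, but one plausible realization is a non-uniform discretization of the prediction horizon: fine time steps near the present, coarser steps further out, so that fewer decision variables span the same horizon. The toy reservoir model, step pattern and all numbers below are illustrative assumptions, not the controller or water system used in the study:

import numpy as np

AREA, INFLOW, H_REF = 1.0e6, 50.0, 0.0  # storage area (m^2), inflow (m^3/s), target level (m)

def first_control_move(h0, dts, r=1e-6):
    """One unconstrained linear-quadratic MPC step, solved exactly.

    Level dynamics: h[k] = h[k-1] + dt[k] * (INFLOW - u[k]) / AREA,
    so predicted levels are affine in the pump rates u and the
    tracking problem reduces to weighted least squares."""
    n = len(dts)
    M = np.tril(np.tile(dts / AREA, (n, 1)))      # sensitivity d h_k / d u_i
    c = h0 + np.cumsum(dts) * INFLOW / AREA       # predicted levels if u == 0
    w = np.sqrt(dts)                              # weight errors by step length
    A = np.vstack([w[:, None] * M, np.sqrt(r * dts)[:, None] * np.eye(n)])
    b = np.concatenate([w * (c - H_REF), np.zeros(n)])
    u = np.linalg.lstsq(A, b, rcond=None)[0]
    return u[0]                                   # receding horizon: apply only u[0]

hour = 3600.0
uniform  = np.full(32, hour)                                      # 32 one-hour steps
adaptive = np.array([1]*4 + [2]*2 + [4]*2 + [8]*2, float) * hour  # 10 steps, same 32 h
print(len(uniform), "vs", len(adaptive), "decision variables")
print(first_control_move(0.5, uniform), first_control_move(0.5, adaptive))

Both grids cover the same 32-hour horizon, but the adaptive grid needs far fewer decision variables; shrinking the optimization problem this way is where the computational savings come from.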

Relevance:

100.00%

Abstract:

In recent decades, library associations have advocated for the adoption of privacy and confidentiality policies as practical support for the Library Code of Ethics, with a threefold purpose: to (1) define and uphold privacy practices within the library, (2) convey privacy practices to patrons, and (3) protect against potential liability and public relations problems. The adoption of such policies has been instrumental in providing libraries with effective responses to surveillance initiatives such as warrantless requests and the USA PATRIOT Act. Nevertheless, as reflected in recent news stories, the rapid emergence of data brokerage relationships and technologies, and the increasing need for libraries to use third-party vendor services, have increased opportunities for data surveillers to access patrons’ personal information and reading habits, which are funneled and made available through multiple online library service platforms. Additionally, the advice that libraries should “contract for the same level of privacy reflected in their privacy policies” is no longer realistic, given that the existence of multiple vendor contracts negotiated at arm’s length is likely to produce varying privacy terms and even varying definitions of what constitutes personally identifiable information (PII). These conditions sharply threaten the effectiveness and relevance of library privacy policies and privacy initiatives: such policies increasingly offer false comfort by failing to reflect privacy weaknesses in the data-sharing landscape when library-vendor contracts fail to keep pace with vendors’ data-sharing capabilities. While some argue that library privacy ethics are antiquated and rendered obscure in the current online sharing economy, Pew studies point to pronounced public discomfort with increasing privacy erosion. At the same time, new directions in FTC enforcement raise the possibility that public institutions’ privacy policies may serve as swords against unfair or deceptive commercial trade practices, offering the potential of renewed relevance for library privacy and confidentiality policies. This dual coin of public concern and potential for enhanced FTC enforcement suggests that, when crafting privacy policies, libraries must now walk a knife’s edge: offering patrons realistic notice about the limits of the protections the library can ensure while publicly holding vendors accountable to library privacy ethics and expectations. Potential solutions for how to walk this edge are developed and offered as a subject for further discussion, to assist in the modification of model policies for public and academic libraries alike.