6 results for Collected Works in CUNY Academic Works
Graduate School and University Center Archives Finding Aid - Record Group II: Centers and Institutes
Abstract:
This is part of the finding aid to the Graduate School and University Center (GSUC) Archives, City University of New York. Record Group II comprises material collected from research centers and institutes at the GSUC.
Abstract:
I consider the case for genuinely anonymous web searching. Big data seems to have it in for privacy. The story is well known, particularly since the dawn of the web. Vastly more personal information, monumental and quotidian, is gathered than in the pre-digital days. Once gathered, it can be aggregated and analyzed to produce rich portraits, which in turn permit unnerving prediction of our future behavior. The new information can then be shared widely, limiting prospects and threatening autonomy. How should we respond? Following Nissenbaum (2011) and Brunton and Nissenbaum (2011 and 2013), I will argue that the proposed solutions—consent, anonymity as conventionally practiced, corporate best practices, and law—fail to protect us against routine surveillance of our online behavior. Brunton and Nissenbaum rightly maintain that, given the power imbalance between data holders and data subjects, obfuscation of one’s online activities is justified. Obfuscation works by generating “misleading, false, or ambiguous data with the intention of confusing an adversary or simply adding to the time or cost of separating good data from bad,” thus decreasing the value of the data collected (Brunton and Nissenbaum, 2011). The phenomenon is as old as the hills. Natural selection evidently blundered upon the tactic long ago. Take a savory butterfly whose markings mimic those of a toxic cousin. From the point of view of a would-be predator the data conveyed by the pattern is ambiguous. Is the bug lunch or potential last meal? In light of the steep costs of a mistake, the savvy predator goes hungry. Online obfuscation works similarly, attempting, for instance, to disguise the surfer’s identity (Tor) or the nature of her queries (Howe and Nissenbaum 2009). Yet online obfuscation comes with significant social costs. First, it implies free riding. If I’ve installed an effective obfuscating program, I’m enjoying the benefits of an apparently free internet without paying the costs of surveillance, which are shifted entirely onto non-obfuscators. Second, it permits sketchy actors, from child pornographers to fraudsters, to operate with near impunity. Third, online merchants could plausibly claim that, when we shop online, surveillance is the price we pay for convenience. If we don’t like it, we should take our business to the local brick-and-mortar and pay with cash. Brunton and Nissenbaum have not fully addressed the last two costs. Nevertheless, I think the strict defender of online anonymity can meet these objections. Regarding the third, the future doesn’t bode well for offline shopping. Consider music and books. Intrepid shoppers can still find most of what they want in a book or record store. Soon, though, this will probably not be the case. And then there are those who, for perfectly good reasons, are sensitive about doing some of their shopping in person, perhaps because of their weight or sexual tastes. I argue that consumers should not have to pay the price of surveillance every time they want to buy that catchy new hit, that New York Times bestseller, or a sex toy.
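To make the mechanism concrete, here is a minimal Python sketch of the query-obfuscation tactic used by tools in the TrackMeNot tradition (Howe and Nissenbaum 2009). It is an illustration only, not any tool's actual implementation; the decoy pool, the `issue_query` stub, and the timing parameters are all invented for the example.

```python
import random
import time

# Illustrative decoy pool; a real obfuscator draws decoys from evolving
# sources (e.g. RSS feeds) so that they stay plausible over time.
DECOY_QUERIES = [
    "weather forecast boston", "used car prices", "pasta recipes",
    "flight status delta", "local election results", "how to tie a tie",
]

def issue_query(query: str) -> None:
    # Placeholder for an actual HTTP request to a search engine.
    print(f"search: {query}")

def obfuscated_search(real_query: str, decoys_per_real: int = 5) -> None:
    """Bury one genuine query in a stream of machine-generated decoys,
    lowering the value of any single logged query to an observer."""
    queries = random.sample(DECOY_QUERIES, k=decoys_per_real) + [real_query]
    random.shuffle(queries)
    for q in queries:
        issue_query(q)
        time.sleep(random.uniform(0.5, 3.0))  # jitter to mimic human pacing

obfuscated_search("symptoms of measles")
```

The point mirrors the butterfly: a logged stream in which genuine and machine-generated queries are indistinguishable is worth far less to an adversary than a clean one.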
Abstract:
Each year, search engines like Google, Bing, and Yahoo process trillions of search queries online. Students are especially dependent on these search tools because of their popularity, convenience, and accessibility. However, what students are unaware of, by choice or naiveté, is the amount of personal information that is collected during each search session, how that data is used, and who is interested in their online behavior profile. Privacy policies are frequently updated in favor of the search companies, but they are lengthy and are often skimmed or ignored entirely, with little thought given to how personal web habits are being exploited for analytics and marketing. As an Information Literacy instructor and a member of the Electronic Frontier Foundation, I believe in the importance of educating college students, and web users in general, that they have a right to privacy online. Class discussions on the topic of web privacy have yielded an interesting perspective on internet search usage. Students are unaware of how their online behavior is recorded, and they have consistently expressed hesitancy to use tools that disguise or delete their IP address because of the stigma that doing so may imply they have something to hide or are engaging in illegal activity. Additionally, students fear they will have to surrender the convenience of ubiquitous connectivity in their applications to maintain their privacy. The purpose of this lightning presentation is to provide educators with a lesson plan that highlights and simplifies the privacy terms of the three major search engines: Google, Bing, and Yahoo. The presentation focuses on what data these search engines collect about users, how that data is used, and alternative search solutions, like DuckDuckGo, for increased privacy. Students will benefit directly from this lesson because informed internet users can protect their data, feel safer online, and become more effective web searchers.
Abstract:
We discuss the development and performance of a low-power sensor node (hardware, software, and algorithms) that autonomously controls the sampling interval of a suite of sensors based on local state estimates and future predictions of water flow. The problem is motivated by the need to accurately reconstruct abrupt state changes in urban watersheds and stormwater systems. Presently, the detection of these events is limited by the temporal resolution of sensor data. It is often infeasible, however, to increase measurement frequency due to energy and sampling constraints. This is particularly true for real-time water quality measurements, where sampling frequency is limited by reagent availability, sensor power consumption, and, in the case of automated samplers, the number of available sample containers. These constraints pose a significant barrier to the ubiquitous and cost-effective instrumentation of large hydraulic and hydrologic systems. Each of our sensor nodes is equipped with a low-power microcontroller and a wireless module to take advantage of urban cellular coverage. The node persistently updates a local, embedded model of flow conditions, while IP-connectivity permits each node to continually query public weather servers for hourly precipitation forecasts. The sampling frequency is then adjusted to increase the likelihood of capturing abrupt changes in a sensor signal, such as the rise in the hydrograph, an event that is often difficult to capture through traditional sampling techniques. Our architecture forms an embedded processing chain, leveraging local computational resources to assess uncertainty by analyzing data as it is collected. A network is presently being deployed in an urban watershed in Michigan, and initial results indicate that the system accurately reconstructs signals of interest while significantly reducing energy consumption and the use of sampling resources. We also expand our analysis by discussing the role of this approach for the efficient real-time measurement of stormwater systems.
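The abstract does not give the node's control law, but the idea of trading sampling interval against estimated urgency can be sketched in a few lines of Python. The exponential schedule, the two scalar inputs (local rate of change and forecast rain probability), and all constants below are illustrative assumptions, not the deployed algorithm.

```python
import math

def next_sample_interval(rate_of_change: float,
                         rain_prob: float,
                         base_interval_s: float = 3600.0,
                         min_interval_s: float = 60.0) -> float:
    """Shrink the sampling interval as the local signal derivative or the
    forecast probability of precipitation grows; relax toward the base
    interval in quiet conditions to conserve energy and reagents."""
    # Urgency in [0, 1]: driven by a fast-changing signal or imminent rain.
    urgency = max(min(abs(rate_of_change), 1.0), rain_prob)
    # Exponential schedule: urgency 0 -> base interval, urgency 1 -> minimum.
    interval = base_interval_s * math.exp(
        -urgency * math.log(base_interval_s / min_interval_s))
    return max(interval, min_interval_s)

# Quiet conditions: sample roughly hourly.
print(next_sample_interval(rate_of_change=0.01, rain_prob=0.0))   # ~3456 s
# Storm forecast: sample about every minute to catch the rising hydrograph.
print(next_sample_interval(rate_of_change=0.05, rain_prob=0.95))  # ~74 s
```

An embedded implementation would replace the two scalar inputs with the node's state estimate and the parsed precipitation forecast, but the trade-off it encodes is the same.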
Abstract:
Interoperability of water quality data depends on the use of common models, schemas, and vocabularies. However, terms are usually collected during different activities and projects in isolation from one another, resulting in vocabularies that have the same scope being represented with different terms, using different formats and formalisms, and published through various access methods. Significantly, most water quality vocabularies conflate multiple concepts in a single term, e.g. quantity kind, units of measure, substance or taxon, medium, and procedure. This bundles information associated with separate elements from the OGC Observations and Measurements (O&M) model into a single slot. We have developed a water quality vocabulary, formalized using RDF, and published as Linked Data. The terms were extracted from existing water quality vocabularies. The observable property model is inspired by O&M but aligned with existing ontologies. The core is an OWL ontology that extends the QUDT ontology for Unit and QuantityKind definitions. We add classes to generalize the QuantityKind model, and properties for explicit description of the conflated concepts. The key elements are defined to be sub-classes or sub-properties of SKOS elements, which enables a SKOS view to be published through standard vocabulary APIs, alongside the full view. QUDT terms are re-used where possible, supplemented with additional Unit and QuantityKind entries required for water quality. Along with items from separate vocabularies developed for objects, media, and procedures, these are linked into definitions in the actual observable property vocabulary. Definitions of objects related to chemical substances are linked to items from the Chemical Entities of Biological Interest (ChEBI) ontology. Mappings to other vocabularies, such as DBPedia, are maintained in separate files. By formalizing the model for observable properties, and clearly labelling the separate concerns, water quality observations from different sources may be more easily merged and also transformed to O&M for cross-domain applications.
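As an illustration of the modeling pattern described here, the following Python sketch (using rdflib) builds one observable-property term with the usually conflated concepts broken out into explicit properties. The `WQ` namespace and its property names are hypothetical; the QUDT and ChEBI URIs follow those vocabularies' published patterns, but exact identifiers should be checked against the current releases.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

# Hypothetical namespace for the example vocabulary.
WQ   = Namespace("http://example.org/water-quality/")
QUDT = Namespace("http://qudt.org/schema/qudt/")
UNIT = Namespace("http://qudt.org/vocab/unit/")
QK   = Namespace("http://qudt.org/vocab/quantitykind/")
OBO  = Namespace("http://purl.obolibrary.org/obo/")

g = Graph()
g.bind("skos", SKOS)
g.bind("wq", WQ)

term = WQ["NitrateConcentrationInWater"]
# One observable-property term: modeled as a SKOS concept (so a plain
# SKOS view can be published), with each formerly conflated concern
# carried by its own property.
g.add((term, RDF.type, SKOS.Concept))
g.add((term, SKOS.prefLabel,
       Literal("Nitrate concentration in water", lang="en")))
g.add((term, WQ.quantityKind, QK["MassConcentration"]))  # what is measured
g.add((term, WQ.unit, UNIT["MilliGM-PER-L"]))            # unit of measure
g.add((term, WQ.objectOfInterest, OBO["CHEBI_17632"]))   # substance (ChEBI)
g.add((term, WQ.medium, WQ["Water"]))                    # sampled medium

print(g.serialize(format="turtle"))
```

Because each concern occupies its own slot, a consumer can map the quantity kind, unit, substance, and medium onto the corresponding O&M elements instead of parsing them out of a single opaque label.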
Abstract:
The objective of this study is to develop a Pollution Early Warning System (PEWS) for efficient management of water quality in oyster harvesting areas. To that end, this paper presents a web-enabled, user-friendly PEWS for managing water quality in oyster harvesting areas along the Louisiana Gulf Coast, USA. The PEWS consists of (1) an Integrated Space-Ground Sensing System (ISGSS) that gathers data on environmental factors influencing water quality, (2) an Artificial Neural Network (ANN) model for predicting the level of fecal coliform bacteria, and (3) a web-enabled, user-friendly Geographic Information System (GIS) platform for issuing water pollution advisories and managing oyster harvesting waters. The ISGSS (data acquisition system) collects near real-time environmental data from various sources, including the NASA MODIS Terra and Aqua satellites and in-situ sensing stations managed by the USGS and the NOAA. The ANN model is developed using the ANN program in the MATLAB Toolbox. The ANN model involves a total of six independent environmental variables (rainfall, tide, wind, salinity, temperature, and weather type), along with eight different combinations of the independent variables. The ANN model is constructed and tested using environmental and bacteriological data collected monthly from 2001 to 2011 by the Louisiana Molluscan Shellfish Program at seven oyster harvesting areas along the Louisiana coast, USA. The ANN model is capable of explaining about 76% of the variation in fecal coliform levels for the model training data and 44% for independent data. The web-based GIS platform is developed using ArcView GIS and ArcIMS. The web-based GIS system can be employed to map fecal coliform levels predicted by the ANN model, as well as potential risks of norovirus outbreaks in oyster harvesting waters. The PEWS is able to inform decision-makers of potential risks of fecal pollution and virus outbreaks on a daily basis, greatly reducing the risk that contaminated oysters pose to human health.
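The paper builds its network in MATLAB; as a rough open-source analogue, the sketch below trains a comparable six-input ANN with scikit-learn on synthetic data. The architecture, hyperparameters, and data are placeholders, not the study's model.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for the monthly monitoring data: six environmental
# predictors (rainfall, tide, wind, salinity, temperature, weather type)
# and a fecal coliform response. The real data come from the Louisiana
# Molluscan Shellfish Program records described above.
X = rng.random((500, 6))
y = X @ rng.random(6) + 0.1 * rng.standard_normal(500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(
    StandardScaler(),                       # ANN training needs scaled inputs
    MLPRegressor(hidden_layer_sizes=(10,),  # one small hidden layer
                 max_iter=2000,
                 random_state=0),
)
model.fit(X_train, y_train)

# R^2 on training and held-out data, analogous to the paper's
# "variation explained" for training vs. independent data.
print(f"train R^2: {model.score(X_train, y_train):.2f}")
print(f"test  R^2: {model.score(X_test, y_test):.2f}")
```

The gap the paper reports between training (76%) and independent (44%) performance is exactly what the train/test split here is meant to expose.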