931 results for Computer and Information Sciences


Relevance:

100.00%

Publisher:

Abstract:

Big Data Analytics is an emerging field, since massive storage and computing capabilities have been made available by advanced e-infrastructures. The Earth and environmental sciences are likely to benefit from Big Data Analytics techniques supporting the processing of the large number of Earth Observation datasets currently acquired and generated through observations and simulations. However, Earth Science data and applications present specificities in terms of the relevance of geospatial information, the wide heterogeneity of data models and formats, and the complexity of processing. Therefore, Big Earth Data Analytics requires specifically tailored techniques and tools. The EarthServer Big Earth Data Analytics engine offers a solution for coverage-type datasets, built around a high-performance array database technology and the adoption and enhancement of standards for service interaction (OGC WCS and WCPS). The EarthServer solution, guided by requirements collected from scientific communities and international initiatives, provides a holistic approach that ranges from query languages and scalability up to mobile access and visualization. The result is demonstrated and validated through the development of lighthouse applications in the Marine, Geology, Atmospheric, Planetary and Cryospheric science domains.
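The abstract names OGC WCS and WCPS as the service-interaction standards. As a rough illustration of what a WCPS request looks like, the Python sketch below merely assembles a query string; the coverage name `AvgLandTemp` and the point/time values are invented for the example, not taken from EarthServer.

```python
def build_wcps_query(coverage, lat, lon, time_start, time_end):
    """Build a WCPS query extracting a point time series, encoded as CSV.

    Illustrative only: coverage/axis names vary by server; this follows the
    general 'for ... return encode(...)' shape of the WCPS language.
    """
    return (
        f'for c in ({coverage}) '
        f'return encode(c[Lat({lat}), Long({lon}), '
        f'ansi("{time_start}":"{time_end}")], "csv")'
    )

query = build_wcps_query("AvgLandTemp", 53.08, 8.80, "2014-01", "2014-12")
print(query)
```

Such a string would typically be sent to a WCPS endpoint as the request body or a query parameter; the sketch stops at constructing it.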

Relevance:

100.00%

Publisher:

Abstract:

Ecosystem engineers that increase habitat complexity are keystone species in marine systems, increasing shelter and niche availability, and therefore biodiversity. For example, kelp holdfasts form intricate structures and host the largest number of organisms in kelp ecosystems. However, methods that quantify 3D habitat complexity have only seldom been used in marine habitats, and never in kelp holdfast communities. This study investigated the role of kelp holdfasts (Laminaria hyperborea) in supporting benthic faunal biodiversity. Computer-aided tomography (CT) scanning was used to quantify the three-dimensional geometrical complexity of holdfasts, including volume, surface area and surface fractal dimension (FD). Additionally, the number of haptera, the number of haptera per unit of volume, and the age of kelps were estimated. These measurements were compared to faunal biodiversity and community structure using partial least-squares regression and multivariate ordination. Holdfast volume explained most of the variance observed in biodiversity indices; however, all other complexity measures also strongly contributed to the variance observed. Multivariate ordinations further revealed that surface area and haptera per unit of volume accounted for the patterns observed in faunal community structure. Using 3D image analysis, this study makes a strong contribution to elucidating the quantitative mechanisms underlying the observed relationship between biodiversity and habitat complexity. Furthermore, the potential of CT scanning as an ecological tool is demonstrated, and a methodology for its use in future similar studies is established. Such spatially resolved image analysis could help identify structurally complex areas as biodiversity hotspots, and may support the prioritization of areas for conservation.
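Surface fractal dimension, one of the complexity measures above, is commonly estimated by box counting: cover the object with boxes of shrinking size and fit the slope of log(count) against log(1/size). The Python sketch below illustrates the generic idea on a small 2D binary grid; it is not the authors' CT pipeline, which operates on 3D scan volumes.

```python
import math

def box_count(grid, box):
    """Count boxes of side `box` that contain at least one filled cell."""
    n = len(grid)  # assumes a square n x n grid for simplicity
    count = 0
    for i in range(0, n, box):
        for j in range(0, n, box):
            if any(grid[x][y]
                   for x in range(i, min(i + box, n))
                   for y in range(j, min(j + box, n))):
                count += 1
    return count

def fractal_dimension(grid, sizes=(1, 2, 4, 8)):
    """Estimate FD as the least-squares slope of log(count) vs log(1/size)."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(grid, s)) for s in sizes]
    n = len(sizes)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# A completely filled 16x16 grid is plane-like, so its FD should be 2.
grid = [[1] * 16 for _ in range(16)]
print(round(fractal_dimension(grid), 2))  # → 2.0
```

A convoluted surface (like a holdfast boundary in a CT scan) would yield a non-integer dimension between 2 and 3 in the 3D analogue of this procedure.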

Relevance:

100.00%

Publisher:

Abstract:

Published in Electronic Handling of Information: Testing and Evaluation, Kent, Taulbee, Belzer, and Goldstein (eds.), Academic Press, London (1967), pp. 123–147.

Relevance:

100.00%

Publisher:

Abstract:

Problem: This dissertation presents a literature-based framework for communication in science (with the elements partners, purposes, message, and channel), which it then applies in and amends through an empirical study of how geoscientists use two social computing technologies (SCTs), blogging and Twitter (both general use and tweeting from conferences). How are these technologies used, and what value do scientists derive from them? Method: The empirical part used a two-pronged qualitative study, drawing on (1) purposive samples of ~400 blog posts and ~1000 tweets and (2) a purposive sample of 8 geoscientist interviews. Blog posts, tweets, and interviews were coded using the framework, adding new codes as needed. The results were aggregated into 8 geoscientist case studies, and general patterns were derived through cross-case analysis. Results: A detailed picture of how geoscientists use blogs and Twitter emerged, including a number of new functions not served by traditional channels. Some highlights: geoscientists use SCTs for communication among themselves as well as with the public. Blogs serve persuasion and personal knowledge management; Twitter often amplifies the signal of traditional communications such as journal articles. Blogs include tutorials for peers, reviews of basic science concepts, and book reviews. Twitter includes links to readings, requests for assistance, and discussions of politics and religion. Twitter at conferences provides live coverage of sessions. Conclusions: Both blogs and Twitter are routine parts of scientists' communication toolbox: blogs for in-depth, well-prepared essays, Twitter for faster and broader interactions. Both have important roles in supporting community building, mentoring, and learning and teaching. The Framework of Communication in Science was a useful tool in studying these two SCTs in this domain.
The results should encourage science administrators to facilitate SCT use by scientists in their organizations, and information providers to treat SCT documents as an important source of information.

Relevance:

100.00%

Publisher:

Abstract:

Extreme natural events, such as tsunamis or earthquakes, regularly lead to catastrophes with dramatic consequences. In recent years, natural disasters have caused hundreds of thousands of deaths, destruction of infrastructure, disruption of economic activity and the loss of billions of dollars worth of property, and have thus revealed considerable deficits hindering their effective management: stakeholders, decision-makers and affected persons need systematic risk identification and evaluation, a way to assess countermeasures, awareness raising, and decision support systems to be employed before, during and after crisis situations. The overall goal of this study is the interdisciplinary integration of various scientific disciplines to contribute to a tsunami early warning information system. In comparison to most studies, our focus is on high-end geometric and thematic analysis to meet the requirements of small-scale, heterogeneous and complex coastal urban systems. Data, methods and results from engineering, remote sensing and the social sciences are interlinked and provide comprehensive information for disaster risk assessment, management and reduction. In detail, we combine inundation modeling, urban morphology analysis, population assessment, socioeconomic analysis of the population and evacuation modeling. The interdisciplinary results eventually lead to recommendations for mitigation strategies in the fields of spatial planning and coping capacity. © Author(s) 2009.

Relevance:

100.00%

Publisher:

Abstract:

Part 4: Transition Towards Product-Service Systems

Relevance:

100.00%

Publisher:

Abstract:

To automate means to replace human beings in their job functions with a combination of people and automatic machine mechanisms; in other words, documentation specialists and computers are the cornerstone of any modern documentation and information system. From this point of view, the problem immediately arises of deciding which resources should be applied to solve the specific problem in each concrete case. We will not attempt to propose quick fixes or recipes for deciding what to do in every case; the solution must be found anew for each particular problem. What we want is to put forward some points that can serve as a basis for reflection and help find the best possible solution, once the problem has been correctly defined. The first thing to do before starting any automated system project is to define exactly the domain to be covered and to assess its importance as precisely as possible.

Relevance:

100.00%

Publisher:

Abstract:

Adaptability and invisibility are hallmarks of modern terrorism, and keeping pace with its dynamic nature presents a serious challenge for societies throughout the world. Innovations in computer science have incorporated applied mathematics to develop a wide array of predictive models to support the variety of approaches to counterterrorism. Predictive models are usually designed to forecast the location of attacks. Although this may protect individual structures or locations, it does not reduce the threat; it merely changes the target. While predictive models dedicated to events or social relationships receive much attention where the mathematical and social science communities intersect, models dedicated to terrorist locations such as safe-houses (rather than their targets or training sites) are rare and possibly nonexistent. At the time of this research, there were no publicly available models designed to predict locations where violent extremists are likely to reside. This research uses France as a case study to present a complex systems model that incorporates multiple quantitative, qualitative and geospatial variables that differ in terms of scale, weight, and type. Though many of these variables are recognized by specialists in security studies, there remains controversy with respect to their relative importance, degree of interaction, and interdependence. Additionally, some of the variables proposed in this research are not generally recognized as drivers, yet they warrant examination based on their potential role within a complex system. This research tested multiple regression models and determined that geographically weighted regression analysis produced the most accurate results, accommodating non-stationary coefficient behavior and demonstrating that geographic variables are critical to understanding and predicting the phenomenon of terrorism.
This dissertation presents a flexible prototypical model that can be refined and applied to other regions to inform stakeholders such as policy-makers and law enforcement in their efforts to improve national security and enhance quality-of-life.
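Geographically weighted regression, which the study found most accurate, fits a separate locally weighted regression at every location, with observations down-weighted by their distance from that location. The single-predictor Python sketch below shows only that core idea; the coordinates and values are invented for illustration and have nothing to do with the dissertation's data.

```python
import math

def gwr_point(x0, y0, pts, xs, ys, bandwidth):
    """Fit a locally weighted simple regression y = a + b*x, with
    observations down-weighted by a Gaussian kernel on their distance
    from the regression point (x0, y0)."""
    w = [math.exp(-((px - x0) ** 2 + (py - y0) ** 2) / (2 * bandwidth ** 2))
         for px, py in pts]
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, xs)) / sw
    my = sum(wi * yi for wi, yi in zip(w, ys)) / sw
    b = (sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, xs, ys))
         / sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, xs)))
    return my - b * mx, b

# Invented data: the predictor-response slope is 1 in the "west" cluster
# and 2 in the "east" cluster, so the local coefficient varies in space.
pts = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 0), (10, 1), (11, 0), (11, 1)]
xs  = [1, 2, 3, 4, 1, 2, 3, 4]
ys  = [1, 2, 3, 4, 2, 4, 6, 8]

_, b_west = gwr_point(0.5, 0.5, pts, xs, ys, bandwidth=1.0)
_, b_east = gwr_point(10.5, 0.5, pts, xs, ys, bandwidth=1.0)
print(round(b_west, 2), round(b_east, 2))
```

This non-stationarity of the coefficient (here, the slope shifting between the two clusters) is exactly what an ordinary global regression cannot capture.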

Relevance:

100.00%

Publisher:

Abstract:

With hundreds of millions of users reporting locations and embracing mobile technologies, Location Based Services (LBSs) are raising new challenges. In this dissertation, we address three emerging problems in location services, where geolocation data plays a central role. First, to handle the unprecedented growth of generated geolocation data, existing location services rely on geospatial database systems. However, their inability to leverage combined geographical and textual information in analytical queries (e.g. spatial similarity joins) remains an open problem. To address this, we introduce SpsJoin, a framework for computing spatial set-similarity joins. SpsJoin handles combined similarity queries that involve textual and spatial constraints simultaneously. LBSs use this system to tackle different types of problems, such as deduplication, geolocation enhancement and record linkage. We define the spatial set-similarity join problem in a general case and propose an algorithm for its efficient computation. Our solution utilizes parallel computing with MapReduce to handle scalability issues in large geospatial databases. Second, applications that use geolocation data are seldom concerned with ensuring the privacy of participating users. To motivate participation and address privacy concerns, we propose iSafe, a privacy preserving algorithm for computing safety snapshots of co-located mobile devices as well as geosocial network users. iSafe combines geolocation data extracted from crime datasets and geosocial networks such as Yelp. In order to enhance iSafe's ability to compute safety recommendations, even when crime information is incomplete or sparse, we need to identify relationships between Yelp venues and crime indices at their locations. To achieve this, we use SpsJoin on two datasets (Yelp venues and geolocated businesses) to find venues that have not been reviewed and to further compute the crime indices of their locations. 
Our results show a statistically significant dependence between location crime indices and Yelp features. Third, review-centered LBSs (e.g., Yelp) are increasingly becoming targets of malicious campaigns that aim to bias the public image of the represented businesses. Although Yelp actively attempts to detect and filter fraudulent reviews, our experiments showed that Yelp is still vulnerable. Fraudulent LBS information also impacts the ability of iSafe to provide correct safety values. We take steps toward addressing this problem by proposing SpiDeR, an algorithm that takes advantage of the richness of information available in Yelp to detect abnormal review patterns. We propose a fake venue detection solution that applies SpsJoin on Yelp and U.S. housing datasets. We validate the proposed solutions using ground truth data extracted by our experiments and reviews filtered by Yelp.
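The combined textual-and-spatial predicate that SpsJoin evaluates can be sketched naively in a few lines of Python; the scalable MapReduce computation is the dissertation's contribution, and the records below are invented for illustration.

```python
import math

def jaccard(a, b):
    """Set-based textual similarity between two token sets."""
    return len(a & b) / len(a | b)

def spatial_setsim_join(left, right, sim_min, dist_max):
    """Naive spatial set-similarity join: emit (id, id) pairs satisfying
    BOTH the textual-similarity and the spatial-distance constraint.
    This nested loop only illustrates the join semantics; SpsJoin's point
    is computing it at scale with MapReduce."""
    return [(lid, rid)
            for lid, ltok, lpt in left
            for rid, rtok, rpt in right
            if math.dist(lpt, rpt) <= dist_max
            and jaccard(ltok, rtok) >= sim_min]

# Invented records: (id, token set, (x, y)), e.g. venues vs. businesses.
venues = [("v1", {"joes", "pizza"}, (0.0, 0.0))]
businesses = [("b1", {"joes", "pizza", "inc"}, (0.0001, 0.0)),
              ("b2", {"joes", "pizza"}, (5.0, 5.0))]
print(spatial_setsim_join(venues, businesses, sim_min=0.5, dist_max=0.01))
```

"b1" matches because it is both textually similar and co-located, while the textually identical but distant "b2" is rejected: this is the deduplication/record-linkage use case the abstract describes.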

Relevance:

100.00%

Publisher:

Abstract:

In knowledge technology work, as expressed by the scope of this conference, there are a number of communities, each uncovering new methods, theories, and practices. The Library and Information Science (LIS) community is one such community. This community, through tradition and innovation, theories and practice, organizes knowledge and develops knowledge technologies formed by iterative research hewn to the values of equal access and discovery for all. The Information Modeling community is another contributor to knowledge technologies. It concerns itself with the construction of symbolic models that capture the meaning of information and organize it in ways that are computer-based, but human understandable. A recent paper that examines certain assumptions in information modeling builds a bridge between these two communities, offering a forum for a discussion on common aims from a common perspective. In a June 2000 article, Parsons and Wand separate classes from instances in information modeling in order to free instances from what they call the “tyranny” of classes. They attribute a number of problems in information modeling to inherent classification – or the disregard for the fact that instances can be conceptualized independent of any class assignment. By faceting instances from classes, Parsons and Wand strike a sonorous chord with classification theory as understood in LIS. In the practice community and in the publications of LIS, faceted classification has shifted the paradigm of knowledge organization theory in the twentieth century. Here, with the proposal of inherent classification and the resulting layered information modeling, a clear line joins both the LIS classification theory community and the information modeling community. Both communities have their eyes turned toward networked resource discovery, and with this conceptual conjunction a new paradigmatic conversation can take place. 
Parsons and Wand propose that the layered information model can facilitate schema integration, schema evolution, and interoperability. These three spheres in information modeling have their own connotations, but are not distant from the aims of classification research in LIS. In this new conceptual conjunction, established by Parsons and Wand, information modeling, through the layered information model, can expand the horizons of classification theory beyond LIS, promoting a cross-fertilization of ideas on the interoperability of subject access tools like classification schemes, thesauri, taxonomies, and ontologies. This paper examines the common ground between the layered information model and faceted classification, establishing a vocabulary and outlining some common principles. It then turns to the issue of schema, the horizons of conventional classification, and the differences between Information Modeling and Library and Information Science. Finally, a framework is proposed that deploys an interpretation of the layered information modeling approach in a knowledge technologies context. In order to design subject access systems that will integrate, evolve and interoperate in a networked environment, knowledge organization specialists must consider a semantic class independence like the one Parsons and Wand propose for information modeling.
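One way to make the class/instance separation concrete is to model instances as plain property bags and classes as predicates layered on top, so the same instance can fall under several independent facet schemes without being "owned" by any class. The Python sketch below is one illustration of that reading, not Parsons and Wand's formalism; every name, record, and predicate in it is invented.

```python
# Instances exist on their own, carrying only properties.
instances = [
    {"title": "Holdfast CT atlas", "medium": "print", "subject": "biology"},
    {"title": "WCPS primer", "medium": "digital", "subject": "computing"},
]

# Classes live in a separate layer: named predicates grouped into
# independent facet schemes, applied to instances after the fact.
by_medium = {"digital": lambda i: i["medium"] == "digital",
             "print":   lambda i: i["medium"] == "print"}
by_subject = {"science": lambda i: i["subject"] in ("biology", "computing")}

def classify(instance, scheme):
    """Return the classes in a scheme whose predicates the instance satisfies."""
    return {name for name, pred in scheme.items() if pred(instance)}

print(classify(instances[1], by_medium))  # membership is assigned, not inherent
```

Because classification is computed rather than baked into the instance, a scheme can evolve (predicates added or changed) without touching the instances, which is the flavor of schema evolution and interoperability discussed above.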

Relevance:

100.00%

Publisher:

Abstract:

A large percentage of Vanier College's technology students do not attain their College degrees within the scheduled three years of their program. A closer investigation of the problem revealed that in many of these cases the students had completed all of their program's professional courses but not all of the required English and/or Humanities courses. Fortunately, most of these students do extend their stay at the college for the one or more semesters required for graduation, although some choose to go on into the workforce without returning to complete the missing English and/or Humanities courses and without their College degrees. The purpose of this research was to discover whether there was any significant measure of association between a student's family linguistic background, family cultural background, high school average, and/or College English Placement Test results and his or her likelihood of succeeding in his or her English and/or Humanities courses within the scheduled three years of the program. Because of demographic differences between the 'hard' and 'soft' technologies (in student population, more specifically gender ratios and average student ages in specific programs) and program differences (in writing requirements and the types of practical skill activities required), and in order to have a more uniform sample, the research was limited to the hard technologies, where students work hands-on with hardware and/or computers and tend to have overall low research and writing requirements. Based on a review of current literature and observations made in one of the hard technology programs at Vanier College, eight research questions were developed.
These questions were designed to examine different aspects of success in the English and Humanities courses, such as failure and completion rates and the number of courses remaining after the end of the fifth semester, and as well to examine how the students assessed their ability to communicate in English. The eight research questions were broken down into a total of 54 hypotheses. The high number of hypotheses was required to address a total of seven independent variables: primary home language, high school language of instruction, student's place of birth (Canada, not Canada), student's parents' place of birth (both born in Canada, not both born in Canada), high school average and English placement level (as a result of the College English Entry Test); and eleven dependent variables: number of English courses completed, number of English courses failed, whether all English courses were completed by the end of the 5th semester (yes, no), number of Humanities courses completed, number of Humanities courses failed, whether all Humanities courses were completed by the end of the 5th semester (yes, no), the total number of English and Humanities courses left, and the students' assessments of their ability to speak, read and write in English. The data required to address the hypotheses were collected from two sources: from the students themselves and from the College. Fifth and sixth semester students from the Building Engineering Systems, Computer and Digital Systems, Computer Science and Industrial Electronics Technology programs were surveyed to collect personal information, including family cultural and linguistic history and current language usage, high school language of instruction, perceived fluency in speaking, reading and writing in English, and perceived difficulty in completing English and Humanities courses.
The College was able to provide current academic information on each of the students, including copies of college program planners and transcripts, and high school transcripts for students who had attended a high school in Quebec. Quantitative analyses were done on the data using the SPSS statistical analysis program. Of the fifty-four hypotheses analysed, in fourteen cases the results supported the research hypotheses; in the forty other cases the null hypotheses had to be accepted. One of the findings was a strong significant association between a student's primary home language and place of birth and his or her perception of his or her ability to communicate in English (speak, read, and write), signifying that both students whose primary home language was not English and students who were not born in Canada considered themselves, on average, to be weaker in these skills than did students whose primary home language was English. Although this finding was noteworthy, the two most significant findings were the association between a student's English entry placement level and the number of English courses failed, and the association between the parents' place of birth and the student's likelihood of succeeding in both his or her English and Humanities courses. According to the research results, the mean number of English courses failed by students placed in the lowest entry level of College English was significantly different from that of students placed in any of the other entry-level English courses. In this sample, students who were placed in the lowest entry level of College English failed, on average, at least three times as many English courses as those placed in any of the other English entry-level courses. These results are significant enough that they will be brought to the attention of the appropriate College administration.
The results of this research also appeared to indicate that the most significant determining factor in a student's likelihood of completing his or her English and Humanities courses is his or her parents' place of birth (both born in Canada or not both born in Canada). Students who had at least one parent who was not born in Canada would, on average, fail a significantly higher number of English courses, be significantly more likely to still have at least one English course left to complete by the end of the 5th semester, fail a significantly higher number of Humanities courses, be significantly more likely to still have at least one Humanities course to complete by the end of the 5th semester, and have significantly more combined English and Humanities courses to complete at the end of their 5th semester than students with both parents born in Canada. This strong association between students' parents' place of birth and their likelihood of succeeding in their English and Humanities courses within the three years of their program appears to indicate that acculturation may be a more significant factor than either language or high school average, for which no significant association was found with any of the English and Humanities related dependent variables. Although the sample size for this research was only 60 students and more research needs to be conducted in this area, to see whether these results are supported with other groups within the College, these results are still significant. If the College can identify, at admission, the students who will be more likely to have difficulty in completing their English and Humanities courses, the College will have the opportunity to intercede during or before the first semester and offer these students the support they require in order to increase their chances of success in their education, whether that be classes or courses designed to meet their specific needs, special mentoring, tutoring or other forms of support.
With the necessary support, the identified students will have a greater opportunity of successfully completing their programs within the scheduled three years, while at the same time the College will have improved its capacity to meet the needs of its students.
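For readers unfamiliar with the association tests behind these 54 hypotheses, the sketch below computes a Pearson chi-square statistic for a hypothetical 2x2 table (parents' place of birth vs. on-time completion of English courses). The counts are invented purely for illustration; the dissertation itself ran its analyses in SPSS.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    (sum over cells of (observed - expected)^2 / expected)."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    total = sum(row)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical counts -- rows: both parents born in Canada / not;
# columns: completed all English courses on time / did not.
table = [[20, 5], [10, 25]]
print(round(chi_square_2x2(table), 2))  # → 15.43
```

With 1 degree of freedom, a statistic this far above the 3.84 critical value (at the 0.05 level) would indicate a significant association; whether the real data reach that threshold is, of course, an empirical question the dissertation answers with its own numbers.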

Relevance:

100.00%

Publisher:

Abstract:

This study aimed to survey farmers' knowledge and practices on the management of pastures, stocking rates and markets of meat goat-producing enterprises within New South Wales and Queensland, Australia. An interview-based questionnaire was conducted on properties that derived a significant proportion of their income from goats. The survey covered 31 landholders with a total land area of 567 177 ha and a reported total of 160 010 goats. A total of 55% (17/31) of producers were involved in both opportunistic harvesting and commercial goat operations, and 45% (14/31) were specialised seedstock producers. Goats were the most important livestock enterprise on 55% (17/31) of surveyed properties. Stocking rate varied considerably (0.3–9.3 goats/ha) within and across surveyed properties and was found to be negatively associated with property size and positively associated with rainfall. Overall, 81% (25/31) of producers reported that the purpose of running goats on their properties was to target international markets. Producers also cited the importance of targeting markets as a way to increase profitability. Fifty-three percent of producers were located over 600 km from a processing plant, and the high cost of freight can limit the continuity of goats supplied to abattoirs. Fencing was an important issue for goat farmers, with many producers acknowledging that this could potentially add to the capital costs associated with better goat management and production. Producers in the pastoral regions appear to have a low investment in pasture development, and opportunistic goat harvesting appears to be an important source of income.