849 results for Computer Science, theory and methods
Abstract:
This workshop will provide a snapshot of Bourdieu's sociology. Recognising Bourdieu's work as a powerful theoretical instrument for examining the reproduction of social orders and cultural values, the workshop will first discuss the core concepts of habitus, capital, and field, the foundational triad of Bourdieu's sociology. Although Bourdieu's original work was built partly on quantitative studies, his sociology has been used largely qualitatively in education research. In contrast to the bulk of extant research, the workshop will, second, showcase some quantitative and mixed-methods research that uses a Bourdieusian framework. Mindful that such a framework helps in understanding social practice at the macro level, the workshop will then attempt to think through the macro and the micro together by weaving Bourdieu's sociology with Garfinkel's ethnomethodology. The workshop will conclude with some reflections and discussion on how to better realise the full value of Bourdieu in education research.
Abstract:
I agree with Costanza and Finkelstein (2015) that it is futile to invest further in the study of generational differences in the work context, owing to a lack of appropriate theory and methods. The key problem with the generations concept is that splitting continuous variables such as age or time into a few discrete units involves arbitrary cutoffs and atheoretical groupings of individuals (e.g., stating that all people born between the early 1960s and early 1980s belong to Generation X). As methodologists have noted, this procedure leads to a loss of information about individuals and reduced statistical power (MacCallum, Zhang, Preacher, & Rucker, 2002). Given these conceptual and methodological limitations, I regard it as very difficult, if not impossible, to develop a “comprehensive theory of generations” (Costanza & Finkelstein, 2015, p. 20) and to rigorously examine generational differences at work in empirical studies.
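The power loss caused by binning a continuous predictor can be demonstrated with a short simulation. The sketch below is illustrative only and is not taken from MacCallum et al. (2002); the variable names, effect size and the two "generation" cutoffs (1965 and 1981) are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500

# Continuous predictor: birth year, uniformly spread over 1950-1995.
birth_year = rng.uniform(1950, 1995, n)

# Outcome depends weakly and linearly on birth year, plus noise.
outcome = 0.02 * (birth_year - 1950) + rng.normal(0, 1, n)

# Association using the continuous predictor.
r_cont, p_cont = stats.pearsonr(birth_year, outcome)

# The same data after forcing birth year into three arbitrary "generations".
generation = np.digitize(birth_year, [1965, 1981])  # codes 0, 1, 2
r_binned, p_binned = stats.pearsonr(generation, outcome)

print(f"continuous predictor: r = {r_cont:.3f}, p = {p_cont:.4f}")
print(f"binned predictor:     r = {r_binned:.3f}, p = {p_binned:.4f}")
# The binned analysis typically shows a smaller correlation and a larger
# p-value, i.e. a loss of information and of statistical power.
```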
Abstract:
Mode of access: Internet.
Abstract:
"AEC Contract AT930-1)-2137."
Abstract:
Bibliographical footnotes. Bibliography: p. 360.
Abstract:
In this paper, we present a novel indexing technique called Multi-scale Similarity Indexing (MSI) to index an image's multiple features in a single one-dimensional structure. For both the text and visual feature spaces, the similarity between a point and a local partition's center in the individual space is used as the indexing key, where similarity values from different features are distinguished by different scales. A single indexing tree can then be built on these keys. Based on the property that relevant images have similar similarity values to the center of the same local partition in any feature space, a certain number of irrelevant images can be quickly pruned using the triangle inequality on the indexing keys. To alleviate the "dimensionality curse" that affects high-dimensional structures, we propose a new technique called Local Bit Stream (LBS). LBS transforms an image's text and visual feature representations into simple, uniform and effective bit stream (BS) representations based on the local partition's center. Such BS representations are small in size and fast to compare, since only bit operations are involved. By comparing the bits common to two BSs, most irrelevant images can be immediately filtered. Our extensive experiments show that a single one-dimensional index over multiple features greatly improves on maintaining multiple indices over those features. Our LBS method outperforms sequential scan on high-dimensional data by an order of magnitude.
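To make the two mechanisms described above concrete, here is a minimal sketch (not the authors' code): a one-dimensional key built from the distance to a local partition center, triangle-inequality pruning on those keys, and a toy bit-stream comparison. The function names, the per-feature-space scale offset and the one-bit quantization rule are illustrative assumptions rather than the paper's actual definitions.

```python
import numpy as np

def index_key(feature_vec, center, scale_offset):
    """One-dimensional key: distance from the partition center, shifted by a
    per-feature-space offset so keys from different spaces do not collide."""
    return scale_offset + np.linalg.norm(feature_vec - center)

def can_prune(query_key, image_key, radius):
    """Triangle inequality: if |d(q,c) - d(x,c)| > r, x cannot lie within r of q."""
    return abs(query_key - image_key) > radius

def bit_stream(feature_vec, center):
    """Toy Local Bit Stream: one bit per dimension relative to the center."""
    return np.packbits(feature_vec > center)

def common_bits(bs_a, bs_b):
    """Count matching bits (padding bits, if any, also count as common)."""
    return int(np.sum(np.unpackbits(~(bs_a ^ bs_b))))

# Example: two 8-dimensional feature vectors around a shared partition center.
center = np.zeros(8)
q, x = np.full(8, 0.2), np.full(8, 0.25)
print(index_key(q, center, 0.0),
      common_bits(bit_stream(q, center), bit_stream(x, center)))
```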
Abstract:
A number of integrations of the state-based specification language Object-Z and the process algebra CSP have been proposed in recent years. In developing such integrations, a number of semantic decisions have to be made. In particular, what happens when an operation's precondition is not satisfied? Is the operation blocked, i.e., prevented from occurring, or can it occur with an undefined result? Also, are outputs from operations angelic, satisfying the environment's constraints on them, or are they demonic and not influenced by the environment at all? In this paper we discuss the differences between the models, and show that by adopting a blocking model of preconditions together with an angelic model of outputs one can specify systems at higher levels of abstraction.
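As an informal illustration of the precondition question, the sketch below contrasts a blocking and a non-blocking reading of a hypothetical bounded counter whose increment operation has the precondition count < limit. The paper itself works with Object-Z and CSP semantics, not executable code, so this Python analogy is only meant to convey the distinction.

```python
import random

class BlockingCounter:
    """Blocking model: the operation is refused when its precondition fails."""
    def __init__(self, limit):
        self.count, self.limit = 0, limit

    def increment(self):
        if self.count >= self.limit:
            raise RuntimeError("operation blocked: precondition not satisfied")
        self.count += 1
        return self.count

class NonBlockingCounter:
    """Non-blocking model: the operation may still occur, but with an undefined result."""
    def __init__(self, limit):
        self.count, self.limit = 0, limit

    def increment(self):
        if self.count >= self.limit:
            # Outside the precondition the outcome is unconstrained ("undefined").
            self.count = random.randint(-10**6, 10**6)
        else:
            self.count += 1
        return self.count
```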
Abstract:
Multiresolution (or multi-scale) techniques make it possible for Web-based GIS applications to access large datasets. The performance of such systems relies on data transmission over the network and on multiresolution query processing. In the literature the latter has received little research attention so far, and the existing methods are not capable of processing large datasets. In this paper, we aim to improve multiresolution query processing in an online environment. A cost model for such queries is proposed first, followed by three strategies for its optimization. Significant theoretical improvement can be observed when comparing against available methods. The application of these strategies is also discussed, and similar performance enhancements can be expected if they are implemented in online GIS applications.
Abstract:
Well-understood methods exist for developing programs from given specifications. A formal method identifies proof obligations at each development step: if all such proof obligations are discharged, a precisely defined class of errors can be excluded from the final program. For a class of closed systems, such methods offer a gold standard against which less formal approaches can be measured. For open systems (those which interact with the physical world), the task of obtaining the program specification can be as challenging as the task of deriving the program. And, when a system of this class must tolerate certain kinds of unreliability in the physical world, it is still more challenging to reach confidence that the specification obtained is adequate. We argue that widening the notion of software development to include specifying the behaviour of the relevant parts of the physical world gives a way to derive the specification of a control system and also to record precisely the assumptions being made about the world outside the computer.
Abstract:
In recent years, it has been observed that software clones and plagiarism are becoming an increasing threat to creativity. Clones are the result of copying and using others' work. According to the Merriam-Webster dictionary, "A clone is one that appears to be a copy of an original form"; it is synonymous with duplicate. Clones lead to code redundancy, but not all redundant code is a clone. On the basis of this background, and in order to safeguard original ideas and discourage intentional code duplication that passes off others' work as one's own, software clone detection should be emphasised more. The objective of this paper is to review methods for clone detection, to apply those methods to estimate the extent of plagiarism in Master-level computer science theses at Swedish universities, and to analyse the results. The remainder of the paper discusses software plagiarism detection using a data analysis technique, followed by statistical analysis of the results. Plagiarism is the act of stealing the ideas and words of another person and passing them off as one's own. Using this data analysis technique, samples (Master-level computer science thesis reports) were taken from various Swedish universities and processed with the Ephorus anti-plagiarism detection software. Ephorus gives the percentage of plagiarism for each thesis document; from these results, statistical analysis was carried out using the Minitab software. The results show a very low extent of plagiarism among the Swedish universities, which suggests that plagiarism is not a threat to Sweden's standard of education in computer science. This paper is based on data analysis, intelligence techniques, the Ephorus plagiarism detection tool and statistical analysis in Minitab.
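Since Ephorus and Minitab are commercial tools, the sketch below merely illustrates, on made-up numbers, the kind of descriptive statistics the abstract refers to once each thesis has been assigned a plagiarism percentage.

```python
import statistics

# Hypothetical per-thesis plagiarism percentages of the kind a tool such as
# Ephorus might report; the values are invented for illustration only.
plagiarism_pct = [2.0, 0.5, 3.1, 1.2, 0.0, 4.8, 2.6, 1.9, 0.7, 3.3]

print("mean   :", statistics.mean(plagiarism_pct))
print("median :", statistics.median(plagiarism_pct))
print("stdev  :", statistics.stdev(plagiarism_pct))
print("max    :", max(plagiarism_pct))
```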
Abstract:
This paper analyzes the relationship among research collaboration, number of documents, and number of citations in computer science research activity. It analyzes the number of documents and citations and how they vary by number of authors. They are also analyzed (according to author set cardinality) under different circumstances, that is, when documents are written in different types of collaboration, when documents are published in different document types, when documents are published in different computer science subdisciplines, and, finally, when documents are published in journals with different impact factor quartiles. To investigate the above relationships, this paper analyzes the publications listed in the Web of Science and produced between 2000 and 2009 by active Spanish university professors working in the computer science field. Analyzing all documents, we show that the highest percentage of documents is published by three authors, whereas single-authored documents account for the lowest percentage. By number of citations, there is no positive association between author cardinality and citation impact. Statistical tests show that documents written by two authors receive more citations per document and year than documents published by more authors. In contrast, the results do not show statistically significant differences between documents published by two authors and by one author. The research findings suggest that international collaboration results, on average, in publications with higher citation rates than national and institutional collaboration. We also find the expected differences in citation rates between journals and conferences, across different computer science subdisciplines, and across journal quartiles. Finally, our impression is that the collaborative level (number of authors per document) will increase in the coming years, and documents published by three or four authors will be the trend in the computer science literature.
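A minimal sketch of the kind of analysis described above, computing citations per document and year and grouping by author count; the column names and records are hypothetical stand-ins, whereas the actual study draws its data from the Web of Science.

```python
import pandas as pd

# Hypothetical records: one row per document, with author count,
# publication year and citation count (made-up data for illustration).
docs = pd.DataFrame({
    "n_authors": [1, 2, 2, 3, 3, 3, 4, 5],
    "year":      [2002, 2003, 2005, 2004, 2006, 2007, 2008, 2009],
    "citations": [3, 14, 9, 11, 7, 16, 5, 2],
})

# Citations per document and year, as used in the comparison by author count.
reference_year = 2010
docs["cites_per_year"] = docs["citations"] / (reference_year - docs["year"])

summary = docs.groupby("n_authors")["cites_per_year"].agg(["count", "mean"])
print(summary)
```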