914 results for object-oriented languages
Abstract:
The well-known difficulties students exhibit when learning to program are often characterised as either difficulties in understanding the problem to be solved or difficulties in devising and coding a computational solution. It would therefore be helpful to understand which of these gives students the greatest trouble. Unit testing is a mainstay of large-scale software development and maintenance. A unit test suite serves not only for acceptance testing, but is also a form of requirements specification, as exemplified by agile programming methodologies in which the tests are developed before the corresponding program code. In order to better understand students’ conceptual difficulties with programming, we conducted a series of experiments in which students were required to write both unit tests and program code for non-trivial problems. Their code and tests were then assessed separately for correctness and ‘coverage’, respectively. The results allowed us to directly compare students’ abilities to characterise a computational problem, as a unit test suite, and develop a corresponding solution, as executable code. Since understanding a problem is a pre-requisite to solving it, we expected students’ unit testing skills to be a strong predictor of their ability to successfully implement the corresponding program. Instead, however, we found that students’ testing abilities lag well behind their coding skills.
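As a purely illustrative example of the kind of task involved (the problem, class and method names here are invented, not taken from the study's actual exercises), a student asked to characterise a problem as a unit test suite before coding might produce a JUnit-style sketch like the following:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical exercise of the kind used in such experiments: the tests
// characterise the problem ("count whitespace-separated words") and are
// written before the implementation they exercise.
class WordCountTest {

    @Test
    void emptyStringHasNoWords() {
        assertEquals(0, Text.wordCount(""));
    }

    @Test
    void singleWord() {
        assertEquals(1, Text.wordCount("hello"));
    }

    @Test
    void repeatedSeparatorsAreNotWords() {
        // Edge cases like leading, trailing and repeated spaces are what a
        // 'coverage' assessment of the test suite would reward.
        assertEquals(3, Text.wordCount("  one two   three "));
    }
}

// One possible implementation, written after the tests above.
class Text {
    static int wordCount(String s) {
        String trimmed = s.trim();
        return trimmed.isEmpty() ? 0 : trimmed.split("\\s+").length;
    }
}
```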
Abstract:
Lyngbya majuscula is a cyanobacterium (blue-green algae) occurring naturally in tropical and subtropical coastal areas worldwide. Deception Bay, in Northern Moreton Bay, Queensland, has a history of Lyngbya blooms, and forms a case study for this investigation. The South East Queensland (SEQ) Healthy Waterways Partnership, a collaboration between government, industry, research and the community, was formed to address issues affecting the health of the river catchments and waterways of South East Queensland. The Partnership coordinated the Lyngbya Research and Management Program (2005-2007), which culminated in a Coastal Algal Blooms (CAB) Action Plan for harmful and nuisance algal blooms, such as Lyngbya majuscula. This first phase of the project was predominantly of a scientific nature and also facilitated the collection of additional data to better understand Lyngbya blooms. The second phase of this project, SEQ Healthy Waterways Strategy 2007-2012, is now underway to implement the CAB Action Plan and as such is more management focussed. As part of the first phase of the project, a Science model for the initiation of a Lyngbya bloom was built using Bayesian Networks (BN). The structure of the Science Bayesian Network was built by the Lyngbya Science Working Group (LSWG), which was drawn from diverse disciplines. The BN was then quantified with annual data and expert knowledge. Scenario testing confirmed the expected temporal nature of bloom initiation and it was recommended that the next version of the BN be extended to take this into account. Elicitation for this BN thus occurred at three levels: design, quantification and verification. The first level involved construction of the conceptual model itself, definition of the nodes within the model and identification of sources of information to quantify the nodes. The second level included elicitation of expert opinion and representation of this information in a form suitable for inclusion in the BN. The third and final level concerned the specification of scenarios used to verify the model. The second phase of the project provides the opportunity to update the network with the newly collected detailed data obtained during the previous phase of the project. Specifically, the temporal nature of Lyngbya blooms is of interest. Management efforts need to be directed to the periods when the Bay is most vulnerable to bloom initiation. To model the temporal aspects of Lyngbya, we are using Object Oriented Bayesian networks (OOBN) to create ‘time slices’ for each of the periods of interest during the summer. OOBNs provide a framework to simplify knowledge representation and facilitate reuse of nodes and network fragments. An OOBN is more hierarchical than a traditional BN, with any sub-network able to contain other sub-networks. Connectivity between OOBNs is an important feature and allows information flow between the time slices. This study demonstrates a more sophisticated use of expert information within Bayesian networks, combining expert knowledge with data (categorized using expert-defined thresholds) within an expert-defined model structure. Based on the results from the verification process, the experts are able to target areas requiring greater precision and those exhibiting temporal behaviour. The time slices incorporate the data for that time period for each of the temporal nodes (instead of using the annual data from the previous static Science BN) and include lag effects to allow the effect from one time slice to flow to the next time slice.
We demonstrate a concurrent steady increase in the probability of initiation of a Lyngbya bloom and conclude that the inclusion of temporal aspects in the BN model is consistent with the perceptions of Lyngbya behaviour held by the stakeholders. This extended model provides a more accurate representation of the increased risk of algal blooms in the summer months and shows that the opinions elicited to inform a static BN can be readily extended to a dynamic OOBN, providing more comprehensive information for decision makers.
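The lag mechanism described above can be illustrated with a minimal, hypothetical sketch (all values below are invented and are not taken from the Lyngbya network): the belief about bloom initiation computed for one summer time slice is carried forward as the prior for the next slice, so risk can accumulate across the season.

```java
// Minimal, hypothetical sketch of lag effects between time slices in a
// dynamic/object-oriented Bayesian network. All numbers are invented.
public class TimeSliceSketch {

    public static void main(String[] args) {
        // P(bloom) carried forward from the previous slice (lag effect).
        double priorBloom = 0.05;
        // Per-slice evidence strength for three summer time slices,
        // e.g. reflecting nutrient and light conditions in that period.
        double[] sliceEvidence = {0.3, 0.5, 0.7};

        for (int t = 0; t < sliceEvidence.length; t++) {
            // Noisy-OR style combination: a bloom can be initiated either
            // by conditions within this slice or by the lagged state
            // carried over from the previous slice.
            double pBloom = 1.0 - (1.0 - priorBloom) * (1.0 - sliceEvidence[t]);
            System.out.printf("Slice %d: P(bloom initiation) = %.3f%n", t + 1, pBloom);
            priorBloom = pBloom; // lag: this slice's belief seeds the next
        }
    }
}
```

Run on these invented numbers, the sketch shows the steadily increasing probability of bloom initiation across the summer slices that the extended model is intended to capture.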
Abstract:
This paper presents a vulnerability within the generic object oriented substation event (GOOSE) communication protocol. It describes an exploit of the vulnerability and proposes a number of attack variants. The attacks send GOOSE frames containing higher status numbers to the receiving intelligent electronic device (IED). This prevents legitimate GOOSE frames from being processed and effectively causes a hijacking of the communication channel, which can be used to implement a denial-of-service (DoS) attack or manipulate the subscriber (unless a status number roll-over occurs). The authors refer to this attack as a poisoning of the subscriber. A number of GOOSE poisoning attacks are evaluated experimentally on a test bed and demonstrated to be successful.
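The poisoning effect can be illustrated with a minimal, hypothetical sketch of subscriber state handling (this is not the actual IED firmware logic from the paper): a subscriber that discards frames whose status number is not greater than the last accepted one will, after accepting a spoofed frame with a very high status number, silently drop subsequent legitimate frames.

```java
// Illustrative sketch of GOOSE subscriber state handling; not taken from
// any real IED implementation. It shows why a spoofed frame carrying a
// very high status number (stNum) "poisons" the subscriber: later
// legitimate frames carry lower stNum values and are rejected as stale.
public class GooseSubscriberSketch {

    private long lastStNum = 0;

    /** Returns true if the frame is accepted and processed. */
    public boolean onFrame(long stNum) {
        if (stNum <= lastStNum) {
            return false; // treated as old or duplicate, silently dropped
        }
        lastStNum = stNum;
        return true;
    }

    public static void main(String[] args) {
        GooseSubscriberSketch sub = new GooseSubscriberSketch();
        sub.onFrame(41);               // legitimate publisher, accepted
        sub.onFrame(4_000_000_000L);   // attacker's poisoned frame, accepted
        boolean ok = sub.onFrame(42);  // next legitimate frame
        System.out.println("Legitimate frame accepted? " + ok); // false: DoS
    }
}
```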
Abstract:
A decision-making framework for image-guided radiotherapy (IGRT) is being developed using a Bayesian Network (BN) to graphically describe, and probabilistically quantify, the many interacting factors that are involved in this complex clinical process. Outputs of the BN will provide decision-support for radiation therapists to assist them to make correct inferences relating to the likelihood of treatment delivery accuracy for a given image-guided set-up correction. The framework is being developed as a dynamic object-oriented BN, allowing for complex modelling with specific sub-regions, as well as representation of the sequential decision-making and belief updating associated with IGRT. A prototype graphic structure for the BN was developed by analysing IGRT practices at a local radiotherapy department and incorporating results obtained from a literature review. Clinical stakeholders reviewed the BN to validate its structure. The BN consists of a sub-network for evaluating the accuracy of IGRT practices and technology. The directed acyclic graph (DAG) contains nodes and directional arcs representing the causal relationship between the many interacting factors such as tumour site and its associated critical organs, technology and technique, and inter-user variability. The BN was extended to support on-line and off-line decision-making with respect to treatment plan compliance. Following conceptualisation of the framework, the BN will be quantified. It is anticipated that the finalised decision-making framework will provide a foundation to develop better decision-support strategies and automated correction algorithms for IGRT.
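As a purely illustrative, invented fragment of such a DAG, a few interacting factors and their causal arcs might be represented as below; the actual network structure was defined and validated by the clinical stakeholders.

```java
import java.util.List;
import java.util.Map;

// Invented fragment of a DAG like the one described above: each key is a
// parent node and the list holds the nodes its directed arcs point to.
// The real IGRT network's nodes and arcs were defined clinically.
public class IgrtDagSketch {
    public static void main(String[] args) {
        Map<String, List<String>> arcs = Map.of(
            "Tumour site", List.of("Critical organs at risk", "Imaging technique"),
            "Imaging technique", List.of("Set-up correction accuracy"),
            "Inter-user variability", List.of("Set-up correction accuracy"),
            "Set-up correction accuracy", List.of("Treatment delivery accuracy"));

        arcs.forEach((parent, children) ->
            children.forEach(child -> System.out.println(parent + " -> " + child)));
    }
}
```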
Abstract:
Recent studies have linked the ability of novice (CS1) programmers to read and explain code with their ability to write code. This study extends earlier work by asking CS2 students to explain object-oriented data structures problems that involve recursion. Results show a strong correlation between ability to explain code at an abstract level and performance on code writing and code reading test problems for these object-oriented data structures problems. The authors postulate that there is a common set of skills concerned with reasoning about programs that explains the correlation between writing code and explaining code. The authors suggest that an overly exclusive emphasis on code writing may be detrimental to learning to program. Non-code writing learning activities (e.g., reading and explaining code) are likely to improve student ability to reason about code and, by extension, improve student ability to write code. A judicious mix of code-writing and code-reading activities is recommended.
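An illustrative example of the kind of object-oriented, recursive code such studies ask students to explain "in plain English" (this fragment is invented, not taken from the study's instruments):

```java
// Illustrative recursive linked-list method of the kind used in
// 'explain in plain English' questions. A student explaining it at an
// abstract level would say "it counts the nodes in the list" rather
// than narrating each line in turn.
class Node {
    int value;
    Node next;

    Node(int value, Node next) {
        this.value = value;
        this.next = next;
    }

    static int size(Node head) {
        if (head == null) {
            return 0;               // base case: empty list
        }
        return 1 + size(head.next); // recursive case: one node plus the rest
    }
}
```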
Abstract:
Although there are many approaches for developing secure programs, they are not necessarily helpful for evaluating the security of a pre-existing program. Software metrics promise an easy way of comparing the relative security of two programs or assessing the security impact of modifications to an existing one. Most studies in this area focus on high-level source code, but this approach fails to take compiler-specific code generation into account. In this work we describe a set of object-oriented Java bytecode security metrics which are capable of assessing the security of a compiled program from the point of view of potential information flow. These metrics can be used to compare the security of programs or assess the effect of program modifications on security using a tool which we have developed to automatically measure the security of a given Java bytecode program in terms of the accessibility of distinguished ‘classified’ attributes.
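As a hypothetical, source-level analogue of what such metrics measure (the real metrics operate on compiled bytecode and also account for indirect information flows), one could count how many "classified" attributes a class exposes directly:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

// Hypothetical, source-level analogue of an accessibility metric for
// 'classified' attributes: count how many such fields are declared
// public and therefore directly readable by any other class. This is
// only an illustration of the idea, not the paper's bytecode metrics,
// which also track flows through methods.
public class ClassifiedAccessibilitySketch {

    static class PatientRecord {
        public String name;            // classified, publicly accessible
        private String diagnosis;      // classified, encapsulated

        public String getDiagnosis() { // indirect flow, ignored by this sketch
            return diagnosis;
        }
    }

    public static void main(String[] args) {
        int exposed = 0;
        for (Field f : PatientRecord.class.getDeclaredFields()) {
            if (Modifier.isPublic(f.getModifiers())) {
                exposed++;
            }
        }
        System.out.println("Publicly accessible attributes: " + exposed);
    }
}
```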
Abstract:
In attempting to build intelligent litigation support tools, we have moved beyond first generation, production rule legal expert systems. Our work supplements rule-based reasoning with case based reasoning and intelligent information retrieval. This research specifies an approach to the case based retrieval problem which relies heavily on an extended object-oriented / rule-based system architecture that is supplemented with causal background information. Machine learning techniques and a distributed agent architecture are used to help simulate the reasoning process of lawyers. In this paper, we outline our implementation of the hybrid IKBALS II Rule Based Reasoning / Case Based Reasoning system. It makes extensive use of an automated case representation editor and background information.
Abstract:
In attempting to build intelligent litigation support tools, we have moved beyond first generation, production rule legal expert systems. Our work integrates rule based and case based reasoning with intelligent information retrieval. When using the case based reasoning methodology, or in our case the specialisation of case based retrieval, we need to be aware of how to retrieve relevant experience. Our research, in the legal domain, specifies an approach to the retrieval problem which relies heavily on an extended object oriented/rule based system architecture that is supplemented with causal background information. We use a distributed agent architecture to help support the reasoning process of lawyers. Our approach to integrating rule based reasoning, case based reasoning and case based retrieval is contrasted to the CABARET and PROLEXS architectures which rely on a centralised blackboard architecture. We discuss in detail how our various cooperating agents interact, and provide examples of the system at work. The IKBALS system uses a specialised induction algorithm to induce rules from cases. These rules are then used as indices during the case based retrieval process. Because we aim to build legal support tools which can be modified to suit various domains rather than single purpose legal expert systems, we focus on principles behind developing legal knowledge based systems. The original domain chosen was the Accident Compensation Act 1989 (Victoria, Australia), which relates to the provision of benefits for employees injured at work. For various reasons, which are indicated in the paper, we changed our domain to that of the Credit Act 1984 (Victoria, Australia). This Act regulates the provision of loans by financial institutions. The rule based part of our system which provides advice on the Credit Act has been commercially developed in conjunction with a legal firm. We indicate how this work has led to the development of a methodology for constructing rule based legal knowledge based systems. We explain the process of integrating this existing commercial rule based system with the case based reasoning and retrieval architecture.
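A greatly simplified sketch of the general idea of inducing rules from cases and using them as retrieval indices is given below; the actual IKBALS induction algorithm, case representation and agent architecture are far richer than this, and the attribute names are invented.

```java
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Greatly simplified sketch: a "case" is a bag of attribute/value pairs,
// an "induced rule" is a predicate over those attributes, and retrieval
// uses the rule as an index to narrow the candidate cases.
public class CaseRetrievalSketch {

    record LegalCase(String name, Map<String, String> attributes) {}

    public static void main(String[] args) {
        List<LegalCase> caseBase = List.of(
            new LegalCase("Case A", Map.of("loanType", "personal", "disclosure", "absent")),
            new LegalCase("Case B", Map.of("loanType", "mortgage", "disclosure", "present")));

        // Hypothetical rule induced from past cases: "if disclosure is
        // absent, the case is relevant to a potential Credit Act breach".
        Predicate<LegalCase> inducedRule =
            c -> "absent".equals(c.attributes().get("disclosure"));

        List<LegalCase> retrieved = caseBase.stream()
            .filter(inducedRule)
            .collect(Collectors.toList());

        retrieved.forEach(c -> System.out.println("Retrieved: " + c.name()));
    }
}
```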
Abstract:
CAAS is a rule-based expert system, which provides advice on the Victorian Credit Act 1984. It is currently in commercial use, and has been developed in conjunction with a law firm. It uses an object-oriented hybrid reasoning approach. The system was initially prototyped using the expert system shell NExpert Object, and was then converted into the C++ language. In this paper we describe the advantages that this methodology has for both commercial and research development.
Abstract:
Approaches to art-practice-as-research tend to draw a distinction between the processes of creative practice and scholarly reflection. According to this template, the two sites of activity – studio/desk, work/writing, body/mind – form the ‘correlative’ entity known as research. Creative research is said to be produced by the navigation of world and thought: spaces that exist in a continual state of tension with one another. Either we have the studio tethered to brute reality while the desk floats free as a site for the fluid cross-pollination of texts and concepts. Or alternatively, the studio is characterized by the amorphous, intuitive play of forms and ideas, while the desk represents its cartography, mapping and fixing its various fluidities. In either case, the research status of art practice is figured as a fundamentally riven space. However, the nascent philosophy of Speculative Realism proposes a different ontology – one in which the space of human activity comprises its own reality, independent of human perception. The challenge it poses to traditional metaphysics is to rethink the world as if it were a real space. When applied to practice-led research, this reconceptualization challenges the creative researcher to consider creative research as a contiguous space – a topology where thinking and making are not dichotomous points but inflections in an amorphous and dynamic field. Instead of being subject to the vertical tension between earth and air, a topology of practice emphasizes its encapsulated, undulating reality – an agentive ‘object’ formed according to properties of connectedness, movement and differentiation. Taking the central ideas of Quentin Meillassoux and Graham Harman as a point of departure, this paper will provide a speculative account of the interplay of spatialities that characterise the author’s studio practice. In so doing, the paper will model the innovative methodological potential produced by the analysis of topological dimensions of the studio and the way they can be said to move beyond the ‘geo-critical’ divide.
Abstract:
Conventions of the studio presuppose the artist as the active agent, imposing his/her will upon and through objects that remain essentially inert. However, this characterisation of practice overlooks the complex object dynamics that underpin the art-making process. Far from passive entities, objects are resistant, ‘speaking back’ to the artist, impressing their will upon their surroundings. Objects stick to one another, fall over, drip, spill, spatter and chip one another. Objects support, dismantle, cover and transform one another. Objects are both the apparatus of the studio and its products. It can be argued that the work of art is as much shaped by objects as it is by human impulse. Within this alternate ontology, the artist becomes but one element in a constellation of objects. Drawing upon Graham Harman’s Object-Oriented Ontology and a selection of photographs of my studio processes, this practice-led paper will explore the notion of agentive objects and the ways in which the contemporary art studio can be reconsidered as a primary site for the production of new object relationships.
Abstract:
This thesis presents an interdisciplinary analysis of how models and simulations function in the production of scientific knowledge. The work is informed by three scholarly traditions: studies on models and simulations in philosophy of science, so-called micro-sociological laboratory studies within science and technology studies, and cultural-historical activity theory. Methodologically, I adopt a naturalist epistemology and combine philosophical analysis with a qualitative, empirical case study of infectious-disease modelling. This study has a dual perspective throughout the analysis: it specifies the modelling practices and examines the models as objects of research. The research questions addressed in this study are: 1) How are models constructed and what functions do they have in the production of scientific knowledge? 2) What is interdisciplinarity in model construction? 3) How do models become a general research tool and why is this process problematic? The core argument is that the mediating models as investigative instruments (cf. Morgan and Morrison 1999) take questions as a starting point, and hence their construction is intentionally guided. This argument applies the interrogative model of inquiry (e.g., Sintonen 2005; Hintikka 1981), which conceives of all knowledge acquisition as a process of seeking answers to questions. The first question addresses simulation models as Artificial Nature, which is manipulated in order to answer questions that initiated the model building. This account develops further the "epistemology of simulation" (cf. Winsberg 2003) by showing the interrelatedness of researchers and their objects in the process of modelling. The second question clarifies why interdisciplinary research collaboration is demanding and difficult to maintain. The nature of the impediments to disciplinary interaction is examined by introducing the idea of object-oriented interdisciplinarity, which provides an analytical framework to study the changes in the degree of interdisciplinarity, the tools and research practices developed to support the collaboration, and the mode of collaboration in relation to the historically mutable object of research. As my interest is in the models as interdisciplinary objects, the third research problem seeks to answer my question of how we might characterise these objects, what is typical for them, and what kind of changes happen in the process of modelling. Here I examine the tension between specified, question-oriented models and more general models, and suggest that the specified models form a group of their own. I call these Tailor-made models, in opposition to the process of building a simulation platform that aims at generalisability and utility for health policy. This tension also underlines the challenge of applying research results (or methods and tools) to discuss and solve problems in decision-making processes.
Abstract:
The loss and degradation of forest cover is currently a globally recognised problem. The fragmentation of forests is further affecting the biodiversity and well-being of ecosystems, in Kenya as elsewhere. This study focuses on two indigenous tropical montane forests in the Taita Hills in southeastern Kenya. The study is a part of the TAITA-project within the Department of Geography in the University of Helsinki. The study forests, Ngangao and Chawia, are studied by remote sensing and GIS methods. The main data includes black and white aerial photography from 1955 and true colour digital camera data from 2004. This data is used to produce aerial mosaics of the study areas. The land cover of these study areas is studied by visual interpretation, pixel-based supervised classification and object-oriented supervised classification. The change of the forest cover is studied with GIS methods using the visual interpretations from 1955 and 2004. Furthermore, the present state of the study forests is assessed with leaf area index and canopy closure parameters retrieved from hemispherical photographs as well as with additional, previously collected forest health monitoring data. The canopy parameters are also compared with textural parameters from the digital aerial mosaics. This study concludes that the classification of forest areas using true colour data is not an easy task, even though the digital aerial mosaics proved to be very accurate. The best classifications are still achieved with visual interpretation methods, as the accuracies of the pixel-based and object-oriented supervised classification methods are not satisfactory. According to the change detection of the land cover in the study areas, the area of indigenous woodland in both forests decreased between 1955 and 2004. However, in Ngangao, the overall woodland area has grown, mainly because of plantations of exotic species. In general, the land cover of both study areas is more fragmented in 2004 than in 1955. Although the forest area has decreased, the forests seem to have a more optimistic future than before. This is due to the increasing appreciation of the forest areas.
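As a toy illustration of the pixel-based supervised classification referred to above (the class means below are invented training statistics; a real workflow would use dedicated remote-sensing software and richer per-class statistics), a minimum-distance classifier over true-colour pixel values might look like this:

```java
// Minimal nearest-mean (minimum-distance) classifier over true-colour
// pixel values, as a toy stand-in for pixel-based supervised
// classification. The class means are invented training statistics.
public class PixelClassifierSketch {

    static final String[] CLASSES = {"indigenous woodland", "plantation", "other"};
    static final double[][] CLASS_MEANS = {   // mean R, G, B per class (invented)
        {40, 70, 35},
        {55, 95, 50},
        {120, 110, 90}
    };

    static String classify(double r, double g, double b) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < CLASS_MEANS.length; i++) {
            double[] m = CLASS_MEANS[i];
            // squared Euclidean distance in RGB space
            double d = Math.pow(r - m[0], 2) + Math.pow(g - m[1], 2) + Math.pow(b - m[2], 2);
            if (d < bestDist) {
                bestDist = d;
                best = i;
            }
        }
        return CLASSES[best];
    }

    public static void main(String[] args) {
        System.out.println(classify(45, 75, 38)); // nearest to "indigenous woodland"
    }
}
```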
Abstract:
Road transport and infrastructure are of fundamental importance to the developing world. Poor quality and inadequate coverage of roads, lack of maintenance operations and outdated road maps continue to hinder economic and social development in the developing countries. This thesis focuses on studying the present state of road infrastructure and its mapping in the Taita Hills, south-east Kenya. The study is included as a part of the TAITA-project by the Department of Geography, University of Helsinki. The road infrastructure of the study area is studied by remote sensing and GIS based methodology. As the principal dataset, true colour airborne digital camera data from 2004 was used to generate an aerial image mosaic of the study area. Auxiliary data includes SPOT satellite imagery from 2003, field spectrometry data of road surfaces and relevant literature. Road infrastructure characteristics are interpreted from three test sites using pixel-based supervised classification, object-oriented supervised classification and visual interpretation. Road infrastructure of the test sites is also interpreted visually from a SPOT image. Road centrelines are then extracted from the object-oriented classification results with an automatic vectorisation process. The road infrastructure of the entire image mosaic is mapped by applying the most appropriate of the assessed data and techniques. The spectral characteristics and reflectance of various road surfaces are considered with the acquired field spectra and relevant literature. The results are compared with the experimented road mapping methods. This study concludes that classification and extraction of roads remains a difficult task, and that the accuracy of the results is inadequate regardless of the high spatial resolution of the image mosaic used in this thesis. Visual interpretation, out of all the methods experimented with in this thesis, is the most straightforward, accurate and valid technique for road mapping. Certain road surfaces have spectral characteristics and reflectance values similar to those of other land cover and land use. This has a great influence on digital analysis techniques in particular. Road mapping is made even more complicated by rich vegetation and tree canopy, clouds, shadows, low contrast between roads and their surroundings, and the width of narrow roads in relation to the spatial resolution of the imagery used. The results of this thesis may be applied to road infrastructure mapping in developing countries in a more general context, although with certain limits. In particular, unclassified rural roads require updated road mapping schemas to intensify road transport possibilities and to assist in the development of the developing world.
Abstract:
Most Java programmers would agree that Java is a language that promotes a philosophy of “create and go forth”. By design, temporary objects are meant to be created on the heap, possibly used and then abandoned to be collected by the garbage collector. Excessive generation of temporary objects is termed “object churn” and is a form of software bloat that often leads to performance and memory problems. To mitigate this problem, many compiler optimizations aim at identifying objects that may be allocated on the stack. However, most such optimizations miss large opportunities for memory reuse when dealing with objects inside loops or when dealing with container objects. In this paper, we describe a novel algorithm that detects bloat caused by the creation of temporary container and String objects within a loop. Our analysis determines which objects created within a loop can be reused. Then we describe a source-to-source transformation that efficiently reuses such objects. Empirical evaluation indicates that our solution can reduce up to 40% of temporary object allocations in large programs, resulting in a performance improvement that can be as high as a 20% reduction in the run time, specifically when a program has a high churn rate or when the program is memory intensive and needs to run the GC often.
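The kind of rewrite such a source-to-source transformation performs can be illustrated with a hand-written before/after sketch (this shows only the general shape of the reuse, not the paper's detection algorithm):

```java
import java.util.ArrayList;
import java.util.List;

// Hand-written before/after illustration of the reuse a churn-reducing
// transformation aims for; not the paper's actual algorithm.
public class ChurnSketch {

    // Before: a new temporary list and StringBuilder are allocated on
    // every iteration and immediately become garbage.
    static void churny(String[][] rows) {
        for (String[] row : rows) {
            List<String> buffer = new ArrayList<>();
            StringBuilder line = new StringBuilder();
            for (String cell : row) {
                buffer.add(cell.trim());
                line.append(cell).append(',');
            }
            System.out.println(line);
        }
    }

    // After: the temporaries are hoisted out of the loop and cleared,
    // so every iteration reuses the same container and String buffer.
    static void reused(String[][] rows) {
        List<String> buffer = new ArrayList<>();
        StringBuilder line = new StringBuilder();
        for (String[] row : rows) {
            buffer.clear();
            line.setLength(0);
            for (String cell : row) {
                buffer.add(cell.trim());
                line.append(cell).append(',');
            }
            System.out.println(line);
        }
    }
}
```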