52 results for Distributed object


Relevance:

20.00%

Abstract:

The object-oriented programming course at Lappeenranta University of Technology had reached a point where changes were required to provide better learning opportunities and thus better learning outcomes. Based on student feedback, the course was partly dated and ineffective. The components of the course were analysed, the ineffective elements were removed, and new methods were introduced to improve the course. The major changes were the switch from traditional teaching methods to the reverse (flipped) classroom method and the adoption of Java as the programming language. The changes were evaluated through student feedback, the lecturer’s observations, and comparison with previous years. The feedback suggested that the changes were successful; the course received a higher overall grade than before.

Relevance:

20.00%

Abstract:

Questions concerning perception are as old as the field of philosophy itself. Taking the first-person perspective and philosophical documents as its starting point, the study examines the relationship between knowledge and perception. The problem is how one knows what one immediately perceives. The everyday belief that an object of perception is known to be a material object on the grounds of perception is shown to be unreliable. It is possible that directly perceived sensible particulars are mind-internal images, shapes, sounds, touches, tastes and smells. According to the appearance/reality distinction, the world of perception is the apparent realm, not the real external world. However, the distinction does not necessarily refute the existence of the external world. We have a causal connection with the external world via mind-internal particulars, and therefore we have indirect knowledge of the external world through perceptual experience. The research particularly concerns the reasons for George Berkeley’s claim that material things are mind-dependent ideas that really are perceived. The necessity of a perceiver’s own qualities for perceptual experience, such as mind, consciousness, and the brain, supports the causal theory of perception. Finally, it is asked why mind-internal entities are present when an object is perceived. Perception could not directly discern material objects without the presupposition of extra entities located between the perceiver and the external world. Nevertheless, the results show that perception is not sufficient to know what a perceptual object is, and that the existence of appearances is necessary to know that the external world is being perceived. However, the impossibility of matter does not follow from Berkeley’s theory. The main result of the research is that singular knowledge claims about the external world never refer directly and immediately to objects in the external world. A perceiver’s own qualities affect how perceptual objects appear in a perceptual situation.

Relevance:

20.00%

Abstract:

Object detection is a fundamental task in computer vision and serves as a core component in a number of industrial and scientific applications, for example in robotics, where objects need to be correctly detected and localized before being grasped and manipulated. Existing object detectors vary in (i) the amount of supervision they need for training, (ii) the type of learning method adopted (generative or discriminative) and (iii) the amount of spatial information used in the object model (model-free, using no spatial information, or model-based, with an explicit spatial model of the object). Although some existing methods report good performance in detecting certain objects, the results tend to be application specific, and no universal method has been found that clearly outperforms all others in all areas. This work proposes a novel generative part-based object detector. The generative learning procedure of the developed method allows learning from positive examples only. The detector is based on finding semantically meaningful parts of the object (i.e. a part detector) that can provide information beyond object location, for example pose. The object class model, i.e. the appearance of the object parts and their spatial variance (constellation), is explicitly modelled in a fully probabilistic manner. The appearance is based on bio-inspired complex-valued Gabor features that are transformed to part probabilities by an unsupervised Gaussian Mixture Model (GMM). The proposed novel randomized GMM enables learning from only a few training examples. The probabilistic spatial model of the part configurations is constructed with a mixture of 2D Gaussians. The appearance of the object parts is learned in an object canonical space that removes geometric variations from the part appearance model. Robustness to pose variations is achieved by object pose quantization, which is more efficient than the previously used scale and orientation shifts in the Gabor feature space. The resulting generative object detector is characterized by high recall with low precision, i.e. it produces a large number of false positive detections. A discriminative classifier is therefore used to prune the false positive candidate detections produced by the generative detector, improving its precision while keeping recall high. Using only a small number of positive examples, the developed object detector performs comparably to state-of-the-art discriminative methods.
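
To make the appearance modelling concrete, the following is a minimal sketch, not the thesis implementation: it fits an ordinary (non-randomized) Gaussian Mixture Model on Gabor-filter responses of example part patches and uses the log-likelihood to score a candidate patch. The patch size, the filter bank, and the random placeholder data are assumptions made for illustration only.

```python
# Minimal sketch (not the thesis implementation): modelling part appearance with a
# Gaussian Mixture Model over Gabor-filter responses, in the spirit described above.
# Feature extraction, part definitions and the randomized-GMM variant are simplified.
import numpy as np
from sklearn.mixture import GaussianMixture
from skimage.filters import gabor

def gabor_features(patch, frequencies=(0.1, 0.2, 0.3),
                   thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Stack mean Gabor magnitude responses of a grey-level patch into one feature vector."""
    feats = []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(patch, frequency=f, theta=t)
            feats.append(np.hypot(real, imag).mean())  # mean magnitude per filter
    return np.array(feats)

# Hypothetical training data: grey-level patches cropped around one annotated object part.
rng = np.random.default_rng(0)
train_patches = [rng.random((32, 32)) for _ in range(20)]  # placeholders for real patches
X = np.stack([gabor_features(p) for p in train_patches])

# Appearance model of the part: a small GMM fitted on positive examples only.
appearance = GaussianMixture(n_components=3, covariance_type="diag", random_state=0).fit(X)

# Scoring a candidate patch: a higher log-likelihood means the part is more likely present.
candidate = rng.random((32, 32))
print("part log-likelihood:", appearance.score_samples(gabor_features(candidate)[None, :])[0])
```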

Relevance:

20.00%

Abstract:

Resilience is the property of a system to remain trustworthy despite changes. Changes of a different nature, whether due to failures of system components or varying operational conditions, significantly increase the complexity of system development. Therefore, advanced development technologies are required to build robust and flexible system architectures capable of adapting to such changes. Moreover, powerful quantitative techniques are needed to assess the impact of these changes on various system characteristics. Architectural flexibility is achieved by embedding into the system design the mechanisms for identifying changes and reacting to them. Hence a resilient system should have both advanced monitoring and error detection capabilities to recognise changes, as well as sophisticated reconfiguration mechanisms to adapt to them. The aim of such reconfiguration is to ensure that the system stays operational, i.e., remains capable of achieving its goals. The design, verification and assessment of system reconfiguration mechanisms is a challenging and error-prone engineering task. In this thesis, we propose and validate a formal framework for the development and assessment of resilient systems. Such a framework provides us with the means to specify and verify complex component interactions, model their cooperative behaviour in achieving system goals, and analyse the chosen reconfiguration strategies. Due to the variety of properties to be analysed, such a framework should have an integrated nature. To ensure the system's functional correctness, it should rely on formal modelling and verification, while, to assess the impact of changes on such properties as performance and reliability, it should be combined with quantitative analysis. To ensure scalability of the proposed framework, we choose Event-B as the basis for reasoning about functional correctness. Event-B is a state-based formal approach that promotes the correct-by-construction development paradigm and formal verification by theorem proving. Event-B has mature industrial-strength tool support: the Rodin platform. Proof-based verification, as well as the reliance on abstraction and decomposition adopted in Event-B, provides designers with powerful support for the development of complex systems. Moreover, top-down system development by refinement allows the developers to explicitly express and verify critical system-level properties. Besides ensuring functional correctness, to achieve resilience we also need to analyse a number of non-functional characteristics, such as reliability and performance. Therefore, in this thesis we also demonstrate how formal development in Event-B can be combined with quantitative analysis. Namely, we experiment with integrating techniques such as probabilistic model checking in PRISM and discrete-event simulation in SimPy with formal development in Event-B. Such an integration allows us to assess how changes and different reconfiguration strategies affect the overall system resilience. The approach proposed in this thesis is validated by a number of case studies from the robotics, space, healthcare and cloud domains.
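
As an illustration of the quantitative side mentioned above, here is a minimal discrete-event simulation sketch in SimPy that estimates how a reconfiguration delay affects the availability of a single service that fails at random. The failure rate, the delay, and the one-component structure are illustrative assumptions, not the thesis case studies or their Event-B models.

```python
# Minimal SimPy sketch of the kind of quantitative analysis mentioned above:
# estimating how a reconfiguration delay affects availability of a service that
# fails at random. Rates, delays and the single-component structure are assumed.
import random
import simpy

MTBF = 100.0           # mean time between failures (assumed)
RECONFIG_DELAY = 5.0   # time to detect the failure and switch to a spare (assumed)
SIM_TIME = 10_000.0

downtime = 0.0

def component(env):
    global downtime
    while True:
        yield env.timeout(random.expovariate(1.0 / MTBF))  # run until the next failure
        start = env.now
        yield env.timeout(RECONFIG_DELAY)                   # reconfigure onto a spare
        downtime += env.now - start

random.seed(1)
env = simpy.Environment()
env.process(component(env))
env.run(until=SIM_TIME)
print(f"estimated availability: {1 - downtime / SIM_TIME:.4f}")
```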

Relevance:

20.00%

Abstract:

Human beings have always strived to preserve their memories and spread their ideas. In the beginning this was done through human interpretation, such as telling stories and creating sculptures. Later, technological progress made it possible to create a recording of a phenomenon; first as an analogue recording onto a physical object, and later digitally, as a sequence of bits to be interpreted by a computer. By the end of the 20th century technological advances had made it feasible to distribute media content over a computer network instead of on physical objects, thus enabling the concept of digital media distribution. Many digital media distribution systems already exist, and their continued, and in many cases increasing, usage indicates a high interest in their future enhancement and enrichment. By looking at these digital media distribution systems, we have identified three main areas of possible improvement: network structure and coordination, transport of content over the network, and the encoding used for the content. In this thesis, our aim is to show that improvements in performance, efficiency and availability can be made in conjunction with improvements in software quality and reliability through the use of formal methods: mathematical approaches to reasoning about software that allow us to prove its correctness together with other desirable properties. We envision a complete media distribution system based on a distributed architecture, such as peer-to-peer networking, in which different parts of the system have been formally modelled and verified. Starting with the network itself, we show how it can be formally constructed and modularised in the Event-B formalism, such that the modelling of one node is separated from the modelling of the network itself. We also show how the piece selection algorithm in the BitTorrent peer-to-peer transfer protocol can be adapted for on-demand media streaming, and how this can be modelled in Event-B. Furthermore, we show how modelling one peer in Event-B can give results similar to simulating an entire network of peers. Going further, we introduce a formal specification language for content transfer algorithms, and show that having such a language can make these algorithms easier to understand. We also show how generating Event-B code from this language results in less complexity compared to creating the models from written specifications. Finally, we consider the decoding part of a media distribution system by showing how video decoding can be done in parallel. This is based on formally defined dependencies between frames and blocks in a video sequence; we have shown that this step, too, can be performed in a way that is mathematically proven correct. The modelling and proving in this thesis is largely tool-based. This demonstrates the advancement of formal methods as well as their increased reliability, and thus advocates their more widespread use in the future.
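
The adaptation of piece selection for streaming can be illustrated with a short sketch. This is not the algorithm developed or verified in the thesis; it only shows the general idea of preferring pieces near the playback position and falling back to rarest-first elsewhere. The window size, the tie-breaking rule, and the data structures are assumptions.

```python
# Illustrative sketch (not the thesis algorithm): adapting BitTorrent-style piece
# selection for on-demand streaming by preferring pieces inside a playback window
# and falling back to rarest-first outside it.
import random

def select_piece(missing, availability, playback_pos, window=16):
    """Pick the next piece index to request.

    missing      -- set of piece indices the local peer still needs
    availability -- dict mapping piece index to the number of peers that have it
    playback_pos -- index of the piece currently being played
    """
    # 1. Urgent region: pieces needed soon for playback, requested in order.
    urgent = sorted(p for p in missing if playback_pos <= p < playback_pos + window)
    if urgent:
        return urgent[0]
    # 2. Otherwise behave like classic rarest-first to keep the swarm healthy.
    rest = [p for p in missing if p >= playback_pos + window]
    if not rest:
        return None
    rarest = min(availability.get(p, 0) for p in rest)
    return random.choice([p for p in rest if availability.get(p, 0) == rarest])

# Example: piece 3 is needed for playback, so it is chosen before any rare piece.
print(select_piece({3, 40, 41}, {3: 9, 40: 1, 41: 5}, playback_pos=2))
```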

Relevance:

20.00%

Abstract:

The goal of this thesis is to define and validate a software engineering approach for the development of a distributed system for the modelling of composite materials, based on an analysis of various existing software development methods. We reviewed the main features of: (1) software engineering methodologies; (2) the characteristics of distributed systems and their effect on software development; and (3) composite materials modelling activities and the requirements they place on software development. Using design science as the research methodology, the distributed system for creating models of composite materials was created and evaluated. The empirical experiments we conducted showed good convergence between the modelled and the real processes. Throughout the study, we paid attention to the complexity and importance of the distributed system and to a deep understanding of modern software engineering methods and tools.

Relevance:

20.00%

Abstract:

The liberalization of electricity markets has resulted in a competitive Nordic electricity market, in which electricity retailers play a key role as electricity suppliers, market intermediaries, and service providers. Although these roles may remain unchanged in the near future, the retailers’ operation may change fundamentally as a result of the emerging smart grid environment. In particular, the increasing amount of distributed energy resources (DER), and the improving opportunities for their control, are reshaping the operating environment of the retailers. This requires that the retailers’ operation models be developed to match an operating environment in which the active use of DER plays a major role. Electricity retailers have a clientele and operate actively in the electricity markets, which makes them a natural market party to offer new services to end-users aiming at an efficient and market-based use of DER. From the retailer’s point of view, the active use of DER can provide means to adapt the operation to meet the challenges posed by the smart grid environment, and to pursue the retailer’s ultimate objective, which is to maximize the profit of operation. This doctoral dissertation introduces a methodology for the comprehensive use of DER in an electricity retailer’s short-term profit optimization that covers operation in a variety of marketplaces, including the day-ahead, intra-day, and reserve markets. The analysis results provide data on the key profit-making opportunities and the risks associated with different types of DER use. The methodology may therefore serve as an efficient tool for an experienced operator in planning the optimal market-based use of DER. The key contributions of this doctoral dissertation lie in the analysis and development of a model that allows the retailer to benefit from the profit-making opportunities brought by the use of DER in different marketplaces, and also to manage the major risks involved in the active use of DER. In addition, the dissertation presents an analysis of the economic potential of DER control actions in different marketplaces, including the day-ahead Elspot market, the balancing power market, and the hourly market of the Frequency Containment Reserve for Disturbances (FCR-D).
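
As a toy illustration of this kind of short-term optimization, the sketch below chooses how much controllable DER output to sell, hour by hour, in a single day-ahead market by solving a small linear program. The prices, capacities, and the single-market, risk-free setting are illustrative assumptions; the dissertation's methodology covers several marketplaces and the associated risk management.

```python
# Toy sketch of short-term, market-based DER dispatch as a linear program.
# Prices, capacities and the single-market setting are illustrative assumptions only.
import numpy as np
from scipy.optimize import linprog

prices = np.array([30.0, 55.0, 80.0, 45.0])  # assumed day-ahead prices, EUR/MWh, per hour
max_per_hour = 2.0                           # controllable DER output per hour, MWh (assumed)
energy_budget = 5.0                          # total MWh available over the horizon (assumed)

# linprog minimizes, so negate prices to maximize revenue = prices @ x.
res = linprog(
    c=-prices,
    A_ub=np.ones((1, len(prices))), b_ub=[energy_budget],  # total energy limit
    bounds=[(0.0, max_per_hour)] * len(prices),
    method="highs",
)
print("dispatch per hour [MWh]:", res.x)  # energy is placed in the highest-price hours
print("revenue [EUR]:", prices @ res.x)
```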