28 results for probabilistic skepticism
Abstract:
Robotic grasping has been studied increasingly for a few decades. While progress has been made in this field, robotic hands are still nowhere near the capability of human hands. However, in the past few years, the increase in computational power and the availability of commercial tactile sensors have made it easier to develop techniques that exploit the feedback from the hand itself, the sense of touch. The focus of this thesis lies in the use of this sense. The work described in this thesis approaches robotic grasping from two different viewpoints: robotic systems and data-driven grasping. The robotic systems viewpoint describes a complete architecture for the act of grasping and, to a lesser extent, more general manipulation. Two central design goals of the architecture are hardware independence and the use of sensors during grasping; these properties enable multiple different robotic platforms to be used within the architecture. Secondly, new data-driven methods are proposed that can be incorporated into the grasping process. The first of these methods is a novel way of learning grasp stability from the tactile and haptic feedback of the hand, instead of solving the stability analytically from a set of known contacts between the hand and the object. By learning from the data directly, there is no need to know the properties of the hand, such as its kinematics, which enables the method to be used with complex hands. The second novel method, probabilistic grasping, combines the fields of tactile exploration and grasp planning. By employing well-known statistical methods and pre-existing knowledge of an object, object properties such as pose can be inferred together with the related uncertainty, and a grasp planning process then plans for grasps that remain stable under this uncertainty.
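As a minimal, purely illustrative sketch of the data-driven stability idea (the thesis does not prescribe this particular setup: the feature layout, the labels and the choice of an SVM classifier are assumptions made here), learning grasp stability directly from tactile/haptic feature vectors could look like this in Python:

# Illustrative sketch only: learning grasp stability directly from tactile/haptic
# feature vectors, with no hand kinematics required. The feature layout, labels
# and the choice of an SVM are assumptions, not the thesis's exact method.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical dataset: each grasp is a flattened vector of tactile cell
# pressures plus finger joint efforts; the label marks whether the grasp
# survived a shaking test (stable = 1, unstable = 0).
X = rng.normal(size=(500, 64))               # 500 grasps, 64 tactile/haptic features
y = (X[:, :8].mean(axis=1) > 0).astype(int)  # stand-in labels for the example

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_tr, y_tr)

print("held-out stability accuracy:", clf.score(X_te, y_te))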
Abstract:
This master's thesis examines scenarios which can lead to a reactivity excursion due to boron dilution events at the Loviisa nuclear power plant. The thesis also describes how the boron-diluted slugs are modelled in the Probabilistic Risk Assessment (PRA) model. In the current model, the assessment of the reactivity risk due to boron dilution has been very conservative, and therefore the reactivity risk during an outage accounts for as much as 9 % of the Core Damage Frequency (CDF) and the Large Release Frequency (LRF). The main objective of the thesis is to decrease the annual CDF and LRF by reducing the conservative assumptions in the probabilistic modelling of boron dilution events. The core behaviour during boron dilution events was modelled by Fortum in 2011 using the three-dimensional core model of Apros, and the results of those analyses were reported the same year. According to the reported results and the analyses made with the StarNode visualization program, some changes could be made to the boron dilution fault trees of the PRA model. As a result, the reactivity risk was decreased to 0.7 % of the annual CDF; in other words, the total annual CDF and LRF were decreased by 8.5 % due to the changes made in the PRA model. However, the analyses made with Apros include some uncertainty, and their accuracy should be validated before these changes are made in the official PRA model.
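Purely as an illustration of the bookkeeping behind such contribution figures (the numbers below are placeholders, not the Loviisa PRA values, and the percentages quoted in the abstract are rounded), the effect of reducing one initiating-event group's contribution on the total CDF can be computed as follows:

# Placeholder arithmetic only: how reducing one initiating-event group's
# contribution changes the total core damage frequency. The numbers are
# hypothetical and do not reproduce the Loviisa PRA results quoted above.
total_cdf = 1.0e-5                 # hypothetical annual CDF, 1/year
group_fraction_old = 0.20          # group contributes 20 % of the total
group_fraction_new = 0.05          # contribution after removing conservatism

group_old = group_fraction_old * total_cdf
group_new = group_fraction_new * total_cdf
revised_total = total_cdf - group_old + group_new

relative_decrease = (total_cdf - revised_total) / total_cdf
print(f"revised total CDF: {revised_total:.3e} 1/year")
print(f"relative decrease: {relative_decrease:.1%}")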
Abstract:
This master's thesis develops the cable database of the probabilistic fire risk analysis of the Loviisa power plant to meet future challenges. As background for the database development, probabilistic risk analysis is reviewed, particularly with respect to fire risk analysis. For the more practical development work, the cable databases currently in use at the plant are examined: PSA-ELTIE, compiled for fire risk studies; the maintenance information management system LOMAX; the archives of the electrical and automation design units; and the database of the automation renewal project. To clarify the more practical requirements for the database, a field inspection procedure, the primary cable mapping method, was trialled at the plant. Based on the review of the databases, LOMAX, PSA-ELTIE and an entirely new database were considered as alternative future databases. LOMAX is proposed as the future database option, since it requires fewer changes than the other alternatives. Such a widely used common database makes the data more easily and reliably available to everyone who needs it and editable by the relevant experts, which also helps to keep the data correct and up to date. For the upcoming LOMAX update, the necessary additional data fields and improvements to the cable hierarchy are proposed so that it can be taken into use as the cable database.
Abstract:
-
Abstract:
This work examines the operational safety of the Finnish nuclear power plants during annual maintenance outages in general, and assesses the coverage of the instructions prepared by the utilities for incident and emergency situations during the outages. The coverage was assessed by reviewing the parts of the probabilistic risk assessment (PRA), the Final Safety Analysis Report (FSAR) and the Operational Limits and Conditions (TTKE) that address outage states. According to the PRA, about 25 % of the core damage frequency of the Olkiluoto 1 and 2 units is associated with initiating events occurring during annual outages; for the Loviisa units the corresponding share is about 61 %. In terms of core damage risk, the most significant outage initiating events were fires, losses of coolant and losses of decay heat removal at Olkiluoto, and heavy load drops, boron dilution and oil accidents at Loviisa. Based on the results, the incident and emergency instructions prepared by the utilities were found to be largely appropriate and to cover the various outage initiating events well. A few improvements to the instructions were proposed on the basis of the review. The TTKE and FSAR provisions concerning outage states were found to be appropriate at both plants.
Abstract:
This study examines the structure of the Russian Reflexive Marker (-ся/-сь) and offers a usage-based model building on Construction Grammar and a probabilistic view of linguistic structure. Traditionally, reflexive verbs are accounted for relative to non-reflexive verbs. These accounts assume that linguistic structures emerge as pairs, and they further assume a directionality in which the semantics and structure of a reflexive verb can be derived from the non-reflexive verb. However, this directionality does not necessarily hold diachronically, and the semantics and the patterns associated with a particular reflexive verb are not always shared with the non-reflexive verb. Thus, a model is proposed that can accommodate the traditional pairs as well as the possible deviations without postulating different systems. A random sample of 2000 instances marked with the Reflexive Marker was extracted from the Russian National Corpus, and the sample used in this study contains 819 unique reflexive verbs. This study moves away from the traditional pair account and introduces the concept of the Neighbor Verb: a neighbor verb exists for a reflexive verb if the two share the same phonological form excluding the Reflexive Marker. It is claimed here that the Reflexive Marker constitutes a system in Russian and that the relation between the reflexive and neighbor verbs constitutes a cross-paradigmatic relation. Furthermore, the relation between the reflexive and the neighbor verb is argued to be one of symbolic connectivity rather than directionality; effectively, the relation holding between particular instantiations can vary. The theoretical basis of the present study builds on this assumption. Several new variables are examined in order to systematically model the variability of this symbolic connectivity, specifically the degree and strength of connectivity between items. In usage-based models, the lexicon does not constitute an unstructured list of items; instead, items are assumed to be interconnected in a network. This interconnectedness is defined as Neighborhood in this study. Additionally, each verb carves its own niche within the Neighborhood, and this interconnectedness is modeled through rhyme verbs, constituting the degree of connectivity of a particular verb in the lexicon. The second component of the degree of connectivity concerns the status of a particular verb relative to its rhyme verbs. The connectivity within the neighborhood of a particular verb varies, and this variability is quantified using the Levenshtein distance. The second property of the lexical network is the strength of connectivity between items. Frequency of use has been one of the primary variables used in functional linguistics to probe this. In addition, a new variable called Constructional Entropy is introduced in this study, building on information theory: it quantifies the amount of information carried by a particular reflexive verb in one or more argument constructions. The results of the lexical connectivity analysis indicate that the reflexive verbs have statistically greater neighborhood distances than the neighbor verbs. This distributional property can be used to motivate the traditional observation that reflexive verbs tend to have idiosyncratic properties. A set of argument constructions, generalizations over usage patterns, is proposed for the reflexive verbs in this study.
In addition to the variables associated with lexical connectivity, a number of variables proposed in the literature are explored and used as predictors in the model. The second part of this study introduces the use of a machine learning algorithm called Random Forests. The performance of the model indicates that it is capable, up to a degree, of disambiguating the proposed argument construction types of the Russian Reflexive Marker. Additionally, a global ranking of the predictors used in the model is offered. Finally, most construction grammars assume that argument constructions form a network structure, and a new method is proposed that establishes generalizations over the argument constructions, referred to as the Linking Construction. In sum, this study explores the structural properties of the Russian Reflexive Marker and sets forth a new model that can accommodate both the traditional pairs and potential deviations from them in a principled manner.
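As an illustration of two of the quantities discussed above, the sketch below computes a Shannon-entropy version of Constructional Entropy and a plain Levenshtein distance on toy data; the exact operationalizations in the thesis may differ.

# Illustrative sketch only, on hypothetical toy data. Constructional Entropy is
# taken here as the Shannon entropy of a verb's distribution over argument
# constructions; the neighborhood measure uses plain Levenshtein distance.
from collections import Counter
from math import log2

def constructional_entropy(construction_counts):
    """Shannon entropy (bits) of a verb's usage across argument constructions."""
    total = sum(construction_counts.values())
    probs = [c / total for c in construction_counts.values() if c > 0]
    return -sum(p * log2(p) for p in probs)

def levenshtein(a, b):
    """Standard dynamic-programming edit distance between two word forms."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

# Toy counts for a single reflexive verb across three argument constructions.
counts = Counter({"intransitive": 120, "oblique": 30, "dative": 10})
print(constructional_entropy(counts))    # amount of information carried by the verb
print(levenshtein("мыться", "мыть"))     # distance between a reflexive verb and its neighbor verb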
Abstract:
The main topic of the thesis is optimal stopping, which is treated in two research articles. In the first article we introduce a new approach to optimal stopping of general strong Markov processes. The approach is based on the representation of excessive functions as expected suprema. We present a variety of examples, in particular the Novikov-Shiryaev problem for Lévy processes. In the second article on optimal stopping we focus on the differentiability of excessive functions of diffusions and apply these results to study the validity of the principle of smooth fit. As an example we discuss optimal stopping of sticky Brownian motion. The third research article offers a survey-like discussion of Appell polynomials. The crucial role of Appell polynomials in optimal stopping of Lévy processes was noticed by Novikov and Shiryaev, who described the optimal rule in a large class of problems via these polynomials. We exploit the probabilistic approach to Appell polynomials and show that many classical results are obtained with ease in this framework. In the fourth article we derive a new relationship between the generalized Bernoulli polynomials and the generalized Euler polynomials.
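For orientation, and stated informally with moment and regularity conditions omitted (this is standard background recalled here, not a result of the thesis), the probabilistic definition of Appell polynomials and the shape of the Novikov-Shiryaev rule are:

\[
  \frac{e^{ux}}{\mathbb{E}\,e^{u\xi}} \;=\; \sum_{n\ge 0} Q_n(x)\,\frac{u^n}{n!},
  \qquad\text{equivalently}\qquad
  \mathbb{E}\,Q_n(x+\xi) \;=\; x^n ,
\]
\[
  V_n(x) \;=\; \sup_{\tau}\,\mathbb{E}_x\!\left[e^{-r\tau}\bigl(X_\tau^{+}\bigr)^{n}\right],
  \qquad
  \tau^{*} \;=\; \inf\{\,t\ge 0 : X_t \ge x_n^{*}\,\},
\]
where, in the Novikov-Shiryaev setting, \(Q_n\) is the Appell polynomial associated with the supremum \(M=\sup_{0\le t\le e_r}X_t\) of the Lévy process up to an independent exponential time \(e_r\) with rate \(r\), and \(x_n^{*}\) is the largest root of \(Q_n\).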
Abstract:
Time series analysis can be categorized into three approaches: classical, Box-Jenkins, and state space. The classical approach provides the foundation for the analysis; the Box-Jenkins approach refines it and deals with stationary time series; and the state space approach allows time-variant factors and covers a broader range of time series problems. This thesis focuses on the parameter identifiability of the estimation methods used in these approaches, such as least squares (LSQ), Yule-Walker and maximum likelihood estimation (MLE). In addition, the Kalman filter and smoothing techniques are combined with the state space approach and MLE to estimate parameters that are allowed to change over time. Parameter estimation is carried out repeatedly and combined with MCMC to inspect how well the different estimation methods identify the optimal model parameters. Identifiability is examined in both a probabilistic and a general sense, and the results are compared in order to study and present identifiability in a more informative way.
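As a minimal illustration of one of the estimators mentioned above (Yule-Walker for an AR(2) model on a simulated series; this generic sketch does not reproduce the thesis's experiments):

# Minimal illustration of Yule-Walker estimation for an AR(2) model on a
# simulated series; a generic sketch, not the thesis's own experiments.
import numpy as np

rng = np.random.default_rng(1)
true_phi = np.array([0.6, -0.3])

# Simulate an AR(2) process x_t = phi1*x_{t-1} + phi2*x_{t-2} + e_t.
n = 5000
x = np.zeros(n)
for t in range(2, n):
    x[t] = true_phi[0] * x[t - 1] + true_phi[1] * x[t - 2] + rng.normal()

def yule_walker(x, p):
    """Solve the Yule-Walker equations R * phi = r for an AR(p) model."""
    x = x - x.mean()
    acov = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(p + 1)])
    R = np.array([[acov[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, acov[1:])

print(yule_walker(x, 2))   # should be close to [0.6, -0.3]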
Abstract:
The growing population in cities increases the energy demand and affects the environment through increased carbon emissions. Information and communications technology solutions which enable energy optimization are needed to address this growing energy demand in cities and to reduce carbon emissions. District heating systems optimize energy production by reusing waste energy with combined heat and power plants. Forecasting the heat load demand of residential buildings assists in optimizing energy production and consumption in a district heating system. However, the large number of factors involved, such as weather forecasts, district heating operational parameters and user behaviour, makes heat load forecasting a challenging task. This thesis proposes a probabilistic machine learning model using a Naive Bayes classifier to forecast the hourly heat load demand of three residential buildings in the city of Skellefteå, Sweden, over the winter and spring seasons. The district heating data collected from the sensors installed in the residential buildings in Skellefteå are used to build the Bayesian network and to forecast the heat load demand for horizons of 1, 2, 3, 6 and 24 hours. The proposed model is validated with four cases studying the influence of various parameters on the heat load forecast, carried out as trace-driven analysis in Weka and GeNIe. Results show that the current heat load consumption and the outdoor temperature forecast are the two parameters with the most influence on the heat load forecast. The proposed model achieves average accuracies of 81.23 % and 76.74 % for a forecast horizon of 1 hour in the three buildings for the winter and spring seasons, respectively. The model also achieves an average accuracy of 77.97 % across the three buildings and both seasons for the 1-hour forecast horizon while utilizing only 10 % of the training data. The results indicate that even a simple model such as a Naive Bayes classifier can forecast the heat load demand while utilizing little training data.
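A minimal sketch of a Naive Bayes heat load forecaster on discretized features is given below; the feature set, binning and data are placeholders, whereas the thesis builds its models in Weka and GeNIe on real district heating sensor data.

# Illustrative sketch only: a Naive Bayes heat load forecaster on discretized
# features. The features, binning and data below are placeholders.
import numpy as np
from sklearn.naive_bayes import CategoricalNB
from sklearn.preprocessing import KBinsDiscretizer

rng = np.random.default_rng(2)
n = 2000

# Hypothetical hourly records: current heat load (kW), outdoor temperature
# forecast (degrees C) and hour of day; target is the heat load 1 hour ahead.
current_load = rng.uniform(20, 120, n)
temp_forecast = rng.uniform(-25, 15, n)
hour = rng.integers(0, 24, n)
next_load = 0.8 * current_load - 1.5 * temp_forecast + rng.normal(0, 5, n)

X = np.column_stack([current_load, temp_forecast, hour])
disc = KBinsDiscretizer(n_bins=8, encode="ordinal", strategy="uniform")
X_binned = disc.fit_transform(X).astype(int)

# Discretize the target into load classes, since the classifier forecasts categories.
y = np.digitize(next_load, bins=np.quantile(next_load, [0.25, 0.5, 0.75]))

split = int(0.9 * n)       # the abstract also reports results with only 10 % of the training data
model = CategoricalNB(min_categories=8).fit(X_binned[:split], y[:split])
print("accuracy:", model.score(X_binned[split:], y[split:]))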
Abstract:
Nuclear power plants use functional levels of defence in depth to ensure nuclear safety. The fifth and final level of defence aims to mitigate the environmental consequences of a severe accident and the radiation exposure of the population. For the protective measures to succeed, it is important to be able to estimate the magnitude and timing of a radioactive release in advance as accurately as possible. This master's thesis presents the phenomena that affect the magnitude and timing of a radioactive release, together with the significant uncertainties related to them. With respect to plant safety systems, the review covers the Finnish reactors currently in operation, Olkiluoto 1 & 2 and Loviisa 1 & 2. All Finnish plants have systems and functions suitable for severe accident management. Information was gathered on the programs used in different countries to predict radioactive releases. Different countries have programs with different operating principles and scopes: some tools use precomputed results, while in others the accident scenarios are calculated during the accident. In addition, the goal in Europe in the coming years is to develop common programs suitable for emergency preparedness use in the cooperating countries. In this work, a new emergency preparedness tool was developed for the Radiation and Nuclear Safety Authority (STUK) using VBA programming in Microsoft Excel. The tool makes use of accident sequences from precomputed probabilistic analyses, so that in an emergency situation the development of the plant state can be assessed on the basis of the capability of the containment. The tool was designed to be as easy to use and to update as possible.
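The tool itself is an Excel/VBA application; purely to illustrate the idea of drawing on precomputed sequences, a sketch with invented sequence data and fields (written here in Python for brevity) might look like this:

# Invented example data and fields; the real tool is an Excel/VBA application
# built on precomputed PRA sequences. The idea sketched here is only the
# selection of precomputed accident sequences that match the observed
# containment status, to give an early estimate of release size and timing.
precomputed_sequences = [
    {"id": "SEQ-01", "containment_intact": True,  "release_fraction": 1e-4, "release_hours": 24},
    {"id": "SEQ-02", "containment_intact": False, "release_fraction": 5e-2, "release_hours": 8},
    {"id": "SEQ-03", "containment_intact": True,  "release_fraction": 3e-4, "release_hours": 18},
]

def matching_sequences(containment_intact):
    """Return the precomputed sequences consistent with the observed plant state."""
    return [s for s in precomputed_sequences if s["containment_intact"] == containment_intact]

for seq in matching_sequences(containment_intact=True):
    print(seq["id"], f"release fraction {seq['release_fraction']:.0e}",
          f"around {seq['release_hours']} h after the initiating event")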
Abstract:
Object detection is a fundamental task of computer vision that is utilized as a core part of a number of industrial and scientific applications, for example in robotics, where objects need to be correctly detected and localized prior to being grasped and manipulated. Existing object detectors vary in (i) the amount of supervision they need for training, (ii) the type of learning method adopted (generative or discriminative) and (iii) the amount of spatial information used in the object model (model-free, using no spatial information in the object model, or model-based, with an explicit spatial model of an object). Although some existing methods report good performance in the detection of certain objects, the results tend to be application specific, and no universal method has been found that clearly outperforms all others in all areas. This work proposes a novel generative part-based object detector. The generative learning procedure of the developed method allows learning from positive examples only. The detector is based on finding semantically meaningful parts of the object (i.e. a part detector) that can provide information in addition to the object location, for example its pose. The object class model, i.e. the appearance of the object parts and their spatial variance, the constellation, is explicitly modelled in a fully probabilistic manner. The appearance is based on bio-inspired complex-valued Gabor features that are transformed into part probabilities by an unsupervised Gaussian Mixture Model (GMM). The proposed novel randomized GMM enables learning from only a few training examples. The probabilistic spatial model of the part configurations is constructed with a mixture of 2D Gaussians. The appearance of the object parts is learned in an object canonical space that removes geometric variations from the part appearance model. Robustness to pose variations is achieved by object pose quantization, which is more efficient than the previously used scale and orientation shifts in the Gabor feature space. The performance of the resulting generative object detector is characterized by high recall with low precision, i.e. the generative detector produces a large number of false positive detections. Thus a discriminative classifier is used to prune the false positive candidate detections produced by the generative detector, improving its precision while keeping recall high. Using only a small number of positive examples, the developed object detector performs comparably to state-of-the-art discriminative methods.
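As an illustrative sketch of the appearance-model idea, the snippet below turns local descriptors into part probabilities with a Gaussian mixture; the descriptors are random placeholders rather than the complex-valued Gabor features, and a standard GMM stands in for the randomized GMM of the thesis.

# Illustrative sketch only: turning local descriptors into part probabilities
# with a Gaussian Mixture Model, in the spirit of the appearance model above.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Hypothetical training descriptors sampled at annotated part locations
# (e.g. eyes and nose for a face class): n_samples x feature_dim.
train_descriptors = rng.normal(size=(300, 16))

# One mixture component per semantic part; the component responsibilities then
# act as part probabilities for a new descriptor.
n_parts = 5
gmm = GaussianMixture(n_components=n_parts, covariance_type="diag",
                      random_state=0).fit(train_descriptors)

new_descriptor = rng.normal(size=(1, 16))
part_probs = gmm.predict_proba(new_descriptor)   # shape (1, n_parts), rows sum to 1
print(np.round(part_probs, 3))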
Abstract:
Resilience is the property of a system to remain trustworthy despite changes. Changes of a different nature, whether due to failures of system components or varying operational conditions, significantly increase the complexity of system development. Therefore, advanced development technologies are required to build robust and flexible system architectures capable of adapting to such changes. Moreover, powerful quantitative techniques are needed to assess the impact of these changes on various system characteristics. Architectural flexibility is achieved by embedding into the system design the mechanisms for identifying changes and reacting to them. Hence a resilient system should have both advanced monitoring and error detection capabilities to recognise changes, as well as sophisticated reconfiguration mechanisms to adapt to them. The aim of such reconfiguration is to ensure that the system stays operational, i.e., remains capable of achieving its goals. Design, verification and assessment of the system reconfiguration mechanisms are challenging and error-prone engineering tasks. In this thesis, we propose and validate a formal framework for the development and assessment of resilient systems. Such a framework provides us with the means to specify and verify complex component interactions, model their cooperative behaviour in achieving system goals, and analyse the chosen reconfiguration strategies. Due to the variety of properties to be analysed, such a framework should have an integrated nature: to ensure functional correctness, it should rely on formal modelling and verification, while, to assess the impact of changes on such properties as performance and reliability, it should be combined with quantitative analysis. To ensure the scalability of the proposed framework, we choose Event-B as the basis for reasoning about functional correctness. Event-B is a state-based formal approach that promotes the correct-by-construction development paradigm and formal verification by theorem proving. Event-B has mature, industrial-strength tool support: the Rodin platform. Proof-based verification, as well as the reliance on abstraction and decomposition adopted in Event-B, provides the designers with powerful support for the development of complex systems. Moreover, top-down system development by refinement allows the developers to explicitly express and verify critical system-level properties. Besides ensuring functional correctness, achieving resilience also requires analysing a number of non-functional characteristics, such as reliability and performance. Therefore, in this thesis we also demonstrate how formal development in Event-B can be combined with quantitative analysis. Namely, we experiment with integrating such techniques as probabilistic model checking in PRISM and discrete-event simulation in SimPy with formal development in Event-B. Such an integration allows us to assess how changes and different reconfiguration strategies affect the overall system resilience. The approach proposed in this thesis is validated by a number of case studies from such areas as robotics, space, healthcare and the cloud domain.
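As a generic example of the kind of quantitative analysis combined with Event-B in the thesis, a tiny SimPy discrete-event model of a component that fails and is reconfigured onto a spare can be used to estimate availability; the model and its parameters below are hypothetical, not taken from the case studies.

# Illustrative sketch only: a small SimPy discrete-event model of failure and
# reconfiguration, used to estimate availability. Parameters are hypothetical.
import random
import simpy

MTTF = 100.0        # hypothetical mean time to failure
RECONF_TIME = 5.0   # hypothetical time to detect the failure and switch to a spare

def component(env, stats):
    while True:
        uptime = random.expovariate(1.0 / MTTF)
        yield env.timeout(uptime)            # component operates, then fails
        stats["up"] += uptime
        yield env.timeout(RECONF_TIME)       # system reconfigures onto a spare
        stats["down"] += RECONF_TIME

random.seed(0)
stats = {"up": 0.0, "down": 0.0}
env = simpy.Environment()
env.process(component(env, stats))
env.run(until=10_000)

print("estimated availability:", stats["up"] / (stats["up"] + stats["down"]))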
Abstract:
On the 14th of December 2007 the European Council invited the European Commission to present the first macro-regional strategy, the EU Strategy for the Baltic Sea Region (EUSBSR), primarily to address the collective challenges and opportunities of the Region and also to engender cohesion in support of European integration policy. However, macro-regional strategies conceived to aid European integration and territorial cohesion were viewed by academics with skepticism, obscuring the strategies' potential impact. This thesis intends to investigate and measure the added value of the EUSBSR in order to analyze its impact on regional development and its feasibility as a guide for future programs intended to strengthen European cohesion and integration. To determine the added value of the EUSBSR, the thesis is organized into three sections, addressing environmental, social and economic concerns, respectively. The first case examines EU-Russia cooperation in an environmental context to investigate how environmental cooperation with an external neighbor could forge increased cohesion in a macro-regional setting. To gauge the added value that academic cooperation among universities contributes to the social dimension, the work draws on the results of several studies. Lastly, to measure the added value for the economic objective of the strategy, the study uses the project for Improved Global Competitiveness, within the example of 'A Baltic Sea Region Program for Innovation, Cluster and SME-Networks', as an economic plan.