852 results for Inference


Relevance:

10.00%

Publisher:

Abstract:

Increasingly strict legal requirements make it difficult to improve the energy efficiency of existing buildings without altering their appearance. The purpose of this degree project is to investigate how large an energy-efficiency improvement can be achieved for three existing single-family houses built during the twentieth century by upgrading the buildings' thermal envelope, that is, the roof, walls, floor, windows and doors, without distorting the buildings' appearance and while preserving their cultural-historical value. The work consisted of a pre-study in which three buildings were identified, an investigation phase in which information about the buildings was collected, and a concluding phase in which energy-saving measures were proposed and evaluated. Buildings that were good representatives of their period and style were sought, and buildings from the 1910s, 1930s and 1970s were located. Case studies with interviews and inventories were then carried out. To assess each building's thermal envelope, U-value calculations and energy calculations were performed for the existing buildings and for the buildings with the proposed measures applied. After the proposed measures, none of the buildings reached the passive-house requirement of 59 kWh/year/m² Atemp or the BBR requirement of 110 kWh/year/m² Atemp for a building's specific energy use. The largest energy-efficiency improvement that can be achieved for the three twentieth-century buildings without distorting their appearance and while preserving their cultural-historical value is 13.0, 49.7 and 64.8 kWh/year/m² Atemp, respectively. The conclusions are that buildings from the 1910s can be improved by insulating the windows, adding an extra door on the inside of the front door, and adding insulation to the sloped roof. Buildings from the 1930s can be improved by fitting an insulating pane on the inside of the windows and an extra door on the inside of the front door. For buildings from the 1970s, the windows can be replaced with energy-efficient windows, no measure is taken for the floor, and the facade is insulated externally with vacuum insulation. The building from the 1970s fared best in the comparison because it was in authentic condition to begin with, which made the improvement larger than, for example, for the building from the 1910s, which had already been rebuilt before measures were proposed.
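
As an illustration of the U-value calculations mentioned above, the following is a minimal sketch in Python, assuming a simple layered wall; the layer thicknesses, conductivities and surface resistances are illustrative placeholders, not values from the study.

```python
# U-value of a layered wall: U = 1 / (R_si + sum(d_i / lambda_i) + R_se)
# The layer data below is illustrative only, not taken from the study.

R_SI, R_SE = 0.13, 0.04  # typical internal/external surface resistances (m2K/W)

layers = [
    # (thickness in m, thermal conductivity in W/mK)
    (0.022, 0.14),   # wooden panel
    (0.170, 0.037),  # mineral wool between studs (simplified, ignoring studs)
    (0.013, 0.25),   # plasterboard
]

r_total = R_SI + sum(d / lam for d, lam in layers) + R_SE
u_value = 1.0 / r_total
print(f"U-value: {u_value:.3f} W/m2K")

# The resulting specific energy use is then compared against limits such as
# 110 kWh/year/m2 Atemp (BBR) or 59 kWh/year/m2 Atemp (passive house).
```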

Relevance:

10.00%

Publisher:

Abstract:

A decision support system (DSS) based on a fuzzy logic inference system (FIS) was implemented to assist with dose adjustment of Duodopa infusion in patients with advanced Parkinson's disease, using data from motor state assessments and dosage. A three-tier architecture with an object-oriented approach was used. The DSS has a web-enabled graphical user interface that presents alerts indicating non-optimal dosage and motor states, new recommendations in the form of typical advice with a typical dose, and statistical measurements. One data set was used for design and tuning of the FIS and another for evaluating performance against the dose actually given. Overall goodness-of-fit was 0.65 for the new patients (design data) and 0.98 for the ongoing patients (evaluation data). User evaluation is now ongoing. The system could work as an assistant to clinical staff for Duodopa treatment in advanced Parkinson's disease.
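
As a rough illustration of the fuzzy inference idea behind such a DSS, the following is a minimal Mamdani-style sketch; the membership functions, rule base and dose-change percentages are hypothetical and are not the study's actual FIS.

```python
# Minimal Mamdani-style fuzzy inference sketch (illustrative rules, not the study's FIS).
# Input: a motor state score; output: a suggested relative dose change.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer_dose_change(motor_state):
    # Hypothetical fuzzification of motor state: -3 (very off) .. +3 (dyskinetic)
    off = tri(motor_state, -3.5, -2.0, 0.0)
    ok = tri(motor_state, -1.0, 0.0, 1.0)
    dyskinetic = tri(motor_state, 0.0, 2.0, 3.5)

    # Rule base: off -> increase dose, ok -> keep dose, dyskinetic -> decrease dose.
    # Defuzzify with a weighted average of representative dose changes (in %).
    weights = [off, ok, dyskinetic]
    changes = [+15.0, 0.0, -15.0]
    total = sum(weights)
    return sum(w * c for w, c in zip(weights, changes)) / total if total else 0.0

print(infer_dose_change(-1.5))  # leaning towards 'off' -> suggests a dose increase
```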

Relevance:

10.00%

Publisher:

Abstract:

We consider methods for estimating causal effects of treatment when the individuals in the treatment and control groups are self-selected, i.e., the selection mechanism is not randomized. In this case, a simple comparison of treated and control outcomes will not in general yield valid estimates of causal effects. The propensity score method is frequently used for the evaluation of treatment effects, but it rests on strong assumptions that are not directly testable. In this paper, we present an alternative modelling approach to drawing causal inference using a shared random-effects model, together with a computational algorithm for likelihood-based inference in such a model. Through small numerical studies and a real data analysis, we show that our approach not only gives more efficient estimates but is also less sensitive than existing methods to the model misspecifications we consider.
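
For contrast with the proposed shared random-effects approach, the following is a small sketch of the propensity score method the abstract refers to, using inverse-probability weighting on synthetic data; the data-generating model and effect size are invented for illustration.

```python
# Sketch of the propensity score approach with an inverse-probability-weighting
# (IPW) estimator on toy self-selected data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 2))                       # observed confounders
p = 1 / (1 + np.exp(-(x[:, 0] - 0.5 * x[:, 1])))  # true selection mechanism
t = rng.binomial(1, p)                            # self-selected treatment
y = 1.0 * t + x[:, 0] + rng.normal(size=n)        # outcome with true effect 1.0

# Estimate propensity scores e(x) = P(T=1 | X=x) and form inverse-probability weights.
e = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
ate = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))
print(f"naive difference:  {y[t == 1].mean() - y[t == 0].mean():.2f}")
print(f"IPW causal effect: {ate:.2f}")
```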

Relevance:

10.00%

Publisher:

Abstract:

This paper presents techniques of likelihood prediction for generalized linear mixed models. Methods of likelihood prediction are explained through a series of examples, from a classical one to more complicated ones. The examples show that, in simple cases, likelihood prediction (LP) coincides with well-known best frequentist practice such as the best linear unbiased predictor. The paper also outlines a way to deal with covariate uncertainty while producing predictive inference. Using a Poisson error-in-variables generalized linear model, it is shown that in more complicated cases LP produces better results than already known methods.
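
A minimal sketch of the basic likelihood-prediction idea for a single Poisson sample is given below; it illustrates profiling the joint likelihood over a future observation, not the paper's GLMM or error-in-variables machinery, and the counts are toy data.

```python
# Minimal sketch of profile-likelihood prediction for a future Poisson count
# (an illustration of the general idea, not the paper's method).
import numpy as np
from scipy.stats import poisson

y_obs = np.array([3, 5, 2, 4, 6, 3])  # toy observed counts

def profile_predictive_likelihood(y_new, y=y_obs):
    # Joint likelihood of (y, y_new) under a common Poisson mean mu,
    # maximized over mu: the MLE including y_new is the pooled mean.
    mu_hat = (y.sum() + y_new) / (len(y) + 1)
    return poisson.pmf(y, mu_hat).prod() * poisson.pmf(y_new, mu_hat)

candidates = np.arange(0, 15)
lp = np.array([profile_predictive_likelihood(k) for k in candidates])
lp /= lp.sum()  # normalize to a predictive distribution over the candidates
print("most likely future count:", candidates[lp.argmax()])
```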

Relevance:

10.00%

Publisher:

Abstract:

Accurate speed prediction is a crucial step in the development of a dynamic vehicle-activated sign (VAS). A previous study showed that the optimal trigger speed of such signs needs to be pre-determined according to the nature of the site and the traffic conditions. The objective of this paper is to find an accurate predictive model, based on historical traffic speed data, from which to derive the optimal trigger speed for such signs. Adaptive neuro-fuzzy inference system (ANFIS), classification and regression tree (CART) and random forest (RF) models were developed to predict one-step-ahead speed at all times of the day. The developed models were evaluated and compared with the results obtained from an artificial neural network (ANN), multiple linear regression (MLR) and naïve prediction, using traffic speed data collected at four sites in Sweden. The data were aggregated into two periods, a short-term period (5 min) and a long-term period (1 hour). The results of this study showed that RF is a promising method for predicting mean speed in the two proposed periods. It is concluded that, in terms of performance and computational complexity, a simplistic set of input features to the predictive model gave a marked improvement in the response time of the model whilst still delivering a low prediction error.
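
Below is a minimal sketch of one-step-ahead speed prediction with a random forest, assuming lagged aggregated speeds as input features; the synthetic series and lag length are placeholders, not the Swedish site data or the paper's feature set.

```python
# Sketch of one-step-ahead mean-speed prediction with a random forest,
# using lagged speeds as features (synthetic 5-min aggregates, not the study data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
t = np.arange(2000)
speed = 70 + 10 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 3, len(t))

lags = 3  # use the last three aggregated speeds to predict the next one
X = np.column_stack([speed[i:len(speed) - lags + i] for i in range(lags)])
y = speed[lags:]

split = int(0.8 * len(y))
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"one-step-ahead RMSE: {rmse:.2f} km/h")
```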

Relevance:

10.00%

Publisher:

Abstract:

Students in upper secondary school write in a number of different genres, and do this in school contexts as well as in their spare time. The study presented here is an overview of this activity and the genres concerned. The theoretical framework of the study is that of genre theory, whereby genre is understood as a socially situated concept. The study is based on 2,000 texts gathered from students on different study programmes all over Sweden in the school year 1996–97. The texts were written in different situations. The most important distinction made here is between test texts (i.e. texts from national tests) and self-chosen texts, which may come from school writing or spare-time writing. The texts are categorized according to genre. This text inventory shows a repertoire of 33 different genres in the material. A small number of genres, such as the story, the book review and the expository essay, dominate the school writing. The test genres differ from this pattern in that they clearly imitate texts with a genuine communicative intent. The most frequent genres are studied further and each of them is demonstrated by an interpretative reading. This reading shows that the genres differ considerably with respect to genre character and stability of text structure. A quantitative study of text length and variation in vocabulary further shows that texts written by two categories of students, those on vocationally oriented programmes and those on programmes preparing for higher education, differ significantly. Reference cohesion is studied in a smaller sample of the texts. This lexico-semantic mechanism of cohesion proves to exhibit an interrelation with variation in vocabulary as well as with text type. One particular cohesive tie, inference, shows different patterns in texts written by the two categories of students mentioned above.

Relevance:

10.00%

Publisher:

Abstract:

Many solutions to AI problems require the task to be represented in one of a multitude of rigorous mathematical formalisms. The construction of such mathematical models is a difficult problem that is often left to the user of the problem solver. This gap between problem solvers and the problems they solve is studied by the eclectic field of automated modelling. Within this field, compositional modelling, a knowledge-based methodology for system modelling, has established itself as a leading approach. In general, a compositional modeller organises knowledge in a structure of composable fragments that relate to particular system components or processes. Its embedded inference mechanism chooses the appropriate fragments for a given problem, instantiates them, and assembles them into a consistent system model. Many different types of compositional modeller exist, however, with significant differences in their knowledge representation and approach to inference. This paper examines compositional modelling. It presents a general framework for building and analysing compositional modellers. Based on this framework, a number of influential compositional modellers are examined and compared. The paper also identifies the strengths and weaknesses of compositional modelling and discusses some typical applications.
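
A toy sketch of the fragment-selection-and-assembly idea described above follows; the fragments, applicability conditions and equations are hypothetical and do not correspond to any particular compositional modeller.

```python
# Toy illustration of compositional modelling: model fragments with applicability
# conditions are selected for a scenario and assembled into one model description.
from dataclasses import dataclass

@dataclass
class Fragment:
    name: str
    requires: set    # scenario components this fragment applies to
    equations: list  # model relations contributed by the fragment

FRAGMENTS = [
    Fragment("tank_storage", {"tank"}, ["dV/dt = q_in - q_out"]),
    Fragment("valve_flow", {"valve", "tank"}, ["q_out = k * sqrt(V)"]),
    Fragment("pump_supply", {"pump"}, ["q_in = q_pump"]),
]

def compose(scenario_components):
    """Select every fragment whose requirements are met and assemble their relations."""
    selected = [f for f in FRAGMENTS if f.requires <= scenario_components]
    model = [eq for f in selected for eq in f.equations]
    return [f.name for f in selected], model

names, model = compose({"tank", "valve"})
print(names)  # ['tank_storage', 'valve_flow']
print(model)  # ['dV/dt = q_in - q_out', 'q_out = k * sqrt(V)']
```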

Relevance:

10.00%

Publisher:

Abstract:

The predominant knowledge-based approach to automated model construction, compositional modelling, employs a set of models of particular functional components. Its inference mechanism takes a scenario describing the constituent interacting components of a system and translates it into a useful mathematical model. This paper presents a novel compositional modelling approach aimed at building model repositories. It advances the field in two respects. Firstly, it expands the application domain of compositional modelling to systems that cannot easily be described in terms of interacting functional components, such as ecological systems. Secondly, it enables the incorporation of user preferences into the model selection process. These features are achieved by casting the compositional modelling problem as an activity-based dynamic preference constraint satisfaction problem, where the dynamic constraints describe the restrictions imposed on the composition of partial models and the preferences correspond to those of the user of the automated modeller. In addition, the preference levels are represented by symbolic values that differ by orders of magnitude.
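
A toy sketch of model composition as a preference constraint satisfaction problem in the spirit described above; the sub-model alternatives, compatibility constraint and penalty scale are invented for illustration and greatly simplify the activity-based dynamic formulation.

```python
# Toy sketch: each model aspect has alternative partial models, a compatibility
# constraint restricts combinations, and coarse order-of-magnitude preference
# penalties rank the consistent compositions. All domain content is hypothetical.
from itertools import product

choices = {
    "population": ["logistic", "exponential"],
    "predation": ["none", "lotka_volterra"],
}

def consistent(assignment):
    # Constraint that only applies once a predation sub-model is chosen:
    # it must compose with a population model exposing the variables it needs.
    if assignment["predation"] == "lotka_volterra":
        return assignment["population"] == "exponential"
    return True

# Preference penalties on a coarse order-of-magnitude scale (smaller is preferred).
penalty = {"logistic": 1, "exponential": 10, "none": 10, "lotka_volterra": 1}

candidates = [dict(zip(choices, vals)) for vals in product(*choices.values())]
valid = [a for a in candidates if consistent(a)]
best = min(valid, key=lambda a: sum(penalty[v] for v in a.values()))
print(best)  # a most-preferred consistent composition
```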

Relevance:

10.00%

Publisher:

Abstract:

For first-order Horn clauses without equality, resolution is complete with an arbitrary selection of a single literal in each clause [dN 96]. Here we extend this result to the case of clauses with equality for superposition-based inference systems. Our result is a generalization of the result given in [BG 01]. We answer their question about the completeness of a superposition-based system for general clauses with an arbitrary selection strategy, provided there exists a refutation without applications of the factoring inference rule.

Relevance:

10.00%

Publisher:

Abstract:

A sound and complete first-order goal-oriented sequent-type calculus is developed with "large-block" inference rules. In particular, the calculus contains formal analogues of such natural proof-search techniques as handling definitions and applying auxiliary propositions.

Relevance:

10.00%

Publisher:

Abstract:

Basic information theory is used to analyse the amount of confidential information which may be leaked by programs written in a very simple imperative language. In particular, a detailed analysis is given of the possible leakage due to equality tests and if statements. The analysis is presented as a set of syntax-directed inference rules and can readily be automated.
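
A small sketch of the kind of leakage quantity such an analysis computes: the number of bits of a uniform k-bit secret revealed by observing the outcome of one equality test. The code illustrates the information-theoretic measure, not the paper's inference rules.

```python
# Leakage of `if (secret == constant)` on a uniform k-bit secret, measured as the
# mutual information (in bits) between the secret and the boolean outcome.
import math

def equality_test_leakage(k):
    n = 2 ** k
    p_true = 1 / n
    p_false = 1 - p_true
    # The outcome is a deterministic function of the secret, so the leakage
    # equals the Shannon entropy of the outcome.
    return -(p_true * math.log2(p_true) + p_false * math.log2(p_false))

for k in (1, 8, 32):
    print(f"{k}-bit secret: {equality_test_leakage(k):.6f} bits leaked per test")
```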