956 results for Probabilistic robotics


Relevance: 10.00%

Publisher:

Abstract:

There are few population-based studies of renal dysfunction and none conducted in developing countries. In the present study, the prevalence and predictors of elevated serum creatinine levels (SCr ≥ 1.3 mg/dl for men and ≥ 1.1 mg/dl for women) were determined among Brazilian adults (18-59 years) and older adults (≥60 years). Participants included all older adults (N = 1742) and a probabilistic sample of adults (N = 818) from Bambuí town, MG, Southeast Brazil. Predictors were investigated using multiple logistic regression. Mean SCr levels were 0.77 ± 0.15 mg/dl for adults, 1.02 ± 0.39 mg/dl for older men, and 0.81 ± 0.17 mg/dl for older women. Because there were only 4 cases (0.48%) with elevated SCr levels among adults, the analysis of elevated SCr levels was restricted to older adults. The overall prevalence of elevated SCr levels among the elderly was 5.09% (76/1494). The prevalence of hypercreatinemia increased significantly with age (chi² = 26.17, P < 0.001) and was higher for older men (8.19%) than for older women (5.29%; chi² = 5.00, P = 0.02). Elevated SCr levels were associated with age 70-79 years (odds ratio [OR] = 2.25, 95% confidence interval [CI]: 1.15-4.42), hypertension (OR = 3.04, 95% CI: 1.34-6.92), use of antihypertensive drugs (OR = 2.46, 95% CI: 1.26-4.82), chest pain (OR = 3.37, 95% CI: 1.31-8.74), and claudication (OR = 3.43, 95% CI: 1.30-9.09) among men, and with age >80 years (OR = 4.88, 95% CI: 2.24-10.65), use of antihypertensive drugs (OR = 4.06, 95% CI: 1.67-9.86), physical inactivity (OR = 2.11, 95% CI: 1.11-4.02), and myocardial infarction (OR = 3.89, 95% CI: 1.58-9.62) among women. The prevalence of renal dysfunction observed was much lower than that reported in other population-based studies, but predictors were similar. New investigations are needed to confirm the variability in prevalence and associated factors of renal dysfunction among populations.
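The odds ratios and confidence intervals reported above can be illustrated with a crude 2×2 exposure-outcome table; a minimal sketch using the Woolf (logit) interval and illustrative counts, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (Woolf/logit method) from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = or_ * math.exp(-z * se_log)
    hi = or_ * math.exp(z * se_log)
    return or_, lo, hi
```

In the study itself the ORs come from multiple logistic regression, which adjusts each predictor for the others; the crude 2×2 version is shown only because it is self-contained.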


Time series analysis can be categorized into three approaches: classical, Box-Jenkins, and state space. The classical approach lays the foundation for the analysis; the Box-Jenkins approach improves on it and deals with stationary time series; and the state space approach allows time-varying factors and covers a broader area of time series analysis. This thesis focuses on parameter identifiability under different parameter estimation methods, such as least squares (LSQ), Yule-Walker, and maximum likelihood (MLE), which are used in the above approaches. The Kalman filter and smoothing techniques are also integrated with the state space approach and the MLE method to estimate parameters that are allowed to change over time. Estimation is repeated and combined with MCMC in order to inspect how well the different methods can identify the optimal model parameters. Identification is examined in both a probabilistic and a general sense, and the results are compared in order to represent identifiability in a more informative way.
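As one concrete instance of the estimators discussed, the Yule-Walker estimate of a first-order autoregressive coefficient reduces to a ratio of sample autocovariances; a minimal sketch (the AR(1) model x_t = phi·x_{t-1} + e_t is assumed purely for illustration):

```python
import random

def autocovariance(x, lag):
    """Biased sample autocovariance of a series at a given lag."""
    n = len(x)
    mean = sum(x) / n
    return sum((x[t] - mean) * (x[t + lag] - mean) for t in range(n - lag)) / n

def yule_walker_ar1(x):
    """Yule-Walker estimate for AR(1): phi_hat = gamma(1) / gamma(0)."""
    return autocovariance(x, 1) / autocovariance(x, 0)

# Simulate an AR(1) series with phi = 0.6 and recover the coefficient.
rng = random.Random(0)
phi, x = 0.6, [0.0]
for _ in range(2000):
    x.append(phi * x[-1] + rng.gauss(0, 1))
phi_hat = yule_walker_ar1(x)
```

For higher-order AR models the same idea yields a linear system in the autocovariances rather than a single ratio.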


Alcohol is part of the history of humanity, seemingly as a result of countless factors, including the easy production of alcoholic beverages in practically all regions of the world. The authors studied aspects of the use of and dependence on alcohol in Brazil through a household survey conducted by the Centro Brasileiro de Informações sobre Drogas Psicotrópicas (CEBRID). A total of 8,589 interviews were held in 107 of the largest cities in Brazil, all with more than 200 thousand inhabitants. The study was planned to gather information within the household environment from a stratified probabilistic sample obtained in three selection stages: 1) the census sectors of each municipality; 2) systematic randomized sampling of households; and 3) selection by lot of one respondent in each household. Approximately 11.2% of the subjects were concerned about their own consumption of alcohol. The signs/symptoms of the dependence syndrome reported most often were the desire to stop or reduce the use of alcohol and drinking alcoholic beverages more often than intended, reported by 14.5% and 9.4% of the respondents, respectively. The Brazilian regions with the highest percentages of dependents were the North (16.3%) and the Northeast (19.9%). According to the estimates obtained in the survey, 5.2% of teenagers were concerned about their use of alcohol. These estimates reveal the need to implement preventive programs specific to the problem of alcohol, especially for the very young.
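The second selection stage, systematic randomized sampling, can be sketched generically as follows; this is a textbook illustration of the technique, not CEBRID's exact procedure:

```python
import random

def systematic_sample(population, k, seed=None):
    """Draw k units by systematic sampling: choose a random start within
    the first sampling interval, then take every (n/k)-th unit after it."""
    rng = random.Random(seed)
    step = len(population) / k
    start = rng.uniform(0, step)
    return [population[int(start + i * step)] for i in range(k)]
```

Because the draws are evenly spaced through the frame, a single random start determines the whole sample, which is what makes the method cheap to apply in the field.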


The growing population in cities increases energy demand and affects the environment through higher carbon emissions. Information and communications technology solutions that enable energy optimization are needed to address this growing demand and to reduce emissions. District heating systems optimize energy production by reusing waste energy in combined heat and power plants. Forecasting the heat load demand of residential buildings helps optimize energy production and consumption in a district heating system. However, the large number of factors involved, such as weather forecasts, district heating operational parameters, and user behaviour, makes heat load forecasting a challenging task. This thesis proposes a probabilistic machine learning model based on a Naive Bayes classifier to forecast the hourly heat load demand of three residential buildings in the city of Skellefteå, Sweden, over the winter and spring seasons. The district heating data collected from sensors installed in the residential buildings in Skellefteå is used to build the Bayesian network and to forecast the heat load demand for horizons of 1, 2, 3, 6, and 24 hours. The proposed model is validated in four cases that study the influence of various parameters on the heat load forecast, using trace-driven analysis in Weka and GeNIe. Results show that current heat load consumption and the outdoor temperature forecast are the two parameters with the most influence on the heat load forecast. The proposed model achieves average accuracies of 81.23% and 76.74% for a 1-hour forecast horizon across the three buildings in the winter and spring seasons, respectively. It also achieves an average accuracy of 77.97% across the three buildings and both seasons for the 1-hour horizon while using only 10% of the training data. These results indicate that even a simple model such as a Naive Bayes classifier can forecast heat load demand with little training data.
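A categorical Naive Bayes classifier of the kind used here fits in a few lines; the feature names and bins below (outdoor-temperature and current-load categories) are hypothetical stand-ins for the thesis's discretized inputs:

```python
from collections import Counter, defaultdict

def train_nb(samples):
    """samples: list of (features, label) with discrete feature values.
    Returns class counts and per-(feature, class) value counts."""
    priors = Counter(label for _, label in samples)
    counts = defaultdict(Counter)
    for feats, label in samples:
        for f, v in feats.items():
            counts[(f, label)][v] += 1
    return priors, counts

def predict_nb(priors, counts, feats):
    """Return the label maximizing P(label) * prod P(value | feature, label),
    smoothing each likelihood with add-one counts over the values seen
    in training plus one unseen slot."""
    total = sum(priors.values())
    best, best_p = None, -1.0
    for label, n in priors.items():
        p = n / total
        for f, v in feats.items():
            c = counts[(f, label)]
            p *= (c[v] + 1) / (n + len(c) + 1)
        if p > best_p:
            best, best_p = label, p
    return best

# Hypothetical training data: (outdoor temp bin, current load bin) -> next-hour load bin.
data = [({'temp': 'cold', 'load': 'high'}, 'high'),
        ({'temp': 'cold', 'load': 'high'}, 'high'),
        ({'temp': 'mild', 'load': 'low'}, 'low'),
        ({'temp': 'mild', 'load': 'low'}, 'low')]
priors, counts = train_nb(data)
```

The conditional-independence assumption behind the product of per-feature likelihoods is what keeps the model trainable from little data, matching the 10%-of-training-data result reported above.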


Nuclear power plants employ functional levels of defence in depth to ensure nuclear safety. The fifth and final level of defence aims to mitigate the environmental consequences of a severe accident and the radiation exposure of the population. For protective actions to succeed, it is essential to be able to estimate the magnitude and timing of a radioactive release in advance as accurately as possible. This Master's thesis presents the phenomena that affect the magnitude and timing of a radioactive release, together with the significant uncertainties associated with them. As regards the safety systems of nuclear power plants, the review covers the operating Finnish reactors Olkiluoto 1 & 2 and Loviisa 1 & 2. All Finnish plants have systems and functions in place for severe accident management. The work surveyed the software used in different countries to predict radioactive releases. Different countries operate tools with different working principles and scopes: some rely on precomputed results, while others compute the accident progression during the accident itself. In addition, a goal in Europe in the coming years is to develop shared emergency-preparedness tools for the cooperating countries. In this work, a new emergency-preparedness tool was developed for the Finnish Radiation and Nuclear Safety Authority (Säteilyturvakeskus, STUK) using VBA programming in Microsoft Excel. The tool draws on accident sequences from precomputed probabilistic safety analyses, so that in an emergency the evolution of the plant state can be assessed on the basis of the containment's functional capability. The tool was designed to be as easy as possible to use and to update.


Mass transfer kinetics in osmotic dehydration is usually modeled by Fick's law, empirical models, and probabilistic models. The aim of this study was to determine the applicability of the Peleg model for investigating mass transfer during osmotic dehydration of mackerel (Scomber japonicus) slices at different temperatures. Osmotic dehydration was performed on mackerel slices by cooking-infusion in solutions of glycerol and salt (a_w = 0.64) at different temperatures: 50, 70, and 90 ºC. The Peleg rate constant K1 (h·(g/g dm)⁻¹) varied with temperature from 0.761 to 0.396 for water loss, from 5.260 to 2.947 for salt gain, and from 0.854 to 0.566 for glycerol intake. In all cases it followed the Arrhenius relationship (R² > 0.86). The Ea values obtained were 16.14, 14.21, and 10.12 kJ/mol for water, salt, and glycerol, respectively. The statistical parameters that qualify the goodness of fit (R² > 0.91 and RMSE < 0.086) indicate promising applicability of the Peleg model.
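The two relationships used above can be written down directly; the parameter values in the sketch are illustrative placeholders, not the fitted values from this study:

```python
import math

def peleg(t, k1, k2):
    """Peleg model X(t) = t / (k1 + k2*t): k1 is the rate constant
    (inverse of the initial transfer rate) and 1/k2 the equilibrium value."""
    return t / (k1 + k2 * t)

def arrhenius(a, ea_j_mol, temp_c, r=8.314):
    """Arrhenius temperature dependence of a rate: k = A * exp(-Ea / (R*T))."""
    return a * math.exp(-ea_j_mol / (r * (temp_c + 273.15)))
```

Note the direction of the reported fit: K1 falls as temperature rises (e.g. 0.761 to 0.396 for water loss), i.e. the initial transfer rate 1/K1 speeds up with temperature in the Arrhenius fashion.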


Different axioms underlie efficient market theory and Keynes's liquidity preference theory. Efficient market theory assumes the ergodic axiom; consequently, today's decision makers can calculate with actuarial precision the future value of all possible outcomes resulting from today's decisions. Since in an efficient market world decision makers "know" their intertemporal budget constraints, they never default on a loan, i.e., systemic defaults, insolvencies, and bankruptcies are impossible. Keynes's liquidity preference theory rejects the ergodic axiom: the future is ontologically uncertain. Accordingly, systemic defaults and insolvencies can occur but can never be predicted in advance.


The aim of this work is to examine what opportunities digital storytelling offers in comprehensive schools. The work covers digital storytelling and how it is used in teaching. The background of the work is the 2016 national core curriculum prepared by the Finnish National Agency for Education (Opetushallitus). Programming is a new addition to the curriculum and is examined in somewhat more detail. In the future, technologies such as coding, robotics, and augmented reality can support creativity, innovativeness, and problem-solving skills. The work is a literature review in which the topic is analysed with the help of source literature. Digital storytelling in the classroom offers boundless possibilities, and it supports the goals of the new curriculum. With digital storytelling, children can be engaged in the learning process, their own strengths can be brought out, and they get to make discoveries and solve problems themselves. Programming, robotics, and augmented reality provide new tools for teaching. Programming is an intellectually motivating way of thinking. The use of technology in teaching increases motivation to study and the joy of working together.


The representation of a perceptual scene by a computer is usually limited to numbers representing dimensions and colours. The theory of affordances attempted to provide a new way of representing an environment with respect to a particular agent. The view was introduced as part of an entire field of psychology labelled 'ecological', which has since branched into computer science through robotics and formal methods. This thesis describes the concept of affordances, reviews several existing formalizations, and takes a brief look at applications to robotics. The formalizations put forth over the last 20 years share no agreed-upon structure, only the requirement that the agent and the environment be taken in relation to one another. Situation theory has also been evolving since its inception in 1983 by Barwise & Perry; it provides a formal way to represent any arbitrary piece of information in terms of relations. This thesis takes a toy version of situation theory published in CSLI Lecture Notes No. 22 and extends its ontologies to include specialized affordance types and individual object types. This allows the definition of semantic objects called environments, which support a situation and a set of affordances, and niches, which refer to a set of actions for an individual. Finally, a possible way for an environment to change into a new environment is suggested via the activation of an affordance.


Understanding the machinery of gene regulation in order to control gene expression has been one of the main focuses of bioinformaticians for years. We use a multi-objective genetic algorithm to evolve a specialized version of side effect machines for degenerate motif discovery. We compare several suggested objectives for the motifs they find, test different multi-objective scoring schemes and probabilistic background sequence models, and report our results on a synthetic dataset and on biological benchmarking suites. We conclude with a comparison of our algorithm against widely used motif discovery algorithms from the literature and suggest future directions for research in this area.
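Degenerate motifs are conventionally written in the IUPAC ambiguity alphabet, where one symbol stands for a set of bases; a minimal matcher and occurrence counter, which is only one ingredient of a fitness function and not the thesis's side effect machine representation:

```python
# Partial table of IUPAC degeneracy codes: each symbol names a set of bases.
IUPAC = {'A': 'A', 'C': 'C', 'G': 'G', 'T': 'T',
         'R': 'AG', 'Y': 'CT', 'S': 'CG', 'W': 'AT', 'N': 'ACGT'}

def matches(motif, window):
    """True if every base in the window is allowed by the motif symbol."""
    return len(motif) == len(window) and all(
        base in IUPAC[sym] for sym, base in zip(motif, window))

def count_occurrences(motif, seqs):
    """Count motif hits over all windows of all sequences."""
    m = len(motif)
    return sum(matches(motif, s[i:i + m])
               for s in seqs for i in range(len(s) - m + 1))
```

A fitness function would typically weigh such occurrence counts against the probability of chance hits under the chosen background sequence model.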


A complex network is an abstract representation of an intricate system of interrelated elements where the patterns of connection hold significant meaning. One particular complex network is a social network whereby the vertices represent people and edges denote their daily interactions. Understanding social network dynamics can be vital to the mitigation of disease spread as these networks model the interactions, and thus avenues of spread, between individuals. To better understand complex networks, algorithms which generate graphs exhibiting observed properties of real-world networks, known as graph models, are often constructed. While various efforts to aid with the construction of graph models have been proposed using statistical and probabilistic methods, genetic programming (GP) has only recently been considered. However, determining that a graph model of a complex network accurately describes the target network(s) is not a trivial task as the graph models are often stochastic in nature and the notion of similarity is dependent upon the expected behavior of the network. This thesis examines a number of well-known network properties to determine which measures best allowed networks generated by different graph models, and thus the models themselves, to be distinguished. A proposed meta-analysis procedure was used to demonstrate how these network measures interact when used together as classifiers to determine network, and thus model, (dis)similarity. The analytical results form the basis of the fitness evaluation for a GP system used to automatically construct graph models for complex networks. The GP-based automatic inference system was used to reproduce existing, well-known graph models as well as a real-world network. Results indicated that the automatically inferred models exemplified functional similarity when compared to their respective target networks. This approach also showed promise when used to infer a model for a mammalian brain network.
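One of the simpler network properties examined in such comparisons is the degree distribution; a sketch of computing it and an L1 (dis)similarity between two graphs, shown as a single illustrative measure where the thesis combines many measures via its meta-analysis procedure:

```python
from collections import Counter

def degree_distribution(edges, n):
    """Normalized degree distribution of an undirected graph on n vertices,
    given as an edge list."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    hist = Counter(deg[i] for i in range(n))
    return {k: c / n for k, c in hist.items()}

def l1_distance(p, q):
    """L1 distance between two degree distributions: 0 means identical."""
    return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in set(p) | set(q))
```

Because graph models are stochastic, such a distance would in practice be averaged over many generated instances before being used as a classifier of model (dis)similarity.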


In this text, we analyze recent developments in econometrics in the light of the theory of statistical tests. We first review some fundamental principles of the philosophy of science and of statistical theory, emphasizing parsimony and falsifiability as criteria for evaluating models, the role of testing theory as a formalization of the falsification principle for probabilistic models, and the logical justification of the basic notions of testing theory (such as the level of a test). We then show that some of the most widely used statistical and econometric methods are fundamentally inappropriate for the problems and models considered, while many hypotheses for which test procedures are commonly proposed are in fact not testable at all. Such situations lead to ill-posed statistical problems. We analyze some particular cases of such problems: (1) the construction of confidence intervals in structural models with identification problems; (2) the construction of tests for nonparametric hypotheses, including procedures robust to heteroskedasticity, non-normality, or dynamic specification. We point out that these difficulties often stem from the ambition of weakening the regularity conditions required for any statistical analysis, as well as from an inappropriate use of asymptotic distributional results. Finally, we underscore the importance of formulating testable hypotheses and models, and of proposing econometric techniques whose properties can be demonstrated in finite samples.
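The level of a test, a central notion above, can be checked by simulation: generate data under the null hypothesis many times and record how often the test rejects. A minimal sketch for a two-sided test of a zero mean at a nominal 5% level, using the normal critical value as an approximation:

```python
import math
import random

def rejects(sample, crit=1.96):
    """Two-sided test of H0: mean = 0 via the statistic sqrt(n)*xbar/s,
    compared against an (approximate) normal critical value."""
    n = len(sample)
    xbar = sum(sample) / n
    s2 = sum((x - xbar) ** 2 for x in sample) / (n - 1)
    return abs(math.sqrt(n) * xbar / math.sqrt(s2)) > crit

def empirical_level(n=30, reps=2000, seed=0):
    """Fraction of null datasets (i.i.d. N(0,1)) on which the test rejects."""
    rng = random.Random(seed)
    hits = sum(rejects([rng.gauss(0, 1) for _ in range(n)]) for _ in range(reps))
    return hits / reps
```

In this regular problem the empirical level stays near the nominal 5%; the text's point is that in ill-posed problems (non-identified structural models, non-testable hypotheses) such size control cannot be achieved at all.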


A full understanding of public affairs requires the ability to distinguish between the policies that voters would like the government to adopt and the influence that different voters or groups of voters actually exert in the democratic process. We consider the properties of a computable equilibrium model of a competitive political economy in which the economic interests of groups of voters and their effective influence on equilibrium policy outcomes can be explicitly distinguished and computed. The model incorporates an amended version of the GEMTAP tax model and is calibrated to data for the United States for 1973 and 1983. Emphasis is placed on how the aggregation of GEMTAP households into groups, within which economic and political behaviour is assumed homogeneous, affects the numerical representation of interests and influence for representative members of each group. Experiments with the model suggest that changes in both interests and influence are important parts of the story behind the evolution of U.S. tax policy in the decade after 1973.


Affiliation: Département de Biochimie, Faculté de médecine, Université de Montréal


Affiliation: Département de Biochimie, Université de Montréal