915 results for: Database search; Evidential value; Bayesian decision theory; Influence diagrams


Relevance:

100.00%

Publisher:

Abstract:

Decision theory is the study of models of judgement involved in, and leading to, deliberate and (usually) rational choice. In real estate investment there are normative models for the allocation of assets. These asset allocation models suggest an optimum allocation between the respective asset classes based on the investors' judgements of performance and risk. Real estate is selected, as other assets are, on the basis of some criterion, commonly its marginal contribution to the production of a mean-variance efficient multi-asset portfolio, subject to the investor's objectives and capital rationing constraints. However, decisions are made relative to current expectations and current business constraints. While a decision maker may believe in the optimum exposure levels dictated by an asset allocation model, the final decision may well be influenced by factors outside the parameters of the mathematical model. This paper discusses investors' perceptions of, and attitudes toward, real estate and highlights the important difference between theoretical exposure levels and pragmatic business considerations. It develops a model to identify "soft" parameters in decision making which influence the optimal allocation for that asset class. This "soft" information may relate to behavioural issues such as the tendency to mirror competitors, a desire to meet weight-of-money objectives, a desire to retain the status quo, and many other non-financial considerations. The paper aims to establish the place of property in multi-asset portfolios in the UK and to examine the asset allocation process in practice, with a view to understanding the decision-making process and examining investors' perceptions, based on an historic analysis of market expectations, a comparison with historic data, and an analysis of actual performance.
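As a purely illustrative sketch of the tension the abstract describes between a normative allocation model and "soft" business constraints: the snippet below computes the closed-form minimum-variance weight for a hypothetical two-asset (real estate vs. equities) mix, then overrides it with a hypothetical policy cap. All numbers and the 15% cap are invented for illustration, not taken from the paper.

```python
# Illustrative sketch with hypothetical numbers (not from the paper): the
# closed-form minimum-variance weight for a two-asset portfolio, and a "soft"
# business constraint overriding the model's optimum.

def min_variance_weight(var_a, var_b, cov_ab):
    """Weight on asset A minimising portfolio variance in a two-asset mix."""
    return (var_b - cov_ab) / (var_a + var_b - 2 * cov_ab)

var_re, var_eq, cov = 0.01, 0.04, 0.005   # hypothetical variances/covariance
w_model = min_variance_weight(var_re, var_eq, cov)   # model optimum for property
w_final = min(w_model, 0.15)   # "soft" cap: policy limits property to 15%
```

Here the model's optimum (87.5% in property) is discarded in favour of the pragmatic 15% cap, mirroring the paper's point that final decisions sit outside the mathematical model.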

Relevance:

100.00%

Publisher:

Abstract:

PURPOSE: Since its introduction in 2006, messages posted to the microblogging system Twitter have provided a rich dataset for researchers, leading to the publication of over a thousand academic papers. This paper aims to identify this published work and to classify it in order to understand Twitter-based research. DESIGN/METHODOLOGY/APPROACH: First, the papers on Twitter were identified. Second, following a review of the literature, a classification of the dimensions of microblogging research was established. Third, papers were qualitatively classified using open-coded content analysis, based on each paper's title and abstract, in order to analyze method, subject, and approach. FINDINGS: The majority of published work relating to Twitter concentrates on aspects of the messages sent and details of the users. A variety of methodological approaches is used across a range of identified domains. RESEARCH LIMITATIONS/IMPLICATIONS: This work reviewed the abstracts of all papers available via database search on the term "Twitter", which has two major implications: 1) the full papers are not considered, so works may be misclassified if their abstracts are unclear; 2) publications not indexed by the databases, such as book chapters, are not included. ORIGINALITY/VALUE: To date there has been no overarching study of the methods and purposes of those using Twitter as a research subject. Our major contribution is to scope out the papers published on Twitter up to the close of 2011. The classification derived here will provide a framework within which researchers studying Twitter-related topics will be able to position and ground their work.

Relevance:

100.00%

Publisher:

Abstract:

The UK has adopted legally binding carbon reduction targets of 34% by 2020 and 80% by 2050 (measured against the 1990 baseline). Buildings are estimated to be responsible for more than 50% of greenhouse gas (GHG) emissions in the UK. These emissions consist of both operational carbon (Oc), produced during use, and embodied carbon (Ec), produced during the manufacture of materials and components and during construction, refurbishment, and demolition. A brief assessment suggests that it is unlikely that the UK emission reduction targets can be met without substantial reductions in both Oc and Ec. Oc occurs over the lifetime of a building, whereas the bulk of Ec occurs at the start of a building's life. A time value for emissions could influence the decision-making process when comparing mitigation measures whose benefits occur at different times. An example is the choice between construction using low-Ec materials and construction using high-Ec materials but with lower Oc, although the use of high-Ec materials does not necessarily imply a lower Oc. Particular time-related issues examined here are: the urgency of the need to achieve large emissions reductions during the next 10 to 20 years; the fact that the earlier effective action is taken, the less costly it will be; future reductions in the carbon intensity of energy supply; and the carbon cycle and the relationship between the release of GHGs and their subsequent concentrations in the atmosphere. An equation is proposed which weights emissions according to when they occur during the building life cycle and which effectively increases Ec as a proportion of the total, suggesting that reducing Ec is likely to be more beneficial, in terms of climate change, for most new buildings. Thus, giving higher priority to Ec reductions is likely to result in a bigger positive impact on climate change and mitigation costs.
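The abstract does not reproduce the paper's actual weighting equation, so the sketch below is purely illustrative of the general idea: if an emission in year t is weighted by a decaying factor such as (1 + d)^-t (d is a hypothetical decay rate, not the paper's), up-front embodied carbon gains weight relative to operational carbon spread over the building's life.

```python
# Purely illustrative sketch -- NOT the paper's proposed equation. Assume an
# emission in year t is weighted by (1 + d)**-t, so early (embodied) emissions
# count for more than later (operational) ones; d is a hypothetical rate.

def time_weighted_emissions(emissions_by_year, d=0.03):
    """emissions_by_year: {year offset: tCO2e}; returns the weighted total."""
    return sum(e * (1 + d) ** -t for t, e in emissions_by_year.items())

# Embodied carbon up front (year 0), operational carbon over a 50-year life:
ec = {0: 500.0}
oc = {t: 10.0 for t in range(1, 51)}
ec_share_raw = 500.0 / (500.0 + 500.0)                      # unweighted: 50%
ec_share_weighted = time_weighted_emissions(ec) / (
    time_weighted_emissions(ec) + time_weighted_emissions(oc))
```

With any such decaying weight, Ec's share of the weighted total exceeds its share of the raw total, consistent with the abstract's claim that time-weighting "effectively increases Ec as a proportion of the total".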

Relevance:

100.00%

Publisher:

Abstract:

Probabilistic hydro-meteorological forecasts have over the last decades been used more frequently to communicate forecast uncertainty. This uncertainty is twofold, as it constitutes both an added value and a challenge for the forecaster and the user of the forecasts. Many authors have demonstrated the added (economic) value of probabilistic over deterministic forecasts across the water sector (e.g. flood protection, hydroelectric power management, and navigation). However, the richness of the information is also a source of challenges for operational use, owing partly to the difficulty of transforming the probability of occurrence of an event into a binary decision. This paper presents the results of a risk-based decision-making game on the topic of flood protection mitigation, called "How much are you prepared to pay for a forecast?". The game was played at several workshops in 2015, attended by operational forecasters and academics working in the field of hydrometeorology. The aim of the game was to better understand the role of probabilistic forecasts in decision-making processes and their perceived value to decision-makers. Based on the participants' willingness to pay for a forecast, the results of the game show that the value (or usefulness) of a forecast depends on several factors, including the way users perceive the quality of their forecasts and link it to the perception of their own performance as decision-makers.
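The difficulty of turning a probability into a binary decision is usually illustrated with the standard cost-loss model; note this is the textbook model, not necessarily the rules of the game described in the paper. Protection at cost C is worthwhile whenever the forecast flood probability p exceeds the cost-loss ratio C/L, where L is the loss incurred if a flood strikes unprotected.

```python
# Minimal cost-loss sketch (the standard textbook model, not the game's
# actual rules): protect at cost C whenever the forecast probability p of a
# flood exceeds the cost-loss ratio C/L, where L is the unprotected loss.

def expected_cost(p, cost, loss, protect):
    """Expected cost of one decision given flood probability p."""
    return cost if protect else p * loss

def best_action(p, cost, loss):
    """Protect iff the forecast probability exceeds C/L."""
    return p > cost / loss

# A 30% flood forecast with C = 10 and L = 100 makes protection worthwhile,
# since 0.3 > 10/100:
decision = best_action(0.3, 10.0, 100.0)
```

A forecast's economic value in this framing comes from how often it moves the user to the cheaper side of this threshold, which connects naturally to the willingness-to-pay elicited in the game.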

Relevance:

100.00%

Publisher:

Abstract:

We transform a non-cooperative game into a Bayesian decision problem for each player, where the uncertainty faced by a player is the strategy choices of the other players, the priors of the other players on the choices of the others, the priors over priors, and so on. We provide a complete characterization of the relationship between the extent of knowledge about the rationality of players and their ability to successfully eliminate strategies which are not best responses. This paper therefore provides the informational foundations of iteratively undominated strategies and rationalizable strategic behavior (Bernheim (1984) and Pearce (1984)). Moreover, sufficient conditions are also found for Nash equilibrium behavior. We also provide Aumann's (1985) results on correlated equilibria.
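The elimination procedure whose epistemic foundations the paper studies can be sketched mechanically. The toy code below performs iterated elimination of strictly dominated pure strategies in a two-player bimatrix game; it covers pure-strategy domination only and none of the paper's machinery of priors over priors.

```python
# Sketch: iterated elimination of strictly dominated pure strategies in a toy
# two-player bimatrix game (pure-strategy domination only).

def iterated_elimination(payoff_a, payoff_b):
    """Return the surviving row and column strategy indices."""
    rows = list(range(len(payoff_a)))
    cols = list(range(len(payoff_a[0])))
    changed = True
    while changed:
        changed = False
        for r in rows[:]:   # drop a row if some other row strictly dominates it
            if any(all(payoff_a[r2][c] > payoff_a[r][c] for c in cols)
                   for r2 in rows if r2 != r):
                rows.remove(r)
                changed = True
        for c in cols[:]:   # same for columns, via the column player's payoffs
            if any(all(payoff_b[r][c2] > payoff_b[r][c] for r in rows)
                   for c2 in cols if c2 != c):
                cols.remove(c)
                changed = True
    return rows, cols

# Prisoner's dilemma: "defect" (index 1) strictly dominates "cooperate" (0).
A = [[3, 0], [5, 1]]   # row player's payoffs
B = [[3, 5], [0, 1]]   # column player's payoffs
survivors = iterated_elimination(A, B)
```

Each round of elimination corresponds to one more level of mutual knowledge of rationality, which is the link the paper characterizes.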

Relevance:

100.00%

Publisher:

Abstract:

Within the Consumer Behaviour and Decision Theory literature there is a considerable theoretical body analysing negative feelings and adverse reactions in the purchase decision process for high- and low-involvement products. Several phenomena are identified as negative in this process, chiefly Consumer Confusion, which comprises three dimensions: i) too much similar information about products, ii) too much information about different products, and iii) false and ambiguous information. This phenomenon, however, appears to be moderated by a set of variables, such as Involvement, Experience, and Time Pressure (moderators of the relationship between Consumer Confusion and Purchase Intention). This was identified through in-depth interviews, whose results allowed the moderating variables to be identified, as well as the existence of the phenomenon and its relationship with the final purchase decision. In the second phase of the research, it is assumed that individuals with low Involvement and under Time Pressure have a higher propensity to confusion. In Study 2, Involvement and Time Pressure, both manipulated by instruction, were used as moderators, with Purchase Intention and Consumer Confusion as dependent variables. The results of Study 2 indicated significant differences between groups on Consumer Confusion, but in some groups Purchase Intention did not differ significantly. In Study 3, Experience (strong and weak) and Consumer Confusion were manipulated, with Purchase Intention as the dependent variable. The results of Study 3 likewise indicated significant differences in Purchase Intention between groups, both for low versus high confusion and for strong versus weak Experience. In the final phase of the research, consumers' strategies for coping with Consumer Confusion were highlighted. These strategies often mediate subsequent behaviours, such as purchasing the product. Study 4 manipulated Consumer Confusion in two of its dimensions. The predominant strategies when consumers face confusing situations were found to be information search and postponement of the decision.

Relevance:

100.00%

Publisher:

Abstract:

Research on the strategy process seeks to answer complex questions about how strategies are formed, executed, and modified, capturing the complex and dynamic relationship between the content of a strategy and its context of use. Despite the vast existing literature, relatively little is known about how processes actually affect strategy, making work on the strategy process more about process than strategy. This study aims to contribute to knowledge of business strategy, using strategy-as-practice as a way of observing the strategy process phenomenon in the field and exploring strategy practices in the organisation. The research is based on a single exploratory case study of a telecommunications equipment manufacturer, a global technology leader in its segments, identifying the changes in the strategy process that followed a change of CEO and their results as observed in strategy practices. Some of the strategic changes are investigated: realignment of solutions for new businesses, focus on content for cellular networks to capture value, and internationalisation. Market information from relevant international institutions on the growth of the internet, communications infrastructure, and mobility is used. From the academic literature, concepts were drawn to broaden the understanding of the context and the analysis of the organisation's external environment (information economics, the internet and transaction costs, the value chain, game theory, and industry clockspeed) and internal environment (strategy, its formation and practice, strategic planning, the value of innovation in business, internationalisation, and the influence of structure).

From this analysis, the growing importance and complexity of strategy in industries related in some way to the internet was observed, suggesting, for example, alternative interpretations of the relationship between strategy and organisational structure.

Relevance:

100.00%

Publisher:

Abstract:

This case study investigates the use of value-based management metrics in the economic analysis of investment projects and thereby aims to support investment decision-making. The theoretical framework rests on value-based management theory and traditional methods of investment-project appraisal. The objective is to verify the agreement between the traditional tools (NPV, net present value) and the newer, value-based ones (EVA, economic value added; CVA, cash value added), which are widely used to measure corporate performance.

Relevance:

100.00%

Publisher:

Abstract:

Frequency selective surfaces (FSS) are structures consisting of periodic arrays of conductive elements, called patches, which are usually very thin and printed on dielectric layers, or of apertures perforated in very thin metallic screens, for applications in the microwave and millimetre-wave bands. These structures are often used in aircraft, missiles, satellites, radomes, reflector antennas, high-gain antennas, and microwave ovens, for example. Their main purpose is to filter frequency bands, which can be transmitted or rejected depending on the requirements of the application. In turn, modern communication systems such as GSM (Global System for Mobile Communications), RFID (Radio Frequency Identification), Bluetooth, Wi-Fi, and WiMAX, whose services are in high demand, have required the development of antennas whose main features are low profile, low cost, and reduced dimensions and weight. In this context, the microstrip antenna is an excellent choice for today's communication systems because, in addition to intrinsically meeting these requirements, planar structures are easy to manufacture and to integrate with other components in microwave circuits. Consequently, the analysis and synthesis of these devices, given the wide range of possible shapes, sizes, and operating frequencies of their elements, has been carried out with full-wave models such as the finite element method, the method of moments, and the finite-difference time-domain method. These methods are accurate but demand great computational effort. In this context, computational intelligence (CI) has been used successfully, as a very appropriate auxiliary tool, in the design and optimisation of planar microwave structures, given the geometrical complexity of the antennas and FSS considered.

Computational intelligence is inspired by natural phenomena such as learning, perception, and decision-making, using techniques such as artificial neural networks, fuzzy logic, fractal geometry, and evolutionary computation. This work studies the application of computational intelligence, using meta-heuristics such as genetic algorithms and swarm intelligence, to the optimisation of antennas and frequency selective surfaces. Genetic algorithms are computational search methods, based on Darwin's theory of natural selection and on genetics, used to solve complex problems, e.g. problems whose search space grows with the size of the problem. Particle swarm optimisation uses collective intelligence and has been applied to optimisation problems in many research areas. The main objective of this work is the use of computational intelligence in the analysis and synthesis of antennas and FSS. The structures considered are a planar ring-type microstrip monopole and a cross-dipole FSS. Optimisation algorithms were developed and results obtained for optimised geometries of the antennas and FSS considered. To validate the results, several prototypes were designed, built, and measured; the measured results showed excellent agreement with the simulations. Moreover, the results obtained in this study were compared with those simulated using commercial software, and excellent agreement was again observed. Specifically, the efficiency of the CI techniques used was evidenced by simulated and measured results: the bandwidth of an antenna for wideband or UWB (Ultra Wideband) operation was optimised with a genetic algorithm, and the bandwidth of a pair of frequency selective surfaces was optimised, by specifying the length of the air gap between them, with a particle swarm optimisation algorithm.
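To make the particle-swarm idea concrete, here is a generic, minimal PSO loop (not the thesis's actual implementation). Evaluating a real antenna or FSS design would require a full-wave solver; the sphere function below is a stand-in objective, and all parameter values are conventional defaults, not values from the work.

```python
import random

# Minimal particle-swarm optimisation sketch (generic PSO, not the thesis's
# implementation), minimising a toy objective instead of an antenna model.

def pso(f, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Return the best position found for objective f (lower is better)."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]          # each particle's personal best
    gbest = min(pbest, key=f)[:]        # swarm's global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # inertia + pull toward personal best + pull toward global best
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

def sphere(x):
    """Toy cost function standing in for a simulated antenna/FSS response."""
    return sum(v * v for v in x)

best = pso(sphere)
```

In the thesis's setting, a particle's coordinates would encode design variables (for example the air-gap length between two FSS screens), and f would score the simulated bandwidth.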

Relevance:

100.00%

Publisher:

Abstract:

The caruncle is a structure present in the micropylar region of Euphorbiaceae seeds. It has the ecological function of promoting seed dispersal by ants (myrmecochory), but it is debated whether it also has agronomic importance by influencing seed germination. The influence of the caruncle on castor (Ricinus communis) seed germination was evaluated under low soil water content and high soil salinity. Seeds were germinated at soil water storage capacities varying from 22 to 50% and salinities (NaCl) varying from 0 to 10 dS m-1. Germination (%) increased with increments in soil moisture, but the caruncle had no influence on this process at any moisture level. In one genotype, more root dry mass was produced when the caruncle was excised. Increasing salinity reduced the percentage and speed of germination of castor seeds, but no influence of the caruncle was detected. No evidence of the caruncle influencing castor seed germination was found under low soil water content and high salinity.

Relevance:

100.00%

Publisher:

Abstract:

Postgraduate Programme in Mathematics Education - IGCE

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we propose a bivariate distribution for bivariate survival times based on the Farlie-Gumbel-Morgenstern copula to model the dependence in bivariate survival data. The proposed model allows for the presence of censored data and covariates. For inference, a Bayesian approach via Markov chain Monte Carlo (MCMC) is adopted. Further, some discussions on model selection criteria are given. In order to examine outlying and influential observations, we present Bayesian case-deletion influence diagnostics based on the Kullback-Leibler divergence. The newly developed procedures are illustrated via a simulation study and a real dataset.
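The Farlie-Gumbel-Morgenstern copula underlying the model is C(u, v) = uv[1 + θ(1-u)(1-v)] with -1 ≤ θ ≤ 1. As an illustrative sketch (not the paper's MCMC machinery), dependent uniform pairs can be drawn by conditional inversion: solve ∂C/∂u = p for v, a quadratic with the closed-form root used below.

```python
import math
import random

# Sketch: sampling (U, V) from a Farlie-Gumbel-Morgenstern copula,
# C(u, v) = u*v*(1 + theta*(1-u)*(1-v)), -1 <= theta <= 1,
# by conditional inversion (illustrative; not the paper's Bayesian procedure).

def fgm_sample(theta, rng):
    """Draw one (u, v) pair with FGM dependence parameter theta."""
    u, p = rng.random(), rng.random()
    a = theta * (1.0 - 2.0 * u)
    if abs(a) < 1e-12:
        v = p           # conditional CDF is the identity when a == 0
    else:
        # Solve (1+a)*v - a*v**2 = p for v, taking the root in [0, 1].
        v = ((1.0 + a) - math.sqrt((1.0 + a) ** 2 - 4.0 * a * p)) / (2.0 * a)
    return u, v

rng = random.Random(42)
pairs = [fgm_sample(0.8, rng) for _ in range(5000)]
```

Transforming each margin through an inverse survival CDF would then give dependent bivariate survival times; FGM permits only moderate dependence (Spearman's rho of θ/3 at most).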

Relevance:

100.00%

Publisher:

Abstract:

Water-soluble organic compounds (WSOCs) are major constituents of atmospheric aerosols, accounting for up to ~50% and more of the organic aerosol fraction. They can influence the optical properties and hygroscopicity of aerosol particles and thus their effects on climate. In addition, they can contribute to the toxicity and allergenicity of atmospheric aerosols. In this study, high-performance liquid chromatography coupled with diode-array detection and mass spectrometry (HPLC-DAD-MS and HPLC-MS/MS) was applied to analyse WSOCs characteristic of different aerosol sources and processes. Low-molecular-weight carboxylic acids and nitrophenols were investigated as indicators of fossil-fuel combustion and of the formation and ageing of secondary organic aerosols (SOA) from biogenic precursors. Protein macromolecules were investigated with regard to the influence of air pollution and nitration reactions on the allergenicity of primary biological aerosol particles such as pollen and fungal spores.

Filter samples of coarse and fine particulate matter were collected over one year and analysed for the following WSOCs: the pinene oxidation products pinic acid, pinonic acid, and 3-methyl-1,2,3-butanetricarboxylic acid (3-MBTCA), as well as a variety of other dicarboxylic acids and nitrophenols. Seasonal variations and other characteristic features are discussed with regard to aerosol sources and sinks and compared with data from other studies and regions. The ratios of adipic acid and phthalic acid to azelaic acid indicate that the investigated aerosol samples were influenced mainly by biogenic sources. A pronounced Arrhenius-type correlation was observed between the 3-MBTCA concentration and inverse temperature (R2 = 0.79, Ea = 126±10 kJ mol-1, temperature range 275-300 K). Model calculations show that this temperature dependence can be attributed to an increase in the photochemical production rate of 3-MBTCA through enhanced OH-radical concentrations at higher temperatures. Compared with chemical reaction kinetics, the influence of gas-particle partitioning effects appears to play only a minor role. The results indicate that the OH-initiated oxidation of pinonic acid is the rate-limiting step in the formation of 3-MBTCA; 3-MBTCA thus appears suitable as a tracer for the chemical ageing of biogenic secondary organic aerosol (SOA) by OH radicals. An Arrhenius-type temperature dependence was also observed for pinic acid and can be explained by the temperature dependence of biogenic pinene emissions as the rate-limiting step of pinic-acid formation (R2 = 0.60, Ea = 84±9 kJ mol-1).

For the investigation of protein nitration reactions, nitrated protein standards were synthesised by liquid-phase reaction of bovine serum albumin (BSA) and ovalbumin (OVA) with tetranitromethane (TNM). Protein nitration occurs preferentially at the residues of the aromatic amino acid tyrosine, and the protein nitration degree (ND), defined as the ratio of the mean number of nitrotyrosine residues to the total number of tyrosine residues in the protein molecules, was determined by UV-Vis photometry. BSA and OVA showed different relations between ND and the TNM/tyrosine ratio in the reaction mixture, presumably owing to differences in the solubilities and molecular structures of the two proteins. The nitration of BSA and OVA by exposure to a gas mixture of nitrogen dioxide (NO2) and ozone (O3) was investigated with a newly developed HPLC-DAD analytical method. This simple and robust method allows determination of the ND without hydrolysis or digestion of the investigated proteins and thus enables an efficient study of the kinetics of protein nitration reactions. For detailed product studies, the nitrated proteins were digested enzymatically, and the resulting oligopeptides were analysed by HPLC-MS/MS with database matching at high sequence coverage. The nitration degrees of individual nitrotyrosine residues (NDY) correlated well with the overall protein nitration degree (ND), and the differing ratios of NDY to ND shed light on the regioselectivity of the reaction. The nitration patterns of BSA and OVA after treatment with TNM suggest that the neighbourhood of a negatively charged amino-acid residue promotes tyrosine nitration. Treatment of BSA with NO2 and O3 led to different nitration patterns than treatment with TNM, indicating that the regioselectivity of nitration depends on the nitrating agent. Tyrosine residues in loop structures, however, appear to be nitrated preferentially regardless of the reagent. The methods and results of this study provide a basis for further detailed investigations of the reaction kinetics, products, and mechanisms of protein nitration reactions. They should help to clarify the links between traffic-related air pollutants, such as nitrogen oxides and ozone, and the allergenicity of airborne particulate matter.

Relevance:

100.00%

Publisher:

Abstract:

PURPOSE: To assess the survival outcomes and reported complications of screw- and cement-retained fixed reconstructions supported on dental implants. MATERIALS AND METHODS: A Medline (PubMed), Embase, and Cochrane electronic database search from 2000 to September 2012 using MeSH and free-text terms was conducted. Selected inclusion and exclusion criteria guided the search. All studies were first reviewed by abstract and subsequently by full-text reading by two examiners independently. Data were extracted by two examiners and statistically analyzed using a random effects Poisson regression. RESULTS: From 4,324 abstracts, 321 full-text articles were reviewed. Seventy-three articles were found to qualify for inclusion. Five-year survival rates of 96.03% (95% confidence interval [CI]: 93.85% to 97.43%) and 95.55% (95% CI: 92.96% to 97.19%) were calculated for cemented and screw-retained reconstructions, respectively (P = .69). Comparison of cement and screw retention showed no difference when grouped as single crowns (I-SC) (P = .10) or fixed partial dentures (I-FDP) (P = .49). The 5-year survival rate for screw-retained full-arch reconstructions was 96.71% (95% CI: 93.66% to 98.31%). All-ceramic reconstruction material exhibited a significantly higher failure rate than porcelain-fused-to-metal (PFM) in cemented reconstructions (P = .01) but not in screw-retained reconstructions (P = .66). Technical and biologic complications demonstrating a statistically significant difference included loss of retention (P ≤ .01), abutment loosening (P ≤ .01), porcelain fracture and/or chipping (P = .02), presence of fistula/suppuration (P ≤ .001), total technical events (P = .03), and total biologic events (P = .02). CONCLUSIONS: Although no statistical difference was found between cement- and screw-retained reconstructions for survival or failure rates, screw-retained reconstructions exhibited fewer technical and biologic complications overall. There were no statistically significant differences between the failure rates of the different reconstruction types (I-SCs, I-FDPs, full-arch I-FDPs) or abutment materials (titanium, gold, ceramic). The failure rate of cemented reconstructions was not influenced by the choice of a specific cement, though cement type did influence loss of retention.
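Meta-analyses of this kind that pool event rates with Poisson regression implicitly assume a constant annual event rate r, so a 5-year survival proportion S corresponds to S = exp(-5r). The sketch below converts between the two; the extrapolation to 10 years is illustrative only and is not a result of the review.

```python
import math

# Sketch of the constant-rate (Poisson) relationship behind pooled 5-year
# survival estimates: S = exp(-5 * r) for annual event rate r.

def annual_rate(survival_5y):
    """Annual event rate implied by a 5-year survival proportion."""
    return -math.log(survival_5y) / 5.0

def survival_at(rate, years):
    """Survival proportion after `years` at a constant annual event rate."""
    return math.exp(-rate * years)

# 96.03% 5-year survival for cemented reconstructions (figure from the review):
r_cemented = annual_rate(0.9603)
s10_cemented = survival_at(r_cemented, 10)   # illustrative 10-year projection
```

Under the constant-rate assumption the 10-year projection is simply the 5-year proportion squared, which shows how strongly such extrapolations lean on that assumption.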