949 results for empirical methods
Abstract:
Storyline detection from news articles aims at summarizing events described under a certain news topic and revealing how those events evolve over time. It is a difficult task because it requires first the detection of events from news articles published in different time periods and then the construction of storylines by linking events into coherent news stories. Moreover, each storyline has different hierarchical structures which are dependent across epochs. Existing approaches often ignore the dependency of hierarchical structures in storyline generation. In this paper, we propose an unsupervised Bayesian model, called dynamic storyline detection model, to extract structured representations and evolution patterns of storylines. The proposed model is evaluated on a large scale news corpus. Experimental results show that our proposed model outperforms several baseline approaches.
Abstract:
Purpose. The goal of this study is to improve the favorable molecular interactions between starch and PPC by adding the grafting monomers MA and ROM as compatibilizers, which would advance the mechanical properties of starch/PPC composites.

Methodology. Calculations based on DFT and semi-empirical methods were performed on three systems: (a) starch/PPC, (b) starch/PPC-MA, and (c) starch-ROM/PPC. Theoretical computations involved the determination of optimal geometries, binding energies, and vibrational frequencies of the blended polymers.

Findings. Calculations performed on five starch/PPC composites revealed hydrogen bond formation as the driving force behind stable composite formation, also confirmed by the negative relative energies of the composites, which indicate binding forces between the constituent co-polymers. The interaction between starch and PPC is further confirmed by the computed decrease in the stretching frequencies of the CO and OH groups participating in hydrogen bond formation, which agree qualitatively with the experimental values.

A three-step mechanism of grafting MA onto PPC was proposed to improve the compatibility of PPC with starch. Nine types of 'blends' produced by covalent bond formation between starch and MA-grafted PPC were found to be energetically stable, with blends involving MA grafted at the 'B' and 'C' positions of PPC showing binding-energy increases of 6.8 and 6.2 kcal/mol, respectively, compared to the non-grafted starch/PPC composites. A similar increase in binding energies was also observed for three types of 'composites' formed by hydrogen bond formation between starch and MA-grafted PPC.

Next, grafting of ROM onto starch and subsequent blend formation with PPC was studied. All four types of blends formed by the reaction of ROM-grafted starch with PPC were found to be more energetically stable than the starch/PPC composite and the starch/PPC-MA composites and blends. A blend of PPC and ROM grafted at the ' a&d12; ' position on amylose exhibited a maximal increase of 17.1 kcal/mol compared with the starch/PPC-MA blend.

Conclusions. ROM was found to be a more effective compatibilizer than MA in improving the favorable interactions between starch and PPC. The ' a&d12; ' position was found to be the most favorable attachment point of ROM to amylose for stable blend formation with PPC.
Abstract:
Consumers have relationships with other people, and they have relationships with brands similar to the ones they have with other people. Yet very little is known about how brand and interpersonal relationships relate to one another, and even less about how they jointly affect consumer well-being. The goal of this research, therefore, is to examine how brand and interpersonal relationships influence and are influenced by consumer well-being. Essay 1 uses both empirical methods and surveys of individuals and couples to investigate how consumer preferences in romantic couples, namely brand compatibility, influence life satisfaction. Using traditional statistical techniques and multilevel modeling, I find that the effect of brand compatibility, or the extent to which individuals have similar brand preferences, on life satisfaction depends upon power in the relationship. For high-power partners, brand compatibility has no effect on life satisfaction. For low-power partners, on the other hand, low brand compatibility is associated with decreased life satisfaction. I find that conflict mediates the link between brand compatibility and power on life satisfaction. In Essay 2 I again use empirical methods and surveys to investigate how resources, which can be considered a form of consumer well-being, influence brand and interpersonal relationships. Although social connection has long been considered a fundamental human motivation and deemed necessary for well-being (Baumeister and Leary 1995), recent research has demonstrated that having greater resources is associated with weaker social connections. In the current research I posit that individuals with greater resources still have a need to connect and are using other sources for connection, namely brands. Across several studies I test and find support for my theory that resource level shifts the preference for social connection from people to brands.
Specifically, I find that individuals with greater resources have stronger brand relationships, as measured by self-brand connection, brand satisfaction, purchase intentions and willingness to pay with both existing brand relationships and with new brands. This suggests that individuals with greater resources place more emphasis on these relationships. Furthermore, I find that resource level influences the stated importance of brand and interpersonal relationships, and that having or perceiving greater resources is associated with an increased preference to engage with brands over people. This research demonstrates that there are times when people prefer and seek out connections with brands over other people, and highlights the ways in which our brand and interpersonal relationships influence one another.
Abstract:
Community-driven Question Answering (CQA) systems crowdsource experiential information in the form of questions and answers and have accumulated valuable reusable knowledge. Clustering of QA datasets from CQA systems provides a means of organizing the content to ease tasks such as manual curation and tagging. In this paper, we present a clustering method that exploits the two-part question-answer structure in QA datasets to improve clustering quality. Our method, MixKMeans, composes question- and answer-space similarities in a way that allows the space in which the match is higher to dominate. This construction is motivated by our observation that semantic similarity between question-answer pairs (QAs) can be localized in either space. We empirically evaluate our method on a variety of real-world labeled datasets. Our results indicate that our method significantly outperforms state-of-the-art clustering methods for the task of clustering question-answer archives.
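The dominance-style composition described above can be sketched in a few lines. This is a hypothetical weighting rule for illustration only; the names `sim_q` and `sim_a` and the exact weighting are assumptions, since the abstract does not spell out the MixKMeans formula:

```python
def composed_similarity(sim_q, sim_a):
    """Compose question-space and answer-space similarities so that the
    space with the higher match dominates the result. A sketch of one
    possible dominance rule, not the paper's exact formulation."""
    total = sim_q + sim_a
    if total == 0:
        return 0.0
    # Each space is weighted by its own relative similarity, so the
    # higher-matching space contributes disproportionately.
    w_q, w_a = sim_q / total, sim_a / total
    return w_q * sim_q + w_a * sim_a

# A pair that matches strongly in question space alone scores well above
# the plain average of the two similarities.
score = composed_similarity(0.9, 0.1)
```

Under such a rule a pair that is similar only in answer space is treated symmetrically, matching the abstract's observation that similarity can be localized in either space.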
Abstract:
This study focuses on the properties conferred by the presence of fluorine in molecules, specifically in fluoroquinolones, antibiotics that are increasingly used. Several parameters were analyzed to obtain information on the drug-receptor interaction in fluoroquinolones. Computational chemical characterization techniques were used to characterize the fluoroquinolones electronically and structurally (3D), complementing the semi-empirical methods used initially. As is well known, specificity and affinity for the target site are essential for a drug's efficacy. Fluoroquinolones have undergone great development since the first quinolone was synthesized in 1958, with numerous derivatives synthesized since then. This is because they are easily manipulated, yielding highly potent drugs with a broad spectrum, optimized pharmacokinetic factors, and reduced adverse effects. The major pharmacological change that increased interest in this group was the substitution at C6 of a fluorine atom in place of a hydrogen. To obtain information on the influence of fluorine on the structural and electronic properties of fluoroquinolones, a comparison was made between fluoroquinolones with fluorine at C6 and with hydrogen at C6. The four fluoroquinolones in this study were ciprofloxacin, moxifloxacin, sparfloxacin, and pefloxacin. The information was obtained with quantum mechanics and molecular mechanics software. It was concluded that the fluorine substituent does not significantly modify the geometry of the molecules, but does alter the charge distribution on the vicinal carbon and on the atoms in the alpha, beta, and gamma positions relative to it. This modification of the electronic distribution can condition the binding of the drug to the receptor, modifying its pharmacological activity.
Abstract:
Following a biography of the Austrian educator and psychologist Elsa Köhler (1879-1940), this article describes her pioneering contributions to the foundation of empirical educational research. As a teacher, she worked early on to incorporate students' developmental levels into didactics, in the sense of developing differentiated teaching approaches. At the Psychological Institute of the University of Vienna she learned, under Karl Bühler, the quantitative and qualitative observation and protocol techniques designed for longitudinal single-case analyses of the development of children and adolescents, and she was the first to extend these methods to the pedagogical situation in the classroom, to groups of students, and to the analysis of the development of entire school classes. She contributed substantially to the adoption of empirical research methods in the progressive education movements of the 1920s and 1930s, and made her developmental analyses, carried out in the pedagogical situation, fruitful for developmental counseling aimed at optimizing students' self-regulation. Elsa Köhler combined basic research with a strong applied orientation in the classical areas of developmental psychology of childhood and adolescence, as well as in the areas of educational psychology and pedagogy that are today subsumed under educational research. Engaging with her work is of historical significance for the field and can also provide impulses for modern, interdisciplinary educational research. (DIPF/Orig.)
Abstract:
Nitrobenzoxadiazole (NBD)-labeled lipids are popular fluorescent membrane probes. However, the understanding of important aspects of the photophysics of NBD remains incomplete, including the observed shift in the emission spectrum of NBD-lipids to longer wavelengths following excitation at the red edge of the absorption spectrum (red-edge excitation shift, or REES). REES of NBD-lipids in membrane environments has previously been interpreted as reflecting restricted mobility of the solvent surrounding the fluorophore. However, this requires a large change in the dipole moment (Δμ) of NBD upon excitation. Previous calculations of Δμ of NBD in the literature were carried out using outdated semi-empirical methods, leading to conflicting values. Using up-to-date density functional theory methods, we recalculated Δμ and verified that it is rather small (~2 D). Fluorescence measurements confirmed that the REES is ~16 nm for 1,2-dioleoyl-sn-glycero-3-phospho-L-serine-N-(NBD) (NBD-PS) in dioleoylphosphatidylcholine vesicles. However, the observed shift is independent of both the temperature and the presence of cholesterol and is therefore insensitive to the mobility and hydration of the membrane. Moreover, red-edge excitation leads to an increased contribution of the decay component with a shorter lifetime, whereas time-resolved emission spectra of NBD-PS displayed an atypical blue shift following excitation. This excludes restrictions to solvent relaxation as the cause of the measured REES and TRES of NBD, pointing instead to the heterogeneous transverse location of the probes as the origin of these effects. The latter hypothesis was confirmed by molecular dynamics simulations, from which the calculated heterogeneity of the hydration and location of NBD correlated with the measured fluorescence lifetimes/REES.
Overall, our combination of theoretical and experiment-based techniques has led to a considerably improved understanding of the photophysics of NBD and, in particular, a reinterpretation of its REES.
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
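The belief-update step that BROAD performs after each response can be illustrated with a minimal sketch: plain Bayes' rule over a discrete set of candidate theories. The EC2 test-selection criterion itself is not reproduced here, and the numbers below are illustrative:

```python
def update_posterior(prior, likelihoods):
    """One Bayesian belief update over candidate theories after observing a
    subject's choice: the posterior is proportional to the prior times the
    likelihood of the observed choice under each theory."""
    unnormalized = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(unnormalized)  # normalizing constant
    return [u / z for u in unnormalized]

# Three candidate theories start equally plausible; the observed choice is
# most likely under the second theory, so belief shifts toward it.
posterior = update_posterior([1/3, 1/3, 1/3], [0.2, 0.7, 0.4])
```

In an adaptive design, the next lottery pair would then be chosen to best separate the theories that remain plausible under this updated belief.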
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and are sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller, more immediate payoff versus a larger, later payoff. We found very limited evidence for present-bias models and hyperbolic discounting; most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
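The competing discount functions compared here have standard textbook forms. The sketch below uses the common (β, δ) parameterization for the quasi-hyperbolic model (the thesis writes it as (α, β)) and the Loewenstein-Prelec form for generalized hyperbolic discounting; all parameter values are illustrative, not estimates from this work:

```python
def exponential(t, delta=0.9):
    """Exponential discounting: a constant per-period factor delta."""
    return delta ** t

def hyperbolic(t, k=0.5):
    """Hyperbolic discounting: 1 / (1 + k*t)."""
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(t, beta=0.7, delta=0.95):
    """'Present bias': immediate payoffs are undiscounted, while every
    future payoff takes a one-off extra penalty beta on top of delta**t."""
    return 1.0 if t == 0 else beta * delta ** t

def generalized_hyperbolic(t, alpha=1.0, beta=2.0):
    """Loewenstein-Prelec generalized hyperbolic: (1 + alpha*t)**(-beta/alpha)."""
    return (1.0 + alpha * t) ** (-beta / alpha)
```

The present-bias signature is visible directly: under the quasi-hyperbolic model the drop in value between t = 0 and t = 1 exceeds the per-period factor δ, whereas exponential discounting falls at the same rate every period.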
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioural theories in the "wild". We pay particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from those the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
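The loss-averse utility underlying this prediction can be illustrated with Tversky and Kahneman's (1992) prospect-theory value function. The parameter values below are their published median estimates, not values fitted in this research:

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value of a gain or loss x relative to a reference
    point: concave for gains, and steeper for losses because lam > 1
    (loss aversion). Parameters are Tversky & Kahneman's (1992) medians."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# Losing a discount hurts more than gaining the same discount helped:
pain = abs(prospect_value(-10.0))
pleasure = prospect_value(10.0)
```

This asymmetry around the reference point is what drives the predicted excess shift in demand toward a close substitute once an item's discount is removed.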
In future work, BROAD could be applied widely to test different behavioural models, e.g., in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, could be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
Atomistic pseudopotential quantum mechanical calculations are used to study transport in million-atom nanosized metal-oxide-semiconductor field-effect transistors. In the charge self-consistent calculation, the quantum mechanical eigenstates of closed systems, rather than the scattering states of open systems, are calculated. The question of how to use these eigenstates to simulate a nonequilibrium system, and how to calculate the electric currents, is addressed. Two methods of occupying the electron eigenstates to yield the charge density in a nonequilibrium condition are tested and compared: one is a partition method, the other a quasi-Fermi-level method. Two methods are also used to evaluate the current: one uses the ballistic and tunneling current approximation, the other the drift-diffusion method. (C) 2009 American Institute of Physics. [doi:10.1063/1.3248262]
Abstract:
Abstract based on that of the publication.
Abstract:
The objective of this book is to present the quantitative techniques that are commonly employed in empirical finance research, together with real-world, state-of-the-art research examples. Each chapter is written by international experts in their fields. The unique approach is to describe a question or issue in finance and then to demonstrate the methodologies that may be used to solve it. All of the techniques described are used to address real problems rather than being presented for their own sake, and the areas of application have been carefully selected so that a broad range of methodological approaches can be covered. This book is aimed primarily at doctoral researchers and academics who are engaged in conducting original empirical research in finance. In addition, the book will be useful to researchers in the financial markets and to advanced Master's-level students who are writing dissertations.
Abstract:
Previous research has shown multiple benefits of, and challenges with, the incorporation of children's literature in the English as a Second Language (ESL) classroom. In addition, the use of children's literature in the lower elementary English classroom is recommended by the Swedish National Agency for Education. Consequently, the current study explores how teachers in Swedish elementary schools teach ESL through children's literature. This empirical study involves English teachers from seven schools in a small municipality in Sweden. The data were collected through an Internet survey. The study also connects the results to previous international research, comparing Swedish and international findings. The results suggest that even though there are many benefits of using children's literature in the ESL classroom, the respondents seldom use these authentic texts, due to limited time and a narrow supply of literature, among other factors. However, despite these challenges, all of the teachers claim to use children's literature by reading aloud in the classroom. Based on the results, further research exploring pupils' thoughts, in contrast to teachers', would be beneficial. In addition, the majority of the participants expressed that they wanted more information on how to use children's literature. Therefore, additional research on beneficial methods of teaching English through children's literature, especially in Sweden, is recommended.
Abstract:
OBJECTIVE: Meta-analysis of studies of the accuracy of diagnostic tests currently uses a variety of methods. Statistically rigorous hierarchical models require expertise and sophisticated software. We assessed whether any of the simpler methods can in practice give adequately accurate and reliable results. STUDY DESIGN AND SETTING: We reviewed six methods for meta-analysis of diagnostic accuracy: four simple commonly used methods (simple pooling, separate random-effects meta-analyses of sensitivity and specificity, separate meta-analyses of positive and negative likelihood ratios, and the Littenberg-Moses summary receiver operating characteristic [ROC] curve) and two more statistically rigorous approaches using hierarchical models (bivariate random-effects meta-analysis and hierarchical summary ROC curve analysis). We applied the methods to data from a sample of eight systematic reviews chosen to illustrate a variety of patterns of results. RESULTS: In each meta-analysis, there was substantial heterogeneity between the results of different studies. Simple pooling of results gave misleading summary estimates of sensitivity and specificity in some meta-analyses, and the Littenberg-Moses method produced summary ROC curves that diverged from those produced by more rigorous methods in some situations. CONCLUSION: The closely related hierarchical summary ROC curve or bivariate models should be used as the standard method for meta-analysis of diagnostic accuracy.
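For illustration, the simplest of the methods reviewed, simple pooling, amounts to summing the 2x2 cells across studies as if they formed one large study. The sketch below (with made-up study counts) shows why this can mislead under between-study heterogeneity: the pooled estimate sits far from either study's own sensitivity:

```python
def simple_pooled_accuracy(studies):
    """Simple pooling for diagnostic-accuracy meta-analysis: sum the
    TP/FP/FN/TN cells across studies, then compute a single sensitivity
    and specificity. The review above shows this can give misleading
    summaries when studies are heterogeneous."""
    tp = sum(s[0] for s in studies)
    fp = sum(s[1] for s in studies)
    fn = sum(s[2] for s in studies)
    tn = sum(s[3] for s in studies)
    return tp / (tp + fn), tn / (tn + fp)

# Two heterogeneous studies (TP, FP, FN, TN): per-study sensitivities are
# 0.90 and 0.40, yet pooling reports one value in between.
sens, spec = simple_pooled_accuracy([(90, 10, 10, 90), (40, 5, 60, 95)])
```

The hierarchical (bivariate and HSROC) models recommended in the conclusion instead model between-study variation explicitly, which is why they require more sophisticated software.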