978 results for Stated preference methods


Relevance: 30.00%

Abstract:

Background: Motivation is an important driver for health professionals to maintain professional competencies, remain in the workforce and contribute to work tasks. Although there is some research on motivation among health workers in low- and middle-income countries, maternal morbidity and mortality remain high in many of these countries, and they can be reduced by improving the quality of maternal services and the training and skills maintenance of maternal health workers. This study examines the impact of motivation on the maintenance of professional competence among maternal health workers in Vietnam using mixed methods. Methods: The study consisted of a survey, using a self-administered questionnaire, of 240 health workers in 5 districts across two Vietnamese provinces, and in-depth interviews with 43 health workers and health managers at the commune, district and provincial levels to explore external factors that influenced motivation. The questionnaire included a 23-item motivation instrument originally developed for the Kenyan health context and adapted for Vietnamese language and culture. Results: The 240 responses represented an estimated 95% of the target sample. Multivariate analysis showed that three factors contributed to the motivation of health workers: access to training (β = -0.14, p = 0.03), ability to perform key tasks (β = 0.22, p = 0.001), and shift schedule (β = -0.13, p = 0.05). Motivation was higher in health workers who identified themselves as competent or enabled to provide more care activities. Motivation was lower in those who worked more frequent night shifts and in those who had received training in the last 12 months. The interviews indicated that the latter finding arose because these workers felt the training was irrelevant to them and, in some cases, had no opportunity to practise the skills they had learnt. The qualitative data also showed that other factors relating to the service context and organisational management practices contributed to motivation. Conclusions: The study demonstrates the importance of understanding health workers' motivations and the factors that contribute to them, and may inform more effective management of the health workforce in low- and middle-income countries.
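As an illustration of the kind of multivariate model reported here, a minimal regression sketch follows. The dataset is synthetic and the variable names (motivation, training_access, key_tasks, night_shifts) are hypothetical stand-ins, not the study's actual coding.

```python
# Minimal sketch of a multivariate model of the kind reported above.
# Synthetic data; variable names are illustrative assumptions only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 240  # matches the survey's sample size
df = pd.DataFrame({
    "training_access": rng.integers(0, 2, n),   # trained in last 12 months?
    "key_tasks": rng.integers(1, 6, n),         # self-rated ability, 1-5
    "night_shifts": rng.integers(0, 8, n),      # night shifts per month
})
# Synthetic motivation score whose signs mirror the reported betas.
df["motivation"] = (3.5 - 0.14 * df.training_access + 0.22 * df.key_tasks
                    - 0.13 * df.night_shifts + rng.normal(0, 0.5, n))

model = smf.ols("motivation ~ training_access + key_tasks + night_shifts",
                data=df).fit()
print(model.summary())  # coefficients analogous to the betas quoted above
```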

Relevance: 30.00%

Abstract:

This thesis consists of an introduction, four research articles and an appendix. The thesis studies relations between two different approaches to the continuum limit of models of two-dimensional statistical mechanics at criticality. The first approach, conformal field theory (CFT), can be thought of as the algebraic classification of some basic objects in these models; it has been used successfully by physicists since the 1980s. The other approach, Schramm-Loewner evolutions (SLEs), is a more recently introduced set of mathematical methods for studying the random curves or interfaces that occur in the continuum limit of the models. The first and second articles argue, on the basis of statistical mechanics, what a plausible relation between SLEs and conformal field theory would be. The first article studies multiple SLEs, that is, several random curves simultaneously in a domain. The proposed definition is compatible with a natural commutation requirement suggested by Dubédat. The curves of a multiple SLE may form different topological configurations, or "pure geometries". We conjecture a relation between these topological configurations and the CFT concepts of conformal blocks and operator product expansions. Example applications of multiple SLEs include crossing probabilities for percolation and the Ising model. The second article studies SLE variants that represent models with boundary conditions implemented by primary fields. The best known of these, SLE(kappa, rho), is shown to take a simple form in the Coulomb gas formalism of CFT. In the third article, the space of local martingales for variants of SLE is shown to carry a representation of the Virasoro algebra. Finding this structure is guided by the relation of SLEs and CFTs in general, but the result is established in a straightforward fashion. This article, too, emphasizes multiple SLEs and proposes a possible way of treating pure geometries in terms of the Coulomb gas. The fourth article states results of applications of the Virasoro structure to the open questions of SLE reversibility and duality; proofs of the stated results are provided in the appendix. The objective is an indirect computation of certain polynomial expected values. Provided that these expected values exist, they are shown in generic cases to possess the desired properties, thus giving support for both reversibility and duality.
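For orientation, the definition of chordal SLE can be stated in one line; this is the standard formulation, not something specific to the thesis:

```latex
\[
\partial_t g_t(z) = \frac{2}{g_t(z) - W_t}, \qquad g_0(z) = z, \qquad W_t = \sqrt{\kappa}\, B_t ,
\]
```

where $B_t$ is a standard Brownian motion. The random curve is recovered from the growing hulls on which the conformal maps $g_t$ cease to be defined, and different values of $\kappa > 0$ correspond to interfaces of different lattice models.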

Relevance: 30.00%

Abstract:

RATIONALE: Impulsivity is a vulnerability marker for drug addiction, a disorder in which other behavioural traits such as anxiety and novelty seeking ('sensation seeking') are also widely present. However, the inter-relationships between impulsivity, novelty seeking and anxiety traits are poorly understood. OBJECTIVE: The objective of this paper was to investigate the contribution of novelty-seeking and anxiety traits to the expression of behavioural impulsivity in rats. METHODS: Rats were screened on the five-choice serial reaction time task (5-CSRTT) for spontaneously high impulsivity (SHI) and spontaneously low impulsivity (SLI) and subsequently tested for novelty reactivity and preference, assessed by open-field locomotor activity (OF), novelty place preference (NPP) and novel object recognition (OR). Anxiety was assessed on the elevated plus maze (EPM), both prior to and following administration of the anxiolytic drug diazepam, and by blood corticosterone levels following forced novelty exposure. Finally, the effects of diazepam on impulsivity and visual attention were assessed in SHI and SLI rats. RESULTS: SHI rats were significantly faster to enter an open arm on the EPM and, unlike SLI rats, exhibited a preference for novelty in the OR and NPP tests. However, there was no dimensional relationship between impulsivity and novelty-seeking behaviour, anxiety levels, OF activity or novelty-induced changes in blood corticosterone levels. By contrast, diazepam (0.3-3 mg/kg), whilst not significantly increasing or decreasing impulsivity in SHI and SLI rats, did reduce the contrast in impulsivity between the two groups of animals. CONCLUSIONS: This investigation indicates that behavioural impulsivity on the 5-CSRTT, which predicts vulnerability to cocaine addiction, is distinct from anxiety, novelty reactivity and novelty-induced stress responses, and thus has relevance for the aetiology of drug addiction.

Relevance: 30.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and the more ambitiously we extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in different contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We first look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices; theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests, which imposes computational and economic constraints on classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories; these in turn determine the next most informative test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees relative to the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
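A minimal sketch of the adaptive loop described above follows, assuming a deterministic prediction matrix for a handful of candidate theories and a symmetric response-noise rate. This is a toy reconstruction of the EC2-based selection idea, not the BROAD implementation itself.

```python
# Toy EC2-style adaptive test selection with noisy binary responses.
# Theories, tests and the noise rate are assumptions for the example.
import numpy as np

rng = np.random.default_rng(0)
n_theories, n_tests = 4, 50
eps = 0.1                                    # assumed subject-error rate
# predictions[h, t] in {0, 1}: the choice theory h predicts on test t
predictions = rng.integers(0, 2, size=(n_theories, n_tests))
posterior = np.full(n_theories, 1.0 / n_theories)

def ec2_weight(p):
    # EC2 objective: total weight of "edges" between theories that remain
    # confusable; with singleton classes this is the sum of p_i * p_j.
    return (p.sum() ** 2 - (p ** 2).sum()) / 2.0

def expected_weight_after(t, p):
    # Expected remaining edge weight if test t is run (soft Bayes update).
    exp_w = 0.0
    for outcome in (0, 1):
        like = np.where(predictions[:, t] == outcome, 1 - eps, eps)
        marg = like @ p
        exp_w += marg * ec2_weight(like * p / marg)
    return exp_w

true_theory = 2                              # ground truth for simulation
for step in range(15):
    t = min(range(n_tests), key=lambda j: expected_weight_after(j, posterior))
    outcome = predictions[true_theory, t]
    if rng.random() < eps:                   # noisy subject response
        outcome = 1 - outcome
    like = np.where(predictions[:, t] == outcome, 1 - eps, eps)
    posterior = like * posterior / (like @ posterior)

print("posterior over theories:", np.round(posterior, 3))
```

Greedily cutting the most expected edge weight per test is exactly where the adaptive submodularity argument gives its guarantee relative to the optimal sequence.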

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects were given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries were selected using BROAD, and 57 subjects from Caltech and UCLA were incentivized by randomly realizing one of the lotteries they chose. Aggregate posterior probabilities over the theories showed limited evidence in favour of the CRRA and moments models. Classifying the subjects into types showed that most subjects were best described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility of strategic manipulation: subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation can be ruled out both because it is infeasible in practice and because we find no signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized hyperbolic discounting; the corresponding discount functions are sketched below. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for the present bias models and hyperbolic discounting; most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
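The discount functions distinguishing these model classes take the following standard forms (given with conventional parameter names, which may differ from the thesis's (α, β) notation):

```latex
\[
D_{\mathrm{exp}}(t) = \delta^{t}, \qquad
D_{\mathrm{hyp}}(t) = \frac{1}{1 + k t}, \qquad
D_{\mathrm{qh}}(t) = \begin{cases} 1 & t = 0 \\ \beta\,\delta^{t} & t > 0 \end{cases}, \qquad
D_{\mathrm{gh}}(t) = (1 + \alpha t)^{-\beta/\alpha}.
\]
```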

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild", paying particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity alone would explain, and, more importantly, that when the item is no longer discounted, demand for its close substitutes will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
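For reference, the loss-averse utility entering such a discrete choice model is typically of the standard prospect-theoretic form below, evaluated against a reference point $r$; the paper's exact specification may differ:

```latex
\[
v(x \mid r) = \begin{cases} (x - r)^{\alpha} & x \ge r \\ -\lambda\,(r - x)^{\beta} & x < r \end{cases},
\qquad \lambda > 1 ,
\]
```

with choice probabilities then given by a logit over the options' values, e.g. $P(i) = e^{v_i} / \sum_j e^{v_j}$.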

In future work, BROAD could be applied widely to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, could be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 30.00%

Abstract:

Time, risk, and attention are all integral to economic decision making. The aim of this work is to understand those key components of decision making using a variety of approaches: providing axiomatic characterizations to investigate time discounting, generating measures of visual attention to infer consumers' intentions, and examining data from unique field settings.

Chapter 2, co-authored with Federico Echenique and Kota Saito, presents the first revealed-preference characterizations of the exponentially discounted utility model and its generalizations. My characterizations provide non-parametric revealed-preference tests. I apply the tests to data from a recent experiment and find that the axiomatization delivers new insights on a dataset that had previously been analyzed by traditional parametric methods.
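The model being axiomatized is the standard exponentially discounted utility form,

```latex
\[
U(x_0, x_1, \ldots, x_T) \;=\; \sum_{t=0}^{T} \delta^{t}\, u(x_t), \qquad 0 < \delta < 1 ,
\]
```

and the revealed-preference exercise asks which finite datasets of choices are consistent with some pair $(\delta, u)$.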

Chapter 3, co-authored with Min Jeong Kang and Colin Camerer, investigates whether "pre-choice" measures of visual attention improve the prediction of consumers' purchase intentions. We measure participants' visual attention using eye-tracking or mouse-tracking while they make hypothetical as well as real purchase decisions. I find that different patterns of visual attention are associated with hypothetical and real decisions, and I demonstrate that including information on visual attention improves the prediction of purchase decisions when attention is measured with mouse-tracking.

Chapter 4 investigates individuals' attitudes towards risk in a high-stakes environment using data from the TV game show Jeopardy!. I first quantify players' subjective beliefs about answering questions correctly. Using those beliefs in estimation, I find that the representative player is risk averse. I then find that trailing players tend to wager more than the "folk" strategies known within the community of contestants and fans would suggest, and that this tendency is related to their confidence. I also find gender differences: male players take more risk than female players, and even more so when competing against two other male players.

Chapter 5, co-authored with Colin Camerer, investigates the dynamics of the favorite-longshot bias (FLB) using data on horse-race betting from an online exchange that allows bettors to trade "in-play". I find that the probabilistic forecasts implied by market prices before the start of the races are well calibrated, but that the degree of FLB increases significantly as the events approach their end.

Relevance: 30.00%

Abstract:

Choosing the right or the best option is often a demanding and challenging task for the user (e.g., a customer of an online retailer) when many alternatives are available. In fact, the user rarely knows which offering will provide the highest value. To reduce the complexity of the choice process, automated recommender systems generate personalized recommendations. These recommendations take into account preferences collected from the user in an explicit way (e.g., letting users express their opinions about items) or an implicit way (e.g., studying behavioural features). Such systems are widespread; research indicates that they increase customer satisfaction and lead to higher sales. Preference handling is one of the core issues in the design of every recommender system: this kind of system often aims to guide users in a personalized way to interesting or useful options within a large space of possibilities, so it is important to capture and model the user's preferences as accurately as possible. In this thesis, we develop a comparative preference-based user model to represent the user's preferences in conversational recommender systems. This type of user model allows the recommender system to capture several preference nuances from the user's feedback. We show that, when applied to conversational recommender systems, the comparative preference-based model is able to guide the user towards the best option while the system is interacting with her. We empirically test and validate the suitability and the practical computational aspects of the comparative preference-based user model and the related preference relations by comparing them with a sum-of-weights-based user model and its related preference relations.

Product configuration, meeting scheduling and the construction of autonomous agents are among the artificial intelligence tasks that involve constrained optimization, that is, the optimization of behaviour or options subject to given constraints with regard to a set of preferences. When solving a constrained optimization problem, pruning techniques, such as branch and bound, direct the search towards the best assignments, allowing the bounding functions to prune more branches of the search tree. Several constrained optimization problems exhibit dominance relations, which can be particularly useful because they suggest new pruning rules for discarding non-optimal solutions; such pruning can achieve dramatic reductions in the search space. A number of constrained optimization problems can model the user's preferences using comparative preferences. In this thesis, we develop a set of pruning rules for the branch and bound technique to solve this kind of optimization problem efficiently. More specifically, we show how to generate newly defined pruning rules from a dominance algorithm that refers to a set of comparative preferences; a toy illustration of dominance-based pruning is sketched below. These rules include pruning approaches (and combinations of them) that can drastically prune the search space, mainly by reducing the number of (expensive) pairwise comparisons performed during the search, while guiding constrained optimization algorithms to optimal solutions. Our experimental results show that the pruning rules we have developed, and their different combinations, have varying impact on the performance of the branch and bound technique.
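The sketch below illustrates dominance-based pruning in a branch and bound search. The two-criterion objective and the bounding function are toy assumptions, not the comparative-preference dominance algorithm developed in the thesis.

```python
# Toy branch and bound with a dominance-based pruning rule. A full
# assignment gets a vector of criterion scores; one vector dominates
# another if it is at least as good everywhere and strictly better
# somewhere. Problem encoding is illustrative only.
from itertools import product

DOMAINS = [(0, 1), (0, 1), (0, 1)]           # three binary variables

def scores(assignment):
    # Hypothetical two-criterion objective (both maximized).
    x, y, z = assignment
    return (x + y, 2 * z - x)

def dominates(a, b):
    return all(ai >= bi for ai, bi in zip(a, b)) and a != b

def optimistic_bound(partial):
    # Component-wise best score reachable from this partial assignment
    # (enumerated here for clarity; a real bound would be analytic).
    rest = DOMAINS[len(partial):]
    return tuple(max(scores(tuple(partial) + tail)[i] for tail in product(*rest))
                 for i in range(2))

pareto = []                                   # incumbent non-dominated set

def search(partial):
    if len(partial) == len(DOMAINS):
        s = scores(tuple(partial))
        if not any(dominates(p, s) for p in pareto):
            pareto[:] = [p for p in pareto if not dominates(s, p)] + [s]
        return
    for v in DOMAINS[len(partial)]:
        child = partial + [v]
        # Pruning rule: cut the branch if its optimistic bound is already
        # dominated by an incumbent; every completion is then dominated too.
        if any(dominates(p, optimistic_bound(child)) for p in pareto):
            continue
        search(child)

search([])
print("non-dominated score vectors:", pareto)
```

The pruning step is sound because any completion scores no better, component-wise, than the optimistic bound; if an incumbent strictly dominates that bound, it dominates every completion.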

Relevance: 30.00%

Abstract:

INTRODUCTION: Adherence to glaucoma medications is essential for successful treatment of the disease, but it is complex and difficult for many of our patients. Health coaching has been used successfully in the treatment of other chronic diseases. This pilot study explores the use of health coaching in glaucoma care. METHODS: A mixed-methods study design was used to assess the health coaching intervention for glaucoma patients. The intervention consisted of four to six health coaching sessions with a certified health coach via telephone. Quantitative measures included demographic and health information, adherence to glaucoma medications (using a visual analog adherence scale and a medication event monitoring system), and an exit survey rating the experience. Qualitative measures included a pre-coaching health questionnaire, notes made by the coach during the intervention, and an exit interview with the subjects at the end of the study. RESULTS: Four glaucoma patients participated in the study; all derived benefit from the health coaching. Study subjects demonstrated increased adherence to their glaucoma drops in response to the coaching intervention, on both the visual analog scale and the medication event monitoring system. The subjects' qualitative feedback reflected a perceived improvement in both eye and general health self-care, and they stated that they would recommend health coaching to friends or family members. CONCLUSION: Health coaching was helpful to the glaucoma patients in this study; it has the potential to improve glaucoma care and overall health.

Relevance: 30.00%

Abstract:

For optimal solutions in health care, decision makers inevitably must evaluate trade-offs, which calls for multi-attribute valuation methods. Researchers have proposed best-worst scaling (BWS) methods, which extract information from respondents by asking them to identify the best and worst items in each choice set. While a companion paper describes the different types of BWS, their applications, and their advantages and downsides, this contribution expounds their relationship with microeconomic theory, which also has implications for statistical inference. The article is devoted to the microeconomic foundations of preference measurement and also addresses issues such as scale invariance and scale heterogeneity. Furthermore, the paper discusses the basics of preference measurement using rating, ranking and stated choice data in the light of the findings of the preceding section. Finally, it gives an introduction to the use of stated choice data and juxtaposes BWS with its microeconomic foundations.
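As one concrete bridge between BWS and random-utility microeconomics, the widely used maxdiff model assigns a best-worst pair the probability below; this is the standard formulation, given here for orientation:

```latex
\[
P(\text{best} = i,\ \text{worst} = j \mid C) \;=\;
\frac{\exp\!\left(v_i - v_j\right)}{\sum_{k \in C}\,\sum_{l \in C,\, l \neq k} \exp\!\left(v_k - v_l\right)} ,
\]
```

where $v_i$ is the (scaled) utility of item $i$ in choice set $C$; the scale factor multiplying the $v$'s is the source of the scale-invariance and scale-heterogeneity issues mentioned above.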

Relevance: 30.00%

Abstract:

This paper compares the applicability of three ground survey methods for modelling terrain: one-man electronic tachymetry (TPS), real-time kinematic GPS (GPS), and terrestrial laser scanning (TLS). The vertical accuracy of digital terrain models (DTMs) derived from GPS, TLS and airborne laser scanning (ALS) data is assessed. Point elevations acquired by the four methods represent two sections of a mountainous area in Cumbria, England, chosen so that the presence of non-terrain features was kept to a minimum. The vertical accuracy of the DTMs was assessed by subtracting each DTM from the TPS point elevations, and the error was examined using exploratory measures including summary statistics, histograms and normal probability plots. The results showed that the internal measurement accuracy of TPS, GPS and TLS was below a centimetre. TPS and GPS can be considered equally applicable alternatives for sampling terrain in areas accessible on foot. The highest DTM vertical accuracy was achieved with GPS data, on both sloped terrain (RMSE 0.16 m) and flat terrain (RMSE 0.02 m). TLS surveying was the most efficient overall, but the veracity of the terrain representation suffered under dense vegetation cover: DTM accuracy was lowest for the sloped area with dense bracken (RMSE 0.52 m), although it was the second highest on the flat, unobscured terrain (RMSE 0.07 m). ALS data represented the sloped terrain more realistically (RMSE 0.23 m) than the TLS, but owing to a systematic bias identified on the flat terrain its DTM accuracy there was the lowest (RMSE 0.29 m), exceeding the error level stated by the data provider. The error distributions were more closely approximated by a normal distribution defined using the median and the normalized median absolute deviation, which supports the use of these robust measures in DEM error modelling and its propagation.
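The accuracy measures quoted above are straightforward to compute from elevation residuals; a short sketch follows, with hypothetical input values:

```python
# Vertical-accuracy measures for a DTM, computed from elevation residuals
# (DTM elevation minus TPS check-point elevation). Input data hypothetical.
import numpy as np

residuals = np.array([0.03, -0.05, 0.12, -0.02, 0.07, -0.15])  # metres

rmse = np.sqrt(np.mean(residuals ** 2))
median = np.median(residuals)
# NMAD: robust counterpart of the standard deviation, insensitive to the
# outliers that vegetation-induced errors produce (the factor 1.4826 makes
# it consistent with sigma under a normal distribution).
nmad = 1.4826 * np.median(np.abs(residuals - median))

print(f"RMSE = {rmse:.3f} m, median = {median:.3f} m, NMAD = {nmad:.3f} m")
```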

Relevance: 30.00%

Abstract:

Objectives: To determine the factors associated with a home death among older adults who received palliative care nursing services in the home. Methods: The participants in this retrospective cohort study were 151 family caregivers of patients who had died approximately 9 months prior to the study telephone interview. The interview focused on the last year of life and covered two main areas: patient characteristics and informal caregiver characteristics. Results: Odds ratios (OR) and 95% confidence intervals (95% CI) were used to determine which of the 15 potential informal caregiver and seven patient predictor variables were associated with dying at home. Multivariate analysis revealed that the odds of dying at home were greater when the patient lived with a caregiver (OR = 7.85; 95% CI 2.35-26.27), when the patient had stated a preference to die at home (OR = 6.51; 95% CI 2.66-15.95), and when the family physician made home visits (OR = 4.79; 95% CI 1.97-11.64). However, the odds were lower for patients whose caregivers were in fair to poor health (OR = 0.22; 95% CI 0.07-0.65) and for patients who used hospital palliative care beds (OR = 0.31; 95% CI 0.12-0.80). Discussion: The findings suggest that individuals who stated a preference to die at home and resided with a healthy informal caregiver had better odds of dying at home. Home visits by a family physician were also associated with dying at home.
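For readers less used to the reporting convention, each odds ratio is the exponentiated logistic regression coefficient:

```latex
\[
\mathrm{OR} = e^{\beta}, \qquad
95\%\ \mathrm{CI} = \left( e^{\beta - 1.96\,\mathrm{SE}},\; e^{\beta + 1.96\,\mathrm{SE}} \right),
\]
```

so, for example, OR = 7.85 for living with a caregiver corresponds to a coefficient of about $\ln 7.85 \approx 2.06$ on the log-odds scale.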

Relevance: 30.00%

Abstract:

Policymakers have largely replaced Single-Bounded Discrete Choice (SBDC) valuation with the more statistically efficient repetitive methods, Double-Bounded Discrete Choice (DBDC) and Discrete Choice Experiments (DCE). Repetitive valuation permits classification into rational preferences, namely (i) a priori well-formed values and (ii) consistent, non-arbitrary values "discovered" through repetition and experience (Plott, 1996; List, 2003), and irrational preferences, namely (iii) consistent but arbitrary values "shaped" by the preceding bid level (Tufano, 2010; Ariely et al., 2003) and (iv) inconsistent and arbitrary values. Policy valuations should demonstrate behaviourally rational preferences. We outline novel methods for testing this in DBDC, applied to renewable energy premiums in Chile.
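For context, the efficiency gain of DBDC over SBDC comes from the interval information added by the follow-up bid. Under a willingness-to-pay distribution $F$, with first bid $b$ and follow-ups $b^{u} > b$ (after a "yes") and $b^{d} < b$ (after a "no"), the textbook response probabilities are:

```latex
\[
P^{\mathrm{yy}} = 1 - F(b^{u}), \qquad
P^{\mathrm{yn}} = F(b^{u}) - F(b), \qquad
P^{\mathrm{ny}} = F(b) - F(b^{d}), \qquad
P^{\mathrm{nn}} = F(b^{d}),
\]
```

and the log-likelihood sums the logs of these probabilities over respondents. This is the standard Hanemann-style formulation, not necessarily the exact specification used in the Chilean application.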

Relevance: 30.00%

Abstract:

The study of variable stars is an important topic in modern astrophysics. Since the advent of powerful telescopes and high-resolution CCDs, variable star data have been accumulating on the order of petabytes. This huge amount of data requires many automated methods as well as human experts. This thesis is devoted to data analysis of variable stars' astronomical time series and hence belongs to the interdisciplinary field of astrostatistics.

For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various mechanisms. In some cases the variation is due to internal thermonuclear processes; such stars are known as intrinsic variables. In other cases it is due to external processes, such as eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospherically active stars. Pulsating variables can in turn be classified into Cepheids, RR Lyrae, RV Tauri, Delta Scuti, Mira and other types. The eruptive or cataclysmic variables, such as novae and supernovae, occur rarely and are not periodic phenomena; most other variations are periodic in nature.

Variable stars can be observed in many ways, including photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series is folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is characteristic of each type of variable star, and one way to identify and classify variable stars is for an expert to inspect the phased light curve visually. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers.

Research on variable stars can be divided into stages: observation, data reduction, data analysis, modelling and classification. Modelling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters such as period, amplitude and phase, together with other derived parameters. Of these, the period is the most important, since a wrong period leads to sparse light curves and misleading information.

Time series analysis applies mathematical and statistical tests to data in order to quantify variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behaviour. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of large gaps. For ground-based observations this is due to the day-night cycle and weather conditions, while observations from space may suffer from the impact of cosmic ray particles.

Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS, provide variable star time series data even though their primary intention is not variable star observation. The Center for Astrostatistics at Pennsylvania State University was established to help the astronomical community with statistical tools for harvesting and analysing archival data, and most of these surveys release their data to the public for further analysis.

Many period search algorithms exist for astronomical time series analysis. They can be classified into parametric methods, which assume some underlying distribution for the data, and non-parametric methods, which assume no statistical model such as a Gaussian. Many of the parametric methods are based on variations of the discrete Fourier transform, such as the Generalised Lomb-Scargle periodogram (GLSP) of Zechmeister (2009) and the Significant Spectrum (SigSpec) method of Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them fully recovers the true periods. Wrong period detections can arise for several reasons, such as power leakage to other frequencies, which is due to the finite total interval, the finite sampling interval and the finite amount of data. Another problem is aliasing, caused by regular sampling. Spurious periods also appear because of long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data remains a difficult problem for huge databases subjected to automation. As Matthew Templeton (AAVSO) states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state that "the processing of huge amounts of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification". It would benefit the variable star community if basic parameters such as period, amplitude and phase could be obtained more accurately when huge time series databases are subjected to automation. In the present thesis, the theories behind four popular period search methods are studied, their strengths and weaknesses are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases such as the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
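The phase-folding operation at the core of methods such as PDM is compact enough to sketch. The light curve below is synthetic, and the binning choices are illustrative assumptions:

```python
# Phase-fold a light curve on a trial period and score it with a simple
# phase-dispersion statistic (Stellingwerf-style): the ratio of pooled
# within-bin variance to overall variance is small near the true period.
import numpy as np

rng = np.random.default_rng(1)
true_period = 0.75
t = np.sort(rng.uniform(0, 30, 400))               # unevenly spaced times
mag = (12.0 + 0.3 * np.sin(2 * np.pi * t / true_period)
       + rng.normal(0, 0.02, t.size))

def pdm_theta(t, mag, period, n_bins=10):
    phase = (t / period) % 1.0                     # fold on the trial period
    bins = np.floor(phase * n_bins).astype(int)
    overall_var = np.var(mag, ddof=1)
    num, den = 0.0, 0
    for b in range(n_bins):
        m = mag[bins == b]
        if m.size > 1:
            num += (m.size - 1) * np.var(m, ddof=1)
            den += m.size - 1
    return (num / den) / overall_var               # theta << 1 near true period

trial_periods = np.linspace(0.5, 1.0, 2001)
thetas = np.array([pdm_theta(t, mag, p) for p in trial_periods])
print("best trial period ~", trial_periods[np.argmin(thetas)])
```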

Relevance: 30.00%

Abstract:

Organ and/or tissue transplantation is considered a viable therapeutic option for the treatment of chronic or end-stage diseases, as well as of non-life-threatening conditions that reduce the patient's perceived quality of life. This multidimensional procedure involves three main actors: the donor, the organ/tissue, and the recipient. Although a significant share of research and intervention programmes has focused on the biological dimension of transplantation and on the promotion of donation, interest in the psychosocial experience and quality of life of recipients throughout this process has grown over the last decade. Accordingly, the general objective of this monograph is to explore the experience and the meanings constructed by transplant patients, through a systematic review of the literature on the topic. To this end, specific objectives were derived from the general one, key terms were selected for each of them, and a search was conducted in 5 databases of indexed journals: Ebsco Host (Academic Search; and Psychology and Behavioral Sciences Collection); Proquest; Pubmed; and Science Direct. The results indicate that, although the experience of recipients has begun to be investigated, further exploration is still needed; such exploration would lack purpose if it did not draw on the narratives or testimonies of the recipients themselves.

Relevance: 30.00%

Abstract:

G3B3 and G2MP2 calculations using Gaussian 03 have been carried out to investigate the protonation preferences of phenylboronic acid. Each of the nine heavy atoms was protonated in turn. With both methodologies, the two lowest protonation energies are obtained with the proton located either at the ipso carbon atom or at a hydroxyl oxygen atom. Within the G3B3 formalism, the lowest-energy configuration, by 4.3 kcal·mol⁻¹, places the proton at the ipso carbon rather than at the electronegative oxygen atom; in the resulting structure the phenyl ring has lost a significant amount of aromaticity. By contrast, G2MP2 calculations show that protonation at the hydroxyl oxygen atom is favored by 7.7 kcal·mol⁻¹. Calculations using the polarizable continuum model (PCM) solvent method also give preference to protonation at the oxygen atom when water is used as the solvent. The preference for protonation at the ipso carbon found by the more accurate G3B3 method is unexpected, and its implications for Suzuki coupling are discussed.
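The quantity at stake in these comparisons is the relative protonation energy of the two competing sites, written here in an obvious notation:

```latex
\[
\Delta\Delta E \;=\; E\!\left(\mathrm{[PhB(OH)_2 H]^{+}_{ipso\text{-}C}}\right) - E\!\left(\mathrm{[PhB(OH)_2 H]^{+}_{O}}\right),
\]
```

which the abstract reports as $-4.3$ kcal·mol⁻¹ at the G3B3 level (ipso-carbon protonation favored) but $+7.7$ kcal·mol⁻¹ at the G2MP2 level (oxygen protonation favored).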