Abstract:
After some hesitation, most economics students start listing the functions of money when asked how they would define it. The more practically minded, or those more committed to accounting, may recall the bank balance sheet and, partly correctly, place money there as a liability. Yet it seems we are still somewhat embarrassed at not finding the right definition, and this has been the case for centuries. In the present study I compare some monetary-theory conclusions of two 19th-century economists, Karl Marx and Karl Menger, taking into account the fundamental differences between the economic theories they represented. Starting from the seemingly entirely different premises of the now generally accepted subjective theory of value and the somewhat forgotten labour theory of value, the two thinkers arrived at rather similar results. For them, money is neither a simple instrument nor a claim or liability, as we would account for it today, but a commodity. They do not derive its origin from state laws; rather, they regard money as a phenomenon created through social consensus that stands above legislation and does not originally embody a symbol, but becomes capable of expressing a token of value as a special good. / === / If asked how to define money, most students of economics would start listing the functions of money, while those with more practical insight would place money as a liability on the balance sheet of banks. It seems, however, as if we were still embarrassed at not finding the right definition. In the present study I endeavour to give a brief overview of various theoretical findings on the essence of money in the economy preceding the 19th century, and then compare some monetary-theory conclusions of two economists – Karl Marx and Karl Menger – considering the major differences between the economic theories they represented. On the basis of the premises of the widely accepted subjective theory of value and the somewhat forgotten labour theory of value, the two 19th-century thinkers came to rather similar results. For them money is not a simple means of payment, nor a liability or claim, the way we would account for it now, but a special commodity. They do not attach its creation to the appearance of state laws on money as legal tender, but regard it as a social phenomenon which became capable of expressing a token of value due to its peculiar characteristics.
Abstract:
The three concepts mentioned in the title occupy a central place in economic theory. Their relationship probes, above all, the limits of what economics can know. What do we know about economic decisions? On what information are decisions based? Can economic decisions be placed on a "scientific" footing? Everything has been said about the question of uncertainty since it appeared in the 1920s. The question has been examined philosophically and mathematically, and its countless theoretical and practical aspects have been discussed. Why, then, address the topic yet again? The answer is simple: because the question is genuinely fundamental in every respect and relevant at all times. It is said that in Roman triumphal processions a slave always rode on the victor's chariot, constantly reminding the leader, intoxicated by triumph, not to forget that he too was only human. Economic decision-makers must similarly be reminded, again and again, that economic decisions are made under uncertainty. There is a very tight limit to how far economic processes can be understood and controlled, and that limit is set by the inherent uncertainty of those processes. It must be whispered continually into the ears of economic decision-makers that they too are only human, and that their knowledge is therefore very limited. In "bold" decisions the outcome is uncertain, but error can be taken for granted. / === / In the article the author presents some remarks on the application of probability theory in financial decision making. From a mathematical point of view, the risk-neutral measures used in finance are a version of the separating hyperplanes used in optimization theory and in general equilibrium theory. They are therefore probabilities only in a formal sense. Interpreting them as probabilities rests on misleading analogies that lead to wrong decisions.
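As a minimal illustration of the point above that risk-neutral measures are probabilities only in a formal sense, the following sketch derives the risk-neutral weight of a one-period binomial market purely from no-arbitrage algebra (an illustrative example, not taken from the article; the market parameters and function names are assumptions):

# Illustrative sketch (not from the article): the risk-neutral "probability"
# in a one-period binomial market, obtained purely from no-arbitrage algebra.
# With up/down factors u, d and gross risk-free return 1 + r, the weight
# q = (1 + r - d) / (u - d) prices any payoff, yet says nothing about the
# real-world likelihood of the up move.

def risk_neutral_weight(u: float, d: float, r: float) -> float:
    """Return q such that discounting the (q, 1-q)-expected payoff
    reproduces the arbitrage-free price; q is a 'probability' only formally."""
    gross = 1.0 + r
    assert d < gross < u, "no-arbitrage condition"
    return (gross - d) / (u - d)

def price(payoff_up: float, payoff_down: float, u: float, d: float, r: float) -> float:
    q = risk_neutral_weight(u, d, r)
    return (q * payoff_up + (1.0 - q) * payoff_down) / (1.0 + r)

if __name__ == "__main__":
    # Call option with strike 100 on a stock at 100, u = 1.2, d = 0.9, r = 5%
    print(price(payoff_up=20.0, payoff_down=0.0, u=1.2, d=0.9, r=0.05))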
Abstract:
The problem investigated was the negative effect on a university student's ability to successfully complete a course in religious studies resulting from conflict between the methodologies and objectives of religious studies and the student's system of beliefs. Using Festinger's theory of cognitive dissonance as a theoretical framework, it was hypothesized that completing a course with a high level of success would be negatively affected by (1) failure to accept the methodologies and objectives of religious studies (methodology), (2) holding beliefs about religion that had potential conflicts with the methodologies and objectives (beliefs), (3) extrinsic religiousness, and (4) dogmatism. The causal-comparative method was used. The independent variables were measured with four scales employing Likert-type items. An 8-item scale to measure acceptance of the methodologies and objectives of religious studies and a 16-item scale to measure holding of beliefs about religion having potential conflict with the methodologies were developed for this study. These scales, together with a 20-item form of Rokeach's Dogmatism Scale and Feagin's 12-item Religious Orientation Scale to measure extrinsic religiousness, were administered to 144 undergraduate students enrolled in randomly selected religious studies courses at Florida International University. Level of success was determined by course grade, with the 27% of students receiving the highest grades classified as highly successful and the 27% receiving the lowest grades classified as not highly successful. A stepwise discriminant analysis produced a single significant function with methodology and dogmatism as the discriminants. Methodology was the principal discriminating variable. Beliefs and extrinsic religiousness failed to discriminate significantly. It was concluded that failing to accept the methodologies and objectives of religious studies and being highly dogmatic have significant negative effects on a student's success in a religious studies course. Recommendations were made for teaching in ways that diminish these negative effects.
Abstract:
This study examined the acceptability and utility of the content of an extensive automobile tort voir dire questionnaire in Florida Circuit Civil Court. The ultimate purpose was to find questionnaire items from established measures that have demonstrated utility in uncovering biases and that were, at the same time, not objectionable to the courts. The survey instrument included a venireperson questionnaire that used case-specific attitudinal and personality measures as well as the typical information asked about personal history. The venireperson questionnaire incorporated measures that have proven reliable in other personal injury studies (Hans & Lofquist, 1994). In order to examine judges' ratings, the questionnaire items were grouped into eight categories. The Claims Consciousness scale measures general attitudes towards making claims based on one's legal rights. The Belief in a Just World scale measures how sympathetic the juror would be to people who have suffered injuries. Political Efficacy is another general attitude scale that identifies attitudes towards the government. The Litigation Crisis scale elicits attitudes about civil litigation. Case-Specific Beliefs about Automobile Accidents and Litigation were taken from questionnaires developed and used in auto torts and other personal injury cases. Jurors' personal history was divided into Demographics and Trial Relevant Attitudes. Ninety-seven circuit civil judges critiqued the questionnaire for acceptability, relevance to the type of case presented, and usefulness to attorneys for determining peremptories. The majority of judges' responses confirmed that the central dimension in judicial thinking is juror qualification rather than juror partiality. Only three of the eight voir dire categories were considered relevant by more than 50 percent of the judges: Trial Relevant Experiences, Juror Demographics, and Tort Reform. Additionally, several acceptable items from generally disapproved categories were identified among the responses. These were general and case-specific attitudinal items related to tort reform. We discuss the utility of voir dire items for discerning juror partiality.
Abstract:
Internet Protocol Television (IPTV) is a system in which a digital television service is delivered using Internet Protocol over a network infrastructure. There is considerable confusion and concern about IPTV, since two different technologies have to be melded together to provide end customers with something better than conventional television. In this research, the functional architecture of the IPTV system was investigated. A Very Large Scale Integration (VLSI) based streaming server controller was designed, and different ways of hosting a web server that can be used to send control signals to the streaming server controller were studied. The web server accepts inputs from the keyboard and the FPGA board switches; depending on the preset configuration, the server opens a selected web page and also sends control signals to the streaming server controller. It was observed that the applications run faster on the PowerPC since it is embedded into the FPGA. The commercial market and global deployment of IPTV were also discussed.
Abstract:
The Deccan Trap basalts are the remnants of a massive series of lava flows that erupted at the K/T boundary and covered 1–2 million km² of west-central India. This eruptive event is of global interest because of its possible link to the major mass extinction event, and there is much debate about the duration of this massive volcanic event. In contrast to isotopic or paleomagnetic dating methods, I explore an alternative approach to determine the lifecycle of the magma chambers that supplied the lavas, and extend the concept to obtain a tighter constraint on Deccan’s duration. My method relies on extracting time information from elemental and isotopic diffusion across zone boundaries in individual crystals. I determined elemental and Sr-isotopic variations across abnormally large (2–5 cm) plagioclase crystals from the Thalghat and Kashele “Giant Plagioclase Basalts” from the lowermost Jawhar and Igatpuri Formations, respectively, in the thickest Western Ghats section near Mumbai. I also obtained bulk rock major, trace and rare earth element chemistry of each lava flow from the two formations. Thalghat flows contain only 12% zoned crystals, with ⁸⁷Sr/⁸⁶Sr ratios of 0.7096 in the core and 0.7106 in the rim, separated by a sharp boundary. In contrast, all Kashele crystals have a wider range of ⁸⁷Sr/⁸⁶Sr values, with multiple zones. Geochemical modeling of the data suggests that the two types of crystals grew in distinct magmatic environments. Modeling intracrystalline diffusive equilibration between the core and rim of Thalghat crystals yielded a crystal growth rate of 2.03×10⁻¹⁰ cm/s and a residence time of 780 years for the crystals in the magma chamber(s). Employing some assumptions based on field and geochronologic evidence, I extrapolated this residence time to the entire Western Ghats and obtained an estimate of 25,000–35,000 years for the duration of Western Ghats volcanism. This gives an eruptive rate of 30–40 km³/yr, which is much higher than that of any presently erupting volcano. This result will remain speculative until a similarly detailed analytical-modeling study is performed for the rest of the Western Ghats formations.
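As a rough plausibility check of the quoted eruptive rate, the sketch below divides an assumed total erupted volume by the estimated duration (the 10^6 km³ figure is an assumption used only for illustration, not a value reported in the abstract):

# Back-of-the-envelope check of the quoted eruptive rate (a sketch; the
# total erupted volume below is an assumed round figure, not a value
# reported in the abstract).
ASSUMED_TOTAL_VOLUME_KM3 = 1.0e6      # assumed Western Ghats volume, km^3
DURATION_YEARS = (25_000, 35_000)     # duration range quoted in the abstract

for years in DURATION_YEARS:
    rate = ASSUMED_TOTAL_VOLUME_KM3 / years
    print(f"{years:>6} yr  ->  {rate:5.1f} km^3/yr")
# ~29-40 km^3/yr, consistent with the 30-40 km^3/yr range stated above.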
Abstract:
Professional standards of ethics proclaim the core values of a profession, describe expected professional duties and responsibilities, and provide a framework for ethical practice and ethical decision-making. The purpose of this mixed, quantitative and qualitative, survey study was to examine HRD professionals' perceptions about the AHRD Standards on Ethics and Integrity, how HRD professionals used the Standards for research and decision-making, and the extent to which the Standards provided guidance for ethical decision-making. Through an on-line survey instrument, 182 members of AHRD were surveyed. The open-ended questions were analyzed using thematic analysis to expand on, inform, and support the quantitative findings. The close-ended questions were analyzed with frequency distributions, descriptive statistics, cross tabulations, and Spearman rank correlations. The results showed a significant relationship between (a) years of AHRD membership and level of familiarity with the Standards, (b) years of AHRD membership and use of the Standards for research, and (c) level of familiarity with the Standards and use of the Standards for research. There were no significant differences among scholars, scholar practitioners, practitioners, and students regarding their perceptions about the Standards. The results showed that the Standards were not well known or widely used. Nevertheless, the results indicated overall positive perceptions about the Standards. Seventy percent agreed that the Standards provided an appropriate set of ethical principles and reflected respondents' own standards of conduct. Seventy-eight percent believed that the Standards were important for defining HRD as a profession and 54% believed they were important for developing a sense of belonging to the HRD profession. Fifty-one percent believed the Standards should be enforceable and 61% agreed members should sign the membership application form showing willingness to adhere to the Standards. Seventy-seven percent based work-related ethical decisions on personal beliefs of right and wrong and 56% on established professional values and rules of right and wrong. The findings imply that if the professional standards of ethics are to influence the profession, they should be widely publicized and discussed among members, they should have some binding power, and their use should be encouraged.
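For illustration, the kind of rank-correlation analysis reported above can be run as follows (a sketch with invented placeholder data; the variable names and ordinal coding are assumptions, not the study's data):

# Illustrative sketch: Spearman rank correlation between years of AHRD
# membership and self-rated familiarity with the Standards (both ordinal).
# The data below are made up for demonstration.
from scipy.stats import spearmanr

years_of_membership = [1, 2, 2, 4, 5, 7, 10, 12, 15, 20]
familiarity_rating  = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]   # 1 = not familiar ... 5 = very familiar

rho, p_value = spearmanr(years_of_membership, familiarity_rating)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")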
Abstract:
We developed a conceptual ecological model (CEM) for invasive species to help understand the role invasive exotics play in ecosystem ecology and their impacts on restoration activities. Our model, which can be applied to any invasive species, grew from the eco-regional conceptual models developed for Everglades restoration. These models identify ecological drivers, stressors, effects and attributes; we integrated the unique aspects of exotic species invasions and their effects into this conceptual hierarchy. We used the model to help identify important aspects of invasion in the development of an invasive exotic plant ecological indicator, which is described in a companion paper in this special issue. A key aspect of the CEM is that it is a general ecological model that can be tailored to specific cases and species, as the details of any invasion are unique to that invasive species. Our model encompasses the temporal and spatial changes that characterize invasion, identifying the general conditions that allow a species to become invasive in a de novo environment; it then enumerates the possible effects exotic species may have, collectively and individually, at varying scales and for different ecosystem properties, once a species becomes invasive. The model provides suites of characteristics and processes, as well as hypothesized causal relationships, to consider when thinking about the effects or potential effects of an invasive exotic and how restoration efforts will affect these characteristics and processes. To illustrate how to use the model as a blueprint for applying a similar approach to other invasive species and ecosystems, we give two examples of using this conceptual model to evaluate the status of two south Florida invasive exotic plant species (melaleuca and Old World climbing fern) and consider the potential impacts of these invasive species on restoration.
Abstract:
Traditional brand management in the hotel industry is facing a great challenge as numerous brands provide many choices to hotel guests. In such competitive environments, hotel firms realize that capitalizing on one of the most important assets they own – the brand – is critical to achieving a premier growth goal not only rapidly but also in a cost-effective way. The purpose of this study is to examine the determinants of customer-based hotel brand equity for the mid-priced U.S. lodging segment by assessing the impacts of four widely accepted brand equity dimensions: brand awareness, brand associations, perceived quality and customer loyalty. 277 travelers participated in this study at the airport in a Midwestern city. Perceived quality, brand loyalty and brand associations were found to be the core components of brand equity, while brand awareness, a seemingly important source of brand equity, did not exert a significant influence on building brand equity for mid-priced hotels. The results of this study shed light on how to create, manage, and evaluate a distinctive and successful hotel brand.
Abstract:
In the new health paradigm, the connotation of health has extended beyond the measures of morbidity and mortality to include wellness and quality of life. Comprehensive assessments of health go beyond traditional biological indicators to include measures of physical and mental health status, social role-functioning, and general health perceptions. To meet these challenges, tools for assessment and outcome evaluation are being designed to collect information about functioning and well-being from the individual's point of view. The purpose of this study was to profile the physical and mental health status of a sample of county government employees against U.S. population norms. A second purpose of the study was to determine whether significant relationships existed between respondent characteristics, personal health practices and lifestyle, and health status, and to show how the tools and methods used in this investigation can be used to guide program development and facilitate monitoring of health promotion initiatives. The SF-12 Health Survey (Ware, Kosinski, & Keller, 1995), a validated measure of health status, was administered to a convenience sample of 450 employees attending one of nine health fairs at an urban worksite. The instrument has been utilized nationally, which enabled a comparative analysis of the findings of this study with national results. Results from this study demonstrated that several respondent characteristics and personal health practices were associated with a greater percentage of physical and/or mental scale scores that were significantly "worse" or significantly "better" than the general population. Respondent characteristics that were significantly related to the SF-12 physical and/or mental health scale scores were gender, age, education, ethnicity, and income status. Personal health practices that were significantly related to SF-12 physical and/or mental scale scores were frequency of vigorous exercise, presence of chronic illness, being at one's prescribed height and weight, eating breakfast, and smoking and drinking status. This study provides an illustration of the methods used to analyze and interpret SF-12 Health Survey data, using norm-based interpretation guidelines that are useful for program development and for collecting information on health at the community level.
Abstract:
Public Law 102-119 (Individuals with Disabilities Education Act of 1991) mandates that family members, if they wish, participate in developing a plan of treatment for their child. Traditionally, therapists have not relied on parental assessments, based upon the assumption that parents overestimate their child's abilities. The present study compared parental perceptions about the developmental status of their child's fine motor abilities with the therapist's interpretation of a standardized assessment using the Peabody Developmental Motor Scale (Fine Motor). Thirty-seven children enrolled in an early intervention program, and their parents, were recruited for the study. The results indicated that the parents' and the therapist's estimates were highly correlated and showed no significant differences when paired t-tests were computed for developmental ages and scaled scores. However, analyses of variance showed significant effects of gender and number of siblings.
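The paired comparison described above can be sketched as follows (illustrative only; the values are invented placeholders, not the study's data):

# Illustrative sketch of the paired comparison described above: parents'
# and the therapist's developmental-age estimates for the same children.
# The values here are invented placeholders, not the study's data.
from scipy.stats import ttest_rel, pearsonr

parent_estimate_months    = [18, 24, 30, 22, 27, 33, 20, 29]
therapist_estimate_months = [17, 25, 29, 23, 26, 34, 19, 30]

r, _ = pearsonr(parent_estimate_months, therapist_estimate_months)
t, p = ttest_rel(parent_estimate_months, therapist_estimate_months)
print(f"correlation r = {r:.2f}; paired t = {t:.2f}, p = {p:.3f}")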
Abstract:
This study aims to analyze citizen participation in state policy decisions as an essential element of legitimacy in the branches of government, especially in the sphere of the Executive, in the context of deliberative democracy. The study also seeks to understand the citizen's role in public life, especially in the sphere of the Executive Branch, in order to give effect to the fundamental right to a just, efficient and honest Public Administration. To this end, the proposal is to present the thought of the classic contractualists, Thomas Hobbes, John Locke and Rousseau, on the legitimacy of governments through statutes and on the question of the general will and majority rule, as well as the comments of Thomas Jefferson on popular sovereignty and dialogical citizen participation in matters of local interest. The theories of Fundamental Rights are then studied in order to demonstrate that the Civil Service requires a more specific form of protection, given the deep crisis in public administrative practice caused, above all, by corruption. On the other hand, the fundamental character of management also covers the development of cities, which decisively affects the development of man, who, to join a deliberative governance program, needs to be politicized, adopting full participation and dialogue as a citizen's responsibility. At its core, the study presents the doctrine of Jürgen Habermas, whose Discourse Theory is the element to be followed for the implementation of a broad deliberative and emancipatory democracy with effective citizen participation. The Condorcet constitutional project is also considered as a comparative link between the deliberative public will and the central power, in light of Habermas's theory of the "sluice".
The proposal, based on communicative action, must allow a continuous flow of social interests towards the exercise of administrative power. Dialogue, brought to the center of decision-making, will allow discussion in the public sphere and may contribute to the legitimacy of government actions, inasmuch as it fosters the sense of politicization demanded of the individual in a democratic state.
Abstract:
The goal of this thesis was to develop, construct, and validate the Perceived Economic Burden scale to quantitatively measure the burden associated with a subtype of Arrhythmogenic Right Ventricular Cardiomyopathy (ARVC) in families from the island of Newfoundland. An original 76-item self-administered survey was designed using content from the existing literature as well as themes from qualitative research conducted by our team, and distributed to individuals from families known to be at risk for the disease. A response rate of 37.2% (n = 64) was achieved between December 2013 and May 2014. Tests for data quality, Likert scale assumptions and scale reliability were conducted and provided preliminary evidence of the psychometric properties of the final constructed perceived economic burden of ARVC scale, comprising 62 items in five sections. Findings indicated that being an affected male was a significant predictor of increased perceived economic burden on the majority of economic burden measures. Affected males also reported an increased likelihood of going on disability and difficulty obtaining insurance. Affected females also had an increased perceived financial burden. Preliminary results suggest that a perceived economic burden exists within the ARVC population in Newfoundland.
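The abstract mentions tests of Likert scale assumptions and scale reliability; a common reliability statistic for such scales is Cronbach's alpha. The sketch below shows how it is computed (an assumption about the reliability measure, offered for illustration rather than as a description of the study's actual analysis):

# Cronbach's alpha for a block of Likert items (rows = respondents,
# columns = items). Illustrative sketch only.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # number of items
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the scale total
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Tiny made-up example: 5 respondents, 4 items on a 1-5 Likert scale.
responses = np.array([[4, 5, 4, 4],
                      [3, 3, 2, 3],
                      [5, 5, 5, 4],
                      [2, 2, 3, 2],
                      [4, 4, 4, 5]])
print(round(cronbach_alpha(responses), 2))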
Abstract:
Transects of a Remotely Operated Vehicle (ROV) providing sea-bed videos and photographs were carried out during POLARSTERN expedition ANT-XVII/3, focussing on the ecology of benthic assemblages on the Antarctic shelf in the South-Eastern Weddell Sea. The ROV system Sprint 103 was equipped with two video cameras and one still camera, lights, flash-lights, compass, and parallel lasers providing a scale in the images, a tether-management system (TMS), a winch, and the board units. All cameras used the same main lens and could be tilted. Videos were recorded in Betacam format and (film) slides were taken at the discretion of the scientific pilot. The latter were mainly made to improve the identification of organisms depicted in the videos, because the still photographs have a much higher optical resolution than the videos. In the photographs species larger than 3 mm, and in the videos species larger than 1 cm, are recognisable and countable. Under optimum conditions the transects were straight; the speed and direction of the ROV were determined by the drift of the ship in the coastal current, since both the ship and the ROV were used as a drifting system; the option to operate the vehicle actively was only used to avoid obstacles and to reach, at best, a distance of only approximately 30 cm to the sea-floor. As a consequence, the width of the photographs in the foreground is approximately 50 cm. Deviations from this strategy resulted mainly from difficult ice and weather conditions, but also from high current velocity and local up-welling close to the sea-bed. The sea-bed images provide insights into the general composition of key species, higher systematic groups and ecological guilds. Within interdisciplinary approaches, distributions of assemblages can be attributed to environmental conditions such as bathymetry, sediment characteristics, water masses and current regimes. The images also contain valuable information on how benthic species are associated with each other. Along the transects, small- to intermediate-scale disturbances, e.g. by grounding icebergs, were analysed, and the further impact on the entire benthic system through local succession of recolonisation was studied. This information can be used for models predicting the impact of climate change on benthic life in the Southern Ocean. All these approaches contribute to a better understanding of the functioning of the benthic system and related components of the entire Antarctic marine ecosystem. Beyond their scientific value, the imaging methods meet concerns about the protection of sensitive Antarctic benthic systems since they are non-invasive, and they also provide valuable material for education and outreach purposes.
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size, despite huge increases in the value of n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n=all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
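To make the tensor-factorization view concrete, the sketch below builds the probability tensor implied by a latent class (PARAFAC-type) model, P(x1,...,xp) = sum_h nu_h * prod_j lambda_j[h, x_j] (dimensions and variable names are illustrative assumptions, not the collapsed Tucker construction of Chapter 2):

# Sketch: a latent class model writes the joint pmf of p categorical
# variables as a nonnegative PARAFAC factorization of the probability tensor.
import numpy as np

rng = np.random.default_rng(0)
p, k, levels = 3, 2, 4                     # 3 variables, 2 latent classes, 4 levels each

nu = rng.dirichlet(np.ones(k))             # latent class weights
lam = [rng.dirichlet(np.ones(levels), size=k) for _ in range(p)]  # per-class marginals

# Build the full probability tensor by summing over latent classes.
tensor = np.zeros((levels,) * p)
for h in range(k):
    outer = np.ones(())
    for j in range(p):
        outer = np.multiply.outer(outer, lam[j][h])
    tensor += nu[h] * outer

print(tensor.shape, tensor.sum())          # (4, 4, 4) and ~1.0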
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
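The general idea of a Gaussian approximation to a posterior can be sketched for a simple Poisson log-linear model with a Gaussian prior (a generic Laplace-type illustration under assumed settings, not the optimal approximation under Diaconis-Ylvisaker priors derived in Chapter 4):

# Sketch: Gaussian (Laplace) approximation N(beta_hat, H^{-1}) to the posterior
# of a Poisson log-linear model y_i ~ Poisson(exp(x_i' beta)) with a N(0, tau^2 I)
# prior. Illustrates the generic idea only, not the Chapter 4 construction.
import numpy as np

rng = np.random.default_rng(1)
n, d, tau2 = 200, 3, 10.0
X = np.column_stack([np.ones(n), rng.normal(size=(n, d - 1))])
beta_true = np.array([0.5, 0.8, -0.4])
y = rng.poisson(np.exp(X @ beta_true))

beta = np.zeros(d)
for _ in range(50):                                   # Newton iterations to the posterior mode
    mu = np.exp(X @ beta)
    grad = X.T @ (y - mu) - beta / tau2
    H = X.T @ (mu[:, None] * X) + np.eye(d) / tau2    # negative Hessian of the log-posterior
    beta = beta + np.linalg.solve(H, grad)

cov = np.linalg.inv(H)                                # covariance of the Gaussian approximation
print("posterior mode:", np.round(beta, 2))
print("approx. std devs:", np.round(np.sqrt(np.diag(cov)), 3))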
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
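The waiting-time idea can be illustrated in a few lines: exceedance times of a high threshold are recorded and the gaps between them retained (a sketch on a toy heavy-tailed series, not the estimator developed in Chapter 5):

# Sketch: extract waiting times between exceedances of a high threshold
# from a time-indexed series; their distribution carries the information
# on extremal (tail) dependence exploited in Chapter 5.
import numpy as np

rng = np.random.default_rng(2)
series = rng.standard_t(df=3, size=5000)          # heavy-tailed toy series
threshold = np.quantile(series, 0.98)             # high threshold (98th percentile)

exceedance_times = np.flatnonzero(series > threshold)
waiting_times = np.diff(exceedance_times)         # gaps between successive exceedances

print("number of exceedances:", exceedance_times.size)
print("mean / median waiting time:", waiting_times.mean(), np.median(waiting_times))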
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo. The Markov Chain Monte Carlo method is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov Chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
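One of the approximations mentioned, replacing the full-data likelihood in the transition kernel with a rescaled random-subset estimate, can be sketched as follows (a toy Gaussian-mean example under assumed settings, not the framework of Chapter 6 itself):

# Sketch: random-walk Metropolis for the mean of a Gaussian, where each
# accept/reject step uses a random subset of size m with the log-likelihood
# rescaled by n/m, approximating the exact transition kernel.
import numpy as np

rng = np.random.default_rng(3)
n, m, sigma = 10_000, 500, 1.0
data = rng.normal(loc=2.0, scale=sigma, size=n)

def subset_loglik(theta: float, subset: np.ndarray) -> float:
    # Rescaled subset log-likelihood (flat prior assumed for simplicity).
    return (n / subset.size) * np.sum(-0.5 * ((subset - theta) / sigma) ** 2)

theta, step, draws = 0.0, 0.05, []
for _ in range(5000):
    subset = rng.choice(data, size=m, replace=False)
    proposal = theta + step * rng.normal()
    log_ratio = subset_loglik(proposal, subset) - subset_loglik(theta, subset)
    if np.log(rng.uniform()) < log_ratio:
        theta = proposal
    draws.append(theta)

print("approximate posterior mean:", np.mean(draws[1000:]))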
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
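For reference, the truncated-Normal data augmentation sampler mentioned above can be written in a few lines (a generic Albert-Chib probit sketch with an assumed Gaussian prior and simulated rare-event data, not the chapter's experiments):

# Sketch: truncated-Normal data augmentation Gibbs sampler for probit
# regression (Albert-Chib). With few successes and large n, chains of this
# kind are the ones shown in Chapter 7 to mix slowly.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(4)
n, d = 2000, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.binomial(1, 0.03, size=n)                 # rare events: ~3% successes

B0_inv = np.eye(d) / 100.0                        # assumed prior beta ~ N(0, 100 I)
V = np.linalg.inv(X.T @ X + B0_inv)               # conditional posterior covariance (fixed)
L = np.linalg.cholesky(V)

beta = np.zeros(d)
samples = []
for _ in range(2000):
    mu = X @ beta
    # z_i | beta, y_i is N(mu_i, 1) truncated to (0, inf) if y_i = 1, (-inf, 0) if y_i = 0;
    # truncnorm takes bounds standardized by loc and scale, hence the -mu terms.
    a = np.where(y == 1, -mu, -np.inf)
    b = np.where(y == 1, np.inf, -mu)
    z = truncnorm.rvs(a, b, loc=mu, scale=1.0, random_state=rng)
    # beta | z ~ N(V X'z, V)
    beta = V @ (X.T @ z) + L @ rng.normal(size=d)
    samples.append(beta.copy())

print("posterior mean of beta:", np.round(np.mean(samples[500:], axis=0), 2))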