73 results for evidence-in-chief
Abstract:
This study examines the contradictory predictions regarding the association between the premium paid in acquisitions and deal size. We document a robust negative relation between offer premia and target size, indicating that acquirers tend to pay less for large firms, not more. We also find that the overpayment potential is lower in acquisitions of large targets. Yet such deals still destroy more value for acquirers around deal announcements, implying that target size may proxy, among other things, for the unobserved complexity inherent in large deals. We provide evidence in favor of this interpretation.
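A minimal sketch of a cross-sectional specification consistent with the documented negative relation (the variable names and controls are illustrative assumptions, not the authors' exact model):
\[
\mathrm{Premium}_i = \alpha + \beta\,\ln(\mathrm{TargetSize}_i) + \gamma^{\prime}X_i + \varepsilon_i, \qquad \beta < 0,
\]
where X_i collects deal- and firm-level controls; the robust negative relation corresponds to an estimated \beta reliably below zero.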
Abstract:
We studied the effect of tactile double simultaneous stimulation (DSS) within and between hands to examine spatial coding of touch at the fingers. Participants performed a go/no-go task to detect a tactile stimulus delivered to one target finger (e.g., right index), stimulated alone or with a concurrent non-target finger, either on the same hand (e.g., right middle finger) or on the other hand (e.g., left index finger = homologous; left middle finger = non-homologous). Across blocks we also changed the posture of the unseen hands (both hands palm-down, or one hand rotated palm-up). When both hands were palm-down, DSS interference effects emerged both within and between hands, but only when the non-homologous finger served as non-target. This suggests a clear segregation between the fingers of each hand, regardless of finger side. By contrast, when one hand was palm-up, interference effects emerged only within the hand, whereas between-hands DSS interference was considerably reduced or absent. Thus, between-hands interference was clearly affected by changes in hand posture. Taken together, these findings provide behavioral evidence in humans for multiple spatial coding of touch during tactile DSS at the fingers. In particular, they confirm the existence of representational stages of touch that distinguish between body regions more than body sides. Moreover, they show that the side of tactile stimulation becomes prominent when postural updating is required.
Abstract:
J.L. Austin is regarded as having an especially acute ear for fine distinctions of meaning overlooked by other philosophers. Austin employs an informal experimental approach to gathering evidence in support of these fine distinctions in meaning, an approach that has become a standard technique for investigating meaning in both philosophy and linguistics. In this paper, we subject Austin's methods to formal experimental investigation. His methods produce mixed results: we find support for his most famous distinction, drawn on the basis of his 'donkey stories', that 'mistake' and 'accident' apply to different cases, but not for some of his other attempts to distinguish the meaning of philosophically significant terms (such as 'intentionally' and 'deliberately'). We critically examine the methodology of informal experiments employed in ordinary language philosophy and much of contemporary philosophy of language and linguistics, and discuss the role that experimenter bias can play in influencing judgments about informal and formal linguistic experiments.
Abstract:
In this paper we discuss the current state of the art in estimating, evaluating, and selecting among non-linear forecasting models for economic and financial time series. We review theoretical and empirical issues, including predictive density, interval and point evaluation, model selection, loss functions, data mining, and aggregation. In addition, we argue that although the evidence in favor of constructing forecasts using non-linear models is rather sparse, there is reason to be optimistic. However, much remains to be done. Finally, we outline a variety of topics for future research, and discuss a number of areas which have received considerable attention in the recent literature, but where many questions remain.
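As one illustration of the loss-function and point-forecast evaluation ideas reviewed here (a sketch in standard notation, not taken from the paper): given out-of-sample forecast errors e_{1,t} from a linear benchmark and e_{2,t} from a non-linear model over P evaluation periods, predictive accuracy under squared-error loss can be compared with a Diebold–Mariano-type statistic,
\[
d_t = e_{1,t}^{2} - e_{2,t}^{2}, \qquad \bar d = \frac{1}{P}\sum_{t=1}^{P} d_t, \qquad DM = \frac{\bar d}{\sqrt{\widehat{\operatorname{Var}}(\bar d)}},
\]
where a significantly positive DM indicates that the non-linear model forecasts more accurately under this loss.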
Abstract:
Early modern play-texts present numerous puzzles for scholars interested in ascertaining how plays were (or may have been) staged. The principal evidence, of course, for a notional "reconstruction" of practices is the apparatus of stage directions, augmented by indications in the dialogue. In conjunction, a joining-of-the-dots is often possible, at least in broad-brush terms. But as is well known, the problem is that stage directions tend to be incomplete, imprecise, inaccurate or missing altogether; more significantly, even when present they offer only slight and indirect evidence of actual stagecraft. Some stage directions are rather more "literary" than "theatrical" in provenance, and in any case to the extent that they do serve the reader (early modern or modern) they cannot be regarded as providing a record of stage practice. After all, words can be no more than imperfect substitutes for (and of another order from) the things they represent. For the most part directions serve as a guide that provides the basis for reasonable interpretation informed by our knowledge of theatre architecture, technology, and comparable play-situations, rather than concrete evidence of actual practice. Quite how some stage business was carried out remains uncertain, leaving the scholar little option but to hypothesize solutions. One such conundrum arises in Christopher Marlowe's The Jew of Malta. The scenario in question is hardly an obscure one, but it has not been examined in detail, even by modern editors. The purpose of this essay is to explore what sense might be made of the surviving textual evidence, in combination with our knowledge of theatre architecture and playmaking culture in the late sixteenth century.
Abstract:
Theca cells are essential for female reproduction, being the source of androgens that are precursors for follicular oestrogen synthesis and that also signal through androgen receptors (AR) in the ovary and elsewhere. Theca cells arise from mesenchymal cells around the secondary follicle stage. Their recruitment, proliferation and cytodifferentiation are influenced, directly or indirectly, by paracrine signals from granulosa cells and the oocyte, although uncertainty remains over which are the critically important signals at particular stages. In a reciprocal manner, theca cells secrete factors that influence granulosa cell proliferation and differentiation at different follicle stages. Differentiated theca interna cells acquire responsiveness to luteinizing hormone (LH) and other endocrine signals and express components of the steroidogenic machinery required for androgen biosynthesis. They also express insulin-like peptide 3 (INSL3) and its receptor (RXFP2), levels of which increase during bovine antral follicle development. INSL3 signaling may play a role in promoting androgen biosynthesis, since knockdown of either INSL3 or RXFP2 in bovine theca cells inhibits androgen biosynthesis, while exogenous INSL3 can raise androgen secretion. Bone morphogenetic proteins (BMPs) of thecal or granulosal origin suppress thecal production of both INSL3 and androgen. Inhibin, produced in greatest amounts by granulosa cells of preovulatory follicles, reverses these BMP actions. Thus, BMP-induced inhibition of thecal androgen production may be mediated by reduced INSL3-RXFP2 signaling. Activins also inhibit androgen production in an inhibin-reversible manner, and recent evidence in sheep indicates that theca cells synthesize and secrete activin, implying an autocrine role in suppressing androgen biosynthesis in smaller follicles, akin to that envisaged for BMPs.
Abstract:
In the resource-based view, organisations are represented by the sum of their physical, human and organisational assets, resources and capabilities. Operational capabilities maintain the status quo and allow an organisation to execute its existing business. Dynamic capabilities, by contrast, allow an organisation to change this status quo, including changing the operational capabilities themselves. Competitive advantage, in this context, is an effect of continuously developing and reconfiguring these firm-specific assets through dynamic capabilities. Deciding where and how to source the core operational capabilities is a key success factor. Furthermore, developing its dynamic capabilities allows an organisation to effectively manage change in its operational capabilities. Many organisations are asserted to have a high dependency on, as well as a high benefit from, the use of information technology (IT), making it a crucial and overarching resource. Furthermore, the IT function is assigned the role of a change enabler, and so IT sourcing affects the capability of managing business change. IT sourcing means that organisations need to decide how to source their IT capabilities. Outsourcing parts of the IT function also outsources some of the IT capabilities and therefore some of the business capabilities. As a result, IT sourcing has an impact on the organisation's capabilities and consequently on business success. Finally, a turbulent and fast-moving business environment challenges organisations to manage business change effectively and efficiently. Our research builds on the existing theory of dynamic and operational capabilities by considering the interdependencies between the dynamic capabilities of business change and IT sourcing. It further examines the decision-making oversight of these areas as implemented through IT governance. We introduce a new conceptual framework derived from the existing theory and extended through an illustrative case study conducted in a German bank. Under a philosophical paradigm of constructivism, we collected data from eight semi-structured interviews and used additional sources of evidence in the form of annual accounts, strategy papers and IT benchmark reports. We applied an Interpretative Phenomenological Analysis (IPA), from which the superordinate themes for our tentative capabilities framework emerged. An understanding of these interdependencies enables scholars and professionals to improve business success by effectively managing business change and evaluating the impact of IT sourcing decisions on the organisation's operational and dynamic capabilities.
Abstract:
Combined observations by meridian-scanning photometers, all-sky auroral TV camera and the EISCAT radar permitted a detailed analysis of the temporal and spatial development of the midday auroral breakup phenomenon and the related ionospheric ion flow pattern within the 71°–75° invariant latitude radar field of view. The radar data revealed dominating northward and westward ion drifts, of magnitudes close to the corresponding velocities of the discrete, transient auroral forms, during the two different events reported here, characterized by IMF |BY/BZ| < 1 and > 2, respectively (IMF BZ between −8 and −3 nT and BY > 0). The spatial scales of the discrete optical events were ∼50 km in latitude by ∼500 km in longitude, and their lifetimes were less than 10 min. Electric potential enhancements with peak values in the 30–50 kV range are inferred along the discrete arc in the IMF |BY/BZ| < 1 case from the optical data and across the latitudinal extent of the radar field of view in the |BY/BZ| > 2 case. Joule heat dissipation rates in the maximum phase of the discrete structures of ∼100 ergs cm⁻² s⁻¹ (0.1 W m⁻²) are estimated from the photometer intensities and the ion drift data. These observations combined with the additional characteristics of the events, documented here and in several recent studies (i.e., their quasi-periodic nature, their motion pattern relative to the persistent cusp or cleft auroral arc, the strong relationship with the interplanetary magnetic field and the associated ion drift/E field events and ground magnetic signatures), are considered to be strong evidence in favour of a transient, intermittent reconnection process at the dayside magnetopause and associated energy and momentum transfer to the ionosphere in the polar cusp and cleft regions. The filamentary spatial structure and the spectral characteristics of the optical signature indicate associated localized ∼1 kV potential drops between the magnetopause and the ionosphere during the most intense auroral events. The duration of the events compares well with the predicted characteristic times of momentum transfer to the ionosphere associated with the flux transfer event-related current tubes. It is suggested that, after this 2–10 min interval, the sheath particles can no longer reach the ionosphere down the open flux tube, due to the subsequent super-Alfvénic flow along the magnetopause, conductivities are lower and much less momentum is extracted from the solar wind by the ionosphere. The recurrence time (3–15 min) and the local time distribution (∼0900–1500 MLT) of the dayside auroral breakup events, combined with the above information, indicate the important roles of transient magnetopause reconnection and the polar cusp and cleft regions in the transfer of momentum and energy between the solar wind and the magnetosphere.
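A rough consistency check on the quoted magnitudes, using assumed typical values rather than figures from the abstract (drift speed ∼1–2 km s⁻¹, ionospheric magnetic field B ≈ 5 × 10⁻⁵ T and Pedersen conductance Σ_P ≈ 10 S are all illustrative assumptions):
\[
E = vB \approx (1\text{–}2\times10^{3}\,\mathrm{m\,s^{-1}})(5\times10^{-5}\,\mathrm{T}) \approx 50\text{–}100\ \mathrm{mV\,m^{-1}},\qquad
\Phi \approx E\,L \approx (0.05\text{–}0.1\,\mathrm{V\,m^{-1}})(5\times10^{5}\,\mathrm{m}) \approx 25\text{–}50\ \mathrm{kV},
\]
\[
Q_J = \Sigma_P E^{2} \approx 10\,\mathrm{S}\times(0.05\text{–}0.1\,\mathrm{V\,m^{-1}})^{2} \approx 0.03\text{–}0.1\ \mathrm{W\,m^{-2}},
\]
in line with the 30–50 kV potential enhancements and the ∼0.1 W m⁻² Joule dissipation reported above.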
Abstract:
In a previous article, I wrote a brief piece on how to enhance papers that have been published at one of the IEEE Consumer Electronics (CE) Society conferences to create papers that can be considered for publishing in IEEE Transactions on Consumer Electronics (T-CE) [1]. Basically, it included some hints and tips to enhance a conference paper into what is required for a full archival journal paper and not fall foul of self-plagiarism. This article focuses on writing original papers specifically for T-CE. After three years as the journal’s editor-in-chief (EiC), a previous eight years on the editorial board, and having reviewed some 4,000 T-CE papers, I decided to write this article to archive and detail for prospective authors what I have learned over this time. Of course, there are numerous articles on writing good papers—some are really useful [2], but they do not address the specific issues of writing for a journal whose topic (scope) is not widely understood or, indeed, is often misunderstood.
Abstract:
This article provides a critical and bibliographical discussion of J. M. Barrie’s neglected first book, Better Dead, published by Swan Sonnenschein, Lowrey & Co. in 1887. Drawing on previously unexamined evidence in the Sonnenschein archive, it shows how this shilling novel was marketed and sold to its readers at railway bookstalls, and argues that the content and style of the story was conditioned by its form. Examining the many references and allusions in the story, it proposes that the work is best understood as a satire on contemporary political, social and literary themes. The article also shows how, contrary to all published accounts, the author actually earned a small amount of money from a work which, in spite of his efforts, refused to stay dead.
Abstract:
A key highlight of this study is generating evidence of children ‘making aware the unaware’, making tacit knowledge explicit. The research explores the levels of awareness in thinking used by eight 7–8-year-old children when engaged in school-based genre writing tasks. The focus is on analysing children’s awareness of their thought processes, using a framework originally devised by Swartz and Perkins (1989), in order to investigate ways in which children can transform their tacit knowledge into explicit knowledge within the writing process. Classroom ‘think aloud’ protocols are used to help children ‘manage their knowledge transfer’, to speak the unspoken. In their framework, Swartz and Perkins distinguish between four levels of thought that they view as hierarchical and ‘increasingly metacognitive’. However, there is little evidence in this study to show that levels of awareness in thinking are increasingly progressive, and observations made during the study suggest that young writers move in and out of the suggested levels of thinking during different elements of a writing task. The reasons for this may depend on a number of factors which are noted in this paper. Evidence does suggest that children in this age group are consciously aware of their own and others’ thought processes both with and without adult prompting. Through collaborative talk, their awareness of these thought processes is highlighted, enabling the co-construction and integration of new ideas into their existing knowledge base.
Abstract:
The emergence and development of digital imaging technologies and their impact on mainstream filmmaking is perhaps the most familiar special effects narrative associated with the years 1981-1999. This is in part because some of the questions raised by the rise of the digital still concern us now, but also because key milestone films showcasing advancements in digital imaging technologies appear in this period, including Tron (1982) and its computer generated image elements, the digital morphing in The Abyss (1989) and Terminator 2: Judgment Day (1991), computer animation in Jurassic Park (1993) and Toy Story (1995), digital extras in Titanic (1997), and ‘bullet time’ in The Matrix (1999). As a result, it is tempting to characterize 1981-1999 as a ‘transitional period’ in which digital imaging processes grow in prominence and technical sophistication, and what we might call ‘analogue’ special effects processes correspondingly become less common. But such a narrative risks eliding the other practices that also shape effects sequences in this period. Indeed, the 1980s and 1990s are striking for the diverse range of effects practices in evidence in both big budget films and lower budget productions, and for the extent to which analogue practices persist independently of or alongside digital effects work in a range of production and genre contexts. The chapter seeks to document and celebrate this diversity and plurality, this sustaining of earlier traditions of effects practice alongside newer processes, this experimentation with materials and technologies old and new in the service of aesthetic aspirations alongside budgetary and technical constraints. The common characterization of the period as a series of rapid transformations in production workflows, practices and technologies will be interrogated in relation to the persistence of certain key figures such as Douglas Trumbull, John Dykstra, and James Cameron, but also through a consideration of the contexts for and influences on creative decision-making. Comparative analyses of the processes used to articulate bodies, space and scale in effects sequences drawn from different generic sites of special effects work, including science fiction, fantasy, and horror, will provide a further frame for the chapter’s mapping of the commonalities and specificities, continuities and variations in effects practices across the period. In the process, the chapter seeks to reclaim analogue processes’ contribution both to moments of explicit spectacle, and to diegetic verisimilitude, in the decades most often associated with the digital’s ‘arrival’.
Abstract:
Cardiovascular diseases (CVD) are the leading cause of mortality and morbidity worldwide. One of the key dietary recommendations for CVD prevention is reduction of saturated fat intake. Yet despite milk and dairy foods contributing on average 27% of saturated fat intake in the UK diet, evidence from prospective cohort studies does not support a detrimental effect of milk and dairy foods on risk of CVD. This paper provides a brief overview of the role of milk and dairy products in the diets of UK adults, and summarises the evidence in relation to the effects of milk and dairy consumption on CVD risk factors and mortality. The majority of prospective studies and meta-analyses examining the relationship between milk and dairy product consumption and risk of CVD show that milk and dairy products, excluding butter, are not associated with detrimental effects on CVD mortality or risk biomarkers, including serum LDL cholesterol. In addition, there is increasing evidence that milk and dairy products are associated with lower blood pressure and arterial stiffness. These apparent benefits of milk and dairy foods have been attributed to their unique nutritional composition, and suggest that the elimination of milk and dairy may not be the optimum strategy for CVD risk reduction.