988 results for Gaussian assumption
Abstract:
The Hausman (1978) test is based on the vector of differences between two estimators. It is usually assumed that one of the estimators is fully efficient, since this simplifies calculation of the test statistic. However, this assumption limits the applicability of the test, since widely used estimators such as the generalized method of moments (GMM) or quasi maximum likelihood (QML) estimators are often not fully efficient. This paper shows that the test may easily be implemented, using well-known methods, when neither estimator is efficient. To illustrate, we present both simulation results and empirical results for the utilization of health care services.
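As a side note for readers, the generalized form of the statistic described here is conventionally computed from the difference vector and an estimate of its full covariance matrix. The sketch below is a minimal illustration of that idea, not the authors' implementation; the inputs V1, V2 and C12 (the covariance between the two estimators, obtainable e.g. from a joint bootstrap) are assumed to be given.

```python
import numpy as np
from scipy import stats

def hausman_general(b1, b2, V1, V2, C12):
    """Hausman-type test of H0: plim(b1 - b2) = 0 when neither
    estimator is efficient. V1, V2 are the estimators' covariance
    matrices and C12 = Cov(b1, b2), e.g. from a joint bootstrap."""
    V1, V2, C12 = map(np.asarray, (V1, V2, C12))
    d = np.asarray(b1) - np.asarray(b2)
    # Without the efficiency shortcut, the full variance of the
    # difference is needed: Var(b1 - b2) = V1 + V2 - C12 - C12'.
    Vd = V1 + V2 - C12 - C12.T
    # Use a pseudo-inverse in case Vd is singular.
    stat = float(d @ np.linalg.pinv(Vd) @ d)
    df = np.linalg.matrix_rank(Vd)
    return stat, df, stats.chi2.sf(stat, df)
```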
Abstract:
One controversial idea in the debate on urban sustainability is that urban sprawl is a source of ecological stress. We have tested this popular assumption by measuring the ecological footprint of commuting and housing in the 163 municipalities of the Barcelona Metropolitan Region and by relating the estimated values to residential density and to accessibility, the fundamental determinant of residential density according to the Monocentric City Model.
Abstract:
Planar polynomial vector fields which admit invariant algebraic curves, Darboux integrating factors or Darboux first integrals are of special interest. In the present paper we solve the inverse problem for invariant algebraic curves with a given multiplicity and for integrating factors, under generic assumptions regarding the (multiple) invariant algebraic curves involved. In particular, we prove that in this generic scenario the existence of a Darboux integrating factor implies Darboux integrability. Furthermore, we construct examples in which the genericity assumption does not hold and indicate that the situation differs for these.
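For context, the standard Darboux notions referenced in this abstract can be stated compactly; the display below uses conventional notation assumed for illustration rather than taken from the paper.

```latex
% Conventional definitions, stated for context (notation assumed):
% X = P(x,y)\,\partial_x + Q(x,y)\,\partial_y is a planar polynomial field,
% and f = 0 is an invariant algebraic curve if X f = K f for a polynomial
% cofactor K. A Darboux function built from such curves is
\[
  R(x,y) \;=\; \prod_{i=1}^{m} f_i(x,y)^{\lambda_i}, \qquad \lambda_i \in \mathbb{C};
\]
% R is a Darboux integrating factor if \operatorname{div}(R\,(P,Q)) = 0,
% and a first integral of the same product form is a Darboux first integral.
```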
Abstract:
To allow society to treat unequal alternatives distinctly, we propose a natural extension of Approval Voting in which the assumption of neutrality is relaxed. Under this extension, every alternative receives ex ante a non-negative, finite weight, and these weights may differ across alternatives. Given the voting decisions of every individual (individuals may vote for, or approve of, as many alternatives as they wish), society elects all alternatives for which the product of the total number of votes and the exogenous weight is maximal. Our main result is an axiomatic characterization of this voting procedure.
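A minimal sketch of the election rule as described, where the data layout and function name are our own illustration:

```python
def weighted_approval_winners(votes, weights):
    """Elect all alternatives maximizing votes[a] * weights[a].

    votes   -- dict mapping alternative -> number of approvals
    weights -- dict mapping alternative -> exogenous non-negative weight
    """
    scores = {a: votes[a] * weights[a] for a in votes}
    best = max(scores.values())
    return [a for a, s in scores.items() if s == best]

# Example: 'b' wins despite fewer approvals, because its weight is higher.
print(weighted_approval_winners({'a': 10, 'b': 8}, {'a': 1.0, 'b': 1.5}))  # ['b']
```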
Abstract:
Drawing on the documentary sources of the "Casa Misericòrdia" and "Casa Caritat" of Vic, as well as the testimony of people who had lived or worked there or knew them closely, this study seeks to recount the birth, evolution and decline of two centuries-old institutions which, almost always with scarce means but remarkable dedication, carried out an immense task on behalf of the most disadvantaged, and which ceased to exist in the early 1970s. Casa Caritat cared for people in need of both sexes, albeit in two different facilities: the former Trinitarian convent for women and the former convent of Sant Domènec for men. For a little over two hundred and fifty years, the Misericòrdia took in poor and abandoned girls, young women, adult women and elderly women. The peculiarity of this institution was its self-managed regime, without the involvement of any religious order, as was customary at the time. The closure of the two institutions gave rise to a new model of care and education for minors, promoted by the bishop of Vic, Ramon Masnou, and by Joan Riera, the driving forces behind the Llar Juvenil. This was a modern, Christian-inspired facility, housed in a newly built home and governed by far more humane rules, with less weight given to religious practice. The growing conflicts among the children as they grew older, the needs of changing times, and the Generalitat's assumption of responsibility for minors in 1981 led the Generalitat and the Ajuntament de Vic to create Casa Moreta, a fully secular project run by professionals that would later become the Centre Residencial Osona; it also brought about the birth of another entity, the Llar Terricabras. This research thus aims to lay the groundwork for future in-depth analyses of the care of minors in the city of Vic over time.
Abstract:
In this paper, we present a first approach to evolving cooperative behavior in ad hoc networks. Since wireless nodes are energy constrained, it may not be in a node's best interest to always accept relay requests. On the other hand, if all nodes decline to expend energy on relaying, network throughput drops dramatically. Both of these extreme scenarios are unfavorable to users' interests. In this paper we address user cooperation in ad hoc networks by developing an algorithm called Generous Tit-For-Tat. We assume that nodes are rational, i.e., their actions are strictly determined by self-interest, and that each node is associated with a minimum lifetime constraint. Given these lifetime constraints and the assumption of rational behavior, we study the resulting behavior of the network.
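As a rough illustration of a Generous Tit-For-Tat relay policy, the sketch below encodes one plausible form of the rule, assumed for exposition rather than taken from the paper: a node accepts relay requests roughly in proportion to the relaying it has received from the network, plus a small generosity margin epsilon.

```python
class GTFTNode:
    """Illustrative Generous Tit-For-Tat relay policy (assumed form):
    accept a relay request as long as the traffic relayed for others
    does not exceed the traffic others have relayed for us, plus a
    small generosity allowance epsilon."""

    def __init__(self, epsilon=5):
        self.relayed_for_others = 0   # requests this node has accepted
        self.relayed_by_others = 0    # own packets relayed by peers
        self.epsilon = epsilon        # generosity margin

    def peer_relayed_for_me(self):
        self.relayed_by_others += 1

    def accept_relay(self):
        if self.relayed_for_others <= self.relayed_by_others + self.epsilon:
            self.relayed_for_others += 1
            return True
        return False  # defect: conserve energy for the lifetime constraint
```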
Abstract:
Despite the common assumption that orthologs usually share the same function, there have been various reports of divergence between orthologs, even among species as close as mammals. The comparison of mouse and human is of special interest, because mouse is often used as a model organism to understand human biology. We review the literature on evidence for divergence between human and mouse orthologous genes, and discuss it in the context of biomedical research.
Abstract:
Crystallization temperatures of the oceanic carbonatites of Fuerteventura, Canary Islands, have been determined from oxygen isotope fractionations between calcite, silicate minerals (feldspar, pyroxene, biotite, and zircon) and magnetite. The measured fractionations have been interpreted in the light of late-stage interactions with meteoric and/or magmatic water. Cathodoluminescence (CL) characteristics of the carbonatite minerals were investigated in order to determine the extent of alteration and to select unaltered samples. Oxygen isotope fractionations of minerals in unaltered samples yield crystallization temperatures between 450 and 960 °C (average 710 °C). The highest temperature is obtained from pyroxene-calcite pairs. This range is in agreement with other carbonatite thermometric studies. This is the first study to provide oxygen isotope data coupled with a CL study on carbonatite-related zircon. The CL images revealed that the zircon is broken and altered both in the carbonatites and in the associated syenites. Given geological field evidence of the syenite-carbonatite relationship and the close agreement of published zircon U/Pb and whole-rock and biotite K/Ar and Ar-Ar age data, the most probable process is early zircon crystallization from the syenite magma followed by late-stage reworking during magma evolution and carbonatite segregation. The oxygen isotope fractionations between zircon and other carbonatite minerals (calcite and pyroxene) support the assumption that the zircon corresponds to early crystallization of syenite-carbonatite magmas.
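For context, two-mineral oxygen isotope thermometry conventionally inverts a calibration of the form below; the notation is ours, and the coefficient A is a pair-specific experimentally calibrated constant, not a value from this study.

```latex
% Conventional two-mineral oxygen isotope thermometer (illustrative form):
% \Delta_{A-B} = \delta^{18}O_A - \delta^{18}O_B \approx 1000\ln\alpha_{A-B}
\[
  1000\,\ln \alpha_{A\text{-}B} \;\approx\; \frac{A \times 10^{6}}{T^{2}}
  \quad\Longrightarrow\quad
  T \;=\; 10^{3}\sqrt{\frac{A}{\Delta_{A\text{-}B}}}\,,
\]
% with T in kelvin and A calibrated for the mineral pair (e.g.
% calcite-magnetite); larger fractionation implies lower temperature.
```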
Abstract:
A growing literature integrates theories of debt management into models of optimal fiscal policy. One promising theory argues that the composition of government debt should be chosen so that fluctuations in the market value of debt offset changes in expected future deficits. This complete markets approach to debt management is valid even when the government only issues non-contingent bonds. A number of authors conclude from this approach that governments should issue long-term debt and invest in short-term assets. We argue that the conclusions of this approach are too fragile to serve as a basis for policy recommendations. This is because bonds at different maturities have highly correlated returns, causing the determination of the optimal portfolio to be ill-conditioned. To make this point concrete, we examine the implications of this approach to debt management in various models, both analytically and using numerical methods calibrated to the US economy. We find that the complete markets approach recommends asset positions which are huge multiples of GDP. Introducing persistent shocks or capital accumulation only worsens this problem. Increasing the volatility of interest rates through habits partly reduces the size of these positions. Across our simulations we find no presumption that governments should issue long-term debt: policy recommendations can be easily reversed through small perturbations in the specification of shocks or small variations in the maturity of the bonds issued. We further extend the literature by removing the assumption that governments costlessly repurchase all outstanding debt every period. This exacerbates the size of the required positions, worsens their volatility and in some cases produces instability in debt holdings. We conclude that it is very difficult to insulate fiscal policy from shocks by using the complete markets approach to debt management. Given the limited variability of the yield curve, using maturities is a poor way to substitute for state-contingent debt. As a result, the positions recommended by this approach conflict with a number of features that we believe are important in making bond markets incomplete, e.g. transaction costs, liquidity effects, etc. Until these features are fully incorporated, we remain in search of a theory of debt management capable of providing robust policy insights.
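The ill-conditioning argument can be made concrete with a toy numerical example of our own (not the paper's calibration): when returns at two maturities are almost perfectly correlated, the hedging portfolio obtained by solving the normal equations involves large offsetting long and short positions, and these blow up as the correlation approaches one.

```python
import numpy as np

# Toy illustration: returns on short and long bonds are almost perfectly
# correlated, so hedging a deficit shock requires huge offsetting
# positions (all numbers are illustrative, not calibrated).
Sigma = np.array([[1.000, 0.999],          # covariance of the two bond returns
                  [0.999, 1.000]])
cov_with_deficit = np.array([0.30, 0.35])  # Cov(return_i, deficit shock)

positions = np.linalg.solve(Sigma, cov_with_deficit)
print(positions)              # approx. [-24.8  25.2]: large offsetting positions
print(np.linalg.cond(Sigma))  # condition number approx. 2000: near-singular system
```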
Abstract:
It is often alleged that high auction prices inhibit service deployment. We investigate this claim under the extreme case of financially constrained bidders. If demand is just slightly elastic, auctions maximize consumer surplus when consumer surplus is a convex function of quantity (a common assumption), or when consumer surplus is concave and the proportion of expenditure spent on deployment is greater than one over the elasticity of demand. The latter condition appears to hold for most of the large telecom auctions in the US and Europe. Thus, even if high auction prices inhibit service deployment, auctions appear to be optimal from the consumers' point of view.
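In symbols (our notation, not the paper's): writing CS(q) for consumer surplus as a function of quantity, eta for the elasticity of demand, and s for the proportion of expenditure spent on deployment, the sufficient conditions stated above read:

```latex
\[
  CS''(q) \;\ge\; 0
  \qquad\text{or}\qquad
  \Bigl(\, CS''(q) \le 0 \ \text{ and } \ s > \tfrac{1}{\eta} \,\Bigr),
\]
% under either condition the auction maximizes consumer surplus when
% demand is slightly elastic (s = share of expenditure on deployment).
```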
Abstract:
In order to upgrade the reliability of xenodiagnosis, attention has been directed towards the population dynamics of the parasite, with particular interest in the following factors: 1. Parasite density, which by itself is not a research objective but gives an accurate portrayal of parasite development and multiplication, has been incorporated in the screening of bugs for xenodiagnosis. 2. On the assumption that food availability might increase parasite density, bugs from xenodiagnosis have been refed at biweekly intervals on chicken blood. 3. Infectivity rates and positives harbouring large parasite yields were based on gut infections, in which the parasite population, comprising all developmental forms, was more abundant and easier to detect than in fecal infections, thus minimizing the probability of recording false negatives. 4. Since parasite density, low in the first 15 days of infection, increases rapidly over the following 30 days, an interval of 45 days has been adopted for routine examination of bugs from xenodiagnosis. By following the enumerated measures, all aiming to reduce false negative cases, we are getting closer to a reliable xenodiagnostic procedure. Upgrading the efficacy of xenodiagnosis is also dependent on the xenodiagnostic agent. Of 9 investigated vector species, Panstrongylus megistus deserves top priority as a xenodiagnostic agent. Its extraordinary capability to support fast development and vigorous multiplication of the few parasites ingested from a host with chronic Chagas' disease is revealed by the strikingly close infectivity rates of 91.2% vs. 96.4% among bugs engorged on the same host in the chronic and acute phases of the disease respectively (Table V), the latter carrying an estimated 12.3 × 10³ parasites in the circulation at the time of xenodiagnosis, as reported previously by the authors (1982).
Abstract:
Background: The motive-oriented therapeutic relationship (MOTR) was postulated to be a particularly helpful therapeutic ingredient in the early treatment phase of patients with personality disorders, in particular borderline personality disorder (BPD). The present randomized controlled study, using an add-on design, is the first to test this assumption in a 10-session general psychiatric treatment of patients presenting with BPD, with respect to symptom reduction and therapeutic alliance. Methods: A total of 85 patients were randomized. They were allocated either to a manual-based short variant of general psychiatric management (GPM; 10 sessions) or to the same treatment with MOTR deliberately added. Treatment attrition and integrity analyses yielded satisfactory results. Results: The intent-to-treat analyses suggested a global efficacy of MOTR, in the sense of an additional reduction of general problems, i.e. symptoms and interpersonal and social problems (F(1, 73) = 7.25, p < 0.05). However, they also showed that MOTR did not yield an additional reduction of specific borderline symptoms. A stronger therapeutic alliance, as assessed by the therapist, developed in MOTR treatments compared to GPM (Z(55) = 0.99, p < 0.04). Conclusions: These results suggest that adding MOTR to psychiatric and psychotherapeutic treatments of BPD is promising. Moreover, the findings shed additional light on the prospect of shortening treatments for patients presenting with BPD. © 2014 S. Karger AG, Basel.
Abstract:
Introduction: Non-invasive brain imaging techniques often contrast experimental conditions across a cohort of participants, obscuring distinctions in individual performance and brain mechanisms that are better characterised by inter-trial variability. To overcome such limitations, we developed topographic analysis methods for single-trial EEG data [1]; previous single-trial approaches were typically based on time-frequency analysis of single-electrode data or of single independent components. The method's efficacy is demonstrated for event-related responses to environmental sounds, hitherto studied at the average event-related potential (ERP) level. Methods: Nine healthy subjects participated in the experiment. Meaningful sounds of common objects were used for a target detection task [2]. In each block, subjects were asked to discriminate target sounds, which were living or man-made auditory objects. Continuous 64-channel EEG was acquired during the task. Two datasets were considered for each subject, comprising the single trials of the two conditions, living and man-made. The analysis comprised two steps. In the first step, a mixture of Gaussians analysis [3] provided representative topographies for each subject. In the second step, conditional probabilities for each Gaussian provided statistical inference on the structure of these topographies across trials, time, and experimental conditions. A similar analysis was conducted at the group level. Results: The results show that the occurrence of each map is structured in time and consistent across trials, both at the single-subject and at the group level. Conducting separate analyses of ERPs at the single-subject and group levels, we could quantify the consistency of the identified topographies and their time course of activation within and across participants, as well as across experimental conditions. A general agreement was found with previous analyses at the average ERP level. Conclusions: This novel approach to single-trial analysis promises to have an impact on several domains. In clinical research, it makes it possible to statistically evaluate single-subject data, an essential tool for analysing patients with specific deficits and impairments and their deviation from normative standards. In cognitive neuroscience, it provides a novel tool for understanding the interdependencies of behaviour and brain activity at both the single-subject and group levels. In basic neurophysiology, it provides a new representation of ERPs and promises to shed light on the mechanisms of their generation and on inter-individual variability.
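A minimal sketch of the two analysis steps as described, using scikit-learn's GaussianMixture as a stand-in for the mixture-of-Gaussians method of [3]; the array shapes and parameter values are illustrative assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Topographies: one 64-channel scalp map per trial and time point,
# stacked into a (n_trials * n_times, 64) array (illustrative shapes;
# random data stands in for preprocessed single-trial EEG).
n_trials, n_times, n_channels, n_maps = 100, 50, 64, 5
topographies = np.random.randn(n_trials * n_times, n_channels)

# Step 1: a mixture of Gaussians yields representative template maps.
gmm = GaussianMixture(n_components=n_maps, covariance_type='diag')
gmm.fit(topographies)

# Step 2: posterior (conditional) probabilities of each template map,
# reshaped to (trials, times, maps) for inference across trials and time.
responsibilities = gmm.predict_proba(topographies)
responsibilities = responsibilities.reshape(n_trials, n_times, n_maps)
print(responsibilities.sum(axis=-1))  # each entry sums to 1 across maps
```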
Abstract:
This study focuses on identification and exploitation processes among Finnish design entrepreneurs (i.e. self-employed industrial designers). More specifically, it strives to find out what design entrepreneurs do when they create new ventures, how venture ideas are identified, and how entrepreneurial processes are organized to identify and exploit such venture ideas in the given industrial context. Indeed, what do educated and creative individuals do when they decide to create new ventures, where do the venture ideas originally come from, and how are venture ideas identified and developed into viable business concepts that are introduced to the market? From an academic perspective, there is a need to increase our understanding of the interaction between the identification and exploitation of emerging ventures, in this and other empirical contexts. Rather than assuming that venture ideas are constant over time, this study examines how emerging ideas are adjusted to enable exploitation in dynamic market settings. It builds on insights from previous entrepreneurship process research. The interpretations from the theoretical discussion build on the assumption that the subprocesses of identification and exploitation interact and are closely entwined with each other (e.g. McKelvie & Wiklund, 2004; Davidsson, 2005). This explanation challenges the common assumption that entrepreneurs first identify venture ideas and then exploit them (e.g. Shane, 2003). The assumption is that exploitation influences identification, just as identification influences exploitation. Based on interviews with design entrepreneurs and external actors (e.g. potential customers, suppliers and collaborators), it appears that the identification and exploitation of venture ideas are carried out in close interaction between a number of actors, rather than by entrepreneurs alone. Given their available resources, design entrepreneurs prefer to focus on identification-related activities and to find external actors who take care of exploitation-related activities. The involvement of external actors may have a direct impact on decision-making and on various activities along the processes of identification and exploitation, something that previous research does not particularly emphasize. For instance, Bhave (1994) suggests both operative and strategic feedback from the market, but does not explain how external parties are actually involved in decision-making and in carrying out various activities along the entrepreneurial process.
Abstract:
In automobile insurance, it is useful to achieve a priori ratemaking by resorting to generalized linear models, and here the Poisson regression model constitutes the most widely accepted basis. However, insurance companies distinguish between claims with or without bodily injuries, or claims with full or partial liability of the insured driver. This paper examines an a priori ratemaking procedure that includes two different types of claim. When assuming independence between claim types, the premium can be obtained by summing the premiums for each type of guarantee and is dependent on the rating factors chosen. If the independence assumption is relaxed, it is unclear how the tariff system might be affected. In order to answer this question, bivariate Poisson regression models, suitable for paired count data exhibiting correlation, are introduced. It is shown that the usual independence assumption is unrealistic here. These models are applied to an automobile insurance claims database containing 80,994 contracts belonging to a Spanish insurance company. Finally, the consequences for pure and loaded premiums when the independence assumption is relaxed by using a bivariate Poisson regression model are analysed.
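For reference, the common trivariate-reduction construction of the bivariate Poisson model (standard notation, assumed here for illustration rather than quoted from the paper) makes the role of the dependence parameter explicit:

```latex
% Trivariate reduction construction (standard form, for illustration):
\[
  N_1 = X_1 + X_0, \qquad N_2 = X_2 + X_0,
  \qquad X_k \sim \operatorname{Poisson}(\lambda_k) \ \text{independent},
\]
\[
  \operatorname{E}[N_i] = \lambda_i + \lambda_0, \qquad
  \operatorname{Cov}(N_1, N_2) = \lambda_0 ,
\]
% so \lambda_0 = 0 recovers the independent (sum-of-premiums) tariff,
% while \lambda_0 > 0 captures correlation between the two claim types;
% regression enters through log-links on the \lambda's and rating factors.
```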