863 results for Dempster-Shafer theory of evidence
Abstract:
Can we reconcile the predictions of the altruism model of the family with the evidence on inter vivos transfers in the US? This paper expands the altruism model by introducing effort of the child and by relaxing the assumption of perfect information of the parent about the labor market opportunities of the child. First, I solve and simulate a model of altruism under imperfect information. Second, I use cross-sectional data to test a prediction of the model: Are parental transfers especially responsive to the income variations of children who are very attached to the labor market? The results suggest that imperfect information accounts for several patterns of intergenerational transfers in the US.
Abstract:
Can we reconcile the predictions of the altruism model of the family with the evidence on parental monetary transfers in the US? This paper provides a new assessment of this question. I expand the altruism model by introducing effort of the child and by relaxing the assumption of perfect information of the parent about the labor market opportunities of the child. First, I solve and simulate a model of altruism and labor supply under imperfect information. Second, I use cross-sectional data to test the following prediction of the model: Are parental transfers especially responsive to the income variations of children who are very attached to the labor market? The results of the analysis suggest that imperfect information accounts for many of the patterns of intergenerational transfers in the US.
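The mechanism at the heart of both versions of this abstract, an altruistic parent transferring money under uncertainty about the child's labor-market income, can be illustrated with a minimal numerical sketch. Everything below is an assumption made for illustration, not the author's model: the log utility, the altruism weight BETA, the income figures, and the belief distribution.

```python
# A minimal numerical sketch, not the paper's model: an altruistic parent
# chooses a transfer t to a child whose labor income it knows only noisily.
import numpy as np
from scipy.optimize import minimize_scalar

BETA = 0.8              # parent's weight on the child's utility (altruism)
PARENT_INCOME = 100.0   # illustrative parent income

def u(c):
    # Log utility; floor consumption so the optimizer never hits log(0).
    return np.log(np.maximum(c, 1e-9))

def optimal_transfer(child_income_draws):
    # Maximize u(parent consumption) + BETA * E[u(child consumption)]
    # over transfers t in [0, PARENT_INCOME].
    def neg_welfare(t):
        return -(u(PARENT_INCOME - t) + BETA * np.mean(u(child_income_draws + t)))
    return minimize_scalar(neg_welfare, bounds=(0.0, PARENT_INCOME),
                           method="bounded").x

# Perfect information: the parent observes the child's income exactly.
t_perfect = optimal_transfer(np.array([30.0]))

# Imperfect information: the parent holds a noisy belief centered on 30.
rng = np.random.default_rng(0)
t_imperfect = optimal_transfer(rng.normal(30.0, 15.0, size=10_000))

print(f"transfer under perfect information:   {t_perfect:.2f}")
print(f"transfer under imperfect information: {t_imperfect:.2f}")
```

In this toy setup the transfer under uncertainty is larger than under full information, a precautionary effect of convex marginal utility; the paper's empirical test of how transfers respond to observed income follows the same logic.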
Abstract:
This article has an immediate predecessor, upon which it is based and with which readers should be familiar: Towards a Theory of the Credit-Risk Balance Sheet (Vallverdú, Somoza and Moya, 2006). There, the Balance Sheet is conceptualised on the basis of the duality of a credit-based transaction; that work deals with its theoretical foundations, providing evidence of a causal credit-risk duality, that is, a true causal relationship, and analyzes its characteristics and properties, both static and dynamic. This article, a logical continuation of the previous one, studies the evolution of the structure of the Credit-Risk Balance Sheet as a consequence of a business's dynamics in the credit area. Given the Credit-Risk Balance Sheet of a company at any given time, it attempts to estimate, by means of sequential analysis, its structural evolution, showing its usefulness in the management and control of credit and risk. To do this, it draws, with the necessary adaptations, on the by-now classic works of Palomba and Cutolo. The establishment of the corresponding transformation matrices allows one to move from an initial balance sheet structure to a final, future one, to understand trends in its credit-risk situation, and to make its monitoring and control possible, basic elements in providing support for risk management.
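The transformation-matrix step described above lends itself to a small numerical illustration. This is a minimal sketch assuming an invented three-state credit classification and invented migration shares; the article's actual matrices, following Palomba and Cutolo, are not reproduced here.

```python
# Illustrative sketch only: evolving a credit-risk balance-sheet structure
# with a transformation (transition) matrix. The three-state classification
# and the migration shares are invented, not taken from the article.
import numpy as np

# Shares of outstanding credit by state: current, overdue, defaulted.
structure = np.array([0.80, 0.15, 0.05])

# Row-stochastic transformation matrix: entry [i, j] is the share of
# state i migrating to state j over one period.
T = np.array([
    [0.90, 0.08, 0.02],   # current  -> current / overdue / defaulted
    [0.30, 0.55, 0.15],   # overdue  -> ...
    [0.00, 0.00, 1.00],   # defaulted is absorbing in this toy example
])

for period in range(1, 4):
    structure = structure @ T     # one-period structural evolution
    print(f"period {period}: {np.round(structure, 3)}")
```

Iterating the multiplication traces the structural evolution of the balance sheet from an initial state to a future one, which is what the sequential analysis in the article aims to monitor and control.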
Abstract:
In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers with important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators will select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine some critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that often are misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the difference observed, or one that is more extreme, given that the null is true. Another concern is the risk that a substantial proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
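Because the two frameworks are so often conflated, a minimal sketch may help to keep them apart. The two-sample t-test and the simulated data below are illustrative choices, not taken from the article.

```python
# A minimal sketch of the two frameworks described above, using a
# two-sample t-test; the data are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=0.0, scale=1.0, size=50)
treated = rng.normal(loc=0.5, scale=1.0, size=50)

# Fisher: the p value measures the strength of evidence against H0.
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Neyman-Pearson: fix the Type I error rate alpha in advance and make a
# binary decision based on whether the statistic falls in the critical region.
alpha = 0.05
decision = "reject H0" if p_value < alpha else "fail to reject H0"
print(decision)
```

Note how the first step reports a continuous measure of evidence, while the second collapses it into a pre-committed accept/reject decision; conflating the two is exactly the misuse the abstract warns against.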
Abstract:
Unlike the evaluation of single items of scientific evidence, the formal study and analysis of the joint evaluation of several distinct items of forensic evidence has to date received only occasional, rather than systematic, attention. Questions about (i) the relationships among a set of (usually unobservable) propositions and a set of (observable) items of scientific evidence, (ii) the joint probative value of a collection of distinct items of evidence, and (iii) the contribution of each individual item within a given group of pieces of evidence still represent fundamental areas of research. To some degree this is remarkable, since both forensic science theory and practice, as well as many daily inference tasks, require the consideration of multiple items if not masses of evidence. A recurrent and particular complication that arises in such settings is that the application of probability theory, i.e. the reference method for reasoning under uncertainty, becomes increasingly demanding. The present paper takes this as a starting point and discusses graphical probability models, i.e. Bayesian networks, as a framework within which the joint evaluation of scientific evidence can be approached in some viable way. Based on a review of the main existing contributions in this area, the article aims at presenting instances of real case studies from the author's institution in order to point out the usefulness and capacities of Bayesian networks for the probabilistic assessment of the probative value of multiple and interrelated items of evidence. A main emphasis is placed on underlying general patterns of inference, their representation, and their graphical probabilistic analysis. Attention is also drawn to inferential interactions, such as redundancy, synergy and directional change, which distinguish the joint evaluation of evidence from assessments of isolated items of evidence. Together, these topics present aspects of interest to both domain experts and recipients of expert information, because they have a bearing on how multiple items of evidence are meaningfully and appropriately set into context.
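The joint-evaluation logic sketched in this abstract can be made concrete with a toy two-item example. The proposition, the probabilities, and the conditional-independence assumption below are invented for illustration and are not drawn from the article's case studies.

```python
# Toy two-item example: proposition H (e.g. "the suspect is the source")
# with items of evidence E1 and E2, assumed conditionally independent
# given H. All probabilities are invented for illustration.
p_e1 = {True: 0.90, False: 0.20}   # P(E1 observed | H), P(E1 observed | not-H)
p_e2 = {True: 0.70, False: 0.30}

lr1 = p_e1[True] / p_e1[False]     # likelihood ratio carried by item 1
lr2 = p_e2[True] / p_e2[False]

prior_odds = 1.0                   # 1:1 prior odds on H, for the sketch
posterior_odds = prior_odds * lr1 * lr2   # independence lets the LRs multiply

print(f"LR(E1) = {lr1:.2f}, LR(E2) = {lr2:.2f}")
print(f"joint posterior odds on H: {posterior_odds:.2f}")
# Under redundancy (E1 and E2 positively dependent given H) the joint LR is
# smaller than lr1 * lr2; under synergy it is larger. Bayesian networks make
# exactly these dependence structures explicit and computable.
```

The interesting cases the abstract points to, redundancy, synergy and directional change, arise precisely when the independence shortcut used above fails, which is why a graphical model is needed for realistic bodies of evidence.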
Abstract:
Understanding and quantifying seismic energy dissipation, which manifests itself in terms of velocity dispersion and attenuation, in fluid-saturated porous rocks is of considerable interest, since it offers the prospect of extracting information about the elastic and hydraulic rock properties. There is increasing evidence to suggest that wave-induced fluid flow, or simply WIFF, is the dominant underlying physical mechanism governing these phenomena throughout the seismic, sonic, and ultrasonic frequency ranges. This mechanism, which can prevail at the microscopic, mesoscopic, and macroscopic scale ranges, operates through viscous energy dissipation in response to fluid pressure gradients and inertial effects induced by the passing wavefield. In the first part of this thesis, we present an analysis of broad-band multi-frequency sonic log data from a borehole penetrating water-saturated unconsolidated glacio-fluvial sediments. An inherent complication arising in the interpretation of the observed P-wave attenuation and velocity dispersion is, however, that the relative importance of WIFF at the various scales is unknown and difficult to unravel. An important generic result of our work is that the levels of attenuation and velocity dispersion due to the presence of mesoscopic heterogeneities in water-saturated unconsolidated clastic sediments are expected to be largely negligible. Conversely, WIFF at the macroscopic scale explains most of the considered data, while refinements provided by including WIFF at the microscopic scale in the analysis are locally meaningful. Using a Monte-Carlo-type inversion approach, we compare the ability of the models describing WIFF at the macroscopic and microscopic scales to constrain the dry frame elastic moduli and the permeability, as well as their local probability distributions. In the second part of this thesis, we explore the issue of determining the size of a representative elementary volume (REV) arising in numerical upscaling procedures for the effective seismic velocity dispersion and attenuation of heterogeneous media. To this end, we focus on a set of idealized synthetic rock samples characterized by the presence of layers, fractures or patchy saturation in the mesoscopic scale range. These scenarios are highly pertinent because they tend to be associated with very high levels of velocity dispersion and attenuation caused by WIFF in the mesoscopic scale range. The problem of determining the REV size for generic heterogeneous rocks is extremely complex and entirely unexplored in the given context. In this pilot study, we have therefore focused on periodic media, which assures the inherent self-similarity of the considered samples regardless of their size and thus simplifies the problem to a systematic analysis of the dependence of the REV size on the boundary conditions applied in the numerical simulations. Our results demonstrate that boundary condition effects are absent for layered media and negligible in the presence of patchy saturation, thus resulting in minimum REV sizes. Conversely, strong boundary condition effects arise in the presence of a periodic distribution of finite-length fractures, thus leading to large REV sizes.
In the third part of the thesis, we propose a novel effective poroelastic model for periodic media characterized by mesoscopic layering, which accounts for WIFF at both the macroscopic and mesoscopic scales as well as for the anisotropy associated with the layering. Correspondingly, this model correctly predicts the existence of the fast and slow P-waves as well as quasi and pure S-waves for any direction of wave propagation, as long as the corresponding wavelengths are much larger than the layer thicknesses. The primary motivation for this work is that, for formations of intermediate to high permeability, such as unconsolidated sediments, clean sandstones, or fractured rocks, these two WIFF mechanisms may prevail at similar frequencies. This scenario, which can be expected to be rather common, cannot be accounted for by existing models for layered porous media. Comparisons of analytical solutions for the P- and S-wave phase velocities and inverse quality factors for wave propagation perpendicular to the layering with those obtained from numerical simulations based on a 1D finite-element solution of the poroelastic equations of motion show very good agreement as long as the assumption of long wavelengths remains valid. A limitation of the proposed model is its inability to account for inertial effects in mesoscopic WIFF when both WIFF mechanisms prevail at similar frequencies. Our results do, however, indicate that the associated error is likely to be relatively small: even at frequencies at which both inertial and scattering effects are expected to be at play, the proposed model provides a solution that is remarkably close to its numerical benchmark.
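As a generic illustration of the dispersion-attenuation behavior this abstract describes, a single standard-linear-solid (Zener) relaxation mechanism is often used as a stand-in for a WIFF-type relaxation. The sketch below uses that analogy with invented parameter values; it is not one of the thesis's actual models.

```python
# Generic standard-linear-solid (Zener) relaxation sketch: a common
# single-mechanism stand-in for WIFF-type dispersion and attenuation.
# All parameter values are invented; this is not one of the thesis's models.
import numpy as np

rho = 2200.0                      # bulk density [kg/m^3]
M_R, M_U = 9.0e9, 11.0e9          # relaxed / unrelaxed P-wave moduli [Pa]
f0 = 1.0e3                        # relaxation (attenuation-peak) frequency [Hz]

w0 = 2.0 * np.pi * f0
tau_s = np.sqrt(M_R / M_U) / w0   # stress relaxation time
tau_e = tau_s * M_U / M_R         # strain relaxation time

f = np.logspace(1, 5, 5)          # 10 Hz .. 100 kHz
w = 2.0 * np.pi * f
M = M_R * (1 + 1j * w * tau_e) / (1 + 1j * w * tau_s)   # complex modulus

velocity = 1.0 / np.real(np.sqrt(rho / M))   # phase velocity [m/s]
inv_Q = np.imag(M) / np.real(M)              # attenuation, 1/Q

for fi, vi, qi in zip(f, velocity, inv_Q):
    print(f"f = {fi:8.0f} Hz: c = {vi:7.1f} m/s, 1/Q = {qi:.4f}")
```

The velocity climbs from its relaxed to its unrelaxed value across the relaxation frequency while 1/Q peaks there, which is the generic signature the thesis analyzes and inverts for across the microscopic, mesoscopic, and macroscopic WIFF regimes.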
Abstract:
This thesis attempts to fill gaps in both a theoretical basis and an operational and strategic understanding in the areas of social ventures, social entrepreneurship and nonprofit business models. This study also attempts to bridge the gap in strategic and economic theory between social and commercial ventures. More specifically, this thesis explores sustainable competitive advantage from a resource-based theory perspective and explores how it may be applied to the nonmarket situation of nonprofit organizations and social ventures. It is proposed that a social value-orientation of sustainable competitive advantage, called sustainable contributive advantage, provides a more realistic depiction of what is necessary in order for a social venture to perform better than its competitors over time. In addition to providing this realistic depiction, this research provides a substantial theoretical contribution in the area of economics, social ventures, and strategy research, specifically with regard to resource-based theory. The proposed model for sustainable contributive advantage uses resource-based theory and competitive advantage in order to be applicable to social ventures. This model proposes an explanation of a social venture's ability to demonstrate consistently superior performance. In order to determine whether sustainable competitive advantage is, in fact, appropriate to apply to both social and economic environments, quantitative analyses are conducted on a large sample of nonprofit organizations in a single industry and then compared to similar quantitative analyses conducted on commercial ventures. By comparing the trends and strategies of the two types of entities from a quantitative perspective, propositions are developed regarding a social venture's resource utilization strategies and their possible impact on performance. Evidence is found to support the necessity of adjusting existing models in resource-based theory in order to apply them to social ventures. Additionally supported is the proposed theory of sustainable contributive advantage. The thesis concludes with recommendations for practitioners, researchers and policy makers as well as suggestions for future research paths.
Abstract:
Since financial liberalization in the 1980s, non-profit-maximizing, stakeholder-oriented banks have outperformed private banks in Europe. This article draws on empirical research, banking theory and theories of the firm to explain this apparent anomaly for neo-liberal policy and contemporary market-based banking theory. The realization of competitive advantages by alternative banks (savings banks, cooperative banks and development banks) has significant implications for conceptions of bank change, regulation and political economy.
Abstract:
In 1979 Nicaragua, under the Sandinistas, experienced a genuine, socialist, full-scale agrarian revolution. This thesis examines whether Jeffery Paige's theory of agrarian revolutions would have been successful in predicting this revolution and in predicting non-revolution in the neighboring country of Honduras. The thesis begins by setting Paige's theory in the tradition of radical theories of revolution. It then derives four propositions from Paige's theory which suggest the patterns of export crops, land tenure changes and class configurations that are necessary for an agrarian and socialist revolution. These propositions are tested against evidence from the twentieth-century histories of economic, social and political change in Nicaragua and Honduras. The thesis concludes that Paige's theory does help to explain the occurrence of agrarian revolution in Nicaragua and non-revolution in Honduras. A fifth proposition derived from Paige's theory proved less useful in explaining the specific areas within Nicaragua that were most receptive to Sandinista revolutionary activity.
Abstract:
The emergence of knowledge-based societies, advances in communication technology, and improved worldwide exchange of information allow better use of the knowledge produced when decisions are made in the health system. In developing countries, a few studies have examined the obstacles that hinder evidence-based decision-making (EBDM), while similar studies in the developed world remain genuinely rare. Iran has seen the strongest growth in scientific publications of any country in recent years, but the question arises: what are the obstacles that prevent the use of this knowledge, as well as of global evidence? This study comprises three consecutive articles. The aim of the first article was to find a model for assessing the state of knowledge utilization in these circumstances in Iran, by means of a broad, systematic review of the literature followed by a qualitative study based on Grounded Theory. In the second and third articles, the obstacles to evidence-based decisions in Iran are then studied by interviewing managers and decision-makers in the health sector, as well as the researchers who work to produce scientific evidence for EBDM in Iran. After reviewing the existing models and carrying out a qualitative study, the first article appeared under the title "Designing a knowledge translation model". This first article serves as a framework for the two other articles, which assess the "pull" and "push" obstacles to EBDM in the country. In Iran, as a developing country, problems arise at every stage of the process of producing, sharing, and using evidence in health-system decision-making. The obstacles to evidence-based decision-making are diverse and occur at different levels; multidimensional solutions are needed to strengthen the impact of scientific evidence on decision-making. These solutions should bring about changes in the culture and environment of decision-making so as to give value to evidence-based decisions. The criteria for selecting managers, their inappropriate appointment and rapid replacement, and the pay differences between the public and private sectors can weaken EBDM in two ways: by affecting decision-makers' motivation, and by destroying programme continuity. Likewise, while researchers are not selected and replaced in the same way as managers, there are no criteria that encourage either group to support evidence-based decision-making in the health sector and the changes that follow from it. The selection and promotion of policy-makers should be based on their performance in EBDM, and academics' efforts in this regard should count towards their personal promotions and the ranking of their institutions. The attitudes and capacities of decision-makers and researchers should be fostered by giving them sufficient power and capability at the different stages of the decision cycle. This study revealed that managers do not have sufficient access to either national or international evidence. Narrowing the gap that separates researchers from decision-makers is a crucial step, to be achieved by fostering two-way communication. This issue is all the more important given that knowledge use can only be strengthened through close collaboration between policy-makers and the research sector. To this end, long-term programmes must be designed; creating networks of researchers and decision-makers to choose research topics, rank priorities, and build mutual trust between researchers and policy-makers appears to be effective.
Abstract:
This article argues for a new theoretical paradigm for the analysis of change in educational institutions that is able to deal with such issues as readiness for change, transformational change and the failure of change strategies. Punctuated equilibrium (Tushman and Romanelli, 1985) is a theory which has wide application. It envisages long-term change as being made up of a succession of long periods of relative stability interspersed by brief periods of rapid profound change. In the periods of stability only relatively small incremental changes are possible. The periods of transformational change may be triggered by external or internal influences. A recent study of the long-term process of internationalisation in higher education institutions shows evidence to support the theory: long periods of incremental change, events precipitating profound change and the failure of externally imposed attempts to change. Also, as the theory predicts, changes in collegial organisations are slower and more uncertain than changes in managed organisations.
Abstract:
This paper looks at the determinants of school selection in rural Bangladesh, focusing on the choice between registered Islamic and non-religious schools. Using a unique dataset on secondary school-age children from rural Bangladesh, we find that madrasah enrolment falls as household income increases. At the same time, more religious households, and those that live further away from a non-religious school, are more likely to send their children to madrasahs. However, in contrast to what the theory predicts, we find that Islamic school demand does not respond to the average quality of schools in the locality.
Abstract:
We present evidence that large-scale spatial coherence of 40 Hz oscillations can emerge dynamically in a cortical mean field theory. The simulated synchronization time scale is about 150 ms, which compares well with experimental data on large-scale integration during cognitive tasks. The same model has previously provided consistent descriptions of the human EEG at rest, with tranquilizers, under anesthesia, and during anesthetic-induced epileptic seizures. The emergence of coherent gamma band activity is brought about by changing just one physiological parameter until cortex becomes marginally unstable for a small range of wavelengths. This suggests for future study a model of dynamic computation at the edge of cortical stability.
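The emergence of coherence when a single parameter pushes a system of oscillators toward instability can be illustrated with a standard Kuramoto phase-oscillator toy model. This is not the cortical mean-field model of the article; the coupling constant, oscillator count, and frequency spread below are all illustrative.

```python
# Generic Kuramoto sketch of emergent synchronization; not the article's
# cortical mean-field model. Coupling K is the single control parameter,
# loosely analogous to tuning one physiological parameter toward instability.
import numpy as np

def order_parameter(theta):
    # |r| = 1 means full phase coherence, ~0 means incoherence.
    return np.abs(np.mean(np.exp(1j * theta)))

def simulate(K, n=500, dt=1e-3, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    omega = rng.normal(2 * np.pi * 40.0, 2 * np.pi * 2.0, n)  # ~40 Hz oscillators
    theta = rng.uniform(0.0, 2 * np.pi, n)
    for _ in range(steps):
        r = np.mean(np.exp(1j * theta))          # complex mean field
        # Mean-field Kuramoto update: dtheta_i = omega_i + K * Im(r * e^{-i theta_i})
        theta += dt * (omega + K * np.imag(r * np.exp(-1j * theta)))
    return order_parameter(theta)

for K in (1.0, 10.0, 30.0):
    print(f"K = {K:5.1f}: coherence |r| = {simulate(K):.2f}")
```

Below a critical coupling the phases stay incoherent; above it, large-scale coherence emerges within a short transient, qualitatively mirroring the sub-second synchronization time scale reported in the abstract.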