37 results for Multiplicity Surplus
Abstract:
Hypothetical contingent valuation surveys used to elicit values for environmental and other public goods often employ variants of the referendum mechanism, owing to the cognitive simplicity of this voting format and respondents' familiarity with it. One variant, the double referendum mechanism, requires respondents to state twice how they would vote on a given policy proposal, given their cost of the good. Data from these surveys often exhibit anomalies inconsistent with standard economic models of consumer preferences. A number of published explanations for these anomalies exist, mostly focusing on problems with the second vote. This article investigates which aspects of the hypothetical task affect the degree of non-demand revelation and takes an individual-based approach to identifying the people most likely to non-demand reveal. A clear profile emerges from our model: a person who faces a negative surplus (i.e. a net loss) in the second vote and who invokes non-self-interested, non-financial motivations during the decision process.
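A minimal sketch of the double referendum logic described above, assuming a simple willingness-to-pay (WTP) model in which a self-interested respondent votes yes only when the bid leaves non-negative surplus; the doubling/halving follow-up rule and the "altruism" parameter are illustrative assumptions, not the survey's actual design:

```python
import random

def double_referendum_vote(wtp, first_bid, altruism=0.0, rng=random):
    """Simulate the two votes of a double referendum.

    A self-interested respondent votes yes iff the bid leaves non-negative
    surplus (wtp - bid >= 0). `altruism` is an illustrative probability of
    voting yes despite a net loss, i.e. the non-demand revelation profiled
    in the abstract (assumed mechanism, not the article's model).
    """
    first_yes = wtp >= first_bid
    # Assumed follow-up rule: raise the bid after a yes, lower it after a no.
    second_bid = first_bid * 2 if first_yes else first_bid / 2
    surplus = wtp - second_bid                 # negative surplus = net loss
    second_yes = surplus >= 0 or rng.random() < altruism
    return first_yes, second_yes

# A respondent with WTP 40 facing a first bid of 30: yes, then a net loss of 20
# on the doubled second bid, where non-financial motives may still produce a yes.
print(double_referendum_vote(wtp=40.0, first_bid=30.0, altruism=0.3))
```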
Abstract:
The aim of the Agreement and devolution in Northern Ireland is to draw together atavistic political groups in order to promote a consociational accord which upholds minority rights and cultural demands. However, it is important to understand that disagreements between the pro-British and pro-Irish populations remain and that devolution has a multiplicity of political and cultural meanings. Indeed, determining the incapacity of Northern Irish society to shift towards pluralist and less culturally subjective categorizations of belonging and political devotion remains crucially important. This article argues that devolution is a first, although as yet unclear, step toward a range of future constitutional changes.
Abstract:
We have developed a multiple-radical model of the chemical modification reactions involving oxygen and thiols relevant to the interactions of ionizing radiation with DNA. The treatment is based on the Alper and Howard-Flanders equation but considers the case where more than one radical may be involved in the production of lesions in DNA. This model makes several predictions regarding the induction of double-strand breaks in DNA by ionizing radiation and the role of sensitizers such as oxygen and protectors such as thiols, which act at the chemical phase of radiation action via the involvement of free radicals. The model predicts a decreasing OER with increasing LET, on the basis that as radical multiplicity increases, so does the probability that, even under hypoxia, damage will be fixed and lead to lesion production. The model can be considered an alternative hypothesis to those of 'interacting radicals' or of 'oxygen-in-the-track'.
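For context, the classical Alper and Howard-Flanders relation on which the model builds expresses relative radiosensitivity as a function of oxygen concentration (standard single-radical form; the multiple-radical generalization described in the abstract is not reproduced here):

```latex
% Alper--Howard-Flanders relation (classical form): relative
% radiosensitivity S as a function of oxygen concentration c.
\[
  S(c) = \frac{m\,c + K}{c + K}
\]
% m : maximum sensitization ratio at full oxygenation (the limiting OER)
% K : oxygen concentration at which sensitization is half-maximal
```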
Abstract:
In studies of radiation-induced DNA fragmentation and repair, analytical models may provide rapid and easy-to-use methods to test simple hypotheses regarding the breakage and rejoining mechanisms involved. The random breakage model, according to which lesions are distributed uniformly and independently of each other along the DNA, has been the model most used to describe spatial distribution of radiation-induced DNA damage. Recently several mechanistic approaches have been proposed that model clustered damage to DNA. In general, such approaches focus on the study of initial radiation-induced DNA damage and repair, without considering the effects of additional (unwanted and unavoidable) fragmentation that may take place during the experimental procedures. While most approaches, including measurement of total DNA mass below a specified value, allow for the occurrence of background experimental damage by means of simple subtractive procedures, a more detailed analysis of DNA fragmentation necessitates a more accurate treatment. We have developed a new, relatively simple model of DNA breakage and the resulting rejoining kinetics of broken fragments. Initial radiation-induced DNA damage is simulated using a clustered breakage approach, with three free parameters: the number of independently located clusters, each containing several DNA double-strand breaks (DSBs), the average number of DSBs within a cluster (multiplicity of the cluster), and the maximum allowed radius within which DSBs belonging to the same cluster are distributed. Random breakage is simulated as a special case of the DSB clustering procedure. When the model is applied to the analysis of DNA fragmentation as measured with pulsed-field gel electrophoresis (PFGE), the hypothesis that DSBs in proximity rejoin at a different rate from that of sparse isolated breaks can be tested, since the kinetics of rejoining of fragments of varying size may be followed by means of computer simulations. The problem of how to account for background damage from experimental handling is also carefully considered. We have shown that the conventional procedure of subtracting the background damage from the experimental data may lead to erroneous conclusions during the analysis of both initial fragmentation and DSB rejoining. Despite its relative simplicity, the method presented allows both the quantitative and qualitative description of radiation-induced DNA fragmentation and subsequent rejoining of double-stranded DNA fragments. (C) 2004 by Radiation Research Society.
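The clustered-breakage scheme lends itself to a compact simulation. A minimal sketch under stated assumptions: a single linear chromosome, uniformly placed cluster centres, and Poisson-distributed cluster multiplicities (the abstract fixes the three free parameters but not these distributional details):

```python
import numpy as np

def simulate_fragments(genome_len, n_clusters, mean_mult, max_radius, rng=None):
    """Clustered-breakage sketch: cluster centres placed uniformly on a linear
    genome; each cluster holds >= 1 DSBs within max_radius of its centre.
    Random breakage is recovered as the special case mean_mult=1, max_radius=0.
    """
    rng = rng or np.random.default_rng()
    centres = rng.uniform(0.0, genome_len, n_clusters)
    breaks = []
    for c in centres:
        k = max(1, rng.poisson(mean_mult))   # cluster multiplicity (assumed Poisson)
        offsets = rng.uniform(-max_radius, max_radius, k)
        breaks.extend(np.clip(c + offsets, 0.0, genome_len))
    edges = np.concatenate(([0.0], np.sort(breaks), [genome_len]))
    return np.diff(edges)                    # fragment sizes in base pairs

# Example: 100-Mbp chromosome, 40 clusters averaging 2 DSBs within 0.5 Mbp,
# reporting the fraction of DNA mass below an arbitrary 5.7-Mbp threshold.
sizes = simulate_fragments(100e6, 40, 2.0, 0.5e6, np.random.default_rng(1))
print(len(sizes), "fragments;", sizes[sizes < 5.7e6].sum() / 100e6, "mass fraction")
```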
Abstract:
Porcine circovirus type 2 (PCV2) is the essential infectious agent of post-weaning multisystemic wasting syndrome (PMWS), one of the most important diseases of swine. Although several studies have described different biological properties of the virus, some aspects of its replication cycle, including ultrastructural alterations, remain unknown. The aim of the present study was to describe for the first time a complete morphogenesis study of PCV2 in a clone of the lymphoblastoid L35 cell line at the ultrastructural level using electron microscopy techniques. Cells were infected with PCV2 at a multiplicity of infection of 10 and examined at 0, 6, 12, 24, 48, 60 and 72 h post-infection. PCV2 was internalized by endocytosis, after which the virus aggregated in intracytoplasmic inclusion bodies (ICIs). Subsequently, PCV2 was closely associated with mitochondria, completing a first cytoplasmic phase. The virus entered the nucleus for replication and virus assembly and encapsidation occurred with the participation of the nuclear membrane. Immature virions left the nucleus and formed ICIs in a second cytoplasmic phase. The results suggest that at the end of the replication cycle (between 24 and 48 h), PCV2 was released either by budding of mature virion clusters or by lysis of apoptotic or dead cells. In conclusion, the L35-derived clone represents a suitable in-vitro model for PCV2 morphogenesis studies and characterization of the PCV2 replication cycle. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
Chronic myelomonocytic leukaemia (CMML) is a heterogeneous haematopoietic disorder characterized by myeloproliferative or myelodysplastic features. At present, the pathogenesis of this malignancy is not completely understood. In this study, we sought to analyse gene expression profiles of CMML in order to characterize new molecular outcome predictors. A learning set of 32 untreated CMML patients at diagnosis was available for TaqMan low-density array gene expression analysis. From 93 selected genes related to cancer and the cell cycle, we built a five-gene prognostic index after multiplicity correction. Using this index, we characterized two categories of patients with distinct overall survival (94% vs. 19% for the good- and poor-survival groups, respectively; P = 0.007), and we successfully validated its strength on an independent cohort of 21 CMML patients with Affymetrix gene expression data. We found no specific patterns of association with traditional prognostic stratification parameters in the learning cohort. However, the poor-survival group correlated strongly with high-risk treated patients and with transformation to acute myeloid leukaemia. We report here a new multigene prognostic index for CMML, independent of the gene expression measurement method, which could be used as a powerful tool to predict clinical outcome and help physicians evaluate treatment criteria.
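The abstract does not name the multiplicity-correction procedure, so as an illustration only, here is the widely used Benjamini-Hochberg false-discovery-rate step one might apply when screening the 93 candidate genes:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: find the largest k such that the
    k-th smallest p-value satisfies p_(k) <= (k/m) * alpha, then select all
    hypotheses ranked at or below k. Shown as a common multiplicity correction;
    the article's actual method is not specified in the abstract."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    passed = p[order] <= (np.arange(1, m + 1) / m) * alpha
    k = np.nonzero(passed)[0].max() + 1 if passed.any() else 0
    selected = np.zeros(m, dtype=bool)
    selected[order[:k]] = True
    return selected

# Toy example: 93 gene-level p-values, skewed towards small values.
pvals = np.random.default_rng(0).uniform(size=93) ** 3
print(benjamini_hochberg(pvals).sum(), "genes survive the correction")
```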
Abstract:
Although cartel behaviour is almost universally (and rightly) condemned, it is not clear why cartel participants deserve the full wrath of the criminal law and its associated punishment. To fill this void, I develop a normative (or principled) justification for the criminalisation of conduct characteristic of ‘hard core’ cartels. The paper opens with a brief consideration of the rhetoric commonly used to denounce cartel activity, e.g. that it ‘steals from’ or ‘robs’ consumers. To put the discussion in context, a brief definition of ‘hard core’ cartel behaviour is provided and the harms associated with this activity are identified. These are: welfare losses in the form of appropriation (from consumer to producer) of consumer surplus, the creation of deadweight loss to the economy, the creation of productive inefficiency (hindering innovation of both products and processes), and the creation of so-called X-inefficiency. As not all activities which cause harm ought to be criminalised, a theory as to why certain harms in a liberal society can be criminalised is developed. It is based on JS Mill's harm to others principle (as refined by Feinberg) and on a choice of social institutions using Rawls's ‘veil of ignorance.’ The theory is centred on the value of individual choice in securing one's own well-being, with the market as an indispensable instrument for this. But as applied to the harm associated with cartel conduct, this theory shows that none of the earlier mentioned problems associated with this activity provides sufficient justification for criminalisation. However, as the harm from hard core cartel activity strikes at an important institution underpinning an individual's ability to secure their own well-being in a liberal society, criminalisation of hard core cartel behaviour finds its normative justification on this basis.
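To make the first two harms concrete, a textbook linear-demand illustration (not drawn from the article): with inverse demand P(Q) = a - bQ and constant marginal cost c < a, a cartel pricing at the monopoly level appropriates some consumer surplus and destroys some outright.

```latex
% Competitive benchmark: price c, output Q_c = (a - c)/b.
% Cartel (monopoly) output and price:
\[
  Q_m = \frac{a - c}{2b}, \qquad P_m = \frac{a + c}{2}.
\]
% Consumer surplus appropriated (transferred to producers):
\[
  (P_m - c)\,Q_m = \frac{(a - c)^2}{4b}.
\]
% Deadweight loss (surplus destroyed rather than transferred):
\[
  \tfrac{1}{2}\,(P_m - c)\,(Q_c - Q_m) = \frac{(a - c)^2}{8b}.
\]
```

In this linear case the deadweight loss is exactly half the transfer, which is why the appropriation and the efficiency loss are treated as distinct harms.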
Abstract:
The increasing need to understand complex products and systems with long life spans presents a significant challenge to designers, who increasingly require a broader understanding of the operational aspects of the system. This demands an evolution in current design practice, as designers are often constrained to provide a subsystem solution without full knowledge of the global system operation. Recently there has been a push to consider value-centric approaches, which should facilitate better or more rapid convergence to design solutions with predictable completion schedules. Value Driven Design is one such approach, in which value is used as the system's top-level objective function. This provides a broader view of the system and enables all subsystems and components to be designed with a view to their effect on project value. It also has the capacity to include value expressions for more qualitative aspects, such as environmental impact. However, application of the method to date has been restricted to comparing value in a programme whose lifespan is fixed and known a priori. This paper takes a novel view of value-driven design through the surplus value objective function and shows how it can be used to identify key sensitivities to guide designers in design trade-off decisions. By adopting a new time-based approach, it can be used to identify the optimum programme lifespan and hence allow trade-offs over the whole product life.
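A minimal sketch of a surplus-value objective with lifespan as a decision variable, assuming a simple discounted cash-flow form with illustrative numbers (the paper's actual objective function is not given in the abstract):

```python
import numpy as np

def surplus_value(lifespan_yrs, revenue, base_cost, cost_growth, dev_cost, rate=0.08):
    """Toy surplus-value objective: discounted net operating revenue over the
    programme lifespan minus up-front development cost. Escalating support
    costs (cost_growth) are an assumption added to create an interior optimum."""
    years = np.arange(1, int(lifespan_yrs) + 1)
    op_cost = base_cost * (1 + cost_growth) ** years   # costs rise as the system ages
    net = (revenue - op_cost) / (1 + rate) ** years    # discounted annual net revenue
    return net.sum() - dev_cost

# Sweep the lifespan to expose the trade-off and locate the optimum programme life.
spans = range(5, 41)
sv = [surplus_value(t, revenue=120.0, base_cost=60.0, cost_growth=0.06, dev_cost=300.0)
      for t in spans]
best_sv, best_span = max(zip(sv, spans))
print(f"maximum surplus value {best_sv:.1f} at a {best_span}-year lifespan")
```

Sensitivities of this kind (e.g. how the optimum shifts with the discount rate or cost escalation) are what guide the design trade-off decisions discussed above.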
Abstract:
Classical radiation biology research has centred on nuclear DNA as the main target of radiation-induced damage. Over the past two decades, this view has been challenged by a significant body of scientific evidence clearly showing that radiation-induced cell signalling has important roles in mediating the overall radiobiological response. These effects, generally termed radiation-induced bystander effects (RIBEs), have called the traditional DNA-targeted theory of radiation biology into question and highlighted an important role for cells not directly traversed by radiation. The multiplicity of experimental systems and exposure conditions in which RIBEs have been observed has hindered precise definition of these effects. However, RIBEs have recently been classified for different relevant human radiation exposure scenarios in an attempt to clarify their role in vivo. Despite significant research efforts in this area, there is little direct evidence for their role in clinically relevant exposure scenarios. In this review, we explore the clinical relevance of RIBEs, from classical experimental approaches through to novel models that have been used to further determine their potential implications in the clinic.
Abstract:
In recent years, the US Supreme Court has rather controversially extended the ambit of the Federal Arbitration Act, stretching arbitration’s reach into, inter alia, consumer matters, with the consequence that consumers are often (and unbeknownst to them) denied remedies which would otherwise be available. Such denied remedies include recourse to class action proceedings, effective denial of punitive damages, access to discovery and the ability to resolve the matter in a convenient forum.
The Court’s extension of arbitration’s ambit is controversial. Attempts to overturn this extension have been made in Congress, but to no avail. In contrast to American law, European consumer law views pre-dispute agreements to arbitrate directed at consumers with extreme suspicion, on grounds of fairness. Some, however, argue that pre-dispute agreements in consumer (and employment) matters are consumer-welfare enhancing: they decrease the costs of doing business, savings which are then passed on to the consumer. This Article examines these latter claims from both an economic and a normative perspective.
The economic analysis of these arguments shows that their assumptions do not hold. Rather than generating consumer surplus, the use of arbitration is likely to have the opposite effect. The industries from which the recent Supreme Court cases originated not only fail to exhibit the industrial structure assumed by the proponents of expanded arbitration, but also exhibit features that facilitate consumer-welfare-reducing collusion.
The normative analysis addresses the fairness concerns. It is explicitly based upon John Rawls’s notion of “justice as fairness,” which can provide a lens through which to evaluate social institutions. This Rawlsian analysis considers the use of extended arbitration in consumer matters in the light of the earlier economic results. It suggests that the asymmetries present in the contractual allocation of rights serve as prima facie evidence that such arbitration-induced exclusions are unjust/unfair. However, as asymmetry is only a prima facie test, a generalized criticism of the arbitration exclusions (of the sort found in Congress and underlying the European regime) is overbroad.
Abstract:
This paper studies disinflationary shocks in a non-linear New Keynesian model with search-and-matching frictions and moral hazard in the labor market. Our focus is on understanding the wage formation process, as well as the welfare costs of disinflations, in the presence of such labor market frictions.
The presence of imperfect information in labor markets imposes a lower bound on worker surplus that varies endogenously. Consequently, equilibrium can take two forms, depending on whether the no-shirking condition is binding. We also evaluate both regimes from a welfare perspective when the economy is subject to a perfectly credible disinflationary shock.
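One standard way to write such a bound, sketched here in the spirit of Shapiro-Stiglitz efficiency-wage models (the paper's exact formulation is not given in the abstract):

```latex
% No-shirking condition (illustrative form): the worker's surplus from
% employment must cover the gain from shirking, given the detection risk.
\[
  S_t \equiv W_t - U_t \;\ge\; \frac{e}{q} \equiv \underline{S}_t
\]
% W_t, U_t : values of employment and unemployment
% e : disutility of effort;  q : probability that shirking is detected
% In the model above the bound varies endogenously; the equilibrium regime
% depends on whether the condition binds (holds with equality) or is slack.
```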
Abstract:
African coastal regions are expected to experience the highest rates of population growth in coming decades. Fresh groundwater resources in the coastal zone of East Africa (EA) are highly vulnerable to seawater intrusion. Increasing water demand is leading to unsustainable and ill-planned well drilling and abstraction. Wells supplying domestic, industrial and agricultural needs are, or have become, in many areas too saline for use. Climate change, including weather changes and sea level rise, is expected to exacerbate this problem. The multiplicity of physical, demographic and socio-economic driving factors makes this a very challenging issue to manage. At present the state and probable evolution of coastal aquifers in EA are not well documented. The UPGro project 'Towards groundwater security in coastal East Africa' brings together teams from Kenya, Tanzania, the Comoros Islands and Europe to address this knowledge gap. An integrative multidisciplinary approach, combining the expertise of hydrogeologists, hydrologists and social scientists, is investigating selected sites along the coastal zone in each country. Hydrogeologic observatories have been established in different geologic and climatic settings representative of the coastal EA region, where focussed research will establish the current status of groundwater and identify future threats based on projected demographic and climate change scenarios. Researchers are also engaging with end users, local communities and stakeholder groups in each area, in order to understand the issues most affecting the communities and to search for sustainable strategies to address them.
Abstract:
For a multiplicity of socio-economic, geo-political, strategic and identity-based reasons, Turkey’s progress towards EU membership is often treated as a sui generis case. Yet although Turkey’s accession negotiations with the European Union (EU) are essentially a bilateral – and often stormy – affair, they take place within a wider and dynamic process of enlargement in which not only can the gloomy – sometimes dark – shadows of past and prospective enlargements be clearly detected, but so too can the often chill winds from ongoing, parallel negotiations with other candidates. How the EU negotiates accession and what it expects from candidates has continued to evolve since the EU began drawing up its framework for negotiations with Turkey ten years ago. This paper charts this evolution by first identifying changes in the light of Croatia’s negotiating experience, the ‘lessons learnt’ by the EU in meeting the challenges of Bulgarian and Romanian accession, the EU’s handling of Iceland’s membership bid and accession negotiations, and the revised approach to negotiating accession evident in the more recent frameworks for accession negotiations with Montenegro and Serbia. The paper then explores the extent to which these changes have impacted on the approach the EU has adopted in framing and progressing accession negotiations with Turkey. In doing so, it questions both the consistency with which the EU negotiates accession and the extent to which Turkey’s progress towards EU membership is conditioned by the broader dynamics of EU enlargement as opposed to simply the dynamics within EU-Turkey relations and domestic Turkish reform efforts.
Abstract:
This paper is a reexamination of the concept of the geopolitical border through a critical analysis of prevalent conceptualizations of borders, as they are articulated in the fields of geopolitics, political theory and international relations. Suggesting that thinking of borders as the derivative of territorial definitions disregards the dependency of territoriality and sovereign space on the praxes of border making, this paper offers an analytic distinction between normative articulations of borders and the border as a political practice. This distinction enables the identification of partial and incoherent border-making processes. Consequently, the creation of borders can be analyzed as an effect of a multiplicity of performative praxes, material, juridical and otherwise discursive, that operate in relation to the management of space and attribute it with geopolitical distinctions. Furthermore, the paper suggests that these praxes, which appear in dispersed sites and in a wide variety of loci, are intrinsically linked to different spatial practices of population management, of governmentality. Thus, I offer a reading of borders as a praxis which manages binary differentiations within matrices of governmentality, creating schisms in the population as a totality, through the evocation of sovereignty as the legitimizing source of this differentiation or of the means necessary to sustain it.
Abstract:
Film, History and Memory examines the relationship between film and history, exploring the multiplicity of ways in which films depict, contest, reinforce or subvert historical understanding. This volume broadens the focus from 'history', the study of past events, to 'memory', the processes – individual, generational, collective or state-driven – by which meanings are attached to the past. This approach acknowledges that the significance of the historical film lies less in its empirical qualities than in its powerful capacity to influence public thinking and discourse about the past, whether by shaping collective memory, popular history and social memory, or by retrieving suppressed or marginalized histories. This study aims to contribute to the growing literature on history and film through the breadth of its approach, in both disciplinary and geographical terms. Contributors are drawn not only from the discipline of history but also from film studies, film practice, art history, languages and literature, and cultural studies.