29 results for judge executor
Abstract:
This paper describes experiments relating to the perception of the roughness of simulated surfaces via the haptic and visual senses. Subjects used a magnitude estimation technique to judge the roughness of “virtual gratings” presented via a PHANToM haptic interface device and a standard visual display unit. It was shown that under haptic perception, subjects tended to perceive roughness as decreasing with increased grating period, though this relationship was not always statistically significant. Under visual exploration, the exact relationship between spatial period and perceived roughness was less well defined, though linear regressions provided a reliable approximation to individual subjects’ estimates.
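A minimal sketch of the per-subject analysis the abstract describes, fitting a linear regression of roughness estimates against grating period; the data values below are hypothetical, not the paper's:

    import numpy as np
    from scipy import stats

    # Hypothetical grating spatial periods (mm) and one subject's magnitude
    # estimates of roughness (illustrative values only).
    period = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
    estimate = np.array([8.2, 7.1, 6.0, 5.2, 4.1, 3.5])

    # Linear regression of perceived roughness on spatial period; a negative
    # slope corresponds to roughness decreasing with increased period.
    fit = stats.linregress(period, estimate)
    print(f"slope={fit.slope:.2f}, r^2={fit.rvalue**2:.2f}, p={fit.pvalue:.4f}")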
Abstract:
We explicitly tested for the first time the ‘environmental specificity’ of traditional 16S rRNA-targeted fluorescence in situ hybridization (FISH) through comparison of the bacterial diversity actually targeted in the environment with the diversity that should be exactly targeted (i.e. without mismatches) according to in silico analysis. To do this, we exploited advances in modern Flow Cytometry that enabled improved detection and therefore sorting of sub-micron-sized particles, and used probe PSE1284 (designed to target Pseudomonads) applied to Lolium perenne rhizosphere soil as our test system. The 6-carboxyfluorescein (6-FAM)-PSE1284-hybridised population, defined as displaying enhanced green fluorescence in Flow Cytometry, represented 3.51±1.28% of the total detected population when corrected using a nonsense (NON-EUB338) probe control. Analysis of 16S rRNA gene libraries constructed from Fluorescence Activated Cell Sorting (FACS)-recovered fluorescent populations (n=3) revealed that 98.5% of the total sorted population (Pseudomonas spp. comprising 68.7% and Burkholderia spp. 29.8%) was specifically targeted, as evidenced by the homology of the 16S rRNA sequences to the probe sequence. In silico evaluation of probe PSE1284 with the use of RDP-10 probeMatch justified the existence of Burkholderia spp. among the sorted cells. The lack of novelty among the Pseudomonas spp. sequences uncovered was notable, probably reflecting the well-studied nature of this functionally important genus. To judge the diversity recorded within the FACS-sorted population, rarefaction and DGGE analysis were used to evaluate, respectively, the proportion of Pseudomonas diversity uncovered by the sequencing effort and the representativeness of the Nycodenz® method for the extraction of bacterial cells from soil.
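As a sketch of the rarefaction step, the expected number of distinct sequence types in a subsample of a clone library can be computed with the standard hypergeometric rarefaction formula; the OTU counts below are hypothetical, not the study's data:

    from math import comb

    def expected_richness(counts, n):
        """Expected number of OTUs seen in a random subsample of size n,
        using the classic hypergeometric rarefaction formula."""
        N = sum(counts)
        return sum(1 - comb(N - Ni, n) / comb(N, n) for Ni in counts)

    # Hypothetical OTU abundance counts from a sorted-population library.
    counts = [40, 25, 10, 5, 3, 2, 1, 1]
    for n in (10, 30, 60, sum(counts)):
        print(n, round(expected_richness(counts, n), 2))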
Abstract:
The family of theories dubbed ‘luck egalitarianism’ represents an attempt to infuse egalitarian thinking with a concern for personal responsibility, arguing that inequalities are just when they result from, or the extent to which they result from, choice, but are unjust when they result from, or the extent to which they result from, luck. In this essay I argue that luck egalitarians should sometimes seek to limit inequalities, even when they have a fully choice-based pedigree (i.e., result only from the choices of agents). I grant that the broad approach is correct but argue that the temporal standpoint from which we judge whether the person can be held responsible, or the extent to which they can be held responsible, should be radically altered. Instead of asking, as Standard (or Static) Luck Egalitarianism seems to, whether or not, or to what extent, a person was responsible for the choice at the time of choosing, and asking the question of responsibility only once, we should ask whether, or to what extent, they are responsible for the choice at the point at which we are seeking to discover whether, or to what extent, the inequality is just, and so the question of responsibility is not settled but constantly under review. Such an approach will differ from Standard Luck Egalitarianism only if responsibility for a choice is not set in stone – if responsibility can weaken then we should not see the boundary between luck and responsibility within a particular action as static. Drawing on Derek Parfit’s illuminating discussions of personal identity, and contemporary literature on moral responsibility, I suggest there are good reasons to think that responsibility can weaken – that we are not necessarily fully responsible for a choice for ever, even if we were fully responsible at the time of choosing. I call the variant of luck egalitarianism that recognises this shift in temporal standpoint and that responsibility can weaken Dynamic Luck Egalitarianism (DLE). In conclusion I offer a preliminary discussion of what kind of policies DLE would support.
Abstract:
This paper examines cyclical behaviour in commercial property values over the period 1956 to 1996, using a structural time series (unobserved components) approach. The influence of the transition to short rent reviews during the late 1960s and the short and long-term impacts of the 1974 and 1990 property crashes are also incorporated into the analysis, via dummy variables. It is found that once these variables are taken into account a fairly regular cyclical pattern can be discerned, with a period of about 7.8 years. Furthermore, the 1974 and 1990 property crashes are shown to have had a major long-term impact on property value growth (presumably via their influence on investors' expectations).
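A minimal sketch of an unobserved-components specification of this kind, using statsmodels on synthetic data; the series, dummy positions, and component choices are illustrative assumptions, not the paper's:

    import numpy as np
    import statsmodels.api as sm

    # Synthetic annual log-value series standing in for the 1956-1996 data.
    rng = np.random.default_rng(0)
    t = np.arange(41)
    y = 0.02 * t + 0.3 * np.sin(2 * np.pi * t / 7.8) + rng.normal(0, 0.05, 41)

    # Dummy variables for the 1974 and 1990 crashes (illustrative positions:
    # years 18 and 34 counting from 1956).
    crash = np.zeros((41, 2))
    crash[18, 0] = 1.0
    crash[34, 1] = 1.0

    # Local linear trend plus a stochastic cycle, with the crash dummies as
    # exogenous regressors -- the broad structure the abstract describes.
    mod = sm.tsa.UnobservedComponents(
        y, level='local linear trend',
        cycle=True, stochastic_cycle=True, exog=crash)
    res = mod.fit(disp=False)
    print('estimated cycle period:', 2 * np.pi / res.params['frequency.cycle'])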
Abstract:
High-resolution ensemble simulations (Δx = 1 km) are performed with the Met Office Unified Model for the Boscastle (Cornwall, UK) flash-flooding event of 16 August 2004. Forecast uncertainties arising from imperfections in the forecast model are analysed by comparing the simulation results produced by two types of perturbation strategy. Motivated by the meteorology of the event, one type of perturbation alters relevant physics choices or parameter settings in the model's parametrization schemes. The other type of perturbation is designed to account for representativity error in the boundary-layer parametrization. It makes direct changes to the model state and provides a lower bound against which to judge the spread produced by other uncertainties. The Boscastle ensemble shows genuine skill at scales of approximately 60 km, and its spread can be estimated to within ∼10% with only eight members. Differences between the model-state perturbation and physics modification strategies are discussed, the former being more important for triggering and the latter for subsequent cell development, including the average internal structure of convective cells. Despite such differences, the spread in rainfall evaluated at skilful scales is shown to be only weakly sensitive to the perturbation strategy. This suggests that relatively simple strategies for treating model uncertainty may be sufficient for practical, convective-scale ensemble forecasting.
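A sketch of one way to evaluate spread at a skilful scale: smooth each member's rainfall field to that scale, then take the standard deviation across members. The fields, grid, and smoothing choice below are hypothetical, not the study's method:

    import numpy as np
    from scipy.ndimage import uniform_filter

    # Hypothetical 8-member ensemble of rainfall accumulations on a 1 km grid.
    rng = np.random.default_rng(1)
    members = rng.gamma(shape=2.0, scale=3.0, size=(8, 200, 200))

    # Smooth each member to ~60 km (the skilful scale reported) so that
    # small-scale, unpredictable detail is filtered out before comparison.
    smoothed = np.stack([uniform_filter(m, size=60) for m in members])

    # Ensemble spread: standard deviation across members at each grid point.
    spread = smoothed.std(axis=0)
    print('domain-mean spread (mm):', spread.mean())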
Abstract:
Some proponents of local knowledge, such as Sillitoe (2010), have expressed second thoughts about its capacity to effect development on the ‘revolutionary’ scale once predicted. Our argument in this article follows a similar route. Recent research into the management of livestock in South Africa makes clear that rural African livestock farmers experience uncertainty in relation to the control of stock diseases. State provision of veterinary services has been significantly reduced over the past decade. Both white and African livestock owners are to a greater extent left to their own devices. In some areas of animal disease management, African livestock owners have recourse to tried-and-tested local remedies, which are largely plant-based. But especially in the critical sphere of tick control, efficacious treatments are less evident, and livestock owners struggle to find adequate solutions to high tick loads. This is particularly important in South Africa in the early twenty-first century because land reform and the freedom to purchase land in the post-apartheid context afford African stockowners opportunities to expand livestock holdings. Our research suggests that the limits of local knowledge in dealing with ticks are one of the central problems faced by African livestock owners. We judge this not only in relation to efficacy but also in relation to the perceptions of livestock owners themselves. While confidence and practice vary, and there is increasing resort to chemical acaricides, we were struck by the uncertainty of livestock owners over the best strategies.
Abstract:
Keyphrases are added to documents to help identify the areas of interest they contain. However, in a significant proportion of papers author-selected keyphrases are not appropriate for the document they accompany: for instance, they can be classificatory rather than explanatory, or they are not updated when the focus of the paper changes. As such, automated methods for improving the use of keyphrases are needed, and various methods have been published. However, each method was evaluated using a different corpus, typically one relevant to the field of study of the method’s authors. This not only makes it difficult to incorporate the useful elements of algorithms in future work, but also makes comparing the results of each method inefficient and ineffective. This paper describes the work undertaken to compare five methods across a common baseline of corpora. The methods chosen were Term Frequency, Inverse Document Frequency, the C-Value, the NC-Value, and a Synonym-based approach. These methods were analysed to evaluate performance and quality of results, and to provide a future benchmark. It is shown that Term Frequency and Inverse Document Frequency were the best algorithms, with the Synonym approach following them. Following these findings, a study was undertaken into the value of using human evaluators to judge the outputs. The Synonym method was compared to the original author keyphrases of the Reuters’ News Corpus. The findings show that authors of Reuters’ news articles provide good keyphrases but that more often than not they do not provide any keyphrases.
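A minimal sketch of the two best-performing scorers, Term Frequency and Inverse Document Frequency, ranking candidate terms; the toy corpus and the simplification of scoring single tokens rather than full phrases are assumptions for illustration:

    import math

    corpus = [
        "the cat sat on the mat",
        "the dog chased the cat",
        "keyphrase extraction compares scoring methods",
    ]
    docs = [text.split() for text in corpus]

    def tf(term, doc):
        """Term Frequency: relative frequency of the term in one document."""
        return doc.count(term) / len(doc)

    def idf(term, docs):
        """Inverse Document Frequency: rarity of the term across the corpus."""
        df = sum(1 for doc in docs if term in doc)
        return math.log(len(docs) / (1 + df))

    # Rank the first document's terms under each scorer.
    candidates = sorted(set(docs[0]))
    print(sorted(candidates, key=lambda t: tf(t, docs[0]), reverse=True))
    print(sorted(candidates, key=lambda t: idf(t, docs), reverse=True))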
Abstract:
In Hobbesian terminology, ‘unwritten laws’ are natural laws enforced within a polity, by a non-sovereign judge, without some previous public promulgation. This article discusses the idea in the light of successive Hobbesian accounts of ‘law’ and ‘obligation’. Between De Cive and Leviathan, Hobbes dropped the idea that natural law is strictly speaking law, but he continued to believe unwritten laws must form a part of any legal system. He was unable to explain how such a law could claim a legal status. His loyalty to the notion, in spite of all the trouble that it caused, is a sign of his belief that moral knowledge is readily accessible to all.
Abstract:
Garfield produces a critique of neo-minimalist art practice by demonstrating how the artist Melanie Jackson’s Some things you are not allowed to send around the world (2003 and 2006) and the experimental film-maker Vivienne Dick’s Liberty’s booty (1980) – neither of which can be said to be about feeling ‘at home’ in the world, be it as a resident or as a nomad – examine global humanity through multi-positionality, excess and contingency, and thereby begin to articulate a new cosmopolitan relationship with the local – or, rather, with many different localities – in one and the same maximalist sweep of the work. ‘Maximalism’ in Garfield’s coinage signifies an excessive overloading (through editing, collage, and the sheer density of the range of the material) that enables the viewer to insert themselves into the narrative of the work. In the art of both Jackson and Dick, Garfield detects a refusal to know or to judge the world; instead, there is an attempt to incorporate the complexities of its full range into the singular vision of the work, challenging the viewer to identify what is at stake.
Abstract:
This book advances a fresh philosophical account of the relationship between the legislature and courts, opposing the common conception of law, in which it is legislatures that primarily create the law, and courts that primarily apply it. This conception has eclectic affinities with legal positivism, and although it may have been a helpful intellectual tool in the past, it now increasingly generates more problems than it solves. For this reason, the author argues, legal philosophers are better off abandoning it. At the same time they are asked to dismantle the philosophical and doctrinal infrastructure that has been based on it and which has been hitherto largely unquestioned. In its place, the book offers an alternative framework for understanding the role of courts and the legislature; a framework which is distinctly anti-positivist and which builds on Ronald Dworkin’s interpretive theory of law. But, contrary to Dworkin, it insists that legal duty is sensitive to the position one occupies in the project of governing; legal interpretation is not the solitary task of one super-judge, but a collaborative task structured by principles of institutional morality, such as separation of powers, which impose a moral duty on participants to respect each other's contributions. Moreover, this collaborative task will often involve citizens taking an active role in their interaction with the law.
Abstract:
In this paper the authors consider natural or feigned emotions, and their absence, in text-based dialogues. The dialogues occurred during interactions between human Judges/Interrogators and hidden entities in practical Turing tests implemented at Bletchley Park in June 2012. The authors focus on the interactions that left the Interrogator unable to say whether they were talking to a human or a machine after five minutes of questioning; the hidden interlocutor received an ‘unsure’ classification. In cases where the Judge provided post-event feedback, the authors present their rationale from three viva voce one-to-one Turing tests. The authors find that emoticons and other visual devices used to express feelings in text-based interaction were missing in the conversations between the Interrogators and hidden interlocutors.
Abstract:
One route to understanding the thoughts and feelings of others is by mentally putting one's self in their shoes and seeing the world from their perspective, i.e., by simulation. Simulation is potentially used not only for inferring how others feel, but also for predicting how we ourselves will feel in the future. For instance, one might judge the worth of a future reward by simulating how much it will eventually be enjoyed. In intertemporal choices between smaller immediate and larger delayed rewards, it is observed that as the length of delay increases, delayed rewards lose subjective value, a phenomenon known as temporal discounting. In this article, we develop a theoretical framework for the proposition that simulation mechanisms involved in empathizing with others also underlie intertemporal choices. This framework yields a testable psychological account of temporal discounting based on simulation. Such an account, if experimentally validated, could have important implications for how simulation mechanisms are investigated, and makes predictions about special populations characterized by putative deficits in simulating others.
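For concreteness, one standard model of temporal discounting (a common formulation in this literature, not one stated in the article) is hyperbolic discounting, where a reward of amount A delayed by D is valued at V = A / (1 + kD) for a subject-specific discount rate k:

    def hyperbolic_value(amount, delay, k=0.1):
        """Subjective value under hyperbolic discounting; k is illustrative."""
        return amount / (1.0 + k * delay)

    # With k = 0.1, a smaller immediate reward outweighs a larger delayed one:
    print(hyperbolic_value(50, delay=0))    # 50.0 now
    print(hyperbolic_value(100, delay=30))  # 25.0 after a 30-unit delay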
Abstract:
Social domains are classes of interpersonal processes each with distinct procedural rules underpinning mutual understanding, emotion regulation and action. We describe the features of three domains of family life – safety, attachment and discipline/expectation – and contrast them with exploratory processes in terms of the emotions expressed, the role of certainty versus uncertainty, and the degree of hierarchy in an interaction. We argue that everything that people say and do in family life carries information about the type of interaction they are engaged in – that is, the domain. However, sometimes what they say or how they behave does not make the domain clear, or participants in the social interactions are not in the same domain (there is a domain mismatch). This may result in misunderstandings, irresolvable arguments or distress. We describe how it is possible to identify domains and judge whether they are clear or unclear, and matched or mismatched, in observed family interactions and in accounts of family processes. This then provides a focus for treatment and helps to define criteria for evaluating outcomes.
Abstract:
In the past decade, a number of empirical researchers have suggested that laypeople have compatibilist intuitions. In a recent paper, Feltz and Millan (2015) have challenged this conclusion by claiming that most laypeople are only compatibilists in appearance and are in fact willing to attribute free will to people no matter what. As evidence for this claim, they have shown that an important proportion of laypeople still attribute free will to agents in fatalistic universes. In this paper, we first argue that Feltz and Millan’s error-theory rests on a conceptual confusion: it is perfectly acceptable for a certain brand of compatibilist to judge free will and fatalism to be compatible, as long as fatalism does not prevent agents from being the source of their actions. We then present the results of two studies showing that laypeople’s intuitions are best understood as following a certain brand of source compatibilism rather than a “free-will-no-matter-what” strategy.