53 results for: Aggressive incidents inside a Montreal barroom involving patrons
in CentAUR: Central Archive, University of Reading, UK
Abstract:
This essay appears in the first book to examine feminist curatorship over the last 40 years. It undertakes an extended reading of Cathy de Zegher's influential exhibition Inside the Visible: An Elliptical Traverse of 20th Century Art in, of, and from the Feminine (1995), which proposed that modern art should be understood through cyclical shifts involving the constant reinvention of artistic method, and which identified four key moments in 20th-century history to structure its project. The essay analyses Inside the Visible's concept of an elliptical traverse to raise questions about repetitions and recurrences in feminist exhibitions of the early 1980s, the mid-1990s and 2007, asking whether and in what ways questions of feminist curating have been continuously repeated and reinvented. It argues that Inside the Visible was a key project in second-wave feminism and exemplified debates about 'women's time', first theorised by Julia Kristeva. It concludes, however, that 'women's time' has had its moment, and that new conceptions of feminism and its history are needed if feminist curating is not endlessly to recycle its past. The essay informs a wider collaborative project on the sexual politics of violence, feminism and contemporary art, undertaken with Edinburgh and one of the editors of this collection.
Abstract:
This comparative inquiry examines the multi/bilingual nature and cultural diversity of two distinctly different linguistic and ethnic communities in Montreal – English speakers and Chinese speakers – with a focus on the multi/bilingual and multi/biliterate development of children from these two communities who attend French-language schools, by choice in one case and by law in the other. In both of these communities, children traditionally achieve academic success. The authors approach this investigation from the perspective of the parents' aspirations and expectations for, and their support of and involvement in, their children's education. These two communities share key similarities and differences that, when considered together, help to clarify a number of issues involving multi/biliteracy development, socio-economic and linguistic capital, minority/majority language status, mother-tongue support, home–school continuities, and linguistic identity.
Abstract:
Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al., 2008) is a state-of-the-art ocean model that is currently used for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours on 40 processors and produces roughly 20 GB of output as 50,000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command-line utilities for data pre-processing and post-processing prior to job resubmission. Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, preventing it from accumulating on the remote system and allowing the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts. G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight, and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat, and is therefore easy for system administrators to install and maintain. G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO and POLCOMS (Holt et al., 2008) ocean models are installed, and there are plans to install the Hadley Centre's HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al., 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al., 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al., 2008), which aims to simulate the world's coastal oceans. A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) the scientist prepares input files on his or her local machine; (2) using information provided by the Grid's Ganglia monitoring system, the scientist selects an appropriate compute resource; (3) the scientist runs the relevant workflow script on his or her local machine, unmodified except that calls to run the model (e.g. with "mpirun") are simply replaced with calls to "GRexRun"; (4) the G-Rex middleware automatically handles the uploading of input files to the remote resource and the downloading of output files back to the user, including their deletion from the remote system, during the run; (5) the scientist monitors the output files, using familiar analysis and visualization tools on his or her own local machine. G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front-end to larger-scale Grid resources such as the UK National Grid Service.
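The interaction pattern described above lends itself to a very small client. The sketch below shows, in outline, how a script might submit a run to a REST-style service and transfer output files back while the job is still running, as G-Rex does. All URLs, endpoint paths and JSON field names are hypothetical placeholders invented for illustration; the actual G-Rex interface is not documented in this abstract, so this is a minimal sketch of the pattern rather than the real API.

# Minimal sketch of a client for a G-Rex-like REST service.
# Endpoints and JSON fields are hypothetical placeholders; only the
# overall pattern (submit, poll, transfer output during the run)
# follows the description above. Uses only the standard library.
import json
import time
import urllib.request

BASE = "http://cluster.example.org/grex"   # hypothetical server URL

def submit_job(model, inputs):
    """POST a new model run; return the server-assigned job ID."""
    body = json.dumps({"model": model, "inputs": inputs}).encode()
    req = urllib.request.Request(
        BASE + "/jobs", data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["jobId"]

def poll_until_done(job_id, interval=30.0):
    """Poll job status, downloading each new output file as it
    appears, mirroring G-Rex's transfer-during-run behaviour."""
    seen = set()
    while True:
        with urllib.request.urlopen(BASE + "/jobs/" + job_id) as resp:
            state = json.load(resp)
        for name in state.get("outputs", []):
            if name not in seen:           # new output file on the server
                urllib.request.urlretrieve(
                    BASE + "/jobs/" + job_id + "/outputs/" + name, name)
                seen.add(name)
        if state["status"] in ("finished", "failed"):
            return
        time.sleep(interval)

if __name__ == "__main__":
    jid = submit_job("nemo", {"config": "ORCA1", "years": 1})
    poll_until_done(jid)

In a workflow script, the call to "mpirun" would simply be replaced by a client invocation of this kind (or by the GRexRun program itself), leaving the surrounding pre- and post-processing steps untouched.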
Abstract:
This article reflects on key methodological issues emerging from children and young people's involvement in data analysis processes. We outline a pragmatic framework illustrating different approaches to engaging children, using two case studies of children's experiences of participating in data analysis. The article highlights methods of engagement and important issues such as the balance of power between adults and children, training, support, ethical considerations, time and resources. We argue that involving children in data analysis processes can have several benefits, including enabling a greater understanding of children's perspectives and helping to prioritise children's agendas in policy and practice.
Abstract:
Evaluating agents in decision-making applications requires assessing their skill and predicting their behaviour. Both are well developed in poker-like situations, but less so in more complex game and model domains. This paper addresses both tasks by using Bayesian inference in a benchmark space of reference agents. The concepts are explained and demonstrated using the game of chess, but the model applies generically to any domain with quantifiable options and fallible choice. Demonstration applications address questions frequently asked by the chess community regarding the stability of the rating scale, the comparison of players of different eras and/or leagues, and controversial incidents possibly involving fraud. The last include alleged under-performance, fabrication of tournament results, and clandestine use of computer advice during competition. Beyond the model world of games, the aim is to improve fallible human performance in complex, high-value tasks.
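The core inference described here can be illustrated with a standard Bayesian update over a discrete benchmark space. In the toy sketch below, each reference agent is a model of fallible choice, i.e. a probability distribution over the options at each decision point, and observing a player's actual choices yields a posterior over which reference agent best describes the player. The agent names and probability tables are invented for illustration; the paper's actual benchmark space is built from analysis of real games.

# Toy Bayesian inference over a benchmark space of reference agents.
# Each agent assigns probabilities to the available options at each
# decision point; the numbers below are invented for illustration.

# Three hypothetical reference agents of increasing skill; at each of
# three decision points, option 0 is the "best" of the three options.
AGENTS = {
    "novice": [[0.40, 0.35, 0.25]] * 3,
    "club":   [[0.60, 0.25, 0.15]] * 3,
    "master": [[0.85, 0.10, 0.05]] * 3,
}

def posterior(observed_choices, prior=None):
    """Return P(agent | observed choices) by Bayes' rule."""
    prior = prior or {a: 1.0 / len(AGENTS) for a in AGENTS}
    unnorm = {}
    for agent, dists in AGENTS.items():
        likelihood = prior[agent]
        for dist, choice in zip(dists, observed_choices):
            likelihood *= dist[choice]    # P(this choice | agent)
        unnorm[agent] = likelihood
    total = sum(unnorm.values())
    return {a: p / total for a, p in unnorm.items()}

# A player who chose the best option twice, then a weaker option once:
print(posterior([0, 0, 1]))  # posterior mass shifts toward stronger agents

The same machinery supports the applications mentioned above: skill assessment reads off the posterior directly, while fraud detection asks whether the observed choices are far more probable under a much stronger reference agent than the player's rating would suggest.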
Abstract:
On 17 August 2007, the center of Hurricane Dean passed within 92 km of the mountainous island of Dominica in the West Indies. Despite its distance from the island and its category 1–2 state, Dean brought significant total precipitation exceeding 500 mm and caused numerous landslides. Four rain gauges, a Moderate Resolution Imaging Spectroradiometer (MODIS) image, and 5-min radar scans from Guadeloupe and Martinique are used to determine the storm’s structure and the mountains’ effect on precipitation. The encounter is best described in three phases: (i) an east-northeast dry flow with three isolated drifting cells; (ii) a brief passage of the narrow outer rainband; and (iii) an extended period with south-southeast airflow in a nearly stationary spiral rainband. In this final phase, from 1100 to 2400 UTC, heavy rainfall from the stationary rainband was doubled by orographic enhancement. This enhancement pushed the sloping soils past the landslide threshold. The enhancement was caused by a modified seeder–feeder accretion mechanism that created a “dipole” pattern of precipitation, including a dry zone over the ocean in the lee. In contrast to normal trade-wind conditions, no terrain triggering of convection was identified in the hurricane environment.
Abstract:
The ultimate criterion of success for interactive expert systems is that they will be used, and used to effect, by individuals other than the system developers. A key ingredient of success in most systems is involving users in the specification and development of systems as they are being built. However, until recently, system designers have paid little attention to ascertaining user needs and to developing systems with corresponding functionality and appropriate interfaces to match those requirements. Although the situation is beginning to change, many developers do not know how to go about involving users, or else tackle the problem in an inadequate way. This paper discusses the need for user involvement and considers why many developers are still not involving users in an optimal way. It looks at the different ways in which users can be involved in the development process and describes how to select appropriate techniques and methods for studying users. Finally, it discusses some of the problems inherent in involving users in expert system development, and recommends an approach which incorporates both ethnographic analysis and formal user testing.
Abstract:
Prior to recent legislative changes, sexual offences were contained in a combination of statutory provisions and common law that was criticized as being ill-equipped to tackle the intricacies of modern sexual (mis)behaviour. This pilot study used focus groups and a trial simulation to explore the capacity of these provisions to address the complexities of drug-assisted rape, and to identify factors which influenced jurors in rape trials involving intoxicants. The findings revealed that jurors considered numerous extra-legal factors when reaching a decision: rape myths, misconceptions about the impact of intoxicants, and factors such as the motivation of the defendant in administering an intoxicant. This paper draws upon these findings, focusing in particular on the interaction between juror attributions of blame and stereotypical conceptions about intoxication, sexual consent and drug-assisted rape. The findings of this pilot study form the basis for a larger-scale project (ESRC-funded, commenced January 2004) that examines this interaction in the context of new provisions under the Sexual Offences Act 2003.
Abstract:
A number of authors have proposed clinical trial designs involving the comparison of several experimental treatments with a control treatment in two or more stages. At the end of the first stage, the most promising experimental treatment is selected, and all other experimental treatments are dropped from the trial. Provided it is good enough, the selected experimental treatment is then compared with the control treatment in one or more subsequent stages. The analysis of data from such a trial is problematic because of the treatment selection and the possibility of stopping at interim analyses. These aspects lead to bias in the maximum-likelihood estimate of the advantage of the selected experimental treatment over the control and to inaccurate coverage for the associated confidence interval. In this paper, we evaluate the bias of the maximum-likelihood estimate and propose a bias-adjusted estimate. We also propose an approach to the construction of a confidence region for the vector of advantages of the experimental treatments over the control based on an ordering of the sample space. These regions are shown to have accurate coverage, although they are also shown to be necessarily unbounded. Confidence intervals for the advantage of the selected treatment are obtained from the confidence regions and are shown to have more accurate coverage than the standard confidence interval based upon the maximum-likelihood estimate and its asymptotic standard error.
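A quick Monte Carlo sketch makes the selection bias concrete. Below, several experimental arms with the same true advantage are compared, the apparently best arm is selected after stage 1, and the naive maximum-likelihood estimate for the selected arm is averaged over many replications; its excess over the true effect is the bias a bias-adjusted estimator must remove. The design parameters are invented for illustration, control-arm noise is folded into the arm means for simplicity, and interim stopping, which the paper also handles, is omitted.

# Monte Carlo illustration of selection bias in a two-stage
# select-the-best design. All parameters are invented for illustration;
# the sketch treats arm means directly as effects and omits interim
# stopping rules.
import random
import statistics

K = 4              # number of experimental treatments
TRUE_EFFECT = 0.3  # identical true advantage over control for every arm
N_STAGE1 = 50      # observations per arm in stage 1
N_STAGE2 = 50      # further observations on the selected arm in stage 2
SIGMA = 1.0        # response standard deviation
REPS = 20000

def arm_mean(n, mu):
    """Mean of n normally distributed responses."""
    return statistics.fmean(random.gauss(mu, SIGMA) for _ in range(n))

naive = []
for _ in range(REPS):
    stage1 = [arm_mean(N_STAGE1, TRUE_EFFECT) for _ in range(K)]
    best = max(range(K), key=stage1.__getitem__)   # select the best arm
    stage2 = arm_mean(N_STAGE2, TRUE_EFFECT)       # continue only that arm
    # Naive MLE pools both stages for the selected arm:
    naive.append((N_STAGE1 * stage1[best] + N_STAGE2 * stage2)
                 / (N_STAGE1 + N_STAGE2))

print("true effect   :", TRUE_EFFECT)
print("mean naive MLE:", round(statistics.fmean(naive), 3))
# The naive estimate exceeds the true effect: taking the maximum of
# several noisy stage-1 means systematically favours arms whose stage-1
# data happened to be high, and pooling stage 2 only partially dilutes
# this inflation.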