Abstract:
Populations of the Queensland fruit fly, Bactrocera tryoni, are routinely monitored using cue-lure, a male-only attractant. Such monitoring provides no information about females, and little information is available to show whether male and female B. tryoni numbers are correlated in the field. Using a data set of 1,148 weekly clearances of orange-ammonia baited traps, which catch both males and females, the correlation between male and female numbers was tested for 48 weeks of the year (four weeks each month) and for the combined data set. Weekly male and female trap catches were highly correlated in almost all cases, regardless of mean population size or time of year. For the whole year, the correlation between male and female numbers was r = 0.722, significant at p < 0.001. Results suggest that changes in the number of male B. tryoni, as detected through cue-lure sampling, will reflect changes in numbers of female B. tryoni.
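The abstract reports a Pearson correlation; as a rough illustration only, the sketch below computes r and its p-value with SciPy on invented weekly trap counts (the counts are placeholders, not the study's data).

```python
from scipy.stats import pearsonr

# Hypothetical weekly trap counts; the study itself used 1,148 weekly
# clearances of orange-ammonia baited traps.
male_counts = [12, 30, 7, 55, 21, 40, 18, 9]
female_counts = [10, 26, 9, 49, 25, 33, 15, 11]

r, p = pearsonr(male_counts, female_counts)
print(f"r = {r:.3f}, p = {p:.4g}")  # the paper reports r = 0.722, p < 0.001
```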
Abstract:
Scalable high-resolution tiled display walls are becoming increasingly important to decision makers and researchers because high pixel counts in combination with large screen areas facilitate content-rich, simultaneous display of computer-generated visualization information and high-definition video data from multiple sources. This tutorial is designed to cater for new users as well as researchers who are currently operating tiled display walls or 'OptiPortals'. We will discuss the current and future applications of display wall technology and explore opportunities for participants to collaborate and contribute in a growing community. Multiple tutorial streams will cover both hands-on practical development, as well as policy and method design for embedding these technologies into the research process. Attendees will be able to gain an understanding of how to get started with developing similar systems themselves, in addition to becoming familiar with typical applications and large-scale visualization techniques. Presentations in this tutorial will describe current implementations of tiled display walls that highlight the effective usage of screen real estate with various visualization datasets, including collaborative applications such as visualcasting, classroom learning and video conferencing. A feature presentation for this tutorial will be given by Jurgen Schulze from Calit2 at the University of California, San Diego. Jurgen is an expert in scientific visualization in virtual environments, human-computer interaction, real-time volume rendering, and graphics algorithms on programmable graphics hardware.
Abstract:
The briefly resurrected Marxism Today (1998), edited by Martin Jacques, sets out to deal with perceived failures of the 'Blair project' (Jacques, 1998: 2). Jacques opens the issue by reaffirming that Blair, which is to say New Labour, is the successful creation of the 'New Left' projects, the first of which began in the late 1950s and early 1960s in both Britain and the US, and which were vigorously revived in the late 1980s. However, the most comprehensive debate is largely contained in the first three articles, written by Hobsbawm, Hall, and Mulgan, insofar as the broadest defining parameters of Third Way 'values' are addressed by these writers.
Abstract:
Single particle analysis (SPA) coupled with high-resolution electron cryo-microscopy is emerging as a powerful technique for the structure determination of membrane protein complexes and soluble macromolecular assemblies. Current estimates suggest that ∼10⁴–10⁵ particle projections are required to attain a 3 Å resolution 3D reconstruction (symmetry dependent). Selecting this number of molecular projections differing in size, shape and symmetry is a rate-limiting step for the automation of 3D image reconstruction. Here, we present SwarmPS, a feature-rich, GUI-based software package to manage large-scale, semi-automated particle picking projects. The software provides cross-correlation and edge-detection algorithms. Algorithm-specific parameters are determined transparently and automatically through user interaction with the image, rather than by trial and error. Other features include multiple image handling (∼10²), local and global particle selection options, interactive image freezing, automatic particle centering, and full manual override to correct false positives and negatives. SwarmPS is user friendly, flexible, extensible, fast, and capable of exporting boxed-out projection images, or particle coordinates, compatible with downstream image processing suites.
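SwarmPS's own implementation is not shown here; the sketch below is a generic illustration of cross-correlation-based particle picking using scikit-image, and the function name, default thresholds, and input arrays are assumptions of this example rather than part of the package.

```python
from skimage.feature import match_template, peak_local_max

def pick_particles(micrograph, template, threshold=0.5, min_distance=20):
    """Return (row, col) coordinates of candidate particles."""
    # Normalized cross-correlation of the template against the micrograph;
    # pad_input=True keeps the output the same shape as the input image.
    ncc = match_template(micrograph, template, pad_input=True)
    # Local correlation maxima above the threshold are candidate particles.
    return peak_local_max(ncc, min_distance=min_distance,
                          threshold_abs=threshold)
```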
Abstract:
Multivariate volatility forecasts are an important input in many financial applications, in particular portfolio optimisation problems. Given the number of models available and the range of loss functions to discriminate between them, selecting the optimal forecasting model is clearly challenging. The aim of this thesis is to thoroughly investigate how effective many commonly used statistical (MSE and QLIKE) and economic (portfolio variance and portfolio utility) loss functions are at discriminating between competing multivariate volatility forecasts. An analytical investigation of the loss functions is performed to determine whether they identify the correct forecast as the best forecast. This is followed by an extensive simulation study that examines the ability of the loss functions to consistently rank forecasts, and their statistical power within tests of predictive ability. For the tests of predictive ability, the model confidence set (MCS) approach of Hansen, Lunde and Nason (2003, 2011) is employed. An empirical study then investigates whether the simulation findings hold in a realistic setting. In light of these earlier studies, a major empirical study seeks to identify the set of superior multivariate volatility forecasting models from 43 models that use either daily squared returns or realised volatility to generate forecasts. This study also assesses how the choice of volatility proxy affects the ability of the statistical loss functions to discriminate between forecasts. Analysis of the loss functions shows that QLIKE, MSE and portfolio variance can discriminate between multivariate volatility forecasts, while portfolio utility cannot. An examination of the effective loss functions shows that they all can identify the correct forecast at a point in time; however, their ability to discriminate between competing forecasts varies. That is, QLIKE is identified as the most effective loss function, followed by portfolio variance, which is in turn followed by MSE. The major empirical analysis reports that the optimal set of multivariate volatility forecasting models includes forecasts generated from both daily squared returns and realised volatility. Furthermore, it finds that the volatility proxy affects the statistical loss functions' ability to discriminate between forecasts in tests of predictive ability. These findings deepen our understanding of how to choose between competing multivariate volatility forecasts.
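The thesis's exact formulations are not reproduced here; as a hedged sketch, the snippet below shows standard forms of the two statistical loss functions named above for a single covariance forecast H evaluated against a volatility proxy Sigma (the function names and NumPy usage are this example's own).

```python
import numpy as np

def mse_loss(H, Sigma):
    """Element-wise MSE between a forecast covariance H and a proxy Sigma."""
    return np.mean((H - Sigma) ** 2)

def qlike_loss(H, Sigma):
    """Multivariate QLIKE, log|H| + tr(H^{-1} Sigma): minimised when H
    equals the true covariance underlying the proxy Sigma."""
    _, logdet = np.linalg.slogdet(H)
    return logdet + np.trace(np.linalg.solve(H, Sigma))
```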
Abstract:
It was reported that the manuscript of Crash was returned to the publisher with a note reading ‘The author is beyond psychiatric help’. Ballard took the lay diagnosis as proof of complete artistic success. Crash conflates the Freudian tropes of libido and thanatos, overlaying these onto the twentieth-century erotic icon, the car. Beyond mere incompetent adolescent copulatory fumblings in the back seat of the parental sedan or the clichéd phallic locomotor of the mid-life Ferrari, Ballard engages the full potentialities of the automobile as the locus and sine qua non of a perverse, though functional, erotic. ‘Autoeroticism’ is transformed into automotive, traumatic or surgical paraphilia, driving Helmut Newton’s insipid photo-essays of BDSM and orthopædics into an entirely new dimension, dancing precisely where (but more crucially, because) the ‘body is bruised to pleasure soul’. The serendipity of quotidian accidental collisions is supplanted, in pursuit of the fetishised object, by contrived (though not simulated) recreations of iconographic celebrity deaths. Penetration remains as a guiding trope of sexuality, but it is confounded by a perversity of focus. Such an obsessive pursuit of this autoerotic-as-reality necessitates the rejection of the law of human sexual regulation, requiring the re-interpretation of what constitutes sex itself by looking beyond or through conventional sexuality into Ballard’s paraphiliac and nightmarish consensual Other. This Other allows for (if not demands) the tangled wreckage of a sportscar to function as a transformative sexual agent, creating, of woman, a being of ‘free and perverse sexuality, releasing within its dying chromium and leaking engine-parts, all the deviant possibilities of her sex’.
Abstract:
The Web has become a worldwide repository of information which individuals, companies, and organizations utilize to solve or address various information problems. Many of these Web users employ automated agents to gather this information for them. Some assume that this approach represents a more sophisticated method of searching. However, there is little research investigating how Web agents search for online information. In this research, we first provide a classification for information agents using stages of information gathering, gathering approaches, and agent architecture. We then examine an implementation of one of the resulting classifications in detail, investigating how agents search for information on Web search engines, including the session, query, term, duration, and frequency of interactions. For this temporal study, we analyzed three data sets of queries and page views from agents interacting with the Excite and AltaVista search engines from 1997 to 2002, examining approximately 900,000 queries submitted by over 3,000 agents. Findings include: (1) agent sessions are extremely interactive, sometimes with hundreds of interactions per second; (2) agent queries are comparable to those of human searchers, with little use of query operators; (3) Web agents search for a relatively limited variety of information, with only 18% of the terms used being unique; and (4) the duration of agent-Web search engine interaction typically spans several hours. We discuss the implications for Web information agents and search engines.
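The original analysis pipeline is not described at code level; the sketch below merely illustrates the kind of per-agent log aggregation behind findings (3) and (4), assuming a hypothetical log format of (agent_id, timestamp, query) tuples.

```python
from collections import defaultdict

def session_stats(log_rows):
    """log_rows: iterable of (agent_id, timestamp, query) tuples, where
    timestamp is a datetime. Returns simple per-agent statistics."""
    by_agent = defaultdict(list)
    for agent_id, ts, query in log_rows:
        by_agent[agent_id].append((ts, query))
    stats = {}
    for agent_id, rows in by_agent.items():
        rows.sort()  # chronological order
        terms = [t for _, q in rows for t in q.lower().split()]
        stats[agent_id] = {
            "queries": len(rows),
            # share of distinct terms, cf. finding (3)
            "unique_term_pct": 100 * len(set(terms)) / len(terms) if terms else 0.0,
            # span of agent-search engine interaction, cf. finding (4)
            "duration": rows[-1][0] - rows[0][0],
        }
    return stats
```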
Abstract:
Detecting query reformulations within a session by a Web searcher is an important area of research for designing more helpful searching systems and targeting content to particular users. Methods explored by other researchers include both qualitative approaches (i.e., the use of human judges to manually analyze query patterns, usually on small samples) and nondeterministic algorithms, which typically use large amounts of training data to predict query modification during sessions. In this article, we explore three alternative methods for detection of session boundaries. All three methods are computationally straightforward and therefore easily implemented for detection of session changes. We examine 2,465,145 interactions from 534,507 users of Dogpile.com on May 6, 2005. We compare session analysis using (a) Internet Protocol address and cookie; (b) Internet Protocol address, cookie, and a temporal limit on intrasession interactions; and (c) Internet Protocol address, cookie, and query reformulation patterns. Overall, our analysis shows that defining sessions by query reformulation along with Internet Protocol address and cookie provides the best measure, resulting in an 82% increase in the count of sessions. Regardless of the method used, the mean session length was fewer than three queries, and the mean session duration was less than 30 minutes. Searchers most often modified their query by changing query terms (nearly 23% of all query modifications) rather than by adding or deleting terms. Implications are that for measuring searching traffic, unique sessions may be a better indicator than the common metric of unique visitors. This research also sheds light on the more complex aspects of Web searching involving query modifications and may lead to advances in searching tools.
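As a rough sketch of method (c), the snippet below starts a new session when consecutive queries from the same Internet Protocol address and cookie share no terms; treating zero term overlap as the reformulation boundary is this example's simplification, not the article's exact rule.

```python
def split_sessions(queries):
    """Split a chronologically ordered list of query strings from one
    (IP address, cookie) pair into sessions."""
    sessions, current, prev_terms = [], [], set()
    for q in queries:
        terms = set(q.lower().split())
        # No shared terms with the previous query -> treat as a new session.
        if current and not (terms & prev_terms):
            sessions.append(current)
            current = []
        current.append(q)
        prev_terms = terms
    if current:
        sessions.append(current)
    return sessions
```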