982 results for Free markets
Abstract:
In October 2012, Simone presented her book Architecture for a Free Subjectivity at the University of Michigan's Taubman College of Architecture and Urban Planning. The book explores the architectural significance of Deleuze's philosophy of subjectivization, and Guattari's overlooked dialogue on architecture and subjectivity. In doing so, it proposes that subjectivity is no longer the exclusive province of human beings, but extends to the architectural, the cinematic, the erotic, and the political. It defines a new position within the literature on Deleuze and architecture, while highlighting the neglected issue of subjectivity in contemporary discussion.
Abstract:
This paper will develop and illustrate a concept of institutional viscosity to balance the more agentive concept of motility with a theoretical account of structural conditions. The argument articulates with two bodies of work: Archer’s (2007, 2012) broad social theory of reflexivity as negotiating agency and social structures; and Urry’s (2007) sociology of mobility and mobility systems. It then illustrates the concept of viscosity as a variable (low to high viscosity) through two empirical studies conducted in the sociology of education that help demonstrate how degrees of viscosity interact with degrees of motility, and how this interaction can impact on motility over time. The first study explored how Australian Defence Force families cope with their children’s disrupted education given frequent forced relocations. The other study explored how middle class professionals relate to career and educational opportunities in rural and remote Queensland. These two life conditions have produced very different institutional practices to make relocations thinkable and doable, by variously constraining or enabling mobility. In turn, the degrees of viscosity mobile individuals meet with over time can erode or elevate their motility.
Abstract:
This case study examines the way in which Knowledge Unlatched is combining collective action and open access licenses to encourage innovation in markets for specialist academic books. Knowledge Unlatched is a not-for-profit organisation that has been established to help a global community of libraries coordinate their book purchasing activities more effectively and, in so doing, to ensure that the books librarians select for their own collections become available for free for anyone in the world to read. The Knowledge Unlatched model is an attempt to re-coordinate a market in order to facilitate a transition to digitally appropriate publishing models that include open access. It offers librarians an opportunity to facilitate the open access publication of books that their own readers would value access to. It provides publishers with a stable income stream on titles selected by libraries, as well as the ability to continue selling books to a wider market on their own terms. Knowledge Unlatched provides a rich case study for researchers and practitioners interested in understanding how innovations in procurement practices can be used to stimulate more effective, equitable markets for socially valuable products.
Abstract:
Frances Pinter and I have been visiting fellows at the Big Innovation Centre for more than a year now. Tucked away in a corner, inspired by BIC’s open innovation vision, we have been attempting to solve a problem that continues to perplex many in the era of digital affordance: creating sustainable markets for high quality new content that include free access for end users.
Abstract:
Whole-image descriptors have recently been shown to be remarkably robust to perceptual change, especially compared to local features. However, whole-image-based localization systems typically rely on heuristic methods for determining appropriate matching thresholds in a particular environment. These environment-specific tuning requirements, and the lack of a meaningful interpretation of these arbitrary thresholds, limit the general applicability of such systems. In this paper we present a Bayesian model of probability for whole-image descriptors that can be seamlessly integrated into localization systems designed for probabilistic visual input. We demonstrate this method using CAT-Graph, an appearance-based visual localization system originally designed for FAB-MAP-style probabilistic input. We show that using whole-image descriptors as visual input extends CAT-Graph's functionality to environments that experience a greater amount of perceptual change. We also present a method of estimating whole-image probability models in an online manner, removing the need for a prior training phase. We show that this online, automated training method can perform comparably to pre-trained, manually tuned local descriptor methods.
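A minimal sketch of the kind of Bayesian treatment this abstract describes, under the simplifying assumption that descriptor distances for matching and non-matching image pairs each follow a univariate Gaussian; the function names and parameters are illustrative, not taken from the paper:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Univariate Gaussian density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def match_probability(distance, match_mu=0.1, match_sigma=0.05,
                      nonmatch_mu=0.8, nonmatch_sigma=0.2, prior=0.5):
    """Posterior probability that two whole-image descriptors depict the
    same place, given their distance, via Bayes' rule over two Gaussian
    likelihoods (matching vs. non-matching pairs). Replaces an arbitrary
    matching threshold with an interpretable probability."""
    p_match = gaussian_pdf(distance, match_mu, match_sigma) * prior
    p_nonmatch = gaussian_pdf(distance, nonmatch_mu, nonmatch_sigma) * (1 - prior)
    return p_match / (p_match + p_nonmatch)
```

The online-training idea in the abstract would correspond to re-estimating the four Gaussian parameters from incoming distance observations rather than fixing them in advance.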
Abstract:
Purpose. To establish a simple and rapid analytical method, based on direct insertion/electron ionization-mass spectrometry (DI/EI-MS), for measuring free cholesterol in tears from humans and rabbits. Methods. A stable-isotope dilution protocol employing DI/EI-MS in selected ion monitoring mode was developed and validated. It was used to quantify the free cholesterol content in human and rabbit tear extracts. Tears were collected from adult humans (n = 15) and rabbits (n = 10) and lipids extracted. Results. Screening, full-scan (m/z 40-600) DI/EI-MS analysis of crude tear extracts showed that diagnostic ions located in the mass range m/z 350 to 400 were those derived from free cholesterol, with no contribution from cholesterol esters. DI/EI-MS data acquired using selected ion monitoring (SIM) were analyzed for the abundance ratios of diagnostic ions relative to their stable isotope-labeled analogues arising from the D6-cholesterol internal standard. Standard curves of good linearity were produced, with an on-probe limit of detection of 3 ng (at 3:1 signal-to-noise) and a limit of quantification of 8 ng (at 10:1 signal-to-noise). The concentration of free cholesterol in human tears was 15 ± 6 μg/g, which was higher than in rabbit tears (10 ± 5 μg/g). Conclusions. A stable-isotope dilution DI/EI-SIM method for free cholesterol quantification without prior chromatographic separation was established. Using this method demonstrated that humans have higher free cholesterol levels in their tears than rabbits, in agreement with previous reports. This paper provides a rapid and reliable method to measure free cholesterol in small-volume clinical samples. © 2013 The Association for Research in Vision and Ophthalmology, Inc.
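The arithmetic behind stable-isotope dilution is simple enough to sketch. This is an idealized single-point calculation (in practice a standard curve is fitted); the function name and arguments are illustrative, not from the paper:

```python
def free_cholesterol_ug_per_g(analyte_ion_abundance, istd_ion_abundance,
                              istd_amount_ng, tear_mass_mg):
    """Stable-isotope dilution quantification: the analyte amount equals
    the analyte/internal-standard ion abundance ratio multiplied by the
    known amount of spiked D6-cholesterol internal standard; dividing by
    the tear sample mass gives a concentration in ug/g."""
    analyte_ng = (analyte_ion_abundance / istd_ion_abundance) * istd_amount_ng
    return (analyte_ng / 1000.0) / (tear_mass_mg / 1000.0)  # ng -> ug, mg -> g
```

For example, an abundance ratio of 1.5 against 100 ng of internal standard in a 10 mg tear sample corresponds to 15 μg/g, the mean value reported for human tears.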
Jacobian-free Newton-Krylov methods with GPU acceleration for computing nonlinear ship wave patterns
Abstract:
The nonlinear problem of steady free-surface flow past a submerged source is considered as a case study for three-dimensional ship wave problems. Of particular interest is the distinctive wedge-shaped wave pattern that forms on the surface of the fluid. By reformulating the governing equations with a standard boundary-integral method, we derive a system of nonlinear algebraic equations that enforce a singular integro-differential equation at each midpoint on a two-dimensional mesh. Our contribution is to solve the system of equations with a Jacobian-free Newton-Krylov method together with a banded preconditioner that is carefully constructed with entries taken from the Jacobian of the linearised problem. Further, we are able to utilise graphics processing unit acceleration to significantly increase the grid refinement and decrease the run-time of our solutions in comparison to schemes that are presently employed in the literature. Our approach provides opportunities to explore the nonlinear features of three-dimensional ship wave patterns, such as the shape of steep waves close to their limiting configuration, in a manner that has been possible in the two-dimensional analogue for some time.
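The core trick of a Jacobian-free Newton-Krylov method is that the Jacobian is never assembled: only Jacobian-vector products are needed, and these can be approximated by finite differences of the residual. The sketch below illustrates that idea on a toy two-equation system, using damped Richardson iteration as a stand-in for the GMRES solve and banded preconditioner of the paper; everything here is illustrative, not the authors' implementation:

```python
def jfnk_solve(F, u0, alpha=0.3, eps=1e-7, tol=1e-10,
               max_newton=50, max_inner=200):
    """Tiny Jacobian-free Newton iteration on a list-valued residual F.
    The Jacobian J is never formed: J @ v is approximated by the finite
    difference (F(u + eps*v) - F(u)) / eps, and the Newton correction
    J d = -F(u) is solved iteratively using only these products."""
    u = list(u0)
    for _ in range(max_newton):
        Fu = F(u)
        if max(abs(r) for r in Fu) < tol:
            break

        def Jv(v):
            # Finite-difference Jacobian-vector product.
            Fp = F([ui + eps * vi for ui, vi in zip(u, v)])
            return [(a - b) / eps for a, b in zip(Fp, Fu)]

        # Damped Richardson iteration for J d = -F(u); a real solver
        # would use preconditioned GMRES here.
        d = [0.0] * len(u)
        for _ in range(max_inner):
            Jd = Jv(d)
            d = [di + alpha * (-fi - jdi) for di, fi, jdi in zip(d, Fu, Jd)]
        u = [ui + di for ui, di in zip(u, d)]
    return u
```

In the paper's setting, F would be the vector of nonlinear boundary-integral residuals on the two-dimensional mesh, and the GPU acceleration applies to evaluating F itself.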
Abstract:
This article presents new theoretical and empirical evidence on the forecasting ability of prediction markets. We develop a model that predicts that the time until expiration of a prediction market should negatively affect the accuracy of prices as a forecasting tool in the direction of a 'favourite/longshot bias'. That is, high-likelihood events are underpriced, and low-likelihood events are overpriced. We confirm this result using a large data set of prediction market transaction prices. Prediction markets are reasonably well calibrated when time to expiration is relatively short, but prices are significantly biased for events farther in the future. When the time value of money is considered, the miscalibration can be exploited to earn excess returns only when the trader has a relatively low discount rate.
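Calibration of the kind tested here can be checked by binning contracts by price and comparing each bin's mean price with the realized frequency of the event. A minimal sketch, with illustrative names (the paper's data set and methodology are more involved):

```python
def calibration_table(trades, n_bins=10):
    """Group (price, outcome) pairs into price bins and compare the mean
    price in each bin with the realized frequency of the event. Under a
    favourite/longshot bias, high-price (favourite) bins resolve 'yes'
    more often than their price implies, and low-price (longshot) bins
    less often. Prices are in [0, 1]; outcomes are 0 or 1."""
    bins = [[] for _ in range(n_bins)]
    for price, outcome in trades:
        idx = min(int(price * n_bins), n_bins - 1)
        bins[idx].append((price, outcome))
    table = []
    for b in bins:
        if b:
            mean_price = sum(p for p, _ in b) / len(b)
            freq = sum(o for _, o in b) / len(b)
            table.append((mean_price, freq, len(b)))
    return table
```

Repeating this separately for short- and long-horizon contracts would reproduce the comparison described in the abstract.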
Abstract:
In this paper we introduce and discuss the nature of free-play in the context of three open-ended interactive art installation works. We observe the situated free-play of participants in these environments and, building on prior work, devise a set of sensitising terms derived both from the literature and from what we observe of participants interacting there. These sensitising terms act as guides and are designed to be used by those who experience, evaluate or report on open-ended interactive art. That is, we propose these terms as a common-ground language: for participants describing their experience while in the artwork, for researchers in the various stages of the research process (observation, coding, analysis, reporting, and publication), and for interdisciplinary researchers working across the fields of HCI and art. This work builds a foundation for understanding the relationship between free-play, open-ended environments, and interactive installations, and contributes sensitising terms useful to the HCI community for the discussion and analysis of open-ended interactive art works.
Abstract:
The aim of this research is to report initial experimental results and evaluation of a clinician-driven automated method that can address the issue of misdiagnosis from unstructured radiology reports. Timely diagnosis and reporting of patient symptoms in hospital emergency departments (ED) is a critical component of health services delivery. However, due to dispersed information resources and vast amounts of manual processing of unstructured information, an accurate point-of-care diagnosis is often difficult. A rule-based method that considers the occurrence of clinician-specified keywords related to radiological findings was developed to identify limb abnormalities, such as fractures. A dataset containing 99 narrative reports of radiological findings was sourced from a tertiary hospital. The rule-based method achieved an F-measure of 0.80 and an accuracy of 0.80. While our method achieves promising performance, a number of avenues for improvement were identified using advanced natural language processing (NLP) techniques.
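A rule-based keyword method of this kind reduces to a small amount of code. The sketch below is a minimal stdlib illustration; the gazetteer contents are hypothetical, since the clinician-specified keyword list is not published in the abstract:

```python
import re

# Hypothetical gazetteer of abnormality keywords; the actual
# clinician-specified list used in the study is not given here.
ABNORMALITY_GAZETTEER = {"fracture", "fractured", "dislocation", "avulsion"}

def classify_report(report_text, gazetteer=ABNORMALITY_GAZETTEER):
    """Flag a free-text radiology report as 'abnormal' if any gazetteer
    keyword occurs in it, mirroring the rule-based approach described:
    an X-ray report is classified abnormal when it contains evidence
    matching the clinician-specified keywords."""
    tokens = set(re.findall(r"[a-z]+", report_text.lower()))
    return "abnormal" if tokens & gazetteer else "normal"
```

The limitation that motivates the NLP follow-up work is visible here: simple keyword matching cannot handle negation ("no fracture seen") or misspellings.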
Abstract:
Objective To develop and evaluate machine learning techniques that identify limb fractures and other abnormalities (e.g. dislocations) from radiology reports. Materials and Methods 99 free-text reports of limb radiology examinations were acquired from an Australian public hospital. Two clinicians were employed to identify fractures and abnormalities from the reports; a third senior clinician resolved disagreements. These assessors found that, of the 99 reports, 48 referred to fractures or abnormalities of limb structures. Automated methods were then used to extract features from these reports that could be useful for their automatic classification. The Naive Bayes classification algorithm and two implementations of the support vector machine algorithm were formally evaluated using cross-validation over the 99 reports. Results Results show that the Naive Bayes classifier accurately identifies fractures and other abnormalities from the radiology reports. These results were achieved when extracting stemmed token bigram and negation features, as well as using these features in combination with SNOMED CT concepts related to abnormalities and disorders. The latter feature has not been used in previous work that attempted to classify free-text radiology reports. Discussion Automated classification methods have proven effective at identifying fractures and other abnormalities from radiology reports (F-measure up to 92.31%). Key to the success of these techniques are features such as stemmed token bigrams, negations, and SNOMED CT concepts associated with morphologic abnormalities and disorders. Conclusion This investigation shows early promising results, and future work will further validate and strengthen the proposed approaches.
Abstract:
The past decade has seen an increase in the number of significant natural disasters that have caused considerable loss of life as well as damage to property markets in the affected areas. In many cases, these natural disasters have caused not only significant property damage but the total destruction of property in the location. With these disasters attracting considerable media attention, the public are more aware of where the affected property markets are, as well as the extent of damage to the properties concerned. This heightened level of awareness inevitably has an impact on participants in the property market, whether developer, vendor or investor. To assess this issue, a residential property market that was affected by a significant natural disaster over the past 2 years has been analysed to determine the overall impact of the disaster on buyer, renter and vendor behaviour, as well as on prices in these residential markets. This paper is based on data from the Brisbane flood of January 2011. This natural disaster resulted in loss of life and the partial or total devastation of considerable residential property sectors. Data for the research are based on the residential sales and rental listings for each week of the study period to determine the level of activity in the specific property sectors; these are also compared to the median house prices for the various suburbs over the same period, with suburbs classified as either flood-affected or flood-free. As 48 suburbs are included in the study, it has been possible to group them on a socio-economic basis to determine possible differences due to location and value. Data were accessed from realestate.com.au, a free real estate site that provides details of current rental and sales listings on a suburb basis; RP Data, a commercial property sales database; and the Australian Bureau of Statistics.
The paper found that sales listings fell immediately after the flood in the affected areas, but there was no corresponding fall or increase in sales listings in the flood-free suburbs. There was a significant decrease in the number of rental listings following the flood as affected parties sought alternative accommodation. The greatest fall in rental listings was in areas close to the flood-affected suburbs, indicating the desire to be close to the flooded property during the repair period.
Abstract:
The article introduces a novel platform for conducting controlled and risk-free driving and traveling behavior studies, called the Cyber-Physical System Simulator (CPSS). The key features of CPSS are: (1) simulation of multiuser immersive driving in a three-dimensional (3D) virtual environment; (2) integration of traffic and communication simulators with human driving based on dedicated middleware; and (3) accessibility of the multiuser driving simulator on popular software and hardware platforms. This combination of features allows us to easily collect large-scale data on interesting phenomena regarding the interaction between multiple user drivers, which is not possible with current single-user driving simulators. The core original contribution of this article is threefold: (1) we introduce a multiuser driving simulator based on DiVE, our original massively multiuser networked 3D virtual environment; (2) we introduce OpenV2X, a middleware for simulating vehicle-to-vehicle and vehicle-to-infrastructure communication; and (3) we present two experiments based on our CPSS platform. The first experiment investigates the "rubbernecking" phenomenon, where a platoon of four user drivers experiences an accident in the oncoming direction of traffic. Second, we report on a pilot study of the effectiveness of a Cooperative Intelligent Transport Systems advisory system.
Abstract:
Background Timely diagnosis and reporting of patient symptoms in hospital emergency departments (ED) is a critical component of health services delivery. However, due to dispersed information resources and a vast amount of manual processing of unstructured information, accurate point-of-care diagnosis is often difficult. Aims The aim of this research is to report an initial experimental evaluation of a clinician-informed automated method addressing initial misdiagnoses associated with delayed receipt of unstructured radiology reports. Method A method was developed that resembles clinical reasoning for identifying limb abnormalities. The method consists of a gazetteer of keywords related to radiological findings; it classifies an X-ray report as abnormal if the report contains evidence matching the gazetteer. A set of 99 narrative reports of radiological findings was sourced from a tertiary hospital. Reports were manually assessed by two clinicians and discrepancies were validated by a third expert ED clinician; the final manual classification generated by the expert ED clinician was used as ground truth to empirically evaluate the approach. Results The automated method, which identifies limb abnormalities by searching for keywords specified by clinicians, achieved an F-measure of 0.80 and an accuracy of 0.80. Conclusion While the automated clinician-driven method achieved promising performance, a number of avenues for improvement were identified using advanced natural language processing (NLP) and machine learning techniques.