739 results for Hidden homelessness
Abstract:
The dynamics of inter-regional communication within the brain during cognitive processing – referred to as functional connectivity – are investigated as a control feature for a brain–computer interface (BCI). EMDPL is used to map phase-synchronization levels between all channel-pair combinations in the EEG, yielding complex networks of channel connectivity at all time–frequency locations. The mean clustering coefficient is then used as a descriptive feature encapsulating information about inter-channel connectivity. Hidden Markov models are applied to characterize and classify the dynamics of the resulting complex networks. Highly accurate classification is achieved when this technique is applied to EEG recorded during real and imagined single finger taps. These results are compared to traditional features used in the classification of a finger-tap BCI, demonstrating that functional connectivity dynamics provide additional information and improved BCI control accuracies.
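The pipeline above reduces each time–frequency connectivity network to a single scalar: threshold the channel-by-channel synchronization matrix into a binary graph, then take the mean clustering coefficient. A minimal numpy sketch of that feature extraction follows; the phase-locking matrix `plv` and the threshold value are illustrative stand-ins, not values from the paper.

```python
import numpy as np

def mean_clustering(adj):
    """Mean clustering coefficient of an undirected binary graph
    given as a 0/1 adjacency matrix with no self-loops."""
    a = adj.astype(float)
    deg = a.sum(axis=1)                      # node degrees
    tri = np.diag(a @ a @ a) / 2.0           # triangles through each node
    possible = deg * (deg - 1) / 2.0         # possible triangles per node
    c = np.where(possible > 0, tri / np.maximum(possible, 1.0), 0.0)
    return float(c.mean())

def connectivity_feature(plv, threshold=0.6):
    """Threshold a channel-by-channel phase-synchronisation matrix into
    a binary network and summarise it by its mean clustering coefficient.
    The threshold is an illustrative choice, not the paper's."""
    adj = (plv >= threshold).astype(int)
    np.fill_diagonal(adj, 0)
    return mean_clustering(adj)

# Toy example: two tightly phase-locked groups of 4 channels each.
plv = np.full((8, 8), 0.2)
plv[:4, :4] = 0.9
plv[4:, 4:] = 0.9
print(connectivity_feature(plv))  # 1.0: each group forms a clique
```

Computing this feature per time–frequency location produces the scalar time series that a hidden Markov model can then classify.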
Abstract:
Following the US model, the UK has seen considerable innovation in the funding, finance and procurement of real estate in the last decade. In the growing CMBS market, asset-backed securitisations have included $2.25 billion secured on the Broadgate office development and issues secured on Canary Wharf and the Trafford Centre regional mall. Major occupiers (the retailer Sainsbury’s, the retail bank Abbey National) have engaged in innovative sale-and-leaseback and outsourcing schemes. Strong claims are made concerning the benefits of such schemes – e.g. British Land were reported to have reduced their weighted cost of debt by 150bp as a result of the Broadgate issue. The paper reports preliminary findings from a project funded by the Corporation of London and the RICS Research Foundation examining a number of innovative schemes to identify, within a formal finance framework, sources of added value and hidden costs. The analysis indicates that many of the claimed gains conceal costs – in terms of the market value of debt or flexibility of management – while others result from unusual firm or market conditions (for example, utilising the UK long lease and the unusual shape of the yield curve). Nonetheless, there are real gains resulting from the innovations, reflecting arbitrage and institutional constraints in the direct (private) real estate market.
Abstract:
Deception-detection is the crux of Turing’s experiment to examine machine thinking, conveyed through a capacity to respond with sustained and satisfactory answers to unrestricted questions put by a human interrogator. However, in the 60 years to the month since the publication of Computing Machinery and Intelligence, little agreement exists on a canonical format for Turing’s textual game of imitation, deception and machine intelligence. This research raises, from the trapped mine of philosophical claims, counter-claims and rebuttals, Turing’s own distinct five-minute question–answer imitation game, which he envisioned practicalised in two different ways: a) a two-participant, interrogator–witness viva voce; b) a three-participant comparison of a machine with a human, both questioned simultaneously by a human interrogator. Using Loebner’s 18th Prize for Artificial Intelligence contest, and Colby et al.’s 1972 transcript-analysis paradigm, this research practicalised Turing’s imitation game with over 400 human participants and 13 machines across three original experiments. Results show that, at the current state of technology, a deception rate of 8.33% was achieved by machines in 60 human–machine simultaneous comparison tests. Results also show that more than 1 in 3 reviewers succumbed to hidden-interlocutor misidentification after reading transcripts from experiment 2. Deception-detection is essential to uncover the increasing number of malfeasant programmes, such as CyberLover, developed to steal identities and financially defraud users in chatrooms across the Internet. Practicalising Turing’s two tests can assist in understanding natural dialogue and mitigate the risk from cybercrime.
Abstract:
Following earlier work on the overall career difficulties and low economic rewards faced by graduates in creative disciplines, the paper takes a closer look at the different career patterns and economic performance of “Bohemian” graduates across different creative disciplines. While it is widely acknowledged in the literature that careers in the creative field tend to be unstructured, often relying on part-time work and low wages, our knowledge of how these characteristics differ across the creative industries and occupational sectors is very limited. The paper explores the different trajectories and career patterns experienced by graduates in different creative disciplinary fields and their ability to enter creative occupations. Data from the Higher Education Statistics Agency (HESA) are presented, articulating a complex picture of the reality of finding a creative occupation for creative graduates. While students of some disciplines struggle to find full-time work in the creative economy, for others full-time occupation is the norm. Geography also plays a crucial role in offering graduates opportunities in creative occupations and higher salaries. The findings are contextualised in the New Labour cultural policy framework, and conclusions are drawn on whether the creative-industries policy construct has hidden a very problematic reality of winners and losers in the creative economy.
Abstract:
Undirected graphical models are widely used in statistics, physics and machine vision. However, Bayesian parameter estimation for undirected models is extremely challenging, since evaluation of the posterior typically involves the calculation of an intractable normalising constant. This problem has received much attention, but very little of it has focused on the important practical case where the data consist of noisy or incomplete observations of the underlying hidden structure. This paper specifically addresses this problem, comparing two alternative methodologies. In the first of these approaches, particle Markov chain Monte Carlo (Andrieu et al., 2010) is used to efficiently explore the parameter space, combined with the exchange algorithm (Murray et al., 2006) for avoiding the calculation of the intractable normalising constant (a proof showing that this combination targets the correct distribution is found in a supplementary appendix online). This approach is compared with approximate Bayesian computation (Pritchard et al., 1999). Applications to estimating the parameters of Ising models and exponential random graphs from noisy data are presented. Each algorithm used in the paper targets an approximation to the true posterior, due to the use of MCMC to simulate from the latent graphical model in lieu of being able to do this exactly in general. The supplementary appendix also describes the nature of the resulting approximation.
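To make the second methodology concrete, here is a minimal sketch of ABC rejection sampling (Pritchard et al., 1999 style) for the interaction parameter of a small Ising model. As the abstract notes, exact simulation is generally unavailable, so a Gibbs sampler stands in for it; the lattice size, prior range, tolerance and sweep counts are all illustrative choices, not the paper's settings.

```python
import numpy as np

def gibbs_ising(theta, n=10, sweeps=80, rng=None):
    """Approximate draw from a 2-D Ising model on a torus with
    interaction parameter theta, via single-site Gibbs sampling
    (used in lieu of exact sampling)."""
    rng = rng or np.random.default_rng()
    x = rng.choice([-1, 1], size=(n, n))
    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                s = (x[(i + 1) % n, j] + x[i - 1, j]
                     + x[i, (j + 1) % n] + x[i, j - 1])
                p = 1.0 / (1.0 + np.exp(-2.0 * theta * s))  # P(x_ij = +1)
                x[i, j] = 1 if rng.random() < p else -1
    return x

def suff_stat(x):
    # Sum of neighbouring-spin products: the Ising sufficient statistic.
    return float((x * np.roll(x, 1, 0)).sum() + (x * np.roll(x, 1, 1)).sum())

# ABC rejection: keep prior draws whose simulated statistic lands
# close to the "observed" one.
rng = np.random.default_rng(1)
s_obs = suff_stat(gibbs_ising(0.3, rng=rng))  # pseudo-observed data
accepted = [th for th in rng.uniform(0.0, 0.6, 60)
            if abs(suff_stat(gibbs_ising(th, sweeps=40, rng=rng)) - s_obs) < 30]
print(len(accepted))
```

The accepted draws form an approximate posterior sample; the paper's point is that both this route and the particle MCMC/exchange route target approximations whose quality must be characterised.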
Abstract:
Previous studies using coupled general circulation models (GCMs) suggest that the atmosphere model plays a dominant role in the modeled El Niño–Southern Oscillation (ENSO), and that intermodel differences in the thermodynamical damping of sea surface temperatures (SSTs) are a dominant contributor to the ENSO amplitude diversity. This study presents a detailed analysis of the shortwave flux feedback (aSW) in 12 Coupled Model Intercomparison Project phase 3 (CMIP3) simulations, motivated by findings that aSW is the primary contributor to model thermodynamical damping errors. A “feedback decomposition method,” developed to elucidate the aSW biases, shows that all models underestimate the dynamical atmospheric response to SSTs in the eastern equatorial Pacific, leading to underestimated aSW values. Biases in the cloud response to dynamics and the shortwave interception by clouds also contribute to errors in aSW. Changes in the aSW feedback between the coupled and corresponding atmosphere-only simulations are related to changes in the mean dynamics. A large nonlinearity is found in the observed and modeled SW flux feedback, hidden when linearly calculating aSW. In the observations, two physical mechanisms are proposed to explain this nonlinearity: 1) a weaker subsidence response to cold SST anomalies than the ascent response to warm SST anomalies, and 2) a nonlinear high-level cloud cover response to SST. The shortwave flux feedback nonlinearity tends to be underestimated by the models, linked to an underestimated nonlinearity in the dynamical response to SST. The process-based methodology presented in this study may help to correct model ENSO atmospheric biases, ultimately leading to an improved simulation of ENSO in GCMs.
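The nonlinearity described above is "hidden when linearly calculating aSW" because a single regression slope averages over two regimes. A small synthetic sketch makes the point: fitting one slope to SW anomalies that respond more strongly to warm than to cold SST anomalies splits the difference, while regime-wise slopes recover the asymmetry. The coefficients (-8 and -2 W m^-2 K^-1) and noise level are illustrative, not observed values.

```python
import numpy as np

rng = np.random.default_rng(0)
sst = rng.normal(0.0, 1.0, 500)          # SST anomalies (deg C)
# Synthetic SW flux anomalies: stronger damping of warm anomalies
# than cold ones, mimicking the asymmetry the abstract describes.
sw = np.where(sst > 0, -8.0 * sst, -2.0 * sst) + rng.normal(0.0, 1.0, 500)

def slope(x, y):
    """OLS regression slope of y on x."""
    return float(np.polyfit(x, y, 1)[0])

a_linear = slope(sst, sw)                   # single linear feedback
a_warm = slope(sst[sst > 0], sw[sst > 0])   # warm-anomaly regime
a_cold = slope(sst[sst <= 0], sw[sst <= 0]) # cold-anomaly regime
print(a_linear, a_warm, a_cold)
```

The single slope lands between the two regime slopes, so the linear calculation neither matches the warm-regime nor the cold-regime feedback, which is exactly how the nonlinearity stays hidden.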
Abstract:
There has been increasing interest in the impact of individual well-being on the attitudes and actions of people receiving services designed to offer support. If well-being factors are important in the uptake and success of service programmes, it is important that the nature of the relationships involved is understood by service designers and implementers. As a contribution to this understanding, this paper examines the impact of well-being on the uptake of intervention programmes for homeless people. From the literature on well-being, a number of factors are identified that contribute towards overall well-being, including personal efficacy and identity; more directly, well-being can also be viewed as personal or group/collective esteem. The impact of these factors on service use is assessed by means of two studies of homelessness service users, comparing the implementation of two research tools: a shortened and a fuller one. The conclusions are that the factors identified are related to service use. The higher the collective esteem – esteem drawn from identification with services and their users and providers – and the less isolated they feel, the more benefit homeless people will perceive in service use, and in turn the more likely they are to be motivated to use services. However, the most important factors in explaining service use are a real sense that it is appropriate to accept social support from others, a rejection of the social identity of being homeless in favour of cultivating being valued as part of a non-homeless community, and a positive perception of the impact of the service.
Abstract:
A number of critiques have been published drawing attention to the gaps in research methods applied to issues surrounding homelessness and service utilisation in Britain. This paper discusses the use of social identity, a theory drawn from the field of applied social psychology, and synthesises it with the pathways model, thereby providing a framework to further explore service utilisation. The synthesised framework was used to predict the uptake of outreach services in a prospective study of 121 homeless people in a major UK city. In general, homeless people's use of intervention services was affected by the extent to which they identified with the support services themselves. The study demonstrates the central role of social identity in understanding service utilisation patterns, and shows the importance of applying fresh techniques to fine-tune our understanding of uptake in the long term.
Abstract:
The divide between agency and structural explanations of the causes of social phenomena has dominated research on housing as in other social fields. However, there has been some research that has sought to transcend this schism by combining agency and structural dimensions in the understanding of housing processes and outcomes. The article reviews the two most common approaches to doing this in housing research – structuration (following the work of Giddens) and critical realism. The example of research on homelessness is used to show how the approaches have been applied to housing issues.
Abstract:
In this paper, we propose a novel online modeling algorithm for nonlinear and nonstationary systems using a radial basis function (RBF) neural network with a fixed number of hidden nodes. Each of the RBF basis functions has a tunable center vector and an adjustable diagonal covariance matrix. A multi-innovation recursive least squares (MRLS) algorithm is applied to update the RBF weights online while the modeling performance is monitored. When the modeling residual of the RBF network becomes large in spite of the weight adaptation, a node identified as insignificant is replaced with a new node, whose tunable center vector and diagonal covariance matrix are optimized using the quantum particle swarm optimization (QPSO) algorithm. The major contribution is to combine MRLS weight adaptation and QPSO node structure optimization in an innovative way, so that the algorithm can track the local characteristics of a nonstationary system well with a very sparse model. Simulation results show that the proposed algorithm has significantly better performance than existing approaches.
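The weight-adaptation half of this scheme can be sketched compactly. The code below is a simplified single-innovation recursive least squares (RLS) update of the output weights of an RBF network with fixed Gaussian nodes; it is not the paper's MRLS algorithm, and the QPSO node-replacement step is omitted entirely. Widths, forgetting factor and centre placement are illustrative.

```python
import numpy as np

class OnlineRBF:
    """RBF network with fixed Gaussian hidden nodes whose output
    weights are updated online by recursive least squares (RLS).
    A simplified stand-in for the MRLS scheme in the abstract."""
    def __init__(self, centers, width=0.5, lam=0.99):
        self.c = np.asarray(centers, float)   # hidden-node centres
        self.width = width                    # shared Gaussian width
        self.lam = lam                        # RLS forgetting factor
        m = len(self.c)
        self.w = np.zeros(m)                  # output weights
        self.P = np.eye(m) * 1e3              # inverse correlation matrix

    def phi(self, x):
        d2 = ((self.c - x) ** 2).sum(axis=1)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def update(self, x, y):
        h = self.phi(x)
        k = self.P @ h / (self.lam + h @ self.P @ h)  # RLS gain
        e = y - h @ self.w                            # innovation (residual)
        self.w = self.w + k * e
        self.P = (self.P - np.outer(k, h @ self.P)) / self.lam
        return e

    def predict(self, x):
        return float(self.phi(x) @ self.w)

# Track y = sin(x) from streaming noisy samples.
rng = np.random.default_rng(0)
net = OnlineRBF(np.linspace(-3, 3, 15)[:, None])
for _ in range(2000):
    x = rng.uniform(-3, 3)
    net.update(np.array([x]), np.sin(x) + 0.01 * rng.normal())
print(abs(net.predict(np.array([0.5])) - np.sin(0.5)))
```

In the full algorithm, a persistently large innovation `e` would trigger the structural step: the least significant node is replaced and its centre and covariance are tuned by QPSO.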
Abstract:
Recent evidence suggests that the mirror neuron system responds to the goals of actions, even when the end of the movement is hidden from view. To investigate whether this predictive ability might be based on the detection of early differences between actions with different outcomes, we used electromyography (EMG) and motion tracking to assess whether two actions with different goals (grasp to eat and grasp to place) differed from each other in their initial reaching phases. In a second experiment, we then tested whether observers could detect early differences and predict the outcome of these movements based on seeing only part of the actions. Experiment 1 revealed early kinematic differences between the two movements, with grasp-to-eat movements characterised by an earlier peak acceleration and a different grasp position compared to grasp-to-place movements. There were also significant differences in forearm muscle activity in the reaching phase of the two actions. The behavioural data arising from Experiments 2a and 2b indicated that observers are not able to predict whether an object is going to be brought to the mouth or placed until after the grasp has been completed. This suggests that the early kinematic differences are either not visible to observers, or that they are not used to predict the end-goals of actions. These data are discussed in the context of the mirror neuron system.
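The "earlier peak acceleration" measure in Experiment 1 comes down to differentiating sampled position data twice and locating the acceleration maximum. A minimal finite-difference sketch follows; the bell-shaped speed profiles, their peak times and the sampling rate are invented for illustration, not the study's measured trajectories.

```python
import numpy as np

dt = 0.005                                   # 200 Hz sampling (illustrative)
t = np.arange(0.0, 1.0, dt)

def peak_acceleration_time(pos):
    """Time (s) of peak positive acceleration from sampled positions,
    via central finite differences."""
    vel = np.gradient(pos, dt)
    acc = np.gradient(vel, dt)
    return float(np.argmax(acc) * dt)

def reach(t_peak, s=0.1):
    """Synthetic reach whose bell-shaped speed profile peaks at t_peak."""
    vel = np.exp(-0.5 * ((t - t_peak) / s) ** 2)
    return np.cumsum(vel) * dt               # integrate speed -> position

grasp_to_eat = reach(0.35)                   # earlier velocity peak
grasp_to_place = reach(0.45)
print(peak_acceleration_time(grasp_to_eat),
      peak_acceleration_time(grasp_to_place))
```

For a bell-shaped speed profile, peak acceleration precedes peak speed by roughly one profile width, so shifting the profile earlier shifts the acceleration peak earlier, which is the kind of difference the kinematic analysis detects.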
Abstract:
Local, tacit and normally unspoken OHS (occupational health and safety) knowledge and practices can too easily be excluded from, or remain below, the industry horizon of notice, meaning that they remain unaccounted for in formal OHS policy and practice. In this article we stress the need to tap more systematically and routinely into these otherwise ‘hidden’ communication channels, which are central to how everyday safe working practices are achieved. To demonstrate this approach, the paper draws on our ethnographic research with a gang of migrant curtain-wall installers on a large office development project in the north of England. In doing so we reflect on the practice-based nature of learning and sharing OHS knowledge, through examples of how workers’ own patterns of successful communication help avoid health and safety problems. These understandings, we argue, can be advanced as a basis for the development of improved OHS measures, and of organizational knowing and learning.
Abstract:
The UK Department for Environment, Food and Rural Affairs (Defra) identified practices to reduce the risk of animal disease outbreaks. We report on the response of sheep and pig farmers in England to the promotion of these practices. A conceptual framework was established from research on factors influencing the adoption of animal health practices, linking knowledge, attitudes, social influences and perceived constraints to the implementation of specific practices. Qualitative data were collected from nine sheep and six pig enterprises in 2011. Thematic analysis explored attitudes and responses to the proposed practices, and factors influencing the likelihood of implementation. Most farmers feel they are doing all they can reasonably do to minimise disease risk, and that practices not being implemented are either not relevant or ineffective. There is little awareness of, or concern about, risk from unseen threats. Pig farmers place more emphasis than sheep farmers on controlling wildlife, staff and visitor management, and staff training. The main factors that influence livestock farmers’ decisions on whether or not to implement a specific disease risk measure are: attitudes to, and perceptions of, disease risk; attitudes towards the specific measure and its efficacy; characteristics of the enterprise which they perceive as making a measure impractical; previous experience of a disease or of the measure; and the credibility of information and advice. Great importance is placed on access to authoritative information, with most seeing vets as the prime source to interpret generic advice from national bodies in the local context.
Uptake of disease risk measures could be increased by: improved risk communication through the farming press and vets to encourage farmers to recognise hidden threats; dissemination of credible early warning information to sharpen farmers’ assessment of risk; and targeted information through training events, farming press, vets and other advisers, and farmer groups, tailored to the different categories of livestock farmer.
Abstract:
Advances in hardware and software technology enable us to collect, store and distribute large quantities of data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence, but is also used in many other application areas such as research, marketing and financial analytics. For example, medical scientists can use patterns extracted from historic patient data to determine whether a new patient is likely to respond positively to a particular treatment; marketing analysts can use patterns extracted from customer data for future advertisement campaigns; and finance experts have an interest in patterns that forecast the development of certain stock market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes imposes a number of computational challenges in terms of processing time, memory, bandwidth and power consumption. These challenges have led to the development of parallel and distributed data analysis approaches and the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.
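The basic shape of the data-partitioned parallel approaches the chapter surveys can be sketched in a few lines: split the data into partitions, mine each partition independently, then merge the partial results. Here a simple item-frequency count stands in for a frequent-pattern mining step; the transaction data and worker count are illustrative.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def count_patterns(chunk):
    """Count item occurrences in one data partition: a minimal
    stand-in for a local pattern-mining step."""
    c = Counter()
    for transaction in chunk:
        c.update(transaction)
    return c

def parallel_counts(transactions, n_workers=4):
    """Partition the data, mine each partition in parallel, and
    merge the partial results."""
    chunks = [transactions[i::n_workers] for i in range(n_workers)]
    total = Counter()
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        for partial in ex.map(count_patterns, chunks):
            total.update(partial)   # merge step
    return total

data = [["milk", "bread"], ["milk"], ["bread", "eggs"], ["milk", "eggs"]]
counts = parallel_counts(data)
print(counts["milk"], counts["bread"], counts["eggs"])  # 3 2 2
```

The same split-mine-merge pattern scales out to distributed settings (Grid and Cloud), where each partition lives on a different node and only the compact partial results travel over the network.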
Abstract:
We investigate for 26 OECD economies whether their current account imbalances to GDP are driven by stochastic trends. Regarding bounded stationarity as the more natural counterpart of sustainability, results from Phillips–Perron tests for unit root and bounded unit root processes are contrasted. While the former hint at stationarity of current account imbalances for 12 economies, the latter indicate bounded stationarity for only six economies. Through panel-based test statistics, current account imbalances are diagnosed as bounded non-stationary. Thus, (spurious) rejections of the unit root hypothesis might be due to the existence of bounds reflecting hidden policy controls or financial crises.
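The paper's closing point, that bounds can make a unit-root process look (spuriously) stationary, can be illustrated with a simulation: the same shocks drive an unbounded random walk and a walk reflected at fixed bounds, and an OLS estimate of the AR(1) coefficient is computed for each. This is not the Phillips–Perron test itself, just the underlying intuition; the bound of ±5 and unit-variance shocks are illustrative stand-ins for "hidden policy controls".

```python
import numpy as np

def ar1_coef(x):
    """OLS estimate of rho in x_t = rho * x_{t-1} + e_t (no intercept)."""
    x0, x1 = x[:-1], x[1:]
    return float((x0 @ x1) / (x0 @ x0))

rng = np.random.default_rng(0)
e = rng.normal(size=5000)

walk = np.cumsum(e)                      # unbounded random walk

bounded = np.empty_like(e)               # same shocks, clipped at +/-5
level = 0.0
for t, shock in enumerate(e):
    level = float(np.clip(level + shock, -5.0, 5.0))
    bounded[t] = level

print(ar1_coef(walk), ar1_coef(bounded))
```

The bounded series has an estimated autoregressive coefficient visibly below that of the free walk, so a standard unit-root test tends to reject nonstationarity even though the underlying shocks are the same, which is why the bounded unit-root tests used in the paper give a different verdict from the standard Phillips–Perron tests.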