63 results for encoding of measurement streams
Abstract:
The self-memory relationship is thought to be bidirectional, in such a way that memories provide context for the self and, equally, the self exercises control over retrieval (Conway, 2005). Autobiographical memories are not distributed equally across the life span; instead, memories peak between ages 10 and 30. This reminiscence bump has been suggested to support the emergence of a stable and enduring self. In the present study, the relationship between memory accessibility and the self was explored with a novel methodology based on the generation of self-images in the form of "I am" statements. Memories generated from "I am" cues clustered around the time of emergence of that particular self-image. We argue that, when a new self-image is formed, it is associated with the encoding of memories that are relevant to that self and that remain highly accessible to the rememberer later in life. This study offers a new methodology for academics and clinicians interested in the relationship between memory and identity.
Abstract:
This research establishes the feasibility of using a network-centric technology, Jini, to provide a grid framework on which to perform parallel video encoding. A solution was implemented using Jini and achieved real-time, on-demand encoding of a 480 HD video stream. Further, a projection is made concerning real-time encoding of 1080 HD video, as the current grid was not powerful enough to achieve this above 15 fps. The research found that Jini is able to provide a number of tools and services highly applicable in a grid environment. It is also suitable in terms of performance and responds well to a varying number of grid nodes. The main performance limiter was found to be the network bandwidth allocation, which, when loaded with a large number of grid nodes, was unable to handle the traffic.
Abstract:
The use of MPT in the construction of real estate portfolios has two serious limitations when used in an ex-ante framework: (1) the intertemporal instability of the portfolio weights and (2) the sharp deterioration in performance of the optimal portfolios outside the sample period used to estimate asset mean returns. Both problems can be traced to wide fluctuations in sample means (Jorion, 1985). Thus the use of a procedure that ignores the estimation risk due to the uncertainty in mean returns is likely to produce sub-optimal results in subsequent periods. This suggests that consideration of estimation risk is crucial to the use of MPT in developing a successful real estate portfolio strategy. Therefore, following Eun & Resnick (1988), this study extends previous ex-ante studies by evaluating optimal portfolio allocations in subsequent test periods, using methods that have been proposed to reduce the effect of measurement error on optimal portfolio allocations.
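One family of methods proposed to reduce the effect of measurement error in sample means (in the spirit of Jorion, 1985) shrinks each asset's sample mean toward a common target such as the cross-sectional grand mean. The sketch below is illustrative only; the function name and the fixed shrinkage weight are assumptions, not the study's actual estimator:

```python
import numpy as np

def shrink_means(sample_means, shrinkage):
    """Shrink noisy per-asset sample means toward the cross-sectional
    grand mean. shrinkage=0 keeps the raw means; shrinkage=1 replaces
    every mean by the grand mean. (Illustrative sketch; real Bayes-Stein
    estimators derive the weight from the data rather than fixing it.)"""
    grand = sample_means.mean()
    return (1 - shrinkage) * sample_means + shrinkage * grand

# Example: three assets with annual mean return estimates.
raw = np.array([0.10, 0.20, 0.30])
print(shrink_means(raw, 0.5))  # pulls the extremes toward 0.20
```

Pulling extreme sample means toward the centre reduces the chance that the optimizer loads heavily on assets whose high estimated returns are mostly noise.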
Abstract:
This paper identifies the long-term rental depreciation rates for UK commercial properties and the rates of capital expenditure incurred to offset depreciation over the same period. It starts by reviewing the economic depreciation literature and the rationale for adopting a longitudinal method of measurement, before discussing the data used and the results. Data from 1993 to 2009 were sourced from Investment Property Databank and CB Richard Ellis real estate consultants. These data are used to compare the change in values of new buildings in different locations with the change in values of individual properties in those locations. The analysis is conducted using observations on 742 assets drawn from all major segments of the commercial real estate market. Overall rental depreciation and capital expenditure rates are similar to those in other recent UK studies. Depreciation rates are 0.8% pa for offices, 0.5% pa for industrial properties and 0.3% pa for standard retail properties. These results hide interesting variations at a segment level, notably in retail, where location often dominates value rather than the building. The majority of properties had little (if any) money spent on them over the 16-year period, but those subject to higher rates of expenditure were found to have lower depreciation rates.
Abstract:
In this paper we explore classification techniques for ill-posed problems. Two classes are linearly separable in some Hilbert space X if they can be separated by a hyperplane. We investigate stable separability, i.e. the case where we have a positive distance between two separating hyperplanes. When the data in the space Y are generated by a compact operator A applied to the system states x ∈ X, we show that in general we do not obtain stable separability in Y even if the problem in X is stably separable. In particular, we show this for the case where a nonlinear classification is generated from a non-convergent family of linear classes in X. We apply our results to the problem of quality control of fuel cells, where we classify fuel cells according to their efficiency. We can potentially classify a fuel cell using either some external measured magnetic field or some internal current. However, we cannot measure the current directly, since we cannot access the fuel cell in operation. The first possibility is to apply discrimination techniques directly to the measured magnetic fields. The second approach first reconstructs the currents and then carries out the classification on the current distributions. We show that both approaches need regularization and that the regularized classifications are not equivalent in general. Finally, we investigate a widely used linear classification algorithm, Fisher's linear discriminant, with respect to its ill-posedness when applied to data generated via a compact integral operator. We show that the method does not remain stable when the number of measurement points becomes large.
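For readers unfamiliar with the classifier named in the abstract, a minimal finite-dimensional sketch of Fisher's linear discriminant follows, with an explicit Tikhonov-style regularization term of the kind the abstract argues is needed (the function name and the regularization constant are illustrative assumptions, not the paper's formulation):

```python
import numpy as np

def fisher_direction(X0, X1, reg=0.0):
    """Fisher's linear discriminant direction: solve
    (S_W + reg * I) w = m1 - m0, where S_W is the pooled
    within-class scatter matrix and m0, m1 the class means."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    S0 = (X0 - m0).T @ (X0 - m0)
    S1 = (X1 - m1).T @ (X1 - m1)
    Sw = S0 + S1
    # The regularizer keeps the solve stable when S_W is
    # ill-conditioned, as with discretized compact-operator data.
    return np.linalg.solve(Sw + reg * np.eye(Sw.shape[0]), m1 - m0)

# Two well-separated 2-D classes; w projects class 1 above class 0.
X0 = np.array([[0., 0.], [1., 0.], [0., 1.]])
X1 = np.array([[3., 3.], [4., 3.], [3., 4.]])
w = fisher_direction(X0, X1, reg=1e-6)
```

Without the `reg` term, `S_W` becomes increasingly ill-conditioned as the number of measurement points grows, which is one concrete way the instability the paper analyses shows up numerically.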
Abstract:
The waste materials generated in the nuclear fuel cycle are very varied, ranging from the tailings arising from mining and processing uranium ore, and depleted uranium in a range of chemical forms, to process wastes of differing activities and properties. Indeed, the wastes generated are intimately linked to the options selected in operating the nuclear fuel cycle, most obviously to the management of spent fuel. An open fuel cycle implies the disposal of highly radioactive spent fuel, whereas a closed fuel cycle generates a complex array of waste streams. On the other hand, a closed fuel cycle offers options for waste management, for example reduction in highly active waste volume, decreased radiotoxicity, and removal of fissile material. Many technological options have been proposed or explored, and each brings its own particular mix of wastes and environmental challenges.
Abstract:
Pocket Data Mining (PDM) is our new term describing collaborative mining of streaming data in mobile and distributed computing environments. With sheer amounts of data streams now available for subscription on our smart mobile phones, the potential of using these data for decision making with data stream mining techniques has become achievable owing to the increasing power of these handheld devices. Wireless communication among these devices using Bluetooth and WiFi technologies has opened the door wide for collaborative mining among mobile devices within the same range that are running data mining techniques targeting the same application. This paper proposes a new architecture that we have prototyped for realizing the significant applications in this area. We propose using mobile software agents in this application for several reasons. Most importantly, the autonomic, intelligent behaviour of agent technology has been the driving force for using it in this application. Other efficiency reasons are discussed in detail in this paper. Experimental results showing the feasibility of the proposed architecture are presented and discussed.
Abstract:
Red tape is undesirable because it impedes business growth. Relief from the administrative burdens that businesses face due to legislation can benefit the whole economy, especially in times of recession. However, recent governmental initiatives aimed at reducing administrative burdens have met with some success, but also with failures. This article compares three national initiatives - in the Netherlands, the UK and Italy - aimed at cutting red tape using the Standard Cost Model. The findings highlight the factors affecting the outcomes of measurement and reduction plans, and ways to improve the Standard Cost Model methodology.
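The Standard Cost Model mentioned above estimates an administrative burden as Price x Quantity, where Price is the tariff (e.g. hourly wage) times the time an information obligation takes, and Quantity is the number of affected businesses times the yearly frequency of the obligation. A minimal sketch with illustrative numbers (the function name and figures are assumptions, not drawn from the article):

```python
def scm_burden(hourly_tariff, minutes_per_action, n_businesses, actions_per_year):
    """Standard Cost Model estimate: burden = Price * Quantity.
    Price    = hourly tariff * time per action (in hours)
    Quantity = affected businesses * yearly frequency"""
    price = hourly_tariff * (minutes_per_action / 60.0)
    quantity = n_businesses * actions_per_year
    return price * quantity

# Hypothetical obligation: 1 hour of work at 30/hour, filed twice a year
# by 1,000 businesses.
total = scm_burden(30.0, 60, 1000, 2)  # 60000.0
```

Comparing such estimates before and after a simplification measure is how reduction plans of the kind the article evaluates are typically quantified.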
Abstract:
Body Sensor Networks (BSNs) have recently been introduced for the remote monitoring of human activities in a broad range of application domains, such as health care, emergency management, fitness and behaviour surveillance. BSNs can be deployed in a community of people and can generate large amounts of contextual data that require a scalable approach to storage, processing and analysis. Cloud computing can provide a flexible storage and processing infrastructure to perform both online and offline analysis of the data streams generated in BSNs. This paper proposes BodyCloud, a SaaS approach for community BSNs that supports the development and deployment of Cloud-assisted BSN applications. BodyCloud is a multi-tier, application-level architecture that integrates a Cloud computing platform with BSN data-stream middleware. BodyCloud provides programming abstractions that allow the rapid development of community BSN applications. This work describes the general architecture of the proposed approach and presents a case study on the real-time monitoring and analysis of the cardiac data streams of many individuals.
Abstract:
Methods of data collection are unavoidably rooted in some sort of theoretical paradigm and are inextricably tied to an implicit agenda or broad problem framing. These prior orientations are not always explicit, but they matter for what data is collected and how it is used. They also structure opportunities for asking new questions and for linking or bridging between existing data sets, and they matter even more when data is re-purposed for uses not initially anticipated. In this paper we provide an historical and comparative review of the changing categories used in organising and collecting data on mobility/travel and time use, as part of ongoing work to understand, conceptualise and describe the changing patterns of domestic and mobility-related energy demand within UK society. This exercise reveals systematic differences of method and approach, for instance in units of measurement, in how issues of time/duration and periodicity are handled, and in how these strategies relate to the questions such data is routinely used to address. It also points to more fundamental differences in how traditions of research into mobility, domestic energy and time use have developed. We end with a discussion of the practical implications of these diverse histories for understanding and analysing changing patterns of energy/mobility demand at different scales.
Abstract:
Advances in our understanding of the large-scale electric and magnetic fields in the coupled magnetosphere-ionosphere system are reviewed. The literature appearing in the period January 1991–June 1993 is sorted into eight general areas of study. The phenomenon of substorms receives the most attention in this literature, with the location of onset being the single most discussed issue. However, while the magnetic topology during substorm phases was widely debated, less attention was paid to the relationship of convection to the substorm cycle. A significantly new consensus view of substorm expansion and recovery phases emerged, which was termed the 'Kiruna Conjecture' after the conference at which it gained widespread acceptance. The second largest area of interest was dayside transient events, both near the magnetopause and in the ionosphere. It became apparent that these phenomena include at least two classes of events, probably due to transient reconnection bursts and sudden solar wind dynamic pressure changes. The contribution of both types of event to convection is controversial. The realisation that induction effects decouple electric fields in the magnetosphere and ionosphere, on time scales shorter than several substorm cycles, calls for a broadening of the range of measurement techniques both in the ionosphere and at the magnetopause. Several new techniques were introduced, including ionospheric observations which yield the reconnection rate as a function of time. The magnetospheric and ionospheric behaviour under various quasi-steady interplanetary conditions was studied using magnetic cloud events. For northward IMF conditions, reverse convection in the polar cap was found to be predominantly a summer hemisphere phenomenon, and even for extremely rare prolonged southward IMF conditions, the magnetosphere was observed to oscillate through various substorm cycles rather than forming a steady-state convection bay.
Abstract:
The suggestion is discussed that characteristic particle and field signatures at the dayside magnetopause, termed "flux transfer events" (FTEs), are, in at least some cases, due to transient solar wind and/or magnetosheath dynamic pressure increases rather than time-dependent magnetic reconnection. It is found that most individual cases of FTEs observed by a single spacecraft can, at least qualitatively, be explained by the pressure pulse model, provided that a few rather unsatisfactory features of the predictions are explained in terms of measurement uncertainties. The most notable exceptions are some "two-regime" observations made by two satellites simultaneously, one on either side of the magnetopause. However, this configuration has not often been maintained for sufficient time, such observations are rare, and the relevant tests are still not conclusive. The strongest evidence that FTEs are produced by magnetic reconnection is the dependence of their occurrence on the north-south component of the interplanetary magnetic field (IMF) or of the magnetosheath field. The pressure pulse model provides an explanation for this dependence (albeit qualitative) in the case of magnetosheath FTEs, but this does not apply to magnetosphere FTEs. The only surveys of magnetosphere FTEs have not employed the simultaneous IMF, but have shown that their occurrence is strongly dependent on the north-south component of the magnetosheath field, as observed earlier or later on the same magnetopause crossing (for inbound or outbound passes, respectively). This paper employs statistics on the variability of the IMF orientation to investigate the effects of IMF changes between the times of the magnetosheath and FTE observations.
It is shown that the previously published results are consistent with magnetospheric FTEs being entirely absent when the magnetosheath field is northward: all crossings with magnetosphere FTEs and a northward field can be attributed to the field changing sense while the satellite was within the magnetosphere (but close enough to the magnetopause to detect an FTE). Allowing for the IMF variability also makes the occurrence frequency of magnetosphere FTEs during southward magnetosheath fields very similar to that observed for magnetosheath FTEs. Conversely, the probability of attaining the observed occurrence frequencies under the pressure pulse model is 10^-14. In addition, it is argued that some magnetosheath FTEs should, under the pressure pulse model, have been observed for northward IMF: the probability that the number is as low as actually observed is estimated to be 10^-10. It is concluded that, although the pressure pulse model can be invoked to explain qualitatively a large number of individual FTE observations, the observed occurrence statistics are in gross disagreement with this model.
Abstract:
Treffers-Daller and Korybski propose to operationalize language dominance on the basis of measures of lexical diversity, computed, in this particular study, on transcripts of stories told by Polish-English bilinguals in each of their languages. They compute four different Indices of Language Dominance (ILD) on the basis of two different measures of lexical diversity, the Index of Guiraud (Guiraud, 1954) and HD-D (McCarthy & Jarvis, 2007). They compare simple indices, based on subtracting the scores for one language from the scores for the other, with more complex indices based on the formula Birdsong borrowed from the field of handedness, namely the ratio (Difference in Scores) / (Sum of Scores). Positive scores on each of these Indices of Language Dominance mean that informants are more English-dominant; negative scores, that they are more Polish-dominant. The authors address the difficulty of comparing scores across languages by carefully lemmatizing the data. Following Flege, Mackay and Piske (2002), they also look into the validity of these indices by investigating to what extent they can predict scores on other, independently measured variables. They use correlations and regression analysis for this, which has the advantage that the dominance indices are used as continuous variables and arbitrary cut-off points between balanced and dominant bilinguals need not be chosen. However, they also show how the computation of z-scores can help facilitate a discussion about the appropriateness of different cut-off points across different data sets and measurement scales in cases where researchers consider it necessary to make categorical distinctions between balanced and dominant bilinguals. Treffers-Daller and Korybski correlate the ILD scores with four other variables: Length of Residence in the UK, attitudes towards English and life in the UK, frequency of use of English at home, and frequency of code-switching.
They found that the indices correlated significantly with most of these variables, but there were clear differences between the Guiraud-based and the HD-D-based indices. In a regression analysis, three of the measures were also found to be significant predictors of English language use at home. They conclude that the correlations and the regression analyses lend strong support to the validity of their approach to language dominance.
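The two quantities the abstract names can be sketched concretely: the Index of Guiraud divides the number of word types by the square root of the number of tokens, and the Birdsong-style dominance ratio is (difference in scores) / (sum of scores), positive when the English score is higher. Function names and the toy token list below are illustrative assumptions, not the study's data:

```python
import math

def guiraud(tokens):
    """Index of Guiraud (Guiraud, 1954): types / sqrt(tokens)."""
    return len(set(tokens)) / math.sqrt(len(tokens))

def dominance_index(english_score, polish_score):
    """Birdsong-style ratio: (difference) / (sum).
    Positive => English-dominant; negative => Polish-dominant."""
    return (english_score - polish_score) / (english_score + polish_score)

# Toy transcript: 4 tokens, 3 types -> Guiraud = 3 / sqrt(4) = 1.5
g = guiraud(["the", "cat", "the", "dog"])
d = dominance_index(3.0, 1.0)  # English score higher -> positive
```

Because the ratio is bounded between -1 and 1, scores are comparable across speakers with very different absolute diversity levels, which is one motivation for preferring it over the simple difference.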