904 results for encoding of measurement streams


Relevance: 100.00%

Publisher:

Abstract:

Pocket Data Mining (PDM) is our new term describing the collaborative mining of streaming data in mobile and distributed computing environments. With sheer amounts of data streams now available for subscription on our smart mobile phones, using this data for decision making with data stream mining techniques has become achievable owing to the increasing power of these handheld devices. Wireless communication among these devices using Bluetooth and WiFi technologies has opened the door wide for collaborative mining among mobile devices that are within the same range and are running data mining techniques targeting the same application. This paper proposes a new architecture, which we have prototyped, for realising significant applications in this area. We propose using mobile software agents in this application for several reasons; most importantly, the autonomic, intelligent behaviour of agent technology has been the driving force for using it here. Other efficiency reasons are discussed in detail in the paper. Experimental results showing the feasibility of the proposed architecture are presented and discussed.
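
The abstract does not spell out PDM's mining algorithms, so the sketch below is only a minimal, hypothetical illustration of the collaborative idea: each device trains a lightweight incremental classifier on its own slice of the stream, and devices within range combine their predictions by majority vote. All class names and data are invented, not taken from the paper.

```python
# Illustrative sketch only: PDM's actual algorithms are not given in the abstract.
# Each device learns incrementally from its local stream; in-range devices then
# vote on a prediction for the shared application.
from collections import Counter, defaultdict

class StreamingMajorityClassifier:
    """Toy incremental learner: predicts the majority label seen per feature value."""
    def __init__(self):
        self.counts = defaultdict(Counter)  # feature value -> label counts

    def learn_one(self, x, y):
        self.counts[x][y] += 1

    def predict_one(self, x):
        votes = self.counts.get(x)
        return votes.most_common(1)[0][0] if votes else None

def collaborative_predict(devices, x):
    """Combine local predictions from all in-range devices by majority vote."""
    votes = Counter()
    for device in devices:
        prediction = device.predict_one(x)
        if prediction is not None:
            votes[prediction] += 1
    return votes.most_common(1)[0][0] if votes else None

# Each device sees a different slice of the stream but targets the same task.
devices = [StreamingMajorityClassifier() for _ in range(3)]
stream = [("hot", "alert"), ("cold", "ok"), ("hot", "alert"), ("cold", "ok")]
for i, (x, y) in enumerate(stream):
    devices[i % len(devices)].learn_one(x, y)

print(collaborative_predict(devices, "hot"))  # -> "alert"
```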

Relevance: 100.00%

Publisher:

Abstract:

Red tape is undesirable because it impedes business growth. Relief from the administrative burdens that businesses face due to legislation can benefit the whole economy, especially in times of recession. However, recent governmental initiatives aimed at reducing administrative burdens have met with some success, but also with failures. This article compares three national initiatives - in the Netherlands, the UK and Italy - aimed at cutting red tape by using the Standard Cost Model. The findings highlight the factors affecting the outcomes of measurement and reduction plans, and ways to improve the Standard Cost Model methodology.
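
For readers unfamiliar with the Standard Cost Model, the sketch below shows its core calculation: administrative burden = price x quantity, with price = tariff x time and quantity = affected businesses x reporting frequency. The obligation and all figures are invented for illustration and are not data from the article.

```python
# Minimal sketch of the core Standard Cost Model (SCM) calculation.
# All numbers below are invented purely for illustration.

def scm_burden(tariff_per_hour, hours_per_obligation, businesses, times_per_year):
    """Annual administrative burden of one information obligation."""
    price = tariff_per_hour * hours_per_obligation   # cost of one compliance action
    quantity = businesses * times_per_year           # how often it is performed economy-wide
    return price * quantity

# Hypothetical obligation: quarterly reporting by 50,000 small firms,
# taking 2.5 hours each at an internal tariff of 35 currency units per hour.
annual_burden = scm_burden(tariff_per_hour=35.0,
                           hours_per_obligation=2.5,
                           businesses=50_000,
                           times_per_year=4)
print(f"Estimated annual administrative burden: {annual_burden:,.0f}")
```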

Relevance: 100.00%

Publisher:

Abstract:

Body Sensor Networks (BSNs) have recently been introduced for the remote monitoring of human activities in a broad range of application domains, such as health care, emergency management, fitness and behaviour surveillance. BSNs can be deployed in a community of people and can generate large amounts of contextual data that require a scalable approach for storage, processing and analysis. Cloud computing can provide a flexible storage and processing infrastructure to perform both online and offline analysis of the data streams generated in BSNs. This paper proposes BodyCloud, a SaaS approach for community BSNs that supports the development and deployment of Cloud-assisted BSN applications. BodyCloud is a multi-tier application-level architecture that integrates a Cloud computing platform with BSN data-stream middleware. BodyCloud provides programming abstractions that allow the rapid development of community BSN applications. This work describes the general architecture of the proposed approach and presents a case study on the real-time monitoring and analysis of cardiac data streams from many individuals.
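
BodyCloud's actual programming abstractions are not described in the abstract, so the following is only a generic sketch of the kind of cloud-side analysis the cardiac case study implies: a per-user sliding window over incoming heart-rate samples with a simple threshold alert. The class name, thresholds and data are assumptions, not the BodyCloud API.

```python
# Illustrative sketch only: a generic cloud-side analyser for heart-rate streams
# from many BSN users, using a per-user sliding window and a threshold rule.
from collections import defaultdict, deque

WINDOW = 10  # number of recent samples kept per user

class CardiacStreamAnalyser:
    def __init__(self, low=40, high=150):
        self.windows = defaultdict(lambda: deque(maxlen=WINDOW))
        self.low, self.high = low, high

    def ingest(self, user_id, heart_rate):
        """Store one sample; return an alert string if the windowed mean is abnormal."""
        window = self.windows[user_id]
        window.append(heart_rate)
        mean_hr = sum(window) / len(window)
        if mean_hr < self.low or mean_hr > self.high:
            return f"ALERT {user_id}: mean HR {mean_hr:.0f} bpm over last {len(window)} samples"
        return None

analyser = CardiacStreamAnalyser()
for sample in [("u1", 72), ("u2", 165), ("u2", 170), ("u1", 75)]:
    alert = analyser.ingest(*sample)
    if alert:
        print(alert)
```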

Relevance: 100.00%

Publisher:

Abstract:

Methods of data collection are unavoidably rooted in some sort of theoretical paradigm and are inextricably tied to an implicit agenda or broad problem framing. These prior orientations are not always explicit, but they matter for what data is collected and how it is used. They also structure opportunities for asking new questions and for linking or bridging between existing data sets, and they matter even more when data is re-purposed for uses not initially anticipated. In this paper we provide a historical and comparative review of the changing categories used in organising and collecting data on mobility/travel and time use, as part of ongoing work to understand, conceptualise and describe the changing patterns of domestic and mobility-related energy demand within UK society. This exercise reveals systematic differences of method and approach, for instance in units of measurement, in how issues of time/duration and periodicity are handled, and in how these strategies relate to the questions such data is routinely used to address. It also points to more fundamental differences in how traditions of research into mobility, domestic energy and time use have developed. We end with a discussion of the practical implications of these diverse histories for understanding and analysing changing patterns of energy/mobility demand at different scales.

Relevance: 100.00%

Publisher:

Abstract:

Advances in our understanding of the large-scale electric and magnetic fields in the coupled magnetosphere-ionosphere system are reviewed. The literature appearing in the period January 1991–June 1993 is sorted into eight general areas of study. The phenomenon of substorms receives the most attention in this literature, with the location of onset being the single most discussed issue. However, while the magnetic topology of the substorm phases was widely debated, less attention was paid to the relationship of convection to the substorm cycle. A significantly new consensus view of substorm expansion and recovery phases emerged, which was termed the 'Kiruna Conjecture' after the conference at which it gained widespread acceptance. The second largest area of interest was dayside transient events, both near the magnetopause and in the ionosphere. It became apparent that these phenomena include at least two classes of events, probably due to transient reconnection bursts and to sudden changes in solar wind dynamic pressure. The contribution of both types of event to convection is controversial. The realisation that induction effects decouple electric fields in the magnetosphere and ionosphere, on time scales shorter than several substorm cycles, calls for a broadening of the range of measurement techniques in both the ionosphere and at the magnetopause. Several new techniques were introduced, including ionospheric observations which yield the reconnection rate as a function of time. The magnetospheric and ionospheric behaviour under various quasi-steady interplanetary conditions was studied using magnetic cloud events. For northward IMF conditions, reverse convection in the polar cap was found to be predominantly a summer hemisphere phenomenon, and even for extremely rare prolonged southward IMF conditions the magnetosphere was observed to oscillate through various substorm cycles rather than forming a steady-state convection bay.

Relevance: 100.00%

Publisher:

Abstract:

The suggestion is discussed that characteristic particle and field signatures at the dayside magnetopause, termed "flux transfer events" (FTEs), are, in at least some cases, due to transient solar wind and/or magnetosheath dynamic pressure increases rather than time-dependent magnetic reconnection. It is found that most individual cases of FTEs observed by a single spacecraft can, at least qualitatively, be explained by the pressure pulse model, provided a few rather unsatisfactory features of the predictions are explained in terms of measurement uncertainties. The most notable exceptions are some "two-regime" observations made by two satellites simultaneously, one on either side of the magnetopause. However, this configuration has not often been achieved for sufficient time, such observations are rare, and the relevant tests are still not conclusive. The strongest evidence that FTEs are produced by magnetic reconnection is the dependence of their occurrence on the north-south component of the interplanetary magnetic field (IMF) or of the magnetosheath field. The pressure pulse model provides an explanation for this dependence (albeit qualitative) in the case of magnetosheath FTEs, but this does not apply to magnetosphere FTEs. The only surveys of magnetosphere FTEs have not employed simultaneous IMF data, but have shown that their occurrence is strongly dependent on the north-south component of the magnetosheath field, as observed earlier/later on the same magnetopause crossing (for inbound/outbound passes, respectively). This paper employs statistics on the variability of the IMF orientation to investigate the effects of IMF changes between the times of the magnetosheath and FTE observations. It is shown that the previously published results are consistent with magnetospheric FTEs being entirely absent when the magnetosheath field is northward: all crossings with magnetosphere FTEs and a northward field can be attributed to the field changing sense while the satellite was within the magnetosphere (but close enough to the magnetopause to detect an FTE). Allowance for the IMF variability also makes the occurrence frequency of magnetosphere FTEs during southward magnetosheath fields very similar to that observed for magnetosheath FTEs. Conversely, the probability of attaining the observed occurrence frequencies under the pressure pulse model is 10⁻¹⁴. In addition, it is argued that some magnetosheath FTEs should, under the pressure pulse model, have been observed for northward IMF: the probability that the number is as low as actually observed is estimated to be 10⁻¹⁰. It is concluded that although the pressure pulse model can be invoked to qualitatively explain a large number of individual FTE observations, the observed occurrence statistics are in gross disagreement with this model.
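
The abstract does not state how those occurrence probabilities were computed, so the sketch below is a purely generic illustration, not the paper's method: it shows how a binomial tail probability becomes vanishingly small when a model's predicted detection rate is far from what is observed over many magnetopause crossings. All counts and rates are invented.

```python
# Generic illustration only -- not the statistical procedure used in the paper.
from math import comb

def binomial_tail_at_most(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Hypothetical: a model predicts FTE signatures on 30% of 100 crossings,
# but only 3 such events are actually observed.
print(f"P(X <= 3) = {binomial_tail_at_most(3, 100, 0.3):.2e}")
```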

Relevance: 100.00%

Publisher:

Abstract:

Treffers-Daller and Korybski propose to operationalise language dominance on the basis of measures of lexical diversity, computed, in this particular study, on transcripts of stories told by Polish-English bilinguals in each of their languages. They compute four different Indices of Language Dominance (ILD) on the basis of two different measures of lexical diversity, the Index of Guiraud (Guiraud, 1954) and HD-D (McCarthy & Jarvis, 2007). They compare simple indices, based on subtracting the scores for one language from the scores for the other, with more complex indices based on the formula Birdsong borrowed from the field of handedness, namely the ratio (Difference in Scores) / (Sum of Scores). Positive scores on each of these Indices of Language Dominance mean that informants are more English-dominant and negative scores that they are more Polish-dominant. The authors address the difficulty of comparing scores across languages by carefully lemmatizing the data. Following Flege, Mackay and Piske (2002), they also look into the validity of these indices by investigating to what extent they can predict scores on other, independently measured variables. They use correlations and regression analysis for this, which has the advantage that the dominance indices are used as continuous variables and arbitrary cut-off points between balanced and dominant bilinguals need not be chosen. However, they also show how the computation of z-scores can help facilitate a discussion about the appropriateness of different cut-off points across different data sets and measurement scales in those cases where researchers consider it necessary to make categorical distinctions between balanced and dominant bilinguals. Treffers-Daller and Korybski correlate the ILD scores with four other variables, namely length of residence in the UK, attitudes towards English and life in the UK, frequency of use of English at home, and frequency of code-switching. They found that the indices correlated significantly with most of these variables, but there were clear differences between the Guiraud-based and the HD-D-based indices. In a regression analysis, three of the measures were also found to be significant predictors of English language use at home. They conclude that the correlations and the regression analyses lend strong support to the validity of their approach to language dominance.
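
The two ingredients named above are easy to illustrate: Guiraud's index is types divided by the square root of tokens, and the Birdsong-style dominance ratio is (difference in scores) / (sum of scores), with positive values indicating English dominance when the English score is the first argument. The sketch below shows both; HD-D is not reproduced here, and the token lists are invented, not the study's data.

```python
# Sketch of Guiraud's index (V / sqrt(N)) and the difference/sum dominance ratio.
from math import sqrt

def guiraud(tokens):
    """Guiraud's index of lexical diversity: types / sqrt(tokens)."""
    return len(set(tokens)) / sqrt(len(tokens))

def dominance_ratio(score_a, score_b):
    """(Difference in scores) / (Sum of scores); positive = more dominant in language A."""
    return (score_a - score_b) / (score_a + score_b)

english_tokens = "the cat sat on the mat and the dog sat too".split()
polish_tokens  = "kot usiadl na macie a pies tez usiadl".split()

g_en, g_pl = guiraud(english_tokens), guiraud(polish_tokens)
print(f"Guiraud EN={g_en:.2f}, PL={g_pl:.2f}, ILD={dominance_ratio(g_en, g_pl):+.2f}")
```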

Relevance: 100.00%

Publisher:

Abstract:

This text extends some ideas presented in a keynote lecture at the 5th Encontro de Tipografia conference in Barcelos, Portugal, in November 2014. The paper discusses the problems of identifying the location and encoding of design decisions, the implications of digital workflows for capturing knowledge generated through design practice, and the consequences of the transformation of production tools into commodities. It concludes with a discussion of the perception of added value in typeface design.

Relevance: 100.00%

Publisher:

Abstract:

Considerable progress has been made in understanding the present and future regional and global sea level in the 2 years since the publication of the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change. Here, we evaluate how the new results affect the AR5’s assessment of (i) historical sea level rise, including attribution of that rise and implications for the sea level budget, (ii) projections of the components and of total global mean sea level (GMSL), and (iii) projections of regional variability and emergence of the anthropogenic signal. In each of these cases, new work largely provides additional evidence in support of the AR5 assessment, providing greater confidence in those findings. Recent analyses confirm the twentieth century sea level rise, with some analyses showing a slightly smaller rate before 1990 and some a slightly larger value than reported in the AR5. There is now more evidence of an acceleration in the rate of rise. Ongoing ocean heat uptake and associated thermal expansion have continued since 2000, and are consistent with ocean thermal expansion reported in the AR5. A significant amount of heat is being stored deeper in the water column, with a larger rate of heat uptake since 2000 compared to the previous decades and with the largest storage in the Southern Ocean. The first formal detection studies for ocean thermal expansion and glacier mass loss since the AR5 have confirmed the AR5 finding of a significant anthropogenic contribution to sea level rise over the last 50 years. New projections of glacier loss from two regions suggest smaller contributions to GMSL rise from these regions than in studies assessed by the AR5; additional regional studies are required to further assess whether there are broader implications of these results. Mass loss from the Greenland Ice Sheet, primarily as a result of increased surface melting, and from the Antarctic Ice Sheet, primarily as a result of increased ice discharge, has accelerated. The largest estimates of acceleration in mass loss from the two ice sheets for 2003–2013 equal or exceed the acceleration of GMSL rise calculated from the satellite altimeter sea level record over the longer period of 1993–2014. However, when increased mass gain in land water storage and parts of East Antarctica, and decreased mass loss from glaciers in Alaska and some other regions are taken into account, the net acceleration in the ocean mass gain is consistent with the satellite altimeter record. New studies suggest that a marine ice sheet instability (MISI) may have been initiated in parts of the West Antarctic Ice Sheet (WAIS), but that it will affect only a limited number of ice streams in the twenty-first century. New projections of mass loss from the Greenland and Antarctic Ice Sheets by 2100, including a contribution from parts of WAIS undergoing unstable retreat, suggest a contribution that falls largely within the likely range (i.e., two thirds probability) of the AR5. These new results increase confidence in the AR5 likely range, indicating that there is a greater probability that sea level rise by 2100 will lie in this range with a corresponding decrease in the likelihood of an additional contribution of several tens of centimeters above the likely range. In view of the comparatively limited state of knowledge and understanding of rapid ice sheet dynamics, we continue to think that it is not yet possible to make reliable quantitative estimates of future GMSL rise outside the likely range. 
Projections of twenty-first century GMSL rise published since the AR5 depend on results from expert elicitation, but we have low confidence in conclusions based on these approaches. New work on regional projections and emergence of the anthropogenic signal suggests that the two commonly predicted features of future regional sea level change (the increasing tilt across the Antarctic Circumpolar Current and the dipole in the North Atlantic) are related to regional changes in wind stress and surface heat flux. Moreover, it is expected that sea level change in response to anthropogenic forcing, particularly in regions of relatively low unforced variability such as the low-latitude Atlantic, will be detectable over most of the ocean by 2040. The east-west contrast of sea level trends in the Pacific observed since the early 1990s cannot be satisfactorily accounted for by climate models, nor yet definitively attributed either to unforced variability or forced climate change.

Relevance: 100.00%

Publisher:

Abstract:

A recent study by Blocken et al. (Numerical study on the existence of the Venturi effect in passages between perpendicular buildings. Journal of Engineering Mechanics, 2008, 134: 1021-1028) challenged the popular view that a 'Venturi effect' exists in building passages, since the flow there is exposed to an open boundary. The present research extends the work of Blocken et al. (2008a) to a more general setup, with the building orientation varying from 0° to 180°, using CFD simulations. Our results reveal that the passage flow is mainly determined by the combination of corner streams. It is also shown that converging passages have a stronger wind-blocking effect than diverging passages, explained by a lower wind speed and a higher drag coefficient. Fluxes on the top plane of the passage volume reverse from outflow to inflow in the cases of α=135°, 150° and 165°. A simple mathematical expression relating the flux ratio to the geometric parameters has been developed to aid wind design in urban neighbourhoods. In addition, a converging passage with α=15° is recommended for urban wind design in cold and temperate climates, since the passage flow changes smoothly and a relatively lower wind speed is expected compared with the situation without buildings. For high-density urban areas in (sub)tropical climates such as Hong Kong, where more wind is desired, a diverging passage with α=150° is a better choice to promote ventilation at the pedestrian level.

Relevance: 100.00%

Publisher:

Abstract:

The degree to which habitat fragmentation affects bird incidence is species specific and may depend on the spatial scale considered. Selecting the correct scale of measurement is essential to appropriately assess the effects of habitat fragmentation on bird occurrence. Our objective was to determine which spatial scale of landscape measurement best describes the incidence of three bird species (Pyriglena leucoptera, Xiphorhynchus fuscus and Chiroxiphia caudata) in the fragmented Brazilian Atlantic forest, and to test whether multi-scale models perform better than single-scale ones. Bird incidence was assessed in 80 forest fragments. The surrounding landscape structure was described with four indices measured at four spatial scales (400-, 600-, 800- and 1,000-m buffers around the sample points). The explanatory power of each scale in predicting bird incidence was assessed using logistic regression, bootstrapped with 1,000 repetitions. The best scale varied between species (1,000-m radius for P. leucoptera; 800-m for X. fuscus; 600-m for C. caudata), probably due to their distinct feeding habits and foraging strategies. Multi-scale models always resulted in better predictions than single-scale models, suggesting that different aspects of landscape structure are related to different ecological processes influencing bird incidence. In particular, our results suggest that local extinction and (re)colonisation processes might act simultaneously at different scales. Thus, single-scale models may not be good enough to properly describe complex pattern-process relationships. Selecting variables at multiple, ecologically relevant scales is a reasonable procedure to optimise the accuracy of species incidence models.
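
The following is a sketch of the analysis idea described above, not the authors' exact procedure: for each buffer radius, fit a logistic regression of species incidence on a landscape index measured at that scale, bootstrapping to obtain a distribution of fit quality per scale. It assumes numpy and scikit-learn are available; the data and column choices are invented.

```python
# Sketch: compare buffer radii via bootstrapped logistic regression (invented data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_fragments = 80
scales = [400, 600, 800, 1000]

# Invented example data: one landscape index per scale plus presence/absence,
# with incidence driven by the 800-m index in this toy setup.
X = {s: rng.random(n_fragments) for s in scales}
y = (X[800] + 0.2 * rng.standard_normal(n_fragments) > 0.5).astype(int)

def bootstrap_auc(x, y, n_boot=1000):
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))      # resample fragments with replacement
        xb, yb = x[idx].reshape(-1, 1), y[idx]
        if len(np.unique(yb)) < 2:                 # skip degenerate resamples
            continue
        model = LogisticRegression().fit(xb, yb)
        aucs.append(roc_auc_score(yb, model.predict_proba(xb)[:, 1]))
    return float(np.mean(aucs))

for s in scales:
    print(f"{s:>4} m buffer: mean bootstrap AUC = {bootstrap_auc(X[s], y):.2f}")
```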

Relevance: 100.00%

Publisher:

Abstract:

We examine different phenomenological interaction models for dark energy and dark matter by performing a statistical joint analysis with observational data from the 182 Gold type Ia supernova samples, the shift parameter of the cosmic microwave background given by the three-year Wilkinson Microwave Anisotropy Probe observations, the baryon acoustic oscillation measurement from the Sloan Digital Sky Survey, and age estimates of 35 galaxies. Including the time-dependent observable adds measurement sensitivity and gives complementary results for the fitting. The compatibility among the different data sets seems to imply that the coupling between dark energy and dark matter is a small positive value, which satisfies the requirement for solving the coincidence problem and the second law of thermodynamics, and is compatible with previous estimates.
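
A joint analysis of this kind is usually a combined chi-square fit: each probe contributes its own chi-square term, and the total is minimised over the model parameters. The sketch below illustrates only that generic structure; the "model", observations and parameter names are invented placeholders, not the paper's likelihoods.

```python
# Generic sketch of a joint chi-square analysis (invented toy model and data).
import numpy as np
from scipy.optimize import minimize

# Invented observations: (value, sigma) for three independent probes.
observations = {
    "sn":  (0.70, 0.05),
    "cmb": (1.70, 0.03),
    "bao": (0.47, 0.02),
}

def model_prediction(probe, theta):
    """Toy predictions standing in for real distance/shift-parameter formulae."""
    omega_m, coupling = theta
    base = {"sn": 1.0 - omega_m,
            "cmb": 1.0 + omega_m + coupling,
            "bao": omega_m + coupling}
    return base[probe]

def total_chi2(theta):
    return sum(((obs - model_prediction(p, theta)) / sigma) ** 2
               for p, (obs, sigma) in observations.items())

result = minimize(total_chi2, x0=[0.3, 0.0], method="Nelder-Mead")
print("best-fit (omega_m, coupling):", result.x, " chi2:", result.fun)
```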

Relevance: 100.00%

Publisher:

Abstract:

In this thesis, the basic research of Chase and Simon (1973) is questioned, and we seek new results by analysing the errors of expert and beginner chess players in experiments on the reproduction of chess positions. Chess players with different levels of expertise participated in the study. The results were analysed by a Brazilian grandmaster, and quantitative analysis was performed using statistical and data mining methods. The results significantly challenge the current theories of expertise, memory and decision making in this area: the present theory predicts piece-on-square encoding, in which players recognise the strategic situation and reproduce it faithfully, yet players commit several errors that the theory cannot explain. The current theory cannot fully account for the encoding players use to register a board. The errors of intermediate players preserved fragments of the strategic situation, even though they committed a series of errors in the reconstruction of the positions. The encoding of chunks therefore includes more information than that predicted by current theories. Currently, research on perception, judgement and decision making is heavily concentrated on the idea of "pattern recognition". Based on the results of this research, we explore a change of perspective. The idea of "pattern recognition" presupposes that the processing of relevant information operates on "patterns" (or data) that exist independently of any interpretation. We propose instead a view of decision making via the recognition of experience.

Relevance: 100.00%

Publisher:

Abstract:

In a highly competitive environment, the ability to retain a substantial customer base represents a tremendous competitive advantage; the transaction-based emphasis in sales is therefore increasingly being replaced by a relationally focused approach. Although the existing sales literature agrees on the theoretical composition of buyer-seller relationships, empirical evidence is lacking on how the various aspects of relational selling interrelate with an individual salesperson's performance. This paper explores the impact of interpersonal relationships on customer satisfaction and loyalty towards the firm. Based on a review of different streams of research, the paper contributes to existing theories using a case analysis of customer behaviour when there is salesperson turnover. Much of the relationship with the company derives from trust in the salesperson, which is built up as the relationship develops. A friendship-like interpersonal relationship with a salesperson also increases the customer's willingness to follow the salesperson if he leaves the company, and thus possibly to switch to another service provider. Using a case analysis method, this paper aims to find evidence of the positive or negative impact of salesperson turnover on organisations. Finally, the paper discusses managerial implications and directions for future research.