776 results for perception of time
Abstract:
It is estimated that the quantity of digital data being transferred, processed or stored at any one time currently stands at 4.4 zettabytes (4.4 × 2^70 bytes), and this figure is expected to grow by a factor of 10, to 44 zettabytes, by 2020. Exploiting this data is, and will remain, a significant challenge. At present there is the capacity to store 33% of the digital data in existence at any one time; by 2020 this capacity is expected to fall to 15%. These statistics suggest that, in the era of Big Data, the identification of important, exploitable data will need to be done in a timely manner. Systems for the monitoring and analysis of data, e.g. stock markets, smart grids and sensor networks, can be made up of massive numbers of individual components. These components can be geographically distributed yet may interact with one another via continuous data streams, which in turn may affect the state of the sender or receiver. This introduces a dynamic causality, which further complicates the overall system by introducing a temporal constraint that is difficult to accommodate. Practical approaches to realising such systems have led to a multiplicity of analysis techniques, each of which concentrates on specific characteristics of the system being analysed and treats those characteristics as the dominant component affecting the results being sought. This multiplicity of analysis techniques introduces another layer of heterogeneity, that is, heterogeneity of approach, partitioning the field to the extent that results from one domain are difficult to exploit in another. This raises the question: can a generic solution for the monitoring and analysis of data be identified that accommodates temporal constraints, bridges the gap between expert knowledge and raw data, and enables data to be effectively interpreted and exploited in a transparent manner? The approach proposed in this dissertation acquires, analyses and processes data in a manner that is free of the constraints of any particular analysis technique, while at the same time facilitating these techniques where appropriate. Constraints are applied by defining a workflow based on the production, interpretation and consumption of data. This supports the application of different analysis techniques to the same raw data without the danger of incorporating hidden bias. To illustrate and realise this approach, a software platform has been created that allows for the transparent analysis of data, combining analysis techniques with a maintainable record of provenance so that independent third-party analysis can be applied to verify any derived conclusions. To demonstrate these concepts, a complex real-world example involving the near-real-time capture and analysis of neurophysiological data from a neonatal intensive care unit (NICU) was chosen. A system was engineered to gather raw data, analyse that data using different analysis techniques, uncover information, incorporate that information into the system and curate the evolution of the discovered knowledge. The application domain was chosen for three reasons: firstly, because it is complex and no comprehensive solution exists; secondly, because it requires tight interaction with domain experts, and thus the handling of subjective knowledge and inference; and thirdly, because, given the dearth of neurophysiologists, there is a real-world need for a solution in this domain.
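The workflow at the heart of this abstract (production, interpretation and consumption of data, backed by a maintainable provenance record) can be sketched in a few lines. The following Python sketch is purely illustrative: the class and function names are hypothetical, not the dissertation's platform, and a real system would persist provenance rather than keep it in memory.

```python
# Minimal sketch of a provenance-tracked analysis workflow: data moves
# through production, interpretation and consumption stages, and every
# step is recorded so a third party can audit how a conclusion was
# derived. All names here are hypothetical illustrations.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Callable

@dataclass
class ProvenanceRecord:
    step: str                      # which workflow stage ran
    technique: str                 # analysis technique applied
    timestamp: str                 # when the step ran (UTC)

@dataclass
class Tracked:
    value: Any
    history: list[ProvenanceRecord] = field(default_factory=list)

def apply_step(data: Tracked, step: str, technique: str,
               fn: Callable[[Any], Any]) -> Tracked:
    """Apply one workflow stage and append its provenance record."""
    result = Tracked(fn(data.value), list(data.history))
    result.history.append(ProvenanceRecord(
        step, technique, datetime.now(timezone.utc).isoformat()))
    return result

# Raw data enters the workflow unchanged; different techniques can then
# be applied to the same raw data without contaminating each other.
raw = Tracked([3.1, 2.9, 3.4, 3.0])
mean = apply_step(raw, "interpretation", "arithmetic mean",
                  lambda xs: sum(xs) / len(xs))
print(mean.value)                          # 3.1
print([r.step for r in mean.history])      # ['interpretation']
```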
Abstract:
A natural approach to representing and reasoning about temporal propositions (i.e., statements with time-dependent truth-values) is to associate them with time elements. In the literature, there are three choices regarding the primitive for the ontology of time: (1) instantaneous points, (2) durative intervals and (3) both points and intervals. Problems may arise when one conflates different views of temporal structure, and questions arise as to whether certain types of temporal propositions can be validly and meaningfully associated with different time elements. In this paper, we shall summarize an ontological glossary with respect to time elements, and introduce a wider range of meta-predicates for ascribing temporal propositions to time elements. Based on these, we shall also devise a versatile categorization of temporal propositions, which can subsume the representative categories proposed in the literature, including those of Vendler, McDermott, Allen, Shoham, Galton, and Terenziani and Torasso. It is demonstrated that the new categorization of propositions, together with the proposed range of meta-predicates, provides the expressive power for modeling typical temporal terms and phenomena, such as starting-instant, stopping-instant, dividing-instant, instigation, termination and intermingling.
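As a rough illustration of the point/interval ontology and of ascribing a proposition to a time element via a meta-predicate, consider the sketch below. The names (Point, Interval, holds_on) are illustrative assumptions in Python, not the paper's formal notation, and "holds throughout" is approximated by sampling.

```python
# A minimal sketch of the "both points and intervals" ontology and one
# meta-predicate for ascribing a proposition to a time element.
from dataclasses import dataclass
from typing import Callable, Union

@dataclass(frozen=True)
class Point:
    t: float                      # an instantaneous time point

@dataclass(frozen=True)
class Interval:
    start: float                  # durative: start strictly before end
    end: float

TimeElement = Union[Point, Interval]

def holds_on(proposition: Callable[[float], bool],
             element: TimeElement) -> bool:
    """Meta-predicate: the proposition holds throughout the element.

    For a point, truth at that instant suffices; for an interval, the
    proposition must hold at every sampled instant within it (sampling
    approximates 'throughout', purely for illustration).
    """
    if isinstance(element, Point):
        return proposition(element.t)
    n = 100
    step = (element.end - element.start) / n
    return all(proposition(element.start + i * step) for i in range(n + 1))

# Example proposition with a dividing instant at t = 2.0: conflation
# problems of the kind discussed above arise exactly at such instants.
above_freezing = lambda t: t >= 2.0
print(holds_on(above_freezing, Interval(2.0, 5.0)))  # True
print(holds_on(above_freezing, Point(1.5)))          # False
```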
Abstract:
Time-series and sequences are important patterns in data mining. Based on an ontology of time-elements, this paper presents a formal characterization of time-series and state-sequences, where a state denotes a collection of data whose validity is dependent on time. While a time-series is formalized as a vector of time-elements temporally ordered one after another, a state-sequence is denoted as a list of states correspondingly ordered by a time-series. In general, a time-series and a state-sequence can be incomplete in various ways. This leads to the distinction between complete and incomplete time-series, and between complete and incomplete state-sequences, which allows the expression of both absolute and relative temporal knowledge in data mining.
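The distinction between complete and incomplete series, and between absolute and relative temporal knowledge, can be made concrete with a small sketch. The following Python fragment is one possible encoding assumed for illustration, not the paper's formalism: unknown interval bounds mark incompleteness, and ordering is decided only where the bounds allow it.

```python
# Sketch: a time-series as a temporally ordered vector of time elements,
# and a state-sequence as states ordered by that series. Unknown (None)
# bounds model incompleteness, so some orderings become undecidable.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Interval:
    start: Optional[float]   # None marks an unknown (incomplete) bound,
    end: Optional[float]     # leaving only relative knowledge expressible

def precedes(a: Interval, b: Interval) -> Optional[bool]:
    """Relative temporal knowledge: a strictly before b, if decidable."""
    if a.end is None or b.start is None:
        return None          # incomplete: the order cannot be decided
    return a.end <= b.start

# A complete time-series pairs each state with fully known time elements;
# an incomplete one still supports ordering where bounds exist.
series = [Interval(0.0, 1.0), Interval(1.0, None), Interval(4.0, 5.0)]
states = ["idle", "ramping", "steady"]          # the state-sequence
print(states[0], "holds on", series[0])
print(precedes(series[0], series[1]))  # True  (absolute knowledge)
print(precedes(series[1], series[2]))  # None  (incomplete)
```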
Abstract:
The Digital Art Weeks program (DAW06) is concerned with the application of digital technology in the arts. Consisting again this year of a symposium, workshops and performances, the program offers insight into current research and innovations in art and technology, and illustrates the resulting synergies in a series of performances, making artists aware of impulses in technology and scientists aware of the possibilities of applying technology in the arts.
Abstract:
This paper presents practitioners' perceptions of the impact of the Moser Committee recommendations and the Skills for Life agenda they generated. The paper further explores areas of convergence and divergence between practitioners' perceptions and the underpinning values of the Moser Committee recommendations. The study utilised a range of research tools, including an online questionnaire, documentary analysis and elements of discourse analysis, in the collection and analysis of data. It found that there is substantial divergence between the perceptions of practitioners and the values underpinning policy. It concludes by suggesting that varying perceptions of what constitutes sustainable education and the lack of input from practitioners into policy might be responsible for this significant divergence of opinion, and it raises a question about the perceived role of practitioners in the policy-making process.
Abstract:
We study information rates of time-varying flat-fading channels (FFCs) modeled as finite-state Markov channels (FSMCs). FSMCs have two main applications for FFCs: modeling channel error bursts and decoding at the receiver. Our main finding in the first application is that receiver observation noise can more adversely affect higher-order FSMCs than lower-order FSMCs, resulting in lower capacities. This is despite the fact that the underlying higher-order FFC and its corresponding FSMC are more predictable. Numerical analysis shows that at low to medium SNR conditions (SNR ≲ 12 dB) and at medium to fast normalized fading rates (0.01 ≲ f_D T ≲ 0.10), FSMC information rates are non-increasing functions of memory order. We conclude that BERs obtained by low-order FSMC modeling can provide optimistic results. To explain the capacity behavior, we present a methodology that enables analytical comparison of FSMC capacities with different memory orders. We establish sufficient conditions that predict higher/lower capacity of a reduced-order FSMC, compared to its original high-order FSMC counterpart. Finally, we investigate the achievable information rates in FSMC-based receivers for FFCs. We observe that high-order FSMC modeling at the receiver side results in a negligible information rate increase for normalized fading rates f_D T ≲ 0.01.
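For readers unfamiliar with FSMCs, the simplest instance is the two-state (Gilbert-Elliott) model of channel error bursts mentioned above: a "good" and a "bad" fading state with different bit-error rates. The Python sketch below simulates such a channel; the transition probabilities and per-state error rates are illustrative assumptions, not values from the paper.

```python
# Minimal two-state FSMC (Gilbert-Elliott) simulation of error bursts.
import numpy as np

P = np.array([[0.95, 0.05],      # state-transition matrix, rows sum to 1
              [0.10, 0.90]])     # state 0 = good, state 1 = bad
ber = np.array([1e-4, 1e-1])     # per-state bit-error probability

rng = np.random.default_rng(0)
n_bits = 100_000
state = 0
errors = 0
for _ in range(n_bits):
    errors += rng.random() < ber[state]   # error drawn in current state
    state = rng.choice(2, p=P[state])     # Markov state transition

# The stationary distribution pi solves pi = pi P; for a 2-state chain
# the probability of the bad state has a closed form:
pi_bad = P[0, 1] / (P[0, 1] + P[1, 0])
expected_ber = (1 - pi_bad) * ber[0] + pi_bad * ber[1]
print(f"simulated BER ~ {errors / n_bits:.4f}, "
      f"stationary-model BER ~ {expected_ber:.4f}")
```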
Abstract:
In 2000 a Review of Current Marine Observations in relation to present and future needs was undertaken by the Inter-Agency Committee for Marine Science and Technology (IACMST). The Marine Environmental Change Network (MECN) was initiated in 2002 as a direct response to the recommendations of the report. A key part of the current phase of the MECN is to ensure that information from the network is provided to policy makers and other end-users, enabling them to produce more accurate assessments of ecosystem state and gain a clearer understanding of the factors influencing change in marine ecosystems. The MECN holds workshops on an annual basis, bringing together partners maintaining time-series and long-term datasets as well as end-users interested in outputs from the network. It was decided that the first workshop of the MECN continuation phase should consist of an evaluation of the time series and datasets maintained by MECN partners, with regard to their fitness for purpose for answering key science questions and informing policy development. This report is based on the outcomes of that workshop.

Section one of the report contains a brief introduction to monitoring, time series and long-term datasets. The various terms are defined, and the need for MECN-type data to complement compliance monitoring programmes is discussed. Outlines are also given of initiatives such as the United Kingdom Marine Monitoring and Assessment Strategy (UKMMAS) and Oceans 2025.

Section two contains detailed information on each of the MECN time series and long-term datasets, including information on scientific outputs and current objectives. This information is mainly based on the presentations given at the workshop and therefore follows a format in which the following headings are addressed: origin of the time series, including original objectives; current objectives; policy relevance; and products (advice, publications, science and society).

Section three consists of comments made by the review panel concerning all the time series and the network. Needs and issues highlighted by the panel with regard to the future of long-term datasets and time series in the UK are presented, along with advice and potential solutions where offered. The recommendations are divided into four categories: ‘The MECN and end-user requirements’; ‘Procedures & protocols’; ‘Securing data series’; and ‘Future developments’.

Ever since marine environmental protection issues came to the fore in the 1960s, it has been recognised that a suitable evidence base on environmental change is required to support policy and management for UK waters. Section four gives a brief summary of the development of marine policy in the UK, along with comments on the availability and necessity of long-term marine observations for the implementation of this policy. Policy relating to three main areas is discussed: marine conservation (protecting biodiversity and marine ecosystems), marine pollution, and fisheries. The conclusion of this section is that there has always been a specific requirement for information on long-term change in marine ecosystems around the UK in order to address concerns over pollution, fishing and general conservation. It is now imperative that this need is addressed if the UK is to fulfil its policy commitments and manage marine ecosystems in the light of climate change and other factors.
Abstract:
The contract work has demonstrated that older data can be assessed and entered into the MR format. Older data has associated problems but is retrievable, and all datasets required under the contract were successfully imported. MNCR survey sheets fit well into the MR format. The data validation and verification process can be improved: a number of computerised shortcuts can be suggested, and the process can be made more intuitive. Such improvements are vital if MR is to be adopted as a standard by the recording community, both at a voluntary level and potentially by consultancies.
Abstract:
During lateral leg raising, a synergistic inclination of the supporting leg and trunk in the direction opposite to the leg movement is performed in order to preserve equilibrium. As first hypothesized by Pagano and Turvey (J Exp Psychol Hum Percept Perform, 1995, 21:1070-1087), the perception of limb orientation could be based on the orientation of the limb's inertia tensor. The purpose of this study was thus to explore whether the final upper body orientation (trunk inclination relative to vertical) depends on changes in the trunk inertia tensor. We imposed a loading condition, with a total mass of 4 kg added to the subject's trunk in either a symmetrical or an asymmetrical configuration. This changed the orientation of the trunk inertia tensor while keeping the total trunk mass constant. In order to separate any effects of the inertia tensor from the effects of gravitational torque, the experiment was carried out in both normogravity and microgravity. The results indicated that in normogravity the same final upper body orientation was maintained irrespective of the loading condition. In microgravity, regardless of loading condition, the same orientation of the upper body (different from that in normogravity) was achieved through different joint organizations: two joints (the hip and ankle joints of the supporting leg) in the asymmetrical loading condition, and one (the hip) in the symmetrical loading condition. In order to determine whether the different orientations of the inertia tensor were perceived during the movement, interjoint coordination was quantified by performing a principal components analysis (PCA) on the supporting and moving hip joints and on the supporting ankle joint. It was expected that different loading conditions would modify the principal component of the PCA. In normogravity, asymmetrical loading decreased the coupling between joints, while in microgravity a strong coupling was preserved whatever the loading condition. It was concluded that the trunk inertia tensor did not play a role during the lateral leg raising task, because even in the absence of gravitational torque the final upper body orientation and the interjoint coupling were not influenced by it.
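The PCA-based coupling measure described above can be illustrated with synthetic joint-angle data: the share of variance captured by the first principal component indexes how strongly the joints covary. The following Python sketch uses made-up signals, not the study's recordings.

```python
# Sketch of interjoint coupling via PCA on joint-angle time series
# (supporting hip, moving hip, supporting ankle). Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 200)                 # 2 s movement, 200 samples
hip_support = 10.0 * np.sin(np.pi * t)         # joint angles in degrees
hip_moving = 0.8 * hip_support + rng.normal(0, 0.5, t.size)
ankle_support = 0.5 * hip_support + rng.normal(0, 0.5, t.size)

X = np.column_stack([hip_support, hip_moving, ankle_support])
Xc = X - X.mean(axis=0)                        # center each joint angle
# PCA via the covariance matrix: eigenvalues give variance per component.
eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]
pc1_share = eigvals[0] / eigvals.sum()
print(f"variance explained by PC1: {pc1_share:.1%}")
# A PC1 share near 100% indicates strong interjoint coupling; weaker
# coupling (as under asymmetrical loading in normogravity) lowers it.
```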