39 results for Video-based interface

in Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

We investigate the problem of obtaining a dense reconstruction in real time from a live video stream. In recent years, multi-view stereo (MVS) has received considerable attention and a number of methods have been proposed. However, most methods operate under the assumption of a relatively sparse set of still images as input and unlimited computation time. Video-based MVS has received less attention, despite the fact that video sequences offer significant benefits in terms of the usability of MVS systems. In this paper we propose a novel video-based MVS algorithm that is suitable for real-time, interactive 3D modeling with a hand-held camera. The key idea is a per-pixel, probabilistic depth estimation scheme that updates posterior depth distributions with every new frame. The current implementation is capable of updating 15 million distributions/s. We evaluate the proposed method against the state-of-the-art real-time MVS method and show improvement in terms of accuracy. © 2011 Elsevier B.V. All rights reserved.
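The key idea lends itself to a compact sketch. The following is a minimal illustration, assuming a simple Gaussian posterior per pixel that is fused with a Gaussian depth measurement from each new frame; the paper's actual parametric filter (including its handling of outlier measurements) is more elaborate.

```python
import numpy as np

def fuse_depth(mu, var, z, var_z):
    """One recursive Bayesian update of a per-pixel Gaussian depth posterior:
    the product of the current Gaussian and a Gaussian measurement."""
    new_var = 1.0 / (1.0 / var + 1.0 / var_z)
    new_mu = new_var * (mu / var + z / var_z)
    return new_mu, new_var

# Toy example: a 2x2 depth map refined by one noisy frame.
mu = np.full((2, 2), 1.0)                    # prior mean depth (metres)
var = np.full((2, 2), 0.5)                   # prior variance
z = np.array([[1.2, 0.9], [1.1, 1.0]])       # depths measured from the new frame
mu, var = fuse_depth(mu, var, z, var_z=0.1)  # posterior tightens around z
```

Because each pixel's update is independent, the scheme parallelises trivially, which is what makes throughputs of millions of distribution updates per second plausible.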

Relevance:

90.00%

Publisher:

Abstract:

Cellular networks have been widely used to support many new audio- and video-based multimedia applications. The demand for higher data rates and diverse services has driven research on multihop cellular networks (MCNs). With its ad hoc network features, an MCN can offer many additional advantages, such as increased network throughput, scalability and coverage. However, providing ad hoc capability to MCNs is challenging as it may require proper wireless interfaces. In this article, the architecture of an IEEE 802.16 network interface to provide ad hoc capability for MCNs is investigated, with a focus on IEEE 802.16 mesh networking and scheduling. Several distributed routing algorithms based on the network entry mechanism are studied and compared with a centralized routing algorithm. It is observed from the simulation results that 802.16 mesh networks have limitations in providing sufficient bandwidth for traffic from the cellular base stations when the cellular network size is relatively large. © 2007 IEEE.
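As a rough illustration of routing driven by the network entry mechanism (a sketch under assumptions, not the article's algorithms): each subscriber station joins an 802.16 mesh through a sponsor node, which implicitly builds a tree rooted at the base station, so a packet's route to the base station is simply the chain of sponsors. Node names here are made up.

```python
def route_to_base(node, sponsor_of):
    """Follow the sponsor chain from a subscriber station to the base station."""
    path = [node]
    while sponsor_of[node] is not None:
        node = sponsor_of[node]
        path.append(node)
    return path

# Illustrative mesh: SS2 and SS3 entered the network through SS1, SS1 through BS.
sponsor_of = {"BS": None, "SS1": "BS", "SS2": "SS1", "SS3": "SS1"}
print(route_to_base("SS2", sponsor_of))  # ['SS2', 'SS1', 'BS']
```

A distributed scheme of this kind needs no global topology knowledge, which is its appeal over centralized routing; the bandwidth bottleneck near the base station noted in the abstract follows from all sponsor chains converging on that root.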

Relevance:

80.00%

Publisher:

Abstract:

We present a video-based system which interactively captures the geometry of a 3D object in the form of a point cloud, then recognizes and registers known objects in this point cloud in a matter of seconds (fig. 1). In order to achieve interactive speed, we exploit both efficient inference algorithms and parallel computation, often on a GPU. The system can be broken down into two distinct phases: geometry capture and object inference. We now discuss these in further detail. © 2011 IEEE.
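The abstract does not spell out the registration algorithm, but one iteration of the standard ICP (iterative closest point) scheme conveys the flavour of registering a known model into a captured point cloud. This is a generic sketch, not the authors' inference method.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(model, scene):
    """One ICP iteration: match each model point to its nearest scene point,
    then solve the best-fit rigid transform with the Kabsch/SVD method."""
    matched = scene[cKDTree(scene).query(model)[1]]
    mc, sc = model.mean(axis=0), matched.mean(axis=0)
    H = (model - mc).T @ (matched - sc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = sc - R @ mc
    return model @ R.T + t        # model points moved towards the scene
```

Iterating this step until the alignment stops improving is the classical recipe; both the nearest-neighbour search and the point transforms parallelise well, consistent with the GPU emphasis above.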

Relevance:

80.00%

Publisher:

Abstract:

The Digital Observatory for Protected Areas (DOPA) has been developed to support the European Union's efforts in strengthening our capacity to mobilize and use biodiversity data, information and forecasts so that they are readily accessible to policymakers, managers, experts and other users. Conceived as a set of web-based services, DOPA provides a broad set of free and open-source tools to assess, monitor and even forecast the state of, and pressure on, protected areas at local, regional and global scales. DOPA Explorer 1.0 is a web-based interface available in four languages (EN, FR, ES, PT) providing simple means to explore the nearly 16,000 protected areas that are at least 100 km² in size. Distinguishing between terrestrial, marine and mixed protected areas, DOPA Explorer 1.0 can help end users identify those with the most unique ecosystems and species, and assess the pressures they are exposed to because of human development. Recognized by the UN Convention on Biological Diversity (CBD) as a reference information system, DOPA Explorer is based on the best global data sets available and provides means to rank protected areas at the country and ecoregion levels. Conversely, DOPA Explorer indirectly highlights the protected areas for which information is incomplete. Finally, we invite the end users of DOPA to engage with us through the proposed communication platforms to help improve our work to support the safeguarding of biodiversity.

Relevance:

40.00%

Publisher:

Abstract:

An implementation of a Lexical Functional Grammar (LFG) natural language front-end to a database is presented, and its capabilities are demonstrated by reference to a set of queries used in the Chat-80 system. The potential of LFG for such applications is explored. Other grammars previously used for this purpose are briefly reviewed and contrasted with LFG. The basic LFG formalism is fully described, both as to its syntax and semantics, and the deficiencies of the latter for database access applications are shown. Other current LFG implementations are reviewed and contrasted with the LFG implementation developed here specifically for database access. The implementation described here allows a natural language interface to a specific Prolog database to be produced from a set of grammar rules and lexical specifications in an LFG-like notation. In addition, the interface system uses a simple database description to compile metadata about the database for later use in planning the execution of queries. Extensions to LFG's semantic component are shown to be necessary to produce a satisfactory functional analysis and semantic output for querying a database. A diverse set of natural language constructs is analysed using LFG, and the derivation of Prolog queries from the F-structure output of LFG is illustrated. The functional description produced by LFG is proposed as sufficient for resolving many problems of quantification and attachment.
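To make the F-structure-to-Prolog step concrete, here is a hypothetical sketch. The f-structure layout and predicate naming are invented for illustration; the thesis's notation and semantic component are considerably richer.

```python
def fstructure_to_prolog(fs):
    """Map a toy f-structure (nested dict) to a Prolog goal string."""
    pred = fs["PRED"]
    args = [fs[role]["PRED"] for role in ("SUBJ", "OBJ") if role in fs]
    return f"{pred}({', '.join(args)})."

# A Chat-80-style geography query: "Which countries border France?"
# (capitalised Country is a Prolog variable to be bound by the database)
fs = {"PRED": "borders", "SUBJ": {"PRED": "Country"}, "OBJ": {"PRED": "france"}}
print(fstructure_to_prolog(fs))  # borders(Country, france).
```

The real mapping must additionally handle quantification, attachment ambiguities and the database metadata mentioned above, which is precisely where the extensions to LFG's semantic component come in.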

Relevance:

40.00%

Publisher:

Abstract:

The present scarcity of operational knowledge-based systems (KBS) has been attributed, in part, to an inadequate consideration shown to user interface design during development. From a human factors perspective the problem has stemmed from an overall lack of user-centred design principles. Consequently the integration of human factors principles and techniques is seen as a necessary and important precursor to ensuring the implementation of KBS which are useful to, and usable by, the end-users for whom they are intended. Focussing upon KBS work taking place within commercial and industrial environments, this research set out to assess both the extent to which human factors support was presently being utilised within development, and the future path for human factors integration. The assessment consisted of interviews conducted with a number of commercial and industrial organisations involved in KBS development, and a set of three detailed case studies of individual KBS projects. Two of the studies were carried out within a collaborative Alvey project, involving the Interdisciplinary Higher Degrees Scheme (IHD) at the University of Aston in Birmingham, BIS Applied Systems Ltd (BIS), and the British Steel Corporation. This project, which had provided the initial basis and funding for the research, was concerned with the application of KBS to the design of commercial data processing (DP) systems. The third study stemmed from involvement in a KBS project being carried out by the Technology Division of the Trustee Savings Bank Group plc. The preliminary research highlighted poor human factors integration. In particular, there was a lack of early consideration of end-user requirements definition and user-centred evaluation. Instead, concentration was given to the construction of the knowledge base and prototype evaluation with the expert(s). In response to this identified problem, a set of methods was developed, aimed at encouraging developers to consider user interface requirements early on in a project. These methods were then applied in the two further projects, and their uptake within the overall development process was monitored. Experience from the two studies demonstrated that early consideration of user interface requirements was both feasible and instructive for guiding future development work. In particular, it was shown that a user interface prototype could be used as a basis for capturing requirements at the functional (task) level and at the interface dialogue level. Extrapolating from this experience, a KBS life-cycle model is proposed which incorporates user interface design (and, within that, user evaluation) as a largely parallel, rather than subsequent, activity to knowledge base construction. Further to this, there is a discussion of several key elements which can be seen as inhibiting the integration of human factors within KBS development. These elements stem from characteristics of present KBS development practice, from constraints within the commercial and industrial development environments, and from the state of existing human factors support.

Relevance:

40.00%

Publisher:

Abstract:

We investigated which evoked response component occurring in the first 800 ms after stimulus presentation was most suitable for use in a classical P300-based brain-computer interface speller protocol. Data were acquired from 275 magnetoencephalographic sensors in two subjects and from 61 electroencephalographic sensors in four. To better characterize the evoked physiological responses and minimize the effect of response overlap, a 1000 ms inter-stimulus interval was preferred to the short (
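For orientation, the decision rule at the heart of a P300-style speller can be sketched in a few lines: average the epochs recorded after each stimulus and pick the stimulus whose mean response is largest in a post-stimulus window. The sampling rate, window and shapes below are assumptions, not values from the study.

```python
import numpy as np

FS = 250                                      # sampling rate in Hz (assumed)
WIN = slice(int(0.25 * FS), int(0.50 * FS))   # ~250-500 ms after the stimulus

def pick_target(epochs_by_stimulus):
    """epochs_by_stimulus: dict mapping stimulus id -> (n_trials, n_samples) array."""
    score = {s: ep.mean(axis=0)[WIN].mean() for s, ep in epochs_by_stimulus.items()}
    return max(score, key=score.get)

# Synthetic check: stimulus 'B' carries a stronger evoked response.
rng = np.random.default_rng(0)
epochs = {s: rng.standard_normal((20, FS)) for s in "ABC"}
epochs["B"][:, WIN] += 1.0
print(pick_target(epochs))  # 'B'
```

The study's question of which component to score, and over which latency window, maps directly onto the choice of `WIN` in such a scheme.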

Relevance:

40.00%

Publisher:

Abstract:

An increasing interest in “bringing actors back in” and gaining a nuanced understanding of their actions and interactions across a variety of strands in the management literature has recently helped ethnography to unknown prominence in the field of organizational studies. Yet calls remain that ethnography should “play a much more central role in the organization and management studies repertoire than it currently does” (Watson, 2011: 202). Ironically, those organizational realities that ethnographers are called to examine have at the same time become less and less amenable to ethnographic study. In this paper, we respond to these calls for innovative ethnographic methods in two ways. First, we report on the practices and ethnographic experiences of conducting a year-long, team-based video ethnography of reinsurance trading in Lloyd’s of London. Second, drawing on these experiences, we propose an initial framework for systematizing new approaches to organizational ethnography and visualizing the ways in which they are ‘expanding’ ethnography as it was traditionally practiced.

Relevance:

30.00%

Publisher:

Abstract:

Trehalose is a well-known protector of biostructures such as liposomes and proteins during freeze-drying, but there is still considerable debate regarding its mechanism of action. In previous experiments we have shown that trehalose is able to protect a non-phospholipid-based liposomal adjuvant (designated CAF01), composed of the cationic dimethyldioctadecylammonium (DDA) and trehalose 6,6-dibehenate (TDB), during freeze-drying [D. Christensen, C. Foged, I. Rosenkrands, H.M. Nielsen, P. Andersen, E.M. Agger, Trehalose preserves DDA/TDB liposomes and their adjuvant effect during freeze-drying, Biochim. Biophys. Acta, Biomembr. 1768 (2007) 2120-2129]. Furthermore, it was seen that TDB is required for the stabilizing effect of trehalose. Herein, we show using the Langmuir-Blodgett technique that a high concentration of TDB present at the water-lipid interface results in a surface pressure of around 67 mN/m, compared with approximately 47 mN/m for pure DDA in the compressed state. This indicates that the attractive forces between the trehalose head group of TDB and water are greater than those between the quaternary ammonium head group of DDA and water. Furthermore, the addition of trehalose to a DDA monolayer containing small amounts of TDB also increases the surface pressure, which is not observed in the absence of TDB. This suggests that even small amounts of the trehalose head groups of TDB present at the water-lipid interface associate free trehalose with the liposome surface, presumably by hydrogen bonding between the trehalose head groups of TDB and the free trehalose molecules. Hence, for CAF01 the TDB component not only stabilizes the cationic liposomes and enhances the immune response, but also facilitates the cryo-/lyoprotection by trehalose through direct interaction with the head group of TDB. Furthermore, the results indicate that direct interaction with liposome surfaces is necessary for trehalose to enable protection during freeze-drying.

Relevance:

30.00%

Publisher:

Abstract:

A domain-independent ICA-based watermarking method is introduced and studied by numerical simulations. This approach can be used on images, music or video to convey a hidden message. It relies on embedding the information in a set of statistically independent sources (the independent components) used as the feature space. For the experiments, the medium was arbitrarily chosen to be digital images.
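A hedged sketch of ICA-domain embedding follows: learn independent components from image patches, nudge the coefficients of one component according to the message bits, and reconstruct. The patch size, embedding strength and one-bit-per-patch mapping are illustrative assumptions, not the paper's scheme.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
patches = rng.standard_normal((500, 64))     # stand-in for 8x8 image patches
ica = FastICA(n_components=16, random_state=0)
S = ica.fit_transform(patches)               # independent-component coefficients

bits = rng.integers(0, 2, size=500) * 2 - 1  # message, one +/-1 bit per patch
S[:, 0] += 0.05 * bits                       # embed in the first component
watermarked = ica.inverse_transform(S)       # back to the patch domain
```

Decoding would project received patches back into the learned component space and read off the sign of the perturbed coefficients; working in a statistically independent feature space is what makes the method domain-independent.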

Relevance:

30.00%

Publisher:

Abstract:

We describe the results of in-vivo trials of a portable, fiber-Bragg-grating-based temperature profile monitoring system. The probe incorporates five Bragg gratings along a single fiber and prevents the gratings from being strained. Illumination is provided by a superluminescent diode, and a miniature CCD-based spectrometer is used for demultiplexing. The CCD signal is read into a portable computer through a small A/D interface; the computer then calculates the positions of the center wavelengths of the Bragg gratings, providing a resolution of 0.2 °C. Tests were carried out on rabbits undergoing hyperthermia treatment of the kidney and liver via inductive heating of metallic implants, and comparison was made with a commercial Fluoroptic thermometry system.
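The demultiplexing step can be sketched as follows: locate each grating's center wavelength by centroiding its reflection peak on the CCD spectrum, then map the wavelength shift to temperature. The ~10 pm/°C sensitivity used here is a typical figure for fiber Bragg gratings, not a value taken from the paper.

```python
import numpy as np

def centre_wavelength(wl, intensity):
    """Intensity-weighted centroid of one grating's reflection peak."""
    w = intensity - intensity.min()          # crude baseline removal
    return (wl * w).sum() / w.sum()

def temperature(wl_now, wl_ref, t_ref=20.0, sens_nm_per_degC=0.010):
    """Convert a Bragg wavelength shift (nm) into temperature (°C)."""
    return t_ref + (wl_now - wl_ref) / sens_nm_per_degC

# Simulated peak 20 pm above the 20 °C reference -> about 22 °C.
wl = np.linspace(1549.5, 1550.5, 200)         # nm, window around one grating
peak = np.exp(-((wl - 1550.02) / 0.05) ** 2)  # synthetic reflection peak
print(temperature(centre_wavelength(wl, peak), wl_ref=1550.0))
```

Sub-pixel centroiding of this kind is what allows a coarse CCD spectrometer to resolve the few-picometre shifts corresponding to the quoted 0.2 °C resolution.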

Relevance:

30.00%

Publisher:

Abstract:

The advent of the Integrated Services Digital Network (ISDN) led to the standardisation of the first video codecs for interpersonal video communications, followed closely by the development of standards for the compression, storage and distribution of digital video in the PC environment, mainly targeted at CD-ROM storage. At the same time, the second-generation digital wireless networks, and the third-generation networks being developed, have enough bandwidth to support digital video services. The radio propagation medium is a difficult environment in which to deploy low-bit-error-rate, real-time services such as video. The video coding standards designed for ISDN and storage applications were targeted at bit error rates orders of magnitude lower than those typically experienced on wireless networks. This thesis is concerned with the transmission of digital, compressed video over wireless networks. It investigates the behaviour of motion-compensated, hybrid interframe DPCM/DCT video coding algorithms, which form the basis of current coding algorithms, in the presence of the high bit error rates commonly found on digital wireless networks. A group of video codecs, based on the ITU-T H.261 standard, is developed which is robust to the burst errors experienced on radio channels. The radio link is simulated at a low level to generate typical error files that closely model real-world situations: a Rayleigh fading environment perturbed by co-channel interference, and frequency-selective channels which introduce intersymbol interference. Typical anti-multipath techniques, such as antenna diversity, are deployed to mitigate the effects of the channel. Link-layer error control techniques are also investigated.
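A toy sketch of the kind of low-level link simulation described: a Rayleigh fading envelope gates the bit error probability, yielding the bursty error patterns that the codecs are tested against. The fade threshold and error rates are illustrative assumptions, and no Doppler correlation is modelled here, so realistic burst lengths would differ.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bits = 100_000

# Rayleigh envelope: magnitude of a zero-mean complex Gaussian channel gain.
h = (rng.standard_normal(n_bits) + 1j * rng.standard_normal(n_bits)) / np.sqrt(2)
envelope = np.abs(h)

p_err = np.where(envelope < 0.3, 0.2, 1e-5)  # deep fades cause error bursts
errors = rng.random(n_bits) < p_err          # error file for codec testing
print(f"mean BER = {errors.mean():.5f}")
```

Error files generated this way expose exactly the weakness of ISDN-era codecs: errors arrive in clusters tied to fades, not as the independent rare events those standards assumed.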

Relevance:

30.00%

Publisher:

Abstract:

The Fibre Distributed Data Interface (FDDI) represents the new generation of local area networks (LANs). These high-speed LANs are capable of supporting up to 500 users over a 100 km distance. User traffic is expected to be as diverse as file transfers, packet voice and video. As the proliferation of FDDI LANs continues, the need to interconnect them arises. FDDI LAN interconnection can be achieved in a variety of different ways. Some of the most commonly used today are public data networks, dial-up lines and private circuits. For applications that can potentially generate large quantities of traffic, such as an FDDI LAN, it is cost-effective to use a private circuit leased from the public carrier. In order to send traffic from one LAN to another across the leased line, a routing algorithm is required. Much research has been done on the Bellman-Ford algorithm and many implementations of it exist in computer networks. However, due to its instability and problems with routing table loops, it is an unsatisfactory algorithm for interconnected FDDI LANs. A new algorithm, termed ISIS, which is being standardized by the ISO, provides a far better solution. ISIS will be implemented in many manufacturers' routing devices. In order to make the work as practical as possible, this algorithm will be used as the basis for all the new algorithms presented. The ISIS algorithm can be improved by exploiting information that is discarded by that algorithm during the calculation process. A new algorithm, called Down Stream Path Splits (DSPS), uses this information and requires only minor modification to some of the ISIS routing procedures. DSPS provides higher network performance with very little additional processing and storage requirements. A second algorithm, also based on the ISIS algorithm, generates a massive increase in network performance. This is achieved by selecting alternative paths through the network in times of heavy congestion. This algorithm may select the alternative path at either the originating node or any node along the path. It requires more processing and memory storage than DSPS, but generates a higher network power. The final algorithm combines the DSPS algorithm with the alternative path algorithm. This is the most flexible and powerful of the algorithms developed. However, it is somewhat complex and requires a fairly large storage area at each node. The performance of the new routing algorithms is tested in a comprehensive model of interconnected LANs. This model incorporates the layers from transport down to physical, and generates random topologies for routing algorithm performance comparisons. Using this model it is possible to determine which algorithm provides the best performance without introducing significant complexity and storage requirements.
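An illustrative reconstruction of the idea behind DSPS: during a standard link-state (Dijkstra-style) shortest-path calculation, equal-cost predecessors that the basic algorithm would discard are retained, giving each destination a set of downstream path splits. This generic sketch is not the thesis's exact procedure.

```python
import heapq

def shortest_paths_with_splits(graph, src):
    """graph: dict node -> iterable of (neighbour, link_cost) pairs.
    Returns shortest distances and, per node, ALL equal-cost predecessors."""
    dist, preds = {src: 0}, {src: set()}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if v not in dist or nd < dist[v]:
                dist[v], preds[v] = nd, {u}
                heapq.heappush(heap, (nd, v))
            elif nd == dist[v]:
                preds[v].add(u)           # keep the equal-cost split
    return dist, preds

# Two equal-cost routes from A to D survive the calculation.
graph = {"A": [("B", 1), ("C", 1)], "B": [("D", 1)], "C": [("D", 1)], "D": []}
print(shortest_paths_with_splits(graph, "A"))  # D has predecessors {'B', 'C'}
```

Keeping these splits costs little extra work, which matches the abstract's claim that DSPS needs only minor modifications and modest additional processing and storage.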

Relevance:

30.00%

Publisher:

Abstract:

When constructing and using environmental models, it is typical that many of the inputs to the models will not be known perfectly. In some cases, it will be possible to make observations, or occasionally to use physics-based uncertainty propagation, to ascertain the uncertainty on these inputs. However, such observations are often either unavailable or impossible, and another approach to characterising the uncertainty on the inputs must be sought. Even when observations are available, if the analysis is being carried out within a Bayesian framework then prior distributions will have to be specified. One option for gathering, or at least estimating, this information is to employ expert elicitation. Expert elicitation is well studied within statistics and psychology and involves the assessment of the beliefs of a group of experts about an uncertain quantity (for example, an input or parameter within a model), typically in terms of obtaining a probability distribution. One of the challenges in expert elicitation is to minimise the biases that might enter into the judgements made by the individual experts, and then to come to a consensus decision within the group of experts. Effort is made in the elicitation exercise to prevent biases clouding the judgements through well-devised questioning schemes. It is also important that, when reaching a consensus, the experts are exposed to the knowledge of the others in the group. Within the FP7 UncertWeb project (http://www.uncertweb.org/), there is a requirement to build a Web-based tool for expert elicitation. In this paper, we discuss some of the issues of building a Web-based elicitation system, covering both the technological aspects and the statistical and scientific issues. In particular, we demonstrate two tools: a Web-based system for the elicitation of continuous random variables, and a system designed to elicit uncertainty about categorical random variables in the setting of landcover classification uncertainty. The first of these examples is a generic tool developed to elicit uncertainty about univariate continuous random variables. It is designed to be used within an application context and extends the existing SHELF method, adding a web interface and access to metadata. The tool is developed so that it can be readily integrated with environmental models exposed as web services. The second example was developed for the TREES-3 initiative, which monitors tropical landcover change through ground-truthing at confluence points. It allows experts to validate the accuracy of automated landcover classifications using site-specific imagery and local knowledge. Experts may provide uncertainty information at various levels: from a general rating of their confidence in a site validation to a numerical ranking of the possible landcover types within a segment. A key challenge in the web-based setting is the design of the user interface and the method of interaction between the problem owner and the problem experts. We show the workflow of the elicitation tool, and show how we can represent the final elicited distributions and confusion matrices using UncertML, ready for integration into uncertainty-enabled workflows. We also show how the metadata associated with the elicitation exercise is captured and can be referenced from the elicited result, providing crucial lineage information and thus traceability in the decision-making process.
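A minimal sketch of quantile-based elicitation in the spirit of SHELF: the expert supplies a few quantiles and a parametric distribution is least-squares fitted to them. The quantile values below are invented, and SHELF itself supports several distribution families and feedback rounds beyond this single fit.

```python
import numpy as np
from scipy import optimize, stats

probs = np.array([0.25, 0.50, 0.75])
elicited = np.array([12.0, 15.0, 19.0])      # expert's quartiles (assumed)

def loss(theta):
    """Squared mismatch between fitted and elicited quantiles."""
    mu, sigma = theta
    return np.sum((stats.norm.ppf(probs, mu, abs(sigma)) - elicited) ** 2)

mu, sigma = optimize.minimize(loss, x0=[15.0, 3.0]).x
print(f"fitted Normal(mu={mu:.2f}, sigma={abs(sigma):.2f})")
```

In a Web-based tool, the fitted distribution would be rendered back to the expert for feedback and, once agreed, serialised (here, into UncertML) together with the elicitation metadata for use in downstream uncertainty-enabled workflows.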
