945 results for File processing (Computer science)
Abstract:
In this paper we present a solution to the problem of action and gesture recognition using sparse representations. The dictionary is modelled as a simple concatenation of features computed for each action or gesture class from the training data, and test data is classified by finding a sparse representation of the test video features over this dictionary. Our method does not impose any explicit training procedure on the dictionary. We evaluate our model with two kinds of features, by projecting (i) Gait Energy Images (GEIs) and (ii) motion descriptors to a lower dimension using random projection. Experiments show a 100% recognition rate on standard datasets, and the results are compared to those obtained with the widely used SVM classifier.
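A minimal sketch of this classification rule, assuming a dictionary built by stacking per-class training features column-wise and using scikit-learn's orthogonal matching pursuit as the sparse solver (the abstract does not name a specific solver):

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def src_classify(D, labels, y, n_nonzero=10):
    """Sparse-representation classification over an untrained dictionary.

    D      : (d, n) matrix whose columns are training feature vectors
             (e.g., randomly projected GEIs or motion descriptors)
    labels : length-n class label, one per dictionary column
    y      : (d,) test feature vector
    """
    labels = np.asarray(labels)
    # Solve y ~ D @ x under a sparsity constraint; no dictionary learning.
    x = orthogonal_mp(D, y, n_nonzero_coefs=n_nonzero)
    # Assign the class whose atoms reconstruct the test sample best.
    best_label, best_residual = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        residual = np.linalg.norm(y - D[:, mask] @ x[mask])
        if residual < best_residual:
            best_label, best_residual = c, residual
    return best_label
```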
Abstract:
Limited in motivation and cognitive ability to process the increasing amount of information on their Newsfeed, users apply heuristic processing to form their attitudes. Rather than extensively analysing the content, they increasingly rely on heuristic cues – such as the number of comments and likes as well as the level of relationship with the “poster” – to process the incoming information. In this paper we explore what impact these heuristic cues have on the affective and cognitive attitude of users towards the posts on their Newsfeed. We conduct a survey based on a Facebook application that allows users to evaluate Newsfeed posts in real time. Applying two distinct panel-regression methods, we report robust results indicating a certain relationship primacy effect when users process information: only if the level of relationship with the “poster” is low do comments and likes affect the attitude, with likes triggering positive evaluations and comments negative ones.
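The abstract does not name the two panel-regression methods; as an illustrative sketch, a fixed-effects specification in which tie strength moderates the effect of likes and comments (all variable names and data are hypothetical) could look like:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative panel: one row per (user, post) evaluation.
df = pd.DataFrame({
    "user":      ["u1"] * 4 + ["u2"] * 4 + ["u3"] * 4,
    "attitude":  [4.0, 2.5, 3.5, 4.5, 3.0, 5.0, 2.0, 3.5, 4.0, 1.5, 3.0, 4.5],
    "likes":     [10, 2, 5, 20, 0, 25, 1, 8, 12, 0, 4, 15],
    "comments":  [1, 8, 2, 0, 0, 3, 6, 1, 2, 7, 1, 0],
    "close_tie": [0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1],  # relationship level
})

# User fixed effects via dummies; cue effects interacted with tie strength,
# mirroring the "relationship primacy" reading of the results.
model = smf.ols("attitude ~ (likes + comments) * close_tie + C(user)",
                data=df).fit()
print(model.params)
```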
Abstract:
The central aim of our project is to explore the handling of e-mail requests from customers by tourist organisations and to explain the observed behaviour. For this purpose, we designed a qualitative empirical study consisting of two stages. The first stage is a black-box test, in which we employ the setting of a qualitative experiment to measure an organisation's response to an e-mail request. The second stage is a white-box test, in which we look inside the tourist organisations and analyse the relevant information processes. This study should give us some insight into the internal processing of e-mail requests and thus help to explain the reactions that we registered.
Abstract:
Time-based localization techniques such as multilateration are favoured for positioning with wide-band signals. Applying the same techniques to narrow-band signals such as GSM is not trivial: the process is challenged by the need for synchronization accuracy and timestamp resolution, both in the nanosecond range. We propose approaches to deal with both challenges. On the one hand, we introduce a method to eliminate the negative effect of synchronization offset on time measurements. On the other hand, we obtain timestamps with nanosecond accuracy by using timing information from the signal processing chain. For a set of experiments ranging from suburban to indoor environments, we show that our proposed approaches improve the localization accuracy of TDOA approaches by several factors. We even demonstrate errors as small as 10 meters in outdoor settings with narrow-band signals.
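A minimal sketch of the multilateration step that such techniques build on, assuming perfectly synchronized receivers (the paper's offset-elimination and fine-grained timestamping methods are not reproduced here):

```python
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # propagation speed (speed of light), m/s

def tdoa_position(anchors, toas, x0=(0.0, 0.0)):
    """Estimate a 2-D transmitter position from times of arrival.

    anchors : (n, 2) receiver positions with synchronized clocks
    toas    : (n,) arrival timestamps of the same signal, in seconds
    """
    def residuals(p):
        d = np.linalg.norm(anchors - p, axis=1)
        # Range differences implied by the measured TDOAs (vs. receiver 0).
        return (d - d[0]) - C * (toas - toas[0])

    return least_squares(residuals, x0).x
```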
Abstract:
Currently, dramatic changes are happening in the IS development industry. The incumbent system developers (hubs) are embracing partnerships with less well established companies (spokes) acting in specific niches. This paper seeks to establish a better understanding of the motives for this strategy. Relying on existing work on strategic alliance formation, it is argued that partnering is particularly attractive if these small companies possess certain capabilities that are difficult to obtain through arrangements other than partnering. Again drawing on the literature, three categories of capabilities are identified: the capability to innovate within their niche, the capability to provide a specific functionality that can be integrated with the incumbents’ systems, and the capability to address novel markets. These factors are analyzed through a case study of a market leader in the global IS development industry that fosters a network of smaller partner firms. The study reveals that temporal dynamics between the identified factors play a dominant role in these networks. A cyclical partnership model is developed that attempts to explain the life cycle of a partnership within such a network.
Abstract:
This paper addresses the novel notion of offering a radio access network as a service. Its components may be instantiated on general-purpose platforms with pooled resources (both radio and hardware), dimensioned on demand, elastically, and following the pay-per-use principle. A novel architecture is proposed to support this concept. The architecture's strength lies in its modularity, well-defined functional elements, and clean separation between operational and control functions. By moving much of the processing traditionally located in hardware into the cloud, it allows optimisation of hardware utilisation and reduction of deployment and operation costs. It enables operators to upgrade their network as well as to quickly deploy and adapt resources to demand. New players may also enter the market more easily, for instance a virtual network operator providing connectivity to its users.
Abstract:
In free viewpoint applications, the images are captured by an array of cameras that acquire a scene of interest from different perspectives. Any intermediate viewpoint not included in the camera array can be virtually synthesized by the decoder, at a quality that depends on the distance between the virtual view and the camera views available at the decoder. Hence, it is beneficial for any user to receive camera views that are close to each other for synthesis. This is, however, not always feasible in bandwidth-limited overlay networks, where every node may ask for different camera views. In this work, we propose an optimized delivery strategy for free viewpoint streaming over overlay networks. We introduce the concept of layered quality-of-experience (QoE), which describes the level of interactivity offered to clients. Based on these levels of QoE, camera views are organized into layered subsets. These subsets are then delivered to clients through a prioritized network coding streaming scheme, which accommodates network and client heterogeneity and effectively exploits the resources of the overlay network. Simulation results show that, in a scenario with limited bandwidth or channel reliability, the proposed method outperforms baseline network coding approaches in which the different levels of QoE are not taken into account in the delivery strategy optimization.
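The paper's coding scheme is not detailed in the abstract; as a rough illustration of prioritized (layered) random linear coding, here is a GF(2) sketch in which a receiver's QoE level bounds which source packets may be mixed, so base layers stay decodable first (function and parameter names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_layered(packets, layer_sizes, target_layer):
    """Triangular random linear coding over GF(2) (i.e., XOR).

    packets      : (n, L) source packets as uint8 arrays
    layer_sizes  : packets per QoE layer, base layer first
    target_layer : receivers at this QoE level only mix packets from
                   layers 0..target_layer, protecting the base layer
    """
    k = sum(layer_sizes[: target_layer + 1])  # packets visible at this level
    coeffs = rng.integers(0, 2, size=k, dtype=np.uint8)
    coded = np.bitwise_xor.reduce(packets[:k] * coeffs[:, None])
    return coeffs, coded  # coefficients travel with the coded packet
```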
Abstract:
We review our recent work on protein-ligand interactions in vitamin transporters of the Sec-14-like protein family. Our studies focused on the cellular retinaldehyde-binding protein (CRALBP) and the alpha-tocopherol transfer protein (alpha-TTP). CRALBP is responsible for the mobilisation and photo-protection of short-chain cis-retinoids in the dim-light visual cycle of rod photoreceptors. alpha-TTP is a key protein responsible for the selection and retention of RRR-alpha-tocopherol, the most active isoform of vitamin E in higher animals. Our simulation studies show how subtle chemical variations in the substrate can lead to significant distortion in the structure of the complex, and how these changes can either lead to new protein function or be used to model engineered protein variants with tailored properties. Finally, we show how the integration of computational and experimental results can contribute synergistically to the understanding of fundamental processes at the biomolecular scale.
Abstract:
BACKGROUND: To investigate whether non-rigid image registration reduces motion artifacts in triggered and non-triggered diffusion tensor imaging (DTI) of native kidneys. A secondary aim was to determine whether improvements through registration allow for omitting respiratory triggering. METHODS: Twenty volunteers underwent coronal DTI of the kidneys with nine b-values (10-700 s/mm2) at 3 Tesla. Image registration was performed using a multimodal non-rigid registration algorithm. Data processing yielded the apparent diffusion coefficient (ADC), the contribution of perfusion (FP), and the fractional anisotropy (FA). To compare data stability, the root mean square error (RMSE) of the fitting and the standard deviations within the regions of interest (SDROI) were evaluated. RESULTS: RMSEs decreased significantly after registration for both triggered and non-triggered scans (P < 0.05). SDROI for ADC, FA, and FP were significantly lower after registration in both medulla and cortex of triggered scans (P < 0.01). Similarly, the SDROI of FA and FP decreased significantly in non-triggered scans after registration (P < 0.05). RMSEs were significantly lower in triggered than in non-triggered scans, both with and without registration (P < 0.05). CONCLUSION: Respiratory motion correction by registration of individual echo-planar images leads to clearly reduced signal variations in renal DTI for both triggered and particularly non-triggered scans. Secondarily, the results suggest that respiratory triggering still seems advantageous. J. Magn. Reson. Imaging 2014. (c) 2014 Wiley Periodicals, Inc.
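The abstract does not spell out the signal model behind ADC and FP; one common choice for multi-b-value renal data is a bi-exponential (perfusion plus diffusion) fit, sketched here with illustrative parameter values:

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, s0, adc, fp, dstar):
    """Bi-exponential model: tissue diffusion plus a perfusion fraction fp."""
    return s0 * ((1 - fp) * np.exp(-b * adc) + fp * np.exp(-b * dstar))

b = np.array([10, 50, 100, 200, 300, 400, 500, 600, 700], float)  # s/mm^2
signal = ivim(b, 1.0, 1.8e-3, 0.2, 2.0e-2)  # synthetic, noise-free voxel

params, _ = curve_fit(ivim, b, signal,
                      p0=[1.0, 1.5e-3, 0.1, 1.0e-2],
                      bounds=([0, 0, 0, 0], [2, 5e-3, 1, 1e-1]))
# RMSE of the fit: the stability metric compared before/after registration.
rmse = np.sqrt(np.mean((ivim(b, *params) - signal) ** 2))
```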
Abstract:
Image denoising continues to be an active research topic. Although state-of-the-art denoising methods are numerically impressive and approach theoretical limits, they suffer from visible artifacts. While they produce acceptable results for natural images, human eyes are less forgiving when viewing synthetic images. At the same time, current methods are becoming more complex, making analysis and implementation difficult. We propose image denoising as a simple physical process, which progressively reduces noise by deterministic annealing. The results of our implementation are numerically and visually excellent. We further demonstrate that our method is particularly suited for synthetic images. Finally, we offer a new perspective on image denoising using robust estimators.
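The abstract gives no algorithmic detail; purely as a toy illustration of progressive noise reduction under a deterministic cooling schedule (our reading of the idea, not the authors' method), consider:

```python
import numpy as np

def anneal_denoise(img, steps=50, t0=1.0, cooling=0.9):
    """Progressively smooth an image as a 'temperature' decreases.

    Each pass blends a pixel with its 4-neighbour mean; the blend weight
    shrinks every iteration, so early passes remove coarse noise and
    later passes preserve detail.
    """
    x = img.astype(float)
    t = t0
    for _ in range(steps):
        nb = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
              np.roll(x, 1, 1) + np.roll(x, -1, 1)) / 4.0
        x = (1 - t) * x + t * nb   # stronger smoothing at high temperature
        t *= cooling               # deterministic annealing schedule
    return x
```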
Abstract:
In this paper, we describe dynamic unicast, an approach to increase communication efficiency in opportunistic information-centric networks (ICN). The approach is based on broadcasting requests to quickly find content and dynamically creating unicast links to content sources without the need for neighbor discovery. The links are kept temporarily as long as they deliver content and are quickly removed otherwise. Evaluations in mobile networks show that this approach maintains ICN flexibility to support seamless mobile communication and achieves up to 56.6% shorter transmission times compared to broadcast in the case of multiple concurrent requesters. Apart from that, dynamic unicast unburdens listener nodes from processing unwanted content, resulting in lower processing overhead and power consumption at these nodes. The approach can easily be included in existing ICN architectures using only available data structures.
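A minimal sketch of the link bookkeeping this implies, assuming a per-prefix table of content sources with a delivery-refreshed timeout (class and method names are ours, not the paper's):

```python
import time

class DynamicUnicast:
    """Temporary unicast links to content sources, refreshed on delivery."""

    def __init__(self, timeout=2.0):
        self.links = {}       # content prefix -> (source address, last delivery)
        self.timeout = timeout

    def next_hop(self, prefix):
        entry = self.links.get(prefix)
        if entry and time.monotonic() - entry[1] < self.timeout:
            return entry[0]           # live unicast link: skip broadcast
        self.links.pop(prefix, None)  # stopped delivering: drop the link
        return "broadcast"            # fall back to broadcast discovery

    def on_content(self, prefix, source):
        # Create or refresh the link whenever the source delivers content.
        self.links[prefix] = (source, time.monotonic())
```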
Abstract:
Clock synchronization in the order of nanoseconds is one of the critical factors for time-based localization. Currently used time synchronization methods were developed for the more relaxed needs of network operation, so their usability for positioning should be carefully evaluated. In this paper, we are particularly interested in GPS-based time synchronization. To judge its usability for localization, we need a method that can evaluate the achieved time synchronization with nanosecond accuracy. Our method for evaluating synchronization accuracy is inspired by signal processing algorithms and relies on fine-grained time information. The method is able to calculate the clock offset and skew between devices with nanosecond accuracy in real time. It was implemented using software-defined radio technology. We demonstrate that GPS-based synchronization suffers from a remaining clock offset in the range of a few hundred nanoseconds, but that the clock skew is negligible. Finally, we determine a corresponding lower bound on the expected positioning error.
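As a simplified sketch of the offset/skew picture (not the paper's signal-processing method itself), a linear fit of paired timestamps recovers both quantities:

```python
import numpy as np

def offset_and_skew(t_local, t_remote):
    """Fit t_remote ~ (1 + skew) * t_local + offset from paired timestamps."""
    slope, intercept = np.polyfit(t_local, t_remote, deg=1)
    return intercept, slope - 1.0  # offset in seconds, dimensionless skew

# Toy usage: a 100 ns offset and 1 ppb skew planted in synthetic data.
t = np.linspace(0.0, 10.0, 1_000)
offset, skew = offset_and_skew(t, (1 + 1e-9) * t + 100e-9)
```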
Abstract:
Femoroacetabular impingement (FAI) before or after periacetabular osteotomy (PAO) is surprisingly frequent, and surgeons need to be aware of the risk preoperatively and be able to avoid it intraoperatively. In this paper we present a novel computer-assisted planning and navigation system for PAO with impingement analysis and range of motion (ROM) optimization. Our system starts with a fully automatic detection of the acetabular rim, which allows quantification of the acetabular morphology with parameters such as acetabular version, inclination, and femoral head coverage ratio for computer-assisted diagnosis and planning. The planned situation was optimized through impingement simulation, balancing acetabular coverage with ROM. Intra-operatively, navigation was conducted until the optimized planning situation was achieved. Our experimental results demonstrated that: 1) the fully automated acetabular rim detection was validated with an accuracy of 1.1 ± 0.7 mm; 2) the optimized PAO planning improved ROM significantly compared to planning without ROM optimization; 3) comparing the pre-operatively planned situation with the intra-operatively achieved situation, sub-degree accuracy was achieved in all directions.
Abstract:
The domain of context-free languages has been extensively explored and there exist numerous techniques for parsing (all or a subset of) context-free languages. Unfortunately, some programming languages are not context-free. Using standard context-free parsing techniques to parse a context-sensitive programming language poses a considerable challenge. Implementors of programming language parsers have adopted various techniques, such as hand-written parsers, special lexers, or post-processing of an ambiguous parser output, to deal with that challenge. In this paper we suggest a simple extension of a top-down parser with contextual information. Contrary to the traditional approach that uses only the input stream as an input to a parsing function, we use a parsing context that provides access to a stream and possibly to other context-sensitive information. At the same time we keep the context-free formalism, so a grammar definition stays simple, without mind-blowing context-sensitive rules. We show that our approach can be used for various purposes, such as indent-sensitive parsing, high-precision island parsing, or parsing XML with arbitrary element names. We demonstrate our solution with PetitParser, a parsing-expression-grammar-based, top-down parser combinator framework written in Smalltalk.
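PetitParser itself is written in Smalltalk; as a language-neutral sketch of the core idea (threading a parsing context, i.e., stream position plus extra state, through parsing functions instead of a bare stream), an indent-sensitive fragment in Python might look like:

```python
class Context:
    """Parsing context: the input stream plus context-sensitive state."""

    def __init__(self, text):
        self.text, self.pos = text, 0
        self.indents = [0]   # state a plain input stream cannot carry

def indent(ctx):
    """Succeed only if the current line is indented deeper than the last.

    Grammar rules keep their context-free shape; only this combinator
    consults (and updates) the context's indentation stack.
    """
    col = 0
    while ctx.text[ctx.pos + col : ctx.pos + col + 1] == " ":
        col += 1
    if col > ctx.indents[-1]:
        ctx.indents.append(col)
        ctx.pos += col
        return True
    return False
```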