959 results for local processing bias


Relevance:

20.00%

Publisher:

Abstract:

This paper presents an effective feature representation method in the context of activity recognition. Efficient and effective feature representation plays a crucial role not only in activity recognition, but also in a wide range of applications such as motion analysis, tracking, and 3D scene understanding. In the context of activity recognition, local features are increasingly popular for representing videos because of their simplicity and efficiency. While they achieve state-of-the-art performance with low computational requirements, their performance is still limited for real-world applications due to a lack of contextual information and models not being tailored to specific activities. We propose a new activity representation framework to address the shortcomings of the popular but simple bag-of-words approach. In our framework, first, multiple-instance SVM (mi-SVM) is used to identify positive features for each action category, and the k-means algorithm is used to generate a codebook. Then locality-constrained linear coding is used to encode the features into the generated codebook, followed by spatio-temporal pyramid pooling to convey the spatio-temporal statistics. Finally, an SVM is used to classify the videos. Experiments carried out on two popular datasets of varying complexity demonstrate significant performance improvement over the baseline bag-of-features method.
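
The encoding stages of this pipeline are compact enough to sketch directly. Below is a minimal Python illustration of the codebook construction, locality-constrained linear coding (LLC), and max pooling, assuming local video descriptors have already been extracted; the mi-SVM feature selection and the spatio-temporal pyramid are omitted, and all function names and parameter values are illustrative rather than taken from the paper.

# A minimal sketch, assuming arrays of pre-extracted local video descriptors.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def build_codebook(descriptors, n_words=256, seed=0):
    """Cluster local descriptors with k-means to form a visual codebook."""
    km = KMeans(n_clusters=n_words, n_init=10, random_state=seed)
    km.fit(descriptors)
    return km.cluster_centers_

def llc_encode(x, codebook, knn=5, beta=1e-4):
    """Locality-constrained linear coding for one descriptor: reconstruct x
    from its knn nearest codewords under a sum-to-one constraint."""
    d = np.linalg.norm(codebook - x, axis=1)
    idx = np.argsort(d)[:knn]                 # the knn nearest codewords
    B = codebook[idx] - x                     # basis shifted to the descriptor
    C = B @ B.T + beta * np.eye(knn)          # regularised local covariance
    w = np.linalg.solve(C, np.ones(knn))
    w /= w.sum()                              # enforce sum(w) == 1
    code = np.zeros(len(codebook))
    code[idx] = w
    return code

def encode_video(descriptors, codebook):
    """Max-pool the LLC codes of all descriptors into one video-level vector."""
    return np.stack([llc_encode(x, codebook) for x in descriptors]).max(axis=0)

# Illustrative use: one pooled vector per video, then a linear SVM classifier.
# codebook = build_codebook(np.vstack(videos))      # videos: descriptor arrays
# X = np.stack([encode_video(v, codebook) for v in videos])
# clf = LinearSVC().fit(X, labels)                  # labels: action classes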

Relevance:

20.00%

Publisher:

Abstract:

The recently established moderate-size free-piston-driven hypersonic shock tunnel HST3, along with its calibration, is described here. The extreme thermodynamic conditions prevailing behind the reflected shock wave have been utilized to study the catalytic and non-catalytic reactions of shock-heated test gases such as Ar, N2, or O2 with different materials such as C60 carbon, zirconia, and ceria-substituted zirconia. The exposed test samples are investigated using different experimental methods. These studies show the formation of carbon nitride due to the non-catalytic interaction of shock-heated nitrogen gas with a C60 carbon film. On the other hand, ZrO2 undergoes only a phase transformation from cubic to monoclinic structure, while Ce0.5Zr0.5O2 in the fluorite cubic phase changes to the pyrochlore (Ce2Zr2O7±δ) phase by releasing oxygen from the lattice due to a heterogeneous catalytic surface reaction.

Relevance:

20.00%

Publisher:

Abstract:

Increasingly, there is growing acknowledgement of the importance of franchising within all modern global economies. Despite this, little is understood with regard to the actual impact of franchising on local economies. This research aims to reframe the contribution of franchising by considering the process of franchisation. This study employed a mixed-method approach, utilizing critical realism to facilitate an outcomes-based explanation of firm survival. The focus of the study was upon generative mechanisms that were assumed to give rise to particular events from which (pizza) firm survival was enhanced vis-à-vis all other community members. A database of 2440 firms (or in excess of 21,000 company years), combined with archival records, interviews, and the researcher's observations, provided the researcher with access to the nature of interaction occurring between firms. It was found that the survival of local firms was influenced positively by the day-to-day actions of franchise operators. However, it is argued that to understand how any such advantage may fall to local independent firms, we need to better appreciate the multitude of local processes related to such industries. This research re-examines several ecological concepts with a view to enabling a clearer investigation of underlying local processes. It also represents an authentic autecological approach to the study of firms.

Relevance:

20.00%

Publisher:

Abstract:

This thesis, which consists of an introduction and four peer-reviewed original publications, studies the problems of haplotype inference (haplotyping) and local alignment significance. The problems studied here belong to the broad area of bioinformatics and computational biology. The presented solutions are computationally fast and accurate, which makes them practical in high-throughput sequence data analysis. Haplotype inference is a computational problem where the goal is to estimate haplotypes from a sample of genotypes as accurately as possible. This problem is important because the direct measurement of haplotypes is difficult, whereas genotypes are easier to quantify. Haplotypes are the key players when studying, for example, the genetic causes of diseases. In this thesis, three methods are presented for the haplotype inference problem, referred to as HaploParser, HIT, and BACH. HaploParser is based on a combinatorial mosaic model and hierarchical parsing that together mimic recombinations and point mutations in a biologically plausible way. In this mosaic model, the current population is assumed to have evolved from a small founder population; thus, the haplotypes of the current population are recombinations of the (implicit) founder haplotypes with some point mutations. HIT (Haplotype Inference Technique) uses a hidden Markov model for haplotypes, and efficient algorithms are presented to learn this model from genotype data. The model structure of HIT is analogous to the mosaic model of HaploParser with founder haplotypes; therefore, it can be seen as a probabilistic model of recombinations and point mutations. BACH (Bayesian Context-based Haplotyping) utilizes a context tree weighting algorithm to efficiently sum over all variable-length Markov chains in order to evaluate the posterior probability of a haplotype configuration. Algorithms are presented that find haplotype configurations with high posterior probability. BACH is the most accurate method presented in this thesis and has performance comparable to the best available software for haplotype inference. Local alignment significance is a computational problem where one is interested in whether the local similarities between two sequences are due to the sequences being related or merely due to chance. The similarity of sequences is measured by their best local alignment score, from which a p-value is computed. This p-value is the probability of picking two sequences from the null model that have an equally good or better best local alignment score. Local alignment significance is used routinely, for example, in homology searches. In this thesis, a general framework is sketched that allows one to compute a tight upper bound for the p-value of a local pairwise alignment score. Unlike previous methods, the presented framework is not affected by so-called edge effects and can handle gaps (deletions and insertions) without troublesome sampling and curve fitting.
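
Stated in standard notation (not notation drawn from the thesis itself): if s is the observed best local alignment score, the p-value is

$$ p(s) \;=\; \Pr_{(X,Y)\,\sim\,\text{null}}\left[\, S(X,Y) \ge s \,\right], $$

where S(X,Y) denotes the best local alignment score (e.g., the Smith-Waterman score) of two sequences X and Y drawn independently from the null model.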

Relevance:

20.00%

Publisher:

Abstract:

The paradigm of computational vision hypothesizes that any visual function, such as the recognition of your grandparent, can be replicated by computational processing of the visual input. What are these computations that the brain performs? What should or could they be? Working on the latter question, this dissertation takes the statistical approach, in which one attempts to learn the suitable computations from natural visual data itself. In particular, we empirically study the computational processing that emerges from the statistical properties of the visual world and from the constraints and objectives specified for the learning process. This thesis consists of an introduction and 7 peer-reviewed publications, where the purpose of the introduction is to illustrate the area of study to a reader who is not familiar with computational vision research. In the scope of the introduction, we briefly overview the primary challenges to visual processing, as well as recall some of the current opinions on visual processing in the early visual systems of animals. Next, we describe the methodology we have used in our research and discuss the presented results. We have included in this discussion some additional remarks, speculations, and conclusions that were not featured in the original publications. We present the following results in the publications of this thesis. First, we empirically demonstrate that luminance and contrast are strongly dependent in natural images, contradicting previous theories suggesting that luminance and contrast are processed separately in natural systems due to their independence in the visual data. Second, we show that simple-cell-like receptive fields of the primary visual cortex can be learned in the nonlinear contrast domain by maximization of independence. Further, we provide first-time reports of the emergence of conjunctive (corner-detecting) and subtractive (opponent-orientation) processing due to nonlinear projection pursuit with simple objective functions related to sparseness and response energy optimization. Then, we show that attempting to extract independent components of nonlinear histogram statistics of a biologically plausible representation leads to projection directions that appear to differentiate between visual contexts. Such processing might be applicable for priming, i.e., the selection and tuning of later visual processing. We continue by showing that a different kind of thresholded low-frequency priming can be learned and used to make object detection faster with little loss in accuracy. Finally, we show that in a computational object detection setting, nonlinearly gain-controlled visual features of medium complexity can be acquired sequentially as images are encountered and discarded. We present two online algorithms to perform this feature selection, and propose the idea that for artificial systems, some processing mechanisms could be selected from the environment without optimizing the mechanisms themselves. In summary, this thesis explores learning visual processing on several levels. The learning can be understood as an interplay of input data, model structures, learning objectives, and estimation algorithms. The presented work adds to the growing body of evidence showing that statistical methods can be used to acquire intuitively meaningful visual processing mechanisms. The work also presents some predictions and ideas regarding biological visual processing.
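
The second result above, learning receptive fields by maximising independence, has a well-known linear analogue that is easy to sketch. The following Python fragment runs independent component analysis on natural-image patches; with real image data the learned components come out as localised, oriented, Gabor-like filters. The data source and all parameter values are illustrative assumptions, and the thesis's own nonlinear contrast-domain setting is not reproduced here.

# A minimal sketch, assuming a grayscale natural image is available as `image`.
import numpy as np
from sklearn.decomposition import FastICA

def sample_patches(image, patch=8, n=20000, seed=0):
    """Draw random square patches from a 2-D grayscale image array."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    ys = rng.integers(0, h - patch, n)
    xs = rng.integers(0, w - patch, n)
    X = np.stack([image[y:y + patch, x:x + patch].ravel()
                  for y, x in zip(ys, xs)])
    return X - X.mean(axis=1, keepdims=True)   # remove per-patch mean (DC)

# X = sample_patches(image)
# ica = FastICA(n_components=48, whiten="unit-variance", max_iter=500).fit(X)
# filters = ica.components_.reshape(-1, 8, 8)  # localised, oriented filters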

Relevance:

20.00%

Publisher:

Abstract:

This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application. A sensor node consumes energy both when it is transmitting or forwarding data and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; we arrive at the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network, simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are interested not only in detecting an intruder but also in locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating-dominating codes are more appropriate. This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating-dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs; these are geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating-dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
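
To make the graph-theoretic side concrete, here is the textbook greedy approximation for minimum dominating set, the baseline problem from which the identifying-code and locating-dominating-code formulations depart. This is the classical centralised O(log n)-approximation, not one of the thesis's local algorithms, and the adjacency-list input format is an illustrative assumption.

# A minimal sketch of the classical greedy dominating-set approximation.
def greedy_dominating_set(adj):
    """adj: dict mapping each node to the set of its neighbours."""
    undominated = set(adj)                    # nodes not yet covered
    chosen = set()
    while undominated:
        # pick the node whose closed neighbourhood covers most uncovered nodes
        best = max(adj, key=lambda v: len((adj[v] | {v}) & undominated))
        chosen.add(best)
        undominated -= adj[best] | {best}
    return chosen

# Example: on a 4-cycle, two adjacent picks suffice to dominate all nodes.
# g = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
# print(greedy_dominating_set(g))             # e.g. {0, 1}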

Relevance:

20.00%

Publisher:

Abstract:

The following problem is considered: given the locations of the Central Processing Unit (CPU) and the terminals which have to communicate with it, determine the number and locations of the concentrators and assign the terminals to the concentrators in such a way that the total cost is minimized. There is also a fixed cost associated with each concentrator, and there is an upper limit to the number of terminals which can be connected to a concentrator; the terminals can also be connected directly to the CPU. In this paper it is assumed that the concentrators can be located anywhere in the area A containing the CPU and the terminals, which makes this a multimodal optimization problem. In the proposed algorithm a stochastic automaton is used as a search device to locate the minimum of the multimodal cost function. The proposed algorithm involves the following. The area A containing the CPU and the terminals is divided into an arbitrary number of regions (say K). An approximate value for the number of concentrators (say m) is assumed; the optimum number is determined later by iteration. The m concentrators can be assigned to the K regions in m^K ways (m > K) or K^m ways (K > m); all possible assignments are feasible, i.e. a region can contain 0, 1, …, m concentrators. Each possible assignment is taken to represent a state of the stochastic variable-structure automaton. To start with, all the states are assigned equal probabilities. At each stage of the search the automaton visits a state according to the current probability distribution, and at each visit it selects a 'point' inside that state with uniform probability. The cost associated with that point is calculated and the average cost of that state is updated; then the probabilities of all the states are updated, being taken to be inversely proportional to the average costs of the states. After a certain number of searches the search probabilities become stationary and the automaton visits a particular state again and again; the automaton is then said to have converged to that state. A local gradient search within that state then determines the exact locations of the concentrators. This algorithm was applied to a set of test problems and the results were compared with those given by Cooper's (1964, 1967) EAC algorithm; on average, the proposed algorithm was found to perform better.
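
The search loop is simple enough to paraphrase in code. The sketch below implements a toy version with a single concentrator in the unit square, K vertical strips playing the role of regions/states, and total terminal distance as the cost; it is a simplified illustration of the automaton's probability updates, not the paper's implementation, and every name and parameter value is an assumption.

# A minimal sketch of the variable-structure automaton search (toy version).
import numpy as np

rng = np.random.default_rng(0)
terminals = rng.random((30, 2))          # assumed terminal coordinates
K = 5                                    # number of regions (states)

def cost(point):
    """Toy cost: total distance from every terminal to one concentrator."""
    return np.linalg.norm(terminals - point, axis=1).sum()

probs = np.full(K, 1.0 / K)              # start with equal state probabilities
avg_cost = np.zeros(K)
visits = np.zeros(K)

for step in range(2000):
    s = rng.choice(K, p=probs)           # visit a state per current distribution
    point = np.array([(s + rng.random()) / K, rng.random()])  # uniform in strip
    visits[s] += 1
    avg_cost[s] += (cost(point) - avg_cost[s]) / visits[s]    # running average
    # state probabilities inversely proportional to average costs seen so far
    inv = np.where(visits > 0, 1.0 / np.maximum(avg_cost, 1e-9), 1.0 / K)
    probs = inv / inv.sum()

best = probs.argmax()
print(f"automaton converged towards strip {best}; a local gradient search "
      f"inside this strip would then refine the concentrator locations")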

Relevance:

20.00%

Publisher:

Abstract:

Due to the advent of varied types of masonry systems, a comprehensive failure mechanism of masonry, which is essential for understanding its behaviour, cannot be determined from experimental testing alone. As masonry is predominantly used in wall structures, a biaxial stress state dominates its failure mechanism. Biaxial testing would therefore be necessary for each type of masonry, which is expensive and time-consuming. A computational method would be advantageous; however, masonry is complex to model and requires advanced computational modelling methods. This thesis formulates a damage-mechanics-inspired modelling method and shows that the method effectively determines the failure mechanisms and deformation characteristics of masonry under biaxial states of loading.

Relevance:

20.00%

Publisher:

Abstract:

The TWIK-related K+ channel TREK1, a background leak K+ channel, has been strongly implicated as the target of several general and local anesthetics. Here, using the whole-cell and single-channel patch-clamp techniques, we investigated the effect of lidocaine, a local anesthetic, on the human (h) TREK1 channel heterologously expressed in human embryonic kidney 293 cells by an adenoviral-mediated expression system. Lidocaine, at clinical concentrations, produced reversible, concentration-dependent inhibition of hTREK1 current, with an IC50 value of 180 μM, by reducing the single-channel open probability and stabilizing the closed state. We have identified a strategically placed unique aromatic couplet (Tyr352 and Phe355) in the vicinity of the protein kinase A phosphorylation site, Ser348, in the C-terminal domain (CTD) of hTREK1 that is critical for the action of lidocaine. Furthermore, the phosphorylation state of Ser348 was found to have a regulatory role in lidocaine-mediated inhibition of hTREK1. Interestingly, we observed strong intersubunit negative cooperativity (Hill coefficient = 0.49) and half-of-sites saturation binding stoichiometry (half-reaction order) for the binding of lidocaine to hTREK1. Studies with the heterodimer of wild-type (wt) hTREK1 and the Δ119 C-terminal deletion mutant (hTREK1wt-Δ119) revealed that a single CTD of hTREK1 is capable of mediating partial inhibition by lidocaine, but complete inhibition necessitates cooperative interaction between both CTDs upon binding of lidocaine. Based on our observations, we propose a model that explains the unique kinetics and provides a plausible paradigm for the inhibitory action of lidocaine on hTREK1.
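
For reference, the concentration-response parameters quoted above correspond to the standard Hill model of inhibition (a textbook form, not an equation reproduced from the paper):

$$ \frac{I}{I_{\max}} \;=\; \frac{[\mathrm{L}]^{\,n_H}}{\mathrm{IC}_{50}^{\,n_H} + [\mathrm{L}]^{\,n_H}}, $$

where [L] is the lidocaine concentration, IC50 ≈ 180 μM, and n_H = 0.49; a Hill coefficient below 1 is the hallmark of the negative intersubunit cooperativity reported here.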

Relevance:

20.00%

Publisher:

Abstract:

In most parts of the world, screen media workers—actors, directors, gaffers, and makeup artists—consider Hollywood to be glamorous and aspirational. If given the opportunity to work on a major studio lot, many would make the move, believing the standards of professionalism are high and the history of accomplishment is renowned. Moreover, as a global leader, Hollywood offers the chance to rub shoulders with talented counterparts and network with an elite labor force that earns top-tier pay and benefits. Yet despite this reputation, veterans say the view from inside isn’t so rosy, that working conditions have been deteriorating since the 1990s if not earlier. This grim outlook is supported by industry statistics that show the number of good jobs has been shrinking as studios outsource production to Atlanta, London, and Budapest, among others...

Relevance:

20.00%

Publisher:

Abstract:

The modified local stability scheme is applied to several two-dimensional problems: blunt-body flow, regular reflection of a shock, and the lambda shock. The resolution of the flow features obtained by the modified local stability scheme is found to be better than that achieved by other first-order schemes and almost identical to that achieved by second-order schemes incorporating artificial viscosity. The scheme is easy to code and consumes a moderate amount of computer storage and time. The scheme can be advantageously used in place of second-order schemes.