804 results for 2005-07-BS


Relevance: 80.00%

Publisher:

Abstract:

Accurate head tilt detection has great potential to aid people with disabilities in the use of human-computer interfaces and to provide universal access to communication software. We show how it can be utilized to tab through links on a web page or control a video game with head motions. It may also be useful as a correction method for currently available video-based assistive technology that requires upright facial poses. Few of the existing computer vision methods that detect head rotations in and out of the image plane with reasonable accuracy can operate within the context of a real-time communication interface, because the computational expense they incur is too great. Our method uses a variety of metrics to obtain a robust head tilt estimate without incurring the computational cost of previous methods. Our system runs in real time on a computer with a 2.53 GHz processor, 256 MB of RAM and an inexpensive webcam, using only 55% of the processor cycles.
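The abstract does not spell out the metrics used; as a purely hypothetical illustration of why in-plane tilt is cheap to compute, the angle can be recovered from two tracked eye centers with a single `atan2`:

```python
import math

def head_tilt_degrees(left_eye, right_eye):
    """In-plane head tilt from two tracked eye centers (x, y) in image
    coordinates (y grows downward); 0 degrees means an upright pose.
    This is an illustrative sketch, not the paper's method."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

# A 10-pixel vertical offset across a 100-pixel eye span: a small tilt.
angle = head_tilt_degrees((100, 200), (200, 210))
```

A threshold on such an angle could then drive tabbing or game controls, at negligible per-frame cost.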


Facial features play an important role in expressing grammatical information in signed languages, including American Sign Language (ASL). Gestures such as raising or furrowing the eyebrows are key indicators of constructions such as yes-no questions. Periodic head movements (nods and shakes) are also an essential part of the expression of syntactic information, such as negation (associated with a side-to-side headshake). Therefore, identification of these facial gestures is essential to sign language recognition. One problem with detection of such grammatical indicators is occlusion recovery. If the signer's hand blocks his/her eyebrows during production of a sign, it becomes difficult to track the eyebrows. We have developed a system to detect such grammatical markers in ASL that recovers promptly from occlusion. Our system detects and tracks evolving templates of facial features, which are based on an anthropometric face model, and interprets the geometric relationships of these templates to identify grammatical markers. It was tested on a variety of ASL sentences signed by various Deaf native signers and detected facial gestures used to express grammatical information, such as raised and furrowed eyebrows as well as headshakes.
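The paper's anthropometric model is not reproduced here, but the underlying idea of interpreting geometric relationships between tracked feature templates can be sketched minimally; the threshold below is a hypothetical placeholder, not a value from the paper:

```python
def eyebrow_raised(brow_y, eye_y, interocular_px, threshold=0.25):
    """Flag a raised eyebrow when the brow-to-eye distance, normalized
    by the inter-ocular distance for scale invariance, exceeds a
    threshold (0.25 is an illustrative placeholder). Image y
    coordinates grow downward."""
    ratio = (eye_y - brow_y) / interocular_px
    return ratio > threshold
```

Normalizing by inter-ocular distance makes the test insensitive to how close the signer sits to the camera.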


A learning-based framework is proposed for estimating human body pose from a single image. Given a differentiable function that maps from pose space to image feature space, the goal is to invert the process: estimate the pose given only image features. The inversion is an ill-posed problem, as the inverse mapping is one-to-many. Hence multiple solutions exist, and it is desirable to restrict the solution space to a smaller subset of feasible solutions. For example, not all human body poses are feasible due to anthropometric constraints. Since the space of feasible solutions may not admit a closed form description, the proposed framework seeks to exploit machine learning techniques to learn an approximation that is smoothly parameterized over such a space. One such technique is Gaussian Process Latent Variable Modelling. Scaled conjugate gradient is then used to find the best matching pose in the space of feasible solutions when given an input image. The formulation allows easy incorporation of various constraints, e.g. temporal consistency and anthropometric constraints. The performance of the proposed approach is evaluated in the task of upper-body pose estimation from silhouettes and compared with the Specialized Mapping Architecture. The estimation accuracy of the Specialized Mapping Architecture is at least one standard deviation worse than the proposed approach in the experiments with synthetic data. In experiments with real video of humans performing gestures, the proposed approach produces qualitatively better estimation results.
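The search over the feasible space can be sketched in one dimension; the decoder below is a toy stand-in for the learned GPLVM mapping, and grid search stands in for scaled conjugate gradient:

```python
import numpy as np

def decode(z):
    """Toy stand-in for a smooth learned latent-to-feature mapping
    (the paper uses a GPLVM; this is only a 1-D illustration)."""
    return np.array([np.sin(z), np.cos(z)])

def estimate_pose(features, z_grid):
    """Search the feasible latent space for the pose whose decoded
    features best match the observed ones (grid search here, where
    the paper uses scaled conjugate gradient)."""
    errors = [float(np.sum((decode(z) - features) ** 2)) for z in z_grid]
    return float(z_grid[int(np.argmin(errors))])

# Recover a known latent pose from its own decoded features.
z_hat = estimate_pose(decode(0.7), np.linspace(-np.pi, np.pi, 629))
```

Because the feasible space is low-dimensional and the mapping smooth, even this naive search converges; a gradient method exploits the same smoothness far more efficiently.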


Although cooperation generally increases the amount of resources available to a community of nodes, thus improving individual and collective performance, it also allows for the appearance of potential mistreatment problems by exposing one node’s resources to others. We study such concerns by considering a group of independent, rational, self-aware nodes that cooperate using online caching algorithms, where the exposed resource is the storage of each node. Motivated by content networking applications – including web caching, CDNs, and P2P – this paper extends our previous work on the off-line version of the problem, which was limited to object replication and was conducted under a game-theoretic framework. We identify and investigate two causes of mistreatment: (1) cache state interactions (due to the cooperative servicing of requests) and (2) the adoption of a common scheme for cache replacement/redirection/admission policies. Using analytic models, numerical solutions of these models, as well as simulation experiments, we show that online cooperation schemes using caching are fairly robust to mistreatment caused by state interactions. For such mistreatment to become possible at all, the interaction through the exchange of miss-streams has to be very intense, which makes it feasible for the mistreated nodes to detect and react to the exploitation. This robustness ceases to exist when nodes fetch and store objects in response to remote requests, i.e., when they operate as Level-2 caches (or proxies) for other nodes. Regarding mistreatment due to a common scheme, we show that this can easily take place when the “outlier” characteristics of some of the nodes get overlooked. This finding underscores the importance of allowing cooperative caching nodes the flexibility of choosing from a diverse set of schemes to fit the peculiarities of individual nodes.
To that end, we outline an emulation-based framework for the development of mistreatment-resilient distributed selfish caching schemes.
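The Level-2 distinction that drives the robustness result can be made concrete with a hypothetical sketch (not the paper's model): a node that serves remote hits from its cache but refuses to fetch-and-store on behalf of peers keeps its cache state insulated from remote miss-streams.

```python
from collections import OrderedDict

class SelfishCache:
    """Minimal LRU cache sketch (illustrative only). With proxy=False
    the node answers remote hits from its cache but refuses to
    fetch-and-store on remote misses, i.e. it declines to act as a
    Level-2 cache for its peers."""

    def __init__(self, capacity, proxy=False):
        self.capacity = capacity
        self.proxy = proxy
        self.store = OrderedDict()

    def request(self, obj, remote=False):
        if obj in self.store:                # hit: refresh recency order
            self.store.move_to_end(obj)
            return True
        if remote and not self.proxy:        # remote miss, not serving as proxy
            return False
        self.store[obj] = True               # local (or proxied) fetch-and-store
        if len(self.store) > self.capacity:  # evict least recently used
            self.store.popitem(last=False)
        return False
```

In the proxy configuration, every remote miss perturbs the node's LRU state, which is exactly the channel through which mistreatment arises in the paper's analysis.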


A difficulty in lung image registration is accounting for changes in the size of the lungs due to inspiration. We propose two methods for computing a uniform scale parameter for use in lung image registration that account for size change. A scaled rigid-body transformation allows analysis of corresponding lung CT scans taken at different times and can serve as a good low-order transformation to initialize non-rigid registration approaches. Two different features are used to compute the scale parameter. The first method uses lung surfaces. The second uses lung volumes. Both approaches are computationally inexpensive and improve the alignment of lung images over rigid registration. The two methods produce different scale parameters and may highlight different functional information about the lungs.
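The volume-based variant reduces to simple geometry: an isotropic scaling of lengths by s scales volume by s cubed, so the scale parameter is the cube root of the volume ratio. A minimal sketch (the surface-based variant is not reproduced here):

```python
def volume_scale(volume_fixed, volume_moving):
    """Uniform isotropic scale matching lung volumes between two CT
    scans: scaling lengths by s scales volume by s**3, so
    s = (V_fixed / V_moving) ** (1/3)."""
    return (volume_fixed / volume_moving) ** (1.0 / 3.0)

# Hypothetical lung volumes (litres) at two inspiration levels.
s = volume_scale(6.0, 4.8)
```

Applying this scale before non-rigid registration removes the gross size change due to inspiration, leaving the deformable step to model local differences.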


The Border Gateway Protocol (BGP) is the current inter-domain routing protocol used to exchange reachability information between Autonomous Systems (ASes) in the Internet. BGP supports policy-based routing which allows each AS to independently adopt a set of local policies that specify which routes it accepts and advertises from/to other networks, as well as which route it prefers when more than one route becomes available. However, independently chosen local policies may cause global conflicts, which result in protocol divergence. In this paper, we propose a new algorithm, called Adaptive Policy Management Scheme (APMS), to resolve policy conflicts in a distributed manner. Akin to distributed feedback control systems, each AS independently classifies the state of the network as either conflict-free or potentially-conflicting by observing its local history only (namely, route flaps). Based on the degree of measured conflicts (policy conflict-avoidance vs. -control mode), each AS dynamically adjusts its own path preferences—increasing its preference for observably stable paths over flapping paths. APMS also includes a mechanism to distinguish route flaps due to topology changes, so as not to confuse them with those due to policy conflicts. A correctness and convergence analysis of APMS based on the substability property of chosen paths is presented. Implementation in the SSF network simulator is performed, and simulation results for different performance metrics are presented. The metrics capture the dynamic performance (in terms of instantaneous throughput, delay, routing load, etc.) of APMS and other competing solutions, thus exposing the often neglected aspects of performance.
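The core feedback idea, penalizing paths in proportion to their observed flapping, can be sketched as follows; this is a toy illustration of the mechanism, not the actual APMS preference rule:

```python
def adjust_preferences(prefs, flap_counts, penalty=1):
    """Lower the preference of paths observed flapping in the local
    route history, nudging the AS toward observably stable paths
    (illustrative sketch; penalty weight is hypothetical)."""
    return {path: pref - penalty * flap_counts.get(path, 0)
            for path, pref in prefs.items()}

# Two equally preferred paths; p2 has been flapping.
prefs = adjust_preferences({"p1": 100, "p2": 100}, {"p2": 30})
best_path = max(prefs, key=prefs.get)
```

Because each AS uses only its local flap history, the adjustment needs no global coordination, which is what makes the scheme distributed.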


Particle filtering is a popular method used in systems for tracking human body pose in video. One key difficulty in using particle filtering is caused by the curse of dimensionality: generally a very large number of particles is required to adequately approximate the underlying pose distribution in a high-dimensional state space. Although the number of degrees of freedom in the human body is quite large, in reality, the subset of allowable configurations in state space is generally restricted by human biomechanics, and the trajectories in this allowable subspace tend to be smooth. Therefore, a framework is proposed to learn a low-dimensional representation of the high-dimensional human pose state space. This mapping can be learned using a Gaussian Process Latent Variable Model (GPLVM) framework. One important advantage of the GPLVM framework is that both the mapping to, and the mapping from, the embedded space are smooth; this facilitates sampling in the low-dimensional space, and samples generated in the low-dimensional embedded space are easily mapped back into the original high-dimensional space. Moreover, human body poses that are similar in the original space tend to be mapped close to each other in the embedded space; this property can be exploited when sampling in the embedded space. The proposed framework is tested in tracking 2D human body pose using a Scaled Prismatic Model. Experiments on real-life video sequences demonstrate the strength of the approach. In comparison with Multiple Hypothesis Tracking and the standard Condensation algorithm, the proposed algorithm is able to maintain tracking reliably throughout the long test sequences. It also handles singularities and self-occlusion robustly.
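The benefit of sampling in the embedded space can be sketched with a 1-D toy filter; here a sine function stands in for the smooth latent-to-observation mapping (a GPLVM in the paper), and the noise and likelihood widths are illustrative:

```python
import math
import random

def particle_filter_step(particles, observation, decode, noise=0.05):
    """One predict-weigh-resample step run entirely in a
    low-dimensional latent space; `decode` stands in for the smooth
    latent-to-observation mapping. All constants are illustrative."""
    moved = [p + random.gauss(0.0, noise) for p in particles]   # diffuse
    weights = [math.exp(-((decode(p) - observation) ** 2) / 0.01)
               for p in moved]                                  # likelihood
    total = sum(weights)
    return random.choices(moved, weights=[w / total for w in weights],
                          k=len(moved))                         # resample

random.seed(0)
particles = [random.uniform(-1.0, 1.0) for _ in range(500)]
for _ in range(10):
    particles = particle_filter_step(particles, math.sin(0.5), math.sin)
estimate = sum(particles) / len(particles)
```

Because the filter runs over one latent dimension rather than the full pose space, a few hundred particles suffice where the high-dimensional problem would need vastly more.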


Weak references are references that do not prevent the object they point to from being garbage collected. Most realistic languages, including Java, SML/NJ, and OCaml to name a few, have some facility for programming with weak references. Weak references are used in implementing idioms like memoizing functions and hash-consing in order to avoid potential memory leaks. However, the semantics of weak references in many languages are not clearly specified. Without a formal semantics for weak references it becomes impossible to prove the correctness of implementations making use of this feature. Previous work by Hallett and Kfoury extends λgc, a language for modeling garbage collection, to λweak, a similar language with weak references. Using this previously formalized semantics for weak references, we consider two issues related to the well-behavedness of programs. Firstly, we provide a new, simpler proof of the well-behavedness of the syntactically restricted fragment of λweak defined previously. Secondly, we give a natural semantic criterion for well-behavedness much broader than the syntactic restriction, which is useful as a principle for programming with weak references. Furthermore, we extend a result proved previously for λgc that allows one to use type inference to collect some reachable objects that are never used. We prove that this result holds for our language, and we extend it to allow the collection of weakly-referenced reachable garbage without incurring the computational overhead sometimes associated with collecting weak bindings (e.g. the need to recompute a memoized function). Lastly, we extend the semantic framework to model the key/value weak references found in Haskell, and we prove that this semantics is equivalent to a simpler one, owing to the lack of side effects in our language.
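The hash-consing idiom the abstract mentions is easy to demonstrate with Python's standard `weakref` module: the interning table holds only weak references, so entries vanish once no strong reference to the object survives, avoiding the memory leak a plain dictionary would cause.

```python
import weakref

class Interned:
    """Hash-consing sketch: at most one live Interned object per value.
    The table holds weak references, so an entry whose object is no
    longer strongly referenced elsewhere can be garbage collected
    instead of leaking."""
    _table = weakref.WeakValueDictionary()

    def __new__(cls, value):
        obj = cls._table.get(value)
        if obj is None:              # not interned yet: build and record it
            obj = super().__new__(cls)
            obj.value = value
            cls._table[value] = obj
        return obj

a = Interned(42)
b = Interned(42)   # the table returns the existing object
```

This is exactly the construction whose correctness depends on a precise weak-reference semantics: with ordinary (strong) references the table itself would keep every value alive forever.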


There is a general presumption in the literature and among policymakers that immigrant remittances play the same role in economic development as foreign direct investment and other capital flows, but this is an open question. We develop a model of remittances based on the economics of the family that implies that remittances are not profit-driven, but are compensatory transfers, and should have a negative correlation with GDP growth. This is in contrast to the positive correlation of profit-driven capital flows with GDP growth. We test this implication of our model using a new panel data set on remittances and find a robust negative correlation between remittances and GDP growth. This indicates that remittances may not be intended to serve as a source of capital for economic development. © 2005 International Monetary Fund.


Exposing individuals to an isolated component (a prime) of a prior event alleviates its forgetting. Two experiments with 120 human infants between 3 and 18 months of age determined the minimum duration of a prime that can reactivate a forgotten memory and how long the reactivated memory persists. Infants learned an operant task, forgot it, were exposed to the prime, and later were tested for renewed retention. In Experiment 1, the minimum duration of an effective prime decreased logarithmically with age, but was always longer than the duration of a mere glance. In Experiment 2, the reactivated memory was forgotten twice as fast after a minimum-duration prime as after a full-length one, irrespective of priming delay and infant age. These data reveal that the minimum effective prime duration psychophysically equates the accessibility of forgotten memories. We conclude that priming is perceptually based with effects that are organized on a ratio (log) scale.


The effect of concentrating semi-volatile aerosols using a water-condensation technology was investigated using the Versatile Aerosol Concentration Enrichment System (VACES) and the Aerodyne Aerosol Mass Spectrometer (AMS) during measurements of ambient aerosol in Pittsburgh, PA. It was found that the shape of the sulfate mass-weighted size distribution was approximately preserved during passage through the concentrator for all the experiments performed, with a mass enhancement factor of about 10-20 depending on the experiment. The size distributions of organics, ammonium and nitrate were preserved on a relatively clean day (sulfate concentration around 7 μg/m3), while during more polluted conditions the concentration of these compounds, especially nitrate, was increased at small sizes after passage through the concentrator. The amount of the extra material, however, is rather small in these experiments: between 2.4% and 7.5% of the final concentrated PM mass is due to "artifact" condensation. An analysis of thermodynamic processes in the concentrator indicates that the extra particle material detected can be explained by redistribution of gas-phase material to the aerosol phase in the concentrator. The analysis shows that the condensation of extra material is expected to be larger for water-soluble semi-volatile material, such as nitrate, which agrees with the observations. The analysis also shows that artifact formation of nitrate will be more pronounced in ammonia-limited conditions and virtually undetectable in ammonia-rich conditions. © 2004 Elsevier Ltd. All rights reserved.


Attempts were made to measure the fraction of elemental carbon (EC) in ultrafine aerosol by modifying an Ambient Carbonaceous Particulate Monitor (ACPM, R&P 5400). The main modification consisted of placing a quartz filter in one of the sampling lines of this dual-channel instrument. With the filter, all aerosol, including the EC it contains, is collected, while in the other line of the instrument the standard impactor samples only particles larger than 0.14 μm. The fraction of EC in particles smaller than 0.14 μm is derived from the difference in concentration as measured via the two sampling lines. Measurements with the modified instrument were made at a suburban site in Amsterdam, The Netherlands. An apparent adsorption artefact, which could not be eliminated by the use of denuders, precluded meaningful evaluation of the data for total carbon. Blanks in the measurements of EC were negligible and the EC data were hence further evaluated. We found that the concentration of EC obtained via the channel with the impactor was systematically lower than that in the filter line. The average ratio of the concentrations was close to 0.6, which indicates that approximately 40% of the EC was in particles smaller than 0.14 μm. Alternative explanations for the difference in the concentration in the two sampling lines could be excluded, such as a difference in the extent of oxidation: such a difference should be a function of loading, which was not the case. Another reason for the difference could be that less material is collected by the impactor due to rebound, but such bounce of aerosol is very unlikely in The Netherlands due to co-deposition of abundant deliquesced and thus viscous ammonium compounds. The conclusion is that a further modification to assess the true fraction of ultrafine EC, by installing an impactor with cut-off diameter at 0.1 μm, would be worth pursuing. © 2005 Elsevier Ltd. All rights reserved.
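The dual-channel arithmetic behind the 40% figure is a one-liner; the sketch below just restates the subtraction described in the abstract, with the reported concentration ratio of about 0.6 as input:

```python
def ultrafine_fraction(ec_filter_line, ec_impactor_line):
    """Fraction of elemental carbon in particles below the impactor
    cut-off: the filter line collects EC at all sizes, while the
    impactor line collects only particles larger than 0.14 um, so
    the ultrafine fraction is one minus their concentration ratio."""
    return 1.0 - ec_impactor_line / ec_filter_line

# Reported average ratio of ~0.6 implies ~40% ultrafine EC.
f = ultrafine_fraction(1.0, 0.6)
```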


BACKGROUND: Most individuals infected with Mycobacterium tuberculosis do not develop tuberculosis (TB) and can be regarded as being protected by an appropriate immune response to the infection. The characterization of the immune responses of individuals with latent TB may thus be helpful in the definition of correlates of protection and the development of new vaccine strategies. The highly protective antigen heparin-binding hemagglutinin (HBHA) induces strong interferon (IFN)-γ responses during latent, but not active, TB. Because of the recently recognized importance of CD8(+) T lymphocytes in anti-TB immunity, we characterized the CD8(+) T lymphocyte responses to HBHA in subjects with latent TB. RESULTS: HBHA-specific CD8(+) T lymphocytes expressed memory cell markers and synthesized HBHA-specific IFN-γ. They also restricted mycobacterial growth and expressed cytotoxicity by a granule-dependent mechanism. This activity was associated with the intracellular expression of HBHA-induced perforin. Surprisingly, the perforin-producing CD8(+) T lymphocytes were distinct from the IFN-γ-producing CD8(+) T lymphocytes. CONCLUSION: During latent TB, the HBHA-specific CD8(+) T lymphocyte population expresses all 3 effector functions associated with CD8(+) T lymphocyte-mediated protective immune mechanisms, which supports the notion that HBHA may be protective in humans and suggests that markers of HBHA-specific CD8(+) T lymphocyte responses may be useful in the monitoring of protection.


The growth of computer power allows the solution of complex problems related to compressible flow, which is an important class of problems in modern day CFD. Over the last 15 years or so, many review works on CFD have been published. This book concerns both mathematical and numerical methods for compressible flow. In particular, it provides a clear-cut introduction as well as an in-depth treatment of modern numerical methods in CFD. This book is organised in two parts. The first part consists of Chapters 1 and 2, and is mainly devoted to theoretical discussions and results. Chapter 1 concerns fundamental physical concepts and theoretical results in gas dynamics. Chapter 2 describes the basic mathematical theory of compressible flow using the inviscid Euler equations and the viscous Navier–Stokes equations. Existence and uniqueness results are also included. The second part consists of modern numerical methods for the Euler and Navier–Stokes equations. Chapter 3 is devoted entirely to the finite volume method for the numerical solution of the Euler equations and covers fundamental concepts such as order of numerical schemes, stability and high-order schemes. The finite volume method is illustrated for 1-D as well as multidimensional Euler equations. Chapter 4 covers the theory of the finite element method and its application to compressible flow. A section is devoted to the combined finite volume–finite element method, and its background theory is also included. Throughout the book numerous examples have been included to demonstrate the numerical methods. The book provides a good insight into the numerical schemes, theoretical analysis, and validation of test problems. It is a very useful reference for applied mathematicians, numerical analysts, and practising engineers. It is also an important reference for postgraduate researchers in the field of scientific computing and CFD.
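The finite volume concepts covered in Chapter 3 can be illustrated with a minimal example (not taken from the book): a Lax-Friedrichs update for 1-D linear advection on a periodic grid, the simplest first-order scheme of the kind the chapter builds on, and one that conserves the sum of the cell averages exactly.

```python
import numpy as np

def lax_friedrichs_step(u, a, dx, dt):
    """One Lax-Friedrichs finite-volume step for linear advection
    u_t + a*u_x = 0 on a periodic 1-D grid: each cell average is
    updated from numerical fluxes at its two interfaces."""
    up = np.roll(u, -1)   # right neighbour (periodic)
    um = np.roll(u, 1)    # left neighbour (periodic)
    return 0.5 * (up + um) - a * dt / (2.0 * dx) * (up - um)

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u = np.sin(2.0 * np.pi * x)            # smooth initial cell averages
dx, dt, a = x[1] - x[0], 0.005, 1.0    # CFL number a*dt/dx = 0.5, stable
for _ in range(40):
    u = lax_friedrichs_step(u, a, dx, dt)
```

The CFL restriction and the scheme's visible numerical diffusion are exactly the stability and order-of-accuracy notions the chapter treats in full generality.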