993 results for Naval Research Laboratory (U.S.)
Abstract:
A probabilistic, nonlinear supervised learning model is proposed: the Specialized Mappings Architecture (SMA). The SMA employs a set of forward mapping functions that are estimated automatically from training data. Each specialized function maps certain domains of the input space (e.g., image features) onto the output space (e.g., articulated body parameters). The SMA can model ambiguous, one-to-many mappings that may yield multiple valid output hypotheses. Once learned, the mapping functions generate a set of output hypotheses for a given input via a statistical inference procedure. The SMA inference procedure incorporates an inverse mapping, or feedback, function in evaluating the likelihood of each hypothesis. Possible feedback functions include computer graphics rendering routines that can generate images for given hypotheses. The SMA employs a variant of the Expectation-Maximization algorithm for simultaneous learning of the specialized domains and the mapping functions, together with approximate strategies for inference. The framework is demonstrated in a computer vision system that can estimate the articulated pose parameters of a human body or hand, given silhouettes from a single image. The accuracy and stability of the SMA are also tested using synthetic images of human bodies and hands, where ground truth is known.
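To make the inference procedure concrete, here is a minimal sketch, assuming hypothetical function names and a simple Gaussian observation likelihood (the paper's probabilistic model is richer): each specialized mapping proposes an output hypothesis, and a feedback function, e.g. a rendering routine, maps each hypothesis back to feature space so it can be scored against the observed input.

```python
import numpy as np

# Minimal sketch of SMA-style inference (all names hypothetical).
# Each specialized function maps image features to body-pose parameters;
# a feedback function maps a pose hypothesis back to feature space so
# its likelihood can be scored against the observed input x.

def infer_pose(x, specialized_maps, feedback, sigma=1.0):
    """Return pose hypotheses from each specialized map, ranked by the
    likelihood of the feedback reconstruction of the input x."""
    hypotheses = [phi(x) for phi in specialized_maps]    # one hypothesis per map
    scores = []
    for h in hypotheses:
        x_hat = feedback(h)                              # e.g. a rendering routine
        err = np.sum((x - x_hat) ** 2)
        scores.append(np.exp(-err / (2.0 * sigma ** 2))) # Gaussian likelihood
    order = np.argsort(scores)[::-1]                     # best hypothesis first
    return [(hypotheses[i], scores[i]) for i in order]
```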
Abstract:
This paper introduces BoostMap, a method that can significantly reduce retrieval time in image and video database systems that employ computationally expensive distance measures, metric or non-metric. Database and query objects are embedded into a Euclidean space, in which similarities can be rapidly measured using a weighted Manhattan distance. Embedding construction is formulated as a machine learning task, where AdaBoost is used to combine many simple, 1D embeddings into a multidimensional embedding that preserves a significant amount of the proximity structure in the original space. Performance is evaluated in a hand pose estimation system, and a dynamic gesture recognition system, where the proposed method is used to retrieve approximate nearest neighbors under expensive image and video similarity measures. In both systems, BoostMap significantly increases efficiency, with minimal losses in accuracy. Moreover, the experiments indicate that BoostMap compares favorably with existing embedding methods that have been employed in computer vision and database applications, i.e., FastMap and Bourgain embeddings.
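The following sketch illustrates the two building blocks the abstract describes, under the common reference-object construction of 1D embeddings (F_r(x) = d(x, r)); the function names and the filter-and-refine usage note are illustrative assumptions, not the authors' API.

```python
import numpy as np

# Hedged sketch of the BoostMap building blocks (names are illustrative).
# A simple 1D embedding uses a reference object r: F_r(x) = d(x, r).
# AdaBoost-selected 1D embeddings and their weights combine into a
# multidimensional embedding compared with a weighted Manhattan distance.

def embed(x, references, d):
    """Map object x to a vector of distances to the reference objects."""
    return np.array([d(x, r) for r in references])

def weighted_l1(u, v, w):
    """Weighted Manhattan distance used in the embedded space."""
    return np.sum(w * np.abs(u - v))

# Usage (filter-and-refine): precompute embed() for all database objects,
# rank them by weighted_l1 against the embedded query, then re-rank a
# short candidate list with the expensive original distance d.
```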
Abstract:
BoostMap is a recently proposed method for efficient approximate nearest neighbor retrieval in arbitrary non-Euclidean spaces with computationally expensive and possibly non-metric distance measures. Database and query objects are embedded into a Euclidean space, in which similarities can be rapidly measured using a weighted Manhattan distance. The key idea is formulating embedding construction as a machine learning task, where AdaBoost is used to combine simple, 1D embeddings into a multidimensional embedding that preserves a large amount of the proximity structure of the original space. This paper demonstrates that, using the machine learning formulation of BoostMap, we can optimize embeddings for indexing and classification, in ways that are not possible with existing alternatives for constructive embeddings, and without additional costs in retrieval time. First, we show how to construct embeddings that are query-sensitive, in the sense that they yield a different distance measure for different queries, so as to improve nearest neighbor retrieval accuracy for each query. Second, we show how to optimize embeddings for nearest neighbor classification tasks, by tuning them to approximate a parameter space distance measure, instead of the original feature-based distance measure.
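A hedged sketch of the query-sensitive idea: the weights of the Manhattan distance become a function of the query, so each query effectively gets its own distance measure. The weight function below is hypothetical, shown only to make the mechanism concrete.

```python
import numpy as np

# Illustrative sketch of a query-sensitive embedding distance (a
# simplification, not the paper's exact construction): the weight applied
# to each embedding dimension depends on the query, so different queries
# induce different distance measures in the embedded space.

def query_sensitive_l1(q_emb, x_emb, weight_fn):
    """Weighted L1 distance whose weights depend on the query embedding."""
    w = weight_fn(q_emb)                 # per-query, per-dimension weights
    return np.sum(w * np.abs(q_emb - x_emb))

def example_weights(q_emb):
    """Hypothetical rule: emphasize dimensions where the query lies close
    to the corresponding reference object (small embedded coordinate)."""
    return 1.0 / (1.0 + q_emb)
```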
Abstract:
We introduce a viewpoint-invariant representation of moving object trajectories that can be used in video database applications. It is assumed that trajectories lie on a surface that can be locally approximated with a plane. Raw trajectory data are first locally approximated with a cubic spline via least-squares fitting. For each sampled point of the obtained curve, a projective invariant feature is computed using a small number of points in its neighborhood. The resulting sequence of invariant features computed along the entire trajectory forms the view-invariant descriptor of the trajectory itself. Time parametrization is exploited to compute cross ratios without the ambiguity due to point ordering. Similarity between descriptors of different trajectories is measured with a distance that takes into account the statistical properties of the cross ratio and its symmetry with respect to the point at infinity. In experiments, an overall correct classification rate of about 95% was obtained on a dataset of 58 trajectories of players in soccer video, and an overall correct classification rate of about 80% was obtained on matching partial segments of trajectories collected from two overlapping views of outdoor scenes with moving people and cars.
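As a concrete illustration of the kind of projective invariant involved, the sketch below computes one standard cross ratio of five coplanar points in homogeneous coordinates; the paper's exact construction over spline-resampled trajectory neighborhoods may differ in detail, and, as the abstract notes, the time parametrization fixes the point ordering.

```python
import numpy as np

# One standard projective invariant of five coplanar points (a sketch of
# the kind of feature the descriptor builds on, not necessarily the
# paper's exact formula). Points are 2-D trajectory samples lifted to
# homogeneous coordinates: p = np.array([x, y, 1.0]).

def tri(a, b, c):
    """Determinant of three points in homogeneous coordinates."""
    return np.linalg.det(np.stack([a, b, c]))

def five_point_invariant(p1, p2, p3, p4, p5):
    """Cross ratio of five coplanar points; unchanged by any projective
    transformation, since the transform's determinant and the points'
    scale factors cancel between numerator and denominator."""
    return (tri(p1, p2, p3) * tri(p1, p4, p5)) / (
            tri(p1, p2, p4) * tri(p1, p3, p5))

# Time order of the trajectory samples fixes the point ordering,
# removing the usual cross-ratio ambiguity under permutations.
```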
Abstract:
The recognition of 3-D objects from sequences of their 2-D views is modeled by a family of self-organizing neural architectures, called VIEWNET, that use View Information Encoded With NETworks. VIEWNET incorporates a preprocessor that generates a compressed but 2-D invariant representation of an image, a supervised incremental learning system that classifies the preprocessed representations into 2-D view categories whose outputs are combined into 3-D invariant object categories, and a working memory that makes a 3-D object prediction by accumulating evidence from 3-D object category nodes as multiple 2-D views are experienced. The simplest VIEWNET achieves high recognition scores without the need to explicitly code the temporal order of 2-D views in working memory. Working memories are also discussed that save memory resources by implicitly coding temporal order in terms of the relative activity of 2-D view category nodes, rather than as explicit 2-D view transitions. Variants of the VIEWNET architecture may also be used for scene understanding by using a preprocessor and classifier that can determine both What objects are in a scene and Where they are located. The present VIEWNET preprocessor includes the CORT-X 2 filter, which discounts the illuminant, regularizes and completes figural boundaries, and suppresses image noise. This boundary segmentation is rendered invariant under 2-D translation, rotation, and dilation by use of a log-polar transform. The invariant spectra undergo Gaussian coarse coding to further reduce noise and 3-D foreshortening effects, and to increase generalization. These compressed codes are input into the classifier, a supervised learning system based on the fuzzy ARTMAP algorithm. Fuzzy ARTMAP learns 2-D view categories that are invariant under 2-D image translation, rotation, and dilation as well as 3-D image transformations that do not cause a predictive error. Evidence from sequences of 2-D view categories converges at 3-D object nodes that generate a response invariant under changes of 2-D view. These 3-D object nodes input to a working memory that accumulates evidence over time to improve object recognition. In the simplest working memory, each occurrence (nonoccurrence) of a 2-D view category increases (decreases) the corresponding node's activity in working memory. The maximally active node is used to predict the 3-D object. Recognition is studied with noisy and clean images using slow and fast learning. Slow learning at the fuzzy ARTMAP map field is adapted to learn the conditional probability of the 3-D object given the selected 2-D view category. VIEWNET is demonstrated on an MIT Lincoln Laboratory database of 128x128 2-D views of aircraft with and without additive noise. A recognition rate of up to 90% is achieved with one 2-D view and up to 98.5% with three 2-D views. The properties of 2-D view and 3-D object category nodes are compared with those of cells in monkey inferotemporal cortex.
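The simplest working memory described above lends itself to a compact sketch (parameters and names are illustrative, not from the paper): each observed 2-D view category raises the activity of its 3-D object node, unsupported activity decays, and the maximally active node gives the prediction.

```python
import numpy as np

# Illustrative sketch of VIEWNET's simplest working memory: occurrence of
# a 2-D view category increases its 3-D object node's activity, while
# nonoccurrence lets activity decay; the maximally active node predicts
# the object. The gain and decay constants here are arbitrary choices.

def predict_object(view_category_seq, view_to_object, n_objects,
                   gain=1.0, decay=0.2):
    """Accumulate evidence over a sequence of 2-D view categories and
    return the index of the predicted 3-D object."""
    activity = np.zeros(n_objects)
    for v in view_category_seq:
        activity *= (1.0 - decay)             # unsupported nodes decay
        activity[view_to_object[v]] += gain   # observed view adds evidence
    return int(np.argmax(activity))
```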
Abstract:
Illusory contours can be induced along directions approximately collinear to edges or approximately perpendicular to the ends of lines. Using a rating scale procedure we explored the relation between the two types of inducers by systematically varying the thickness of inducing elements to produce varying amounts of "edge-like" or "line-like" induction. Inducers for our illusory figures consisted of concentric rings with arcs missing. Observers judged the clarity and brightness of illusory figures as the number of arcs, their thicknesses, and their spacings were parametrically varied. Degree of clarity and amount of induced brightness were both found to be inverted-U functions of the number of arcs. These results mandate that any valid model of illusory contour formation must account for interference effects between parallel lines, or between those neural units responsible for completion of boundary signals in directions perpendicular to the ends of thin lines. Line width was found to have an effect on both clarity and brightness, a finding inconsistent with models that employ only completion perpendicular to inducer orientation.
Abstract:
A neural model is described of how adaptively timed reinforcement learning occurs. The adaptive timing circuit is suggested to exist in the hippocampus, and to involve convergence of dentate granule cells on CA3 pyramidal cells, and NMDA receptors. This circuit forms part of a model neural system for the coordinated control of recognition learning, reinforcement learning, and motor learning, whose properties clarify how an animal can learn to acquire a delayed reward. Behavioral and neural data are summarized in support of each processing stage of the system. The relevant anatomical sites are in thalamus, neocortex, hippocampus, hypothalamus, amygdala, and cerebellum. Cerebellar influences on motor learning are distinguished from hippocampal influences on adaptive timing of reinforcement learning. The model simulates how damage to the hippocampal formation disrupts adaptive timing, eliminates attentional blocking, and causes symptoms of medial temporal amnesia. It suggests how normal acquisition of subcortical emotional conditioning can occur after cortical ablation, even though extinction of emotional conditioning is retarded by cortical ablation. The model simulates how increasing the duration of an unconditioned stimulus increases the amplitude of emotional conditioning, but does not change adaptive timing; and how an increase in the intensity of a conditioned stimulus "speeds up the clock", but an increase in the intensity of an unconditioned stimulus does not. Computer simulations of the model fit parametric conditioning data, including a Weber law property and an inverted-U property. Both primary and secondary adaptively timed conditioning are simulated, as are data concerning conditioning using multiple interstimulus intervals (ISIs), gradually or abruptly changing ISIs, partial reinforcement, and multiple stimuli that lead to time-averaging of responses. Neurobiologically testable predictions are made to facilitate further tests of the model.
Abstract:
An analysis of the reset of visual cortical circuits responsible for the binding or segmentation of visual features into coherent visual forms yields a model that explains properties of visual persistence. The reset mechanisms prevent massive smearing of visual percepts in response to rapidly moving images. The model simulates relationships among psychophysical data showing inverse relations of persistence to flash luminance and duration, greater persistence of illusory contours than real contours, a U-shaped temporal function for persistence of illusory contours, a reduction of persistence due to adaptation with a stimulus of like orientation, an increase of persistence due to adaptation with a stimulus of perpendicular orientation, and an increase of persistence with spatial separation of a masking stimulus. The model suggests that a combination of habituative, opponent, and endstopping mechanisms prevents smearing and limits persistence. Earlier work with the model has analyzed data about boundary formation, texture segregation, shape-from-shading, and figure-ground separation. Thus, several types of data support each model mechanism, and new predictions are made.
Abstract:
The research project takes place within the technology acceptability framework, which seeks to understand the use made of new technologies, and concentrates more specifically on the factors that influence the acceptance of, and intention to use, multi-touch devices (MTD). Why be interested in MTD? Nowadays, this technology is used in all kinds of human activities, e.g. leisure, study or work activities (Rogowski and Saeed, 2012). However, handling and data entry by means of gestures on a multi-touch-sensitive screen impose a number of constraints and consequences that remain mostly unknown (Park and Han, 2013). Currently, little research in ergonomic psychology has addressed the implications of these new human-computer interactions for task fulfillment. This research project aims to investigate the cognitive, sensorimotor and motivational processes taking place during the use of these devices. The project will analyze the influence of the use of gestures and of the type of gesture used, simple or complex (Lao, Heng, Zhang, Ling, and Wang, 2009), as well as of the feeling of personal self-efficacy in the use of MTD, on task engagement, attention mechanisms and perceived disorientation (Chen, Linen, Yen, and Linn, 2011) when confronted with MTD. For that purpose, the various above-mentioned concepts will be measured in a usability laboratory (U-Lab) with self-reported methods (questionnaires) and objective indicators (physiological indicators, eye tracking). Overall, the research aims to understand the processes at stake, as well as the advantages and drawbacks of this new technology, in order to favor better compatibility and adequacy between gestures, executed tasks and MTD. The conclusions will allow recommendations for the use of MTD in specific contexts (e.g. learning contexts).
Abstract:
BACKGROUND: Acute exposure to high altitude stimulates free radical formation in lowlanders, yet whether this persists during chronic exposure in healthy, well-adapted highlanders and in maladapted highlanders suffering from chronic mountain sickness (CMS) remains to be established. METHODS: Oxidative-nitrosative stress [ascorbate radical (A•-), electron paramagnetic resonance spectroscopy, and nitrite (NO2-), ozone-based chemiluminescence] was assessed in venous blood of 25 male highlanders living at 3,600 m with (n = 13, CMS+) and without (n = 12, CMS-) CMS. Twelve age- and activity-matched healthy male lowlanders were examined at sea level and during acute hypoxia. We also measured flow-mediated dilatation (FMD), arterial stiffness (AIx-75) and carotid intima-media thickness (IMT). RESULTS: Compared to normoxic lowlanders, oxidative-nitrosative stress was moderately increased in CMS- (P < 0.05), as indicated by elevated A•- [3,191 ± 457 vs. 2,640 ± 445 arbitrary units (AU)] and lower NO2- (206 ± 55 vs. 420 ± 128 nmol/L), whereas vascular function remained preserved. This was comparable to that observed during acute hypoxia in lowlanders, in whom vascular dysfunction is typically observed. In contrast, this response was markedly exaggerated in CMS+ (A•-: 3,765 ± 429 AU and NO2-: 148 ± 50 nmol/L) compared to both CMS- and lowlanders (P < 0.05). This was associated with systemic vascular dysfunction, as indicated by lower (P < 0.05 vs. CMS-) FMD (4.2 ± 0.7 vs. 7.6 ± 1.7%) and increased AIx-75 (23 ± 8 vs. 12 ± 7%) and carotid IMT (714 ± 127 vs. 588 ± 94 µm). CONCLUSIONS: Healthy highlanders display a moderate sustained elevation in oxidative-nitrosative stress that, unlike the equivalent increase evoked by acute hypoxia in healthy lowlanders, failed to affect vascular function. Its more marked elevation in patients with CMS may contribute to systemic vascular dysfunction. Clinical Trials Gov Registration # NCT01182792.
Abstract:
OBJECTIVE: To determine the influence of nebulizer types and nebulization modes on bronchodilator delivery in a mechanically ventilated pediatric lung model. DESIGN: In vitro, laboratory study. SETTING: Research laboratory of a university hospital. INTERVENTIONS: Using albuterol as a marker, three nebulizer types (jet nebulizer, ultrasonic nebulizer, and vibrating-mesh nebulizer) were tested in three nebulization modes in a nonhumidified bench model mimicking the ventilatory pattern of a 10-kg infant. The amounts of albuterol deposited on the inspiratory filters (inhaled drug) at the end of the endotracheal tube, on the expiratory filters, and remaining in the nebulizers or in the ventilator circuit were determined. Particle size distribution of the nebulizers was also measured. MEASUREMENTS AND MAIN RESULTS: The inhaled drug was 2.8% ± 0.5% for the jet nebulizer, 10.5% ± 2.3% for the ultrasonic nebulizer, and 5.4% ± 2.7% for the vibrating-mesh nebulizer in intermittent nebulization during the inspiratory phase (p < 0.01). The most efficient nebulizer was the vibrating-mesh nebulizer in continuous nebulization (13.3% ± 4.6%, p < 0.01). Depending on the nebulizers, a variable but important part of albuterol was observed as remaining in the nebulizers (jet and ultrasonic nebulizers), or being expired or lost in the ventilator circuit (all nebulizers). Only small particles (range 2.39-2.70 µm) reached the end of the endotracheal tube. CONCLUSIONS: Important differences between nebulizer types and nebulization modes were seen for albuterol deposition at the end of the endotracheal tube in an in vitro pediatric ventilator-lung model. New aerosol devices, such as ultrasonic and vibrating-mesh nebulizers, were more efficient than the jet nebulizer.
Abstract:
Genetic testing technologies are rapidly moving from the research laboratory to the market place. Very little scholarship considers the implications of private genetic testing for a public health care system such as Canada’s. It is critical to consider how and if these tests should be marketed to, and purchased by, the public. It is also imperative to evaluate the extent to which genetic tests are or should be included in Canada’s public health care system, and the impact of allowing a two-tiered system for genetic testing. A series of threshold tests are presented as ways of clarifying whether a genetic test is morally appropriate, effective and safe, efficient and appropriate for public funding and whether private purchase poses special problems and requires further regulation. These thresholds also identify the research questions around which professional, public and policy debate must be sustained: What is a morally acceptable goal for genetic services? What are the appropriate benefits? What are the risks? When is it acceptable that services are not funded under health care? And how can the harms of private access be managed?
Abstract:
Xylanases, with hydrolytic activity on xylan, one of the hemicellulosic materials present in plant cell walls, were identified long ago, and the applicability of this enzyme is constantly growing. All these applications, especially in the pulp and paper industries, require novel enzymes. Microbial xylanases have been extensively documented; however, none meets all the required characteristics: high production, high pH and temperature optima, good stability under these conditions, and low associated cellulase and protease production. The present study analyses various facets of xylanase biotechnology, with emphasis on bacterial xylanases. Fungal xylanases suffer from low pH values for both enzyme activity and growth; moreover, the associated production of cellulases at significant levels makes fungal xylanases less suitable for application in the paper and pulp industries. Bacillus SSP-34, selected from 200 isolates, has a xylan-catabolizing nature clearly distinct from earlier reports. Its stability at high temperatures and pH values, along with its pH and temperature optima, renders Bacillus SSP-34 xylanase more suitable than many previously reported enzymes for application in the pulp and paper industries. Bacillus SSP-34 is an alkalophilic, thermotolerant bacterium which, under the optimal cultural conditions mentioned above, can produce 2.5 times more xylanase than in the basal medium. A xylan concentration of 0.5% in the medium was found to be the best carbon source, resulting in 366 IU/ml of xylanase activity. This induction was subject to catabolite repression by glucose. Xylose was a good inducer of xylanase production. The combination of yeast extract and peptone, selected from several nitrogen sources, resulted in the highest enzyme production (379 ± 0.2 IU/ml) at the optimum final concentration of 0.5%. When all the cultural and nutritional parameters were combined, the modified medium resulted in a xylanase activity of 506 IU/ml, five-fold higher than the basal medium. A combination of purification techniques (ultrafiltration, ammonium sulphate fractionation, DEAE-Sepharose anion exchange chromatography, CM-Sephadex cation exchange chromatography and gel permeation chromatography) yielded purified xylanase with a specific activity of 1723 U/mg protein at 33.3% yield. The enzyme had a molecular weight of 20-22 kDa. The Km of the purified xylanase was 6.5 mg of oat spelts xylan per ml and the Vmax was 1233 µmol/min/mg protein. Bacillus SSP-34 xylanase increased ISO brightness from 41.1% to 48.5%. The enzyme hydrolyses xylan in the endo fashion. Thus the organism Bacillus SSP-34 has interesting biotechnological and physiological aspects, and the SSP-34 xylanase, having the desired characteristics, appears well suited for application in the paper and pulp industries.
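For readers who want to relate the reported kinetic constants, here is a small worked example of the Michaelis-Menten rate law they parameterize; the substrate concentration chosen is arbitrary and not from the study.

```python
# Worked example using the reported kinetic constants in the
# Michaelis-Menten equation v = Vmax * [S] / (Km + [S]).

Km = 6.5        # mg oat-spelt xylan per ml (reported)
Vmax = 1233.0   # µmol / min / mg protein (reported)

def rate(s):
    """Initial hydrolysis rate at substrate concentration s (mg/ml)."""
    return Vmax * s / (Km + s)

print(rate(6.5))   # at [S] = Km the rate is Vmax / 2 = 616.5
```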
Abstract:
The continually growing worldwide hazardous waste problem is receiving much attention lately. The development of cost-effective yet efficient methods of decontamination is vital to our success in solving this problem. Bioremediation using white rot fungi, a group of basidiomycetes characterized by their ability to degrade lignin by producing extracellular LiP, MnP and laccase, has come to be recognized globally, and is described in detail in Chapter 1. These features provide them with tremendous advantages over other micro-organisms. Chapter 2 deals with the isolation and screening of micro-organisms producing lignin-degrading enzymes from a mangrove area. Marine microbes of the mangrove area have a great capacity to tolerate wide fluctuations of salinity. Primary and secondary screening of halophilic microbes producing lignin-degrading enzymes from the mangrove area resulted in the selection of two fungal strains from among 75 bacteria and 26 fungi. The two fungi, SIP 10 and SIP 11, were identified as a Penicillium sp. and an Aspergillus sp. respectively, belonging to the class Ascomycetes. The specific activity of the purified LiP was 7923 U/mg protein; the purification fold was 24.07 and the yield was 18.7%. SDS-PAGE of LiP showed that it was a low-molecular-weight protein of 29 kDa. Zymogram analysis using crystal violet dye as substrate confirmed the peroxidase nature of the purified LiP. The ability of purified LiP to decolorize different synthetic dyes was studied. Among the dyes studied, crystal violet, a triphenylmethane dye, was decolorized to the greatest extent.
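As a side note on the reported purification figures, the standard purification-table arithmetic behind "specific activity", "fold" and "yield" can be sketched as follows; the implied crude-extract value is back-calculated purely for illustration.

```python
# Sketch of standard purification-table arithmetic behind the reported
# figures (7923 U/mg, 24.07-fold, 18.7% yield). The crude-extract value
# below is back-calculated for illustration, not taken from the study.

def purification_fold(sa_purified, sa_crude):
    """Fold = specific activity after / before purification."""
    return sa_purified / sa_crude

def percent_yield(total_u_purified, total_u_crude):
    """Yield = recovered total activity as a percentage of the start."""
    return 100.0 * total_u_purified / total_u_crude

sa_crude = 7923.0 / 24.07                    # implied crude specific activity, ~329 U/mg
print(purification_fold(7923.0, sa_crude))   # recovers the reported 24.07-fold
# A yield of 18.7% means the purified preparation retained 18.7% of the
# total LiP activity present in the crude extract.
```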