962 results for variable utility possibilities set
Abstract:
The utility of canonical correlation analysis (CCA) for domain adaptation (DA) in the context of multi-view head pose estimation is examined in this work. We consider the three problems studied in [1], where different DA approaches are explored to transfer head pose-related knowledge from an extensively labeled source dataset to a sparsely labeled target set whose attributes differ vastly from the source. CCA is found to benefit DA for all three problems, and the use of a covariance profile-based diagonality score (DS) also improves classification performance with respect to a nearest neighbor (NN) classifier.
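For readers unfamiliar with the pipeline, a minimal sketch of the general idea (not the paper's exact method, and without its diagonality score) is shown below: paired source/target features are projected into a shared CCA subspace, and a nearest-neighbour classifier is trained there. The data shapes, random features, and labels are hypothetical placeholders.

```python
# Minimal sketch: CCA-based alignment of two views followed by 1-NN classification.
# Data shapes and random features are placeholders, not the paper's protocol.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n, d_src, d_tgt, n_classes = 200, 64, 48, 8   # hypothetical sizes
labels = rng.integers(0, n_classes, size=n)
X_src = rng.normal(size=(n, d_src))           # stand-in for source-view features
X_tgt = rng.normal(size=(n, d_tgt))           # stand-in for target-view features

# Learn a shared subspace from paired source/target samples.
cca = CCA(n_components=10)
cca.fit(X_src, X_tgt)
Z_src, Z_tgt = cca.transform(X_src, X_tgt)

# Train a nearest-neighbour classifier on projected source data and score it
# on projected target data. On random data the accuracy is chance-level;
# the point here is only the shape of the pipeline.
clf = KNeighborsClassifier(n_neighbors=1).fit(Z_src, labels)
print("NN accuracy in the shared CCA space:", clf.score(Z_tgt, labels))
```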
Abstract:
A Nd:glass regenerative amplifier has been set up to generate a pumping pulse with variable pulse width for an optical parametric chirped-pulse amplification (OPCPA) laser system. Each pulse of the pulse train from a cw self-mode-locked femtosecond Ti:sapphire oscillator is stretched to approximately 300 ps at 1062 nm, then split equally and injected into a nonlinear crystal and into the Nd:glass regenerative amplifier as the chirped signal pulse train and the seed pulse train of the pumping laser system, respectively. By directly adjusting the cavity length of the regenerative amplifier, the width of the amplified pulse can be varied continuously from approximately 300 ps to approximately 3 ns. Because the chirped signal pulse for the OPCPA laser system and the seed pulse for the pumping laser system come from the same oscillator, the time jitter between the signal pulse and the pumping pulse in the optical parametric amplification stages is less than 10 ps. (C) 2003 Society of Photo-Optical Instrumentation Engineers.
Abstract:
This study sets out to consider how care is performed in the practices of a school that follows the Waldorf Pedagogy approach, drawing on the theoretical-methodological orientation of Actor-Network Theory (ANT). The aim is to rethink this space through the relations of care established in those practices, especially through the bond formed in teacher/student interaction. Students are perceived as those who must be guided and teachers as those who must guide them, often without knowing exactly the usefulness of what they are teaching. Mol (2006) proposes that care has a logic of its own, which she calls the Logic of Care. This stands in contrast to the Logic of Choice, in which care is established by a specialist who indicates what should or should not be done, and the person being cared for is expected to follow the caregiver's instructions. A practice that follows the Logic of Care, by contrast, starts from the idea that the one who is cared for is as much an actor as the one who cares, since the former is not passive with respect to their own care or the conditions under which it takes place. The network of caring is thus broadened to include all the actors who perform it, understanding the teaching-learning process as a set of affects that integrates affectivity and cognition. The research was carried out by following a Waldorf school, observing its routine and the activities in which care practices are performed, especially those that follow the Logic of Care, which enables us to think of another possible becoming of the school. The field studied helped to deconstruct the traditionally performed form of care, making it possible to think of the school as a space that broadens the possibilities of caring.
Abstract:
Density modeling is notoriously difficult for high dimensional data. One approach to the problem is to search for a lower dimensional manifold which captures the main characteristics of the data. Recently, the Gaussian Process Latent Variable Model (GPLVM) has successfully been used to find low dimensional manifolds in a variety of complex data. The GPLVM consists of a set of points in a low dimensional latent space, and a stochastic map to the observed space. We show how it can be interpreted as a density model in the observed space. However, the GPLVM is not trained as a density model and therefore yields bad density estimates. We propose a new training strategy and obtain improved generalisation performance and better density estimates in comparative evaluations on several benchmark data sets. © 2010 Springer-Verlag.
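One schematic way to read the density-model interpretation (an assumed generic reading, not the paper's exact construction): with a latent prior p(x) and the GP's stochastic map p(y | x), the induced density in the observed space is the marginal over the latent variable, which can be approximated by Monte Carlo sampling of latent points:

```latex
% Schematic only: latent prior p(x), GP predictive map p(y | x).
p(\mathbf{y}) \;=\; \int p(\mathbf{y}\mid \mathbf{x})\,p(\mathbf{x})\,d\mathbf{x}
\;\approx\; \frac{1}{S}\sum_{s=1}^{S}
\mathcal{N}\!\big(\mathbf{y};\,\boldsymbol{\mu}(\mathbf{x}_s),\,\sigma^{2}(\mathbf{x}_s)\,\mathbf{I}\big),
\qquad \mathbf{x}_s \sim p(\mathbf{x}).
```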
Abstract:
It is widely reported that threshold voltage and on-state current of amorphous indium-gallium-zinc-oxide bottom-gate thin-film transistors are strongly influenced by the choice of source/drain contact metal. Electrical characterisation of thin-film transistors indicates that the electrical properties depend on the type and thickness of the metal(s) used. Electron transport mechanisms and possibilities for control of the defect state density are discussed. Pilling-Bedworth theory for metal oxidation explains the interaction between contact metal and amorphous indium-gallium-zinc-oxide, which leads to significant trap formation. Charge trapping within these states leads to variable capacitance diode-like behavior and is shown to explain the thin-film transistor operation. © 2013 AIP Publishing LLC.
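For context, the Pilling-Bedworth ratio referred to above compares the molar volume of the oxide formed to that of the metal consumed; values near or above 1 suggest a compact oxide layer, values well below 1 a porous one. A minimal sketch with approximate textbook densities and molar masses (illustrative values only, not data from the paper) for three metals commonly used as source/drain contacts:

```python
# Pilling-Bedworth ratio: PBR = (M_oxide * rho_metal) / (n * M_metal * rho_oxide),
# where n is the number of metal atoms per formula unit of the oxide.
# The figures below are approximate textbook values, for illustration only.

def pilling_bedworth(M_oxide, rho_oxide, M_metal, rho_metal, n_metal_atoms):
    return (M_oxide * rho_metal) / (n_metal_atoms * M_metal * rho_oxide)

examples = {
    # metal: (M_oxide [g/mol], rho_oxide [g/cm^3], M_metal [g/mol], rho_metal [g/cm^3], n)
    "Al (Al2O3)": (101.96, 3.95, 26.98, 2.70, 2),
    "Ti (TiO2)":  (79.87, 4.23, 47.87, 4.51, 1),
    "Mo (MoO3)":  (143.95, 4.70, 95.95, 10.28, 1),
}

for metal, args in examples.items():
    print(f"{metal}: PBR approx {pilling_bedworth(*args):.2f}")
```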
Abstract:
A microsatellite locus, MFW1, originating from common carp is highly conserved in its flanking nucleotides but variable in repeat length in some fishes from different families of the Cypriniformes. This orthologous locus is polymorphic in approximately 58% of the species tested in the order and follows Mendelian inheritance. It proved to be a potentially useful marker for population genetics and for cyprinid species-breeding programmes in which no microsatellite markers were previously available.
Abstract:
We present a novel X-ray frame camera with variable exposure time that is based on double-gated micro-channel plates (MCP). Two MCPs are connected so that their channels form a Chevron-MCP structure, and four parallel micro-strip lines (MSLs) are deposited on each surface of the Chevron-MCP. The MSLs on opposing surfaces of the Chevron-MCP are oriented normal to each other and subjected to high voltage. The MSLs on the input and output surfaces are fed high voltage pulses to form a gating action. In forming two-dimensional images, modifying the width of the gating pulse serves to set exposure times (ranging from ps to ms) and modifying the delay between each gating pulse serves to set capture times. This prototype provides a new tool for high-speed X-ray imaging, and this paper presents both simulations and experimental results obtained with the camera.
Abstract:
The physics-based parameter load/unload response ratio (LURR) was proposed to measure the proximity of a strong earthquake and has achieved good results in earthquake prediction. Since LURR can be used to describe the damage degree of the focal medium qualitatively, there must be a relationship between LURR and the damage variable (D), which describes damaged materials quantitatively in damage mechanics. Hence, based on damage mechanics and LURR theory, and taking the Weibull distribution as the probability distribution function, the relationship between LURR and D is set up and analyzed. This relationship takes LURR from a qualitative description to a quantitative tool for damage analysis of materials, which not only gives the LURR method a more solid basis in physics but may also offer a new approach to the damage evaluation of large-scale structures and the prediction of catastrophic engineering failures. Copyright (c) 2009 John Wiley & Sons, Ltd.
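For reference, the standard ingredients this relationship builds on are given below; the abstract does not state the derived closed form, so only the usual LURR definition and the Weibull-based damage variable are shown (assumed standard forms, not the paper's result):

```latex
% Standard definitions only; the paper's derived LURR--D relation is not reproduced here.
Y = \frac{X_{+}}{X_{-}}, \qquad
X = \lim_{\Delta P \to 0}\frac{\Delta R}{\Delta P}
\quad\text{(response rate under loading, } X_{+}\text{, and unloading, } X_{-}\text{)},
\\[4pt]
D(\varepsilon) = 1 - \exp\!\left[-\left(\frac{\varepsilon}{\varepsilon_{0}}\right)^{m}\right]
\quad\text{(Weibull damage variable with scale } \varepsilon_{0} \text{ and shape } m\text{)}.
```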
Abstract:
The limitations of the one-dimensional (1D) master equation (ME), which takes the mass number of the projectile-like fragment as its variable, are studied, and a two-dimensional (2D) master equation with the neutron and proton numbers as independent variables is set up and solved numerically. Our study shows that the 2D ME describes the fusion process well for all projectile-target combinations, so the possible channels for synthesizing super-heavy nuclei can be studied correctly over a wider range of possibilities. The conditions under which the 1D ME remains applicable are also pointed out.
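As a rough illustration of the 2D formulation, the sketch below evolves a probability distribution P(N, Z) under a generic master equation with nearest-neighbour nucleon-transfer terms. The uniform rate W, the grid size, and the initial fragment are placeholders, not the paper's transport coefficients:

```python
# Generic sketch of a 2D master equation over neutron and proton numbers (N, Z):
# dP(N,Z)/dt = sum over neighbours of [W * P(neighbour) - W * P(N,Z)].
# The uniform rate W and grid sizes are placeholders, NOT the paper's transport coefficients.
import numpy as np

N_MAX, Z_MAX = 40, 30
P = np.zeros((N_MAX, Z_MAX))
P[20, 15] = 1.0                  # probability initially concentrated at a chosen fragment (N0, Z0)

W = 1.0                          # placeholder rate for transferring one neutron or one proton
dt, steps = 1e-3, 2000           # explicit Euler time stepping

for _ in range(steps):
    dP = np.zeros_like(P)
    # pairwise exchange along the neutron axis (conserves total probability)
    flux_n = W * (P[1:, :] - P[:-1, :])
    dP[:-1, :] += flux_n
    dP[1:, :] -= flux_n
    # pairwise exchange along the proton axis
    flux_z = W * (P[:, 1:] - P[:, :-1])
    dP[:, :-1] += flux_z
    dP[:, 1:] -= flux_z
    P += dt * dP

print("total probability:", P.sum())                        # stays 1 up to round-off
print("most probable (N, Z):", np.unravel_index(P.argmax(), P.shape))
```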
Abstract:
The Gaussian process latent variable model (GP-LVM) has been identified as an effective probabilistic approach to dimensionality reduction because it can obtain a low-dimensional manifold of a data set in an unsupervised fashion. However, the GP-LVM is insufficient for supervised learning tasks (e.g., classification and regression) because it ignores class label information during dimensionality reduction. In this paper, a supervised GP-LVM is developed for supervised learning tasks, and a maximum a posteriori algorithm is introduced to estimate the positions of all samples in the latent variable space. We present experimental evidence suggesting that the supervised GP-LVM uses class label information effectively and thus consistently outperforms the GP-LVM and its discriminative extension. A comparison with supervised classification methods such as Gaussian process classification and support vector machines is also given to illustrate the advantage of the proposed method.
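Schematically, a maximum a posteriori estimate of the latent positions that also uses the labels might be written as below; this generic form, and in particular the label-informed prior p(X | c), is an assumption made for illustration rather than the paper's exact objective:

```latex
% Generic MAP estimate of the latent positions X given data Y and labels c;
% the form of the label-informed prior p(X | c) is an assumption, not taken from the paper.
\hat{\mathbf{X}} \;=\; \arg\max_{\mathbf{X}}\;
\Big[\,\log p(\mathbf{Y}\mid \mathbf{X},\boldsymbol{\theta})
\;+\; \log p(\mathbf{X}\mid \mathbf{c})\,\Big].
```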
Abstract:
Variable selection is a key step in the chemical analysis of multi-component samples and in quantitative structure-activity/property relationship (QSAR/QSPR) studies. In this study, comparisons between different methods were performed: three classical methods (forward selection, backward elimination, and stepwise regression), orthogonal descriptors, leaps-and-bounds regression, and a genetic algorithm. Thirty-five nitrobenzenes were taken as the data set, and quantum chemical parameters, topological indices, and an indicator variable were extracted from these structures as descriptors for the comparison of variable selection methods. Interesting results were obtained. (C) 2001 Elsevier Science B.V. All rights reserved.
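As an illustration of the classical wrapper methods compared above, the sketch below runs forward selection and backward elimination around a linear model using scikit-learn's SequentialFeatureSelector. The synthetic data stands in for the 35 nitrobenzenes and their descriptors, which are not reproduced here:

```python
# Minimal sketch of forward selection vs. backward elimination on synthetic regression data.
# Synthetic data stands in for the 35 nitrobenzenes and their descriptors.
from sklearn.datasets import make_regression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=35, n_features=20, n_informative=5, noise=0.5, random_state=0)

model = LinearRegression()
for direction in ("forward", "backward"):
    selector = SequentialFeatureSelector(
        model, n_features_to_select=5, direction=direction, cv=5
    )
    selector.fit(X, y)
    chosen = selector.get_support(indices=True)
    print(f"{direction:>8} selection kept descriptors: {list(chosen)}")
```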
Abstract:
Feature selection aims to determine a minimal feature subset from a problem domain while retaining a suitably high accuracy in representing the original features. Rough set theory (RST) has been used as such a tool with much success. RST enables the discovery of data dependencies and the reduction of the number of attributes contained in a dataset using the data alone, requiring no additional information. This chapter describes the fundamental ideas behind RST-based approaches and reviews related feature selection methods that build on these ideas. Extensions to the traditional rough set approach are discussed, including recent selection methods based on tolerance rough sets, variable precision rough sets and fuzzy-rough sets. Alternative search mechanisms are also highly important in rough set feature selection. The chapter includes the latest developments in this area, including RST strategies based on hill-climbing, genetic algorithms and ant colony optimization.
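A minimal sketch of the dependency-based hill-climbing selection described in this chapter is given below, using a tiny hypothetical decision table: attributes are added greedily while the rough-set dependency degree gamma increases. This is the classical QuickReduct-style strategy only, not the tolerance, variable-precision, or fuzzy-rough extensions:

```python
# QuickReduct-style rough set feature selection on a toy decision table.
# gamma(B) = |positive region of the decision w.r.t. attribute subset B| / |universe|.

# Hypothetical decision table: rows are objects, last column is the decision attribute.
table = [
    ("sunny", "hot",  "high",   "no"),
    ("sunny", "hot",  "high",   "no"),
    ("rainy", "mild", "high",   "yes"),
    ("rainy", "cool", "normal", "yes"),
    ("sunny", "cool", "normal", "yes"),
    ("rainy", "mild", "high",   "no"),
]
n_attrs = len(table[0]) - 1

def gamma(attrs):
    """Dependency degree of the decision on the attribute subset `attrs`."""
    blocks = {}
    for row in table:
        key = tuple(row[a] for a in attrs)          # equivalence class under `attrs`
        blocks.setdefault(key, []).append(row[-1])  # collect decisions within the class
    # an object lies in the positive region if its whole class agrees on the decision
    consistent = sum(len(d) for d in blocks.values() if len(set(d)) == 1)
    return consistent / len(table)

reduct, best = [], 0.0
full = gamma(range(n_attrs))
while best < full:
    # greedily add the attribute that most increases the dependency degree
    candidate = max((a for a in range(n_attrs) if a not in reduct),
                    key=lambda a: gamma(reduct + [a]))
    reduct.append(candidate)
    best = gamma(reduct)

print("dependency of full attribute set:", full)
print("selected attribute indices (reduct):", reduct, "with gamma =", best)
```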
Abstract:
We consider challenges associated with application domains in which a large number of distributed, networked sensors must perform a sensing task repeatedly over time. For the tasks we consider, there are three significant challenges to address. First, nodes have resource constraints imposed by their finite power supply, which motivates energy-conserving computations. Second, for the applications we describe, the utility derived from a sensing task may vary depending on the placement and size of the set of nodes that participate, which often involves complex objective functions for nodes to target. Finally, nodes must attempt to realize these global objectives with only local information. We present a model for such applications, in which we define appropriate global objectives based on utility functions and specify a cost model for energy consumption. Then, for an important class of utility functions, we present distributed algorithms that attempt to maximize the utility derived from the sensor network over its lifetime. The algorithms and experimental results we present enable nodes to adaptively change their roles over time and use dynamic reconfiguration of routes to load-balance energy consumption in the network.
Abstract:
The cost and complexity of deploying measurement infrastructure in the Internet for the purpose of analyzing its structure and behavior is considerable. Basic questions about the utility of increasing the number of measurements and/or measurement sites have not yet been addressed, which has led to a "more is better" approach to wide-area measurements. In this paper, we quantify the marginal utility of performing wide-area measurements in the context of Internet topology discovery. We characterize topology in terms of nodes, links, node degree distribution, and end-to-end flows using statistical and information-theoretic techniques. We classify nodes discovered on the routes between a set of 8 sources and 1277 destinations to differentiate nodes which make up the so-called "backbone" from those which border the backbone and those on links between the border nodes and destination nodes. This process includes reducing nodes that advertise multiple interfaces to single IP addresses. We show that the utility of adding sources drops significantly after 2 from the perspective of interface, node, link, and node degree discovery. We show that the utility of adding destinations is constant for interfaces, nodes, links, and node degree, indicating that it is more important to add destinations than sources. Finally, we analyze paths through the backbone and show that shared link distributions approximate a power law, indicating that a small number of backbone links in our study are very heavily utilized.
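A minimal sketch of the marginal-utility computation described above, with synthetic traceroute-style paths standing in for the real measurement data (the node names, path shapes, and counts are all hypothetical): for each source added, count how many previously unseen links its paths contribute.

```python
# Marginal utility of adding measurement sources, on synthetic traceroute-style paths.
# Each path is a list of node IDs; links are consecutive node pairs. Paths are random
# stand-ins for real measurements, shaped so that sources share a common "backbone".
import random

random.seed(0)
n_sources, n_destinations = 8, 100
backbone = [f"b{i}" for i in range(20)]

def synthetic_path(src, dst):
    """Source-specific access hop, a shared backbone segment, then a destination hop."""
    start = random.randrange(len(backbone) - 5)
    return [f"src{src}"] + backbone[start:start + 5] + [f"dst{dst}"]

paths_by_source = {
    s: [synthetic_path(s, d) for d in range(n_destinations)] for s in range(n_sources)
}

seen_links = set()
for s in range(n_sources):
    new_links = set()
    for path in paths_by_source[s]:
        new_links.update(zip(path, path[1:]))   # consecutive node pairs = links
    gained = len(new_links - seen_links)        # links this source adds beyond earlier ones
    seen_links |= new_links
    print(f"source {s}: {gained} previously unseen links ({len(seen_links)} total)")
```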