940 results for texture-defined (second-order) information
Abstract:
This letter presents pseudolikelihood equations for the estimation of the Potts Markov random field model parameter on higher-order neighborhood systems. The derived equation for second-order systems is a significantly reduced version of a recent result from the literature (from 67 to 22 terms). The proposed method also yields a completely original equation for Potts model parameter estimation on third-order systems. These equations allow the modeling of less restrictive contextual systems for a large number of applications in a computationally feasible way. Experiments with both simulated and real remote sensing images provided good results.
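A minimal sketch, assuming the standard first-order (4-neighbor) Potts pseudolikelihood maximized numerically; the letter's contribution is the analogous closed-form equations for second- and third-order neighborhoods, which this toy example does not reproduce:

```python
# Sketch: maximum pseudolikelihood estimation of the Potts smoothing
# parameter beta on a 4-neighbor lattice. For each interior pixel,
# log P(x_s = k | neighbors) = beta*U_k - log(sum_l exp(beta*U_l)),
# where U_k counts neighbors with label k.
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_pl(beta, labels, q):
    """Negative log-pseudolikelihood of a labeled image."""
    nll = 0.0
    H, W = labels.shape
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            nbrs = [labels[i-1, j], labels[i+1, j],
                    labels[i, j-1], labels[i, j+1]]
            counts = np.array([nbrs.count(k) for k in range(q)])
            nll -= beta * counts[labels[i, j]] - np.log(np.exp(beta * counts).sum())
    return nll

q = 3
labels = np.random.randint(0, q, (32, 32))   # toy label field
res = minimize_scalar(neg_log_pl, bounds=(0.0, 3.0),
                      args=(labels, q), method="bounded")
print("estimated beta:", res.x)
```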
Abstract:
We consider the problem of blocking response surface designs when the block sizes are prespecified to control variation efficiently and the treatment set is chosen independently of the block structure. We show how the loss of information due to blocking is related to scores defined by Mead, and present an interchange algorithm based on these scores to improve a given blocked design. Examples illustrating the performance of the algorithm are given and some comparisons with other designs are made. (C) 2000 Elsevier B.V. All rights reserved.
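A toy sketch of the pairwise-interchange idea: swap treatments between blocks and keep any swap that improves a design score. The `score` function below is a hypothetical stand-in, not Mead's scores from the paper:

```python
# Generic pairwise-interchange improvement of a blocked design.
import itertools
import random

def score(blocks):
    # Stand-in criterion: penalize repeated treatments within a block.
    return -sum(len(b) - len(set(b)) for b in blocks)

def interchange(blocks, sweeps=10):
    blocks = [list(b) for b in blocks]
    best = score(blocks)
    for _ in range(sweeps):
        improved = False
        for b1, b2 in itertools.combinations(range(len(blocks)), 2):
            for i in range(len(blocks[b1])):
                for j in range(len(blocks[b2])):
                    blocks[b1][i], blocks[b2][j] = blocks[b2][j], blocks[b1][i]
                    s = score(blocks)
                    if s > best:
                        best, improved = s, True
                    else:  # undo the swap
                        blocks[b1][i], blocks[b2][j] = blocks[b2][j], blocks[b1][i]
        if not improved:
            break
    return blocks

random.seed(1)
treatments = [t for t in range(6) for _ in range(2)]  # 6 treatments, replicated
random.shuffle(treatments)
blocks = [treatments[0:4], treatments[4:8], treatments[8:12]]  # block size 4
print(interchange(blocks))
```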
Abstract:
Graduate program in Agronomy (Energy in Agriculture) - FCA
Abstract:
Three-dimensional gravity source moment inversion is analyzed in two situations. In the first, only the anomaly is assumed known. In the second, a priori information about the anomalous body is assumed known in addition to the anomaly. Without using a priori information, we show that it is possible to determine uniquely every moment, or linear combination of moments, whose polynomial kernel is a function only of the Cartesian coordinates defining the measurement plane and has zero Laplacian. Moreover, we show that no moment whose polynomial kernel has a nonzero Laplacian can be determined. On the other hand, a priori information is implicitly introduced if the moment inversion method is based on approximating the anomaly by the truncated series obtained from its multipole expansion. Given an arbitrary expansion center, the truncation of the series imposes a regularization condition on the equipotential surfaces of the anomalous body, which allows the unique estimation of the moments and linear combinations of moments that are the coefficients of the basis functions of the multipole expansion. Thus, a mass distribution equivalent to the real one is postulated, with the equivalence criterion specified by the fit between the observed field and the field computed with the series truncated at moments of a pre-established maximum order. The moments of the equivalent mass distribution were identified as the stationary solution of a system of first-order linear differential equations, for which uniqueness and asymptotic stability are assured. For the series retaining moments up to second order, it is implicitly assumed that the anomalous body is convex and has finite volume, that it is sufficiently far from the measurement plane, and that its spatial mass distribution has three orthogonal planes of symmetry. The truncated-series moment inversion method (IMT) is adapted to the magnetic case. For this case, we show that, to ensure uniqueness and asymptotic stability, it is sufficient to assume, in addition to the regularization condition, that the total magnetization has a constant, although unknown, direction. The IMT method based on the second-order series (IMT2) is applied to synthetic three-dimensional gravity and magnetic anomalies. We show that if the source satisfies the required conditions, good estimates of its total anomalous mass or total anomalous dipole moment vector, of the position of its center of mass or center of dipole moment, and of the directions of its three principal axes are obtained in a stable way. The IMT2 method may fail partially when the source is close to the measurement plane, or when the anomaly contains strong, localized effects of a small, shallow body and one attempts to estimate the parameters of a large, deep body. We define partial failure as the situation in which some of the estimates obtained may not be good approximations of the true values. In the two situations described above, the depth of the center of the (larger) source and the directions of its principal axes may be wrongly estimated, although the total anomalous mass or dipole moment vector and the projection of the source center onto the measurement plane are still well estimated. If the total magnetization direction is not constant, the IMT2 method may yield wrong estimates of the directions of the principal axes (even if the source is far from the measurement plane), although the remaining parameters are well estimated. The IMT2 method may fail completely if the source does not have finite volume.
We define complete failure as the situation in which any estimate obtained may not be a good approximation of the true value. The IMT2 method is applied to real gravity and magnetic data. In the gravity case, we used an anomaly located in the state of Bahia, presumed to be caused by a granite batholith. Based on the results, we suggest that the granitic masses generating this anomaly were stretched in the NNW direction and thinned in the vertical direction during the compressive event that caused the orogenesis of the Espinhaço Fold System. In addition, we estimate that the depth of the center of mass of the source is about 20 km. In the magnetic case, we used the anomaly of a seamount located in the Gulf of Guinea. Based on the results, we estimate that the magnetic paleopole of the seamount has latitude 50°48'S and longitude 74°54'E, and we suggest that there is no significant magnetization contrast below the base of the seamount.
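As a notational aid (our own sketch, using the standard definition of source moments; the harmonic-kernel condition restates the uniqueness result above):

```latex
% Moments of the density distribution rho over the source volume V; a
% moment combination with polynomial kernel P(x,y,z) is determinable
% from the anomaly alone only if the kernel is harmonic.
\[
  m_{pqr} = \int_{V} x^{p} y^{q} z^{r}\,\rho(x,y,z)\,dV ,
  \qquad
  \nabla^{2} P
  = \frac{\partial^{2}P}{\partial x^{2}}
  + \frac{\partial^{2}P}{\partial y^{2}}
  + \frac{\partial^{2}P}{\partial z^{2}} = 0 .
\]
```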
Abstract:
The general objective of this work was to develop a systematic methodology for the inversion of seismic reflection data in the common-midpoint (CMP) configuration, starting from the 1D case of vertical variation of velocity and thickness, which allows obtaining models of interval velocities, v_{int,n}, interval thicknesses, z_n, and root-mean-square velocities, v_{RMS,n}, in individual CMP sections. A consequence of this is the direct transformation of these values from time to depth. As a contribution to velocity analysis, two methods were developed to attack the problem of interval velocity estimation. The first method was based on manual picking in CMP sections and inversion by least-squares curve fitting. The second method was based on optimization of the semblance function to obtain automatic picking. The methodology combined two types of optimization: a global method (the Price or Simplex method) and a local method (second-order or conjugate gradient), subject to a priori information and constraints. Event picking in the time-distance section is part of the inversion process, and the picked points constitute the input data together with the a priori information about the model to be fitted. The picking should, in principle, avoid events representing multiples, diffractions and intersections; in a section more than 50 events can be picked, whereas in a semblance map no more than 10 reflection events can be picked. The application of this work is aimed at seismic data from sedimentary basins in marine environments, in order to obtain a velocity distribution for the subsurface, where the plane-horizontal model is applied in individual CMP sections and whose solution can be used as an initial model in subsequent processes. The real marine-basin data used in this work were acquired by PETROBRAS in 1985; the selected seismic line is number L5519 of the Camamu Basin, and the CMP presented is number 237. The line consists of 1098 shot points, with a right-unilateral spread. The sampling interval is 4 ms. The geophone spacing is 13.34 m, with the first geophone located 300 m from the source. The source spacing is 26.68 m. As a general conclusion, the interval velocity estimation method presented in this work serves as an alternative support for the velocity analysis process, where control over the inversion sequence of the CMPs along the seismic line is needed so that the solution can be used as an initial model for imaging and subsequent tomographic inversion. As future steps, we propose work aimed directly and specifically at seismic velocity analysis, extending the 2D case of semblance optimization to the 3D case, and extending the present study to the case based on image-ray theory, in order to produce a continuous velocity map for the whole seismic section automatically.
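As a hedged illustration (the classical Dix-type relations connect these quantities for a stack of plane-horizontal layers; the abstract does not spell out its own equations):

```latex
% Dix-type relation between RMS and interval velocities for n layers
% with two-way traveltimes t_n, and the resulting interval thickness.
\[
  v_{\mathrm{int},n}^{2}
  = \frac{v_{\mathrm{RMS},n}^{2}\,t_{n} - v_{\mathrm{RMS},n-1}^{2}\,t_{n-1}}
         {t_{n} - t_{n-1}},
  \qquad
  z_{n} = \frac{v_{\mathrm{int},n}\,(t_{n} - t_{n-1})}{2}.
\]
```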
Abstract:
Background: The CAMbrella coordination action was funded within the 7th Framework Programme. Its aim is to provide a research roadmap for clinical and epidemiological research for complementary and alternative medicine (CAM) that is appropriate for the health needs of European citizens and acceptable to their national research institutes and healthcare providers in both public and private sectors. One major issue in the European research agenda is the demographic change and its impact on health care. Our vision for 2020 is that there is an evidence base that enables European citizens to make informed decisions about CAM, both positive and negative. This roadmap proposes a strategic research agenda for the field of CAM designed to address future European health care challenges. It is based on the results of CAMbrella's work packages, literature reviews and expert discussions, including a consensus meeting. Methods: We first conducted a systematic literature review on key issues in clinical and epidemiological research in CAM to identify the general concepts, methods and the strengths and weaknesses of current CAM research. These findings were discussed in a workshop (Castellaro, Italy, September 7–9th 2011) with international CAM experts, and strategic and methodological recommendations were defined in order to improve the rigor and relevance of CAM research. These recommendations provide the basis for the research roadmap, which was subsequently discussed in a consensus conference (Järna, Sweden, May 9–11th 2012) with all CAMbrella members and the CAMbrella advisory board. The roadmap was revised after this discussion in CAMbrella Work Package (WP) 7 and finally approved by CAMbrella's scientific steering committee on September 26th 2012. Results: Our main findings show that CAM is very heterogeneous in terms of definitions and legal regulations between the European countries. In addition, citizens' needs and attitudes towards CAM, as well as the use and provision of CAM, differ significantly between countries. In terms of research methodology, there was consensus that CAM researchers should make use of all the commonly accepted scientific research methods and employ them with the utmost diligence, combined in a mixed-methods framework. Conclusions: We propose six core areas of research that should be investigated to achieve a robust knowledge base and to allow stakeholders to make informed decisions:
- Research into the prevalence of CAM in Europe: reviews show that we do not know enough about the circumstances in which CAM is used by Europeans. To enable a common European strategic approach, a clear picture of current use is of the utmost importance.
- Research into differences regarding citizens' attitudes and needs towards CAM: citizens are the drivers of CAM utilization. Their needs and views on CAM are a key priority, and their interests must be investigated and addressed in future CAM research.
- Research into the safety of CAM: safety is a key issue for European citizens. CAM is generally considered safe, but reliable data are scarce, although they are urgently needed to assess the risk and cost-benefit ratio of CAM.
- Research into the comparative effectiveness of CAM: everybody needs to know in which situations CAM is a reasonable choice. We therefore recommend a clear emphasis on concurrent evaluation of the overall effectiveness of CAM as an additional or alternative treatment strategy in real-world settings.
- Research into the effects of context and meaning: the impact of context and meaning on the outcome of CAM treatments must be investigated; it is likely to be significant.
- Research into different models of CAM health care integration: there are different models of CAM being integrated into conventional medicine throughout Europe, each with its respective strengths and limitations. These models should be described and concurrently evaluated, and innovative models of CAM provision in health care systems should be one focus for CAM research.
We also propose a methodological framework for CAM research. We consider that a framework of mixed methodological approaches is likely to yield the most useful information. In this model, all available research strategies, including comparative effectiveness research using quantitative and qualitative methods, should be considered to enable us to secure the greatest density of knowledge possible. Stakeholders, such as citizens, patients and providers, should be involved in every stage of developing the specific and relevant research questions, the study design and the assurance of real-world relevance for the research. Furthermore, structural and sufficient financial support for research into CAM is needed to strengthen CAM research capacity if we wish to understand why it remains so popular within the EU. In order to consider employing CAM as part of the solution to the health care, health creation and self-care challenges we face by 2020, it is vital to obtain a robust picture of CAM use and reliable information about its cost, safety and effectiveness in real-world settings. We need to consider the availability, accessibility and affordability of CAM. We need to engage in research excellence and utilise comparative effectiveness approaches and mixed methods to obtain this data. Our recommendations are both strategic and methodological. They are presented for the consideration of researchers and funders, and are designed to answer the important and implicit questions posed by EU citizens currently using CAM in apparently increasing numbers. We propose that the EU actively supports an EU-wide strategic approach that facilitates the development of CAM research. This could be achieved in the first instance through funding a European CAM coordinating research office dedicated to fostering systematic communication between EU governments; public, charitable and industry funders; and researchers, citizens and other stakeholders. The aim of this office would be to coordinate research strategy developments and research funding opportunities, as well as to document and disseminate international research activities in this field. As a second step, with the aim of developing sustainability, a European Centre for CAM should be established to take over the monitoring and further development of a coordinated research strategy for CAM; it should also have funds that can be awarded to foster high-quality, robust, independent research with a focus on citizens' health needs and pan-European collaboration. We wish to establish solid funding for CAM research to adequately inform health care and health creation decision-making throughout the EU. This centre would ensure that our vision of a common, strategic and scientifically rigorous approach to CAM research becomes our legacy and Europe's reality. We are confident that our recommendations will serve these essential goals for EU citizens.
Abstract:
The main goal of the bilingual and monolingual participation of the MIRACLE team in CLEF 2004 was to test the effect of combination approaches on information retrieval. The starting point was a set of basic components: stemming, transformation, filtering, generation of n-grams, weighting and relevance feedback. Some of these basic components were used in different combinations and orders of application for document indexing and for query processing. A second-order combination was also tested, mainly by averaging or selectively combining the documents retrieved by different approaches for a particular query.
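A minimal sketch of the averaging flavor of such a second-order combination (hypothetical function and run names, not the MIRACLE implementation):

```python
# Combine ranked retrieval runs by averaging min-max normalized scores
# per document -- one simple instance of averaging-style run fusion.
from collections import defaultdict

def normalize(run):
    """Min-max normalize a run: dict mapping doc_id -> score."""
    lo, hi = min(run.values()), max(run.values())
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in run.items()}

def average_combine(runs):
    """Average normalized scores of several runs for one query."""
    totals = defaultdict(float)
    for run in runs:
        for doc, score in normalize(run).items():
            totals[doc] += score / len(runs)
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Example: two toy runs retrieved by different indexing pipelines.
run_a = {"d1": 12.0, "d2": 7.5, "d3": 3.1}
run_b = {"d2": 0.9, "d3": 0.6, "d4": 0.2}
print(average_combine([run_a, run_b]))
```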
Abstract:
This paper outlines the problems found in the parallelization of SPH (Smoothed Particle Hydrodynamics) algorithms using Graphics Processing Units. Results of several parallel GPU implementations are shown in terms of speed-up and scalability compared to sequential CPU codes. The most problematic stage in GPU-SPH algorithms is the one responsible for locating neighboring particles and building the vectors where this information is stored, since these specific algorithms raise many difficulties for data-level parallelization. Because neighbor location using linked lists does not expose enough data-level parallelism, two new approaches have been proposed to minimize bank conflicts in the writing and subsequent reading of the neighbor lists. The first strategy proposes an efficient CPU-GPU coordination, using GPU algorithms for those stages that allow a straightforward parallelization and sequential CPU algorithms for those instructions that involve some kind of vector reduction. This coordination provides a relatively orderly reading of the neighbor lists in the interactions stage, achieving a speed-up factor of x47 in this stage. However, since the construction of the neighbor lists is quite expensive, an overall speed-up of x41 is achieved. The second strategy seeks to maximize the use of the GPU in the neighbor location process by executing a specific vector sorting algorithm that allows some data-level parallelism. Although this strategy has succeeded in improving the speed-up of the neighbor location stage, the global speed-up of the interactions stage falls, due to inefficient reading of the neighbor vectors. Some changes to these strategies are proposed, aimed at maximizing the computational load of the GPU and using the GPU texture units, in order to reach the maximum speed-up for such codes. Different practical applications have been added to the aforementioned GPU codes. First, the classical dam-break problem is studied. Second, the wave impact of the sloshing fluid contained in LNG vessel tanks is simulated as a practical example of particle methods.
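A minimal sequential sketch of the cell-list neighbor search that the GPU strategies above reorganize (illustrative NumPy, not the GPU code discussed):

```python
# Cell-list neighbor search for SPH-style codes: bin particles into
# cells of side h (the smoothing length), then look for neighbors only
# in the 27 surrounding cells instead of among all N particles.
import numpy as np
from collections import defaultdict
from itertools import product

def build_cell_list(pos, h):
    cells = defaultdict(list)
    for i, p in enumerate(pos):
        cells[tuple((p // h).astype(int))].append(i)
    return cells

def neighbors(pos, h):
    cells = build_cell_list(pos, h)
    nbrs = {i: [] for i in range(len(pos))}
    for i, p in enumerate(pos):
        c = tuple((p // h).astype(int))
        for off in product((-1, 0, 1), repeat=3):
            for j in cells.get(tuple(np.add(c, off)), ()):
                if j != i and np.linalg.norm(pos[i] - pos[j]) < h:
                    nbrs[i].append(j)
    return nbrs

pos = np.random.rand(200, 3)   # toy particle positions in a unit box
print(sum(len(v) for v in neighbors(pos, 0.1).values()), "neighbor pairs found")
```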
Abstract:
This research aims to develop a methodology that fosters innovation in companies through managerial activity, analyzing in turn its influence at the macro level, on innovation systems, innovation policies and intellectual capital, and at the micro level, on innovation, performance and organizational climate. A study on this subject is considered important because innovation is regarded as a critical pillar for social development through the competitiveness of companies, as well as an important source of competitive advantage. There is abundant literature on the influence of innovation on business management and the role that leadership plays in general terms. However, the literature presents various leadership styles without showing a consistent line of interrelation among them, so that ultimately there is no solid relationship among leadership, business management and innovation. As shown in the thesis, this is because the literature analyzes organizations and leadership from a sociological or organizational perspective and, separately, from a psychological perspective, without providing a line of articulation between the two. That is, the literature analyzes organizational behavior but not its cause. Throughout the thesis, different lines of work are developed that become empirical and academic contributions. Thus, one of the contributions of the thesis is the replacement of the figure of the leader as a person by that of a manager with a dual function: on the one hand, the leadership function, whose objective is to generate change, and on the other, the function of managing the day-to-day, or performance. Replacing the figure of the leader with a dual managerial functionality facilitates the understanding of the concept of leadership, which in turn makes it possible to establish strategies for its development, making it a reality that leadership can be learned. This result constitutes the first contribution of the thesis. Likewise, through an exhaustive analysis of the literature, an integrated leadership proposal is developed in accordance with the Stuart-Kotze model, which is also described extensively. Finding a single leadership model is the cornerstone for the development of the methodology. This integrated leadership proposal gives rise to the second contribution of the thesis. Similarly, an in-depth study of the psychological perspective of organizations is carried out, developing the construct Fear of Error (Miedo al Error, ME), which turns out to be a personality trait present in all human beings and which has a negative influence on both performance and business innovation. This result makes it possible to identify the real barriers to the exercise of leadership, indicating that the reduction of ME should be considered an Emotional Intelligence competence to be developed by managers. This result constitutes the third contribution of the thesis. Once the business management model described above has been developed, it is validated by analyzing the relationships among the constructs that define the management model: performance, innovation and ME. To identify the influences or causal relationships underlying the constructs, the structural equation modeling (SEM) technique was used.
The study population consisted of 350 professionals with managerial responsibility, from service-sector companies spread throughout Spain. The questionnaire developed by Stuart-Kotze, M-CPI (Momentum Continuous Performance Improvement), was used as the primary source for gathering information. First, the psychometric properties of the measurement model were evaluated, carrying out an exploratory factor analysis (EFA) and a second-order confirmatory factor analysis (CFA). The results obtained show that the performance construct (D) is determined by two dimensions: DOP, performance oriented towards planning, and DORT, performance oriented towards task execution. That is, the sample of managers does not perceive that day-to-day planning and task execution are articulated. The model was then tested using the structural equation method. The results show that the influence of the DOP dimension is not significant, so the construct D is represented solely by the DORT dimension. The research results provide conclusions and hypotheses for future research. Although the sample of managers produces a strategic plan, it is not taken into account in the day-to-day. This fact could explain the high degree of management by crisis so frequent in Spanish companies. In turn, ME has a negative influence on innovation, which is consistent with the literature. In this regard, considering ME as a personality trait, present in both managers and employees, facilitates the understanding of the organization's barriers to open communication and, at the same time, provides a line of work for improving the organization's innovative capacity. Finally, the results establish the existence of a causal relationship between daily performance and innovation. With respect to this second result, and analyzing the behaviors that identify the construct D, several conclusions and hypotheses for future research also arise. The results show that the sample of managers generates change initiatives so that daily work gets done according to the defined quality standards. However, these initiatives come only from the managers, without any participation by employees, who are responsible only for implementation, producing the consequent demotivation and loss of opportunities. This conclusion shows that innovation in the sampled companies occurs to guarantee the efficiency of existing processes, but in no case arises from the initiative of seeking better business efficacy. This fact suggests a dual origin of process innovation: proactive innovation, which would seek to improve the organization's efficacy, and reactive innovation, which would seek to safeguard efficiency. Perhaps this is the cause of the gap between innovation in Spain and innovation in the countries at the top of the innovation output rankings, which constitutes an important starting point for future research.
ABSTRACT: This research aims to develop a methodology that supports innovation in companies through managers' activity, analysing in turn its influence at the macro level (innovation systems, innovation policies and intellectual capital) and at the micro level (innovation itself, performance and organizational climate). A study on this subject is considered important because innovation is regarded as a critical pillar for the development and future of the enterprise and an important source of competitive advantage. There is abundant literature about the influence of innovation on business management and the role that leadership plays in general terms. However, the literature presents various styles of leadership without showing a consistent relationship among them, so that ultimately there is no strong relationship among leadership, business management and innovation. As shown in the thesis, this is due to the fact that the literature analyses organizations and leadership from a sociological or organizational perspective and, separately, from a psychological perspective, without providing a hinge line between the two. That is, the existing literature discusses organizational behaviour but not its cause. Throughout the thesis, different lines of work that become empirical and academic contributions have been developed. Thus, one of the contributions of the thesis is replacing the figure of the leader as a person with that of a manager with a dual function: on the one hand, the leadership role, which aims to generate change, and on the other, the function of managing the day-to-day task, or performance. Replacing the figure of the leader with a dual managerial functionality facilitates the understanding of the leadership concept, allowing in turn the establishment of development strategies and making it a reality that leadership can be learned. This outcome is the first contribution of the thesis. Likewise, through a comprehensive literature review, an integrated leadership proposal is developed according to the Stuart-Kotze model, which is also described at length. Finding a single leadership model represents the cornerstone for the development of the methodology. This integrated leadership proposal leads to the second contribution of the thesis. Similarly, an in-depth study was conducted on the psychological perspective of organizations, disclosing the construct Fear of Failure. This construct is a personality trait that exists in all human beings and has a negative influence on both performance and business innovation. This outcome allows the identification of the real barriers to the exercise of leadership, noting that the reduction of fear of failure must be considered an Emotional Intelligence competence to be developed by managers. This outcome represents the third contribution of the thesis. Once the business management model has been developed, we proceed to its validation by analysing the relationships among the model constructs: performance, innovation and fear of failure. To identify the influences or causal relationships underlying the constructs, the structural equation modeling (SEM) technique was used. The study population consisted of 350 professionals with managerial responsibility, from service-sector companies scattered throughout Spain. As the primary source for gathering information, the M-CPI (Momentum Continuous Performance Improvement) questionnaire developed by Stuart-Kotze was used.
First, we evaluated the psychometric properties of the measurement model, carrying out an exploratory factor analysis (EFA) and a second-order confirmatory factor analysis (CFA). The results show that the performance construct D is determined by two dimensions: DOP (performance oriented towards planning) and DORT (performance oriented towards the realization of the task). That is, the sample of managers does not perceive that planning and the daily task are articulated. We then tested the model through a structural equation model (SEM). The results show that the influence of the DOP dimension is not significant, so that the construct D is finally represented by the DORT dimension alone. The research outcomes provide conclusions and hypotheses for future research. Although the managers in the sample develop a strategic plan, they do not seem to take it into account in their daily tasks. This could explain the high degree of crisis management so prevalent in Spanish companies. In turn, fear of failure has a negative influence on innovation, which is consistent with the literature. In this regard, considering fear of failure as a personality trait, present in both managers and employees, enables the understanding of organizational barriers to open communication and provides a direction for improving the organization's innovative capacity as well. Finally, the results establish a causal relationship between daily performance and innovation. Regarding this second outcome, and analysing the behaviours that identify the construct D, several conclusions and hypotheses for future research arise as well. The results show that the managers in the sample launch change initiatives in order to make everyday work go ahead according to defined quality standards. However, these initiatives come only from managers, without any participation of coworkers, who are responsible only for the implementation; this produces discouragement and loss of opportunities. This finding shows that innovation in the sampled companies happens to guarantee the efficiency of existing processes, but does not arise from an initiative that seeks better business efficacy. This points to two sources of process innovation: proactive innovation, which would seek improved organizational efficacy, and reactive innovation, which would seek to safeguard efficiency. Perhaps this is the cause of the existing gap between innovation activity in Spain and innovation activity in the countries that occupy the top positions in the ranking of innovation outcomes: Spanish companies pursue process efficiency, while the top innovators pursue business efficacy. This is an important starting point for future research.
Abstract:
A first-order Lagrangian L_∇, variationally equivalent to the second-order Einstein-Hilbert Lagrangian, is introduced. Such a Lagrangian depends on a symmetric linear connection ∇, but the dependence is covariant under diffeomorphisms. The variational problem defined by L_∇ is proved to be regular, and its Hamiltonian formulation is studied, including its covariant Hamiltonian attached to ∇.
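For orientation, a standard example of such a reduction (not necessarily the paper's L_∇): when the connection is Levi-Civita, dropping a total divergence from the Einstein-Hilbert Lagrangian leaves the classical first-order "Gamma-Gamma" Lagrangian:

```latex
% Einstein-Hilbert Lagrangian (second order in the metric g) and the
% classical first-order Gamma-Gamma Lagrangian obtained by removing a
% total divergence; both yield Einstein's equations.
\[
  L_{EH} = \sqrt{-g}\, g^{\mu\nu} R_{\mu\nu},
  \qquad
  L_{\Gamma\Gamma} = \sqrt{-g}\, g^{\mu\nu}
  \left( \Gamma^{\alpha}_{\mu\beta}\Gamma^{\beta}_{\nu\alpha}
       - \Gamma^{\alpha}_{\mu\nu}\Gamma^{\beta}_{\alpha\beta} \right).
\]
```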
Abstract:
At the level of the cochlear nucleus (CN), the auditory pathway divides into several parallel circuits, each of which provides a different representation of the acoustic signal. Here, the representation of the power spectrum of an acoustic signal is analyzed for two CN principal cells—chopper neurons of the ventral CN and type IV neurons of the dorsal CN. The analysis is based on a weighting function model that relates the discharge rate of a neuron to first- and second-order transformations of the power spectrum. In chopper neurons, the transformation of spectral level into rate is a linear (i.e., first-order) or nearly linear function. This transformation is a predominantly excitatory process involving multiple frequency components, centered in a narrow frequency range about best frequency, that usually are processed independently of each other. In contrast, type IV neurons encode spectral information linearly only near threshold. At higher stimulus levels, these neurons are strongly inhibited by spectral notches, a behavior that cannot be explained by first- or second-order level transformations. Type IV weighting functions reveal complex excitatory and inhibitory interactions that involve frequency components spanning a wider range than that seen in choppers. These findings suggest that chopper and type IV neurons form parallel pathways of spectral information transmission that are governed by two different mechanisms. Although choppers use a predominantly linear mechanism to transmit tonotopic representations of spectra, type IV neurons use highly nonlinear processes to signal the presence of wide-band spectral features.
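In generic form (our notation, inferred from the description above, not copied from the paper), such a weighting-function model relates discharge rate to the spectrum as:

```latex
% Discharge rate r as first- and second-order transformations of the
% stimulus power spectrum: S_i is the level of the i-th frequency
% component, w_i are first-order (excitatory or inhibitory) weights,
% and w_{ij} capture second-order interactions between components.
\[
  r = r_{0} + \sum_{i} w_{i} S_{i}
            + \sum_{i,j} w_{ij} S_{i} S_{j} .
\]
```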
Abstract:
1/2-meter resolution 1:5,000 orthophoto image of the Boston region from April 2001. This datalayer is a subset (covering only the Boston region) of the Massachusetts statewide orthophoto image series available from MassGIS. It consists of 23 orthophoto quads mosaicked together (MassGIS orthophoto quad ID: 229890, 229894, 229898, 229902, 233886, 233890, 233894, 233898, 233902, 233906, 233910, 237890, 237894, 237898, 237902, 237906, 237910, 241890, 241894, 241898, 241902, 245898, 245902). These medium resolution true color images are considered the new "basemap" for the Commonwealth by MassGIS and the Executive Office of Environmental Affairs (EOEA). MassGIS/EOEA and the Massachusetts Highway Department jointly funded the project. The photography for the mainland was captured in April 2001 when deciduous trees were mostly bare and the ground was generally free of snow. The geographic extent of this dataset is the same as that of the MassGIS dataset: Boston, Massachusetts Region LIDAR First Return Elevation Data, 2002 [see cross references].
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
What is the minimal size quantum circuit required to exactly implement a specified n-qubit unitary operation, U, without the use of ancilla qubits? We show that a lower bound on the minimal size is provided by the length of the minimal geodesic between U and the identity, I, where length is defined by a suitable Finsler metric on the manifold SU(2(n)). The geodesic curves on these manifolds have the striking property that once an initial position and velocity are set, the remainder of the geodesic is completely determined by a second-order differential equation known as the geodesic equation. This is in contrast with the usual case in circuit design, either classical or quantum, where being given part of an optimal circuit does not obviously assist in the design of the rest of the circuit. Geodesic analysis thus offers a potentially powerful approach to the problem of proving quantum circuit lower bounds. In this paper we construct several Finsler metrics whose minimal length geodesics provide lower bounds on quantum circuit size. For each Finsler metric we give a procedure to compute the corresponding geodesic equation. We also construct a large class of solutions to the geodesic equation, which we call Pauli geodesics, since they arise from isometries generated by the Pauli group. For any unitary U diagonal in the computational basis, we show that: (a) provided the minimal length geodesic is unique, it must be a Pauli geodesic; (b) finding the length of the minimal Pauli geodesic passing from I to U is equivalent to solving an exponential size instance of the closest vector in a lattice problem (CVP); and (c) all but a doubly exponentially small fraction of such unitaries have minimal Pauli geodesics of exponential length.
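For reference, the geodesic equation referred to above, written in local coordinates x^μ on the manifold (here SU(2^n)):

```latex
% Geodesic equation: with initial position and velocity fixed, the
% curve x^mu(t) is determined by this second-order ODE. For a Finsler
% metric the connection coefficients Gamma depend on the velocity as
% well as the position.
\[
  \frac{d^{2}x^{\mu}}{dt^{2}}
  + \Gamma^{\mu}_{\alpha\beta}\,
    \frac{dx^{\alpha}}{dt}\,\frac{dx^{\beta}}{dt} = 0 .
\]
```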
Abstract:
We analyse the dynamics of a number of second-order on-line learning algorithms training multi-layer neural networks, using the methods of statistical mechanics. We first consider on-line Newton's method, which is known to provide optimal asymptotic performance. We determine the asymptotic generalization error decay for a soft committee machine, which is shown to compare favourably with the result for standard gradient descent. Matrix momentum provides a practical approximation to this method by allowing an efficient inversion of the Hessian. We consider an idealized matrix momentum algorithm which requires access to the Hessian and find close correspondence with the dynamics of on-line Newton's method. In practice, the Hessian will not be known on-line and we therefore consider matrix momentum using a single example approximation to the Hessian. In this case good asymptotic performance may still be achieved, but the algorithm is now sensitive to parameter choice because of noise in the Hessian estimate. On-line Newton's method is not appropriate during the transient learning phase, since a suboptimal unstable fixed point of the gradient descent dynamics becomes stable for this algorithm. A principled alternative is to use Amari's natural gradient learning algorithm and we show how this method provides a significant reduction in learning time when compared to gradient descent, while retaining the asymptotic performance of on-line Newton's method.
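A toy contrast between a gradient step and a Newton step on a fixed quadratic loss (illustrative only; the paper's analysis concerns on-line learning in soft committee machines):

```python
# On a quadratic loss, the Newton update H^{-1} grad converges in one
# step, which is the asymptotic advantage that on-line Newton's method
# and matrix momentum pursue; plain gradient descent converges at a
# rate set by the Hessian's eigenvalue spread.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
H = A @ A.T + np.eye(5)            # positive-definite "Hessian"
w_star = rng.standard_normal(5)    # target weights

def grad(w):
    # Gradient of the loss 0.5 * (w - w_star)^T H (w - w_star).
    return H @ (w - w_star)

w_gd, w_newton = np.zeros(5), np.zeros(5)
for t in range(100):
    w_gd -= 0.01 * grad(w_gd)                        # gradient descent
    w_newton -= np.linalg.solve(H, grad(w_newton))   # Newton step

print("gradient descent error:", np.linalg.norm(w_gd - w_star))
print("Newton-step error:     ", np.linalg.norm(w_newton - w_star))
```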