863 results for Ershov hierarchy
Abstract:
In the second part of this study, the instance of the unity of knowledge is developed under two approaches: the unity of order, which arises from the natural hierarchy of the sciences by reason of their distinct formal objects, and the unity of integration, which posits the interaction among the disciplines with reference to a single material object. Concerning the unity of order, the classical theory of subalternation is developed, with the nuances appropriate to the present-day diversification of both the natural and the human sciences. With respect to the unity of integration, the fundamental terms of the dialogue between philosophy, theology, and science are set out. Finally, an original contribution of Maritain, which he called existential epistemology, is introduced, in which an attempt is made to move beyond the rigidity of formal objects toward a vision oriented to the concrete. As examples, the topics of knowledge by connaturality, science as a virtue, and the question of Christian philosophy are introduced.
Abstract:
A shear-lag model is used to study the mechanical properties of bone-like hierarchical materials. The relationship between the overall effective modulus and the number of hierarchy levels is obtained. The result is compared with those based on the tension-shear chain model and on finite element simulation. All three models describe the mechanical behavior of the hierarchical material well when the number of hierarchy levels is small. As the number of hierarchy levels increases, the shear-lag result remains consistent with the finite element result, whereas the tension-shear chain model shows the opposite trend. The position of the transition point depends on the fraction of the hard phase, the aspect ratio, and the modulus ratio of the hard phase to the soft phase. The flaw-tolerance size and the strength of hierarchical materials are further discussed on the basis of the shear-lag analysis.
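As a rough illustration of the level-by-level homogenization such models perform, the following Python sketch applies the classical tension-shear chain estimate (Gao et al.) recursively, with each level's composite serving as the next level's hard phase; all parameter values are placeholders, not taken from the paper.

# Illustrative sketch: effective modulus of a bone-like hierarchy, homogenized
# level by level with the classical tension-shear chain estimate
#   1/E_eff = 4*(1 - phi)/(G_soft * phi**2 * rho**2) + 1/(phi * E_hard)
# (Gao et al.); parameter values below are placeholders, not from the paper.

def tsc_effective_modulus(e_hard, g_soft, phi, rho):
    """One-level tension-shear chain homogenization."""
    compliance = 4.0 * (1.0 - phi) / (g_soft * phi**2 * rho**2) + 1.0 / (phi * e_hard)
    return 1.0 / compliance

def hierarchical_modulus(e_mineral, g_protein, phi, rho, levels):
    """Apply the same rule recursively: level n's composite is level n+1's hard phase."""
    e = e_mineral
    for _ in range(levels):
        e = tsc_effective_modulus(e, g_protein, phi, rho)
    return e

for n in range(1, 6):
    print(n, hierarchical_modulus(e_mineral=100e9, g_protein=1e9,
                                  phi=0.45, rho=30, levels=n))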
Abstract:
Background: Malignancies arising in the large bowel cause the second-largest number of cancer deaths in the Western world. Despite the progress made during the last decades, colorectal cancer remains one of the most frequent and deadly neoplasias in Western countries. Methods: A genomic study of human colorectal cancer was carried out on a total of 31 tumoral samples, corresponding to different stages of the disease, and 33 non-tumoral samples. The study was performed by hybridising the tumour samples against a reference pool of non-tumoral samples using Agilent Human 1A 60-mer oligo microarrays, and the results were validated by qRT-PCR. In the subsequent bioinformatics analysis, gene networks were built by means of Bayesian classifiers, variable selection, and bootstrap resampling. The consensus among all the induced models produced a hierarchy of dependences and, thus, of variables. Results: After an exhaustive pre-processing stage to ensure data quality (imputation of missing values, probe quality control, data smoothing, and intraclass-variability filtering), the final dataset comprised a total of 8,104 probes. Next, a supervised classification approach was applied to identify the most relevant genes, two of which are directly involved in cancer progression, and in colorectal cancer in particular. Finally, a supervised classifier was induced to classify new, unseen samples. Conclusions: We have developed a tentative model for the diagnosis of colorectal cancer based on a biomarker panel. Our results indicate that the gene profile described herein can discriminate between non-cancerous and cancerous samples with 94.45% accuracy using different supervised classifiers (AUC values ranging from 0.955 to 0.997).
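The classification and resampling steps described follow a standard supervised-learning pattern; a generic Python sketch of such a pipeline, run on synthetic stand-in data rather than the authors' microarray dataset, might look like this:

# Generic sketch of a supervised expression-classifier pipeline with
# bootstrap-style resampling; synthetic data stands in for the real probes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.metrics import roc_auc_score
from sklearn.utils import resample

X, y = make_classification(n_samples=64, n_features=200, n_informative=10,
                           random_state=0)  # stand-in for tumour vs. non-tumour profiles

clf = LogisticRegression(max_iter=1000)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=cv).mean())

# Bootstrap resampling to gauge the stability of the AUC estimate,
# scoring each bootstrap model on its out-of-bag samples.
aucs = []
rng = np.random.RandomState(0)
for _ in range(100):
    idx = resample(np.arange(len(y)), random_state=rng)
    oob = np.setdiff1d(np.arange(len(y)), idx)
    if len(np.unique(y[idx])) < 2 or len(np.unique(y[oob])) < 2:
        continue
    clf.fit(X[idx], y[idx])
    aucs.append(roc_auc_score(y[oob], clf.predict_proba(X[oob])[:, 1]))
print("bootstrap AUC: %.3f +/- %.3f" % (np.mean(aucs), np.std(aucs)))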
Abstract:
[ES] This work falls within the body of studies focused on analyzing consumer purchasing behavior on the Internet. We focus on adapting the hierarchy-of-effects model, in its standard learning-hierarchy variant, to propose a theoretical conceptual model explaining how consumers' beliefs (i.e., design, interaction speed, social benefits, and privacy) and attitudes toward the Internet as a communication medium can be regarded as plausible determinants of trust in online purchasing. Our model further posits that consumers' opinions about distance shopping should also influence their evaluations of the Internet as a medium of communication and, especially, of purchase.
Abstract:
This study addresses the relationship between the Legislative and the Executive in the production of policies. It identifies the elements of Brazil's legislative production system (structuring rules, actors, resources, decision-making arenas, and types of policies produced) and proposes a model for the Brazilian case of coalition presidentialism, drawing on studies of the relationship between the president and the U.S. Congress as well as on the extensive literature on the national context. The system is structured by the normative framework of highest hierarchy, the Constitution, which is historically determined and privileges governability with accountability, while also orienting policies according to principles of equity combined with budgetary responsibility. The model treats the actors' strategic agendas as the product of varied influences, including the status quo (existing policies) and the demands arising from normative and electoral connections. Based on this model, the study analyzes its elements and relations, applying it to a comprehensive set of legislative proposals (about 21,000 bills on all subjects introduced in Congress between 1999 and 2006, through the three channels).
Abstract:
[EN] This paper examines the syntactic ideas of Pablo Pedro Astarloa (1752-1806) as he set them out in his Discursos filosóficos sobre la lengua primitiva (1805), and seeks to place them in the context of the debate between rationalists and sensualists over whether there is a «natural order» of words. Astarloa developed a system to account for word order in the primitive language of mankind (and hence in the Basque language), founded on three types of «nobleness» and on the principle that the nobler element precedes the less noble one. The first type (nobleza de origen) orders words according to their meaning. The second type (nobleza de ministerio) orders words according to the part of speech they belong to, or the semantic function they have. Finally, the third type (nobleza de mérito or de movilidad) considers the will for communication, so that word order reflects the information structure. Moreover, Astarloa's three types of nobleness are arranged in a hierarchy of superiority: movilidad > ministerio > origen. Astarloa's syntax thus appears close to the sensualists' conceptions of word order, because it did not appeal to a fixed natural order of words; instead, he proposed a variable word order based mainly on the communicative process.
Abstract:
The general equations of biomass and energy transfer for an n-species, closed ecosystem are written. It is demonstrated how, in "ecological time", the parameters describing the dynamics of biomass transfer are related to the parameters of energy transfer, such as respiration, fixation, and energy content. This relationship is determinate for the straight-chain ecosystem, and a simple example is worked out. The results show how the density-dependent terms in population dynamics arise naturally, and how the stable system exhibits a hierarchy in energy per unit biomass. A procedure is proposed for extending the theory to include webbed systems, and the particular difficulties involved in the extension are brought before the scientific community for discussion.
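For concreteness, a straight-chain version of such a system is often written as a chain of bilinear transfer terms; the LaTeX sketch below uses a functional form and notation assumed here, not taken from the paper, to show where the density-dependent terms come from.

% Sketch of a straight-chain, closed n-species transfer system (assumed form):
% B_i is the biomass at trophic level i, \tau_i the uptake coefficient from
% level i-1, and r_i the respiratory loss rate.
\begin{equation*}
\frac{dB_i}{dt} = \tau_i B_{i-1} B_i \;-\; r_i B_i \;-\; \tau_{i+1} B_i B_{i+1},
\qquad i = 1, \dots, n,
\end{equation*}
% with the i = 1 equation carrying a fixation (energy-input) term in place of
% \tau_1 B_0 B_1, and B_{n+1} = 0 closing the chain. The bilinear transfer
% terms are the density-dependent terms that "arise naturally" once the
% biomass and energy budgets are coupled.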
Abstract:
[ES] The documentation of this project has served as the basis for the following projects and articles:
Abstract:
We have developed a hierarchy of target levels designed to address sustainability, efficiency, and recovery scenarios. Targets were derived from: 1) reported catches and effort in the commercial fishery, 2) statistics from fishery-independent surveys, and 3) knowledge of the biology of the blue crab. Recommended targets include population sizes, catches, and effort levels, as well as reference fishing mortality rates; they are intended to be conservative and risk-averse.
Abstract:
Quantum computing offers powerful new techniques for speeding up the solution of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security.
At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level.
In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations.
In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing-biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve error suppression and decrease the physical resources required for error correction.
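As a toy illustration of why asymmetry helps (not the actual codes or circuits of the thesis): a length-n phase-flip repetition code corrects up to floor(n/2) dephasing (Z) errors by majority vote while leaving bit flips (X) unprotected, so its logical failure rate improves as the bias grows. A minimal Python Monte Carlo sketch, with illustrative rates:

# Toy Monte Carlo: length-n phase-flip repetition code under biased noise.
# Z errors (rate p_z) are corrected by majority vote; X errors (rate p_z/eta)
# are unprotected, so any X on the block counts as a logical failure.
# Rates and bias values are illustrative, not taken from the thesis.
import random

def logical_failure_rate(n, p_z, eta, trials=100_000, seed=1):
    p_x = p_z / eta
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        z_errors = sum(rng.random() < p_z for _ in range(n))
        x_errors = sum(rng.random() < p_x for _ in range(n))
        if z_errors > n // 2 or x_errors > 0:
            failures += 1
    return failures / trials

for eta in (1, 10, 100):  # eta = p_z / p_x, the dephasing bias
    print(eta, logical_failure_rate(n=5, p_z=0.01, eta=eta))

At eta = 1 the unprotected qubits make the block worse than a bare qubit; at large eta the majority vote wins, which is the intuition behind tailoring the code to the bias.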
In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates and a second rate, for errors in the distilled states, which decreases as the states are distilled to better quality. The interplay of these different rates sets limits on the achievable distillation and on how quickly states converge to that limit.
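For a back-of-envelope sense of this interplay: the standard 15-to-1 protocol maps an input error rate p to roughly 35p^3 per round when Cliffords are perfect, and faulty Cliffords impose a floor. In the Python sketch below, the additive c * p_clifford term is a stand-in assumption, not the detailed noise model analyzed in the thesis.

# Back-of-envelope iteration for 15-to-1 magic state distillation.
# With perfect Cliffords, p -> 35 p^3 per round (Bravyi-Kitaev scaling);
# the additive c * p_clifford floor is a stand-in for faulty Cliffords.
def distill(p_in, p_clifford, rounds, c=1.0):
    p = p_in
    for r in range(rounds):
        p = 35.0 * p**3 + c * p_clifford
        print(f"round {r + 1}: p = {p:.3e}")
    return p

distill(p_in=0.05, p_clifford=1e-5, rounds=4)

After a few rounds the output error stops improving and saturates near the Clifford floor, which is the convergence limit the abstract refers to.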
Abstract:
The Madden-Julian Oscillation (MJO) is a pattern of intense rainfall and associated planetary-scale circulations in the tropical atmosphere, with a recurrence interval of 30-90 days. Although the MJO was first discovered 40 years ago, it is still a challenge to simulate the MJO in general circulation models (GCMs), and even with simple models it is difficult to agree on the basic mechanisms. This deficiency is mainly due to our poor understanding of moist convection—deep cumulus clouds and thunderstorms, which occur at scales that are smaller than the resolution elements of the GCMs. Moist convection is the most important mechanism for transporting energy from the ocean to the atmosphere. Success in simulating the MJO will improve our understanding of moist convection and thereby improve weather and climate forecasting.
We address this fundamental subject by analyzing observational datasets, constructing a hierarchy of numerical models, and developing theories. Parameters of the models are taken from observation, and the simulated MJO fits the data without further adjustments. The major findings include: 1) the MJO may be an ensemble of convection events linked together by small-scale high-frequency inertia-gravity waves; 2) the eastward propagation of the MJO is determined by the difference between the eastward and westward phase speeds of the waves; 3) the planetary scale of the MJO is the length over which temperature anomalies can be effectively smoothed by gravity waves; 4) the strength of the MJO increases with the typical strength of convection, which increases in a warming climate; 5) the horizontal scale of the MJO increases with the spatial frequency of convection; and 6) triggered convection, where potential energy accumulates until a threshold is reached, is important in simulating the MJO. Our findings challenge previous paradigms, which consider the MJO as a large-scale mode, and point to ways for improving the climate models.
Abstract:
The recently observed anomaly in photoelectron angular distributions (PADs), the disappearance of the main lobes of PADs, which would usually lie along the direction of laser polarization, is reinterpreted as a minimum of generalized Bessel functions in the laser-polarization direction within the theory of nonperturbative quantum electrodynamics. The reinterpretation involves no artificial fitting parameters and explains more features of the experimentally observed PADs, in contrast to the existing interpretation, in which the anomaly is attributed to quantum interference of angular-momentum partial waves. Some hierarchy anomalies are predicted for further experimental observation.
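For reference, the generalized Bessel functions invoked here are the standard two-argument functions of strong-field theory, expressible through ordinary Bessel functions (standard definition, not specific to this paper):

% Standard two-argument generalized Bessel function used in nonperturbative
% (strong-field) QED treatments of above-threshold ionization:
\begin{equation*}
J_n(u, v) \;=\; \sum_{k=-\infty}^{\infty} J_{n-2k}(u)\, J_k(v),
\end{equation*}
% so the reported anomaly corresponds to a zero of the relevant J_n(u, v)
% evaluated along the laser-polarization direction.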
Abstract:
How powerful are Quantum Computers? Despite the prevailing belief that Quantum Computers are more powerful than their classical counterparts, this remains a conjecture backed by little formal evidence. Shor's famous factoring algorithm [Shor97] gives an example of a problem that can be solved efficiently on a quantum computer with no known efficient classical algorithm. Factoring, however, is unlikely to be NP-Hard, meaning that few unexpected formal consequences would arise, should such a classical algorithm be discovered. Could it then be the case that any quantum algorithm can be simulated efficiently classically? Likewise, could it be the case that Quantum Computers can quickly solve problems much harder than factoring? If so, where does this power come from, and what classical computational resources do we need to solve the hardest problems for which there exist efficient quantum algorithms?
We make progress toward understanding these questions through studying the relationship between classical nondeterminism and quantum computing. In particular, is there a problem that can be solved efficiently on a Quantum Computer that cannot be efficiently solved using nondeterminism? In this thesis we address this problem from the perspective of sampling problems. Namely, we give evidence that approximately sampling the Quantum Fourier Transform of an efficiently computable function, while easy quantumly, is hard for any classical machine in the Polynomial Time Hierarchy. In particular, we prove the existence of a class of distributions that can be sampled efficiently by a Quantum Computer, that likely cannot be approximately sampled in randomized polynomial time with an oracle for the Polynomial Time Hierarchy.
Our work complements and generalizes the evidence given in Aaronson and Arkhipov's work [AA2013], where a different distribution with the same computational properties was given. Our result is more general than theirs but requires a more powerful quantum sampler.
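To make the sampling task concrete, here is a brute-force classical demonstration over Z_2^n for small n (the group, the function f, and the normalization are chosen for illustration and need not match the thesis's construction): a quantum computer prepares a superposition weighted by f, applies the transform, and measures, obtaining s with probability fhat(s)^2, whereas the classical sketch below needs time exponential in n.

# Brute-force "Fourier sampling" over Z_2^n for a {+1,-1}-valued function f.
# The Fourier transform over Z_2^n is the Walsh-Hadamard transform; with
# fhat(s) = 2^-n * sum_x f(x) * (-1)^(x.s), Parseval gives sum_s fhat(s)^2 = 1.
import numpy as np

n = 10

def f(x):
    # illustrative efficiently computable {+1,-1} function (nonlinear on purpose)
    bit = ((x >> 0) & 1) & ((x >> 1) & 1)            # AND of two low bits
    parity = bin(x & 0b1011001101).count("1") % 2    # parity over a mask
    return 1 - 2 * (bit ^ parity)

values = np.array([f(x) for x in range(2**n)], dtype=float)

def walsh_hadamard(v):
    """Fast Walsh-Hadamard transform, O(N log N), out of place."""
    v = v.copy()
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            a = v[i:i + h].copy()
            b = v[i + h:i + 2 * h].copy()
            v[i:i + h] = a + b
            v[i + h:i + 2 * h] = a - b
        h *= 2
    return v

fhat = walsh_hadamard(values) / 2**n
probs = fhat**2                      # a valid distribution by Parseval
assert abs(probs.sum() - 1.0) < 1e-9

rng = np.random.default_rng(0)
print(rng.choice(2**n, size=5, p=probs))   # classical samples, 2^n work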
Abstract:
The Hamilton-Jacobi-Bellman (HJB) equation is central to stochastic optimal control (SOC) theory, yielding the optimal solution to general problems specified by known dynamics and a given cost functional. Under the assumption of quadratic cost on the control input, it is well known that the HJB reduces to a particular partial differential equation (PDE). While powerful, this reduction is not commonly used, as the PDE is of second order, is nonlinear, and examples exist where the problem may not have a solution in a classical sense. Furthermore, each state of the system appears as another dimension of the PDE, giving rise to the curse of dimensionality: since the number of degrees of freedom required to solve the optimal control problem grows exponentially with dimension, the problem becomes intractable for systems of all but modest dimension.
In the last decade researchers have found that under certain, fairly non-restrictive structural assumptions, the HJB may be transformed into a linear PDE, with an interesting analogue in the discretized domain of Markov Decision Processes (MDP). The work presented in this thesis uses the linearity of this particular form of the HJB PDE to push the computational boundaries of stochastic optimal control.
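The transformation in question is, in its standard form (due to Kappen and Todorov), a logarithmic change of variables; the following LaTeX sketch uses notation assumed here rather than the thesis's:

% Sketch of the standard linearization (Kappen/Todorov); notation assumed.
% Dynamics dx = (f(x) + G(x)u) dt + B(x) d\omega, cost rate q(x) + (1/2) u^T R u.
% Under the structural assumption  \lambda G R^{-1} G^\top = B B^\top,
% the substitution V(x) = -\lambda \log \Psi(x) turns the stationary HJB
%   0 = \min_u [ q + (1/2) u^T R u + (f + Gu)^T \nabla V
%                + (1/2) tr(B B^T \nabla^2 V) ]
% into a PDE that is linear in the "desirability" \Psi:
\begin{equation*}
0 \;=\; -\frac{q(x)}{\lambda}\,\Psi
\;+\; f(x)^{\top} \nabla \Psi
\;+\; \tfrac{1}{2}\operatorname{tr}\!\big(B(x) B(x)^{\top} \nabla^{2} \Psi\big),
\qquad u^{*}(x) = -R^{-1} G^{\top} \nabla V .
\end{equation*}

The quadratic terms in \nabla V cancel exactly under the stated assumption, which is why the structural condition on noise and control cost is needed.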
This is done by weaving together previously disjoint lines of research in computation. The first of these is the use of Sum of Squares (SOS) techniques for the synthesis of control policies. A candidate polynomial with variable coefficients is proposed as the solution to the stochastic optimal control problem. An SOS relaxation is then applied to the partial differential constraints, leading to a hierarchy of semidefinite relaxations with an improving sub-optimality gap. The resulting approximate solutions are shown to be guaranteed over- and under-approximations of the optimal value function. These results extend to arbitrary parabolic and elliptic PDEs, yielding a novel method for Uncertainty Quantification (UQ) of systems governed by partial differential constraints. Domain decomposition techniques are also made available, allowing such problems to be solved via parallelization and low-order polynomials.
The optimization-based SOS technique is then contrasted with the Separated Representation (SR) approach from the applied mathematics community. This technique allows systems of equations to be solved through a low-rank decomposition, resulting in algorithms that scale linearly with dimensionality. Its application to stochastic optimal control allows previously uncomputable problems to be solved quickly, scaling to systems as complex as quadcopter and VTOL aircraft models. The technique may be combined with the SOS approach, yielding not only a numerical technique but also an analytical one that allows entirely new classes of systems to be studied and stability properties to be guaranteed.
The analysis of the linear HJB is completed by the study of its implications in application. It is shown that the HJB and a popular technique in robotics, the use of navigation functions, sit on opposite ends of a spectrum of optimization problems, upon which tradeoffs may be made in problem complexity. Analytical solutions to the HJB in these settings are available in simplified domains, yielding guidance towards optimality for approximation schemes. Finally, the use of HJB equations in temporal multi-task planning problems is investigated. It is demonstrated that such problems are reducible to a sequence of SOC problems linked via boundary conditions. The linearity of the PDE allows us to pre-compute control policy primitives and then compose them, at essentially zero cost, to satisfy a complex temporal logic specification.
Abstract:
Close to equilibrium, a normal Bose or Fermi fluid can be described by an exact kinetic equation whose kernel is nonlocal in space and time. The general expression derived for the kernel is evaluated to second order in the interparticle potential. The result is a wavevector- and frequency-dependent generalization of the linear Uehling-Uhlenbeck kernel with the Born approximation cross section.
The theory is formulated in terms of second-quantized phase space operators whose equilibrium averages are the n-particle Wigner distribution functions. Convenient expressions for the commutators and anticommutators of the phase space operators are obtained. The two-particle equilibrium distribution function is analyzed in terms of momentum-dependent quantum generalizations of the classical pair distribution function h(k) and direct correlation function c(k). The kinetic equation is presented as the equation of motion of a two-particle correlation function, the phase space density-density anticommutator, and is derived by a formal closure of the quantum BBGKY hierarchy. An alternative derivation using a projection operator is also given. It is shown that the method used for approximating the kernel by a second-order expansion preserves all the sum rules to the same order, and that the second-order kernel satisfies the appropriate positivity and symmetry conditions.
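For orientation, the local, low-order limit referred to above is the Uehling-Uhlenbeck collision integral, which corrects the classical Boltzmann kernel with quantum-statistics factors (standard form; upper signs for bosons, lower for fermions):

% Standard Uehling-Uhlenbeck collision integral (local, Markovian limit);
% w is the Born-approximation transition rate for the process 12 -> 1'2'.
\begin{equation*}
\left(\frac{\partial f_1}{\partial t}\right)_{\!\mathrm{coll}}
= \int d^{3}p_2\, d^{3}p_1'\, d^{3}p_2'\; w(12 \to 1'2')\,
\Big[ f_{1'} f_{2'} (1 \pm f_1)(1 \pm f_2)
\;-\; f_1 f_2 (1 \pm f_{1'})(1 \pm f_{2'}) \Big].
\end{equation*}
% The kernel derived in the paper is a wavevector- and frequency-dependent
% generalization of this local form.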