971 results for SQL query equivalence
Abstract:
The use of ICT has become widespread in the tourism sector, becoming a fundamental tool and an ally for attracting tourists to the different destinations promoted through mobile applications or websites. Tourism entities and companies increasingly turn to information technologies, particularly the Internet, as a means of promoting their tourism products and services. These new technologies have changed the way people check prices and quickly obtain information on the different tourism services. Valledupar should take advantage of the worldwide trend toward recovering authentic values, the environment and indigenous communities through different forms of tourism: ecotourism, ethnotourism, agrotourism, cultural, religious, shopping, adventure, health, sports and capital-city tourism. Knowledge of the municipal territory and of local values should also be broadened. Using free and open-source software, solutions can be created to strengthen the promotion of the tourism sector of the city of Valledupar.
Abstract:
This thesis studies the properties and usability of operators called t-norms, t-conorms and uninorms, as well as many-valued implications and equivalences. Weights and a generalized mean are embedded into these operators for aggregation, and because they are used for comparison tasks they are referred to as comparison measures. The thesis illustrates how these operators can be weighted with differential evolution and aggregated with a generalized mean, and what kinds of comparison measures can be obtained from this procedure. New operators suitable for comparison measures are suggested: combination measures based on the use of t-norms and t-conorms, the generalized 3Π-uninorm, and pseudo-equivalence measures based on S-type implications. The empirical part of the thesis demonstrates how these new comparison measures work in the field of classification, for example in the classification of medical data. The second application area is from the field of sports medicine and is an expert system for defining an athlete's aerobic and anaerobic thresholds. The core of the thesis offers definitions for comparison measures and shows that there is no actual difference between the results achieved in comparison tasks with comparison measures based on distance and those based on many-valued logical structures. The approach taken in this thesis is highly practical, and all use of the measures has been validated mainly by practical testing. In general, many different types of operators suitable for comparison tasks have been presented in the fuzzy logic literature, but there has been little or no experimental work with them.
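As a rough illustration of the kind of comparison measure described above, the sketch below scores a sample against class prototypes by aggregating pointwise many-valued equivalences with a weighted generalized mean. The equivalence operator, the prototypes and the weight values are illustrative assumptions, not the thesis's exact operators; in the thesis the weights are tuned with differential evolution.

```python
import numpy as np

def pseudo_equivalence(x, y):
    """Pointwise similarity of two values scaled to [0, 1].

    A simple Lukasiewicz-style equivalence, 1 - |x - y|; the thesis uses
    several many-valued equivalences, this is just one common choice.
    """
    return 1.0 - np.abs(x - y)

def weighted_generalized_mean(values, weights, m):
    """Weighted generalized (power) mean with exponent m:
    M_m(v; w) = (sum_i w_i * v_i**m) ** (1/m), with weights summing to 1.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    values = np.asarray(values, dtype=float)
    return (weights * values**m).sum() ** (1.0 / m)

def comparison_measure(sample, prototype, weights, m=1.0):
    """Aggregate pointwise equivalences into a single similarity score."""
    return weighted_generalized_mean(
        pseudo_equivalence(sample, prototype), weights, m
    )

# Classify a sample by the prototype it is most similar to.
sample = np.array([0.2, 0.8, 0.5])
prototypes = {"class_a": np.array([0.1, 0.9, 0.4]),
              "class_b": np.array([0.7, 0.2, 0.6])}
weights = [0.5, 0.3, 0.2]          # in the thesis, tuned by differential evolution
scores = {c: comparison_measure(sample, p, weights, m=2.0)
          for c, p in prototypes.items()}
print(max(scores, key=scores.get))  # -> class_a
```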
Abstract:
This project consists of designing and implementing an information system hosted in an Oracle database, in response to the Big Data project, whose objective is to cross-reference the health data and the physical-activity data of European citizens.
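The "crossing" of the two data sets essentially amounts to joining health records and activity records on a common citizen key. A minimal sketch in Python, with SQLite standing in for Oracle; the schema, table and column names are illustrative assumptions, since the abstract does not describe them.

```python
import sqlite3

# Stand-in for the Oracle schema described in the project; table and
# column names here are purely illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE health   (citizen_id INTEGER, heart_rate INTEGER, bmi REAL);
CREATE TABLE activity (citizen_id INTEGER, steps INTEGER, active_minutes INTEGER);
INSERT INTO health   VALUES (1, 72, 24.5), (2, 80, 27.1);
INSERT INTO activity VALUES (1, 9500, 60), (2, 3200, 15);
""")

# "Crossing" the two data sets amounts to joining them on the citizen key.
rows = conn.execute("""
    SELECT h.citizen_id, h.heart_rate, h.bmi, a.steps, a.active_minutes
    FROM health h
    JOIN activity a ON a.citizen_id = h.citizen_id
""").fetchall()
for row in rows:
    print(row)
```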
Abstract:
The purpose of this project is the analysis, design and development of a web application for a scientific journal. The journal works by having the users registered in the application review and edit the articles themselves. The users of the site can have different profiles: author, editor, reviewer, technical editor and administrator. Each user has specific tasks in the review and editing workflow. The project was developed on the ASP.NET platform, with SQL for the design of the database.
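One way to picture the role-based workflow described above is as a small state machine recording which profile may move an article between editorial states. A sketch in Python; the states and the allowed transitions are assumptions made for illustration, since the abstract only names the five profiles, and the actual application is built on ASP.NET.

```python
from enum import Enum

class Role(Enum):
    AUTHOR = "author"
    EDITOR = "editor"
    REVIEWER = "reviewer"
    TECHNICAL_EDITOR = "technical editor"
    ADMINISTRATOR = "administrator"

class State(Enum):
    SUBMITTED = "submitted"
    UNDER_REVIEW = "under review"
    IN_EDITING = "in editing"
    PUBLISHED = "published"

# Which role may move an article from one state to the next
# (hypothetical workflow; the abstract only names the roles).
TRANSITIONS = {
    (State.SUBMITTED, State.UNDER_REVIEW): {Role.EDITOR, Role.ADMINISTRATOR},
    (State.UNDER_REVIEW, State.IN_EDITING): {Role.REVIEWER, Role.EDITOR},
    (State.IN_EDITING, State.PUBLISHED): {Role.TECHNICAL_EDITOR, Role.ADMINISTRATOR},
}

def can_transition(role: Role, src: State, dst: State) -> bool:
    return role in TRANSITIONS.get((src, dst), set())

print(can_transition(Role.REVIEWER, State.UNDER_REVIEW, State.IN_EDITING))  # True
print(can_transition(Role.AUTHOR, State.IN_EDITING, State.PUBLISHED))       # False
```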
Abstract:
Design and implementation of a web application for consulting urban information in the municipalities of Mallorca, including the location of points of interest, with a pilot implementation in the municipality of Santa Eugènia, Mallorca.
Abstract:
Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves predicting an ordering of the data points rather than a single numerical value, as in regression, or a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates a better theoretical understanding of the problem, but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering the documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from the vast amount of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. In order to improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data, used to take advantage of various non-vectorial data representations, and preference learning algorithms suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in bioinformatics. Training kernel-based ranking algorithms can be infeasible when the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be trained efficiently with large amounts of data. For situations where a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the efficient training of the algorithms but also fast regularization parameter selection, multiple output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
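A minimal linear sketch of the pairwise idea behind such regularized least-squares rankers (this is not the thesis's kernel-based algorithm; the toy data, regularization value and the target of +1 per preference pair are assumptions made for illustration):

```python
import numpy as np

def fit_pairwise_rls(X, pairs, reg=1.0):
    """Fit a linear scoring function w so that score(x_i) > score(x_j)
    for each preference pair (i, j), by regularized least squares on the
    difference vectors (target +1 per pair).
    """
    D = np.array([X[i] - X[j] for i, j in pairs])     # one row per preference
    y = np.ones(len(pairs))
    # Closed-form ridge solution: (D^T D + reg*I) w = D^T y
    n_features = X.shape[1]
    return np.linalg.solve(D.T @ D + reg * np.eye(n_features), D.T @ y)

def rank(X, w):
    """Order item indices by decreasing learned score."""
    return np.argsort(-(X @ w))

# Toy data: item 0 preferred to item 1, item 1 preferred to item 2.
X = np.array([[1.0, 0.2], [0.6, 0.4], [0.1, 0.9]])
pairs = [(0, 1), (1, 2)]
w = fit_pairwise_rls(X, pairs, reg=0.1)
print(rank(X, w))   # expected ordering: [0 1 2]
```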
Abstract:
We extend Deligne's weight filtration to the integer cohomology of complex analytic spaces (endowed with an equivalence class of compactifications). In general, the weight filtration that we obtain is not part of a mixed Hodge structure. Our purely geometric proof is based on cubical descent for resolution of singularities and Poincaré-Verdier duality. Using similar techniques, we introduce the singularity filtration on the cohomology of compactifiable analytic spaces. This is a new and natural analytic invariant which does not depend on the equivalence class of compactifications and is related to the weight filtration.
Abstract:
This document describes in detail the development project of a management web application called 'FacturaBien'. The application makes it possible to carry out all the tasks associated with the administrative management of a business efficiently and conveniently, automating a large number of tasks.
Abstract:
In this paper we propose an approach to homotopical algebra in which the basic ingredient is a category with two classes of distinguished morphisms: strong and weak equivalences. These data determine the cofibrant objects by an extension property analogous to the classical lifting property of projective modules. We define a Cartan-Eilenberg category as a category with strong and weak equivalences such that there is an equivalence of categories between its localisation with respect to weak equivalences and the relative localisation of the subcategory of cofibrant objects with respect to strong equivalences. This equivalence of categories allows us to extend the classical theory of derived additive functors to this non-additive setting. The main examples include Quillen model categories and categories of functors defined on a category endowed with a cotriple (comonad) and taking values in a category of complexes of an abelian category. In the latter case there are examples in which the class of strong equivalences is not determined by a homotopy relation. Among other applications of our theory, we establish a very general acyclic models theorem.
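In symbols (with ad hoc notation, since the abstract states the definition only in words): writing S and W for the strong and weak equivalences of a category C, and C_cof for its full subcategory of cofibrant objects, the Cartan-Eilenberg condition asks that the induced comparison functor between localisations

```latex
\[
  \mathcal{C}_{\mathrm{cof}}[\mathcal{S}^{-1}]
  \;\xrightarrow{\;\simeq\;}\;
  \mathcal{C}[\mathcal{W}^{-1}]
\]
```

be an equivalence of categories, where the left-hand side is the relative localisation of the cofibrant objects by strong equivalences and the right-hand side is the localisation of the whole category by weak equivalences.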
Abstract:
Interlaboratory programs are conducted for a number of purposes: to identify problems related to the calibration of instruments, to assess the degree of equivalence of analytical results among several laboratories, to assign quantity values and their uncertainties in the development of a certified reference material, and to verify the performance of laboratories, as in proficiency testing, a key quality assurance technique that is sometimes used in conjunction with accreditation. Several statistical tools are employed to assess the analytical results of laboratories participating in an intercomparison program, among them the z-score technique, the confidence ellipse, and the Grubbs and Cochran tests. This work presents the experience of coordinating an intercomparison exercise to determine Ca, Al, Fe, Ti and Mn as impurities in samples of chemical-grade silicon metal prepared as a candidate reference material.
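The z-score technique mentioned above compares each laboratory's result with an assigned value, scaled by the standard deviation for proficiency assessment. A small sketch in Python; the results, assigned value and sigma are made-up numbers, not data from the paper.

```python
import numpy as np

def z_scores(lab_results, assigned_value, sigma_pt):
    """Proficiency-testing z-scores: z = (x - x_assigned) / sigma_pt.

    Conventionally |z| <= 2 is 'satisfactory', 2 < |z| < 3 'questionable',
    and |z| >= 3 'unsatisfactory'.
    """
    return (np.asarray(lab_results, dtype=float) - assigned_value) / sigma_pt

# Illustrative Fe-impurity results (mg/kg) from five laboratories.
results = [101.2, 98.7, 105.9, 99.5, 112.4]
z = z_scores(results, assigned_value=100.0, sigma_pt=2.5)
for lab, score in enumerate(z, start=1):
    verdict = ("satisfactory" if abs(score) <= 2
               else "questionable" if abs(score) < 3
               else "unsatisfactory")
    print(f"Lab {lab}: z = {score:+.2f} ({verdict})")
```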
Abstract:
The work of Newton exerted a profound influence on the development of science. In chemistry this Newtonian influence was present in Query 31 of Newton's Opticks. However, the incursion of Newton's thought into chemistry raised an epistemological question for chemists: that of the nature of their discipline. Would chemistry be a discipline in its own right, or simply a branch of physics? In this work we present the Newtonian program for chemistry, as well as the reaction of traditional chemists to it. We conclude by proposing that Lavoisier achieved a synthesis between Newtonian methodology and the singular character of traditional chemistry.
Abstract:
The set of initial conditions for which the pseudoclassical evolution algorithm (and minimality conservation) is verified for Hamiltonians of degree N (N > 2) is explicitly determined through a class of restrictions on the corresponding classical trajectories, and it is proved to be at most denumerable. Thus these algorithms are verified if and only if the system is quadratic, except on a set of measure zero. The possibility of time-dependent a-equivalence classes is studied and its physical interpretation is presented. The implied equivalence of the pseudoclassical and Ehrenfest algorithms and their relationship with minimality conservation are discussed in detail. In addition, the explicit derivation of the general unitary operator which linearly transforms minimum-uncertainty states leads to the derivation, among others, of operators with a general geometrical interpretation in phase space, such as rotations (parity, Fourier).
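For context, the Ehrenfest relations referred to above are the standard textbook ones (they are not reproduced from the paper itself): expectation values evolve as

```latex
\[
  \frac{d}{dt}\langle \hat{x}\rangle = \frac{\langle \hat{p}\rangle}{m},
  \qquad
  \frac{d}{dt}\langle \hat{p}\rangle = -\,\bigl\langle V'(\hat{x})\bigr\rangle ,
\]
```

and only when the Hamiltonian is at most quadratic does the mean force equal the force at the mean position, so that the pair of expectation values follows an exact classical trajectory; this is the sense in which the statement "quadratic except for a set of measure zero" should be read.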
Abstract:
The main subject of this article is to show the parallelism between the Ellingham and Van't Hoff diagrams. The first is a graphic representation of the change in the standard Gibbs free energy (ΔrG°) as a function of T and was introduced by Ellingham in 1944 in order to study metallurgical processes involving oxides and sulphides. The Van't Hoff diagram, on the other hand, is a representation of ln K versus 1/T. The equivalence between the two diagrams is easily demonstrated with simple mathematical manipulations. In order to show the parallelism between them, both diagrams are presented briefly and two examples are discussed. The comparison of the two diagrams will surely be helpful to students and teachers in their learning and teaching activities, and will certainly enrich important aspects of chemical thermodynamics.
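The "simple mathematical manipulations" referred to amount to the textbook relations (written here for context, not reproduced from the article itself):

```latex
\[
  \Delta_r G^{\circ} = -RT\ln K,
  \qquad
  \Delta_r G^{\circ} = \Delta_r H^{\circ} - T\,\Delta_r S^{\circ}
  \;\Longrightarrow\;
  \ln K = -\frac{\Delta_r H^{\circ}}{R}\,\frac{1}{T} + \frac{\Delta_r S^{\circ}}{R}.
\]
```

So, assuming ΔrH° and ΔrS° are roughly constant over the temperature range, an Ellingham plot of ΔrG° against T is a straight line of slope −ΔrS° and intercept ΔrH°, while the Van't Hoff plot of ln K against 1/T is a straight line of slope −ΔrH°/R and intercept ΔrS°/R; the two diagrams carry the same thermodynamic information.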
Abstract:
Fleurbaey and Maniquet have proposed the criteria of conditional equality and of egalitarian equivalence to assess equity among individuals in an ordinal setting. Empirical applications are rare and only partially consistent with their framework. We propose a new empirical approach that relies on individual preferences, is consistent with the ordinal criteria and makes it possible to compare them with the cardinal criteria. We estimate a utility function that incorporates heterogeneous individual preferences, obtain ordinal measures of well-being and apply conditional equality and egalitarian equivalence. We then propose two cardinal measures of well-being, comparable with the ordinal model, to compute Roemer's and Van de gaer's criteria. Finally, we compare the characteristics of the worst-off identified by each criterion. We apply this model to a sample of US micro data and find that about 18% of the worst-off are not common to all criteria.
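The two cardinal criteria mentioned above aggregate well-being across circumstance types and effort levels in opposite orders; a toy sketch of that aggregation logic follows (the numbers are invented, and the paper's actual well-being measures are estimated from individual preferences, which this does not attempt to reproduce):

```python
import numpy as np

# Toy matrix of well-being: rows are circumstance types, columns are effort
# quantiles within each type (made-up numbers, not the paper's estimates).
wellbeing = np.array([
    [10.0, 14.0, 18.0, 22.0],   # type A
    [ 6.0,  9.0, 13.0, 17.0],   # type B
    [ 8.0, 11.0, 12.0, 15.0],   # type C
])

# Van de gaer: evaluate the vector of type means with a maximin,
# i.e. the value of a policy is the worst type-specific mean.
van_de_gaer = wellbeing.mean(axis=1).min()

# Roemer: at each effort quantile take the worst-off type,
# then average those minima over the quantiles.
roemer = wellbeing.min(axis=0).mean()

print(f"Van de gaer value: {van_de_gaer:.2f}")   # min of row means
print(f"Roemer value:      {roemer:.2f}")        # mean of column minima
```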
Abstract:
The art of rhetoric may be defined as changing other people's minds (opinions, beliefs) without providing them new information. One technique heavily used in rhetoric employs analogies. Using analogies, one may draw the listener's attention to similarities between cases and re-organize existing information in a way that highlights certain regularities. In this paper we offer two models of analogies, discuss their theoretical equivalence, and show that finding good analogies is a computationally hard problem.
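As a very rough illustration of why finding analogies is computationally demanding, one can view an analogy as a relation-preserving mapping between the objects of two cases and search for it by brute force. This is a generic structure-mapping sketch with a made-up example; it is not either of the paper's two models, which the abstract does not specify.

```python
from itertools import permutations

def analogy_mappings(relations_a, objects_a, relations_b, objects_b):
    """Enumerate object mappings from case A to case B that preserve all
    relations of case A.  Brute force over injective mappings: the search
    space grows factorially with the number of objects, which gives one
    intuition for the hardness of finding good analogies.
    """
    found = []
    for image in permutations(objects_b, len(objects_a)):
        mapping = dict(zip(objects_a, image))
        if all((rel, mapping[x], mapping[y]) in relations_b
               for rel, x, y in relations_a):
            found.append(mapping)
    return found

# Case A: the solar system.  Case B: the atom.
relations_a = {("attracts", "sun", "planet"), ("orbits", "planet", "sun")}
relations_b = {("attracts", "nucleus", "electron"), ("orbits", "electron", "nucleus")}
print(analogy_mappings(relations_a, ["sun", "planet"],
                       relations_b, ["nucleus", "electron"]))
# [{'sun': 'nucleus', 'planet': 'electron'}]
```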