984 results for Validated Computations


Relevance:

20.00%

Publisher:

Abstract:

This paper was presented at the 12th International Conference on Applications of Computer Algebra, Varna, Bulgaria, June 2006.

Relevance:

20.00%

Publisher:

Abstract:

The paper describes an extension of the cognitive architecture DUAL with a model of visual attention and perception. The goal of this extension is to account for the construction and categorization of object and scene representations derived from visual stimuli in the TextWorld microdomain. Low-level parallel computations are combined with active serial deployment of visual attention, enabling the construction of abstract symbolic representations. A limited-capacity short-term visual store that holds information across attention shifts forms the core of the model, interfacing between the low-level representation of the stimulus and DUAL’s semantic memory. The model is validated by comparing simulation results with real data from an eye-movement experiment with human subjects.
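
As a loose illustration of the limited-capacity store described above, the sketch below models it as a fixed-size buffer that survives attention shifts by evicting its oldest entry; the capacity of four items and all identifiers are illustrative assumptions, not parameters from the paper.

```python
# A hypothetical sketch of a limited-capacity short-term visual store:
# a fixed-size buffer that drops its oldest item when full.
from collections import deque

class VisualStore:
    def __init__(self, capacity=4):          # capacity is an assumed value
        self.items = deque(maxlen=capacity)  # oldest item evicted when full

    def attend(self, symbol):
        # Each attention shift deposits a new symbolic representation.
        self.items.append(symbol)

    def contents(self):
        return list(self.items)

store = VisualStore()
for obj in ["circle", "square", "triangle", "star", "cross"]:
    store.attend(obj)
print(store.contents())   # ['square', 'triangle', 'star', 'cross']
```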

Relevance:

20.00%

Publisher:

Abstract:

Floods represent the most devastating natural hazard in the world, affecting more people and causing more property damage than any other natural phenomenon. An important problem in flood monitoring is extracting the flood extent from satellite imagery, since it is impractical to survey the flooded area through field observations. This paper presents a method for flood extent extraction from synthetic-aperture radar (SAR) images based on intelligent computations. In particular, we apply artificial neural networks, namely self-organizing Kohonen maps (SOMs), to SAR image segmentation and classification. We tested our approach on data from three different satellite sensors: ERS-2/SAR (during the 2001 flooding of the Tisza River, Ukraine and Hungary), ENVISAT/ASAR WSM (Wide Swath Mode) and RADARSAT-1 (during the 2007 flooding of the Huaihe River, China). The results demonstrate the efficiency of our approach.
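
As a hedged sketch of the segmentation idea, the following NumPy implementation trains a small Kohonen SOM and labels each pixel vector with its best-matching unit; the grid size, learning schedule and feature representation are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal SOM training and segmentation sketch; data is an (n, d) array
# of per-pixel feature vectors (e.g. SAR backscatter values).
import numpy as np

def train_som(data, grid=(5, 5), iters=1000, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    weights = rng.random((grid[0] * grid[1], data.shape[1]))
    # Coordinates of each unit on the 2-D grid, for neighbourhood decay.
    coords = np.array([(i, j) for i in range(grid[0])
                       for j in range(grid[1])], float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # Best-matching unit: nearest weight vector to the sample.
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-3
        # Gaussian neighbourhood around the BMU shrinks over time.
        d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))
        weights += lr * h[:, None] * (x - weights)
    return weights

def segment(data, weights):
    # Label each pixel with the index of its best-matching unit.
    return np.argmin(np.linalg.norm(data[:, None, :] - weights[None],
                                    axis=2), axis=1)
```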

Relevance:

20.00%

Publisher:

Abstract:

We have previously described ProxiMAX, a technology that enables the fabrication of precise, combinatorial gene libraries via codon-by-codon saturation mutagenesis. ProxiMAX was originally performed using manual, enzymatic transfer of codons via blunt-end ligation. Here we present Colibra™: an automated, proprietary version of ProxiMAX, used specifically for antibody library generation, in which double-codon hexamers are transferred during the saturation cycling process. We describe the reduction in process complexity, the resulting library quality and an unprecedented saturation of up to 24 contiguous codons. The utility of the method is demonstrated via fabrication of complementarity-determining regions (CDRs) in antibody fragment libraries and next-generation sequencing (NGS) analysis of their quality and diversity.
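
To make the scale of "24 contiguous codons" concrete, a back-of-the-envelope calculation of theoretical library diversity follows; the 20-letter amino-acid alphabet is standard, but the calculation is ours, not a figure from the paper.

```python
# Theoretical protein-level diversity of full saturation: each saturated
# codon can encode any of the 20 amino acids, so diversity grows as 20^k.
AMINO_ACIDS = 20
for codons in (6, 12, 24):
    print(f"{codons} saturated codons -> {AMINO_ACIDS ** codons:.3e} variants")
```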

Relevance:

20.00%

Publisher:

Abstract:

A novel association rule mining algorithm is constructed using the unit-cube chain decomposition structures introduced in [HAN, 1966; TON, 1976]. [HAN, 1966] established the chain-split theory, and [TON, 1976] devised a chain-computation framework that brings chain splitting into the practical domain. We integrate these techniques into the rule-mining procedure. Effectiveness here refers to the low complexity of the rules mined. The complexity of the resulting procedure is complementary to that of the well-known Apriori algorithm, the de facto standard in the rule-mining area.
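
For orientation, here is a minimal sketch of the Apriori baseline the abstract names, not of the chain-decomposition method itself: level-wise candidate generation with support counting.

```python
# Minimal Apriori-style frequent-itemset mining (illustrative baseline).
def apriori(transactions, min_support):
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}
    level = [frozenset([i]) for i in items]
    frequent = []
    while level:
        # Keep candidates whose support meets the threshold.
        counts = {c: sum(c <= t for t in transactions) for c in level}
        survivors = [c for c, n in counts.items() if n >= min_support]
        frequent += survivors
        # Join step: merge k-itemsets into (k+1)-candidates.
        k = len(survivors[0]) + 1 if survivors else 0
        level = list({a | b for a in survivors for b in survivors
                      if len(a | b) == k})
    return frequent

# Example: frequent itemsets in four small baskets, support threshold 2.
print(apriori([{"a", "b"}, {"a", "c"}, {"a", "b", "c"}, {"b"}], 2))
```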

Relevance:

20.00%

Publisher:

Abstract:

We discuss some main points of computer-assisted proofs based on reliable numerical computations. Such self-validating numerical methods, combined with exact symbolic manipulation, result in very powerful mathematical software tools. These tools allow mathematical statements (the existence of a fixed point, of a solution of an ODE, of a zero of a continuous function, of a global minimum within a given range, etc.) to be proved using a digital computer. To validate the assertions of the underlying theorems, fast finite-precision arithmetic is used; the results are nevertheless absolutely rigorous. To demonstrate the power of reliable symbolic-numeric computations, we investigate in some detail the verification of very long periodic orbits of chaotic dynamical systems. The verification is done directly in Maple, e.g. using the Maple Power Tool intpakX or, more efficiently, the C++ class library C-XSC.
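
The core self-validation idea can be sketched in a few lines: interval arithmetic whose bounds are rounded outward (via math.nextafter, Python 3.9+), so the computed enclosure rigorously contains the true result. The snippet proves that x^2 - 2 has a zero in [1, 2] by a verified sign change; production tools such as intpakX and C-XSC are of course far more complete.

```python
# A toy interval type with outward rounding; not a substitute for
# intpakX or C-XSC, only an illustration of the principle.
import math

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Round bounds outward so the true result is always enclosed.
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(math.nextafter(min(products), -math.inf),
                        math.nextafter(max(products), math.inf))

def f(x):
    # f(x) = x*x - 2, with the constant as a degenerate interval.
    return x * x + Interval(-2.0, -2.0)

# If f([a,a]) < 0 and f([b,b]) > 0 rigorously, continuity of f
# guarantees a zero in [a, b].
left, right = f(Interval(1.0, 1.0)), f(Interval(2.0, 2.0))
assert left.hi < 0.0 and right.lo > 0.0
print("verified: f has a zero in [1, 2]")
```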

Relevance:

20.00%

Publisher:

Abstract:

Никола Вълчанов, Тодорка Терзиева, Владимир Шкуртов, Антон Илиев - One of the main application areas of computer informatics is the automation of mathematical computations. Information systems cover various areas such as accounting, e-learning/e-testing, simulation environments, etc. They work with computational libraries specific to the system's scope. Even if such systems are perfect and work flawlessly, they become outdated if not maintained. In this work we describe a mechanism that uses computational libraries dynamically and decides at run time (intelligently or interactively) how and when they should be used. The purpose of this paper is to present an architecture for computation-driven systems. It focuses on the benefits of using the right design patterns in order to provide extensibility and reduce complexity.
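
A minimal sketch of the run-time binding idea, using Python's importlib as a stand-in; the preference policy shown (try one module, fall back to another) is a deliberately simplified placeholder for the intelligent or interactive decision the paper describes.

```python
# Bind a computational library at run time instead of at design time.
import importlib

def load_backend(preferred, fallback):
    # Decide at run time which computational library to use.
    try:
        return importlib.import_module(preferred)
    except ImportError:
        return importlib.import_module(fallback)

backend = load_backend("numpy", "math")  # prefer NumPy, fall back to stdlib
print(backend.sqrt(2.0))                 # both modules expose sqrt
```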

Relevance:

20.00%

Publisher:

Abstract:

Михаил М. Константинов, Петко Х. Петков - We consider the possible catastrophic effects of the improper use of finite machine floating-point arithmetic. Unfortunately, this topic is not always well enough understood by students of applied and computational mathematics, and the situation in engineering and economics programmes is by no means better. To overcome this educational gap, we examine here the main culprits behind the loss of accuracy in numerical computer calculations. We hope that the presented results will help students and lecturers to better understand, and hence avoid, the main factors that can destroy the accuracy of numerical computations. The latter is no small matter: numerical catastrophes sometimes become real ones, with major damage and human casualties.
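
One classic culprit is catastrophic cancellation, illustrated below on the quadratic formula; this is a textbook example, not one drawn from the paper.

```python
# Subtracting nearly equal numbers wipes out the significant digits.
import math

# Solve x^2 - 1e8*x + 1 = 0; the small root is approximately 1e-8.
a, b, c = 1.0, -1e8, 1.0
d = math.sqrt(b * b - 4 * a * c)

naive = (-b - d) / (2 * a)     # -b and d nearly cancel: digits are lost
stable = (2 * c) / (-b + d)    # algebraically equivalent, no cancellation
print(naive, stable)           # naive is off by ~25%; stable is ~1e-8
```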

Relevance:

20.00%

Publisher:

Abstract:

The convex hull describes the extent or shape of a set of data and is used ubiquitously in computational geometry. Common algorithms to construct the convex hull of a finite set of n points (x, y) range from O(n log n) to O(n) time. However, a heuristic procedure is often applied first to reduce the original set of n points to a set of s < n points that contains the hull, thereby accelerating the final hull-finding step. We present an algorithm to precondition data before building a 2D convex hull with integer coordinates, with three distinct advantages. First, for all practical purposes it is linear; second, no explicit sorting of the data is required; and third, the reduced set of s points forms an ordered set that can be pipelined directly into an O(n) convex hull algorithm. Under these criteria a fast (O(n)) preconditioner in principle yields a fast (approximately O(n)) convex hull for an arbitrary set of points. The paper empirically evaluates and quantifies the acceleration the method delivers against the most common convex hull algorithms, finding an extra speed-up of at least four times over existing preconditioning methods in experiments on a dataset.
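
For context, a hedged sketch of the classic Akl-Toussaint "throw-away" heuristic, the best-known form of hull preconditioning, follows; it is not the paper's algorithm, which additionally emits an ordered point set.

```python
# Points strictly inside the quadrilateral spanned by the four extreme
# points cannot lie on the hull and can be discarded up front.
def akl_toussaint_filter(points):
    # Extreme points in counter-clockwise order: left, bottom, right, top.
    ex = [min(points, key=lambda p: p[0]), min(points, key=lambda p: p[1]),
          max(points, key=lambda p: p[0]), max(points, key=lambda p: p[1])]

    def strictly_inside(p):
        # p is strictly inside if it lies strictly left of every CCW edge;
        # coincident extremes give zero-length edges and disable filtering,
        # which is conservative but still correct.
        for i in range(4):
            (x1, y1), (x2, y2) = ex[i], ex[(i + 1) % 4]
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) <= 0:
                return False
        return True

    # Points on or outside the quadrilateral are kept as hull candidates.
    return [p for p in points if not strictly_inside(p)]

pts = [(0, 2), (2, 0), (4, 2), (2, 4), (2, 2), (1, 2)]
print(akl_toussaint_filter(pts))  # interior points (2,2), (1,2) discarded
```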

Relevance:

20.00%

Publisher:

Abstract:

The large upfront investment required for game development poses a severe barrier to the wider uptake of serious games in education and training. There is also a lack of well-established methods and tools that support game developers in preserving and enhancing the games' pedagogical effectiveness. The RAGE project, a Horizon 2020 funded research project on serious games, addresses these issues by making available reusable software components that aim to support the pedagogical qualities of serious games. To allow easy deployment and integration of these game components in a multitude of game engines, platforms and programming languages, RAGE has developed and validated a hybrid component-based software architecture that preserves component portability and interoperability. While a first set of software components is being developed, this paper presents selected examples to explain the overall system's concept and its practical benefits. First, the Emotion Detection component uses the learners' webcams to capture their emotional states from facial expressions. Second, the Performance Statistics component is an add-on for learning-analytics data processing, which allows instructors to track and inspect learners' progress without worrying about the required statistical computations. Third, a set of language-processing components accommodates the analysis of learners' textual inputs, facilitating comprehension assessment and prediction. Fourth, the Shared Data Storage component provides a technical solution for data storage (e.g. for player data or game world data) across multiple software components. The presented components are exemplary of the anticipated RAGE library, which will include up to forty reusable software components for serious gaming, addressing diverse pedagogical dimensions.
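
A hedged sketch of the component idea in Python follows: the host game depends only on a small engine-agnostic interface, while concrete components implement it. All class and method names are illustrative placeholders, not RAGE's actual component contracts.

```python
# Hypothetical engine-agnostic component interface and one implementation.
from abc import ABC, abstractmethod

class GameComponent(ABC):
    """A reusable, engine-agnostic serious-gaming component."""

    @abstractmethod
    def handle(self, message: dict) -> dict:
        """Process a message from the host game and return a result."""

class PerformanceStatistics(GameComponent):
    def __init__(self):
        self.scores = []

    def handle(self, message):
        # Record a learner's score and report simple progress statistics.
        if "score" in message:
            self.scores.append(message["score"])
        n = len(self.scores)
        return {"attempts": n, "mean": sum(self.scores) / n if n else None}

# The host engine depends only on the interface, not the implementation.
stats = PerformanceStatistics()
print(stats.handle({"score": 0.8}))
```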

Relevance:

20.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

20.00%

Publisher:

Abstract:

Objectives: Dietary fibre (DF) is one of the components of diet that contributes most strongly to health improvements, particularly in the gastrointestinal system. Hence, this work evaluated the relations between sociodemographic variables such as age, gender, level of education, living environment and country and the level of knowledge about dietary fibre (KADF), its sources and its effects on human health, using a validated scale.

Study design: A cross-sectional study.

Methods: A methodological study was conducted with 6010 participants residing in 10 countries on different continents (Europe, America, Africa). The instrument was a self-response questionnaire aimed at collecting information on knowledge about dietary fibre. The instrument had previously been used to validate a scale (KADF), whose model was used in the present work to identify the best predictors of knowledge. The statistical tools were basic descriptive statistics, decision trees and inferential analysis (t-test for independent samples with Levene's test, and one-way ANOVA with post hoc multiple-comparison tests).

Results: The best predictor for the three types of knowledge evaluated (about DF, about its sources and about its effects on human health) was always the country, meaning that social, cultural and/or political conditions greatly determine the level of knowledge. The tests also showed statistically significant differences in the three types of knowledge for all sociodemographic variables evaluated: age, gender, level of education, living environment and country.

Conclusions: To improve the level of knowledge, planned actions should not be designed generically to reach all sectors of the population; in addressing different groups, different methodologies must be designed to provide effective health education.
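
A hedged sketch of the reported inferential tests, expressed with SciPy; the data below are synthetic placeholders and the analysis shown is an illustration, not a reproduction of the study's computations.

```python
# Independent-samples t-test gated by Levene's test, plus one-way ANOVA.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
men = rng.normal(3.1, 0.6, 120)    # hypothetical KADF scores, men
women = rng.normal(3.4, 0.6, 130)  # hypothetical KADF scores, women

# Levene's test decides whether to assume equal variances in the t-test.
equal_var = stats.levene(men, women).pvalue > 0.05
print(stats.ttest_ind(men, women, equal_var=equal_var))

# One-way ANOVA across more than two groups (e.g. age bands).
g1 = rng.normal(3.0, 0.5, 80)
g2 = rng.normal(3.3, 0.5, 90)
g3 = rng.normal(3.5, 0.5, 70)
print(stats.f_oneway(g1, g2, g3))
```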

Relevance:

20.00%

Publisher:

Abstract:

Secure Multi-party Computation (MPC) enables a set of parties to collaboratively compute, using cryptographic protocols, a function over their private data in such a way that the participants see only the final output, not each other's data. Typical MPC examples include statistical computations over joint private data, private set intersection, and auctions. While these applications are examples of monolithic MPC, richer MPC applications move repeatedly between "normal" (i.e., per-party local) and "secure" (i.e., joint, multi-party secure) modes, resulting overall in mixed-mode computations. For example, we might use MPC to implement the role of the dealer in a game of mental poker: the game is divided into rounds of local decision-making (e.g. bidding) and joint interaction (e.g. dealing). Mixed-mode computations are also used to improve performance over monolithic secure computations.

Starting with the Fairplay project, several MPC frameworks have been proposed in the last decade to help programmers write MPC applications in a high-level language while the toolchain manages the low-level details. However, these frameworks are either not expressive enough to allow writing mixed-mode applications or lack formal specification and reasoning capabilities, diminishing the parties' trust in such tools and in the programs written using them. Furthermore, none of the frameworks provides a verified toolchain to run MPC programs, leaving open the potential for security holes that can compromise the privacy of the parties' data.

This dissertation presents language-based techniques to make MPC more practical and trustworthy. First, it presents the design and implementation of a new MPC domain-specific language, called Wysteria, for writing rich mixed-mode MPC applications. Wysteria provides several benefits over previous languages, including a conceptual single thread of control, generic support for more than two parties, high-level abstractions for secret shares, and a fully formalized type system and operational semantics. Using Wysteria, we have implemented several MPC applications, including, for the first time, a card-dealing application.

The dissertation next presents Wys*, an embedding of Wysteria in F*, a full-featured verification-oriented programming language. Wys* improves on Wysteria along three lines: (a) it enables programmers to formally verify the correctness and security properties of their programs (as far as we know, Wys* is the first language to provide verification capabilities for MPC programs); (b) it provides a partially verified toolchain to run MPC programs; and (c) it enables MPC programs to use, with no extra effort, standard language constructs from the host language F*, making it more usable and scalable. Finally, the dissertation develops static analyses that help optimize monolithic MPC programs into mixed-mode MPC programs while providing privacy guarantees similar to those of the monolithic versions.
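
As background for the "secret shares" the abstract mentions, here is a textbook sketch of additive secret sharing over a prime field; it illustrates the primitive only and is not Wysteria's or Wys*'s implementation.

```python
# Additive secret sharing: each party holds one random-looking share;
# only the sum of all shares (mod P) reveals the secret.
import secrets

P = 2**61 - 1  # a public prime modulus

def share(secret, n_parties):
    # n-1 uniformly random shares; the last share fixes the sum.
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

s = share(42, 3)
print(s, "->", reconstruct(s))   # any single share reveals nothing

# Addition of secrets is local: parties add their shares pointwise.
t = share(100, 3)
print(reconstruct([(a + b) % P for a, b in zip(s, t)]))  # 142
```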

Relevance:

20.00%

Publisher:

Abstract:

Self-administered questionnaires have commonly been used in large cohort studies to assess participants' physical activity. As a consequence, there is a considerable body of scientific evidence on the protective effect of physical activity on health. However, validation studies that use objective methods to quantify physical activity or energy expenditure (doubly labelled water, accelerometers, pedometers, etc.) indicate that the precision of questionnaires is limited. Physical activity questionnaires can fail especially when estimating non-vigorous physical activity, and they tend to focus disproportionately on planned types of exercise (cycling, running, walking, ...), while they usually do not capture activities of daily living and unplanned movements of more moderate intensity. Estimating energy expenditure from such data is not recommended. On the other hand, although objective methods should be the first choice when assessing physical activity, questionnaires remain valid tools with many advantages, one of them being low cost. These instruments are specifically designed and validated for different age groups and provide valuable and important information, above all on patterns of physical activity. Future studies will require more precision in measuring physical activity than questionnaires can provide. We may conclude that a mixed method combining objective and subjective approaches and incorporating new systems and electronic records would probably be advisable.