105 results for Computer simulation, Colloidal systems, Nucleation
Abstract:
The literature reports research efforts allowing the editing of interactive TV multimedia documents by end-users. In this article we propose complementary contributions relative to end-user generated interactive video, video tagging, and collaboration. In earlier work we proposed the watch-and-comment (WaC) paradigm as the seamless capture of an individual's comments so that corresponding annotated interactive videos are automatically generated. As a proof of concept, we implemented a prototype application, the WACTOOL, that supports the capture of digital ink and voice comments over individual frames and segments of the video, producing a declarative document that specifies both the structure and the synchronization of the different media streams. In this article, we extend the WaC paradigm in two ways. First, user-video interactions are associated with edit commands and digital ink operations. Second, focusing on collaboration and distribution issues, we employ annotations as simple containers for context information by using them as tags in order to organize, store and distribute information in a P2P-based multimedia capture platform. We highlight the design principles of the watch-and-comment paradigm and demonstrate related results, including the current version of the WACTOOL and its architecture. We also illustrate how an interactive video produced by the WACTOOL can be rendered in an interactive video environment, the Ginga-NCL player, and include results from a preliminary evaluation.
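To make the annotations-as-tags idea concrete, here is a minimal Python sketch of an annotation record whose context fields double as tags; all names (Annotation, video_id, tags) are hypothetical illustrations, not the WACTOOL's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """Hypothetical container in the spirit of WaC: a comment captured
    over a video segment, whose context fields double as tags used to
    organize, store and route content on a P2P capture platform."""
    video_id: str
    start_s: float           # start of the annotated segment, in seconds
    end_s: float             # end of the annotated segment, in seconds
    medium: str              # e.g. "ink" or "voice"
    payload: bytes = b""     # raw digital-ink strokes or audio
    tags: dict = field(default_factory=dict)  # context info: author, session, ...

note = Annotation("lecture-01", 12.0, 18.5, "ink",
                  tags={"author": "alice", "session": "review"})
print(note.tags)
```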
Abstract:
Accessibility has become a serious issue to be considered by various sectors of society. But how do the perceptions of accessibility in academia, government and industry differ? In this paper, we present an analysis of this issue based on a large survey carried out with 613 participants involved with Web development, from all 27 Brazilian states. The paper presents results from the data analysis for each sector, along with statistical tests regarding the main issues related to each sector, such as government and law, industry and techniques, and academia and education. Concern about accessibility law is poor even among people from the government sector. The analyses also point out that academia has not been addressing accessibility training adequately. Industry's knowledge of the proper techniques for producing accessible content is better than that of the other sectors, but still limited. Stronger investment in training and in promoting awareness of the law may be singled out as the most important tools to support a more effective policy on Web accessibility in Brazil.
Abstract:
Increasing efforts exist in integrating different levels of detail in models of the cardiovascular system. For instance, one-dimensional representations are employed to model the systemic circulation. In this context, effective and black-box-type decomposition strategies for one-dimensional networks are needed so as to (i) employ domain decomposition strategies for large systemic models (1D-1D coupling) and (ii) provide the conceptual basis for dimensionally-heterogeneous representations (1D-3D coupling, among various possibilities). The strategy proposed in this article works for both scenarios, though the several applications shown to illustrate its performance focus on the 1D-1D coupling case. A one-dimensional network is decomposed in such a way that each coupling point connects exactly two of the sub-networks. At each of the M connection points two unknowns are defined: the flow rate and the pressure. These 2M unknowns are determined by 2M equations, since each sub-network provides one (non-linear) equation per coupling point. It is shown how to build the 2M × 2M non-linear system with an arbitrary and independent choice of boundary conditions for each of the sub-networks. The idea is then to solve this non-linear system until convergence, which guarantees strong coupling of the complete network. In other words, if the non-linear solver converges at each time step, the solution coincides with what would be obtained by monolithically modeling the whole network. The decomposition thus imposes no stability restriction on the choice of the time step size. Effective iterative strategies for the non-linear system that preserve the black-box character of the decomposition are then explored. Several variants of matrix-free Broyden's and Newton-GMRES algorithms are assessed as numerical solvers by comparing their performance on sub-critical wave propagation problems that range from academic test cases to realistic cardiovascular applications. A specific variant of Broyden's algorithm is identified and recommended on the basis of its computational cost and reliability. (C) 2010 Elsevier B.V. All rights reserved.
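As a rough illustration of the matrix-free coupling idea described above, the sketch below solves a toy stand-in for the 2M × 2M interface system (here M = 1, unknowns flow Q and pressure P) with the "good" Broyden method; the residual equations are invented for the demo and are not the authors' cardiovascular models.

```python
import numpy as np

def broyden_solve(F, x0, tol=1e-10, max_iter=50):
    """'Good' Broyden iteration: keeps a running approximation J_inv of
    the inverse Jacobian, updated by rank-1 corrections, so each
    sub-network is only ever queried through its residual F (black box)."""
    x, f = x0.astype(float), F(x0)
    J_inv = np.eye(len(x))            # initial inverse-Jacobian guess
    for _ in range(max_iter):
        if np.linalg.norm(f) < tol:
            break
        dx = -J_inv @ f
        x_new = x + dx
        f_new = F(x_new)
        df = f_new - f
        # Sherman-Morrison rank-1 update of the inverse Jacobian
        J_inv += np.outer(dx - J_inv @ df, dx @ J_inv) / (dx @ J_inv @ df)
        x, f = x_new, f_new
    return x

# Toy stand-in: one coupling point; each "sub-network" contributes one
# nonlinear equation in the interface unknowns (Q, P).
def residual(x):
    Q, P = x
    return np.array([Q + 0.1 * P**2 - 1.2,   # equation from sub-network A
                     P + 0.1 * Q**2 - 0.8])  # equation from sub-network B

sol = broyden_solve(residual, np.array([0.0, 0.0]))
print(sol, residual(sol))
```

If this iteration converges at a time step, the coupled solution matches the monolithic one, which is exactly the strong-coupling property claimed above.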
Abstract:
The amount of textual information stored digitally is growing every day. However, our capability to process and analyze that information is not growing at the same pace. To overcome this limitation, it is important to develop semiautomatic processes to extract relevant knowledge from textual information, such as the text mining process. One of the main and most expensive stages of the text mining process is text pre-processing, where the unstructured text must be transformed into a structured format such as an attribute-value table. The stemming process, i.e., linguistic normalization, is usually used to find the attributes of this table. However, the stemming process is strongly dependent on the language in which the original textual information is given. Furthermore, for most languages, the stemming algorithms proposed in the literature are computationally expensive. In this work, several improvements of the well-known Porter stemming algorithm for the Portuguese language, which exploit the characteristics of this language, are proposed. Experimental results show that the proposed algorithm executes in far less time without affecting the quality of the generated stems.
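A minimal sketch of the rule-based suffix stripping that Porter-style stemmers perform, with a handful of invented Portuguese rules (suffix, replacement, minimum stem length); the paper's actual rule set and speed optimizations are not reproduced here.

```python
# Hypothetical mini rule set: (suffix, replacement, minimum stem length).
RULES = [("ções", "ção", 3), ("mente", "", 4), ("amos", "ar", 2), ("s", "", 3)]

def stem(word):
    """Apply the longest matching suffix rule, provided the remaining
    stem is long enough; otherwise return the word unchanged."""
    for suffix, repl, min_stem in sorted(RULES, key=lambda r: -len(r[0])):
        if word.endswith(suffix) and len(word) - len(suffix) >= min_stem:
            return word[:-len(suffix)] + repl
    return word

print(stem("orações"), stem("felizmente"), stem("casas"))
# -> oração feliz casa
```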
Abstract:
Localization and mapping are two of the most important capabilities for autonomous mobile robots and have been receiving considerable attention from the scientific computing community over the last 10 years. One of the most efficient methods to address these problems is based on the use of the Extended Kalman Filter (EKF). The EKF simultaneously estimates a model of the environment (map) and the position of the robot based on odometric and exteroceptive sensor information. As this algorithm demands a considerable amount of computation, it is usually executed on high-end PCs coupled to the robot. In this work we present an FPGA-based architecture for the EKF algorithm that is capable of processing two-dimensional maps containing up to 1.8k features in real time (14 Hz), a three-fold improvement over a Pentium M 1.6 GHz and a 13-fold improvement over an ARM920T 200 MHz. The proposed architecture also consumes only 1.3% of the Pentium's and 12.3% of the ARM's energy per feature.
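For reference, a generic EKF predict/update step in Python illustrating the computation such an architecture accelerates; the models below (a robot on a line measuring its distance to one landmark) are a deliberately tiny stand-in for the full SLAM state, and all names are assumptions for the demo.

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
    # Predict: propagate state and covariance through the motion model f
    x_pred = f(x, u)
    Fj = F_jac(x, u)
    P_pred = Fj @ P @ Fj.T + Q
    # Update: correct with the exteroceptive measurement z via the model h
    Hj = H_jac(x_pred)
    S = Hj @ P_pred @ Hj.T + R            # innovation covariance
    K = P_pred @ Hj.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hj) @ P_pred
    return x_new, P_new

# Tiny stand-in: state = [robot position, landmark position] on a line.
f = lambda x, u: np.array([x[0] + u, x[1]])   # odometry model
h = lambda x: np.array([x[1] - x[0]])         # range measurement
F_jac = lambda x, u: np.eye(2)
H_jac = lambda x: np.array([[-1.0, 1.0]])
x, P = np.zeros(2), np.eye(2)
x, P = ekf_step(x, P, u=1.0, z=np.array([4.1]),
                f=f, h=h, F_jac=F_jac, H_jac=H_jac,
                Q=0.01 * np.eye(2), R=np.array([[0.1]]))
print(x)
```

The matrix products in the update are what dominate the cost as the map grows, which is why a hardware implementation pays off for large feature counts.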
Abstract:
Aspect-oriented programming (AOP) is a promising technology that supports separation of crosscutting concerns (i.e., functionality that tends to be tangled with, and scattered through, the rest of the system). In AOP, a method-like construct named advice is applied to join points in the system through a special construct named pointcut. This mechanism supports the modularization of crosscutting behavior; however, since the added interactions are not explicit in the source code, it is hard to ensure their correctness. To tackle this problem, this paper presents a rigorous coverage analysis approach to ensure that the logic of each advice - statements, branches, and def-use pairs - is exercised at each affected join point. To make this analysis possible, a structural model based on Java bytecode - called the PointCut-based Def-Use Graph (PCDU) - is proposed, along with three integration testing criteria. Theoretical, empirical, and exploratory studies involving 12 aspect-oriented programs and several fault examples present evidence of the feasibility and effectiveness of the proposed approach. (C) 2010 Elsevier Inc. All rights reserved.
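The def-use notion can be illustrated with a much-simplified sketch: for straight-line code with known definition and use sets per statement, pair each use with the most recent reaching definition. The real PCDU is built from Java bytecode and handles branches; this toy Python version only conveys the concept.

```python
def def_use_pairs(stmts):
    """stmts: list of (defs, uses) variable-name sets, one per statement.
    Returns (def_index, use_index, variable) triples for straight-line
    code, pairing each use with its most recent reaching definition."""
    last_def, pairs = {}, []
    for i, (defs, uses) in enumerate(stmts):
        for v in uses:
            if v in last_def:
                pairs.append((last_def[v], i, v))
        for v in defs:
            last_def[v] = i
    return pairs

# x = ...; y = x + 1; print(x, y)
print(def_use_pairs([({"x"}, set()), ({"y"}, {"x"}), (set(), {"x", "y"})]))
# e.g. [(0, 1, 'x'), (0, 2, 'x'), (1, 2, 'y')] (pair order within a
# statement may vary, since uses are stored as sets)
```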
Abstract:
In this paper, we consider the classical problem of complete test generation for deterministic finite-state machines (FSMs) in a more general setting. The first generalization is that the number of states in implementation FSMs can even be smaller than that of the specification FSM. Previous work deals only with the case in which the implementation FSMs are allowed to have the same number of states as the specification FSM. This generalization gives the test designer more options: when traditional methods trigger a test explosion for large specification machines, tests with a lower, but still guaranteed, fault coverage can be generated instead. The second generalization is that tests can be generated starting from a user-defined test suite, by incrementally extending it until the desired fault coverage is achieved. In solving the generalized test derivation problem, we formulate sufficient conditions for test suite completeness that are weaker than the existing ones and use them to elaborate an algorithm that can be used both for extending user-defined test suites to achieve the desired fault coverage and for test generation. We present experimental results indicating that the proposed algorithm allows a trade-off between the length and the fault coverage of test suites.
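To make the fault-coverage notion concrete, here is a small Python sketch: Mealy machines as transition dictionaries, and a check of whether a test suite distinguishes a faulty implementation, which here is even allowed to have fewer states than the specification. The machines below are invented examples, not ones from the paper.

```python
def run(fsm, s0, test):
    """Apply an input sequence to a Mealy machine given as a dict
    {(state, input): (next_state, output)}; return the output sequence."""
    s, out = s0, []
    for a in test:
        s, o = fsm[(s, a)]
        out.append(o)
    return out

def detects(spec, impl, suite, s0=0):
    # The suite "kills" the implementation if some test produces an
    # output sequence different from what the specification predicts.
    return any(run(spec, s0, t) != run(impl, s0, t) for t in suite)

# Two-state specification vs. a one-state (fewer states!) implementation.
spec = {(0, "a"): (1, 0), (0, "b"): (0, 0),
        (1, "a"): (0, 1), (1, "b"): (1, 0)}
impl = {(0, "a"): (0, 0), (0, "b"): (0, 0)}
print(detects(spec, impl, [["a", "a"]]))   # True: spec outputs [0, 1]
```

Extending a user-defined suite then amounts to adding tests until every fault in the targeted domain is detected this way.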
Abstract:
Ubiquitous computing aims at providing services to users in everyday environments such as the home. One research theme in this area is that of building capture and access applications, which allow information to be recorded (captured) during a live experience toward automatically producing documents for review (accessed). The recording demands instrumented environments with devices such as microphones, cameras, sensors and electronic whiteboards. Since each experience is usually related to many others (e.g. several meetings of a project), there is a demand for mechanisms supporting the automatic linking of documents relative to different experiences. In this paper we present original results relative to the integration of our previous efforts in the Infrastructure for Capturing, Accessing, Linking, Storing and Presenting information (CALiSP).
Abstract:
Model trees are a particular case of decision trees employed to solve regression problems. They have the advantage of presenting an interpretable output, helping end-users gain confidence in the prediction and providing a basis for new insights about the data, confirming or rejecting previously formed hypotheses. Moreover, model trees present an acceptable level of predictive performance in comparison to most techniques used for solving regression problems. Since generating the optimal model tree is an NP-complete problem, traditional model tree induction algorithms make use of a greedy top-down divide-and-conquer strategy, which may not converge to the globally optimal solution. In this paper, we propose a novel algorithm based on the evolutionary algorithms paradigm as an alternative heuristic for generating model trees, in order to improve convergence to globally near-optimal solutions. We call our new approach evolutionary model tree induction (E-Motion). We test its predictive performance using public UCI data sets, and we compare the results to traditional greedy regression/model tree induction algorithms, as well as to other evolutionary approaches. Results show that our method presents a good trade-off between predictive performance and model comprehensibility, which may be crucial in many machine learning applications. (C) 2010 Elsevier Inc. All rights reserved.
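A toy flavor of the evolutionary search, reduced here to evolving the split threshold of a single-split regression tree by minimizing weighted leaf variance; E-Motion itself evolves full model trees with linear models at the leaves, so this sketch only illustrates the fitness-and-mutation loop, with all parameters invented.

```python
import random

def leaf_mse(ys):
    if not ys:
        return 0.0
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys) / len(ys)

def fitness(t, data):
    """Weighted mean squared error of the two leaves induced by x <= t."""
    left = [y for x, y in data if x <= t]
    right = [y for x, y in data if x > t]
    n = len(data)
    return (len(left) * leaf_mse(left) + len(right) * leaf_mse(right)) / n

def evolve(data, pop_size=20, gens=50):
    xs = [x for x, _ in data]
    pop = [random.uniform(min(xs), max(xs)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda t: fitness(t, data))
        parents = pop[:pop_size // 2]                           # truncation selection
        children = [p + random.gauss(0, 0.1) for p in parents]  # Gaussian mutation
        pop = parents + children
    return min(pop, key=lambda t: fitness(t, data))

data = [(x / 10, 0.0 if x < 5 else 1.0) for x in range(10)]
print(evolve(data))   # should land near the step at x = 0.5
```

Because the population explores many candidate splits in parallel, such a search is less prone to the greedy local optima mentioned above, at the cost of more fitness evaluations.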
Abstract:
The authors present here a summary of their investigations of ultrathin films formed by gold nanoclusters embedded in a polymethylmethacrylate polymer. The clusters are formed from the self-organization of subplantated gold ions in the polymer. The source of the low-energy ion stream used for the subplantation is a unidirectionally drifting gold plasma created by a magnetically filtered vacuum arc plasma gun. The material properties change with subplantation dose, including nanocluster size and agglomeration state and, consequently, the material's electrical behavior and optical activity. The composite was investigated experimentally and by computer simulation in order to better understand the self-organization and the properties of the material. Presented here are the results of conductivity measurements and percolation behavior, dynamic TRIM simulations, surface plasmon resonance activity, transmission electron microscopy, small-angle X-ray scattering, atomic force microscopy, and scanning tunneling microscopy. (C) 2010 American Vacuum Society [DOI: 10.1116/1.3357287]
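The percolation behavior mentioned above can be illustrated generically with a Monte Carlo estimate of the spanning probability on a 2D site lattice; this is a textbook stand-in for cluster connectivity, not the authors' nanocluster model.

```python
import random

def percolates(grid):
    """Depth-first search: do occupied sites connect the top row to the
    bottom row (4-neighbour connectivity)?"""
    n = len(grid)
    stack = [(0, j) for j in range(n) if grid[0][j]]
    seen = set(stack)
    while stack:
        i, j = stack.pop()
        if i == n - 1:
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n and grid[ni][nj] \
                    and (ni, nj) not in seen:
                seen.add((ni, nj))
                stack.append((ni, nj))
    return False

def spanning_probability(n=32, p=0.6, trials=200):
    grids = ([[random.random() < p for _ in range(n)] for _ in range(n)]
             for _ in range(trials))
    return sum(percolates(g) for g in grids) / trials

print(spanning_probability())   # rises sharply near the site threshold ~0.593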
Abstract:
The electronic properties of liquid hydrogen fluoride (HF) were investigated by carrying out sequential quantum mechanics/Born-Oppenheimer molecular dynamics. The structure of the liquid is in good agreement with recent experimental information. Emphasis was placed on the analysis of polarisation effects, dynamic polarisability and electronic excitations in liquid HF. Our results indicate an increase in the liquid phase of the dipole moment (~0.5 D) and of the isotropic polarisability (5%) relative to their gas-phase values. Our best estimate for the first vertical excitation energy in liquid HF indicates a blue-shift of 0.4 +/- 0.2 eV relative to that of the gas-phase monomer (10.4 eV). (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
Betaine dyes are known to show very large transition energy shifts in different solvents. The ortho-betaine molecule - a simple two-ring prototype of the E_T(30) Reichardt dye - has been investigated theoretically using a combined statistical and quantum mechanics approach. Using sequential Monte Carlo (MC) simulations and MP2/cc-pVDZ calculations, the in-water dipole moment of ortho-betaine is obtained as 12.30 +/- 0.05 D. This result represents a considerable increase of 75% over the in-vacuum dipole moment. For comparison, a polarizable continuum model at the same MP2/cc-pVDZ level leads to an in-water dipole moment of 11.6 D, in good agreement. This large polarization is incorporated into the classical potential for another MC simulation to generate solute-solvent configurations and to obtain the contribution of the polarization effect to the solvatochromic shift. Using statistically uncorrelated configurations and supermolecular INDO/CIS calculations including the solute and, explicitly, 230 solvent water molecules, the statistically converged shift is calculated here as 6360 cm^-1, in good agreement with the experimental result of 7550 cm^-1. (c) 2007 Elsevier B.V. All rights reserved.
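As a quick consistency check of the figures quoted above, a 75% increase onto the in-vacuum value means the implied gas-phase dipole is

```latex
\mu_{\mathrm{vac}} \approx \frac{\mu_{\mathrm{water}}}{1.75}
                   = \frac{12.30~\mathrm{D}}{1.75} \approx 7.0~\mathrm{D}.
```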
Abstract:
Impurity-interstitial dipoles in calcium fluoride solutions with Al3+, Yb3+ and La3+ fluorides were studied using the thermally stimulated depolarization current (TSDC) technique. The dipolar complexes are formed by substitutional trivalent ions in Ca2+ sites and interstitial fluorine in nearest-neighbor sites. The relaxations observed at 150 K are assigned to nearest-neighbor R_S^3+ - F_i^- dipoles (R_S = La or Yb). The purpose of this work is to study the processes of energy storage in the fluorides following X-ray and gamma irradiation. Computer modelling techniques are used to obtain the formation energy of the dipole defects. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
Automatic summarization of texts is now crucial for several information retrieval tasks owing to the huge amount of information available in digital media, which has increased the demand for simple, language-independent extractive summarization strategies. In this paper, we employ concepts and metrics of complex networks to select sentences for an extractive summary. The graph or network representing one piece of text consists of nodes corresponding to sentences, while edges connect sentences that share common meaningful nouns. Because various metrics could be used, we developed a set of 14 summarizers, generically referred to as CN-Summ, employing network concepts such as node degree, length of shortest paths, d-rings and k-cores. An additional summarizer was created which selects the sentences ranked highest across the 14 systems, as in a voting scheme. When applied to a corpus of Brazilian Portuguese texts, some CN-Summ versions performed better than summarizers that do not employ deep linguistic knowledge, with results comparable to state-of-the-art summarizers based on expensive linguistic resources. The use of complex networks to represent texts therefore appears suitable for automatic summarization, consistent with the belief that the metrics of such networks may capture important text features. (c) 2008 Elsevier Inc. All rights reserved.
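A minimal degree-based variant in the spirit of CN-Summ, sketched in Python: sentences are nodes, an edge joins two sentences that share a noun, and the highest-degree sentences form the extract. The noun sets are assumed precomputed (the paper derives them with linguistic preprocessing), and the names below are illustrative only.

```python
from itertools import combinations

def summarize(sentences, nouns_of, k=3):
    """sentences: list of strings; nouns_of: list of sets of meaningful
    nouns, one per sentence. Returns the k sentences with highest degree."""
    n = len(sentences)
    degree = [0] * n
    for i, j in combinations(range(n), 2):
        if nouns_of[i] & nouns_of[j]:        # edge: a shared meaningful noun
            degree[i] += 1
            degree[j] += 1
    top = sorted(range(n), key=lambda i: degree[i], reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]   # keep original order

docs = ["The robot maps the lab.", "A lab map helps the robot.", "It rained."]
nouns = [{"robot", "lab", "map"}, {"lab", "map", "robot"}, {"rain"}]
print(summarize(docs, nouns, k=2))
```

The other 13 summarizers differ only in the ranking metric (shortest paths, d-rings, k-cores, and so on) computed on the same sentence graph.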
Abstract:
This article discusses methods to identify plants by analyzing leaf complexity based on estimating the fractal dimension. Leaves were analyzed according to the complexity of their internal and external shapes. A computational program was developed to process, analyze and extract features from leaf images, thereby allowing automatic plant identification. Results are presented from two experiments: the first identifies plant species from the Brazilian Atlantic forest and Brazilian Cerrado scrublands, using fifty leaf samples from ten different species, and the second identifies four different species from the genus Passiflora, using twenty leaf samples per class. Two methods to estimate fractal dimension (box-counting and multiscale Minkowski) are compared. The results are discussed to determine the best approach for analyzing shape complexity, based on the performance of each technique in estimating fractal dimension and identifying plants. (C) 2008 Elsevier Inc. All rights reserved.
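A compact sketch of the box-counting estimate mentioned above: cover a binary image with boxes of decreasing size s, count the occupied boxes N(s), and take the slope of log N(s) versus log(1/s). The multiscale Minkowski method is not shown, and the function name is illustrative.

```python
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """img: 2-D boolean array, True on the leaf shape. Returns the slope
    of the log-log fit, i.e. the estimated fractal dimension."""
    counts = []
    for s in sizes:
        h, w = img.shape
        occupied = sum(img[i:i + s, j:j + s].any()
                       for i in range(0, h, s)
                       for j in range(0, w, s))
        counts.append(occupied)
    # N(s) ~ s^(-D)  =>  log N = D * log(1/s) + c
    D, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return D

# Sanity check on a filled square (expected dimension ~2).
img = np.ones((64, 64), dtype=bool)
print(box_counting_dimension(img))
```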