961 results for automated knowledge visualization
Abstract:
Purpose: The purpose of this paper is to identify factors that facilitate tacit knowledge sharing in unstructured work environments, such as those found in automated production lines. Design/methodology/approach: The study is based on a qualitative approach and draws data from a four-month field study at a blown-molded glass factory. Data collection techniques included interviews, informal conversations and on-site observations, and data were interpreted using content analysis. Findings: The results indicated that the sharing of tacit knowledge is facilitated by an engaging environment. Such an environment is supported by shared language and knowledge, which are developed through intense communication, a strong sense of collegiality, and a social climate dominated by openness and trust. Other factors that contribute to the creation of an engaging environment include managerial efforts to provide appropriate work conditions and to communicate company goals, and HRM practices such as formal training, on-the-job training and incentives. Practical implications: This paper clarifies the scope of managerial actions that affect knowledge creation and sharing among blue-collar workers. Originality/value: Despite acknowledgement of the importance of blue-collar workers' knowledge, both the knowledge management and operations management literatures have devoted limited attention to it. Studies of knowledge management in unstructured work environments are also scarce.
Abstract:
This paper reports research evaluating the potential and effects of using annotated paraconsistent logic in automatic indexing. Paraconsistent logic deals with contradictions and is concerned with studying and developing inconsistency-tolerant systems of logic. Because this logic is flexible and contains logical states beyond the dichotomy of yes and no, it supports the hypothesis that indexing results could be better than those obtained by traditional methods. The research drew on interactions among several disciplines, including information retrieval, automatic indexing, information visualization, and non-classical logics. Methodologically, an algorithm for the treatment of uncertainty and imprecision, developed under paraconsistent logic, was used to modify the weights assigned to the indexing terms of the text collections. The tests were performed on an information visualization system named Projection Explorer (PEx), created at the Institute of Mathematics and Computer Science (ICMC - USP Sao Carlos), whose source code is available. PEx uses the traditional vector space model to represent the documents of a collection. The results were evaluated by criteria built into the visualization system itself and demonstrated measurable gains in display quality, confirming the hypothesis that the para-analyser, under the conditions of the experiment, can generate more effective clusters of similar documents. This is noteworthy, since more significant clusters can be used to enhance information indexing and retrieval. It can be argued that the adoption of non-dichotomous (non-exclusive) parameters provides new possibilities for relating similar information.
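To make the core mechanism concrete, here is a minimal sketch of a para-analyser adjusting term weights, based on the standard certainty/contradiction degrees of annotated paraconsistent logic. The evidence sources and the scaling rule are illustrative assumptions, not the paper's actual algorithm.

```python
# Minimal sketch of a para-analyser adjusting indexing-term weights.
# Assumption: favourable evidence (mu) and unfavourable evidence (lam)
# for each term come from elsewhere (e.g. term statistics); the scaling
# rule in adjust_weight is illustrative, not the paper's method.

def para_analyser(mu: float, lam: float) -> tuple[float, float]:
    """Return degree of certainty Gc and degree of contradiction Gct."""
    gc = mu - lam          # certainty: +1 = true, -1 = false
    gct = mu + lam - 1.0   # contradiction: +1 = inconsistent, -1 = paracomplete
    return gc, gct

def adjust_weight(weight: float, mu: float, lam: float) -> float:
    """Scale a vector-space term weight by paraconsistent certainty."""
    gc, gct = para_analyser(mu, lam)
    # Boost terms with high certainty, damp contradictory ones.
    return weight * (1.0 + gc) * (1.0 - abs(gct))

# Example: a term with strong favourable and weak unfavourable evidence.
print(adjust_weight(0.42, mu=0.9, lam=0.2))  # weight increases
```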
Abstract:
The development of self-adaptive software (SaS) has specific characteristics compared to traditional software, since it allows changes to be incorporated at runtime. Automated processes have been used as a feasible solution for conducting software adaptation at runtime. In parallel, reference models have been used to aggregate knowledge and architectural artifacts, since they capture the essence of systems in specific domains. However, there is currently no reflection-based reference model for the development of SaS. Thus, the main contribution of this paper is a reference model based on reflection for the development of SaS that must adapt at runtime. To demonstrate the applicability of this model, a case study was conducted, and the results suggest that the model can contribute effectively to the SaS area.
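As a rough illustration of reflection-driven runtime adaptation (not the paper's reference model), the sketch below shows a component that inspects itself by name and swaps behavior when a monitored condition changes. All class and method names are hypothetical.

```python
# Minimal sketch of reflection-based runtime adaptation (illustrative only;
# the paper's reference model is not reproduced here). The component
# selects its own implementation by name at runtime.

class ImageService:
    def __init__(self):
        self.mode = "precise"

    def render_fast(self, data):
        return f"low-quality render of {data}"

    def render_precise(self, data):
        return f"high-quality render of {data}"

    def render(self, data):
        # Reflection: resolve the implementation by name at runtime.
        impl = getattr(self, f"render_{self.mode}")
        return impl(data)

def adapt(service: ImageService, cpu_load: float) -> None:
    """Monitor-analyze-plan-execute step: degrade quality under load."""
    service.mode = "fast" if cpu_load > 0.8 else "precise"

svc = ImageService()
adapt(svc, cpu_load=0.95)   # high load observed
print(svc.render("frame"))  # -> low-quality render of frame
```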
Abstract:
In [1], the authors proposed AUTO-HDS, a framework for automated clustering and visualization of biological data sets. This letter complements that framework by showing that a user-defined parameter can be eliminated, so that the clustering stage is implemented more accurately and with reduced computational complexity.
Abstract:
Among ongoing attempts to enhance cognitive performance, an emergent and still underrepresented avenue is hemoencephalographic (HEG) neurofeedback. This paper presents three related advances in HEG neurofeedback for cognitive enhancement: (a) a new HEG protocol for cognitive enhancement; (b) the results of independent measures of biological efficacy (EEG brain maps) extracted in three phases during a one-year follow-up case study; and (c) the results of the first controlled clinical trial of HEG designed to assess the efficacy of the technique for cognitive enhancement in an adult, neurologically intact population. The new protocol was developed in a software environment that organizes digital signal-processing algorithms in a flowchart format. Brain maps were produced from 10 brain recordings. The clinical trial used a working-memory test as its independent measure of achievement. The main conclusion of this study is that the technique appears clinically promising. Approaches to cognitive performance from a metabolic viewpoint should be explored further. However, it is particularly important to note that, to our knowledge, this is the first controlled clinical study on the matter, and it is still too early for a definitive evaluation of the technique.
Abstract:
Analyzing and modeling relationships between the structure of chemical compounds, their physico-chemical properties, and biological or toxic effects in chemical datasets is a challenging task for researchers in the field of cheminformatics. (Q)SAR model validation is therefore essential to ensure future model predictivity on unseen compounds. Proper validation is also one of the requirements of regulatory authorities for approving a model's use in real-world scenarios as an alternative testing method. At the same time, however, the question of how to validate a (Q)SAR model is still under discussion. In this work, we empirically compare k-fold cross-validation with external test set validation. The introduced workflow allows the built and validated models to be applied to large amounts of unseen data, and the performance of the different validation approaches to be compared. Our experimental results indicate that cross-validation produces (Q)SAR models with higher predictivity than external test set validation and reduces the variance of the results. Statistical validation is important for evaluating the performance of (Q)SAR models, but it does not help the user better understand the properties of the model or the underlying correlations. We present the 3D molecular viewer CheS-Mapper (Chemical Space Mapper), which arranges compounds in 3D space such that their spatial proximity reflects their similarity. The user can indirectly determine similarity by selecting which features to employ in the process. The tool can use and calculate different kinds of features, such as structural fragments as well as quantitative chemical descriptors. Comprehensive functionality, including clustering, alignment of compounds according to their 3D structure, and feature highlighting, helps the chemist better understand patterns and regularities and relate the observations to established scientific knowledge. Even though visualization tools for analyzing (Q)SAR information in small-molecule datasets exist, integrated visualization methods that allow for the investigation of model validation results are still lacking. We propose visual validation as an approach for the graphical inspection of (Q)SAR model validation results. New functionality in CheS-Mapper 2.0 facilitates the analysis of (Q)SAR information and allows the visual validation of (Q)SAR models. The tool enables the comparison of model predictions to the actual activity in feature space. Our approach reveals whether the endpoint is modeled too specifically or too generically and highlights common properties of misclassified compounds. Moreover, the researcher can use CheS-Mapper to inspect how the (Q)SAR model predicts activity cliffs. The CheS-Mapper software is freely available at http://ches-mapper.org.
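The validation comparison at the heart of this abstract is easy to reproduce in outline. The sketch below contrasts a single external test set with 5-fold cross-validation using scikit-learn; the random-forest model and synthetic data stand in for the authors' (Q)SAR models and chemical datasets.

```python
# Sketch comparing k-fold cross-validation with a single external test
# set. Synthetic data and a random forest stand in for (Q)SAR data/models.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=500, n_features=30, random_state=0)

# External test set validation: one 75/25 split, one score.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("external test set accuracy:", model.score(X_te, y_te))

# k-fold cross-validation: every compound is predicted exactly once,
# and the per-fold scores expose the variance of the estimate.
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print("5-fold CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```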
Abstract:
It is one of the most important tasks of the forensic pathologist to explain forensically relevant medical findings to medical non-professionals. However, it is often difficult to describe the nature and potential consequences of organ injuries comprehensibly to individuals with limited knowledge of anatomy and physiology. This rare case of survived pancreatic transection after kicks to the abdomen illustrates how dedicated software for three-dimensional reconstruction can overcome these difficulties, allowing clear and concise visualization of complex findings.
Abstract:
Vietnam has developed rapidly over the past 15 years. However, progress has not been uniformly distributed across the country. The availability, adequate visualization, and analysis of spatially explicit data on socio-economic and environmental aspects can support both research and policy towards sustainable development. Applying appropriate mapping techniques allows important information to be gleaned from tabular socio-economic data. Spatial analysis of socio-economic phenomena can yield insights into locally specific patterns and processes that cannot be generated by non-spatial applications. This paper presents techniques and applications that develop and analyze spatially highly disaggregated socio-economic datasets. A number of examples show how such information can support informed decision-making and research in Vietnam.
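As a small illustration of the kind of mapping technique the paper discusses, the sketch below renders a district-level choropleth from a boundary file joined with a tabular indicator. The file name and column are hypothetical placeholders, not the paper's data.

```python
# Minimal choropleth sketch with GeoPandas. File and column names are
# hypothetical; any disaggregated socio-economic indicator would do.

import geopandas as gpd
import matplotlib.pyplot as plt

districts = gpd.read_file("vietnam_districts.shp")  # hypothetical boundaries

ax = districts.plot(
    column="poverty_rate",   # hypothetical indicator attached to each district
    cmap="OrRd",
    legend=True,
    edgecolor="grey",
    linewidth=0.2,
)
ax.set_axis_off()
ax.set_title("District-level poverty rate (illustrative)")
plt.savefig("poverty_map.png", dpi=200)
```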
Abstract:
Three-dimensional flow visualization plays an essential role in many areas of science and engineering, such as the aero- and hydrodynamic systems that dominate various physical and natural phenomena. For popular methods such as streamline visualization to be effective, they should capture the underlying flow features while helping the user observe and understand the flow field clearly. My research focuses on the analysis and visualization of flow fields using various techniques, e.g., information-theoretic techniques and graph-based representations. Since streamline visualization is a popular technique in flow field visualization, how to select good streamlines that capture flow patterns, and good viewpoints from which to observe flow fields, becomes critical. We treat streamline selection and viewpoint selection as symmetric problems and solve them simultaneously using the dual information channel [81]. To the best of my knowledge, this is the first attempt in flow visualization to combine these two selection problems in a unified approach. This work selects streamlines in a view-independent manner, and the selected streamlines do not change across viewpoints. Another work of mine [56] uses an information-theoretic approach to evaluate the importance of each streamline under various sample viewpoints and presents a solution for view-dependent streamline selection that guarantees coherent streamline updates as the view changes gradually. When 3D streamlines are projected to 2D images for viewing, occlusion and clutter become inevitable. To address this challenge, we designed FlowGraph [57, 58], a novel compound graph representation that organizes field line clusters and spatiotemporal regions hierarchically for occlusion-free and controllable visual exploration. It enables observation and exploration of the relationships among field line clusters, spatiotemporal regions, and their interconnections in the transformed space. Most viewpoint selection methods consider only external viewpoints outside the flow field, which do not convey a clear picture when the flow field is cluttered near the boundary. We therefore propose a new way to explore flow fields: selecting several internal viewpoints around the flow features inside the flow field and generating a B-spline curve path traversing these viewpoints, providing users with close-up views for detailed observation of hidden or occluded internal flow features [54]. This work is also extended to unsteady flow fields. Beyond flow field visualization, other visualization topics also attract my attention. In iGraph [31], we leverage a distributed system along with a tiled display wall to provide users with high-resolution visual analytics of big image and text collections in real time. Developing pedagogical visualization tools forms my other research focus. Since most cryptography algorithms use sophisticated mathematics, it is difficult for beginners to understand both what an algorithm does and how it does it. We therefore developed a set of visualization tools that give users an intuitive way to learn and understand these algorithms.
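To convey the information-theoretic flavor of the viewpoint-selection problem (a schematic stand-in, not the dissertation's dual-channel formulation), the sketch below scores candidate viewpoints by the Shannon entropy of how evenly they distribute visibility over a set of streamlines: a viewpoint revealing many streamlines evenly scores higher than one dominated by a few.

```python
# Illustrative viewpoint scoring by Shannon entropy. visibility[i][j]
# holds how much of streamline j is visible from candidate viewpoint i;
# the numbers below are invented for demonstration.

import numpy as np

def viewpoint_entropy(visibility_row: np.ndarray) -> float:
    """Entropy of the normalized visibility distribution over streamlines."""
    p = visibility_row / visibility_row.sum()
    p = p[p > 0]                      # ignore fully occluded streamlines
    return float(-(p * np.log2(p)).sum())

visibility = np.array([
    [0.9, 0.8, 0.7, 0.6],  # viewpoint A: many streamlines seen evenly
    [2.5, 0.1, 0.0, 0.0],  # viewpoint B: dominated by one streamline
])

scores = [viewpoint_entropy(row) for row in visibility]
best = int(np.argmax(scores))
print("best viewpoint:", "AB"[best], "entropy scores:", scores)
```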
Abstract:
The small trees of gas-exchanging pulmonary airways fed by the most distal purely conducting airways are called acini and represent the functional gas-exchanging units. The three-dimensional architecture of the acini strongly influences ventilation and particle deposition. Because of the difficulty of identifying individual acini on microscopic lung sections, knowledge about the number of acini and their biological parameters, such as volume, surface area, and number of alveoli per acinus, is limited. We developed a method to extract individual acini from lungs imaged by high-resolution synchrotron-radiation-based X-ray tomographic microscopy and estimated their volume, surface area, and number of alveoli. Rat acini were isolated by semi-automatically closing the airways at the transition from conducting to gas-exchanging airways. We estimated a mean internal acinar volume of 1.148 mm³, a mean acinar surface area of 73.9 mm², and a mean of 8470 alveoli per acinus. Assuming that acini are similarly sized throughout different regions of the lung, we calculated that a rat lung contains 5470 ± 833 acini. We conclude that our novel approach is well suited for the fast and reliable characterization of a large number of individual acini in healthy, diseased, or transgenic lungs of different species, including humans.
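A quick back-of-envelope check makes the whole-lung extrapolation tangible. Assuming the count is obtained by dividing total acinar airspace volume by the mean acinar volume (a plausible reading, not stated in the abstract), the reported numbers imply the figures computed below.

```python
# Back-of-envelope check of the whole-lung extrapolation:
# number of acini ~= total acinar airspace volume / mean acinar volume.
# The "implied" quantities are derived here, not taken from the paper.

mean_acinar_volume_mm3 = 1.148   # reported mean internal acinar volume
n_acini_reported = 5470          # reported whole-lung estimate

# Implied total acinar airspace volume (assumption: simple product).
total_acinar_volume_mm3 = n_acini_reported * mean_acinar_volume_mm3
print(f"implied acinar airspace: {total_acinar_volume_mm3:.0f} mm^3 "
      f"(~{total_acinar_volume_mm3 / 1000:.1f} cm^3)")

# Mean alveoli per acinus scales up to a whole-lung alveolar count.
alveoli_per_acinus = 8470
print(f"implied alveoli per lung: {n_acini_reported * alveoli_per_acinus:.2e}")
```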
Abstract:
Self-assembly is a powerful tool for the construction of highly organized nanostructures. The possibility of controlling and predicting pathways of molecular ordering at the nanoscale is therefore a critical issue for the production of materials with tunable and adaptive macroscopic properties. 2D polymers are attractive objects for materials science due to their exceptional properties. [1] As shown before, amphiphilic oligopyrenotides (produced via automated solid-phase synthesis) form rod-like supramolecular polymers in water, [2] and these assemblies form 1D objects. [3] By applying certain changes to the design of the oligopyrenotide units, the dimensionality of the formed assemblies can be influenced. Herein, we demonstrate that Py3 (see Figure 1) forms defined supramolecular assemblies under thermodynamic conditions in water. To study Py3 self-assembly, we carried out a whole set of spectroscopic (UV/vis, fluorescence, DLS) and microscopic (AFM) experiments. The results suggest that oligopyrenotides with the present type of geometry and linker length lead to the formation of 2D supramolecular assemblies.
Abstract:
The MQN-mapplet is a Java application giving access to the structures of small molecules in large databases via color-coded maps of their chemical space. These maps are projections from a 42-dimensional property space defined by 42 integer-value descriptors called molecular quantum numbers (MQN), which count different categories of atoms, bonds, polar groups, and topological features and categorize molecules by size, rigidity, and polarity. Despite its simplicity, MQN space is relevant to biological activities. The MQN-mapplet allows localization of any molecule on the color-coded images, visualization of the molecules, and identification of analogs as neighbors on the MQN map or in the original 42-dimensional MQN space. No query molecule is necessary to start the exploration, which may be particularly attractive for non-chemists. To our knowledge, this type of interactive exploration tool is unprecedented for very large databases such as PubChem and GDB-13 (almost one billion molecules). The application is freely available for download at www.gdb.unibe.ch.
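For readers who want to reproduce the descriptor side of this idea, RDKit ships an MQN implementation. The sketch below computes the 42 MQN values for a few molecules and projects them to 2D with PCA; the projection choice is ours for illustration and is not necessarily how the mapplet builds its maps.

```python
# Compute 42-dimensional MQN descriptors with RDKit and project them to
# a 2D map where nearby points have similar MQN profiles.

import numpy as np
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors
from sklearn.decomposition import PCA

smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC"]
mols = [Chem.MolFromSmiles(s) for s in smiles]

# Each molecule maps to a vector of 42 integer counts (the MQNs).
mqn = np.array([rdMolDescriptors.MQNs_(m) for m in mols])
print(mqn.shape)  # (4, 42)

# Illustrative 2D projection of the 42-dimensional MQN space.
xy = PCA(n_components=2).fit_transform(mqn.astype(float))
for s, (x, y) in zip(smiles, xy):
    print(f"{s:>24s}  ({x:6.2f}, {y:6.2f})")
```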
Abstract:
This chapter introduces a conceptual model that combines creativity techniques with fuzzy cognitive maps (FCMs) and aims to support knowledge management methods by improving expert knowledge acquisition and aggregation. The aim of the conceptual model is to represent acquired knowledge in a manner that is as computer-understandable as possible, with the intention of developing automated reasoning in the future as part of intelligent information systems. Formally represented knowledge may thus provide businesses with intelligent information integration. To this end, we introduce and evaluate various creativity techniques against a list of attributes to determine the most suitable one to combine with FCMs. The proposed combination enables enhanced knowledge management through the acquisition and representation of expert knowledge with FCMs. Our evaluation indicates that the creativity technique known as mind mapping is the most suitable technique in our set. Finally, a scenario from stakeholder management demonstrates the combination of mind mapping with FCMs as an integrated system.
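To show what "computer-understandable" FCM knowledge looks like in practice, here is a minimal inference sketch using the standard FCM update rule. The concept names and weights are invented for illustration; the chapter's stakeholder-management map is not reproduced.

```python
# Minimal fuzzy cognitive map (FCM) inference sketch: concepts are nodes,
# signed weights encode causal influence, and activations are iterated
# through a sigmoid toward a fixed point. Names and weights are invented.

import numpy as np

concepts = ["stakeholder trust", "communication", "project risk"]

# W[i, j]: influence of concept i on concept j.
W = np.array([
    [0.0,  0.3, -0.6],   # trust improves communication, lowers risk
    [0.5,  0.0, -0.4],   # communication builds trust, lowers risk
    [-0.5, 0.0,  0.0],   # risk erodes trust
])

def step(state: np.ndarray) -> np.ndarray:
    """One FCM update: aggregate incoming influence, squash to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(state + state @ W)))

state = np.array([0.5, 0.8, 0.2])   # initial expert-assessed activations
for _ in range(30):                  # iterate toward a fixed point
    state = step(state)

for name, value in zip(concepts, state):
    print(f"{name:>18s}: {value:.2f}")
```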