877 results for Information Visualization Environment
Information overload, choice deferral, and moderating role of need for cognition: Empirical evidence
Abstract:
Choice deferral due to information overload is an undesirable result of competitive environments. Neoclassical maximization models predict that choice avoidance will not increase as more information is offered to consumers. Theories developed in the consumer behavior field, by contrast, predict that some properties of the environment may lead to behavioral effects and an increase in choice avoidance due to information overload. Based on stimuli generated experimentally and tested among 1,000 consumers, this empirical research provides evidence for the presence of behavioral effects due to information overload and reveals the different effects of increasing the number of options or the number of attributes. This study also finds that the need for cognition moderates these behavioral effects, and it proposes psychological processes that may trigger the observed effects.
Abstract:
Classical treatments of problems of sequential mate choice assume that the distribution of the quality of potential mates is known a priori. This assumption, made for analytical purposes, may seem unrealistic, as it contradicts empirical data as well as evolutionary arguments. Using stochastic dynamic programming, we develop a model that allows searching individuals to learn about the distribution, in particular to update its mean and variance during the search. In a constant environment, a priori knowledge of the parameter values brings strong benefits in both the time needed to make a decision and the average value of the mate obtained. Knowing the variance yields more benefits than knowing the mean, and benefits increase with variance. However, the costs of learning become progressively lower as more time is available for choice. When parameter values differ between demes and/or searching periods, a strategy relying on fixed a priori information might lead to erroneous decisions, which confers advantages on the learning strategy. However, time for choice plays an important role as well: if a decision must be made rapidly, a fixed strategy may do better even when the fixed image does not coincide with the local parameter values. These results help delineate the ecological-behavioral context in which learning strategies may spread.
Abstract:
The problem of jointly estimating the number, the identities, and the data of active users in a time-varying multiuser environment was examined in a companion paper (IEEE Trans. Information Theory, vol. 53, no. 9, September 2007), at whose core was the use of the theory of finite random sets on countable spaces. Here we extend that theory to encompass the more general problem of estimating unknown continuous parameters of the active-user signals. This problem is solved by applying the theory of random finite sets constructed on hybrid spaces. We do so by deriving Bayesian recursions that describe the evolution with time of a posteriori densities of the unknown parameters and data. Unlike in the above-cited paper, wherein one could evaluate the exact multiuser set posterior density, here the continuous-parameter Bayesian recursions do not admit closed-form expressions. To circumvent this difficulty, we develop numerical approximations for the receivers based on Sequential Monte Carlo (SMC) methods ("particle filtering"). Simulation results, referring to a code-division multiple-access (CDMA) system, are presented to illustrate the theory.
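The SMC receivers in this work are specific to the random-finite-set formulation, but the underlying particle-filtering idea they build on can be illustrated with a minimal bootstrap filter for a toy scalar state-space model. Everything in this sketch (the AR(1) dynamics, the Gaussian noise levels) is an illustrative assumption, not the paper's receiver:

```python
import math
import random

def bootstrap_filter(observations, n_particles=500,
                     process_std=1.0, obs_std=1.0):
    """Minimal bootstrap particle filter for the toy model
    x_t = 0.9 * x_{t-1} + process noise, y_t = x_t + obs noise.
    Returns the posterior-mean estimate of x_t at each step."""
    particles = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # Propagate each particle through the state dynamics.
        particles = [0.9 * x + random.gauss(0.0, process_std)
                     for x in particles]
        # Weight each particle by the observation likelihood.
        weights = [math.exp(-0.5 * ((y - x) / obs_std) ** 2)
                   for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Posterior-mean estimate from the weighted particles.
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # Multinomial resampling to avoid weight degeneracy.
        particles = random.choices(particles, weights=weights,
                                   k=n_particles)
    return estimates
```

In the multiuser setting of the paper the "state" is far richer (a random finite set of active users plus their continuous parameters), but the propagate/weight/resample cycle is the same.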
Abstract:
This final-year project presents the design principles and prototype implementation of BIMS (Biomedical Information Management System), a flexible software system which provides an infrastructure to manage all information required by biomedical research projects. The BIMS project was initiated with the motivation to solve several limitations in the medical data acquisition of some research projects in which Universitat Pompeu Fabra takes part. These limitations, stemming from the lack of control mechanisms to constrain the information submitted by clinicians, decrease data quality. BIMS can easily be adapted to manage the information of a wide variety of clinical studies and is not limited to a given clinical specialty. The software can manage both textual information, such as clinical data (measurements, demographics, diagnostics, etc.), and several kinds of medical images (magnetic resonance imaging, computed tomography, etc.). Moreover, BIMS provides a web-based graphical user interface and is designed to be deployed in a distributed, multiuser environment. It is built on top of open source software products and frameworks. Specifically, BIMS has been used to represent all clinical data currently used within the CardioLab platform (an ongoing project managed by Universitat Pompeu Fabra), demonstrating that it is a solid software system which could fulfill the requirements of a real production environment.
Abstract:
A report by the Iowa Department of Natural Resources on how to manage water quality information.
Abstract:
Water fact sheet for Iowa Department of Natural Resources and the Geological Bureau.
Abstract:
Organisations are becoming increasingly aware of the need for management information systems, due largely to a changing environment and a continuous process of globalisation. All of this means that managers need to adapt the structures of their organisations to these changes and, therefore, to plan, control and manage better. The Spanish public university cannot escape this changing (demographic, economic and social) and globalising environment (including the convergence of European qualifications), to which we must add its complex organisational structure, characterised by a high dispersion of decision-making authority across different collegiate and unipersonal organs. It seems obvious that these changes must have repercussions on the direction, organisation and management structures of public higher education institutions, and it seems natural that, given this environment, universities must adapt their present management systems to society's demand for quality and suitability in the services they provide.
Abstract:
Recent technological progress has greatly facilitated de novo genome sequencing. However, de novo assemblies consist of many pieces of contiguous sequence (contigs) arranged in thousands of scaffolds instead of a small number of chromosomes. Confirming and improving the quality of such assemblies is critical for subsequent analysis. We present a method to evaluate genome scaffolding by aligning independently obtained transcriptome sequences to the genome and visually summarizing the alignments using the Cytoscape software. Applying this method to the genome of the red fire ant Solenopsis invicta allowed us to identify inconsistencies in 7%, confirm contig order in 20%, and extend 16% of scaffolds. Scripts that generate tables for visualization in Cytoscape from FASTA sequence and scaffolding information files are publicly available at https://github.com/ksanao/TGNet.
Abstract:
How much information does an auctioneer want bidders to have in a private value environment? We address this question using a novel approach to ordering information structures, based on the property that in private value settings more information leads to a more disperse distribution of buyers' updated expected valuations. We define the class of precision criteria following this approach and different notions of dispersion, and relate them to existing criteria of informativeness. Using supermodular precision, we obtain three results: (1) a more precise information structure yields a more efficient allocation; (2) the auctioneer provides less than the efficient level of information, since more information increases bidders' informational rents; (3) there is a strategic complementarity between information and competition, so that both the socially efficient and the auctioneer's optimal choice of precision increase with the number of bidders, and both converge as the number of bidders goes to infinity.
Abstract:
With the failure of the traditional mechanisms for distributing bibliographic materials in developing countries, digital libraries emerge as a strong alternative for accomplishing this task, despite the challenges of the digital divide. This paper discusses the challenges of building a digital library (DL) in a developing country. The case of Cape Verde as a digital divide country is analyzed, in terms of current digital library usage and its potential for fighting the difficulties in accessing bibliographic resources in the country. The paper also introduces an ongoing project to build a digital library at the University Jean Piaget of Cape Verde.
Abstract:
Despite numerous discussions, workshops, reviews and reports about the responsible development of nanotechnology, information describing the health and environmental risks of engineered nanoparticles or nanomaterials is severely lacking and thus insufficient for completing a rigorous risk assessment of their use. However, since preliminary scientific evaluations indicate reasonable suspicions that activities involving nanomaterials might have damaging effects on human health, the precautionary principle must be applied. Public and private institutions as well as industries have the duty to adopt preventive and protective measures proportionate to the risk intensity and the desired level of protection. In this work, we present a practical, 'user-friendly' procedure for university-wide safety and health management of nanomaterials, developed as a multi-stakeholder effort (government, accident insurance, researchers and experts for occupational safety and health). The process starts with a schematic decision tree that classifies the nano laboratory into three hazard classes, similar to a control banding approach (from Nano 3, the highest hazard, to Nano 1, the lowest). Classifying laboratories into risk classes would require considering actual or potential exposure to the nanomaterial as well as statistical data on the health effects of exposure. Because these data (as well as exposure limits for each individual material) are not available, risk classes could not be determined. For each hazard level we then provide a list of required risk mitigation measures (technical, organizational and personal). The target 'users' of this safety and health methodology are researchers and safety officers. They can rapidly assess the precautionary hazard class of their activities and the corresponding adequate safety and health measures.
We succeeded in convincing scientists engaged in nano-activities that adequate safety measures and management promote innovation and discoveries by ensuring a safe environment, even in the case of very novel products. The proposed measures are considered not constraints but a support to their research. This methodology is being implemented at the Ecole Polytechnique de Lausanne in over 100 research labs dealing with nanomaterials. It is our opinion that it would be useful to other research and academic institutions as well. [Authors]
Abstract:
The coverage and volume of geo-referenced datasets are extensive and incessantly growing. The systematic capture of geo-referenced information generates large volumes of spatio-temporal data to be analyzed. Clustering and visualization play a key role in exploratory data analysis and in the extraction of knowledge embedded in these data. However, the special characteristics of these data pose new challenges for visualization and clustering: complex structures, large numbers of samples, variables involved in a temporal context, high dimensionality, and large variability in cluster shapes. The central aim of my thesis is to propose new algorithms and methodologies for clustering and visualization, in order to assist knowledge extraction from spatio-temporal geo-referenced data and thus improve decision-making processes. I present two original algorithms, one for clustering, the Fuzzy Growing Hierarchical Self-Organizing Networks (FGHSON), and one for exploratory visual data analysis, the Tree-structured Self-Organizing Map Component Planes. In addition, I present methodologies that, combined with the FGHSON and the Tree-structured SOM Component Planes, allow the seamless and simultaneous integration of space and time in order to extract knowledge embedded in a temporal context. The originality of the FGHSON lies in its capability to reflect the underlying structure of a dataset in a hierarchical fuzzy way. A hierarchical fuzzy representation of clusters is crucial when data include complex structures with large variability of cluster shapes, variances, densities and numbers of clusters. The most important characteristics of the FGHSON are: (1) it does not require an a priori setup of the number of clusters; (2) the algorithm executes several self-organizing processes in parallel, so that, when dealing with large datasets, the processes can be distributed, reducing the computational cost; and (3) only three parameters are necessary to set up the algorithm.
In the case of the Tree-structured SOM Component Planes, the novelty lies in the ability to create a structure that allows visual exploratory analysis of large high-dimensional datasets. This algorithm creates a hierarchical structure of Self-Organizing Map Component Planes, arranging similar variables' projections in the same branches of the tree. Hence, similarities in variables' behavior can be easily detected (e.g., local correlations, maximal and minimal values, and outliers). Both the FGHSON and the Tree-structured SOM Component Planes were applied to several agroecological problems, proving to be very efficient in the exploratory analysis and clustering of spatio-temporal datasets. In this thesis I also tested three soft competitive learning algorithms: two well-known unsupervised soft competitive algorithms, the Self-Organizing Maps (SOMs) and the Growing Hierarchical Self-Organizing Maps (GHSOMs), and a third that is our original contribution, the FGHSON. Although these algorithms have been used in several areas, to my knowledge there is no prior work applying and comparing their performance on spatio-temporal geospatial data, as is presented in this thesis. I propose original methodologies to explore spatio-temporal geo-referenced datasets through time. Our approach uses time windows to capture temporal similarities and variations with the FGHSON clustering algorithm. The developed methodologies are used in two case studies.
In the first, the objective was to find similar agroecozones through time; in the second, it was to find similar environmental patterns shifted in time. Several results presented in this thesis have led to new contributions to agroecological knowledge, for instance in sugar cane and blackberry production. Finally, in the framework of this thesis we developed several software tools: (1) a Matlab toolbox that implements the FGHSON algorithm, and (2) a program called BIS (Bio-inspired Identification of Similar agroecozones), an interactive graphical user interface tool that integrates the FGHSON algorithm with Google Earth in order to show zones with similar agroecological characteristics.
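The FGHSON and GHSOM both build on the classic Self-Organizing Map. As a point of reference only (this is a textbook SOM sketch, not the thesis's fuzzy hierarchical algorithm), a minimal 1-D SOM training loop over 2-D points can be written as:

```python
import math
import random

def train_som(data, n_units=4, epochs=50, lr=0.5, radius=1.0):
    """Train a 1-D Self-Organizing Map on 2-D points.
    Each unit's weight vector is pulled toward the inputs; grid
    neighbors of the winning unit are pulled too, which is what
    produces the topology-preserving ordering."""
    # Initialize unit weights from randomly chosen data points.
    units = [list(random.choice(data)) for _ in range(n_units)]
    for epoch in range(epochs):
        # Decay learning rate and neighborhood radius over time.
        frac = 1.0 - epoch / epochs
        for x in data:
            # Best-matching unit: closest weight vector.
            bmu = min(range(n_units),
                      key=lambda i: sum((units[i][d] - x[d]) ** 2
                                        for d in range(2)))
            for i in range(n_units):
                # Gaussian neighborhood on the 1-D unit grid.
                h = math.exp(-((i - bmu) ** 2) /
                             (2 * (radius * frac + 1e-9) ** 2))
                for d in range(2):
                    units[i][d] += lr * frac * h * (x[d] - units[i][d])
    return units
```

The thesis's contribution replaces this fixed grid with a growing fuzzy hierarchy; the competitive update step, however, is the shared core of all three algorithms compared.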
Abstract:
A previous study sponsored by the Smart Work Zone Deployment Initiative, "Feasibility of Visualization and Simulation Applications to Improve Work Zone Safety and Mobility," demonstrated the feasibility of combining readily available, inexpensive software programs, such as SketchUp and Google Earth, with standard two-dimensional civil engineering design programs, such as MicroStation, to create animations of construction work zones. The animations reflect changes in work zone configurations as the project progresses, representing an opportunity to visually present complex information to drivers, construction workers, agency personnel, and the general public. The purpose of this study is to extend that work by determining the added value and resource demands created by including more complex data, specifically traffic volume, movement, and vehicle type. This report describes the changes that were made to the simulation, including incorporating additional data and converting the simulation from a desktop application to a web application.
Abstract:
The graphical representation of spatial soil properties in a digital environment is complex because it requires converting data collected in discrete form onto a continuous surface. The objective of this study was to apply three-dimensional techniques of interpolation and visualization to soil texture and fertility properties and to establish relationships with pedogenetic factors and processes in a slope area. The GRASS Geographic Information System was used to generate three-dimensional models, and the ParaView software to visualize soil volumes. Samples of the A, AB, BA, and B horizons were collected in a regular 122-point grid in an area of 13 ha, in Pinhais, PR, in southern Brazil. Geoprocessing and graphic computing techniques were effective in identifying and delimiting soil volumes of distinct ranges of fertility properties confined within the soil matrix. Both the three-dimensional interpolation and the visualization tool facilitated interpretation, in a continuous space (volumes), of the cause-effect relationships between soil texture and fertility properties and pedological factors and processes, such as higher clay contents following the drainage lines of the area. The flattest part, with more weathered soils (Oxisols), had the highest pH values and lower Al3+ concentrations. These techniques of data interpolation and visualization have great potential for use in diverse areas of soil science, such as identification of soil volumes occurring side-by-side but exhibiting different physical, chemical, and mineralogical conditions for plant root growth, and monitoring of plumes of organic and inorganic pollutants in soils and sediments, among other applications. The methodological details for interpolation and a three-dimensional view of soil data are presented here.
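The conversion from discrete samples to a continuous volume is often done with distance-based interpolation. As a generic illustration (a minimal inverse-distance-weighting sketch, not the specific GRASS GIS routine used in the study), a 3-D point estimate from scattered soil samples can be computed as:

```python
def idw_interpolate(samples, query, power=2.0):
    """Inverse-distance-weighted estimate at a 3-D query point.
    `samples` is a list of ((x, y, z), value) pairs, e.g. a soil
    property measured at georeferenced horizon depths."""
    num = 0.0
    den = 0.0
    for (x, y, z), value in samples:
        d2 = ((x - query[0]) ** 2 + (y - query[1]) ** 2
              + (z - query[2]) ** 2)
        if d2 == 0.0:
            return value  # query coincides with a sample point
        w = d2 ** (-power / 2.0)  # weight = 1 / distance**power
        num += w * value
        den += w
    return num / den
```

Evaluating this on a dense 3-D grid of query points yields the continuous volume that tools such as ParaView can then render.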
Abstract:
This study assesses gender differences in spatial and non-spatial relational learning and memory in adult humans behaving freely in a real-world, open-field environment. In Experiment 1, we tested the use of proximal landmarks as conditional cues allowing subjects to predict the location of rewards hidden in one of two sets of three distinct locations. Subjects were tested in two different conditions: (1) when local visual cues marked the potentially-rewarded locations, and (2) when no local visual cues marked the potentially-rewarded locations. We found that only 17 of 20 adults (8 males, 9 females) used the proximal landmarks to predict the locations of the rewards. Although females exhibited higher exploratory behavior at the beginning of testing, males and females discriminated the potentially-rewarded locations similarly when local visual cues were present. Interestingly, when the spatial and local information conflicted in predicting the reward locations, males considered both spatial and local information, whereas females ignored the spatial information. However, in the absence of local visual cues females discriminated the potentially-rewarded locations as well as males. In Experiment 2, subjects (9 males, 9 females) were tested with three asymmetrically-arranged rewarded locations, which were marked by local cues on alternate trials. Again, females discriminated the rewarded locations as well as males in the presence or absence of local cues. In sum, although particular aspects of task performance might differ between genders, we found no evidence that women have poorer allocentric spatial relational learning and memory abilities than men in a real-world, open-field environment.