932 results for Knowledge Information Objects
Abstract:
We extend Aumann's [3] theorem deriving correlated equilibria as a consequence of common priors and common knowledge of rationality by explicitly allowing for non-rational behavior. We replace the assumption of common knowledge of rationality with a substantially weaker notion, joint p-belief of rationality, where agents believe the other agents are rational with probabilities p = (p_i)_{i ∈ I} or more. We show that behavior in this case constitutes a constrained correlated equilibrium of a doubled game satisfying certain p-belief constraints and characterize the topological structure of the resulting set of p-rational outcomes. We establish continuity in the parameters p and show that, for p sufficiently close to one, the p-rational outcomes are close to the correlated equilibria and, with high probability, supported on strategies that survive the iterated elimination of strictly dominated strategies. Finally, we extend Aumann and Drèze's [4] theorem on rational expectations of interim types to the broader p-rational belief systems, and also discuss the case of non-common priors.
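One way to gloss the weakening described in this abstract (the notation below is ours, chosen only to illustrate the condition, and need not match the paper's formalism) is that each agent's conditional belief that the others are rational is no longer required to equal one, only to exceed a threshold p_i:

```latex
% Illustrative gloss only; R_{-i} is the event that all agents other than i
% behave rationally, and \mu_i(\cdot \mid t_i) is agent i's belief given her
% information t_i. Not the paper's own notation.
\[
  \underbrace{\mu_i\!\left(R_{-i}\mid t_i\right) = 1}_{\text{probability-one belief under common knowledge of rationality}}
  \quad\text{is weakened to}\quad
  \underbrace{\mu_i\!\left(R_{-i}\mid t_i\right) \ge p_i}_{\text{joint }p\text{-belief of rationality}},
  \qquad i \in I,\quad p=(p_i)_{i\in I}\in[0,1]^I .
\]
```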
Abstract:
In a time when Technology Supported Learning Systems are being widely used, there is a lack of tools that allow their development in an automatic or semi-automatic way. Technology Supported Learning Systems require an appropriate Domain Module, i.e. the pedagogical representation of the domain to be mastered, in order to be effective. However, content authoring is a time- and effort-consuming task; therefore, efforts to automate Domain Module acquisition are necessary. Traditionally, textbooks have been the main mechanism for maintaining and transmitting the knowledge of a certain subject or domain. Textbooks are authored by domain experts who organise the contents in a way that facilitates understanding and learning, taking pedagogical issues into account. Given that textbooks are appropriate sources of information, they can be used to facilitate the development of the Domain Module, allowing the identification of the topics to be mastered and the pedagogical relationships among them, as well as the extraction of Learning Objects, i.e. meaningful fragments of the textbook with an educational purpose. Consequently, in this work DOM-Sortze, a framework for the semi-automatic construction of Domain Modules from electronic textbooks, has been developed. DOM-Sortze uses NLP techniques, heuristic reasoning and ontologies to carry out this task. DOM-Sortze has been designed and developed with the aim of automating the development of the Domain Module, regardless of the subject, promoting knowledge reuse and facilitating the collaboration of users during the process.
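The abstract does not spell out DOM-Sortze's internals, so the following is only a rough sketch of the kind of heuristic pass such a framework might run over a textbook (the Markdown-style heading assumption, the class names and the 'is-part-of' relation are illustrative stand-ins, not DOM-Sortze's actual design): headings become candidate topics, their nesting becomes a pedagogical relation, and the text under each heading becomes a candidate Learning Object for expert review.

```python
import re
from dataclasses import dataclass, field

@dataclass
class LearningObject:
    """A meaningful textbook fragment with an educational purpose."""
    topic: str
    text: str

@dataclass
class DomainModule:
    """Topics, pedagogical relations among them, and attached Learning Objects."""
    topics: list[str] = field(default_factory=list)
    relations: list[tuple[str, str, str]] = field(default_factory=list)  # (parent, relation, child)
    learning_objects: list[LearningObject] = field(default_factory=list)

HEADING = re.compile(r"^(?P<level>#+)\s+(?P<title>.+)$")  # assumes a Markdown-like textbook export

def build_domain_module(textbook: str) -> DomainModule:
    """Heuristic pass: headings become topics, nesting becomes an 'is-part-of'
    relation, and the text under each heading becomes a candidate Learning Object
    for later expert validation."""
    module = DomainModule()
    stack: list[tuple[int, str]] = []      # (heading level, topic title)
    current_topic, buffer = None, []

    def flush():
        if current_topic and buffer:
            module.learning_objects.append(
                LearningObject(current_topic, "\n".join(buffer).strip()))

    for line in textbook.splitlines():
        m = HEADING.match(line)
        if m:
            flush()
            buffer = []
            level, title = len(m["level"]), m["title"].strip()
            module.topics.append(title)
            while stack and stack[-1][0] >= level:
                stack.pop()
            if stack:
                module.relations.append((stack[-1][1], "is-part-of", title))
            stack.append((level, title))
            current_topic = title
        else:
            buffer.append(line)
    flush()
    return module
```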
Abstract:
Executive Summary: Observations show that warming of the climate is unequivocal. The global warming observed over the past 50 years is due primarily to human-induced emissions of heat-trapping gases. These emissions come mainly from the burning of fossil fuels (coal, oil, and gas), with important contributions from the clearing of forests, agricultural practices, and other activities. Warming over this century is projected to be considerably greater than over the last century. The global average temperature since 1900 has risen by about 1.5°F. By 2100, it is projected to rise another 2 to 11.5°F. The U.S. average temperature has risen by a comparable amount and is very likely to rise more than the global average over this century, with some variation from place to place. Several factors will determine future temperature increases. Increases at the lower end of this range are more likely if global heat-trapping gas emissions are cut substantially. If emissions continue to rise at or near current rates, temperature increases are more likely to be near the upper end of the range. Volcanic eruptions or other natural variations could temporarily counteract some of the human-induced warming, slowing the rise in global temperature, but these effects would only last a few years. Reducing emissions of carbon dioxide would lessen warming over this century and beyond. Sizable early cuts in emissions would significantly reduce the pace and the overall amount of climate change. Earlier cuts in emissions would have a greater effect in reducing climate change than comparable reductions made later. In addition, reducing emissions of some shorter-lived heat-trapping gases, such as methane, and some types of particles, such as soot, would begin to reduce warming within weeks to decades. Climate-related changes have already been observed globally and in the United States. These include increases in air and water temperatures, reduced frost days, increased frequency and intensity of heavy downpours, a rise in sea level, and reduced snow cover, glaciers, permafrost, and sea ice. A longer ice-free period on lakes and rivers, lengthening of the growing season, and increased water vapor in the atmosphere have also been observed. Over the past 30 years, temperatures have risen faster in winter than in any other season, with average winter temperatures in the Midwest and northern Great Plains increasing more than 7°F. Some of the changes have been faster than previous assessments had suggested. These climate-related changes are expected to continue while new ones develop. Likely future changes for the United States and surrounding coastal waters include more intense hurricanes with related increases in wind, rain, and storm surges (but not necessarily an increase in the number of these storms that make landfall), as well as drier conditions in the Southwest and Caribbean. These changes will affect human health, water supply, agriculture, coastal areas, and many other aspects of society and the natural environment. This report synthesizes information from a wide variety of scientific assessments (see page 7) and recently published research to summarize what is known about the observed and projected consequences of climate change on the United States. It combines analysis of impacts on various sectors such as energy, water, and transportation at the national level with an assessment of key impacts on specific regions of the United States.
For example, sea-level rise will increase risks of erosion, storm surge damage, and flooding for coastal communities, especially in the Southeast and parts of Alaska. Reduced snowpack and earlier snow melt will alter the timing and amount of water supplies, posing significant challenges for water resource management in the West. (PDF contains 196 pages)
Abstract:
This research is part of the Socioeconomic Research & Monitoring Program for the Florida Keys National Marine Sanctuary (FKNMS), which was initiated in 1998. In 1995-96, a baseline study on the knowledge, attitudes and perceptions of proposed FKNMS management strategies and regulations among commercial fishers, dive operators and selected environmental group members was conducted by researchers at the University of Florida and the University of Miami's Rosenstiel School of Marine and Atmospheric Science (RSMAS). The baseline study was funded by the U.S. Man and the Biosphere Program, and components of the study were published by Florida Sea Grant and in several peer-reviewed journals. The study was accepted into the Socioeconomic Research & Monitoring Program at a workshop to design the program in 1998, and workshop participants recommended that the study be replicated every ten years. The 10-year replication was conducted in 2004-05 (commercial fishers), 2006 (dive operators) and 2007 (environmental group members) by the same researchers at RSMAS, while the University of Florida researchers were replaced by Thomas J. Murray & Associates, Inc., which conducted the commercial fishing panels in the FKNMS. The 10-year replication study was funded by NOAA's Coral Reef Conservation Program. The study not only makes 10-year comparisons in the knowledge, attitudes and perceptions of FKNMS management strategies and regulations, but also establishes new baselines for future monitoring efforts. Things change, and following the principles of "adaptive management", management has responded with changes in the management plan strategies and regulations. Some of the management strategies and regulations that were being proposed at the time of the 1995-96 baseline study were changed before the management plan and regulations went into effect in July 1997. This was especially true for the main focus of the study, the various types of marine zones in the draft and final zoning action plan. Some of the proposed zones were changed significantly, and new zones have subsequently been created. This study includes 10-year comparisons of socioeconomic/demographic profiles of each user group; sources and usefulness of information; knowledge of purposes of FKNMS zones; perceived beneficiaries of the FKNMS zones; views on FKNMS processes to develop management strategies and regulations; views on FKNMS zone outcomes; views on FKNMS performance; and general support for FKNMS. In addition to new baseline information on FKNMS zones, new baseline information was developed on spatial use, investment and costs-and-earnings for commercial fishers and dive operators, and on views of resource conditions for all three user groups. Statistical tests were done to detect significant changes in both the distribution of responses to questions and the mean scores for items replicated over the 10-year period. (PDF has 143 pages.)
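The abstract does not name the statistical tests used, so the following is only a hypothetical illustration of the kind of 10-year comparison it describes (the test choices, the survey item and all numbers are invented): a chi-square test for a change in the distribution of responses, and a two-sample t-test for a change in mean scores.

```python
# Hypothetical illustration of the kind of 10-year comparison described above;
# the study's actual tests, items and data are not specified in this abstract.
import numpy as np
from scipy import stats

# Counts of responses to one survey item ("strongly agree" .. "strongly disagree")
# in the 1995-96 baseline and the 10-year replication (made-up numbers).
baseline_counts    = np.array([34, 51, 40, 22, 13])
replication_counts = np.array([48, 55, 31, 18,  9])

# Has the distribution of responses changed between the two waves?
table = np.vstack([baseline_counts, replication_counts])
chi2, p_dist, dof, _ = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_dist:.3f}")

# Have mean scores changed for an item scored 1-5? (made-up individual scores)
rng = np.random.default_rng(0)
baseline_scores    = rng.integers(1, 6, size=160)
replication_scores = rng.integers(1, 6, size=145)
t, p_mean = stats.ttest_ind(baseline_scores, replication_scores, equal_var=False)
print(f"t = {t:.2f}, p = {p_mean:.3f}")
```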
Abstract:
Organised by Knowledge Exchange and the Nordbib programme, 11 June 2012, 8:30-12:30, Copenhagen, adjacent to the Nordbib conference 'Structural frameworks for open, digital research'. [Photo caption: participants in a break-out discussion during the workshop on cost models.] The Knowledge Exchange and the Nordbib programme organised a workshop on cost models for the preservation and management of digital collections. The rapid growth of the digital information which a wide range of institutions must preserve emphasizes the need for robust cost modelling. Such models should enable these institutions to assess what resources are needed to sustain their digital preservation activities and to compare different preservation solutions in order to select the most cost-efficient alternative. In order to justify the costs, institutions also need to describe the expected benefits of preserving digital information. This workshop provided an overview of existing models and demonstrated the functionality of some of the current cost tools. It considered the specific economic challenges of preserving research data and addressed the benefits of investing in the preservation of digital information. Finally, the workshop discussed international collaboration on cost models. The aim of the workshop was to facilitate understanding of the economics of data preservation and to discuss the value of developing an international benchmarking model for the costs and benefits of digital preservation. The workshop took place at the Danish Agency for Culture and was held directly prior to the Nordbib conference 'Structural frameworks for open, digital research'.
Abstract:
In today’s changing research environment, RDM is important in all stages of research. The skills and know-how in RDM that researchers and research support staff need should be nurtured throughout their careers. At the end of 2015, KE initiated a project to compare approaches to RDM training within the partnership’s five member countries. The project was structured around two strands of activity: in the last months of 2015, a survey was conducted to collect information on current practice around RDM training, in order to provide an overview of the RDM training landscape; in February 2016, a workshop was held to share successful approaches to RDM training and capacity building provided within institutions and by infrastructure. The report describes both the analysis of the survey and the outcomes of the workshop. The document provides an evidence base and informed suggestions to help improve RDM training practices in KE partner countries and beyond.
Abstract:
Many sources of information that discuss current problems of food security point to the importance of farmed fish as an ideal food source that can be grown by poor farmers (Asian Development Bank 2004). Furthermore, the development of improved strains of fish suitable for low-input aquaculture, such as Tilapia, has demonstrated the feasibility of an approach that combines “cutting edge science” with accessible technology as a means of improving the nutrition and livelihoods of both the urban poor and poor farmers in developing countries (Mair et al. 2002). However, the use of improved strains of fish as a means of reducing hunger and improving livelihoods has proved difficult to sustain, especially as a public good, when external (development) funding sources devoted to this area are minimal. In addition, the more complicated problem of delivering an aquaculture system, not just improved fish strains and the technology, can present difficulties and may not be explicitly recognized (from Sissel Rogne, as cited by Silje Rem 2002). Thus, the involvement of private partners has featured prominently in the strategy for transferring technology related to improved Tilapia strains to the public. Partnering with the private sector in delivery schemes to the poor should take into account both the public goods aspect and the requirement that the traits selected for breeding “improved” strains meet the actual needs of the resource-poor farmer. Other dissemination approaches involving the public sector may require a large investment in capacity building. However, the use of public sector institutions as delivery agents helps maintain the “public good” nature of the products.
Abstract:
[EN] This paper is based on the following project:
Abstract:
This thesis presents a biologically plausible model of an attentional mechanism for forming position- and scale-invariant representations of objects in the visual world. The model relies on a set of control neurons to dynamically modify the synaptic strengths of intra-cortical connections so that information from a windowed region of primary visual cortex (V1) is selectively routed to higher cortical areas. Local spatial relationships (i.e., topography) within the attentional window are preserved as information is routed through the cortex, thus enabling attended objects to be represented in higher cortical areas within an object-centered reference frame that is position and scale invariant. The representation in V1 is modeled as a multiscale stack of sample nodes with progressively lower resolution at higher eccentricities. Large changes in the size of the attentional window are accomplished by switching between different levels of the multiscale stack, while positional shifts and small changes in scale are accomplished by translating and rescaling the window within a single level of the stack. The control signals for setting the position and size of the attentional window are hypothesized to originate from neurons in the pulvinar and in the deep layers of visual cortex. The dynamics of these control neurons are governed by simple differential equations that can be realized by neurobiologically plausible circuits. In pre-attentive mode, the control neurons receive their input from a low-level "saliency map" representing potentially interesting regions of a scene. During the pattern recognition phase, control neurons are driven by the interaction between top-down (memory) and bottom-up (retinal input) sources. The model respects key neurophysiological, neuroanatomical, and psychophysical data relating to attention, and it makes a variety of experimentally testable predictions.
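The abstract states only that the control neurons obey "simple differential equations"; as a loose illustration of what such dynamics could look like (the leaky-integrator form, gains and variable names below are our own stand-ins, not the model's actual circuit), the window's position and scale can be driven toward a saliency- or memory-selected target:

```python
# Illustrative only: a leaky-integrator stand-in for the control-neuron dynamics;
# the actual model's equations and parameters are not given in the abstract.
import numpy as np

def simulate_control(target, steps=200, dt=1.0, tau=20.0, state0=(0.0, 0.0, 1.0)):
    """Integrate dx/dt = (target - x) / tau for the window's
    (x position, y position, scale) control signals, with a simple Euler step."""
    state = np.array(state0, dtype=float)
    target = np.asarray(target, dtype=float)
    trajectory = [state.copy()]
    for _ in range(steps):
        state += dt * (target - state) / tau
        trajectory.append(state.copy())
    return np.array(trajectory)

# Pre-attentive mode: a salient location at (12, -5) with a window 3x the default
# size sets the target toward which the control signals relax.
traj = simulate_control(target=(12.0, -5.0, 3.0))
print(traj[-1])   # settles near the salient target: ~[12, -5, 3]
```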
Abstract:
Projects of the scope of the restoration of the Florida Everglades require substantial information regarding ecological mechanisms, and these are often poorly understood. We provide critical base knowledge for Everglades restoration by characterizing the existing vegetation communities of an Everglades remnant, describing how present and historic hydrology affect wetland vegetation community composition, and documenting change from communities described in previous studies. Vegetation biomass samples were collected along transects across Water Conservation Area 3A South (3AS).
Abstract:
Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter in human tissue). While OCT uses infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.
This thesis addresses the above challenges by successfully developing an advanced Monte Carlo simulation platform which is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but it also provides us with the underlying ground truth of the simulated images, because we dictate it at the beginning of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, a clever implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which will be explained in detail later in the thesis.
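The simulator itself is far beyond the scope of an abstract, but as a heavily simplified sketch of the Monte Carlo core such a platform builds on (geometry, importance sampling, photon splitting, the voxel mesh and parallelization are all omitted, and every parameter value below is illustrative), each photon packet performs a weighted random walk with exponentially distributed free paths and contributes to a depth profile if it back-scatters out of the tissue:

```python
# Bare-bones photon-packet random walk in a homogeneous slab with isotropic
# scattering. This shows only the generic Monte Carlo core; the thesis's speed-up
# comes from importance sampling / photon splitting, a voxel-based mesh and
# parallel A-scan computation, none of which are reproduced here.
import numpy as np

MU_S, MU_A = 10.0, 0.1            # scattering / absorption coefficients [1/mm] (illustrative)
MU_T = MU_S + MU_A
rng = np.random.default_rng(1)

def random_direction():
    """Uniform direction on the unit sphere (isotropic scattering)."""
    cos_t = 2 * rng.random() - 1
    phi = 2 * np.pi * rng.random()
    sin_t = np.sqrt(1 - cos_t**2)
    return np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

def trace_photon(max_depth=2.0, min_weight=1e-4):
    """Follow one photon packet; return (weight, optical path length) if it
    leaves back through the surface z = 0, else None."""
    pos = np.zeros(3)
    direction = np.array([0.0, 0.0, 1.0])     # launched into the tissue
    weight, path = 1.0, 0.0
    while weight > min_weight:
        step = -np.log(rng.random()) / MU_T    # exponential free path
        pos = pos + step * direction
        path += step
        if pos[2] <= 0.0:                      # back-scattered out: detector side
            return weight, path
        if pos[2] > max_depth:                 # too deep to matter for this sketch
            return None
        weight *= MU_S / MU_T                  # absorb a fraction of the packet
        direction = random_direction()         # isotropic re-scattering
    return None

# Crude depth statistic from the surviving packets (an A-scan stand-in):
hits = [h for h in (trace_photon() for _ in range(20000)) if h is not None]
mean_path = np.average([p for _, p in hits], weights=[w for w, _ in hits])
print(f"{len(hits)} packets returned; mean weighted path = {mean_path:.3f} mm")
```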
Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we are looking at 93%. We achieved this through extensive use of Machine Learning. The success of the Monte Carlo simulation already puts us in a great position by providing us with a great deal of data (effectively unlimited), in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and types of layers present in the image); then the image is handed to a regression model that is trained specifically for that particular structure to predict the length of the different layers and, by doing so, reconstruct the ground truth of the image. We also demonstrate that ideas from Deep Learning can be useful to further improve the performance.
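As an illustrative sketch of the two-stage "committee of experts" idea described above (the random-forest models, feature sizes and structure labels are our own stand-ins, not the thesis's actual architecture or features), a classifier first predicts an image's structure type, which then selects a regressor trained only on simulated images of that structure:

```python
# Hypothetical two-stage pipeline: structure classification, then a
# structure-specific regressor for layer measurements. Models and data shapes
# are stand-ins; the thesis's actual architecture is not given here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)

# Simulated training data: flattened OCT images (features) with a known
# structure label and the true layer values produced by the simulator.
X = rng.normal(size=(600, 256))                      # 600 simulated images, 256 features each
structure = rng.integers(0, 3, size=600)             # e.g. 0 = 2-layer, 1 = 3-layer, 2 = multi-layer
layer_truth = rng.uniform(0.1, 1.0, size=(600, 4))   # up to 4 layer values per image

# Stage 1: gatekeeper classifier - which structure does this image have?
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, structure)

# Stage 2: one "expert" regressor per structure, trained only on matching images.
experts = {
    int(s): RandomForestRegressor(n_estimators=100, random_state=0).fit(
        X[structure == s], layer_truth[structure == s])
    for s in np.unique(structure)
}

def reconstruct(image_features):
    """Predict the structure of an unseen image, then its layer values."""
    s = int(clf.predict(image_features[None, :])[0])
    return s, experts[s].predict(image_features[None, :])[0]

s, layers = reconstruct(rng.normal(size=256))
print(f"predicted structure {s}, layer estimates {np.round(layers, 3)}")
```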
It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since the lower half of an OCT image (i.e., greater depth), which previously could hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals that make up the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that a well-trained machine learning model, when fed these signals, yields precisely the true structure of the object being imaged. This is just another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a success but also the first attempt to reconstruct an OCT image at the pixel level. Even attempting this kind of task would require a large number of fully annotated OCT images (hundreds or even thousands). This is clearly impossible without a powerful simulation tool like the one developed in this thesis.
Abstract:
Given the need to extract the information contained in satellite images quickly, efficiently and economically, computational image-processing techniques such as automatic segmentation are increasingly used. Segmenting an image consists of dividing it into regions according to a similarity criterion, so that the pixels contained in each region share similar characteristics, such as grey level or texture, i.e., whatever best represents the objects present in the image. There are several examples of segmentation algorithms, such as region growing, in which pixels grow and are merged to form regions. To determine the best parameters for these segmentation algorithms, the results must be evaluated using the most common methods: supervised methods, which require a reference image considered ideal and therefore a priori knowledge of the study area, and unsupervised methods, which require no reference image and thus save the user time. Given the difficulty of obtaining evaluators for different types of image, this work proposes a methodology for evaluating images containing vegetated areas, where large regions are formed (Crianass), and another for evaluating images of urban areas, where more detail is needed (Cranassir).
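As a purely illustrative sketch of the region-growing segmentation mentioned above (the grey-level tolerance, 4-connectivity and seed handling are our own choices, not those of the algorithms evaluated in this work), neighbouring pixels are merged into a region while they stay within a tolerance of the region's running mean:

```python
# Minimal region-growing sketch on a grey-level image: starting from a seed,
# 4-connected neighbours are added while they stay within `tol` of the region mean.
# Threshold, connectivity and seed handling are illustrative choices only.
import numpy as np
from collections import deque

def region_grow(image, seed, tol=10.0):
    """Return a boolean mask of the region grown from `seed` (row, col)."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    total, count = float(image[seed]), 1   # running sum and size of the region

    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4-connectivity
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(float(image[nr, nc]) - total / count) <= tol:
                    mask[nr, nc] = True
                    queue.append((nr, nc))
                    total += float(image[nr, nc])
                    count += 1
    return mask

# Toy example: a bright square on a dark background is recovered from one seed.
img = np.zeros((64, 64)); img[20:40, 20:40] = 200
print(region_grow(img, seed=(30, 30)).sum())   # ~400 pixels, i.e. the bright square
```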