989 results for Data creation
Abstract:
Brazil was the first country in Latin America to establish and regulate this type of reserve, and there are currently more than 700 Private Nature Heritage Reserves (RPPN in Portuguese) officially recognized by either federal or state environmental agencies. Together, these RPPN protect more than half a million hectares of land in the country. The coastal forests in the southern part of Bahia State extend 100 to 200 km inland, gradually changing in physiognomy as they occupy the drier inland areas. The coastal forest has been subjected to intense deforestation and currently occupies less than 10% of its original area. For this work, the official creation records of the RPPN were consulted to obtain data on the creation date, property size, condition of the remaining forest, succession chain, and last tax paid; interviews with the owners were then conducted to confirm these data. Sixteen RPPN had been established in this region by 2005, ranging in size from 4.7 to 800 ha. Ten of these RPPN are located within state or federal conservation areas or their buffer zones. In spite of the numerous national and international conservation strategies and environmental policies focused on the region, the present situation of the cocoa zone threatens the conservation of the region's natural resources. The establishment of private reserves in the cocoa region could conceivably strengthen these conservation efforts: this type of reserve can be established under a uniform system supported by federal legislation and can count on the support of private organizations.
Abstract:
This report was prepared at the request of the United Nations Economic Commission for Latin America and the Caribbean (ECLAC), with support from the Caribbean Catastrophe Risk Insurance Facility (CCRIF), to assess strategies for linking the ECLAC Damage and Loss Assessment (DaLA) methodology to the Post Disaster Needs Assessment (PDNA). Each methodology was individually outlined, and their use in the Caribbean context was explored in detail to set the framework, or lens, through which their linking would be viewed. Other methodologies used within the recovery process were identified and outlined. A gap analysis was conducted on moving from the PDNA, with a focus on initial rapid response, to the DaLA. DaLA training materials were reviewed to assess where improvements can be made to move seamlessly from one methodology to the next. Additionally, both DaLA and PDNA reports were reviewed to identify specific areas of information that could serve as common data links and to note how this linkage could inform overall disaster assessments in the region, in addition to noting any similarities or variance in the application of both methodologies. Challenges to linking the two methodologies were identified, such as countries lacking well-defined recovery frameworks and the ability to fund or finance recovery efforts, in addition to recurrent challenges in the Caribbean region such as inadequate baseline data, limited human resources and training, and difficulty identifying teams to conduct data collection. Recommendations on strategies for successfully linking the DaLA and PDNA methodologies included: creating and maintaining a recovery framework and baseline data; creating a minimum-requirements list for successful PDNA and DaLA implementation; and increasing political will, in addition to identifying a champion to push the subject.
Abstract:
THE MAP AS A COLLABORATIVE MEDIUM FOR SPATIO-TEMPORAL VISUALIZATION
This dissertation focuses on the relationship between maps and spatio-temporal data visualization. It is divided into two components: a theoretical framework and a practical approach. The study begins by questioning the role of the map in today's digital society, and particularly its role in visualization, and finishes with the conceptualization and development of an interactive dot map that visualizes data from Instagram and Twitter. Nowadays, geographic information is no longer produced only by experts, but also by ordinary people who are able to participate in data creation and exchange. Web 2.0 lies at the heart of this change, with social media representing a significant tool for producing geotagged content, allowing users to share their location and to spatially reference their publications. Furthermore, amateur mapmaking and neogeography have benefited from the emergence of several new devices that enable the creation of digital maps that are interactive, adaptable, and easily shared on the Web. This study adopts a descriptive approach, calling upon the diverse aspects of the map and its evolution as a medium for visualizing geotagged data and highlighting collaborative mapping as an emerging subject area that demands future research. Relevant projects are also analyzed in order to identify trends and different approaches to visualizing social media data in its spatial context, intended to support the project's conceptualization, development, and evaluation. The created map demonstrates how spatial knowledge and perception of place are now redefined by the contributions of individuals; it also shows how that activity produces new sources of geographic information, forcing the development of new techniques and approaches that allow adequate exploration of the content and visualization methods of the contemporary map.
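A minimal sketch of the core idea behind such a dot map, assuming hypothetical coordinates: each geotagged post becomes one point at its longitude/latitude. The sample data are invented; the dissertation's actual interactive map, built on Instagram and Twitter data, is far richer.

```python
import matplotlib.pyplot as plt

# (longitude, latitude) of hypothetical geotagged social media posts
posts = [(-9.14, 38.71), (-9.15, 38.72), (-9.13, 38.70), (-9.16, 38.74)]

lons, lats = zip(*posts)
plt.scatter(lons, lats, s=12, alpha=0.6)  # one dot per post
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.title("Geotagged posts as a dot map (illustrative)")
plt.show()
```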
Abstract:
One dominant feature of modern manufacturing chains is the movement of goods. Manufacturing companies would remain an unprofitable investment if the supply/logistics of raw materials, semi-finished products, or final goods were not handled in an effective way. Both levels of a modern manufacturing chain, actual production and logistics, are characterized by continuous data creation at a much faster rate than the data can be meaningfully analyzed and acted upon manually. Often, instant and reliable decisions need to be taken based on huge, previously inconceivable amounts of heterogeneous, contradictory, or incomplete data. The paper will highlight aspects of information flows related to business process data visibility and observability in modern manufacturing networks. An information management platform developed in the framework of the EU FP7 project ADVANCE will be presented.
Abstract:
Traditional classrooms have often been regarded as closed spaces within which experimentation, discussion, and exploration of ideas occur. Professors have been used to expressing ideas frankly, and occasionally rashly, while discussions are ephemeral and conventional student work is submitted, graded, and often shredded. However, digital tools have transformed the nature of privacy. As we move towards the creation of life-long archives of our personal learning, we collect material created in various 'classrooms'. Some of these are public and open, but others were created within 'circles of trust', with expectations of privacy and anonymity by learners. Taking the Creative Commons license as a starting point, this paper asks: What rights and expectations of privacy exist in learning environments? What methods might we use to define a 'privacy license' for learning? How should the privacy rights of learners be balanced with the need to encourage open learning and with the creation of eportfolios as evidence of learning? How might we define different learning spaces and the privacy rights associated with them? Which class activities are 'private' and closed to the class, which are open, and what lies between? A limited set of metrics or zones is proposed, along the axes of private-public, anonymous-attributable, and non-commercial-commercial, to define learning spaces and the digital footprints created within them. The application of these not only to the artefacts that reflect learning but also to the learning spaces, and indeed to digital media more broadly, is explored. The possibility that these might inform not only teaching practice but also grading rubrics in disciplines where public engagement is required is also explored, along with the need for educational institutions to consider the data rights of students.
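One hypothetical way the proposed zones could be encoded, by analogy with Creative Commons terms, is one value per axis named in the paper. The axis values and the example artefact below are illustrative assumptions, not the paper's formal definitions.

```python
from dataclasses import dataclass
from enum import Enum

class Visibility(Enum):           # private-public axis
    PRIVATE = "private"
    CLASS_ONLY = "class-only"
    PUBLIC = "public"

class Attribution(Enum):          # anonymous-attributable axis
    ANONYMOUS = "anonymous"
    PSEUDONYMOUS = "pseudonymous"
    ATTRIBUTABLE = "attributable"

class Commerciality(Enum):        # non-commercial-commercial axis
    NON_COMMERCIAL = "non-commercial"
    COMMERCIAL = "commercial"

@dataclass
class PrivacyLicense:
    visibility: Visibility
    attribution: Attribution
    commerciality: Commerciality

    def code(self) -> str:
        # Compact CC-style shorthand, e.g. "class-only/anonymous/non-commercial"
        return "/".join(v.value for v in
                        (self.visibility, self.attribution, self.commerciality))

# A seminar discussion might be licensed as closed, anonymous, non-commercial:
seminar = PrivacyLicense(Visibility.CLASS_ONLY, Attribution.ANONYMOUS,
                         Commerciality.NON_COMMERCIAL)
print(seminar.code())  # class-only/anonymous/non-commercial
```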
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
The Water and Global Change (WATCH) project evaluation of the terrestrial water cycle involves using land surface models and general hydrological models to assess hydrologically important variables, including evaporation, soil moisture, and runoff. Such models require meteorological forcing data, and this paper describes the creation of the WATCH Forcing Data for 1958–2001, based on the 40-yr ECMWF Re-Analysis (ERA-40), and for 1901–57, based on reordered reanalysis data. It also discusses and analyses model-independent estimates of reference crop evaporation. Global average annual cumulative reference crop evaporation was selected as a widely adopted measure of potential evapotranspiration. It exhibits no significant trend from 1979 to 2001, although there are significant long-term increases in global average vapor pressure deficit and concurrent significant decreases in global average net radiation and wind speed. The near-constant global average of annual reference crop evaporation in the late twentieth century masks significant decreases in some regions (e.g., the Murray–Darling basin) offset by significant increases in others.
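For context, reference crop evaporation is conventionally defined by the FAO-56 Penman-Monteith equation, sketched below; that the WATCH analysis uses exactly this formulation is an assumption based on common practice, not stated in the abstract.

$$
ET_0 \;=\; \frac{0.408\,\Delta\,(R_n - G) \;+\; \gamma\,\dfrac{900}{T + 273}\,u_2\,(e_s - e_a)}{\Delta \;+\; \gamma\,(1 + 0.34\,u_2)}
$$

where $ET_0$ is reference crop evaporation (mm day$^{-1}$), $\Delta$ the slope of the saturation vapour-pressure curve (kPa $^{\circ}$C$^{-1}$), $R_n$ net radiation and $G$ soil heat flux (MJ m$^{-2}$ day$^{-1}$), $\gamma$ the psychrometric constant (kPa $^{\circ}$C$^{-1}$), $T$ mean 2-m air temperature ($^{\circ}$C), $u_2$ 2-m wind speed (m s$^{-1}$), and $e_s - e_a$ the vapour pressure deficit (kPa).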
Resumo:
The success of an aquaculture breeding program critically depends on the way in which the base population of breeders is constructed since all the genetic variability for the traits included originally in the breeding goal as well as those to be included in the future is contained in the initial founders. Traditionally, base populations were created from a number of wild strains by sampling equal numbers from each strain. However, for some aquaculture species improved strains are already available and, therefore, mean phenotypic values for economically important traits can be used as a criterion to optimize the sampling when creating base populations. Also, the increasing availability of genome-wide genotype information in aquaculture species could help to refine the estimation of relationships within and between candidate strains and, thus, to optimize the percentage of individuals to be sampled from each strain. This study explores the advantages of using phenotypic and genome-wide information when constructing base populations for aquaculture breeding programs in terms of initial and subsequent trait performance and genetic diversity level. Results show that a compromise solution between diversity and performance can be found when creating base populations. Up to 6% higher levels of phenotypic performance can be achieved at the same level of global diversity in the base population by optimizing the selection of breeders instead of sampling equal numbers from each strain. The higher performance observed in the base population persisted during 10 generations of phenotypic selection applied in the subsequent breeding program.
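A minimal sketch of the trade-off described above: choosing what fraction of founders to sample from each candidate strain so as to balance mean phenotypic performance against retained diversity. The strain means, the between-strain relationship matrix, and the penalty weight are hypothetical illustrations, and the study's actual optimization may differ.

```python
import numpy as np
from scipy.optimize import minimize

mu = np.array([10.0, 12.5, 11.0, 9.5])      # hypothetical strain trait means
A = np.array([[1.00, 0.10, 0.05, 0.02],     # hypothetical relationship
              [0.10, 1.00, 0.20, 0.05],     # (coancestry) matrix among
              [0.05, 0.20, 1.00, 0.10],     # the candidate strains
              [0.02, 0.05, 0.10, 1.00]])
lam = 5.0                                    # weight on the diversity penalty

def objective(c):
    # Maximize mean performance c.mu while penalizing average coancestry
    # c'Ac (lower coancestry = more diversity in the base population).
    return -(c @ mu) + lam * (c @ A @ c)

n = len(mu)
res = minimize(objective, x0=np.full(n, 1.0 / n),
               bounds=[(0.0, 1.0)] * n,
               constraints=[{"type": "eq", "fun": lambda c: c.sum() - 1.0}])
print("optimal sampling fractions per strain:", res.x.round(3))
print("equal-sampling mean:", mu.mean(), " optimized mean:", res.x @ mu)
```

Setting `lam` large recovers near-equal sampling (maximum diversity); setting it near zero samples only the best strain, mirroring the compromise the abstract reports.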
Abstract:
Mode of access: Internet.
Abstract:
The creation of Causal Loop Diagrams (CLDs) is a major phase in the System Dynamics (SD) life-cycle, since the created CLDs express dependencies and feedback in the system under study, as well as guide modellers in building meaningful simulation models. The creation of CLDs is still subject to the modeller's domain expertise (mental model) and her ability to abstract the system, because of the strong dependency on semantic knowledge. Since the beginning of SD, available system data sources (written and numerical models) have been sparse, very limited, and imperfect, and thus of little benefit to the whole modelling process. However, in recent years we have seen an explosion in generated data, especially in business-related domains that are analysed via Business Dynamics (BD). In this paper, we introduce a systematic, tool-supported CLD creation approach, which analyses and utilises available disparate data sources within the business domain. We demonstrate the application of our methodology on a given business use case and evaluate the resulting CLD. Finally, we propose directions for future research to further push the automation of CLD creation and increase confidence in the generated CLDs.
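A simplistic, hypothetical sketch of one way numerical business data could seed a CLD: variables become nodes, and strong lagged correlations become signed candidate links for the modeller to vet. The indicator names and threshold are invented, and this stands in for, rather than reproduces, the paper's tool-supported approach.

```python
import pandas as pd
import networkx as nx

# Hypothetical monthly business indicators.
df = pd.DataFrame({
    "marketing_spend": [10, 12, 15, 14, 18, 21, 20, 24],
    "new_customers":   [100, 110, 130, 128, 150, 170, 165, 190],
    "support_backlog": [5, 6, 8, 8, 11, 14, 13, 17],
})

G = nx.DiGraph()
threshold = 0.7  # keep only strong associations as candidate links

for cause in df.columns:
    for effect in df.columns:
        if cause == effect:
            continue
        # Correlate cause at time t with effect at time t+1, a crude
        # proxy for a delayed causal influence.
        r = df[cause][:-1].corr(df[effect][1:].reset_index(drop=True))
        if abs(r) >= threshold:
            G.add_edge(cause, effect, polarity="+" if r > 0 else "-")

# Print the candidate links with CLD-style polarity labels.
for u, v, d in G.edges(data=True):
    print(f"{u} --({d['polarity']})--> {v}")
```

Correlation is of course not causation; in the spirit of the paper, such automatically generated links would only be suggestions to be confirmed against the modeller's mental model.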
Abstract:
The general objective of this work was to study the contribution of ERP systems to the quality of managerial accounting information, through the perception of managers of large Brazilian companies. The initial premise was that we currently live in an enterprise reality characterized by a global and competitive worldwide scenario, in which information about enterprise performance and the evaluation of intangible assets are necessary conditions for the survival of companies. The exploratory research is based on a sample of 37 managers of large Brazilian companies. The analysis of the data, treated by means of a qualitative method, showed that the great majority of the companies in the sample (86%) have an ERP implemented, and that this system is used in combination with other application software. The managers, for the most part, were also satisfied with the information generated with respect to the dimensions of Time and Content. However, with regard to the qualitative nature of the information, the ERP made some analyses possible when the Balanced Scorecard was adopted, but information able to provide an estimate of the investments made in intangible assets was not obtained. These results suggest that, in these companies, ERP systems are not adequate to support strategic decisions.
Abstract:
During a naming task, time pressure and a manipulation of the proportion of related prime-target pairs were used to induce subjects to generate an expectation to the prime. On some trials, the presented target was orthographically, and generally phonologically, similar to the expected target. The expectancy manipulation was barely detectable in the priming data but was clearly evident on a final recognition test. In addition, the recognition data showed that the nearly simultaneous activation of an expectation and of sensory information derived from the orthographically and phonologically similar target produced a false memory. It is argued that this represents a blend memory.