Resumo:
There are a large number of agronomic-ecological interactions that occur in a world with increasing levels of CO2, higher temperatures and a more variable climate. Climate change and its associated problems will alter soil microbial populations and diversity. Soils contribute many atmospheric greenhouse gases by acting as sources or sinks; the most important of these gases are CH4, CO2 and N2O. Most greenhouse gas production and consumption processes in soil are driven by microorganisms. There is strong interest in storing carbon (C) in soils to offset global climate change. Microorganisms are vital to C sequestration, mediating decomposition and controlling the partitioning of plant residue-C between CO2 respiration losses and storage in semi-permanent soil-C pools. Microbial community composition and function can be manipulated or altered through disturbance and C inputs to either support or limit the retention of C. Fungi play a significant role in decomposition, appear to produce organic matter that is more recalcitrant and thus favours long-term C storage, and are therefore a key functional group on which to focus in developing C sequestration systems. Plant residue chemistry can influence microbial communities and C loss or flow into soil C pools. Therefore, as research proceeds to maximize C sequestration in agricultural and forest ecosystems as well as plant biomass production, parallel studies should be conducted on microbial communities that take the environmental conditions into account.
Resumo:
Long Term Digital Preservation (LTDP) is a secure and trustworthy mechanism to ingest, process, store, manage, protect, find, access, and interpret digital information such that the same information can be used at some arbitrary point in the future in spite of the obsolescence of everything involved: hardware, software, processes, formats, people, etc.
Resumo:
In this computerized, globalized and internet-connected world, our computers collect various types of information about every human being and store them in files hidden deep on the hard drive. Files such as the cache, browser history and other temporary Internet files can be used to store sensitive information like logins and passwords, names, addresses, and even credit card numbers. A hacker can obtain this information by illicit means and share it with someone else, or can install malicious software on your computer that will extract your sensitive and secret information. Identity theft poses a very serious problem to everyone today. If you have a driver's licence, a bank account, a computer, a ration card number, a PAN card number, an ATM card or simply a social security number, you are more than at risk: you are a target. Whether you are new to the idea of identity theft or you have some unanswered questions, we have compiled a quick refresher list below that should bring you up to speed. Identity theft is a term used to refer to fraud that involves pretending to be someone else in order to steal money or obtain other benefits. It is a serious crime that has been increasing at a tremendous rate all over the world since the evolution of the Internet. There is widespread agreement that identity theft causes financial damage to consumers, lending institutions, retail establishments, and the economy as a whole. Surprisingly, there is little good public information available about the scope of the crime and the actual damages it inflicts. Accounts of identity theft in recent mass media and in film or literature have centred on the exploits of 'hackers', variously lauded or reviled, who are depicted as cleverly subverting corporate firewalls or other data-protection defences to gain unauthorized access to credit card details, personnel records and other information. Reality is more complicated, with electronic identity fraud taking a range of forms.
The impact of those forms is not necessarily quantifiable as a financial loss; it can involve intangible damage to reputation, time spent dealing with disinformation, and exclusion from particular services because a stolen name has been used improperly. Overall, we can consider electronic networks an enabler of identity theft, with the thief, for example, gaining information online for action offline, or finding the basis for theft or other injury online. As Fisher pointed out, "These new forms of high-tech identity and securities fraud pose serious risks to investors and brokerage firms across the globe." I am myself a victim of identity theft, and that experience made me feel the need to create awareness among computer and internet users, particularly youngsters, in India. Nearly 70 per cent of India's population lives in villages, and the Government of India has already started providing computer and internet facilities even to remote villages through various rural development and rural upliftment programmes. Highly educated people, established companies and world-famous financial institutions are becoming victims of identity theft. The question here is how vulnerable illiterate and innocent rural people are if they are suddenly exposed to a new device through which someone can extract and exploit their personal data without their knowledge. In this research work, an attempt has been made to bring out the real problems associated with identity theft in developed countries from an economist's point of view.
Resumo:
Detection of objects in video is a highly demanding area of research, and background subtraction algorithms can yield good results in foreground object detection. This work presents a hybrid codebook-based background subtraction method to extract the foreground ROI from the background. Codebooks store compressed information, demanding less memory and allowing high-speed processing. The hybrid method, which uses block-based and pixel-based codebooks, provides efficient detection results: the high-speed processing of block-based background subtraction and the high precision rate of pixel-based background subtraction are both exploited to yield an efficient background subtraction system. The block stage produces a coarse foreground area, which is then refined by the pixel stage. The system's performance is evaluated with different block sizes and with different block descriptors such as the 2D-DCT and FFT. Experimental analysis based on statistical measurements yields precision, recall, similarity and F-measure of 88.74%, 91.09%, 81.66% and 89.90% respectively for the hybrid system, demonstrating its efficiency.
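To make the codebook idea concrete, the following is a minimal pixel-level sketch for grayscale frames (our own illustrative code, not the authors' implementation; the function names, single-channel simplification and fixed tolerance are assumptions). Each pixel keeps a list of intensity-range codewords learned from background frames; a new pixel is foreground if no codeword explains it.

```python
# Hypothetical pixel-level codebook background model (grayscale frames as
# nested lists). Each codeword is a [low, high] intensity range.

def train_codebook(frames, tol=10):
    """Build one codeword list per pixel from background-only frames."""
    h, w = len(frames[0]), len(frames[0][0])
    books = [[[] for _ in range(w)] for _ in range(h)]
    for frame in frames:
        for y in range(h):
            for x in range(w):
                v = frame[y][x]
                for cw in books[y][x]:          # try to absorb into a codeword
                    if cw[0] - tol <= v <= cw[1] + tol:
                        cw[0], cw[1] = min(cw[0], v), max(cw[1], v)
                        break
                else:
                    books[y][x].append([v, v])  # start a new codeword
    return books

def foreground_mask(frame, books, tol=10):
    """1 = foreground (no codeword explains the pixel), 0 = background."""
    return [[0 if any(cw[0] - tol <= frame[y][x] <= cw[1] + tol
                      for cw in books[y][x]) else 1
             for x in range(len(frame[0]))]
            for y in range(len(frame))]
```

The paper's block stage would run the same matching logic on block descriptors (e.g. 2D-DCT coefficients) first, so that only pixels inside coarse foreground blocks reach this per-pixel test.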
Resumo:
This paper presents a Reinforcement Learning (RL) approach to economic dispatch (ED) using a Radial Basis Function neural network. We formulate ED as an N-stage decision-making problem. We propose a novel architecture to store Q-values and present a learning algorithm to learn the weights of the neural network. Even though many stochastic search techniques such as simulated annealing, genetic algorithms and evolutionary programming have been applied to ED, they require searching for the optimal solution anew for each load demand, and they are limited in handling stochastic cost functions. In our approach, once the Q-values are learned, we can find the dispatch for any load demand. We recently proposed an RL approach to ED in which only the optimum dispatch for a set of specified discrete values of power demand could be found. The performance of the proposed algorithm is validated on the IEEE 6-bus system, considering transmission losses.
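The core idea of storing Q-values in a function approximator rather than a table can be sketched as follows (a toy of our own, not the paper's exact architecture; the Gaussian centres, width and learning rate are assumed for illustration). Q(d, a) is a weighted sum of radial basis functions of the load demand d, so a single weight vector per action generalizes across all demand values.

```python
import math

# Toy RBF approximation of Q-values: Q(d, a) = sum_i w[a][i] * phi_i(d),
# with Gaussian basis functions phi_i centred on fixed demand values.

def rbf_features(d, centres, width=1.0):
    return [math.exp(-((d - c) / width) ** 2) for c in centres]

def q_value(w, d, a, centres):
    return sum(wi * fi for wi, fi in zip(w[a], rbf_features(d, centres)))

def td_update(w, d, a, target, centres, lr=0.1):
    """One gradient step pulling Q(d, a) toward a TD target."""
    phi = rbf_features(d, centres)
    err = target - q_value(w, d, a, centres)
    for i, fi in enumerate(phi):
        w[a][i] += lr * err * fi
```

This is why, unlike the stochastic search methods the abstract mentions, no fresh search is needed per load demand: after training, evaluating Q for each action at the new demand suffices.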
Resumo:
The study covers innovation in fish capture technology, which encompasses the catching of aquatic animals using any kind of gear technique operated from a vessel. The fishing techniques employed vary depending on the type of fishery and can range from a simple small hook attached to a line to huge and complex midwater trawls or seines operated by large fishing vessels. The size and autonomy of a fishing vessel are largely determined by its ability to handle, process and store fish in good condition on board, and these two characteristics have thus been greatly influenced by the introduction of ice and refrigeration machinery. Other technological developments, especially hydraulic hauling machinery, fish-finding electronics and synthetic twines, have also had a major impact on the efficiency and profitability of fishing vessels. A wide variety of fishing gears and practices, ranging from small-scale artisanal to advanced mechanised systems, are used for fish capture in Kerala. The most important of these fishing gears are trawls, seines, lines, gillnets, entangling nets and traps. The modern sector was introduced in 1953 in the Neendakara-Shakthikulangara region under the initiative of the Indo-Norwegian Project (INP). The novel facilities introduced into the fishing industry by the Indo-Norwegian Project were mechanically operated new boats with new fishing nets. Soon after mechanization, a motorization programme gained momentum in Kerala, especially in the Alleppey, Ernakulam and Kollam districts.
Resumo:
Mangroves are specialised ecosystems developed along estuarine sea coasts and river mouths in tropical and subtropical regions of the world, mainly in the intertidal zone. The ecosystem and its biological components are therefore under the influence of both marine and freshwater conditions and have developed a set of physiological adaptations to overcome problems of anoxia, salinity and frequent tidal inundation. This has led to the assemblage of a wide variety of plant and animal species with special adaptations suited to the ecosystem. The path of photosynthesis in mangroves differs from that of other glycophytes, and there are modifications or alterations in other physiological processes such as carbohydrate metabolism and polyphenol synthesis. As they survive under extreme conditions of salinity, temperature, tides and anoxic soils, they may contain chemical compounds that protect them from these destructive elements. Mangroves are necessarily tolerant of high salt levels and have mechanisms to take up water despite strong osmotic potentials. Some also take up salts but excrete them through specialised glands in the leaves; others transfer salts into senescent leaves or store them in the bark or the wood; still others simply become increasingly conservative in their water use as water salinity increases. The usual transport and biosynthetic pathways of other plants therefore cannot be assumed in mangroves. In India, states such as West Bengal, Orissa, Andhra Pradesh, Tamil Nadu, the Andaman and Nicobar Islands, Kerala, Goa, Maharashtra, and Gujarat contain vast areas of mangroves.
Kerala has only 6 km2 of mangrove area in total, with Rhizophora apiculata, Rhizophora mucronata, Bruguiera gymnorrhiza, Bruguiera cylindrica, Avicennia officinalis, Sonneratia caseolaris, Sonneratia apetala and Kandelia candel as the important species present, most of which belong to the family Rhizophoraceae. Rhizophoraceae mangroves are ranked as "major elements of mangroves" because they give this unique and interesting ecosystem its characteristic shape and are among its most productive and typical species. Rhizophoraceae mangrove extracts have been found to exhibit several bioactive properties, and various parts of these mangroves are used in ethnomedicinal practices. Even though extracts from these mangroves possess therapeutic activity against human, animal and plant pathogens, the specific metabolites responsible for these bioactivities remain to be elucidated. There is a gap in the information on the chemistry of Rhizophoraceae mangroves from Kerala. Thorough phytochemical investigation can validate the ethnomedicines as well as support the use of mangrove plants in the development of new drugs. Such studies can also lay a firm base for biomarker and chemotaxonomic studies and for better management of the existing mangrove ecosystem. In this study, various chemical parameters, including minerals, biochemical components, and bioactive and biomarker molecules, were used to classify and assess the potential of the mangrove plants of the true mangrove family Rhizophoraceae from Kochi.
Resumo:
With this document, we provide a compilation of in-depth discussions of some of the most current security issues in distributed systems. The six contributions were collected and presented at the 1st Kassel Student Workshop on Security in Distributed Systems (KaSWoSDS’08). We are pleased to present a collection of papers that not only shed light on the theoretical aspects of their topics but are also accompanied by elaborate practical examples. In Chapter 1, Stephan Opfer discusses viruses, one of the oldest threats to system security; for years there has been an arms race between virus producers and anti-virus software providers, with no end in sight. In Chapter 2, Stefan Triller demonstrates how malicious code can be injected into a target process using a buffer overflow. Websites usually store their data and user information in databases, and, as with buffer overflows, unwary programmers leave open the possibility of SQL injection attacks targeting such databases; Stephan Scheuermann gives us a deeper insight into the mechanisms behind such attacks in Chapter 3. Cross-site scripting (XSS) is a method of inserting malicious code into websites viewed by other users, for example in order to spy on the data of internet users; Michael Blumenstein explains this issue in Chapter 4. Spoofing, in contrast, subsumes all methods that directly involve taking on a false identity; in Chapter 5, Till Amma shows different ways in which this can be done and how it can be prevented. Last but not least, cryptographic methods are used to encode confidential data in such a way that even if it falls into the wrong hands, the culprits cannot decode it. Over the centuries, many different ciphers have been developed, applied, and finally broken; Ilhan Glogic sketches this history in Chapter 6.
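The SQL injection mechanism discussed in Chapter 3 can be illustrated with a small self-contained example (our own toy, not from the workshop papers; table and column names are invented). Building the query by string concatenation lets crafted input rewrite the WHERE clause, while a parameterized query binds the attacker's string as a single literal value.

```python
import sqlite3

# Toy demonstration of SQL injection versus parameterized queries.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "nobody' OR '1'='1"

# Vulnerable: the injected OR clause makes the WHERE condition always true,
# so the attacker reads a secret without knowing any valid name.
unsafe = conn.execute(
    "SELECT secret FROM users WHERE name = '%s'" % malicious).fetchall()

# Safe: the driver binds the whole input as one string value; no row matches.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)).fetchall()
```

Here `unsafe` contains the leaked row while `safe` is empty, which is the whole argument for parameterized statements.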
Resumo:
The 21st century has brought new challenges for forest management at a time when globalization in world trade is increasing and global climate change is becoming increasingly apparent. In addition to providing humans with goods and services such as food, feed, timber and biofuels, forest ecosystems are a large store of terrestrial carbon and account for a major part of the carbon exchange between the atmosphere and the land surface. Depending on the stage of the ecosystem and/or the management regime, forests can be either sinks or sources of carbon. At the global scale, rapid economic development and a growing world population have raised much concern over the use of natural resources, especially forest resources. The challenging question is how the global demand for forest commodities can be satisfied in an increasingly globalised economy, and where those commodities could potentially be produced. For this purpose, wood demand estimates need to be integrated in a framework that can adequately handle the competition for land between major land-use options such as residential or agricultural land. This thesis is organised in accordance with the requirements of integrating the simulation of forest change driven by wood extraction into an existing framework for global land-use modelling called LandSHIFT. Accordingly, the following key research points were identified: (1) a review of existing global-scale economic forest sector models, (2) simulation of global wood production under selected scenarios, (3) simulation of global vegetation carbon yields, and (4) the implementation of a land-use allocation procedure to simulate the impact of wood extraction on forest land cover.
Modelling the spatial dynamics of forests on the global scale requires two important inputs: (1) simulated long-term wood demand data to determine future roundwood harvests in each country and (2) the changes in the spatial distribution of woody biomass stocks to determine how much of the resource is available to satisfy the simulated wood demands. First, three global timber market models are reviewed and compared in order to select a suitable economic model to generate wood demand scenario data for the forest sector in LandSHIFT. The comparison indicates that the 'Global Forest Products Model' (GFPM) is most suitable for obtaining projections of future roundwood harvests for further study with the LandSHIFT forest sector. Accordingly, the GFPM is adapted and applied to simulate wood demands for the global forestry sector, conditional on selected scenarios from the Millennium Ecosystem Assessment and the Global Environment Outlook, until 2050. Secondly, the Lund-Potsdam-Jena (LPJ) dynamic global vegetation model is utilized to simulate the change in potential vegetation carbon stocks for the forested locations in LandSHIFT. The LPJ data is used in combination with spatially explicit forest inventory data on aboveground biomass to allocate the demands for raw forest products and identify locations of deforestation. Using the previous results as input, a methodology to simulate the spatial dynamics of forests driven by wood extraction is developed within the LandSHIFT framework. The land-use allocation procedure specified in the module translates the country-level demands for forest products into woody biomass requirements for forest areas, and allocates these on a five arc-minute grid. In a first version, the model assumes present-day conditions throughout the entire study period and does not explicitly address forest age structure.
Although the module is at a very preliminary stage of development, it already captures the effects of important drivers of land-use change such as cropland and urban expansion. As a first plausibility test, the module's performance is tested under three forest management scenarios, and it succeeds in responding to changing inputs in an expected and consistent manner. The entire methodology is applied in an exemplary scenario analysis for India. Several future research priorities need to be addressed, particularly the incorporation of plantation establishment, the issue of age-structure dynamics, and the implementation of a new technology-change factor in the GFPM that would allow the specification of the substitution of raw wood products (especially fuelwood) by non-wood products.
Resumo:
The markup language XML is used to annotate documents and has established itself as the standard data-interchange format. This creates the need not only to store and transfer XML documents as plain text files, but also to persist them in a better-structured form, for instance in dedicated XML databases or in relational databases. Relational databases have so far relied on two fundamentally different approaches: XML documents are either stored unchanged as binary or character-string objects, or they are split up so that they can be stored in normalized form in conventional relational tables (so-called "flattening" or "shredding" of the hierarchical structure). This dissertation pursues a new approach that represents a middle way between the existing solutions and takes advantage of the evolved SQL standard. SQL:2003 defines complex structured and collection types (tuples, arrays, lists, sets, multisets) that make it possible to map XML documents onto relational structures in such a way that their hierarchical organization is preserved. This offers two advantages: on the one hand, proven technologies from the field of relational databases remain fully available; on the other hand, the SQL:2003 types preserve the inherent tree structure of the XML documents, so that it is not necessary to reassemble them on demand through expensive joins over tuples that are usually normalized and spread across several tables. This work first clarifies fundamental questions about suitable, efficient ways of mapping XML documents onto SQL:2003-compliant data types. Building on this, a suitable, reversible mapping procedure is developed, implemented in a prototype application, and analysed.
In designing the mapping procedure, particular attention is paid to its usability in conjunction with an existing, mature relational database management system (DBMS). Since support for SQL:2003 in commercial DBMSs is still incomplete, it must be examined to what extent the individual systems are suitable for the mapping procedure to be implemented. It turns out that, among the products considered, the DBMS IBM Informix offers the best support for complex structured and collection types. To better assess the performance of the procedure, the work measures the running time and the working and database memory consumption of the implementation and evaluates the results.
Resumo:
Using an inductive approach, this work addresses the problem of current municipal-policy conflicts of objectives in dealing with architectural monuments in Germany. The policy field of monument protection is examined in its cultural-political multidimensionality, starting from the question of how decision-making processes unfold in which development-related interests and the concerns of monument protection play a particular role. Four example cases form the empirical core of the study: an industrial monument of high architectural quality that shapes the townscape is turned into a brownfield site with the help of state funding; the protection of the surroundings of a garden monument of world rank has to give way to the needs of commercialized professional football; a historic cinema is converted into a bookshop despite mass protests by citizens; and a free-standing Gründerzeit villa is encased by a shopping centre under the requirement of maximizing retail floor space. Building on an analysis of the respective decision-making processes, the specific features of political disputes over monuments are worked out across the cases. The study follows an exploratory approach, with a focus on argumentative exchange as an empirical key to the interpretive framings that actors materialize in language. By juxtaposing discursive processes, the study examines how such interpretive framings arise in the political process, how they change, and how they are conveyed discursively. At the centre stands insight into the interplay of empirically determined influencing factors. Several theses crystallize concerning cultural understanding, the role of the institutional context, and political negotiation as a process.
It is shown why the gap between the expert community's elitist interest in preservation and the "average citizen's" understanding of monuments remains indispensable, both as a necessary driving force for the outreach work of monument preservation and for creative engagement with monuments, just as statutory monument protection does.
Resumo:
We design and implement a system that recommends musicians to listeners. The basic idea is to keep track of what artists a user listens to, to find other users with similar tastes, and to recommend other artists that these similar listeners enjoy. The system utilizes a client-server architecture, a web-based interface, and an SQL database to store and process information. We describe Audiomomma-0.3, a proof-of-concept implementation of the above ideas.
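The recommendation idea described above can be sketched in a few lines (our own illustration, not Audiomomma's actual code; the Jaccard similarity measure and in-memory dicts stand in for the SQL database): represent each user by the set of artists they listen to, find the most similar other user, and recommend that user's remaining artists.

```python
# Minimal user-based collaborative filtering over listening histories.

def jaccard(a, b):
    """Set similarity: |intersection| / |union| (0.0 for two empty sets)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(user, listens):
    """listens: dict mapping user -> set of artist names.
    Returns artists enjoyed by the most similar other user, sorted."""
    mine = listens[user]
    peer = max((u for u in listens if u != user),
               key=lambda u: jaccard(mine, listens[u]))
    return sorted(listens[peer] - mine)
```

A production system would aggregate over many neighbours and weight by similarity, but the single-peer version already shows the "similar listeners" principle the abstract describes.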
Resumo:
The Support Vector Machine (SVM) is a new and very promising classification technique developed by Vapnik and his group at AT&T Bell Labs. This new learning algorithm can be seen as an alternative training technique for Polynomial, Radial Basis Function and Multi-Layer Perceptron classifiers. An interesting property of this approach is that it is an approximate implementation of the Structural Risk Minimization (SRM) induction principle. The derivation of Support Vector Machines, their relationship with SRM, and their geometrical interpretation are discussed in this paper. Training an SVM is equivalent to solving a quadratic programming problem with linear and box constraints, in a number of variables equal to the number of data points. When the number of data points exceeds a few thousand the problem becomes very challenging, because the quadratic form is completely dense and the memory needed to store the problem grows with the square of the number of data points. Training problems arising in some real applications with large data sets therefore cannot be loaded into memory and cannot be solved using standard non-linear constrained optimization algorithms. We present a decomposition algorithm that can be used to train SVMs over large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of optimality conditions, which are also used to establish stopping criteria for the algorithm. We present previous approaches, as well as results and important details of our implementation of the algorithm, which uses a second-order variant of the Reduced Gradient Method as the solver of the sub-problems. As an application of SVMs, we present preliminary results obtained by applying SVM to the problem of detecting frontal human faces in real images.
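The paper's decomposition method solves the dense dual QP in pieces; as a much simpler illustration of what SVM training computes, here is a linear soft-margin SVM trained by subgradient descent on the primal hinge loss. This is a deliberately swapped-in technique, not the authors' algorithm, and the regularization and learning-rate constants are arbitrary choices for the toy data.

```python
# Linear soft-margin SVM via subgradient descent on
# lam/2 * ||w||^2 + mean hinge loss; labels are in {-1, +1}.

def train_linear_svm(xs, ys, lam=0.01, lr=0.05, epochs=300):
    """xs: list of feature vectors, ys: labels in {-1, +1}."""
    w, b = [0.0] * len(xs[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            for i in range(len(w)):
                # subgradient: regularizer always, hinge term if margin < 1
                w[i] -= lr * (lam * w[i] - (y * x[i] if margin < 1 else 0.0))
            if margin < 1:
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
```

The dual QP formulation becomes essential precisely when a non-linear kernel replaces the inner product above, which is when the dense n-by-n quadratic form and its quadratic memory growth appear.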
Resumo:
Both Geographic Information Systems and Information Retrieval have been very important research fields in recent decades. Recently, a new research field called Geographic Information Retrieval has emerged from the confluence of these two fields. Its main goal is to define indexing structures and techniques to store and retrieve documents efficiently, using both the textual and the geographic references contained in the text. In this paper we present the architecture of a geographic information retrieval system and define the workflow for extracting the geographic references from documents. We also present a new indexing structure that combines an inverted index, a spatial index and an ontology. This structure improves the query capabilities of other proposals.
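The combination of a textual and a spatial index can be sketched minimally as follows (our own hypothetical code, not the paper's structure; the ontology layer is omitted, and bounding boxes stand in for a real spatial index such as an R-tree). A query returns the documents that contain a term and whose geographic footprint intersects the query region.

```python
# Hybrid text + spatial lookup: inverted index over terms plus one
# bounding box (xmin, ymin, xmax, ymax) per document.

def build_index(docs):
    """docs: dict doc_id -> (set of terms, bounding box)."""
    inverted = {}
    for doc_id, (terms, _) in docs.items():
        for t in terms:
            inverted.setdefault(t, set()).add(doc_id)
    boxes = {doc_id: box for doc_id, (_, box) in docs.items()}
    return inverted, boxes

def intersects(a, b):
    """Axis-aligned bounding boxes overlap (touching counts)."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def query(inverted, boxes, term, qbox):
    """Documents containing `term` whose footprint meets `qbox`."""
    return sorted(d for d in inverted.get(term, set())
                  if intersects(boxes[d], qbox))
```

The term lookup narrows the candidate set before the geometric test, which is the efficiency argument for combining the two indexes rather than scanning all documents spatially.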
Resumo:
This paper provides a preliminary formulation of a new currency based on knowledge. Through a literature review of alternative currencies, various properties and benefits are selected that, we hope, will enable such a currency to be created. Nowadays, not only money but also knowledge is necessary to do business. For instance, knowledge about markets and consumers is highly valuable but difficult to acquire, and even more difficult to store, transport or trade. The basic premise of this proposal is a knowledge measurement pattern formulated as a new alternative social currency; it is thus an additional means of contributing to the worldwide evolution of the knowledge society. It is intended as a currency that facilitates the conservation and storage of knowledge, its organization and categorization, and above all its exploitation and transfer.