946 results for EBWorld, Java, Offline, XML, GIS


Relevance:

20.00%

Publisher:

Abstract:

Javanese Performances on an Indonesian Stage: Contesting Culture, Embracing Change is Barbara Hatley's first book about the performing arts in Indonesia, a topic that piqued her interest while undertaking a master's program at Yale University in the late 1960s. In this sense, it is a landmark study, for Hatley has since become very well known in Indonesianist circles, especially among those with an interest in matters of culture, popular and elite. Until recently, her writings on Indonesian performing arts had only been available in the form of journal articles and book chapters...

Relevance:

20.00%

Publisher:

Abstract:

This work investigates the feasibility of using a low-noise C-band block down-converter as an Ultra High Frequency window coupler for the detection of partial discharge activity from free conducting particles and a protrusion on the high-voltage conductor in Gas Insulated Switchgear. The investigated window coupler has better sensitivity than the internal Ultra High Frequency couplers fitted to the system. The investigated window couplers are, however, sensitive to changes in the frequency content of the discharge signals and appear to be less sensitive to the negative discharge signals produced by a protrusion than to the positive discharge signals.

Relevance:

20.00%

Publisher:

Abstract:

Two examples of GIS-based multiple-criteria evaluations of plantation forests are presented. These desktop assessments use available topographical, geological and pedological information to establish the risk of occurrence of certain environmentally detrimental processes. The first case study is concerned with the risk that chemical additives (i.e. simazine) applied within the forestry landscape may reach the drainage system. The second case study assesses the vulnerability of forested areas to landslides. The subject of the first multiple-criteria evaluation (MCE) was a 4 km² logging area, which had been recently site-prepared for a Pinus plantation. The criteria considered relevant to the assessment were proximity to creeks, slope, soil depth to the restrictive layer (i.e. potential depth to a perched water table) and soil erodibility (based on clay content). The output of the MCE was in accordance with field observations, showing that this approach has the potential to provide management support by highlighting areas vulnerable to waterlogging, which in turn can trigger overland flow and export of pollutants to the local stream network. The subject of the second evaluation was an Araucaria plantation which is prone to landslips during heavy rain. The parameters included in the assessment were the drainage system, the slope of the terrain and geological features such as rocks and structures. A good correlation between the MCE results and field observations was found, suggesting that this GIS approach is useful for the assessment of natural hazards. Multiple-criteria evaluations are highly flexible as they can be designed in either vector or raster format, depending on the type of available data. Although tested on specific areas, the MCEs presented here can be easily used elsewhere and assist both management intervention and the protection of the adjacent environment by assessing the vulnerability of the forest landscape to either introduced chemicals or natural hazards.
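In raster form, such a multiple-criteria evaluation amounts to a weighted overlay of standardized criterion layers. The sketch below is a minimal illustration in Python/NumPy, not the workflow actually used in the case studies; the layer values, weights and scaling thresholds are assumptions chosen only for the example.

```python
# Minimal weighted-overlay MCE sketch (illustrative only; weights and
# criterion values are hypothetical, not those of the case studies).
import numpy as np

# Toy criterion rasters on a common grid (one value per cell).
creek_dist_m = np.array([[10.0, 80.0], [150.0, 300.0]])   # distance to creeks
slope_deg    = np.array([[2.0, 8.0],   [15.0, 25.0]])     # terrain slope
soil_depth_m = np.array([[0.3, 0.6],   [1.0, 1.5]])       # depth to restrictive layer
clay_frac    = np.array([[0.45, 0.30], [0.20, 0.10]])     # proxy for erodibility

def standardize(layer, worst, best):
    """Rescale a criterion to a 0-1 vulnerability score (1 = most vulnerable)."""
    score = (layer - best) / (worst - best)
    return np.clip(score, 0.0, 1.0)

criteria = [
    (standardize(creek_dist_m, worst=0.0,  best=300.0), 0.35),  # close to creek -> vulnerable
    (standardize(slope_deg,    worst=30.0, best=0.0),   0.25),  # steep -> vulnerable
    (standardize(soil_depth_m, worst=0.2,  best=2.0),   0.25),  # shallow soil -> vulnerable
    (standardize(clay_frac,    worst=0.5,  best=0.05),  0.15),  # high clay -> vulnerable
]

# Weighted linear combination; the weights sum to 1.
vulnerability = sum(weight * score for score, weight in criteria)
print(np.round(vulnerability, 2))  # per-cell vulnerability map
```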

Relevance:

20.00%

Publisher:

Abstract:

This thesis presents novel modelling applications for environmental geospatial data using remote sensing, GIS and statistical modelling techniques. The work addresses four main themes: (i) developing advanced geospatial databases: Paper (I) demonstrates the creation of a geospatial database for the Glanville fritillary butterfly (Melitaea cinxia) in the Åland Islands, south-western Finland; (ii) analysing species diversity and distribution using GIS techniques: Paper (II) presents a diversity and geographical distribution analysis for Scopulini moths at a worldwide scale; (iii) studying spatiotemporal forest cover change: Paper (III) presents a study of exotic and indigenous tree cover change detection in the Taita Hills, Kenya, using airborne imagery and GIS analysis techniques; and (iv) exploring predictive modelling techniques using geospatial data: in Paper (IV), human population occurrence and abundance in the Taita Hills highlands were predicted using the generalized additive modelling (GAM) technique; Paper (V) presents techniques to enhance fire prediction and burned area estimation at a regional scale in East Caprivi, Namibia; and Paper (VI) compares eight state-of-the-art predictive modelling methods to improve fire prediction, burned area estimation and fire risk mapping in East Caprivi, Namibia. The results in Paper (I) showed that geospatial data can be managed effectively using advanced relational database management systems. Metapopulation data for the Melitaea cinxia butterfly were successfully combined with GPS-delimited habitat patch information and climatic data. Using the geospatial database, spatial analyses were successfully conducted at the habitat patch level or at coarser analysis scales. Moreover, this study showed that, at a large scale, spatially correlated weather conditions appear to be one of the primary causes of spatially correlated changes in Melitaea cinxia population sizes. In Paper (II), the spatiotemporal characteristics of Scopulini moth species descriptions, diversity and distribution were analysed at a worldwide scale, and for the first time GIS techniques were used for Scopulini moth geographical distribution analysis. This study revealed that Scopulini moths have a cosmopolitan distribution. The majority of the species have been described from the low latitudes, with sub-Saharan Africa being the hot spot of species diversity. However, the taxonomical effort has been uneven among biogeographical regions. Paper (III) showed that forest cover change can be analysed in great detail using modern airborne imagery techniques and historical aerial photographs. However, when spatiotemporal forest cover change is studied, care has to be taken in co-registration and image interpretation when historical black-and-white aerial photography is used. In Paper (IV), human population distribution and abundance could be modelled with fairly good results using geospatial predictors and non-Gaussian predictive modelling techniques. Moreover, a land cover layer is not necessarily needed as a predictor, because first- and second-order image texture measurements derived from satellite imagery had more power to explain the variation in dwelling unit occurrence and abundance. Paper (V) showed that the generalized linear model (GLM) is a suitable technique for fire occurrence prediction and for burned area estimation. GLM-based burned area estimations were found to be superior to the existing MODIS burned area product (MCD45A1). However, spatial autocorrelation of fires has to be taken into account when using the GLM technique for fire occurrence prediction. Paper (VI) showed that novel statistical predictive modelling techniques can be used to improve fire prediction, burned area estimation and fire risk mapping at a regional scale. However, some noticeable variation remained between the different predictive modelling techniques in fire occurrence prediction and burned area estimation.
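To make the GLM approach to fire occurrence prediction mentioned for Paper (V) concrete, the sketch below fits a logistic (binomial) GLM to per-cell predictors with the Python statsmodels library. The predictor names and the synthetic data are assumptions invented for the example, not the East Caprivi data or the thesis model.

```python
# Illustrative binomial GLM for fire occurrence (synthetic data, hypothetical
# predictors; not the actual East Caprivi dataset or the thesis model).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500

# Hypothetical per-cell predictors.
ndvi      = rng.uniform(0.1, 0.8, n)     # vegetation "fuel" proxy
dist_road = rng.uniform(0.0, 20.0, n)    # km to nearest road (ignition proxy)
rain_30d  = rng.uniform(0.0, 120.0, n)   # mm of rain in the previous 30 days

# Synthetic "true" process for the example: more fuel and less rain -> more fires.
logit = -1.0 + 3.0 * ndvi - 0.02 * rain_30d - 0.05 * dist_road
fire = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([ndvi, dist_road, rain_30d]))
model = sm.GLM(fire, X, family=sm.families.Binomial()).fit()
print(model.summary())

# Predicted probability of fire occurrence per cell.
p_fire = model.predict(X)
print("mean predicted probability:", p_fire.mean())
```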

Relevance:

20.00%

Publisher:

Abstract:

The Low Level Virtual Machine (LLVM) is a modern compiler architecture focused on whole-program, lifelong optimization. The Java virtual machine, in turn, is a popular high-level virtual machine on which the implementations of many programming languages are nowadays based. This thesis introduces the LLVM system, originally created to implement a high-performance C and C++ compiler, and evaluates how well the LLVM infrastructure supports the implementation of a Java virtual machine. In addition, the thesis considers how the runtime, source-language-dependent optimization often needed by dynamic languages can be supported in the source-language-independent LLVM system. Finally, the thesis presents a development proposal for a general garbage collection infrastructure in LLVM, which would support the implementation of languages with automatic dynamic memory management, such as Java and its virtual machine.

Relevance:

20.00%

Publisher:

Abstract:

In recent years, XML has been widely adopted as a universal format for structured data. A variety of XML-based systems have emerged, most prominently SOAP for Web services, XMPP for instant messaging, and RSS and Atom for content syndication. This popularity is helped by the excellent support for XML processing in many programming languages and by the variety of XML-based technologies for more complex needs of applications. Concurrently with this rise of XML, there has also been a qualitative expansion of the Internet's scope. Namely, mobile devices are becoming capable enough to be full-fledged members of various distributed systems. Such devices are battery-powered, their network connections are based on wireless technologies, and their processing capabilities are typically much lower than those of stationary computers. This dissertation presents work performed to try to reconcile these two developments. XML as a highly redundant text-based format is not obviously suitable for mobile devices that need to avoid extraneous processing and communication. Furthermore, the protocols and systems commonly used in XML messaging are often designed for fixed networks and may make assumptions that do not hold in wireless environments. This work identifies four areas of improvement in XML messaging systems: the programming interfaces to the system itself and to XML processing, the serialization format used for the messages, and the protocol used to transmit the messages. We show a complete system that improves the overall performance of XML messaging through consideration of these areas. The work is centered on actually implementing the proposals in a form usable on real mobile devices. The experimentation is performed on actual devices and real networks using the messaging system implemented as a part of this work. The experimentation is extensive and, due to using several different devices, also provides a glimpse of what the performance of these systems may look like in the future.
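One of the improvement areas named above, the programming interface to XML processing, matters on constrained devices because tree-building parsers keep the whole document in memory. As a rough illustration of the event-style alternative, the Python sketch below streams over a document with xml.etree.ElementTree.iterparse and discards elements as soon as they have been handled; it is a generic example, not the interfaces developed in this dissertation.

```python
# Illustrative streaming (event-based) XML processing; a generic sketch,
# not the API designed in the dissertation.
import io
import xml.etree.ElementTree as ET

xml_data = b"""<messages>
  <msg to="alice">hello</msg>
  <msg to="bob">ping</msg>
</messages>"""

# iterparse yields (event, element) pairs as the parser advances, so only a
# small window of the document needs to be in memory at any one time.
for event, elem in ET.iterparse(io.BytesIO(xml_data), events=("end",)):
    if elem.tag == "msg":
        print(elem.get("to"), "<-", elem.text)
        elem.clear()  # free the processed subtree instead of keeping the full tree
```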

Relevance:

20.00%

Publisher:

Abstract:

XML documents are becoming more and more common in various environments. In particular, enterprise-scale document management is commonly centred around XML, and desktop applications as well as online document collections are soon to follow. The growing number of XML documents increases the importance of appropriate indexing methods and search tools in keeping the information accessible. Therefore, we focus on content that is stored in XML format as we develop such indexing methods. Because XML is used for different kinds of content ranging all the way from records of data fields to narrative full-texts, the methods for Information Retrieval are facing a new challenge in identifying which content is subject to data queries and which should be indexed for full-text search. In response to this challenge, we analyse the relation of character content and XML tags in XML documents in order to separate the full-text from data. As a result, we are able to both reduce the size of the index by 5-6% and improve the retrieval precision as we select the XML fragments to be indexed. Besides being challenging, XML comes with many unexplored opportunities which have received little attention in the literature. For example, authors often tag the content they want to emphasise by using a typeface that stands out. The tagged content constitutes phrases that are descriptive of the content and useful for full-text search. They are simple to detect in XML documents, but they are also easy to confuse with other inline-level text. Nonetheless, the search results seem to improve when the detected phrases are given additional weight in the index. Similar improvements are reported when related content is associated with the indexed full-text, including titles, captions, and references. Experimental results show that, for certain types of document collections at least, the proposed methods help us find the relevant answers. Even when we know nothing about the document structure but the XML syntax, we are able to take advantage of the XML structure when the content is indexed for full-text search.
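The separation of data-oriented from full-text content described above can be approximated by measuring how much running text each element carries relative to its markup. The sketch below is a simplified illustration of that idea in Python; the ratio heuristic and the threshold are assumptions for the example and do not reproduce the analysis of the thesis.

```python
# Simplified heuristic for separating "data" fragments from "full-text"
# fragments in XML (illustrative only; the threshold is an assumption).
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<article>
  <meta><id>4711</id><year>2007</year></meta>
  <sec>
    <p>XML documents are becoming more and more common in various
       environments, which increases the importance of indexing.</p>
  </sec>
</article>
""")

def text_to_markup_ratio(elem):
    """Characters of text content per node in the element's subtree."""
    text_len = sum(len((e.text or "") + (e.tail or "")) for e in elem.iter())
    node_count = sum(1 for _ in elem.iter())
    return text_len / node_count

for child in doc:
    kind = "full-text" if text_to_markup_ratio(child) > 20 else "data"
    print(child.tag, "->", kind)   # e.g. meta -> data, sec -> full-text
```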

Relevance:

20.00%

Publisher:

Relevance:

20.00%

Publisher:

Abstract:

Current smartphones have a storage capacity of several gigabytes, and more and more information is stored on mobile devices. To meet the challenge of information organization, we turn to desktop search. Users often possess multiple devices and synchronize (subsets of) information between them, which makes file synchronization more important. This thesis presents Dessy, a desktop search and synchronization framework for mobile devices. Dessy uses desktop search techniques, such as indexing, query and index term stemming, and search relevance ranking. Dessy finds files by their content, metadata, and context information. For example, PDF files may be found by their author, subject, title, or text, and EXIF data of JPEG files may be used in finding them. User-defined tags can be added to files to organize and retrieve them later. Retrieved files are ranked according to their relevance to the search query. The Dessy prototype uses the BM25 ranking function, used widely in information retrieval. Dessy provides an interface for locating files for both users and applications. Dessy is closely integrated with the Syxaw file synchronizer, which provides efficient file and metadata synchronization, optimizing network usage. Dessy supports synchronization of search results, individual files, and directory trees. It allows finding and synchronizing files that reside on remote computers or on the Internet. Dessy is designed to solve the problem of efficient mobile desktop search and synchronization, also supporting remote and Internet search. Remote searches may be carried out offline using a downloaded index, or while connected to the remote machine on a weak network. To secure user data, transmissions between the Dessy client and server are encrypted using symmetric encryption, and symmetric encryption keys are exchanged with RSA key exchange. Dessy emphasizes extensibility; even the cryptography can be extended. Users may tag their files with context tags and control custom file metadata. Adding new indexed file types, metadata fields, ranking methods, and index types is easy. Finding files is done with virtual directories, which are views into the user's files, browseable by regular file managers. On mobile devices, the Dessy GUI provides easy access to the search and synchronization system. This thesis includes results of Dessy synchronization and search experiments, including power usage measurements. Finally, Dessy has been designed with mobility and device constraints in mind. It requires only MIDP 2.0 Mobile Java with FileConnection support, and Java 1.5 on desktop machines.
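The abstract notes that Dessy ranks results with the BM25 function. For readers unfamiliar with it, the sketch below is a generic, textbook-style BM25 scorer in Python using the common default parameters k1 = 1.2 and b = 0.75; it is an illustration of the ranking function, not Dessy's implementation.

```python
# Generic BM25 ranking sketch (textbook form with common defaults;
# not Dessy's implementation).
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, all_docs, k1=1.2, b=0.75):
    """Score one document (a list of terms) against a query."""
    N = len(all_docs)
    avgdl = sum(len(d) for d in all_docs) / N
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in all_docs if term in d)       # document frequency
        if df == 0:
            continue
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)  # "plus one" IDF variant
        f = tf[term]
        score += idf * (f * (k1 + 1)) / (f + k1 * (1 - b + b * len(doc_terms) / avgdl))
    return score

docs = [["mobile", "desktop", "search"],
        ["file", "synchronization", "mobile"],
        ["holiday", "photos"]]
query = ["mobile", "search"]
ranked = sorted(docs, key=lambda d: bm25_score(query, d, docs), reverse=True)
print(ranked[0])  # the document most relevant to the query
```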

Relevance:

20.00%

Publisher:

Abstract:

In recent years, XML has been accepted as the format of messages for several applications. Prominent examples include SOAP for Web services, XMPP for instant messaging, and RSS and Atom for content syndication. This XML usage is understandable, as the format itself is a well-accepted standard for structured data, and it has excellent support in many popular programming languages, so inventing an application-specific format no longer seems worth the effort. Simultaneously with XML's rise to prominence, there has been an upsurge in the number and capabilities of various mobile devices. These devices are connected through various wireless technologies to larger networks, and a goal of current research is to integrate them seamlessly into these networks. These two developments seem to be at odds with each other. XML as a fully text-based format takes up more processing power and network bandwidth than binary formats would, whereas the battery-powered nature of mobile devices dictates that energy, both in processing and transmitting, be utilized efficiently. This thesis presents the work we have performed to reconcile these two worlds. We present a message transfer service that we have developed to address what we have identified as the three key issues: XML processing at the application level, a more efficient XML serialization format, and the protocol used to transfer messages. Our presentation includes both a high-level architectural view of the whole message transfer service and detailed descriptions of the three new components. These components consist of an API, and an associated data model, for XML processing designed for messaging applications; a binary serialization format for the data model of the API; and a message transfer protocol providing two-way messaging capability with support for client mobility. We also present relevant performance measurements for the service and its components. As a result of this work, we do not consider XML to be inherently incompatible with mobile devices. As the fixed networking world moves toward XML for interoperable data representation, so should the wireless world, to provide a better-integrated networking infrastructure. However, the problems that XML adoption brings touch all of the higher layers of application programming, so instead of concentrating simply on the serialization format, we conclude that improvements need to be made in an integrated fashion in all of these layers.
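To make the size argument for a binary serialization format concrete, the sketch below encodes a small stream of XML-like events by replacing repeated element and attribute names with single-byte dictionary codes and compares the result with the textual form. This is a deliberately naive illustration of the general idea, not the serialization format developed in this thesis.

```python
# Naive "token dictionary" encoding of an XML event stream, purely to
# illustrate why binary serializations can be smaller than textual XML.
# This is NOT the serialization format developed in the thesis.

textual = b'<msg to="alice"><body>hello</body></msg>' * 50

def encode(events, table):
    """Encode (kind, name, value) events; names come from a fixed code table."""
    out = bytearray()
    for kind, name, value in events:
        out.append({"start": 0x01, "attr": 0x02, "text": 0x03, "end": 0x04}[kind])
        out.append(table.get(name, 0x00))          # 1-byte code instead of the name
        data = value.encode("utf-8")
        out.append(len(data))                      # assumes short values (< 256 bytes)
        out += data
    return bytes(out)

table = {"msg": 0x10, "to": 0x11, "body": 0x12, "": 0x00}
one_message = [("start", "msg", ""), ("attr", "to", "alice"),
               ("start", "body", ""), ("text", "", "hello"),
               ("end", "body", ""), ("end", "msg", "")]
binary = encode(one_message * 50, table)

print("textual bytes:", len(textual))   # 40 bytes per message * 50
print("binary bytes: ", len(binary))    # noticeably smaller for repetitive markup
```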

Relevance:

20.00%

Publisher:

Abstract:

Research on social network sites has examined how people integrate offline and online life, but with a particular emphasis on their use by friendship groups. We extend earlier work by examining a case in which offline ties are non-existent, but online ties strong. Our case is a study of bodybuilders, who explore their passion with like-minded offline 'strangers' in tightly integrated online communities. We show that the integration of offline and online life supports passion-centric activities, such as bodybuilding.

Relevance:

20.00%

Publisher:

Abstract:

Context sensitive pointer analyses based on Whaley and Lam’s bddbddb system have been shown to scale to large Java programs. We provide a technique to incorporate flow sensitivity for Java fields into one such analysis and obtain an escape analysis based on it. First, we express an intraprocedural field flow sensitive analysis, using Fink et al.’s Heap Array SSA form in Datalog. We then extend this analysis interprocedurally by introducing two new φ functions for Heap Array SSA Form and adding deduction rules corresponding to them. Adding a few more rules gives us an escape analysis. We describe two types of field flow sensitivity: partial (PFFS) and full (FFFS), the former without strong updates to fields and the latter with strong updates. We compare these analyses with two different (field flow insensitive) versions of Whaley-Lam analysis: one of which is flow sensitive for locals (FS) and the other, flow insensitive for locals (FIS). We have implemented this analysis on the bddbddb system while using the SOOT open source framework as a front end. We have run our analysis on a set of 15 Java programs. Our experimental results show that the time taken by our field flow sensitive analyses is comparable to that of the field flow insensitive versions while doing much better in some cases. Our PFFS analysis achieves average reductions of about 23% and 30% in the size of the points-to sets at load and store statements respectively and discovers 71% more “caller-captured” objects than FIS.
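The practical difference between the partial (PFFS) and full (FFFS) variants is whether a store to a field may replace the field's previous points-to set (a strong update) or must only add to it (a weak update). The Python sketch below illustrates that distinction on a toy abstract heap; it is a conceptual illustration only, not the Datalog rules or the Heap Array SSA machinery described in the paper.

```python
# Toy abstract heap illustrating weak vs. strong updates to a field.
# Conceptual only; not the paper's Datalog / Heap Array SSA formulation.

# Field points-to sets: (abstract object, field) -> set of abstract objects.
heap = {("o1", "f"): {"a"}}

def store(heap, obj, field, value, strong):
    """Model 'obj.field = value' on the abstract heap."""
    key = (obj, field)
    if strong:
        heap[key] = {value}                       # strong update: old targets killed
    else:
        heap.setdefault(key, set()).add(value)    # weak update: old targets kept

weak = {k: set(v) for k, v in heap.items()}
full = {k: set(v) for k, v in heap.items()}

store(weak, "o1", "f", "b", strong=False)   # PFFS-style: no strong updates
store(full, "o1", "f", "b", strong=True)    # FFFS-style: strong update allowed

print("without strong updates:", weak[("o1", "f")])  # {'a', 'b'} -- less precise
print("with strong updates:   ", full[("o1", "f")])  # {'b'}      -- more precise
```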

Relevance:

20.00%

Publisher:

Abstract:

Extensible Markup Language (XML) has emerged as a medium for interoperability over the Internet. As the number of documents published in the form of XML is increasing, there is a need for selective dissemination of XML documents based on user interests. In the proposed technique, a combination of Adaptive Genetic Algorithms and a multi-class Support Vector Machine (SVM) is used to learn a user model. Based on the feedback from the users, the system automatically adapts to the user's preferences and interests. The user model and a similarity metric are used for selective dissemination of a continuous stream of XML documents. Experimental evaluations performed over a wide range of XML documents indicate that the proposed approach significantly improves the performance of the selective dissemination task, with respect to accuracy and efficiency.
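A minimal, hedged illustration of the multi-class SVM part of such a user model is sketched below with scikit-learn: document text is vectorized with TF-IDF and an SVM assigns it to one of a few interest categories. The categories, training snippets and parameters are invented for the example, and the adaptive genetic-algorithm component of the proposed technique is not shown.

```python
# Minimal multi-class SVM user-interest model for XML document dissemination.
# Illustrative only: toy categories and training texts; the GA-based
# adaptation of the proposed technique is not modelled here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

train_docs = [
    "<doc><topic>cricket</topic><body>batting averages and test matches</body></doc>",
    "<doc><topic>markets</topic><body>stock prices fell amid rate fears</body></doc>",
    "<doc><topic>hardware</topic><body>new mobile processors cut power use</body></doc>",
]
train_labels = ["sports", "finance", "technology"]

# TF-IDF features over the raw (textual) XML, then a linear multi-class SVM.
model = make_pipeline(TfidfVectorizer(), SVC(kernel="linear"))
model.fit(train_docs, train_labels)

incoming = "<doc><body>interest rates and bond markets rallied today</body></doc>"
print(model.predict([incoming])[0])  # e.g. "finance"; dissemination would match
                                     # this label against the user's interest profile
```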

Relevance:

20.00%

Publisher:

Abstract:

XML has emerged as a medium for interoperability over the Internet. As the number of documents published in the form of XML is increasing, there is a need for selective dissemination of XML documents based on user interests. In the proposed technique, a combination of the Self Adaptive Migration Model Genetic Algorithm (SAMCA)[5] and a multi-class Support Vector Machine (SVM) is used to learn a user model. Based on the feedback from the users, the system automatically adapts to the user's preferences and interests. The user model and a similarity metric are used for selective dissemination of a continuous stream of XML documents. Experimental evaluations performed over a wide range of XML documents indicate that the proposed approach significantly improves the performance of the selective dissemination task, with respect to accuracy and efficiency.