590 results for Automatized Indexing
Abstract:
Background: Australian policy mandates consumer and carer participation in mental health services at all levels, including research. Inspired by a UK model, the Service Users Group Advising on Research (SUGAR), we conducted a scoping project in 2013 with a view to creating a consumer- and carer-led research process that moves beyond stigma and tokenism, values the unique knowledge of lived experience, and leads to people being treated better when accessing services. This poster presents the initial findings.
Aims: The project's purpose was to explore with consumers, consumer companions and carers at Metro North Mental Health-RBWH their interest in and views about research partnerships with academic and clinical colleagues.
Methods: This poster overviews the initial findings from three audio-recorded focus groups conducted with a total of 14 consumers, carers and consumer companions at the Brisbane site.
Analysis: Our work was guided by framework analysis (Gale et al. 2013), which defines five steps for analysing narrative data: familiarising, development of categories, indexing, charting and interpretation. Eight main ideas were initially developed and divided between the authors for further indexing. This process identified 37 related analytic ideas, which the authors integrated by combining, removing and redefining them by consensus through a mapping process. The final step is the return of the analysis to the participants for feedback and input into the interpretation of the focus group discussions.
Results:
1. Value & Respect: feeling valued and respected, tokenism, stigma, governance, valuing prior knowledge and background.
2. Pathways to Knowledge and Involvement in Research: 'where to begin', support, unity and partnership, communication, co-ordination, flexibility due to fluctuating capacity.
3. Personal Context: barriers regarding commitments and the nature of mental illness, wellbeing needs, prior experience of research, motivators, attributes.
4. What is Research? Developing knowledge; what to do research on, how and why.
Conclusion and Discussion: Initial analysis suggests that participants saw potential for 'amazing things' in mental health research, such as reflecting their priorities and moving beyond stigma and tokenism. The main needs identified were education, mentoring, funding support, and research processes that fit consumers' and carers' limitations and fluctuating capacities. Participants identified maintaining motivation and interest as an issue, since research processes are often extended by ethics and funding applications. They felt that consumer- and carer-led research would value the unique knowledge that the lived experience of consumers and carers brings and lead to people being treated better when accessing services.
Abstract:
Segmentation is a data mining technique yielding simplified representations of sequences of ordered points. A sequence is divided into some number of homogeneous blocks, and all points within a segment are described by a single value. The focus in this thesis is on piecewise-constant segments, where the most likely description for each segment and the most likely segmentation into some number of blocks can be computed efficiently. Representing sequences as segmentations is useful in, e.g., storage and indexing tasks in sequence databases, and segmentation can be used as a tool in learning about the structure of a given sequence. The discussion in this thesis begins with basic questions related to segmentation analysis, such as choosing the number of segments, and evaluating the obtained segmentations. Standard model selection techniques are shown to perform well for the sequence segmentation task. Segmentation evaluation is proposed with respect to a known segmentation structure. Applying segmentation on certain features of a sequence is shown to yield segmentations that are significantly close to the known underlying structure. Two extensions to the basic segmentation framework are introduced: unimodal segmentation and basis segmentation. The former is concerned with segmentations where the segment descriptions first increase and then decrease, and the latter with the interplay between different dimensions and segments in the sequence. These problems are formally defined and algorithms for solving them are provided and analyzed. Practical applications for segmentation techniques include time series and data stream analysis, text analysis, and biological sequence analysis. In this thesis segmentation applications are demonstrated in analyzing genomic sequences.
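To make the efficiency claim concrete, the following is a minimal Python sketch of the standard O(n²k) dynamic program for optimal piecewise-constant segmentation under squared error; under i.i.d. Gaussian noise the least-squares fit coincides with the most likely segment description. The function name and the least-squares cost are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def segment(x, k):
    """Optimal k-segment piecewise-constant approximation of x
    (least-squares error), via O(n^2 k) dynamic programming."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Prefix sums give O(1) squared-error cost for any segment [i, j).
    s = np.concatenate(([0.0], np.cumsum(x)))
    s2 = np.concatenate(([0.0], np.cumsum(x * x)))

    def cost(i, j):  # squared error of describing x[i:j] by its mean
        m = (s[j] - s[i]) / (j - i)
        return (s2[j] - s2[i]) - m * (s[j] - s[i])

    E = np.full((k + 1, n + 1), np.inf)      # E[h][j]: best h-segment error on x[:j]
    B = np.zeros((k + 1, n + 1), dtype=int)  # backpointers to segment starts
    E[0][0] = 0.0
    for h in range(1, k + 1):
        for j in range(h, n + 1):
            for i in range(h - 1, j):
                c = E[h - 1][i] + cost(i, j)
                if c < E[h][j]:
                    E[h][j], B[h][j] = c, i
    # Recover segment boundaries by walking the backpointers.
    bounds, j = [], n
    for h in range(k, 0, -1):
        bounds.append(j)
        j = B[h][j]
    return sorted(bounds)  # right boundaries of the k segments
```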
Abstract:
Employees and students at the University of Helsinki use various services that require authentication, some of which require strong authentication. Traditionally this has been realized by meeting in person and presenting an official identification card. Some of these online services can be automated by implementing existing techniques for strong authentication. Currently, strong authentication is implemented by the VETUMA service; mobile authentication is an interesting alternative method. The purpose of this paper is to study Mobile Signature Service technology and to find out the benefits and possibilities of its use for mobile authentication at the University of Helsinki. Mobile authentication is a suitable method for implementing strong authentication and for signing documents digitally, and it can be used in many different ways at the University of Helsinki.
Abstract:
Current smartphones have a storage capacity of several gigabytes, and more and more information is stored on mobile devices. To meet the challenge of organizing this information, we turn to desktop search. Users often possess multiple devices and synchronize (subsets of) information between them, which makes file synchronization increasingly important. This thesis presents Dessy, a desktop search and synchronization framework for mobile devices. Dessy uses desktop search techniques such as indexing, query and index term stemming, and search relevance ranking. Dessy finds files by their content, metadata, and context information. For example, PDF files may be found by their author, subject, title, or text, and the EXIF data of JPEG files may be used in finding them. User-defined tags can be added to files to organize and retrieve them later. Retrieved files are ranked according to their relevance to the search query; the Dessy prototype uses the BM25 ranking function, which is widely used in information retrieval. Dessy provides an interface for locating files for both users and applications. Dessy is closely integrated with the Syxaw file synchronizer, which provides efficient file and metadata synchronization, optimizing network usage. Dessy supports synchronization of search results, individual files, and directory trees, and it allows finding and synchronizing files that reside on remote computers or the Internet. Dessy is designed to solve the problem of efficient mobile desktop search and synchronization, also supporting remote and Internet search. Remote searches may be carried out offline using a downloaded index, or while connected to the remote machine over a weak network. To secure user data, transmissions between the Dessy client and server are encrypted using symmetric encryption, with the symmetric keys exchanged via RSA key exchange. Dessy emphasizes extensibility: even the cryptography can be extended, users may tag their files with context tags and control custom file metadata, and adding new indexed file types, metadata fields, ranking methods, and index types is easy. Finding files is done with virtual directories, which are views into the user's files, browsable by regular file managers. On mobile devices, the Dessy GUI provides easy access to the search and synchronization system. This thesis includes results of Dessy synchronization and search experiments, including power usage measurements. Finally, Dessy has been designed with mobility and device constraints in mind: it requires only MIDP 2.0 Mobile Java with FileConnection support, and Java 1.5 on desktop machines.
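As an illustration of the ranking step, here is a minimal sketch of the BM25 function the abstract mentions; the parameter defaults k1=1.2 and b=0.75 are common conventions, not Dessy's documented settings, and the function name is hypothetical.

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, corpus, k1=1.2, b=0.75):
    """BM25 relevance of one document to a query, given the corpus
    (a list of token lists) for document-frequency statistics."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N      # average document length
    tf = Counter(doc_terms)
    score = 0.0
    for q in query_terms:
        n_q = sum(1 for d in corpus if q in d)             # document frequency
        idf = math.log((N - n_q + 0.5) / (n_q + 0.5) + 1)  # smoothed IDF
        f = tf[q]                                          # term frequency in doc
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc_terms) / avgdl))
    return score
```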
Abstract:
Automatic identification of software faults has enormous practical significance. It requires characterizing program execution behavior and applying appropriate data mining techniques to the chosen representation. In this paper, we use the sequence of system calls to characterize program execution. The data mining tasks addressed are learning to map system call streams to fault labels and automatic identification of fault causes. Spectrum kernels and SVMs are used for the former, while latent semantic analysis is used for the latter. The techniques are demonstrated on the intrusion dataset containing system call traces. The results show that the kernel techniques are as accurate as the best available results but are faster by orders of magnitude. We also show that latent semantic indexing is capable of revealing fault-specific features.
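To illustrate the first task, here is a minimal sketch of a p-spectrum kernel over system call traces; the function name and the k=3 default are illustrative assumptions. The resulting Gram matrix could then be fed to an SVM with a precomputed kernel.

```python
from collections import Counter

def spectrum_kernel(seq_a, seq_b, k=3):
    """p-spectrum kernel: dot product of the k-gram count vectors of
    two sequences (here, streams of system call names)."""
    grams_a = Counter(tuple(seq_a[i:i + k]) for i in range(len(seq_a) - k + 1))
    grams_b = Counter(tuple(seq_b[i:i + k]) for i in range(len(seq_b) - k + 1))
    # Only k-grams common to both traces contribute to the inner product.
    return sum(c * grams_b[g] for g, c in grams_a.items() if g in grams_b)

# e.g. spectrum_kernel(["open", "read", "read", "close"],
#                      ["open", "read", "read", "write"], k=2)
```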
Abstract:
A repetitive sequence collection is one where portions of a base sequence of length n are repeated many times with small variations, forming a collection of total length N. Examples of such collections are version control data and genome sequences of individuals, where the differences can be expressed by lists of basic edit operations. Flexible and efficient data analysis on such a typically huge collection is feasible using suffix trees. However, a suffix tree occupies O(N log N) bits, which very soon prohibits in-memory analyses. Recent advances in full-text self-indexing reduce the space of the suffix tree to O(N log σ) bits, where σ is the alphabet size. In practice, the space reduction is more than 10-fold, for example on the suffix tree of the human genome. However, this reduction factor remains constant when more sequences are added to the collection. We develop a new family of self-indexes suited for the repetitive sequence collection setting. Their expected space requirement depends only on the length n of the base sequence and the number s of variations in its repeated copies. That is, the space reduction factor is no longer constant, but depends on N / n. We believe the structures developed in this work will provide a fundamental basis for storage and retrieval of individual genomes as they become available due to rapid progress in sequencing technologies.
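As a toy illustration of the storage idea behind the space argument (not of the self-index structures themselves), the sketch below represents each repeated copy by the base sequence plus an edit list, so a collection of total length N can be stored in space roughly proportional to n plus the total number of edits s; the operation encoding is an assumption.

```python
def apply_edits(base, edits):
    """Reconstruct one repeated copy from the base sequence and its
    edit list. Each edit is (op, position, payload): 'sub' replaces one
    symbol, 'ins' inserts a string, 'del' removes one symbol. Positions
    refer to the base; applying edits right to left keeps earlier
    positions valid."""
    s = list(base)
    for op, pos, payload in sorted(edits, key=lambda e: e[1], reverse=True):
        if op == 'sub':
            s[pos] = payload
        elif op == 'ins':
            s[pos:pos] = list(payload)
        elif op == 'del':
            del s[pos]
    return ''.join(s)

# Two copies of a 12-symbol base, stored as the base plus 3 edits in total:
base = "ACGTACGTACGT"
copies = [apply_edits(base, [('sub', 3, 'G')]),
          apply_edits(base, [('del', 0, None), ('ins', 6, 'TT')])]
```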
Abstract:
One of the effects of the Internet is that the dissemination of scientific publications has, within a few years, migrated to electronic formats. The basic business practices between libraries and publishers for selling and buying content, however, have not changed much. In protest against the high subscription prices of mainstream publishers, scientists have started Open Access (OA) journals and e-print repositories, which distribute scientific information freely. Despite widespread agreement among academics that OA would be the optimal distribution mode for publicly financed research results, such channels still constitute only a marginal phenomenon in the global scholarly communication system. This paper discusses, in view of the experiences of the last ten years, the many barriers hindering a rapid proliferation of Open Access. The discussion is structured according to the main OA channels: peer-reviewed journals for primary publishing, and subject-specific and institutional repositories for secondary parallel publishing. It also discusses the types of barriers, which can be classified as the legal framework, the information technology infrastructure, business models, indexing services and standards, the academic reward system, marketing, and critical mass.
Abstract:
The variety of electron diffraction patterns arising from the decagonal phase has been explored using a stereographic analysis that generates the important zone axes as intersection points corresponding to important reciprocal lattice vectors. An indexing scheme employing a set of five vectors and an orthogonal vector has been followed. Systematic tilting from the decagonal axis to one of the twofold axes was used to generate a set of experimental diffraction patterns, which match the patterns expected from the stereographic analysis with excellent agreement.
Abstract:
This paper discusses a method for scaling SVMs with the Gaussian kernel function to handle large data sets by using a selective sampling strategy for the training set. It employs a scalable hierarchical clustering algorithm to construct cluster indexing structures of the training data in the kernel-induced feature space. These structures are then used for selective sampling of the training data, imparting scalability to the SVM training process. Empirical studies on real-world data sets show that the proposed strategy performs well on large data sets.
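The following is a hedged sketch of the general strategy, using scikit-learn's Birch as a stand-in for the paper's scalable hierarchical clustering and uniform within-cluster sampling as a stand-in for its index-based selection; the function name and parameters are illustrative, not the paper's algorithm.

```python
import numpy as np
from sklearn.cluster import Birch   # scalable hierarchical clustering
from sklearn.svm import SVC

def selective_sample_fit(X, y, n_clusters=50, per_cluster=5, gamma=0.5):
    """Cluster the training set (numpy arrays X, y), keep a few points
    per cluster, and train a Gaussian-kernel SVM on the reduced sample."""
    labels = Birch(n_clusters=n_clusters).fit_predict(X)
    rng = np.random.default_rng(0)
    keep = []
    for c in range(n_clusters):
        members = np.flatnonzero(labels == c)
        keep.extend(rng.choice(members, size=min(per_cluster, len(members)),
                               replace=False))
    keep = np.asarray(keep)
    # Train only on the selected representatives.
    return SVC(kernel='rbf', gamma=gamma).fit(X[keep], y[keep])
```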
Abstract:
Following the discovery of two-dimensional quasicrystals in rapidly solidified Al-Mn alloys by us and by L. Bendersky in 1985, a number of fascinating studies have been conducted to unravel the atomic configuration of quasicrystals with decagonal symmetry. A comprehensive mapping of the reciprocal space of decagonal quasicrystals is now available. The interpretation of the diffraction patterns brings out the comparative advantages of the various indexing schemes. In addition, the nature of the variable periodicity can be addressed as a form of polytypism. The relation between decagonal quasicrystals and their crystalline homologues will be explored, with emphasis on Al60Mn11Ni4 and 'Al3Mn'. It will also be shown that decagonal quasicrystals are closely related to icosahedral quasicrystals, icosahedral twins, and vacancy-ordered phases.
Abstract:
We describe a compiler for the Flat Concurrent Prolog language on a message-passing multiprocessor architecture. This compiler permits symbolic and declarative programming in the syntax of Guarded Horn Rules. The implementation has been verified and tested on the 64-node PARAM parallel computer developed by C-DAC (Centre for the Development of Advanced Computing, India). Flat Concurrent Prolog (FCP) is a logic programming language designed for concurrent programming and parallel execution. It is a process-oriented language which embodies dataflow synchronization and guarded commands as its basic control mechanisms. An identical algorithm is executed on every processor in the network. We assume regular network topologies like mesh, ring, etc., and each node has a local memory. The algorithm comprises two important parts: reduction and communication. The most difficult task is to integrate the solutions of the problems that arise in the implementation in a coherent and efficient manner. We have tested the efficacy of the compiler on various benchmark problems of the ICOT project that have been reported in the recent book by Evan Tick; these include Quicksort, 8-queens, and prime number generation. The results of the preliminary tests are favourable. We are currently examining issues like indexing and load balancing to further optimize our compiler.
Abstract:
Purpose - Many library automation packages are available as open-source software, comprising two modules: a staff-client module and an online public access catalogue (OPAC). Although the OPACs of these library automation packages provide advanced features for searching and retrieving bibliographic records, none of them facilitates full-text searching. Most of the available open-source digital library software facilitates indexing and searching of full-text documents in different formats. This paper makes an effort to enable full-text search features in the widely used open-source library automation package Koha, by integrating it with two open-source digital library software packages, Greenstone Digital Library Software (GSDL) and Fedora Generic Search Service (FGSS), independently.
Design/methodology/approach - The implementation makes use of the Search/Retrieve via URL (SRU) feature available in Koha, GSDL, and FGSS. The full-text documents are indexed both in Koha and in GSDL or FGSS.
Findings - Full-text searching capability in Koha is achieved by integrating either GSDL or FGSS into Koha and by passing an SRU request from Koha to GSDL or FGSS. The full-text documents are indexed both in the library automation package (Koha) and in the digital library software (GSDL, FGSS).
Originality/value - This is the first implementation enabling the full-text search feature in library automation software by integrating it with digital library software.
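For concreteness, here is a minimal sketch of an SRU searchRetrieve request of the kind one system could pass to another; the endpoint URL and the 'fulltext' CQL index are hypothetical, since the supported indexes vary by installation.

```python
import urllib.parse
import urllib.request

def sru_search(base_url, cql_query, max_records=10):
    """Build and send a minimal SRU searchRetrieve request, returning
    the raw XML response containing the matching records."""
    params = {
        'operation': 'searchRetrieve',   # standard SRU parameters
        'version': '1.1',
        'query': cql_query,              # query expressed in CQL
        'maximumRecords': str(max_records),
    }
    url = base_url + '?' + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        return resp.read()

# e.g. sru_search('http://example.org/sru', 'fulltext=digital libraries')
```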
Abstract:
GaAs/Ge heterostructures having abrupt interfaces were grown on 2°, 6°, and 9° off-cut Ge substrates and investigated by cross-sectional high-resolution transmission electron microscopy (HRTEM), scanning electron microscopy, photoluminescence (PL) spectroscopy, and electrochemical capacitance-voltage (ECV) profiling. The GaAs films were grown on the off-oriented Ge substrates at growth temperatures in the range of 600-700 °C, growth rates of 3-12 μm/hr, and V/III ratios of 29-88. Lattice indexing of the HRTEM images exhibits excellent lattice-line matching between the GaAs and the Ge substrate. The PL spectra from the GaAs layer on the 6° off-cut Ge substrate show a higher excitonic peak compared with the 2° and 9° off-cut Ge substrates. In addition, the luminescence intensity from the GaAs solar cell grown on the 6° off-cut substrate is higher than on the 9° off-cut Ge substrate, signifying the potential use of 6° off-cut Ge substrates in the GaAs solar cell industry. The ECV profiling shows an abrupt film/substrate interface as well as abrupt interfaces between the various layers of the solar cell structures.
Abstract:
Large instruction windows and issue queues are key to exploiting greater instruction-level parallelism in out-of-order superscalar processors. However, the cycle time and energy consumption of conventional large monolithic issue queues are high. Previous efforts to reduce cycle time segment the issue queue and pipeline wakeup; unfortunately, this results in significant IPC loss. Other proposals address energy efficiency by avoiding only the unnecessary tag comparisons but do not reduce broadcasts, and these schemes also increase the issue latency. To address both issues comprehensively, we propose the Scalable Low-power Issue Queue (SLIQ). SLIQ augments a pipelined issue queue with direct indexing to mitigate the problem of delayed wakeups while reducing the cycle time. The SLIQ design also naturally leads to significant energy savings by reducing both the number of tag broadcasts and the number of comparisons required. A 2-segment SLIQ incurs an average IPC loss of 0.2% over the entire SPEC CPU2000 suite, while achieving a 25.2% reduction in issue latency when compared to a monolithic 128-entry issue queue for an 8-wide superscalar processor. An 8-segment SLIQ improves scalability by reducing the issue latency by 38.3% while incurring an IPC loss of only 2.3%. Further, the 8-segment SLIQ significantly reduces the energy consumption and the energy-delay product, by 48.3% and 67.4% respectively on average.
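The following is a toy software model of the contrast the abstract draws between broadcast wakeup and direct-indexed wakeup; it illustrates the idea only, not the SLIQ hardware design, and all names are invented.

```python
class BroadcastIQ:
    """Conventional wakeup: a completing destination tag is broadcast and
    compared against both source-operand tags of every queue entry."""
    def __init__(self, entries):
        # each entry: [src1_ready, src1_tag, src2_ready, src2_tag]
        self.entries = entries

    def wakeup(self, tag):
        for e in self.entries:               # 2 * len(entries) comparisons
            if not e[0] and e[1] == tag:
                e[0] = True
            if not e[2] and e[3] == tag:
                e[2] = True


class DirectIndexIQ:
    """Direct-indexed wakeup in the spirit of SLIQ: each producer tag maps
    to the (entry, operand) slots waiting on it, so completion touches
    only the actual consumers, with no broadcast and no tag comparisons."""
    def __init__(self):
        self.consumers = {}                  # tag -> [(entry, ready_slot), ...]

    def add_dependence(self, tag, entry, ready_slot):
        self.consumers.setdefault(tag, []).append((entry, ready_slot))

    def wakeup(self, tag):
        for entry, ready_slot in self.consumers.pop(tag, []):
            entry[ready_slot] = True         # wake just that source operand
```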