991 results for Data manipulation


Relevance:

60.00%

Abstract:

In recent years a variety of mobile apps, wearable technologies and embedded systems have emerged that allow individuals to track the amount and the quality of their sleep in their own beds. Despite the widespread adoption of these technologies, little is known about the challenges that current users face in tracking and analysing their sleep. Hence we conducted a qualitative study to examine the practices of current users of sleep tracking technologies and to identify challenges in current practice. Based on data collected from 5 online forums for users of sleep-tracking technologies, we identified 22 different challenges under the following 4 themes: tracking continuity, trust, data manipulation, and data interpretation. Based on these results, we propose 6 design opportunities to assist researchers and practitioners in designing sleep-tracking technologies.

Relevance:

60.00%

Abstract:

The recent spurt of research activity in the Entity-Relationship approach to databases calls for close scrutiny of the semantics of the underlying Entity-Relationship models, data manipulation languages, data definition languages, and so on. For well-known reasons, it is very desirable, and sometimes imperative, to give a formal description of the semantics. In this paper we consider a specific ER model, the generalized Entity-Relationship model (without attributes on relationships), and give a denotational semantics for the model as well as for a simple ER algebra based on it. Our formalism is based on the meta-language of the Vienna Development Method (VDM). We also discuss the salient features of the given semantics in detail and suggest directions for further work.
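
As a rough illustration of the kind of object to which such a semantics assigns meaning, the sketch below models an ER database state as plain sets and dictionaries and interprets two hypothetical algebra operations (selection over an entity set and navigation along a relationship). The representation and operation names are illustrative assumptions, not the paper's VDM definitions.

```python
# Minimal sketch of an ER-style state and two algebra operations.
# The representation and operation names are illustrative assumptions,
# not the VDM semantics given in the paper.

# An entity set is a dict: entity id -> attribute record.
employees = {
    "e1": {"name": "Ana", "dept": "R&D"},
    "e2": {"name": "Bo", "dept": "Sales"},
}
projects = {
    "p1": {"title": "Compiler"},
    "p2": {"title": "Database"},
}

# A relationship (without attributes, as in the paper's model) is a set
# of (entity id, entity id) pairs.
works_on = {("e1", "p1"), ("e1", "p2"), ("e2", "p2")}


def select(entity_set, predicate):
    """Selection: keep the entities whose record satisfies the predicate."""
    return {eid: rec for eid, rec in entity_set.items() if predicate(rec)}


def navigate(entity_ids, relationship):
    """Navigation: follow a relationship from a set of entity ids."""
    return {dst for src, dst in relationship if src in entity_ids}


rd_staff = select(employees, lambda rec: rec["dept"] == "R&D")
print(navigate(set(rd_staff), works_on))   # {'p1', 'p2'}
```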

Relevance:

60.00%

Abstract:

The MLML DBASE family of programs described here provides many of the algorithms used in oceanographic data reduction, general data manipulation, and line graphing. These programs provide a consistent file structure for serial data typically encountered in oceanography. This introduction should provide enough general knowledge to explain the scope of the programs and to run the basic MLML_DBASE programs. It is not intended as a programmer's guide. (PDF contains 50 pages)

Relevance:

60.00%

Abstract:

A data manipulation method has been developed for automatic peak recognition and result evaluation in the analysis of organic chlorinated hydrocarbons with dual-column gas chromatography. Based on the retention times of two internal standards, pentachlorotoluene and decachlorobiphenyl, the retention times of chlorinated hydrocarbons can be calibrated automatically and accurately. It is very convenient to identify the peaks by comparing the retention times of samples with the calibrated retention times calculated from the relative retention indices of standards. Meanwhile, with a suggested two-step evaluation method the evaluation coefficients and the suitable quantitative results of each component can be automatically achieved for practical samples in an analytical system using two columns with different polarities and two internal standards. (C) 2002 Elsevier Science B.V. All rights reserved.
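
A hedged sketch of the calibration idea described above: retention times are rescaled linearly using the two internal standards (pentachlorotoluene and decachlorobiphenyl), and peaks are then matched against expected retention times derived from relative retention indices. The index convention, the matching tolerance and all numbers below are assumptions for illustration, not the published procedure.

```python
# Illustrative sketch: two-point (linear) retention-time calibration with
# two internal standards, followed by simple nearest-match peak identification.
# The index convention, tolerance and numbers are assumptions, not the paper's.

def calibrate(rt, rt_std1_obs, rt_std2_obs, rt_std1_ref, rt_std2_ref):
    """Map an observed retention time onto the reference time scale using
    the two internal standards as anchor points."""
    scale = (rt_std2_ref - rt_std1_ref) / (rt_std2_obs - rt_std1_obs)
    return rt_std1_ref + (rt - rt_std1_obs) * scale


def identify(peak_rts, library, tolerance=0.1):
    """Assign each calibrated peak to the library compound whose expected
    retention time is closest, if within the tolerance (minutes)."""
    hits = {}
    for rt in peak_rts:
        name, ref = min(library.items(), key=lambda kv: abs(kv[1] - rt))
        if abs(ref - rt) <= tolerance:
            hits[rt] = name
    return hits


# Expected retention times (minutes) derived from relative retention indices.
library = {"alpha-HCH": 8.42, "heptachlor": 12.10, "p,p'-DDE": 18.75}

# Observed peaks, calibrated against the two standards (observed vs. reference).
obs_peaks = [8.55, 12.31, 19.02]
calibrated = [calibrate(rt, 6.20, 24.80, 6.00, 24.50) for rt in obs_peaks]
print(identify(calibrated, library))
```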

Relevance:

60.00%

Abstract:

We have developed a compiler for the lexically-scoped dialect of LISP known as SCHEME. The compiler knows relatively little about specific data manipulation primitives such as arithmetic operators, but concentrates on general issues of environment and control. Rather than having specialized knowledge about a large variety of control and environment constructs, the compiler handles only a small basis set which reflects the semantics of lambda-calculus. All of the traditional imperative constructs, such as sequencing, assignment, looping, GOTO, as well as many standard LISP constructs such as AND, OR, and COND, are expressed in macros in terms of the applicative basis set. A small number of optimization techniques, coupled with the treatment of function calls as GOTO statements, serve to produce code as good as that produced by more traditional compilers. The macro approach enables speedy implementation of new constructs as desired without sacrificing efficiency in the generated code. A fair amount of analysis is devoted to determining whether environments may be stack-allocated or must be heap-allocated. Heap-allocated environments are necessary in general because SCHEME (unlike Algol 60 and Algol 68, for example) allows procedures with free lexically scoped variables to be returned as the values of other procedures; the Algol stack-allocation environment strategy does not suffice. The methods used here indicate that a heap-allocating generalization of the "display" technique leads to an efficient implementation of such "upward funargs". Moreover, compile-time optimization and analysis can eliminate many "funargs" entirely, and so far fewer environment structures need be allocated at run time than might be expected. A subset of SCHEME (rather than triples, for example) serves as the representation intermediate between the optimized SCHEME code and the final output code; code is expressed in this subset in the so-called continuation-passing style. As a subset of SCHEME, it enjoys the same theoretical properties; one could even apply the same optimizer used on the input code to the intermediate code. However, the subset is so chosen that all temporary quantities are made manifest as variables, and no control stack is needed to evaluate it. As a result, this apparently applicative representation admits an imperative interpretation which permits easy transcription to final imperative machine code. These qualities suggest that an applicative language like SCHEME is a better candidate for an UNCOL than the more imperative candidates proposed to date.
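
The abstract's key intermediate form is continuation-passing style, in which every temporary value is named and control is made explicit, so no control stack is needed to evaluate it. Below is a minimal illustration of the style in Python rather than SCHEME: a direct-style function and its hand-converted CPS counterpart. This is only an illustration of the idea, not the compiler's actual transformation.

```python
# Direct style: the pending multiplication lives on the control stack.
def fact(n):
    if n == 0:
        return 1
    return n * fact(n - 1)

# Continuation-passing style: every intermediate value is named and the
# "rest of the computation" is passed along explicitly as a function, so
# control becomes a chain of tail calls rather than a stack of returns.
# (Python does not perform tail-call elimination, so this is illustrative only.)
def fact_cps(n, k):
    if n == 0:
        return k(1)
    return fact_cps(n - 1, lambda r: k(n * r))

print(fact(5))                    # 120
print(fact_cps(5, lambda r: r))   # 120
```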

Relevance:

60.00%

Abstract:

Comfort is, in essence, satisfaction with the environment, and with respect to the indoor environment it is primarily satisfaction with the thermal conditions and air quality. Improving comfort has social, health and economic benefits, and is more financially significant than any other building cost. Despite this, comfort is not strictly managed throughout the building lifecycle. This is mainly due to the lack of an appropriate system to adequately manage comfort knowledge through the construction process into operation. Previous proposals to improve knowledge management have not been successfully adopted by the construction industry. To address this, the BabySteps approach was devised. BabySteps is an approach, proposed by this research, which states that for an innovation to be adopted into the industry it must be implementable through a number of small changes. This research proposes that improving the management of comfort knowledge will improve comfort. ComMet is a new methodology proposed by this research that manages comfort knowledge. It enables comfort knowledge to be captured, stored and accessed throughout the building life-cycle, allowing it to be re-used in future stages of the building project and in future projects. It does this using the following: Comfort Performances – these are simplified numerical representations of the comfort of the indoor environment, quantifying the comfort at each stage of the building life-cycle using standard comfort metrics. Comfort Ratings – these are a means of classifying the comfort conditions of the indoor environment according to an appropriate standard; they are generated by comparing different Comfort Performances and provide additional information relating to the comfort conditions of the indoor environment which is not readily determined from the individual Comfort Performances. Comfort History – this is a continuous descriptive record of the comfort throughout the project, with a focus on documenting the items and activities, proposed and implemented, which could potentially affect comfort; each aspect of the Comfort History is linked to the relevant comfort entity it references. These three components create a comprehensive record of the comfort throughout the building lifecycle. They are then stored and made available in a common format in a central location, which allows them to be re-used ad infinitum. The LCMS system was developed to implement the ComMet methodology. It uses current and emerging technologies to capture, store and allow easy access to comfort knowledge as specified by ComMet. LCMS is an IT system that is a combination of the following six components: Building Standards; Modelling & Simulation; Physical Measurement through the specially developed Egg-Whisk (Wireless Sensor) Network; Data Manipulation; Information Recording; and Knowledge Storage and Access. Results from a test-case application of the LCMS system, an existing office room at a research facility, highlighted that while some aspects of comfort were being maintained, the building's environment was not in compliance with the acceptable levels stipulated by the relevant building standards. The implementation of ComMet, through LCMS, demonstrates how comfort, typically only considered during early design, can be measured and managed appropriately through systematic application of the methodology as a means of ensuring a healthy internal environment in the building.
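
To make the three ComMet record types concrete, here is a minimal data-structure sketch. The field names, metrics and the toy rating rule are illustrative assumptions; the abstract defines Comfort Performances, Ratings and Histories only at the conceptual level.

```python
# Illustrative sketch of the three ComMet record types described above.
# Field names, metrics and the rating rule are assumptions for illustration only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ComfortPerformance:
    """Simplified numerical representation of comfort at one lifecycle stage."""
    stage: str                 # e.g. "design", "construction", "operation"
    operative_temp_c: float    # a standard comfort metric
    co2_ppm: float


@dataclass
class ComfortRating:
    """Classification obtained by comparing Comfort Performances against a standard."""
    label: str                 # e.g. "compliant" / "non-compliant"
    reference_standard: str


@dataclass
class ComfortHistory:
    """Continuous descriptive record of items and activities affecting comfort."""
    entries: List[str] = field(default_factory=list)

    def log(self, note: str, linked_to: str) -> None:
        # Each entry is linked to the comfort entity it references.
        self.entries.append(f"{linked_to}: {note}")


design = ComfortPerformance("design", operative_temp_c=22.0, co2_ppm=800)
operation = ComfortPerformance("operation", operative_temp_c=25.5, co2_ppm=1400)

# A toy rating rule comparing the measured stage with the design intent.
label = ("compliant"
         if abs(operation.operative_temp_c - design.operative_temp_c) <= 2
         else "non-compliant")
rating = ComfortRating(label, reference_standard="a relevant building standard")

history = ComfortHistory()
history.log("Glazing specification changed during construction", linked_to="operative_temp_c")
print(rating, history.entries)
```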

Relevance:

60.00%

Abstract:

Although fungi are regularly used as a model for studying eukaryotic systems, their phylogenetic relationships still raise controversial questions. Among these, the classification of the zygomycetes remains inconsistent: they are potentially paraphyletic, i.e. they group together fungal lineages that are not directly related. The phylogenetic position of the genus Schizosaccharomyces is also controversial: does it belong to the Taphrinomycotina (previously known as the archiascomycetes), as predicted by analyses of nuclear genes, or is it instead related to the Saccharomycotina (budding yeasts), as the mitochondrial phylogeny suggests? Another question concerns the phylogenetic position of the nucleariids, a group of amoeboid eukaryotes thought to be closely related to the fungi. Earlier multi-gene analyses were inconclusive, owing to the reduced number of taxa chosen and the use of only six nuclear genes. We addressed these questions through phylogenetic inference and statistical tests applied to nuclear and mitochondrial phylogenomic datasets. According to our results, the zygomycetes are paraphyletic (Chapter 2), although the phylogenetic signal in the available mitochondrial dataset is insufficient to resolve the branching order of this group with significant statistical confidence. In Chapter 3, using a large nuclear dataset (more than one hundred proteins), we show with conclusive statistical support that the genus Schizosaccharomyces belongs to the Taphrinomycotina. We further demonstrate that the conflicting grouping of Schizosaccharomyces with the Saccharomycotina obtained from mitochondrial data is the result of a well-known type of phylogenetic error: long-branch attraction (LBA), an artefact that groups species whose fast evolutionary rates are not representative of their true position in the phylogenetic tree. In Chapter 4, again using a large nuclear dataset, we demonstrate with significant statistical support that the nucleariids are the group most closely related to the fungi. We also confirm the paraphyly of the traditional zygomycetes, as previously suggested, with significant statistical support, although not all members of the group can be placed with confidence. Our results call into question aspects of a recent taxonomic reclassification of the zygomycetes and of their neighbours, the chytridiomycetes. Countering or minimising phylogenetic artefacts such as long-branch attraction (LBA) remains a major recurring issue. To this end, we developed a new method (Chapter 5) that identifies and removes, within a sequence, the sites showing large variation in evolutionary rate (highly heterotachous sites, or HH sites); such sites are known to contribute significantly to LBA. Our method is based on a likelihood ratio test (LRT). Two previously published datasets are used to show that the gradual removal of HH sites in fast-evolving species (those prone to LBA) significantly increases the support for the expected "true" topology, and does so more effectively than other published site-removal methods.
Nevertheless, and more generally, manipulating the data prior to analysis is far from ideal. Future developments should aim to integrate the identification and weighting of HH sites into the phylogenetic inference process itself.
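
As a rough sketch of the site-filtering idea (Chapter 5), the code below flags sites whose per-site log-likelihood improves significantly when separate evolutionary rates are allowed, using a likelihood ratio test against a chi-squared distribution with one degree of freedom. The per-site log-likelihoods are taken as given placeholder inputs here; in practice they come from a phylogenetic likelihood program, and the exact model comparison used in the thesis may differ.

```python
# Sketch: flag highly heterotachous (HH) sites with a likelihood ratio test.
# Inputs are per-site log-likelihoods under a null model (one rate shared by
# all lineages) and an alternative model (lineage groups get separate rates).
# These numbers are placeholders, and the thesis' exact models may differ.
import numpy as np
from scipy.stats import chi2

lnL_null = np.array([-12.1, -8.4, -15.3, -9.9, -20.7])   # single-rate model
lnL_alt  = np.array([-11.9, -8.3, -10.2, -9.8, -14.1])   # group-specific rates

lrt = 2.0 * (lnL_alt - lnL_null)          # LRT statistic per site
critical = chi2.ppf(0.95, df=1)           # one extra free rate parameter

hh_sites = np.where(lrt > critical)[0]
print("HH sites to remove:", hh_sites)    # sites 2 and 4 in this toy example
```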

Relevance:

60.00%

Abstract:

The Konstanz Information Miner is a modular environment which enables easy visual assembly and interactive execution of a data pipeline. It is designed as a teaching, research and collaboration platform, which enables easy integration of new algorithms, data manipulation or visualization methods as new modules or nodes. In this paper we describe some of the design aspects of the underlying architecture and briefly sketch how new nodes can be incorporated.
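
The central design point above is that new algorithms plug in as self-contained nodes in a data pipeline. The sketch below is a generic, hypothetical illustration of that pattern in Python; it is not KNIME's actual node API, which is Java-based.

```python
# Generic sketch of a modular pipeline of nodes, illustrating the pattern
# described above. This is NOT the KNIME API; it is a hypothetical example.
from typing import Callable, List


class Node:
    """A pipeline node wraps one data manipulation or visualization step."""
    def __init__(self, name: str, func: Callable[[list], list]):
        self.name = name
        self.func = func

    def execute(self, data: list) -> list:
        print(f"executing node: {self.name}")
        return self.func(data)


class Pipeline:
    """Nodes are assembled into a pipeline and executed in order."""
    def __init__(self):
        self.nodes: List[Node] = []

    def add(self, node: Node) -> "Pipeline":
        self.nodes.append(node)
        return self

    def run(self, data: list) -> list:
        for node in self.nodes:
            data = node.execute(data)
        return data


# Integrating a "new algorithm" only requires wrapping it as another node.
pipeline = (
    Pipeline()
    .add(Node("read", lambda _: [3, 1, 4, 1, 5, 9]))
    .add(Node("filter > 2", lambda xs: [x for x in xs if x > 2]))
    .add(Node("sort", sorted))
)
print(pipeline.run([]))   # [3, 4, 5, 9]
```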

Relevance:

60.00%

Abstract:

This paper considers the relationship between value management and facilities management. The findings are particularly relevant to large client organisations which procure new buildings on a regular basis. It is argued that the maximum effectiveness of value management can only be achieved if it is used in conjunction with an ongoing commitment to post-occupancy evaluation. SMART value management is seen to provide the means of ensuring that an individual building design is in alignment with the client's strategic property needs. However, it is also necessary to recognise that an organisation's strategic property needs will continually be in a state of change. Consequently, economic and functional under-performance can only be avoided by a regular performance audit of existing property stock in accordance with changing requirements. Such a policy will ensure ongoing competitiveness through organisational learning. While post-occupancy evaluation represents an obvious additional service to be provided by value management consultants, it is vital that the necessary additional skills are acquired. Process management skills and social science research techniques are clearly important. However, there is also a need to improve mechanisms for data manipulation. Success can only be achieved if equal attention is given to issues of process, structure and content.

Relevance:

60.00%

Abstract:

Modern database applications are increasingly employing database management systems (DBMS) to store multimedia and other complex data. To adequately support the queries required to retrieve these kinds of data, the DBMS needs to answer similarity queries. However, the standard structured query language (SQL) does not provide effective support for such queries. This paper proposes an extension to SQL that seamlessly integrates syntactic constructions for expressing similarity predicates into the existing SQL syntax, and describes the implementation of a similarity retrieval engine that allows posing similarity queries using the language extension in a relational DBMS. The engine allows the evaluation of every aspect of the proposed extension, including the data definition language and data manipulation language statements, and employs metric access methods to accelerate the queries. Copyright (c) 2008 John Wiley & Sons, Ltd.
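
To make the kind of query concrete, the sketch below shows a similarity predicate written in invented SQL-like text and an equivalent brute-force evaluation in plain Python over a metric (Euclidean distance). The extended-SQL syntax shown is hypothetical, for illustration only; the paper's actual grammar and the underlying metric access methods are not reproduced here.

```python
# Illustration of what a similarity (k-nearest-neighbour) predicate expresses.
# The SQL-like text below uses an INVENTED syntax for illustration only; it is
# not the extension defined in the paper. The Python code evaluates the same
# query by brute force instead of using a metric access method.
import math

hypothetical_query = """
SELECT id FROM images
WHERE features NEAR [0.2, 0.7, 0.1] BY euclidean STOP AFTER 2   -- invented syntax
"""

images = {
    "img1": [0.1, 0.8, 0.1],
    "img2": [0.9, 0.2, 0.5],
    "img3": [0.3, 0.6, 0.2],
}
query_vec = [0.2, 0.7, 0.1]
k = 2

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Rank by distance to the query vector and keep the k closest.
knn = sorted(images, key=lambda key: euclidean(images[key], query_vec))[:k]
print(knn)   # ['img1', 'img3']
```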

Relevance:

60.00%

Abstract:

Although the development of geographic information system (GIS) technology and digital data manipulation techniques has enabled practitioners in the geographical and geophysical sciences to make more efficient use of resource information, many of the methods used in forming spatial prediction models are still inherently based on traditional techniques of map stacking in which layers of data are combined under the guidance of a theoretical domain model. This paper describes a data-driven approach by which Artificial Neural Networks (ANNs) can be trained to represent a function characterising the probability that an instance of a discrete event, such as the presence of a mineral deposit or the sighting of an endangered animal species, will occur over some grid element of the spatial area under consideration. A case study describes the application of the technique to the task of mineral prospectivity mapping in the Castlemaine region of Victoria using a range of geological, geophysical and geochemical input variables. Comparison of the maps produced using neural networks with maps produced using a density estimation-based technique demonstrates that the maps can reliably be interpreted as representing probabilities. However, while the neural network model and the density estimation-based model yield similar results under an appropriate choice of values for the respective parameters, the neural network approach has several advantages, especially in high dimensional input spaces.
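
A minimal sketch of the modelling setup described above: each grid cell is a feature vector of geoscience layers, the target is a binary event indicator (e.g. known deposit present), and a small neural network is trained to output a probability per cell. The synthetic data, network size and library choice (scikit-learn) are assumptions for illustration; they are not the study's configuration.

```python
# Sketch: train a small neural network to map per-cell input layers to the
# probability of a discrete event (e.g. a mineral occurrence). Synthetic data,
# network size and library are illustrative assumptions only.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Each row is one grid cell: [geological layer, geophysical layer, geochemical layer].
n_cells = 500
X = rng.normal(size=(n_cells, 3))

# Synthetic "truth": event probability rises with the 2nd and 3rd layers.
p_true = 1.0 / (1.0 + np.exp(-(1.5 * X[:, 1] + 1.0 * X[:, 2] - 1.0)))
y = rng.random(n_cells) < p_true        # binary event indicator per cell

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

# The trained network yields a probability value for every grid element.
prob_map = model.predict_proba(X)[:, 1]
print(prob_map[:5].round(3))
```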

Relevance:

60.00%

Abstract:

Control of tele-operated remote robots is nothing new; the public was introduced to this 'new' field in 1986 when the Chernobyl cleanup began, and pictures of weird and wonderful robotic workers pouring concrete or moving rubble flooded the world. Integration of force feedback, or 'haptics', into remote robots is a newer development and one that is likely to make a big difference in man-machine interaction. Developing a haptic-capable tele-operation scheme is a challenge; often, platform-specific software is developed for one-off tasks. This research focussed on the development of an open software platform for haptic control of multiple remote robotic platforms. The software utilises an efficient server/client architecture for low data latency, while efficiently performing the required kinematic transforms and data manipulation in real time. A description of the algorithm, software interface and hardware is presented in this paper. Preliminary results are encouraging, as haptic control has been shown to greatly enhance remote positioning tasks.
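
A very small sketch of the server/client pattern described above: the client streams poses over UDP (chosen here for low latency), applying a kinematic transform (a fixed rotation from master to slave frame) before sending, and the server echoes a scalar "force" back for haptic display. The message format, the transform and the force model are simplified assumptions; this is not the platform developed in the paper.

```python
# Minimal sketch of a low-latency server/client teleoperation exchange over
# UDP. The message format, fixed master-to-slave rotation and toy "force"
# model are simplified assumptions, not the platform developed in the paper.
import math, socket, struct, threading

ADDR = ("127.0.0.1", 9999)
server_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server_sock.bind(ADDR)                      # bind before the client sends

def server(n_messages=3):
    """Receive poses and reply with a toy spring force for haptic display."""
    for _ in range(n_messages):
        data, client_addr = server_sock.recvfrom(1024)
        x, y, z = struct.unpack("!3d", data)
        force = 10.0 * math.sqrt(x * x + y * y + z * z)
        server_sock.sendto(struct.pack("!d", force), client_addr)

def to_slave_frame(pose, theta=math.pi / 2):
    """Kinematic transform: rotate the master pose about z into the slave frame."""
    x, y, z = pose
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta),
            z)

threading.Thread(target=server, daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(1.0)
for pose in [(0.1, 0.0, 0.0), (0.1, 0.1, 0.0), (0.0, 0.2, 0.1)]:
    client.sendto(struct.pack("!3d", *to_slave_frame(pose)), ADDR)
    (force,) = struct.unpack("!d", client.recvfrom(1024)[0])
    print(f"pose {pose} -> feedback force {force:.2f}")
client.close()
```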

Relevance:

60.00%

Abstract:

This thesis describes the research undertaken for a degree of Master of Science in a retrospective study of airborne remotely sensed data registered in 1990 and 1993, and field-captured data of aquatic humus concentrations for ~45 lakes in Tasmania. The aim was to investigate and describe the relationship between the remotely sensed data and the field data, and to test the hypothesis that the remotely sensed data would establish further evidence of a limnological corridor of change running north-west to south-east. The airborne remotely sensed data consisted of data captured by the CSIRO Ocean Colour Scanner (OCS) and a newly developed Canadian scanner, a compact airborne spectrographic imager (CASI). The thesis investigates the relationship between the two kinds of data sources. The remotely sensed data were collected with the OCS scanner in 1990 (during one day) and with both the OCS and the CASI in 1993 (during three days). The OCS scanner registers data in 9 wavelength bands between 380 nm and 960 nm with a 10-20 nm bandwidth, and the CASI in 288 wavelength bands between 379.57 nm and 893.5 nm (i.e. spectral mode) with a spectral resolution of 2.5 nm. The remotely sensed data were extracted from the original tapes with the help of the CSIRO and supplied software, and digital sample areas (band value means) for each lake were subsequently extracted for data manipulation and statistical analysis. Field data were captured concurrently with the remotely sensed data in 1993 by lake hopping using a light aircraft with floats. The field data used for analysis with the remotely sensed data were the laboratory-determined g440 values from the 1993 water samples collated with g440 values determined from earlier years. No spectro-radiometric data of the lakes, incoming irradiance data or ancillary climatic data were captured during the remote sensing missions. The background chapter of the thesis provides a background to the research, both with regard to remote sensing of water quality and the relationship between remotely sensed spectral data and water quality parameters, as well as a description of the Tasmanian lakes flown. The lakes were divided into four groups based on results from previous studies and optical parameters, especially aquatic humus concentrations as measured from field-captured data. The four groups consist of the 'green' clear-water lakes mostly situated on the Central Plateau, the 'brown' highly dystrophic lakes in western Tasmania, the 'corridor' lakes situated along a corridor of change lying approximately between the two lines denoting the Jurassic edge and the 1200 mm isohyet, and the 'eastern, turbid' lakes, which make up the fourth group. The analytical part of the research work was mostly concerned with manipulating and analysing the CASI data because of its higher spectral resolution. The research explores methods to apply corrections to these data to reduce the disturbing effects of varying illumination and atmospheric conditions. Three different methods were attempted. In the first method, two different standardisation formulas were applied to the data, as well as 'day correction' factors calculated from data from one of the lakes, Lake Rolleston, which had data captured on all three days of the remote sensing operations. The standardisation formulas were also applied to the OCS data. In the second method, an attempt to reduce the effects of the atmosphere was made using spectro-radiometric data captured in 1988 for one of the lakes flown, Great Lake.
All the lake sample data were time-normalised using general irradiance data obtained from the University of Tasmania, and the sky portion, as calculated from Great Lake upwelling irradiance data, was then subtracted. The last method involved using two different band ratios to eliminate atmospheric effects. Statistical analysis was applied to the data resulting from the three methods to try to describe the relationship between the remotely sensed data and the field-captured data. Discriminant analysis, cluster analysis and factor analysis using principal component analysis (PCA) were applied to the remotely sensed data and the field data. The factor scores resulting from the PCA were regressed against the field-collated g440 data, as were the values resulting from the last method. The results from the statistical analysis of the data from the first method show that the lakes group well (100%) against the predetermined groups when discriminant analysis is applied to the remotely sensed CASI data. Most variance in the data is contained in the first factor resulting from PCA, regardless of the data manipulation method. Regression of the factor scores against the g440 field data shows a strong non-linear relationship, and a one-sided linear regression test is therefore considered an inappropriate analysis method to describe the dataset relationships. The research has shown that with the available data, correction and analysis methods, and within the scope of the Masters study, it was not possible to establish the relationships between the remotely sensed data and the field-measured parameters as hoped. The main reason for this was the failure to retrieve remotely sensed lake signatures adequately corrected for atmospheric noise for comparison with the field data. This in turn is a result of the lack of detailed ancillary information needed to apply available established methods for noise reduction: to apply these methods we require field spectroradiometric measurements and environmental information about the varying conditions both within the study area and within the time frame of capture of the remotely sensed data.
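
A schematic sketch of two of the data-manipulation steps mentioned above: a band ratio intended to suppress multiplicative illumination and atmospheric effects, and PCA factor scores of the per-lake band means regressed against field g440 values. All numbers are synthetic and the band choices are placeholders; the thesis' actual formulas and bands are not reproduced here.

```python
# Schematic sketch of two steps described above: a band ratio to suppress
# multiplicative illumination/atmospheric effects, and regression of the first
# PCA factor score of per-lake band means against field-measured g440.
# All numbers are synthetic placeholders; the thesis' bands and formulas differ.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

n_lakes, n_bands = 45, 8
gain = rng.uniform(0.8, 1.2, size=(n_lakes, 1))      # per-lake illumination factor
g440 = rng.uniform(0.1, 12.0, size=n_lakes)          # field aquatic humus proxy

# Synthetic per-lake band means: humus darkens the short-wavelength bands more.
base = rng.uniform(80, 120, size=(1, n_bands))
bands = gain * (base - np.outer(g440, np.linspace(3.0, 0.5, n_bands)))

# Band ratio: a multiplicative gain common to both bands cancels out.
ratio = bands[:, 1] / bands[:, 6]

# PCA on the band means; the first component carries the most variance.
scores = PCA(n_components=2).fit_transform(bands)
model = LinearRegression().fit(scores[:, :1], g440)
print("R^2 of factor-1 score vs g440:", round(model.score(scores[:, :1], g440), 2))
print("example band ratio for lake 0:", round(ratio[0], 3))
```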

Relevance:

60.00%

Abstract:

This paper presents a study carried out with credit card customers of a large retailer to measure the risk of abandonment of a relationship when a purchase history already exists. Two activities were the most important in this study: the theoretical review and the methodological procedures. The first step was the understanding of the problem, the importance of the theme and the definition of the research methods. The study includes a bibliographic survey comprising several authors and shows that customer loyalty is the basis that gives sustainability and profitability to organizations in various market segments; it examines satisfaction as the key to success for winning and, especially, retaining loyal customers. To perform this study, logistic-linear models were fitted, and the best model was selected through the Kolmogorov-Smirnov (KS) test and the Receiver Operating Characteristic (ROC) curve. Registration and transactional data of 100,000 customers of a credit card issuer were used; the software used was SPSS, a modern system for data manipulation, statistical analysis and graphical presentation. In the research, we identify the risk of each customer abandoning the product through a score.
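
A hedged sketch of the model-selection step described above: fit a logistic churn model, score customers, and compare models with the ROC AUC and the Kolmogorov-Smirnov statistic (the maximum gap between the score distributions of churners and non-churners). The synthetic data and the scikit-learn/scipy tooling stand in for the SPSS workflow used in the study.

```python
# Sketch: fit a logistic churn model, score customers, and evaluate via
# ROC AUC and the Kolmogorov-Smirnov (KS) statistic, as described above.
# Synthetic data and scikit-learn/scipy stand in for the study's SPSS workflow.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic registration + transactional features: tenure, monthly spend, delays.
n = 5000
X = np.column_stack([
    rng.integers(1, 120, n),            # months on book
    rng.gamma(2.0, 150.0, n),           # average monthly spend
    rng.poisson(0.5, n),                # payment delays
])
logit = 2.0 + 0.6 * X[:, 2] - 0.01 * X[:, 0] - 0.002 * X[:, 1]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # 1 = abandoned the relationship

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
score = model.predict_proba(X_te)[:, 1]            # churn-risk score per customer

auc = roc_auc_score(y_te, score)
ks = ks_2samp(score[y_te == 1], score[y_te == 0]).statistic
print(f"ROC AUC = {auc:.3f}, KS = {ks:.3f}")
```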

Relevance:

60.00%

Abstract:

Quantifying country risk, and political risk in particular, raises several difficulties for companies, institutions and investors. Because economic indicators are updated far less frequently than Facebook, understanding, and more precisely measuring, what is happening on the ground in real time can be a challenge for political risk analysts. However, with the growing availability of "big data" from social media tools such as Twitter, now is the opportune moment to examine the types of social media metrics that are available and the limitations of their application to country risk analysis, especially during episodes of political violence. Using the qualitative method of bibliographic research, this study identifies the current landscape of data available from Twitter, analyses current and potential methods of analysis, and discusses their possible application in the field of political risk analysis. After a thorough review of the field to date, and taking into account the technological advances expected in the short and medium term, this study concludes that, despite obstacles such as the cost of storing information, the limitations of real-time analysis, and the potential for data manipulation, the potential benefits of applying social media metrics to the field of political risk analysis, particularly for structured-qualitative and quantitative models, clearly outweigh the challenges.