1000 results for DATA
Summary:
The possible nonplanar distortions of the amide group in formamide, acetamide, N-methylacetamide, and N-ethylacetamide have been examined using CNDO/2 and INDO methods. The predictions from these methods are compared with the results obtained from X-ray and neutron diffraction studies on crystals of small open peptides, cyclic peptides, and amides. It is shown that the INDO results are in good agreement with observations, and that the dihedral angles θN and Δω defining the nonplanarity of the amide unit are correlated approximately by the relation θN = -2Δω, while θC is small and uncorrelated with Δω. The present study indicates that the nonplanar distortions at the nitrogen atom of the peptide unit may have to be taken into consideration, in addition to the variation in the dihedral angles (φ, ψ), in working out polypeptide and protein structures.
Summary:
This study identified the areas of poor specificity in national injury hospitalization data and the areas of improvement and deterioration in specificity over time. A descriptive analysis of ten years of national hospital discharge data for Australia, from July 2002 to June 2012, was performed. Proportions and percentage changes of defined/undefined codes over time were examined. At the intent block level, accidents and assault were the most poorly defined, with over 11% undefined in each block. The mechanism blocks for accidents showed a significant deterioration in specificity over time, with up to 20% more undefined codes in some mechanisms. Place and activity were poorly defined at the broad block level (43% and 72% undefined, respectively). Private hospitals and hospitals in very remote locations recorded the highest proportions of undefined codes. Those aged over 60 years and females had the highest proportions of undefined code usage. This study has identified significant, and worsening, deficiencies in the specificity of coded injury data in several areas. Focused attention is needed to improve the quality of injury data, especially in the areas identified in this study, to provide the evidence base needed to address the significant burden of injury in the Australian community.
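The descriptive analysis described above amounts to tracking the proportion of undefined codes and its percentage change over time. A minimal sketch, with made-up counts rather than the actual Australian hospital discharge data:

```python
# Hypothetical counts of injury records per year (illustrative only):
# (year, undefined_codes, total_codes)
records = [(2003, 1100, 10000), (2012, 2400, 12000)]

# Proportion of undefined codes in each year
proportions = {year: undef / total for year, undef, total in records}

# Percentage change in the undefined proportion between first and last year
first, last = records[0], records[-1]
p_first = first[1] / first[2]
p_last = last[1] / last[2]
percentage_change = 100.0 * (p_last - p_first) / p_first

print(proportions)
print(round(percentage_change, 1))  # positive change = deteriorating specificity
```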
Summary:
It has long been thought that tropical rainfall retrievals from satellites have large errors. Here we show, using a new daily 1 degree gridded rainfall data set based on about 1800 gauges from the India Meteorology Department (IMD), that modern satellite estimates are reasonably close to observed rainfall over the Indian monsoon region. Daily satellite rainfalls from the Global Precipitation Climatology Project (GPCP 1DD) and the Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) have been available since 1998. The high summer monsoon (June-September) rain over the Western Ghats and Himalayan foothills is captured in TMPA data. Away from hilly regions, the seasonal mean and intraseasonal variability of rainfall (averaged over regions of a few hundred kilometers linear dimension) from both satellite products are within about 15% of observations. Satellite data generally underestimate both the mean and variability of rain, but the phase of intraseasonal variations is accurate. On synoptic timescales, TMPA gives a reasonable depiction of the pattern and intensity of torrential rain from individual monsoon low-pressure systems and depressions. A pronounced biennial oscillation of seasonal total central India rain is seen in all three data sets, with GPCP 1DD being closest to IMD observations. The new satellite data are a promising resource for the study of tropical rainfall variability.
Summary:
An application that translates raw thermal melt curve data into more easily assimilated knowledge is described. This program, called ‘Meltdown’, performs a number of data remediation steps before classifying melt curves and estimating melting temperatures. The final output is a report that summarizes the results of a differential scanning fluorimetry experiment. Meltdown uses a Bayesian classification scheme, enabling reproducible identification of various trends commonly found in DSF datasets. The goal of Meltdown is not to replace human analysis of the raw data, but to provide a sensible interpretation of the data to make this useful experimental technique accessible to naïve users, as well as providing a starting point for detailed analyses by more experienced users.
Summary:
Compositional data analysis usually deals with relative information between parts where the total (abundances, mass, amount, etc.) is unknown or uninformative. This article addresses the question of what to do when the total is known and is of interest. Tools used in this case are reviewed and analysed, in particular the relationship between the positive orthant of D-dimensional real space, the product space of the real line times the D-part simplex, and their Euclidean space structures. The first alternative corresponds to data analysis taking logarithms on each component, and the second one to treat a log-transformed total jointly with a composition describing the distribution of component amounts. Real data about total abundances of phytoplankton in an Australian river motivated the present study and are used for illustration.
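The two alternatives reviewed above can be sketched concretely: taking logarithms componentwise, versus pairing a log-transformed total with the composition (the point in the D-part simplex). A minimal illustration with made-up amounts, not the phytoplankton data used in the article:

```python
import math

# Hypothetical component amounts (e.g. abundances of three taxa)
x = [12.0, 3.0, 5.0]

# Alternative 1: data analysis taking logarithms on each component
logs = [math.log(v) for v in x]

# Alternative 2: treat the log-transformed total jointly with a
# composition describing the distribution of component amounts
total = sum(x)
log_total = math.log(total)
composition = [v / total for v in x]  # a point in the 3-part simplex

print(logs)
print(log_total, composition)
```

Both representations carry the same information; the second separates the (often scientifically meaningful) total from the purely relative part.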
Summary:
A new clustering technique, based on the concept of the immediate neighbourhood, with a novel capability to self-learn the number of clusters expected in the unsupervised environment, has been developed. The method compares favourably with other clustering schemes based on distance measures, both in terms of conceptual innovation and computational economy. Test implementation of the scheme using C-1 flight line training sample data in a simulated unsupervised mode has brought out the efficacy of the technique. The technique can easily be implemented as a front end to established pattern classification systems with supervised learning capabilities to derive unified learning systems capable of operating in both supervised and unsupervised environments. This makes the technique an attractive proposition in the context of remotely sensed earth resources data analysis, wherein it is essential to have such a unified learning system capability.
Summary:
Self-tracking, the process of recording one's own behaviours, thoughts and feelings, is a popular approach to enhance one's self-knowledge. While dedicated self-tracking apps and devices support data collection, previous research highlights that the integration of data constitutes a barrier for users. In this study we investigated how members of the Quantified Self movement---early adopters of self-tracking tools---overcome these barriers. We conducted a qualitative analysis of 51 videos of Quantified Self presentations to explore intentions for collecting data, methods for integrating and representing data, and how intentions and methods shaped reflection. The findings highlight two different intentions---striving for self-improvement and curiosity in personal data---which shaped how these users integrated data, i.e. the effort required. Furthermore, we identified three methods for representing data---binary, structured and abstract---which influenced reflection. Binary representations supported reflection-in-action, whereas structured and abstract representations supported iterative processes of data collection, integration and reflection. For people tracking out of curiosity, this iterative engagement with personal data often became an end in itself, rather than a means to achieve a goal. We discuss how these findings contribute to our current understanding of self-tracking amongst Quantified Self members and beyond, and we conclude with directions for future work to support self-trackers with their aspirations.
Summary:
Big Data and Learning Analytics’ promise to revolutionise educational institutions, endeavours, and actions through more and better data is now compelling. Multiple, and continually updating, data sets produce a new sense of ‘personalised learning’. A crucial attribute of the datafication, and subsequent profiling, of learner behaviour and engagement is the continual modification of the learning environment to induce greater levels of investment on the parts of each learner. The assumption is that more and better data, gathered faster and fed into ever-updating algorithms, provide more complete tools to understand, and therefore improve, learning experiences through adaptive personalisation. The argument in this paper is that Learning Personalisation names a new logistics of investment as the common ‘sense’ of the school, in which disciplinary education is ‘both disappearing and giving way to frightful continual training, to continual monitoring'.
Summary:
This thesis introduced two novel reputation models to generate accurate item reputation scores using ratings data and the statistics of the dataset. It also presented an innovative method that incorporates reputation awareness in recommender systems by employing voting system methods to produce more accurate top-N item recommendations. Additionally, this thesis introduced a personalisation method for generating reputation scores based on users' interests, where a single item can have different reputation scores for different users. The personalised reputation scores are then used in the proposed reputation-aware recommender systems to enhance the recommendation quality.
Summary:
Identification and quantification of urban growth, together with knowledge of its rate and trends, would help in regional planning for better infrastructure provision in an environmentally sound way. This requires analysis of spatial and temporal data, which helps in quantifying the trends of growth on a spatial scale. Emerging technologies such as Remote Sensing and Geographic Information Systems (GIS), along with the Global Positioning System (GPS), help in this regard. Remote sensing aids in the collection of temporal data and GIS helps in spatial analysis. This paper focuses on the analysis of urban growth patterns, in the form of either radial or linear sprawl, along the Bangalore - Mysore highway. Various GIS base layers, such as built-up areas along the highway, road network, village boundaries, etc., were generated using collateral data such as the Survey of India toposheet. Further, this analysis was complemented with the computation of Shannon's entropy, which helped in identifying the prevalent sprawl zone and the rate of growth, and in delineating potential sprawl locations. The computation of Shannon's entropy helped in delineating regions with dispersed and compact growth. This study reveals that the Bangalore North and South taluks contributed mainly to the sprawl, with a 559% increase in built-up area over a period of 28 years and a high degree of dispersion. The Mysore and Srirangapatna region showed a 128% change in built-up area and a high potential for sprawl with slightly high dispersion. The degree of sprawl was found to be directly proportional to the distances from the cities.
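Shannon's entropy, as used in sprawl studies, is H = -Σ p_i log(p_i) over the proportions of built-up area in each zone: values near the maximum log(n) indicate dispersed (sprawling) growth, values near zero indicate compact growth. A minimal sketch with hypothetical zone areas, not the Bangalore - Mysore data:

```python
import math

def shannon_entropy(builtup_areas):
    """Shannon's entropy H = -sum(p_i * log(p_i)) over zone proportions."""
    total = sum(builtup_areas)
    ps = [a / total for a in builtup_areas if a > 0]
    return -sum(p * math.log(p) for p in ps)

# Hypothetical built-up areas (ha) in four zones along a highway
compact   = [95.0, 2.0, 2.0, 1.0]     # growth concentrated in one zone
dispersed = [25.0, 25.0, 25.0, 25.0]  # growth spread evenly (sprawl)

h_max = math.log(4)  # upper bound for four zones
print(shannon_entropy(compact), shannon_entropy(dispersed), h_max)
```

Comparing a zone's entropy against log(n) is what lets the study label regions as dispersed or compact.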
Summary:
Poly(styrene peroxide) has been prepared and characterized. Nuclear magnetic resonance (NMR) spectra of the polymer show the shift of aliphatic protons. Differential scanning calorimetric (DSC) and differential thermal analysis (DTA) results show an exothermic peak around 110 °C which is characteristic of peroxide decomposition.
Summary:
Historically, school leaders have occupied a somewhat ambiguous position within networks of power. On the one hand, they appear to be celebrated as what Ball (2003) has termed the ‘new hero of educational reform'; on the other, they are often ‘held to account’ through those same performative processes and technologies. These have become compelling in schools and principals are ‘doubly bound’ through this. Adopting a Foucauldian notion of discursive production, this paper addresses the ways that the discursive ‘field’ of ‘principal’ (within larger regimes of truth such as schools, leadership, quality and efficiency) is produced. It explores how individual principals understand their roles and ethics within those practices of audit emerging in school governance, and how their self-regulation is constituted through NAPLAN – the National Assessment Program, Literacy and Numeracy. A key effect of NAPLAN has been the rise of auditing practices that change how education is valued. Open-ended interviews with 13 primary and secondary school principals from Western Australia, South Australia and New South Wales asked how they perceived NAPLAN's impact on their work, their relationships within their school community and their ethical practice.
Summary:
We present some results on multicarrier analysis of magnetotransport data. Both synthetic data and data from narrow-gap Hg0.8Cd0.2Te samples are used to demonstrate the applicability of various algorithms, viz. nonlinear least-squares fitting, Quantitative Mobility Spectrum Analysis (QMSA) and Maximum Entropy Mobility Spectrum Analysis (MEMSA). Comments are made from our experience on these algorithms, and on the inversion procedure from experimental R/sigma-B to S-mu, specifically with least-squares fitting as an example. Amongst the conclusions drawn are: (i) experimentally measured resistivities (R-xx, R-xy) should also be used, instead of just the inverted conductivities (sigma(xx), sigma(xy)), to fit data to semiclassical expressions for better fits, especially at higher B; (ii) a high magnetic field is necessary to extract low-mobility carrier parameters; (iii) provided the error in the data is not large, better estimates of the parameters of the remaining carrier species can be obtained at any stage by subtracting the highest-mobility carrier contribution to sigma from the experimental data and fitting with the remaining carriers; (iv) even in the presence of a high electric field, an approximate multicarrier expression can be used to guess the carrier mobilities and their variations before solving the full Boltzmann equation.
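The subtraction procedure in conclusion (iii) rests on the semiclassical multicarrier conductivity being a sum of single-carrier Drude terms, so the highest-mobility carrier's contribution can be removed once its parameters are known. A minimal sketch with hypothetical carrier densities and mobilities (not fit parameters from the paper):

```python
E = 1.602e-19  # elementary charge (C)

def sigma_xx(B, carriers):
    """Semiclassical multicarrier longitudinal conductivity.
    carriers: list of (density n [m^-3], signed mobility mu [m^2/Vs])."""
    return sum(n * E * abs(mu) / (1.0 + (mu * B) ** 2) for n, mu in carriers)

def sigma_xy(B, carriers):
    """Transverse (Hall) component; the sign of mu encodes carrier type."""
    return sum(n * E * mu * abs(mu) * B / (1.0 + (mu * B) ** 2) for n, mu in carriers)

# Hypothetical two-carrier system: high-mobility electrons + low-mobility holes
carriers = [(1e21, -10.0), (1e23, 0.05)]

B = 2.0  # tesla
total = sigma_xx(B, carriers)

# Subtract the highest-mobility carrier's contribution; what remains
# can be fitted with the remaining (low-mobility) carriers alone.
remainder = total - sigma_xx(B, [carriers[0]])
print(total, remainder, sigma_xx(B, [carriers[1]]))
```

Because the terms add, the remainder here equals the low-mobility carrier's conductivity exactly; with noisy data the subtraction only improves the estimate when the error is small, as the abstract notes.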
Summary:
The idea of extracting knowledge in process mining is a descendant of data mining. Both mining disciplines emphasise data flow and relations among elements in the data. Unfortunately, challenges have been encountered when working with the data flow and relations. One of the challenges is that the conventional representation of the data flow between a pair of elements or tasks is oversimplified, as it considers only one-to-one data flow relations. In this paper, we discuss how the effectiveness of knowledge representation can be extended in both disciplines. To this end, we introduce a new representation of the data flow and dependency formulation using a flow graph. The flow graph overcomes the inability of the one-to-one representation to express other relation types, such as many-to-one and one-to-many relations. As an experiment, a new evaluation framework is applied to the Teleclaim process in order to show how this method can provide more precise results when compared with other representations.
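The relation types the flow graph is meant to capture can be illustrated with a simple adjacency structure: a task with several successors is a one-to-many split, and a task with several predecessors is a many-to-one join. A minimal sketch with hypothetical task names, not the actual Teleclaim model:

```python
from collections import defaultdict

# Hypothetical (source task, target task) flows mined from an event log
flows = [("register", "check"), ("register", "assess"),  # one-to-many split
         ("check", "decide"), ("assess", "decide")]      # many-to-one join

successors = defaultdict(set)
predecessors = defaultdict(set)
for src, dst in flows:
    successors[src].add(dst)
    predecessors[dst].add(src)

# A pairwise one-to-one representation cannot record that "register"
# splits into two tasks or that "decide" joins two incoming flows.
print(sorted(successors["register"]))
print(sorted(predecessors["decide"]))
```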