858 results for "downloading of data"


Relevance: 90.00%

Abstract:

A patient-centric DRM approach is proposed for protecting the privacy of health records stored in cloud storage, based on the patient's preferences and without the need to trust the service provider. In contrast to current server-side access control solutions, this approach protects the privacy of records from the service provider, and also controls the usage of data after it is released to an authorized user.
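As a minimal sketch of the client-side protection idea only (the record content, key handling and Fernet scheme below are illustrative assumptions, not details from the paper), a record can be encrypted on the patient's device before upload so the storage provider never sees plaintext:

# Minimal sketch: patient-side encryption before upload. Key management and
# control of usage after release (the DRM policy part) are outside this sketch.
from cryptography.fernet import Fernet

def protect_record(plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt a health record locally; only the ciphertext goes to the cloud."""
    key = Fernet.generate_key()              # stays with the patient
    ciphertext = Fernet(key).encrypt(plaintext)
    return key, ciphertext

def recover_record(key: bytes, ciphertext: bytes) -> bytes:
    """An authorised user who has been granted the key can read the record."""
    return Fernet(key).decrypt(ciphertext)

if __name__ == "__main__":
    key, blob = protect_record(b"blood pressure: 120/80")
    assert recover_record(key, blob) == b"blood pressure: 120/80"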

Relevance: 90.00%

Abstract:

This paper argues for a renewed focus on statistical reasoning in the beginning school years, with opportunities for children to engage in data modelling. Some of the core components of data modelling are addressed. A selection of results from the first data modelling activity implemented during the second year (2010; second grade) of a current longitudinal study is reported. Data modelling involves investigations of meaningful phenomena, deciding what is worthy of attention (identifying complex attributes), and then progressing to organising, structuring, visualising, and representing data. Reported here are children's abilities to identify diverse and complex attributes, sort and classify data in different ways, and create and interpret models to represent their data.

Relevance: 90.00%

Abstract:

This paper discusses practical issues related to the use of the division model for lens distortion in multi-view geometry computation. A data normalisation strategy is presented, which has been absent from previous discussions on the topic. The convergence properties of the Rectangular Quadric Eigenvalue Problem solution for computing division model distortion are examined. It is shown that the existing method can require more than 1000 iterations when dealing with severe distortion. A method is presented for accelerating convergence to less than 10 iterations for any amount of distortion. The new method is shown to produce equivalent or better results than the existing method with up to two orders of magnitude reduction in iterations. Through detailed simulation it is found that the number of data points used to compute geometry and lens distortion has a strong influence on convergence speed and solution accuracy. It is recommended that more than the minimal number of data points be used when computing geometry using a robust estimator such as RANSAC. Adding two to four extra samples improves the convergence rate and accuracy sufficiently to compensate for the increased number of samples required by the RANSAC process.
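For reference, the one-parameter division model discussed above maps a distorted image point to its undistorted position by a radial scaling. A minimal numpy sketch of that mapping is shown below; the distortion parameter value and the image-centre origin are assumptions for illustration, not values from the paper.

import numpy as np

def undistort_division_model(points_d: np.ndarray, lam: float) -> np.ndarray:
    """Apply the one-parameter division model: x_u = x_d / (1 + lam * r_d**2).

    points_d: (N, 2) distorted points, expressed relative to the distortion
    centre (assumed here to be the image centre); lam is the distortion
    parameter, typically negative for barrel distortion.
    """
    r2 = np.sum(points_d ** 2, axis=1, keepdims=True)  # squared radius per point
    return points_d / (1.0 + lam * r2)

# Example: undistort a few points with an assumed lambda.
pts = np.array([[100.0, 50.0], [250.0, -120.0]])
print(undistort_division_model(pts, lam=-1e-6))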

Relevance: 90.00%

Abstract:

We read the excellent review of telemonitoring in chronic heart failure (CHF) [1] with interest and commend the authors on the proposed classification of telemedical remote management systems according to the type of data transfer, decision ability and level of integration. However, several points require clarification in relation to our Cochrane review of telemonitoring and structured telephone support [2]. We included a study by Kielblock [3]. We corresponded directly with this study team specifically to find out whether or not this was a randomised study and were informed that it was a randomised trial, albeit by date of birth. We note in our review [2] that this randomisation method carries a high risk of bias. Post-hoc meta-analyses without these data demonstrate no substantial change to the effect estimates for all-cause mortality (original risk ratio (RR) 0.66 [95% CI 0.54, 0.81], p<0.0001; revised RR 0.72 [95% CI 0.57, 0.92], p=0.008), all-cause hospitalisation (original RR 0.91 [95% CI 0.84, 0.99], p=0.02; revised RR 0.92 [95% CI 0.84, 1.02], p=0.10) or CHF-related hospitalisation (original RR 0.79 [95% CI 0.67, 0.94], p=0.008; revised RR 0.75 [95% CI 0.60, 0.94], p=0.01). Secondly, we would classify the Tele-HF study [4, 5] as structured telephone support, rather than telemonitoring. Again, inclusion of these data alters the point estimate but not the overall result of the meta-analyses [4]. Finally, our review [2] does not include invasive telemonitoring, as the search strategy was not designed to capture these studies. Therefore, direct comparison of our review findings with recent studies of these interventions is not recommended.

Relevance: 90.00%

Abstract:

Given global demand for new infrastructure, governments face substantial challenges in funding new infrastructure while simultaneously delivering Value for Money (VfM). As background to this challenge, a brief review is given of current practice in the selection of major public sector infrastructure in Australia, along with a review of the related literature concerning the Multi-Attribute Utility Approach (MAUA) and the effect of MAUA on the role of risk management in procurement selection. To contribute towards addressing the key weaknesses of MAUA, a new first-order procurement decision-making model is introduced. A brief summary is also given of the research method and hypothesis used to test and develop the new procurement model, which uses competition as the dependent variable and as a proxy for VfM. The hypothesis is as follows: when the actual procurement mode matches the theoretical/predicted procurement mode (informed by the new procurement model), then actual competition is expected to match optimum competition (based on actual prevailing capacity vis-à-vis the theoretical/predicted procurement mode), subject to efficient tendering. The aim of this paper is to report on progress towards testing this hypothesis in terms of an analysis of two of the four data components in the hypothesis, namely actual procurement and actual competition, across 87 major public sector road and health projects in Australia. In conclusion, it is noted that the Global Financial Crisis (GFC) has seen a significant increase in competition in major public sector road and health infrastructure. If any imperfections in procurement and/or tendering are discernible, this would create the opportunity, through deployment of the economic principles embedded in the new procurement model and/or adjustments in tendering, to maintain some of this higher post-GFC level of competition throughout the next business cycle/upturn in demand, including private sector demand. Finally, the paper previews the next steps in the research with regard to the collection and analysis of data concerning theoretical/predicted procurement and optimum competition.

Relevance: 90.00%

Abstract:

This paper provides a fundamental understanding of the use of cumulative plots for travel time estimation on signalized urban networks. Analytical modeling is performed to generate cumulative plots based on the availability of data: a) Case-D, for detector data only; b) Case-DS, for detector data and signal timings; and c) Case-DSS, for detector data, signal timings and saturation flow rate. An empirical study and a sensitivity analysis based on simulation experiments show consistent performance for Case-DS and Case-DSS, whereas the performance of Case-D is inconsistent. Case-D is sensitive to the detection interval and to the signal timings within the interval: when the detection interval is an integral multiple of the signal cycle, accuracy and reliability are low, whereas for a detection interval of around 1.5 times the signal cycle, both accuracy and reliability are high.
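To make the cumulative-plot idea concrete, the sketch below (not from the paper; the detector counts, interval length and FIFO assumption are illustrative) builds cumulative vehicle counts at an upstream and a downstream detector and reads the travel time of the n-th vehicle as the horizontal gap between the two curves:

import numpy as np

def travel_time_from_cumulative_plots(t, upstream_counts, downstream_counts, n):
    """Estimate travel time of the n-th vehicle from cumulative detector counts.

    t: 1-D array of detection interval end times (s)
    upstream_counts / downstream_counts: vehicles counted per interval
    Assumes FIFO and no mid-link sinks/sources, so the horizontal distance
    between the cumulative curves at count n is that vehicle's travel time.
    """
    cum_up = np.cumsum(upstream_counts)
    cum_down = np.cumsum(downstream_counts)
    # Time at which the n-th vehicle passes each detector (linear interpolation).
    t_up = np.interp(n, cum_up, t)
    t_down = np.interp(n, cum_down, t)
    return t_down - t_up

# Toy example: ten 30-second intervals; downstream counts lag the upstream ones.
t = np.arange(30, 330, 30)
up = np.array([5, 6, 7, 6, 5, 6, 7, 6, 5, 6])
down = np.array([0, 4, 6, 7, 6, 5, 6, 7, 6, 5])
print(travel_time_from_cumulative_plots(t, up, down, n=20))  # travel time in seconds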

Relevance: 90.00%

Abstract:

Teacher professional development provided by education advisors as one-off, centrally offered sessions does not always result in change in teacher knowledge, beliefs, attitudes or practice in the classroom. As the mathematics education advisor in this study, I set out to investigate a particular method of professional development so as to influence change in a practising classroom teacher’s knowledge and practices. The particular method of professional development utilised in this study was based on several principles of effective teacher professional development and saw me working regularly in a classroom with the classroom teacher as well as providing ongoing support for her for a full school year. The intention was to document the effects of this particular method of professional development in terms of the classroom teacher’s and my professional growth to provide insights for others working as education advisors. The professional development for the classroom teacher consisted of two components. The first was the co-operative development and implementation of a mental computation instructional program for the Year 3 class. The second component was the provision of ongoing support for the classroom teacher by the education advisor. The design of the professional development and the mental computation instructional program were progressively refined throughout the year. The education advisor fulfilled multiple roles in the study as teacher in the classroom, teacher educator working with the classroom teacher and researcher. Examples of the professional growth of the classroom teacher and the education advisor which occurred as sequences of changes (growth networks, Hollingsworth, 1999) in the domains of the professional world of the classroom teacher and education advisor were drawn from the large body of data collected through regular face-to-face and email communications between the classroom teacher and the education advisor as well as from transcripts of a structured interview. The Interconnected Model of Professional Growth (Clarke & Hollingsworth, 2002; Hollingsworth, 1999) was used to summarise and represent each example of the classroom teacher’s professional growth. A modified version of this model was used to summarise and represent the professional growth of the education advisor. This study confirmed that the method of professional development utilised could lead to significant teacher professional growth related directly to her work in the classroom. Using the Interconnected Model of Professional Growth to summarise and represent the classroom teacher’s professional growth and the modified version for my professional growth assisted with the recognition of examples of how we both changed. This model has potential to be used more widely by education advisors when preparing, implementing, evaluating and following-up on planned teacher professional development activities. The mental computation instructional program developed and trialled in the study was shown to be a successful way of sequencing and managing the teaching of mental computation strategies and related number sense understandings to Year 3 students. This study was conducted in one classroom, with one teacher in one school. The strength of this study was the depth of teacher support provided made possible by the particular method of the professional development, and the depth of analysis of the process. In another school, or with another teacher, this might not have been as successful. 
While I set out to change my practice as an education advisor I did not expect the depth of learning I experienced in terms of my knowledge, beliefs, attitudes and practices as an educator of teachers. This study has changed the way in which I plan to work as an education advisor in the future.

Relevance: 90.00%

Abstract:

The quality assurance of stereotactic radiotherapy and radiosurgery treatments requires the use of small-field dose measurements that can be experimentally challenging. This study used Monte Carlo simulations to establish that PAGAT dosimetry gel can be used to provide accurate, high-resolution, three-dimensional dose measurements of stereotactic radiotherapy fields. A small cylindrical container (4 cm height, 4.2 cm diameter) was filled with PAGAT gel, placed in the parietal region inside a CIRS head phantom, and irradiated with a 12-field stereotactic radiotherapy plan. The resulting three-dimensional dose measurement was read out using an optical CT scanner and compared with the treatment planning prediction of the dose delivered to the gel during the treatment. A BEAMnrc/DOSXYZnrc simulation of this treatment was completed to provide a standard against which the accuracy of the gel measurement could be gauged. The three-dimensional dose distributions obtained from Monte Carlo and from the gel measurement were found to be in better agreement with each other than with the dose distribution provided by the treatment planning system's pencil beam calculation. Both sets of data showed close agreement with the treatment planning system's dose distribution through the centre of the irradiated volume and substantial disagreement with the treatment planning system at the penumbrae. The Monte Carlo calculations and gel measurements both indicated that the treated volume was up to 3 mm narrower, with steeper penumbrae and more variable out-of-field dose, than predicted by the treatment planning system. The Monte Carlo simulations allowed the accuracy of the PAGAT gel dosimeter to be verified in this case, allowing PAGAT gel to be utilised with greater confidence in future measurements of dose from stereotactic and other radiotherapy treatments.
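The abstract does not state how the dose grids were compared, so purely as a generic illustration of one simple way to compare two co-registered 3-D dose arrays (a percentage dose-difference map, not necessarily the study's analysis), a minimal sketch is:

import numpy as np

def dose_difference_map(dose_a, dose_b, normalisation=None):
    """Percentage dose difference between two co-registered 3-D dose grids.

    dose_a, dose_b: numpy arrays of identical shape (e.g. gel measurement vs.
    Monte Carlo); normalisation defaults to the maximum of dose_a.
    This is a generic comparison, not the analysis used in the study.
    """
    norm = normalisation if normalisation is not None else float(np.max(dose_a))
    return 100.0 * (dose_b - dose_a) / norm

# Toy example on a small synthetic grid.
a = np.random.rand(16, 16, 16)
b = a + 0.02 * np.random.randn(16, 16, 16)
diff = dose_difference_map(a, b)
print(np.percentile(np.abs(diff), 95))  # 95th percentile |difference| in %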

Relevance: 90.00%

Abstract:

Proteases regulate a spectrum of diverse physiological processes, and dysregulation of proteolytic activity drives a plethora of pathological conditions. Understanding protease function is essential to appreciating many aspects of normal physiology and progression of disease. Consequently, development of potent and specific inhibitors of proteolytic enzymes is vital to provide tools for the dissection of protease function in biological systems and for the treatment of diseases linked to aberrant proteolytic activity. The studies in this thesis describe the rational design of potent inhibitors of three proteases that are implicated in disease development. Additionally, key features of the interaction of proteases and their cognate inhibitors or substrates are analysed and a series of rational inhibitor design principles are expounded and tested. Rational design of protease inhibitors relies on a comprehensive understanding of protease structure and biochemistry. Analysis of known protease cleavage sites in proteins and peptides is a commonly used source of such information. However, model peptide substrate and protein sequences have widely differing levels of backbone constraint and hence can adopt highly divergent structures when binding to a protease’s active site. This may result in identical sequences in peptides and proteins having different conformations and diverse spatial distribution of amino acid functionalities. Regardless of this, protein and peptide cleavage sites are often regarded as being equivalent. One of the key findings in the following studies is a definitive demonstration of the lack of equivalence between these two classes of substrate and invalidation of the common practice of using the sequences of model peptide substrates to predict cleavage of proteins in vivo. Another important feature for protease substrate recognition is subsite cooperativity. This type of cooperativity is commonly referred to as protease or substrate binding subsite cooperativity and is distinct from allosteric cooperativity, where binding of a molecule distant from the protease active site affects the binding affinity of a substrate. Subsite cooperativity may be intramolecular where neighbouring residues in substrates are interacting, affecting the scissile bond’s susceptibility to protease cleavage. Subsite cooperativity can also be intermolecular where a particular residue’s contribution to binding affinity changes depending on the identity of neighbouring amino acids. Although numerous studies have identified subsite cooperativity effects, these findings are frequently ignored in investigations probing subsite selectivity by screening against diverse combinatorial libraries of peptides (positional scanning synthetic combinatorial library; PS-SCL). This strategy for determining cleavage specificity relies on the averaged rates of hydrolysis for an uncharacterised ensemble of peptide sequences, as opposed to the defined rate of hydrolysis of a known specific substrate. Further, since PS-SCL screens probe the preference of the various protease subsites independently, this method is inherently unable to detect subsite cooperativity. However, mean hydrolysis rates from PS-SCL screens are often interpreted as being comparable to those produced by single peptide cleavages. 
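To make concrete why positional averaging can hide cooperative substrates, the toy calculation below (illustrative numbers only, not data from the thesis) builds a small two-subsite rate table in which one residue pair is cooperatively favoured; the PS-SCL-style positional averages point to a different "consensus" sequence than a direct screen of individual peptides would.

import numpy as np

# Toy two-subsite example: rows = residue at one subsite, columns = residue at
# a neighbouring subsite. Hydrolysis rates are invented for illustration.
residues = ["A", "K", "Y"]
rates = np.array([
    [1.0, 1.0, 10.0],  # A paired with A, K, Y -- the A-Y pair is cooperatively fast
    [5.0, 5.0,  3.0],  # K paired with A, K, Y
    [5.0, 5.0,  2.0],  # Y paired with A, K, Y
])

# PS-SCL-style readout: each subsite is profiled independently by averaging
# over the other (mixed) position, so cooperative pairs are averaged away.
best_row = residues[int(np.argmax(rates.mean(axis=1)))]
best_col = residues[int(np.argmax(rates.mean(axis=0)))]
consensus_rate = rates[residues.index(best_row), residues.index(best_col)]
print(f"PS-SCL-style consensus: {best_row}-{best_col} (rate {consensus_rate})")

# SML-style readout: rank individually synthesised and measured peptides directly.
i, j = np.unravel_index(int(np.argmax(rates)), rates.shape)
print(f"Best individual peptide: {residues[i]}-{residues[j]} (rate {rates[i, j]})")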
Before this study no large systematic evaluation had been made to determine the level of correlation between protease selectivity as predicted by screening against a library of combinatorial peptides and cleavage of individual peptides. This subject is specifically explored in the studies described here. In order to establish whether PS-SCL screens could accurately determine the substrate preferences of proteases, a systematic comparison of data from PS-SCLs with libraries containing individually synthesised peptides (sparse matrix library; SML) was carried out. These SML libraries were designed to include all possible sequence combinations of the residues that were suggested to be preferred by a protease using the PS-SCL method. SML screening against the three serine proteases kallikrein 4 (KLK4), kallikrein 14 (KLK14) and plasmin revealed highly preferred peptide substrates that could not have been deduced by PS-SCL screening alone. Comparing protease subsite preference profiles from screens of the two types of peptide libraries showed that the most preferred substrates were not detected by PS-SCL screening, as a consequence of intermolecular cooperativity being negated by the very nature of PS-SCL screening. Sequences that are highly favoured as a result of intermolecular cooperativity achieve optimal protease subsite occupancy, and thereby interact with very specific determinants of the protease. Identifying these substrate sequences is important since they may be used to produce potent and selective inhibitors of proteolytic enzymes. This study found that highly favoured substrate sequences that relied on intermolecular cooperativity allowed for the production of potent inhibitors of KLK4, KLK14 and plasmin. Peptide aldehydes based on preferred plasmin sequences produced high-affinity transition state analogue inhibitors for this protease. The most potent of these maintained specificity over plasma kallikrein (known to have a very similar substrate preference to plasmin). Furthermore, the efficiency of this inhibitor in blocking fibrinolysis in vitro was comparable to aprotinin, which previously saw clinical use to reduce perioperative bleeding. One substrate sequence particularly favoured by KLK4 was substituted into the 14 amino acid, circular sunflower trypsin inhibitor (SFTI). This resulted in a highly potent and selective inhibitor (SFTI-FCQR) which attenuated protease activated receptor signalling by KLK4 in vitro. Moreover, SFTI-FCQR and paclitaxel synergistically reduced growth of ovarian cancer cells in vitro, making this inhibitor a lead compound for further therapeutic development. Similar incorporation of a preferred KLK14 amino acid sequence into the SFTI scaffold produced a potent inhibitor for this protease. However, the conformationally constrained SFTI backbone enforced a different intramolecular cooperativity, which masked a KLK14-specific determinant. As a consequence, the level of selectivity achievable was lower than that found for the KLK4 inhibitor. Standard mechanism inhibitors such as SFTI rely on a stable acyl-enzyme intermediate for high-affinity binding. This is achieved by a conformationally constrained canonical binding loop that allows for reformation of the scissile peptide bond after cleavage. Amino acid substitutions within the inhibitor to target a particular protease may compromise structural determinants that support the rigidity of the binding loop and thereby prevent the engineered inhibitor reaching its full potential.
An in silico analysis was carried out to examine the potential for further improvements to the potency and selectivity of the SFTI-based KLK4 and KLK14 inhibitors. Molecular dynamics simulations suggested that the substitutions within SFTI required to target KLK4 and KLK14 had compromised the intramolecular hydrogen bond network of the inhibitor and caused a concomitant loss of binding loop stability. Furthermore, in silico amino acid substitution revealed a consistent correlation between a higher frequency of formation and number of internal hydrogen bonds in SFTI variants and lower inhibition constants. These predictions allowed for the production of second-generation inhibitors with enhanced binding affinity toward both targets and highlight the importance of considering intramolecular cooperativity effects when engineering proteins or circular peptides to target proteases. The findings from this study show that although PS-SCLs are a useful tool for high-throughput screening of approximate protease preference, later refinement by SML screening is needed to reveal optimal subsite occupancy due to cooperativity in substrate recognition. This investigation has also demonstrated the importance of maintaining structural determinants of backbone constraint and conformation when engineering standard mechanism inhibitors for new targets. Combined, these results show that backbone conformation and amino acid cooperativity have more prominent roles than previously appreciated in determining substrate/inhibitor specificity and binding affinity. The three key inhibitors designed during this investigation are now being developed as lead compounds for cancer chemotherapy, control of fibrinolysis and cosmeceutical applications. These compounds form the basis of a portfolio of intellectual property which will be further developed in the coming years.

Relevance: 90.00%

Abstract:

Mixture models are a flexible tool for unsupervised clustering that have found popularity in a vast array of research areas. In studies of medicine, the use of mixtures holds the potential to greatly enhance our understanding of patient responses through the identification of clinically meaningful clusters that, given the complexity of many data sources, may otherwise be intangible. Furthermore, when developed in the Bayesian framework, mixture models provide a natural means for capturing and propagating uncertainty in different aspects of a clustering solution, arguably resulting in richer analyses of the population under study. This thesis aims to investigate the use of Bayesian mixture models in analysing varied and detailed sources of patient information collected in the study of complex disease. The first aim of this thesis is to showcase the flexibility of mixture models in modelling markedly different types of data. In particular, we examine three common variants on the mixture model, namely, finite mixtures, Dirichlet Process mixtures and hidden Markov models. Beyond the development and application of these models to different sources of data, this thesis also focuses on modelling different aspects relating to uncertainty in clustering. Examples of clustering uncertainty considered are uncertainty in a patient's true cluster membership and uncertainty in the true number of clusters present. Finally, this thesis aims to address and propose solutions to the task of comparing clustering solutions, whether this be comparing patients or observations assigned to different subgroups or comparing clustering solutions over multiple datasets. To address these aims, we consider a case study in Parkinson's disease (PD), a complex and commonly diagnosed neurodegenerative disorder. In particular, two commonly collected sources of patient information are considered. The first source of data comprises symptoms associated with PD, recorded using the Unified Parkinson's Disease Rating Scale (UPDRS), and constitutes the first half of this thesis. The second half of this thesis is dedicated to the analysis of microelectrode recordings collected during Deep Brain Stimulation (DBS), a popular palliative treatment for advanced PD. Analysis of this second source of data centres on the problems of unsupervised detection and sorting of action potentials or "spikes" in recordings of multiple cell activity, providing valuable information on real-time neural activity in the brain.
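As a minimal, generic illustration of the kind of model described above (not the thesis's own implementation), the sketch below fits a truncated Dirichlet Process Gaussian mixture to synthetic data using scikit-learn's variational approximation; the synthetic data, component cap and weight threshold are assumptions for illustration.

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Synthetic two-cluster data standing in for patient measurements.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2)),
    rng.normal(loc=[3.0, 3.0], scale=0.5, size=(100, 2)),
])

# Truncated Dirichlet Process mixture: the upper bound on components is 10,
# but the stick-breaking prior lets unused components receive ~zero weight.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    random_state=0,
).fit(X)

labels = dpgmm.predict(X)
effective_k = int(np.sum(dpgmm.weights_ > 0.01))  # components with non-negligible weight
print("Effective number of clusters:", effective_k)
print("Cluster sizes:", np.bincount(labels))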

Relevance: 90.00%

Abstract:

This thesis examines consumer-initiated value co-creation behaviour in the context of convergent mobile online services using a Service-Dominant logic (SD logic) theoretical framework. It focuses on non-reciprocal marketing phenomena such as open innovation and user-generated content, whereby new viable business models are derived and consumer roles and community become essential to the success of business. Attention to customers' roles and personalised experiences in value co-creation has been recognised in the literature (e.g., Prahalad & Ramaswamy, 2000; Prahalad, 2004; Prahalad & Ramaswamy, 2004). Similarly, in a subsequent iteration of their 2004 version of the foundations of SD logic, Vargo and Lusch (2006) replaced the concept of value co-production with value co-creation and suggested that a value co-creation mindset is essential to underpin the firm-customer value creation relationship. Much of this focus, however, has been limited to firm-initiated value co-creation (e.g., B2B or B2C), while consumer-initiated value creation, particularly consumer-to-consumer (C2C), has received little attention in the SD logic literature. While it is recognised that not every consumer wishes to make the effort to engage extensively in co-creation processes (MacDonald & Uncles, 2009), some consumers may not be satisfied with a standard product; instead they engage in the effort required for personalisation that potentially leads to greater value for themselves, and which may benefit not only the firm, but other consumers as well. Literature suggests that there are consumers who do, and as a result initiate such behaviour and expend effort to engage in co-creation activity (e.g., Gruen, Osmonbekov & Czaplewski, 2006, 2007; MacDonald & Uncles, 2009). In terms of consumers' engagement in value proposition (co-production) and value actualisation (co-creation), SD logic (Vargo & Lusch, 2004, 2008) provides a new lens that enables marketing scholars to transcend existing marketing theory and facilitates marketing practitioners to initiate service-centric and value co-creation oriented marketing practices. Although the active role of the consumer is acknowledged in the SD logic-oriented literature, we know little about how and why consumers participate in a value co-creation process (Payne, Storbacka, & Frow, 2008). Literature suggests that researchers should focus on areas such as C2C interaction (Gummesson 2007; Nicholls 2010) and consumer experience sharing and co-creation (Belk 2009; Prahalad & Ramaswamy 2004). In particular, this thesis seeks to better understand consumer-initiated value co-creation, which is aligned with the notion that consumers can be resource integrators (Baron & Harris, 2008) and more. The reason for this focus is that consumers today are more empowered in both online and offline contexts (Füller, Mühlbacher, Matzler, & Jawecki, 2009; Sweeney, 2007). Active consumers take the initiative to engage and co-create solutions with other active actors in the market for their betterment of life (Ballantyne & Varey, 2006; Grönroos & Ravald, 2009). In terms of the organisation of the thesis, this thesis first takes a 'zoom-out' (Vargo & Lusch, 2011) approach and develops the Experience Co-Creation (ECo) framework that is aligned with balanced centricity (Gummesson, 2008) and an Actor-to-Actor worldview (Vargo & Lusch, 2011). This ECo framework is based on an extended 'SD logic friendly lexicon'
(Lusch & Vargo, 2006): value initiation and value initiator, value-in-experience, betterment centricity and betterment outcomes, and experience co-creation contexts derived from five gaps identified from the SD logic literature review. The framework is also designed to accommodate broader marketing phenomena (i.e., both reciprocal and non-reciprocal marketing phenomena). After zooming out and establishing the ECo framework, the thesis takes a zoom-in approach and places attention back on the value co-creation process. Owing to the scope of the current research, this thesis focuses specifically on non-reciprocal value co-creation phenomena initiated by consumers in online communities. Two emergent concepts: User Experience Sharing (UES) and Co-Creative Consumers are proposed grounded in the ECo framework. Together, these two theorised concepts shed light on the following two propositions: (1) User Experience Sharing derives value-in-experience as consumers make initiative efforts to participate in value co-creation, and (2) Co-Creative Consumers are value initiators who perform UES. Three research questions were identified underpinning the scope of this research: RQ1: What factors influence consumers to exhibit User Experience Sharing behaviour? RQ2: Why do Co-Creative Consumers participate in User Experience Sharing as part of value co-creation behaviour? RQ3: What are the characteristics of Co-Creative Consumers? To answer these research questions, two theoretical models were developed: the User Experience Sharing Behaviour Model (UESBM) grounded in the Theory of Planned Behaviour framework, and the Co-Creative Consumer Motivation Model (CCMM) grounded in the Motivation, Opportunity, Ability framework. The models use SD logic consistent constructs and draw upon multiple streams of literature including consumer education, consumer psychology and consumer behaviour, and organisational psychology and organisational behaviour. These constructs include User Experience Sharing with Other Consumers (UESC), User Experience Sharing with Firms (UESF), Enjoyment in Helping Others (EIHO), Consumer Empowerment (EMP), Consumer Competence (COMP), and Intention to Engage in User Experience Sharing (INT), Attitudes toward User Experience Sharing (ATT) and Subjective Norm (SN) in the UESBM, and User Experience Sharing (UES), Consumer Citizenship (CIT), Relating Needs of Self (RELS) and Relating Needs of Others (RELO), Newness (NEW), Mavenism (MAV), Use Innovativeness (UI), Personal Initiative (PIN) and Communality (COMU) in the CCMM. Many of these constructs are relatively new to marketing and require further empirical evidence for support. Two studies were conducted to underpin the corresponding research questions. Study One was conducted to calibrate and re-specify the proposed models. Study Two was a replica study to confirm the proposed models. In Study One, data were collected from a PC DIY online community. In Study Two, a majority of data were collected from Apple product online communities. The data were examined using structural equation modelling and cluster analysis. Considering the nature of the forums, the Study One data is considered to reflect some characteristics of Prosumers and the Study Two data is considered to reflect some characteristics of Innovators. The results drawn from two independent samples (N = 326 and N = 294) provide empirical support for the overall structure theorised in the research models. 
The results in both models show that Enjoyment in Helping Others and Consumer Competence in the UESBM, and Consumer Citizenship and Relating Needs in the CCMM, have significant impacts on UES. Consistent results appeared in both Study One and Study Two. The results also support the conceptualisation of Co-Creative Consumers and indicate that Co-Creative Consumers are individuals who are able to relate the needs of themselves and others and feel a responsibility to share their valuable personal experiences. In general, the results shed light on how and why consumers voluntarily participate in the value co-creation process. The findings provide evidence to conceptualise User Experience Sharing behaviour as well as the Co-Creative Consumer using the lens of SD logic. This research is a pioneering study that incorporates and empirically tests SD logic-consistent constructs to examine a particular area of the logic, namely consumer-initiated value co-creation behaviour. This thesis also informs practitioners about how to facilitate and understand the factors that drive engagement in either firm- or consumer-initiated online communities.

Relevance: 90.00%

Abstract:

In the last few years we have observed a proliferation of approaches for clustering XML documents and schemas based on their structure and content. The presence of such a huge number of approaches is due to the different applications requiring the XML data to be clustered. These applications need data in the form of similar contents, tags, paths, structures and semantics. In this paper, we first outline the application contexts in which clustering is useful, then we survey the approaches proposed so far, relying on the abstract representation of data (instances or schema), on the identified similarity measure, and on the clustering algorithm. This presentation leads to a taxonomy in which the current approaches can be classified and compared. We aim at introducing an integrated view that is useful when comparing XML data clustering approaches, when developing a new clustering algorithm, and when implementing an XML clustering component. Finally, the paper describes future trends and research issues that still need to be addressed.
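As a small, generic illustration of one structure-based approach in this space (not a specific method from the survey), the sketch below summarises each XML document by its set of root-to-node tag paths and compares documents by Jaccard distance between those sets; the sample documents are assumptions for illustration.

import xml.etree.ElementTree as ET
from itertools import combinations

def tag_paths(xml_text: str) -> set[str]:
    """Return the set of root-to-node tag paths, a simple structural summary."""
    root = ET.fromstring(xml_text)
    paths = set()

    def walk(node, prefix):
        path = f"{prefix}/{node.tag}"
        paths.add(path)
        for child in node:
            walk(child, path)

    walk(root, "")
    return paths

def jaccard_distance(a: set[str], b: set[str]) -> float:
    return 1.0 - len(a & b) / len(a | b)

docs = {
    "d1": "<book><title/><author/></book>",
    "d2": "<book><title/><author/><year/></book>",
    "d3": "<article><headline/><body/></article>",
}
summaries = {name: tag_paths(text) for name, text in docs.items()}

# Pairwise structural distances; a clustering algorithm would group the close pairs.
for x, y in combinations(docs, 2):
    d = jaccard_distance(summaries[x], summaries[y])
    print(f"{x} vs {y}: distance {d:.2f}")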

Relevance: 90.00%

Abstract:

This paper argues for a renewed focus on statistical reasoning in the beginning school years, with opportunities for children to engage in data modelling. Results are reported from the first year of a 3-year longitudinal study in which three classes of first-grade children (6-year-olds) and their teachers engaged in data modelling activities. The theme of Looking after our Environment, part of the children's science curriculum, provided the task context. The goals for the two activities addressed here included engaging children in core components of data modelling, namely, selecting attributes, structuring and representing data, identifying variation in data, and making predictions from given data. Results include the various ways in which children represented and re-represented collected data, including attribute selection, and the metarepresentational competence they displayed in doing so. The "data lenses" through which the children dealt with informal inference (variation and prediction) are also reported.

Relevance: 90.00%

Abstract:

With rapid and continuing growth of learning support initiatives in mathematics and statistics found in many parts of the world, and with the likelihood that this trend will continue, there is a need to ensure that robust and coherent measures are in place to evaluate the effectiveness of these initiatives. The nature of learning support brings challenges for measurement and analysis of its effects. After briefly reviewing the purpose, rationale for, and extent of current provision, this article provides a framework for those working in learning support to think about how their efforts can be evaluated. It provides references and specific examples of how workers in this field are collecting, analysing and reporting their findings. The framework is used to structure evaluation in terms of usage of facilities, resources and services provided, and also in terms of improvements in performance of the students and staff who engage with them. Very recent developments have started to address the effects of learning support on the development of deeper approaches to learning, the affective domain and the development of communities of practice of both learners and teachers. This article intends to be a stimulus to those who work in mathematics and statistics support to gather even richer, more valuable, forms of data. It provides a 'toolkit' for those interested in evaluation of learning support and closes by referring to an on-line resource being developed to archive the growing body of evidence. © 2011 Taylor & Francis.

Relevance: 90.00%

Abstract:

The International Classification of Diseases (ICD) is used to categorise diseases, injuries and external causes, and is a key epidemiological tool enabling the storage and retrieval of data from health and vital records to produce core international mortality and morbidity statistics. The ICD is updated periodically to ensure the classification remains current, and work is now underway to develop the next revision, ICD-11. It has been almost 20 years since the last ICD edition was published and over 60 years since the last substantial structural revision of the external causes chapter. Revision of such a critical tool requires transparency and documentation to ensure that changes made to the classification system are recorded comprehensively for future reference. In this paper, the authors provide a history of external causes classification development and outline the structure of the external causes chapter. Approaches to manage ICD-10 deficiencies are discussed, and the ICD-11 revision approach regarding the development of, rationale for, and implications of proposed changes to the chapter is outlined. Through improved capture of external cause concepts in ICD-11, a stronger evidence base will be available to inform injury prevention, treatment, rehabilitation and policy initiatives, ultimately contributing to a reduction in injury morbidity and mortality.