903 results for linked open data


Relevance:

30.00%

Publisher:

Abstract:

Porphyrins are one of Nature's essential building blocks, playing an important role in several biological systems including oxygen transport, photosynthesis, and enzyme catalysis. Their capacity to absorb visible light, facilitate oxidation and reduction, and act as energy- and electron-transfer agents, in particular when several are held closely together, is of interest to chemists who seek to mimic Nature and to make and use these compounds in order to synthesise novel advanced materials. During this project, 26 new 5,10-diarylsubstituted porphyrin monomers, 10 dimers, and 1 tetramer were synthesised. The spectroscopic and structural properties of these compounds were investigated using 1D/2D 1H NMR, UV/visible, ATR-IR and Raman spectroscopy, mass spectrometry, X-ray crystallography, electrochemistry and gel permeation chromatography. Nitration, amination, bromination and alkynylation of one or both of the meso positions of the porphyrin monomers have expanded the synthetic possibilities for the 5,10-diarylsubstituted porphyrins. The development of these new porphyrin monomers has led to the successful synthesis of new azo- and butadiyne-linked dimers. The functionalisation of these compounds was investigated, in particular nitration, amination, and bromination. The synthesised dimers containing the azo bridge have absorption spectra that show a large split in the Soret bands and intense Q-bands that have been significantly red-shifted. The butadiyne dimers also have intense, red-shifted Q-bands but smaller Soret band splittings. Crystal structures of two new azoporphyrins have been acquired and compared to the azoporphyrin previously synthesised from 5,10,15-triarylsubstituted porphyrin monomers. A completely new cyclic porphyrin oligomer (CPO) was synthesised, comprising four porphyrin monomers linked by azo and butadiyne bridges. This is the first cyclic tetramer that has both the azo and butadiyne linking groups. The absorption spectrum of the tetramer exhibits a large Soret splitting, making it more similar to the azo dimers than to the butadiyne-linked dimers. The spectroscopic characteristics of the synthesised tetramer have been compared to those of other cyclic porphyrin tetramers. The collected data indicate that the newly synthesised cyclic tetramer has more efficient π-overlap and better ground-state electronic communication between the porphyrin rings.

Relevance:

30.00%

Publisher:

Abstract:

This dissertation is primarily an applied statistical modelling investigation, motivated by a case study comprising real data and real questions. Theoretical questions on modelling and computation of normalization constants arose from pursuit of these data analytic questions. The essence of the thesis can be described as follows. Consider binary data observed on a two-dimensional lattice. A common problem with such data is the ambiguity of zeroes recorded. These may represent zero response given some threshold (presence) or that the threshold has not been triggered (absence). Suppose that the researcher wishes to estimate the effects of covariates on the binary responses, whilst taking into account underlying spatial variation, which is itself of some interest. This situation arises in many contexts and the dingo, cypress and toad case studies described in the motivation chapter are examples of this. Two main approaches to modelling and inference are investigated in this thesis. The first is frequentist and based on generalized linear models, with spatial variation modelled by using a block structure or by smoothing the residuals spatially. The EM algorithm can be used to obtain point estimates, coupled with bootstrapping or asymptotic MLE estimates for standard errors. The second approach is Bayesian and based on a three- or four-tier hierarchical model, comprising a logistic regression with covariates for the data layer, a binary Markov random field (MRF) for the underlying spatial process, and suitable priors for parameters in these main models. The three-parameter autologistic model is a particular MRF of interest. Markov chain Monte Carlo (MCMC) methods comprising hybrid Metropolis/Gibbs samplers are suitable for computation in this situation. Model performance can be gauged by MCMC diagnostics. Model choice can be assessed by incorporating another tier in the modelling hierarchy. This requires evaluation of a normalization constant, a notoriously difficult problem. Difficulty with estimating the normalization constant for the MRF can be overcome by using a path integral approach, although this is a highly computationally intensive method. Different methods of estimating ratios of normalization constants (NCs) are investigated, including importance sampling Monte Carlo (ISMC), dependent Monte Carlo based on MCMC simulations (MCMC), and reverse logistic regression (RLR). I develop an idea that is present, though not fully developed, in the literature, and propose the integrated mean canonical statistic (IMCS) method for estimating log NC ratios for binary MRFs. The IMCS method falls within the framework of the newly identified path sampling methods of Gelman & Meng (1998) and outperforms ISMC, MCMC and RLR. It also does not rely on simplifying assumptions, such as ignoring spatio-temporal dependence in the process. A thorough investigation is made of the application of IMCS to the three-parameter autologistic model. This work introduces background computations required for the full implementation of the four-tier model in Chapter 7. Two different extensions of the three-tier model to a four-tier version are investigated. The first extension incorporates temporal dependence in the underlying spatio-temporal process. The second extension allows the successes and failures in the data layer to depend on time. The MCMC computational method is extended to incorporate the extra layer.
A major contribution of the thesis is the development of a fully Bayesian approach to inference for these hierarchical models for the first time. Note: The author of this thesis has agreed to make it open access but invites people downloading the thesis to send her an email via the 'Contact Author' function.
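The abstract names the three-parameter autologistic model and hybrid Metropolis/Gibbs sampling without giving computational detail. As a minimal, illustrative sketch (not the thesis's own implementation), the Python snippet below runs single-site Gibbs sweeps over a binary lattice under one common three-parameter autologistic parameterisation: an abundance term plus separate horizontal and vertical association terms. The parameter values, lattice size and boundary treatment are assumptions made for the example.

```python
import numpy as np

def gibbs_sweep(x, alpha, beta_h, beta_v, rng):
    # One single-site Gibbs sweep over a binary lattice under an assumed
    # three-parameter autologistic model: alpha controls overall abundance,
    # beta_h and beta_v the horizontal and vertical associations.
    n_rows, n_cols = x.shape
    for i in range(n_rows):
        for j in range(n_cols):
            # Sum the horizontal and vertical neighbours (free boundary).
            h = (x[i, j - 1] if j > 0 else 0) + (x[i, j + 1] if j < n_cols - 1 else 0)
            v = (x[i - 1, j] if i > 0 else 0) + (x[i + 1, j] if i < n_rows - 1 else 0)
            eta = alpha + beta_h * h + beta_v * v
            p = 1.0 / (1.0 + np.exp(-eta))        # conditional P(x[i, j] = 1 | neighbours)
            x[i, j] = 1 if rng.random() < p else 0
    return x

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=(30, 30))             # arbitrary 30 x 30 binary lattice
for _ in range(200):                              # burn-in sweeps
    x = gibbs_sweep(x, alpha=-0.5, beta_h=0.4, beta_v=0.4, rng=rng)
```

In a hierarchical scheme of the kind described above, sweeps like this over the latent field would typically be interleaved with Metropolis updates of the regression and association parameters.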

Relevance:

30.00%

Publisher:

Abstract:

My research investigates why nouns are learned disproportionately more frequently than other kinds of words during early language acquisition (Gentner, 1982; Gleitman et al., 2004). This question must be considered in the context of cognitive development in general. Infants have two major streams of environmental information to make meaningful: perceptual and linguistic. Perceptual information flows in from the senses and is processed into symbolic representations by the primitive language of thought (Fodor, 1975). These symbolic representations are then linked to linguistic input to enable language comprehension and ultimately production. Yet how exactly does perceptual information become conceptualized? Although this question is difficult, there has been progress. One way that children might have an easier job is if they have structures that simplify the data. Thus, if particular sorts of perceptual information could be separated from the mass of input, then it would be easier for children to refer to those specific things when learning words (Spelke, 1990; Pylyshyn, 2003). It would be easier still if linguistic input were segmented in predictable ways (Gentner, 1982; Gleitman et al., 2004). Unfortunately, the frequency of patterns in lexical or grammatical input cannot explain the cross-cultural and cross-linguistic tendency to favor nouns over verbs and predicates. There are three examples of this failure: 1) a wide variety of nouns are uttered less frequently than a smaller number of verbs and yet are learnt far more easily (Gentner, 1982); 2) word order and morphological transparency offer no insight when you contrast the sentence structures and word inflections of different languages (Slobin, 1973); and 3) particular language teaching behaviors (e.g. pointing at objects and repeating names for them) have little impact on children's tendency to prefer concrete nouns in their first fifty words (Newport et al., 1977). Although the linguistic solution appears problematic, there has been increasing evidence that the early visual system does indeed segment perceptual information in specific ways before the conscious mind begins to intervene (Pylyshyn, 2003). I argue that nouns are easier to learn because their referents directly connect with innate features of the perceptual faculty. This hypothesis stems from work done on visual indexes by Zenon Pylyshyn (2001, 2003). Pylyshyn argues that the early visual system (the architecture of the "vision module") segments perceptual data into pre-conceptual proto-objects called FINSTs. FINSTs typically correspond to physical things such as Spelke objects (Spelke, 1990). Hence, before conceptualization, visual objects are picked out by the perceptual system demonstratively, like a pointing finger indicating 'this' or 'that'. I suggest that this primitive system of demonstration elaborates on Gareth Evans's (1982) theory of nonconceptual content. Nouns are learnt first because their referents attract demonstrative visual indexes. This theory also explains why infants less often name stationary objects such as 'plate' or 'table', but do name things that attract the focal attention of the early visual system, i.e., small objects that move, such as 'dog' or 'ball'. This view leaves open the questions of how blind children learn words for visible objects and why children learn category nouns (e.g. 'dog') rather than proper nouns (e.g. 'Fido') or higher taxonomic distinctions (e.g. 'animal').

Relevance:

30.00%

Publisher:

Abstract:

While externally moderated standards-based assessment has been practised in Queensland senior schooling for more than three decades, there has been no such practice in the middle years. With the introduction of standards at state and national levels in these years, teacher judgement as developed in moderation practices is now vital. This paper argues that, in this context of assessment reform, standards intended to inform teacher judgement and to build assessment capacity are necessary but not sufficient for maintaining teacher and public confidence in schooling. Teacher judgement is intrinsic to moderation, and to professional practice, and can no longer remain private. Moderation, too, is intrinsic to efforts by the profession to realise judgements that are defensible, dependable and open to scrutiny. Moderation can no longer be considered an optional extra and requires system-level support, especially if, as intended, the standards are linked to system-wide efforts to improve student learning. In presenting this argument we draw on an Australian Research Council-funded study with key industry partners (the Queensland Studies Authority and the National Council for Curriculum and Assessment of the Republic of Ireland). The data analysed included teacher interviews and additional teacher talk recorded during moderation sessions, both collected during the initial phase of policy development. The analysis identified the issues that emerge in moderation meetings designed to reach consistent, reliable judgements. Of interest are the different ways in which teachers talked through and interacted with one another to reach agreement about the quality of student work in the application of standards. There is evidence of differences in the way that teachers made compensations and trade-offs in their award of grades, depending on the subject domain in which they teach. This article concludes with some empirically derived insights into moderation practices as policy and social events.

Relevance:

30.00%

Publisher:

Abstract:

Background: Efforts to prevent the development of overweight and obesity have increasingly focused on the early years of the life course, as we recognise that both metabolic and behavioural patterns are often established within the first few years of life. Randomised controlled trials (RCTs) of interventions are even more powerful when, with forethought, they are synthesised into an individual patient data (IPD) prospective meta-analysis (PMA). An IPD PMA is a unique research design where several trials are identified for inclusion in an analysis before any of the individual trial results become known and the data are provided for each randomised patient. This methodology minimises the publication and selection bias often associated with a retrospective meta-analysis by allowing hypotheses, analysis methods and selection criteria to be specified a priori. Methods/Design: The Early Prevention of Obesity in CHildren (EPOCH) Collaboration was formed in 2009. The main objective of the EPOCH Collaboration is to determine if early intervention for childhood obesity impacts on body mass index (BMI) z-scores at age 18-24 months. Additional research questions will focus on whether early intervention has an impact on children's dietary quality, TV viewing time, duration of breastfeeding and parenting styles. This protocol includes the hypotheses, inclusion criteria and outcome measures to be used in the IPD PMA. The sample size of the combined dataset at final outcome assessment (approximately 1800 infants) will allow greater precision when exploring differences in the effect of early intervention with respect to pre-specified participant- and intervention-level characteristics. Discussion: The data collection procedures and analysis plans will be finalised by the end of 2010. Data collection and analysis will occur during 2011-2012 and results should be available by 2013. Trial registration number: ACTRN12610000789066
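The protocol above does not spell out the pooled analysis model. Purely as an illustration of what a one-stage IPD analysis of the combined dataset might look like, the sketch below fits a linear mixed model with trial-specific random intercepts to simulated data; the trial labels, effect size and all values are invented and are not EPOCH results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for a combined IPD dataset: one row per infant, with the
# contributing trial, randomised arm and BMI z-score at final assessment.
rng = np.random.default_rng(1)
rows = []
for trial in ["A", "B", "C", "D"]:                # hypothetical contributing trials
    trial_shift = rng.normal(0.0, 0.1)            # trial-level baseline difference
    for _ in range(450):                          # ~1800 infants in total
        treated = int(rng.integers(0, 2))
        bmi_z = 0.5 + trial_shift - 0.15 * treated + rng.normal(0.0, 0.4)
        rows.append({"trial": trial, "treatment": treated, "bmi_z": bmi_z})
ipd = pd.DataFrame(rows)

# One-stage IPD analysis: pooled intervention effect on BMI z-score,
# with trial-specific random intercepts.
fit = smf.mixedlm("bmi_z ~ treatment", data=ipd, groups=ipd["trial"]).fit()
print(fit.summary())
```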

Relevance:

30.00%

Publisher:

Abstract:

Being in paid employment is socially valued and is linked to health, financial security and time use. Issues arising from a lack of occupational choice and control, and from diminished role partnerships, are particularly problematic in the lives of people with an intellectual disability. Informal support networks have been shown to influence work opportunities for people without disabilities, but their impact on the work experiences of people with a disability has not been thoroughly explored. The experience of 'work' and preparation for work were explored with a group of four people with an intellectual disability (the participants) and the key members of their informal support networks (network members) in New South Wales, Australia. Network members and participants were interviewed, and participant observations of work and other activities were undertaken. Data analysis included open, conceptual and thematic coding. Data analysis software assisted in managing the large datasets across multiple team members. The insight and actions of network members created and sustained the employment and support opportunities that effectively matched the needs and interests of the participants. Recommendations for future research are outlined.

Relevance:

30.00%

Publisher:

Abstract:

Background: Internationally, research on child maltreatment-related injuries has been hampered by a lack of available routinely collected health data to identify cases, examine causes, identify risk factors and explore health outcomes. Routinely collected hospital separation data coded using the International Classification of Diseases and Related Health Problems (ICD) system provide an internationally standardised data source for classifying and aggregating diseases, injuries, causes of injuries and related health conditions for statistical purposes. However, there has been limited research examining the reliability of these data for child maltreatment surveillance purposes. This study examined the reliability of coding of child maltreatment in Queensland, Australia. Methods: A retrospective medical record review and recoding methodology was used to assess the reliability of coding of child maltreatment. A stratified sample of hospitals across Queensland was selected for this study, and a stratified random sample of cases was selected from within those hospitals. Results: In 3.6% of cases the coders disagreed on whether any maltreatment code could be assigned (definite or possible) versus no maltreatment being assigned (unintentional injury), giving a sensitivity of 0.982 and specificity of 0.948. Review of the cases where discrepancies existed revealed that all had some indication of risk documented in the records. Of cases originally assigned a definite or possible maltreatment code, 15.5% were recoded to a more or less definite stratum. In terms of the number and type of maltreatment codes assigned, the auditor assigned a greater number of maltreatment types based on the medical documentation than the original coder (22% of the auditor-coded cases had more than one maltreatment type assigned, compared with only 6% of the originally coded data). The maltreatment types most 'under-coded' by the original coder were psychological abuse and neglect. Cases coded with a sexual abuse code showed the highest level of reliability. Conclusion: Given the increasing international attention to improving the uniformity of reporting of child maltreatment-related injuries, and the emphasis on better utilisation of routinely collected health data, this study provides an estimate of the reliability of maltreatment-specific ICD-10-AM codes assigned in an inpatient setting.
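The abstract reports agreement between the original coder and an auditor as a sensitivity and a specificity but does not show the underlying 2x2 calculation. The sketch below illustrates that calculation with the auditor treated as the reference standard; the cell counts are invented and chosen only so the two ratios reproduce the reported 0.982 and 0.948, since the abstract does not give the actual counts.

```python
# Hypothetical agreement table: did each coder assign any (definite or
# possible) maltreatment code, versus unintentional injury only?
tp = 550   # both assigned a maltreatment code
fn = 10    # auditor assigned maltreatment, original coder did not
tn = 310   # both coded the case as unintentional injury only
fp = 17    # original coder assigned maltreatment, auditor did not

sensitivity = tp / (tp + fn)   # 550 / 560 ~ 0.982
specificity = tn / (tn + fp)   # 310 / 327 ~ 0.948
print(f"sensitivity = {sensitivity:.3f}, specificity = {specificity:.3f}")
```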

Relevance:

30.00%

Publisher:

Abstract:

Scientists need to transfer semantically similar queries across multiple heterogeneous linked datasets. These queries may require data from different locations and the results are not simple to combine due to differences between datasets. A query model was developed to make it simple to distribute queries across different datasets using RDF as the result format. The query model, based on the concept of publicly recognised namespaces for parts of each scientific dataset, was implemented with a configuration that includes a large number of current biological and chemical datasets. The configuration is flexible, providing the ability to transparently use both private and public datasets in any query. A prototype implementation of the model was used to resolve queries for the Bio2RDF website, including both Bio2RDF datasets and other datasets that do not follow the Bio2RDF URI conventions.
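The paper's namespace-based query model is not reproduced here. As a hedged illustration of the lower-level operation such a model distributes, the snippet below sends one plain SPARQL query to a Bio2RDF-style endpoint and iterates over the returned bindings; the endpoint URL and the use of rdfs:label are assumptions made for this example, not details taken from the paper.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# One SPARQL query against a single (assumed) Bio2RDF-style endpoint.
endpoint = SPARQLWrapper("http://bio2rdf.org/sparql")
endpoint.setQuery("""
    SELECT ?subject ?label
    WHERE { ?subject <http://www.w3.org/2000/01/rdf-schema#label> ?label . }
    LIMIT 10
""")
endpoint.setReturnFormat(JSON)
results = endpoint.query().convert()

for binding in results["results"]["bindings"]:
    print(binding["subject"]["value"], "-", binding["label"]["value"])
```

By contrast, the query model described above uses publicly recognised namespaces to decide which public or private datasets can answer each part of a query, with RDF as the common result format.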

Relevance:

30.00%

Publisher:

Abstract:

Objective: To assess the accuracy of data linkage across the spectrum of emergency care in the absence of a unique patient identifier, and to use the linked data to examine service delivery outcomes in an emergency department setting. Design: Automated data linkage and manual data linkage were compared to determine their relative accuracy. Data were extracted from three separate health information systems: ambulance, ED and hospital inpatients, then linked to provide information about the emergency journey of each patient. The linking was done manually through physical review of records and automatically using a data linking tool (Health Data Integration) developed by the CSIRO. Match rate and quality of the linking were compared. Setting: 10,835 patient presentations to a large, regional teaching hospital ED over a two month period (August-September 2007). Results: Comparison of the manual and automated linkage outcomes for each pair of linked datasets demonstrated a sensitivity of between 95% and 99%; a specificity of between 75% and 99%; and a positive predictive value of between 88% and 95%. Conclusions: Our results indicate that automated linking provides a sound basis for health service analysis, even in the absence of a unique patient identifier. The use of an automated linking tool yields accurate data suitable for planning and service delivery purposes and enables the data to be linked regularly to examine service delivery outcomes.
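The abstract reports sensitivity, specificity and positive predictive value for the automated linkage relative to manual review, without showing the comparison itself. A minimal sketch of that comparison is below, using invented record-pair identifiers; specificity additionally requires counting the non-links on which both methods agree, which is omitted here.

```python
# Automated links compared against manually established links, with the
# manual review treated as the reference standard. Identifiers are invented.
manual_links = {("amb_001", "ed_101"), ("amb_002", "ed_102"), ("amb_003", "ed_103")}
auto_links   = {("amb_001", "ed_101"), ("amb_002", "ed_102"), ("amb_004", "ed_104")}

true_pos  = len(manual_links & auto_links)   # pairs found by both methods
false_neg = len(manual_links - auto_links)   # manual pairs missed by the tool
false_pos = len(auto_links - manual_links)   # tool pairs not confirmed manually

sensitivity = true_pos / (true_pos + false_neg)
ppv         = true_pos / (true_pos + false_pos)
print(f"sensitivity = {sensitivity:.2f}, PPV = {ppv:.2f}")
```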

Relevance:

30.00%

Publisher:

Abstract:

There has been an increasing interest by governments worldwide in the potential benefits of open access to public sector information (PSI). However, an important question remains: can a government incur tortious liability for incorrect information released online under an open content licence? This paper argues that the release of PSI online for free under an open content licence, specifically a Creative Commons licence, is within the bounds of an acceptable level of risk to government, especially where users are informed of the limitations of the data and appropriate information management policies and principles are in place to ensure accountability for data quality and accuracy.

Relevance:

30.00%

Publisher:

Abstract:

Objectives: To quantify the concordance of hospital child maltreatment data with child protection service (CPS) records and identify factors associated with linkage. Methods: Multivariable logistic regression analysis was conducted following retrospective medical record review and database linkage of 884 child records from 20 hospitals and the CPS in Queensland, Australia. Results: Nearly all children with hospital-assigned maltreatment codes (93.1%) had a CPS record. Of these, 85.1% had a recent notification, and 29% of the linked maltreatment group (n=113) were not known to CPS prior to the hospital presentation. Almost one-third of children with unintentional injury hospital codes were known to CPS. Just over 24% of the linked unintentional injury group (n=34) were not known to CPS prior to the hospital presentation but became known during or after discharge from hospital. These estimates are higher than the 2006/07 annual rate of 2.39% of children being notified to CPS. Rural children were more likely to link to CPS, and children were over three times more likely to link if the index injury documentation included additional diagnoses or factors affecting their health. Conclusions: The system for referring maltreatment cases to CPS is generally efficient, although up to 1 in 15 children had codes for maltreatment but could not be linked to CPS data. The high proportion of children with unintentional injury codes who linked to CPS suggests that clinicians and hospital-based child protection staff should be supported by further education and training to ensure children at risk are being detected by the child protection system.
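The abstract describes a multivariable logistic regression relating hospital-record factors to the odds of linking to a CPS record. The sketch below reproduces only the structure of such an analysis on simulated data: the two predictors follow factors named in the abstract, but the coefficients, sample values and resulting odds ratios are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated dataset with the same shape as the analysis described above
# (884 child records; outcome = linked to a CPS record or not).
rng = np.random.default_rng(2)
n = 884
df = pd.DataFrame({
    "rural": rng.integers(0, 2, n),               # rural vs non-rural child
    "additional_dx": rng.integers(0, 2, n),       # extra diagnoses / health factors documented
})
logit_p = -1.0 + 0.6 * df["rural"] + 1.1 * df["additional_dx"]
df["linked_to_cps"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

fit = smf.logit("linked_to_cps ~ rural + additional_dx", data=df).fit(disp=False)
print(np.exp(fit.params))                         # odds ratios for linkage
```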

Relevance:

30.00%

Publisher:

Abstract:

High levels of sitting have been linked with poor health outcomes. Previously, a pragmatic MTI accelerometer cut-point (100 counts/min) has been used to estimate sitting. Data on the accuracy of this cut-point are unavailable. PURPOSE: To ascertain whether the 100 counts/min cut-point accurately isolates sitting from standing activities. METHODS: Participants fitted with an MTI accelerometer were observed performing a range of sitting, standing, light and moderate activities. 1-min epoch MTI data were matched to observed activities, then re-categorized as either sitting or not using the 100 counts/min cut-point. Self-reported demographics and current physical activity were collected. Generalized estimating equation (GEE) analyses for repeated measures with a binary logistic model, adjusted for age, gender and BMI, were conducted to ascertain the odds of the MTI data being misclassified. RESULTS: Data were from 26 healthy subjects (8 men; 50% aged <25 years; mean (SD) BMI 22.7 (3.8) kg/m²). The mode of the MTI data for both sitting and standing was 0 counts/min, with 46% of sitting activities and 21% of standing activities recording 0 counts/min. The GEE was unable to accurately isolate sitting from standing activities using the 100 counts/min cut-point, since all sitting activities were incorrectly predicted as standing (p=0.05). To further explore the sensitivity of MTI data to delineate sitting from standing, the upper 95% confidence limit of the mean for the sitting activities (46 counts/min) was used to re-categorise the data; this resulted in the GEE correctly classifying 49% of sitting and 69% of standing activities. Using the 100 counts/min cut-point, the data were re-categorised into a combined 'sit/stand' category and tested against other light activities: 88% of sit/stand and 87% of light activities were accurately predicted. Using Freedson's moderate cut-point of 1952 counts/min, the GEE accurately predicted 97% of light vs. 90% of moderate activities. CONCLUSION: The distributions of MTI-recorded sitting and standing data overlap considerably; as such, the 100 counts/min cut-point did not accurately isolate sitting from static standing activities. The 100 counts/min cut-point more accurately predicted sit/stand vs. other movement-orientated activities.
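The central step in the abstract is re-categorising 1-min accelerometer epochs with a counts-per-minute cut-point and comparing the result against direct observation. The snippet below sketches that step with invented counts and labels (the study itself used GEE models rather than this raw tabulation); with low-count standing epochs present, the cut-point misclassifies them as sitting, which is the overlap problem the abstract describes.

```python
import numpy as np

# Invented 1-min epochs: directly observed activity and accelerometer counts.
observed = np.array(["sit", "sit", "sit", "stand", "stand", "light", "light"])
counts   = np.array([0, 30, 80, 0, 60, 180, 420])   # counts per minute

cut_point = 100                                      # cut-point under test
predicted_sitting = counts < cut_point               # epochs classified as sitting

is_sitting = observed == "sit"
sens = (predicted_sitting & is_sitting).sum() / is_sitting.sum()
spec = (~predicted_sitting & ~is_sitting).sum() / (~is_sitting).sum()
print(f"sitting sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```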

Relevance:

30.00%

Publisher:

Abstract:

Open-source software systems have become a viable alternative to proprietary systems. We collected data on the usage of an open-source workflow management system developed by a university research group, and examined this data with a focus on how three different user cohorts – students, academics and industry professionals – develop behavioral intentions to use the system. Building upon a framework of motivational components, we examined the group differences in extrinsic versus intrinsic motivations on continued usage intentions. Our study provides a detailed understanding of the use of open-source workflow management systems in different user communities. Moreover, it discusses implications for the provision of workflow management systems, the user-specific management of open-source systems and the development of services in the wider user community.

Relevance:

30.00%

Publisher:

Abstract:

A better understanding of Open Source Innovation in Physical Product (OSIP) might allow project managers to mitigate risks associated with this innovation model and process, while developing the right strategies to maximise OSIP outputs. In the software industry, firms have been highly successful using Open Source Innovation (OSI) strategies. However, OSI in the physical world has not been studied, leading to the research question: What advantages and disadvantages do organisations incur from using OSI in physical products? An exploratory research methodology supported by thirteen semi-structured interviews helped us build a seven-theme framework to categorise the advantage and disadvantage elements linked with the use of OSIP. In addition, the factors impacting advantage and disadvantage elements for firms using OSIP were identified as: degree of openness in OSIP projects; time of release of OSIP into the public domain; use of Open Source Innovation in Software (OSIS) in conjunction with OSIP; project management elements (project oversight, scope and modularity); firms' Corporate Social Responsibility (CSR) values; and the value of the OSIP project to the community. This thesis makes a contribution to the body of innovation theory by identifying advantage and disadvantage elements of OSIP. Then, from a contingency perspective, it identifies factors which enhance or decrease advantages, or mitigate or increase disadvantages, of OSIP. In the end, the research clarifies the understanding of OSI by clearly setting OSIP apart from OSIS. The main practical contribution of this work is to provide managers with a framework to better understand OSIP, as well as a model that identifies contingency factors which increase advantages and decrease disadvantages. Overall, the research allows managers to make informed decisions about when they can use OSIP and how they can develop strategies to make OSIP a viable proposition. In addition, this work demonstrates that the advantages identified in OSIS cannot all be transferred to OSIP; thus OSIP decisions should not be based upon OSIS knowledge.