237 results for Analytic Reproducing Kernel


Relevance:

10.00%

Abstract:

Social media (SM) is increasingly being integrated with business information in decision making. Unique characteristics of social media (e.g. wide accessibility, permanence, global audience, recency, and ease of use) raise new issues of information quality (IQ) that are quite different from traditional considerations of IQ in information systems (IS) evaluation. This paper presents a preliminary conceptual model of information quality in social media (IQnSM), derived through directed content analysis and employing characteristics of analytic theory in the study protocol. Based on the notion of ‘fitness for use’, IQnSM is highly use- and user-centric and is defined as “the degree to which information is suitable for doing a specified task by a specific user, in a certain context”. IQnSM is operationalised as hierarchical, formed by three dimensions (18 measures): intrinsic quality, contextual quality and representational quality. A research plan for empirically validating the model is proposed.

Relevance:

10.00%

Abstract:

Finite Element modelling of bone fracture fixation systems allows computational investigation of the deformation response of the bone to load. Once validated, these models can be easily adapted to explore changes in design or configuration of a fixator. The deformation of the tissue within the fracture gap determines its healing and is often summarised as the stiffness of the construct. FE models capable of reproducing this behaviour would provide valuable insight into the healing potential of different fixation systems. Current model validation techniques lack depth in 6D load and deformation measurements. Other aspects of FE model creation, such as the definition of interfaces between components, have also not been explored. This project investigated the mechanical testing and FE modelling of a bone–plate construct for the determination of stiffness. In-depth 6D measurement and analysis of the generated forces, moments and movements showed large out-of-plane behaviours that had not previously been characterised. Stiffness calculated from the interfragmentary movement was found to be an unsuitable summary parameter as the error propagation is too large. Current FE modelling techniques were applied in compression and torsion, mimicking the experimental setup. Compressive stiffness was well replicated, though torsional stiffness was not. The out-of-plane behaviours prevalent in the experimental work were not replicated in the model. The interfaces between the components were investigated experimentally and through modification to the FE model. Incorporation of the interface modelling techniques into the full construct models had no effect in compression but did act to reduce torsional stiffness, bringing it closer to that of the experiment. The interface definitions had no effect on out-of-plane behaviours, which were still not replicated.
Neither current nor novel FE modelling techniques were able to replicate the out-of-plane behaviours evident in the experimental work. New techniques for modelling loads and boundary conditions need to be developed to mimic the effects of the entire experimental system.

Relevance:

10.00%

Abstract:

With the advent of alternative fuels, such as biodiesels and related blends, it is important to develop an understanding of their effects on inter-cycle variability, which, in turn, influences engine performance as well as its emissions. Using four methanol trans-esterified biomass fuels of differing carbon chain length and degree of unsaturation, this paper provides insight into the effect that alternative fuels have on inter-cycle variability. The experiments were conducted with a heavy-duty Cummins, turbo-charged, common-rail compression ignition engine. Combustion performance is reported in terms of the following key in-cylinder parameters: indicated mean effective pressure (IMEP), net heat release rate (NHRR), standard deviation of variability (StDev), coefficient of variation (CoV), peak pressure, peak pressure timing and maximum rate of pressure rise. A link is also established between the cyclic variability and oxygen ratio, which is a good indicator of stoichiometry. The results show that the fatty acid structures did not have a significant effect on injection timing, injection duration, injection pressure, StDev of IMEP, or the timing of peak motoring and combustion pressures. However, a significant effect was noted on the premixed and diffusion combustion proportions, combustion peak pressure and maximum rate of pressure rise. Additionally, the boost pressure, IMEP and combustion peak pressure were found to be directly correlated to the oxygen ratio. The emission of particles positively correlates with oxygen content in the fuel as well as in the air-fuel mixture, resulting in a higher total number of particles per unit of mass.
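The variability statistics named above (StDev and CoV) are straightforward to compute; the sketch below shows the calculation for a series of per-cycle IMEP values. The IMEP numbers are invented for illustration and are not taken from the experiments.

```python
# Inter-cycle variability summary statistics for a series of per-cycle
# IMEP values: standard deviation (StDev) and coefficient of variation
# (CoV = StDev / mean, expressed as a percentage).
import statistics

def cov_percent(values):
    """Coefficient of variation as a percentage: 100 * StDev / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

imep = [7.95, 8.02, 7.98, 8.05, 8.00]  # hypothetical per-cycle IMEP (bar)
print(round(statistics.stdev(imep), 4))  # StDev of IMEP
print(round(cov_percent(imep), 3))       # CoV of IMEP, in percent
```

A low CoV of IMEP (typically a few percent) is the usual indicator of stable combustion from cycle to cycle.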

Relevance:

10.00%

Abstract:

Background: Selection of candidates for clinical psychology programmes is arguably the most important decision made in determining the clinical psychology workforce. However, there are few models to inform the development of selection tools to support selection procedures. Using a factor-analytic approach, this study operationalised a model predicting applicants' capabilities. Method: Eighty-eight applicants for entry into a postgraduate clinical psychology programme were assessed on a series of tasks measuring eight capabilities: guided reflection, communication skills, ethical decision making, writing, conceptual reasoning, empathy, awareness of mind, and self-observation. Results: Factor analysis revealed three factors: “awareness”, accounting for 35.71% of variance; “reflection”, accounting for 20.56%; and “reasoning”, accounting for 18.24%. Fourth-year grade point average (GPA) did not correlate with performance on any of the selection capabilities, other than a weak correlation with performance on the ethics capability. Conclusions: Eight selection capabilities are identified for the selection of candidates independent of GPA. While the model is tentative, it is hoped that the findings will stimulate the development and validation of assessment procedures with good predictive validity, which will benefit the training of clinical psychologists and, ultimately, effective service delivery.

Relevance:

10.00%

Abstract:

In late 2007, newly elected Prime Minister Kevin Rudd placed education reform on centre stage as a key policy in the Labor Party's agenda for social reform in Australia. A major policy strategy within this 'Education Revolution' was the development of a national curriculum, the Australian Curriculum. Within this political context, this study is an investigation into how social justice and equity have been used in political speeches to justify the need for, and the nature of, Australia's first official national curriculum. The aim is to provide understandings of what is said or not said; who is included or excluded, represented or misrepresented; for what purpose; and for whose benefit. The study investigates political speeches made by Education Ministers between 2008 and 2010; that is, from the inception of the Australian Curriculum to the release of the Phase 1 F–10 draft curriculum documents in English, mathematics, science and history. Curriculum development is defined here as an ongoing process of complex conversations. To contextualise the process of curriculum development within Australia, the thesis commences with an initial review of curriculum development in this nation over the past three decades. It then frames this review within contemporary curriculum theory; in particular it calls upon the work of William Pinar and the key notions of currere and reconceptualised curriculum. This contextualisation work is then used as a foundation to examine how social justice and equity have been represented in political speeches delivered by the respective Education Ministers Julia Gillard and Peter Garrett at key junctures of Australian Curriculum document releases. A critical thematic policy analysis is the approach used to examine selected official speech transcripts released by the ministerial media centre through the DEEWR website.
This approach provides a way to enable insights and understandings of representations of social justice and equity issues in the policy agenda. Broader social implications are also discussed. The project develops an analytic framework that enables an investigation into the framing of social justice and equity issues, such as inclusion, equality, quality education, sharing of resources and access to learning opportunities, in political speeches aligned with the development of the Australian Curriculum. Through this analysis, the study adopts a focus on constructions of educationally disadvantaged students and how the solutions of 'fixing' teachers and providing the 'right' curriculum are presented as resolutions to the perceived problem. In this way, it aims to work towards offering insights into political justifications for a national curriculum in Australia from a social justice perspective.

Relevance:

10.00%

Abstract:

This thesis concerns the mathematical model of moving fluid interfaces in a Hele-Shaw cell: an experimental device in which fluid flow is studied by sandwiching the fluid between two closely separated plates. Analytic and numerical methods are developed to gain new insights into interfacial stability and bubble evolution, and the influence of different boundary effects is examined. In particular, the properties of the velocity-dependent kinetic undercooling boundary condition are analysed, with regard to the selection of only discrete possible shapes of travelling fingers of fluid, the formation of corners on the interface, and the interaction of kinetic undercooling with the better known effect of surface tension. Explicit solutions to the problem of an expanding or contracting ring of fluid are also developed.
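As a point of reference, one common formulation of the one-phase Hele-Shaw problem with the two boundary effects mentioned above is sketched below. Sign conventions for surface tension and kinetic undercooling vary across the literature, so this is indicative rather than the thesis's exact statement:

```latex
\nabla^2 p = 0 \quad \text{in the fluid region } \Omega(t), \qquad
v_n = -\frac{\partial p}{\partial n} \quad \text{on } \partial\Omega(t), \qquad
p = -\sigma\kappa - c\,v_n \quad \text{on } \partial\Omega(t),
```

where $p$ is the pressure, $v_n$ the normal velocity of the interface, $\kappa$ its curvature, $\sigma$ the surface tension and $c$ the kinetic undercooling coefficient. Setting $c = 0$ recovers the classical surface-tension problem, while $\sigma = 0$ isolates the kinetic undercooling regime whose finger selection and corner formation the thesis analyses.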

Relevance:

10.00%

Abstract:

In vivo, small molecules act as necessary intermediates in numerous critical metabolic pathways and biological processes associated with many essential biological functions and events. There is growing evidence that MS-based metabolomics is emerging as a powerful tool to facilitate the discovery of functional small molecules that can better our understanding of development, infection, nutrition, disease, toxicity, drug therapeutics, gene modifications and host-pathogen interactions from a metabolic perspective. However, further progress must still be made in MS-based metabolomics because of shortcomings in the current technologies and knowledge. This technique-driven review aims to explore the discovery of in vivo functional small molecules facilitated by MS-based metabolomics and to highlight the analytic capabilities and promising applications of this discovery strategy. Moreover, the biological significance of discovering in vivo functional small molecules in different biological contexts is also interrogated from a metabolic perspective.

Relevance:

10.00%

Abstract:

Non-periodic structural variation has been found in the high-Tc cuprates, YBa2Cu3O7-x and Hg0.67Pb0.33Ba2Ca2Cu3O8+δ, by image analysis of high resolution transmission electron microscope (HRTEM) images. We use two methods for analysis of the HRTEM images. The first method is a means for measuring the bending of lattice fringes at twin planes. The second method is a low-pass filter technique which enhances information contained by diffuse-scattered electrons and reveals what appears to be an interference effect between domains of differing lattice parameter in the top and bottom of the thin foil. We believe that these methods of image analysis could be usefully applied to the many thousands of HRTEM images that have been collected by other workers in the high temperature superconductor field. This work provides direct structural evidence for phase separation in high-Tc cuprates, and gives support to recent stripes models that have been proposed to explain various angle-resolved photoelectron spectroscopy and nuclear magnetic resonance data. We believe that the structural variation is a response to an opening of an electronic solubility gap where holes are not uniformly distributed in the material but are confined to metallic stripes. Optimum doping may occur as a consequence of the diffuse boundaries between stripes which arise from spinodal decomposition. Theoretical ideas about the high-Tc cuprates which treat the cuprates as homogeneous may need to be modified in order to take account of this type of structural variation.
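The low-pass filter step described above can be sketched in a few lines. The version below is a hypothetical reimplementation, not the authors' code: it zeroes Fourier components beyond a cutoff radius and inverts the transform, using a synthetic array in place of an HRTEM image.

```python
# Low-pass filtering of an image in the Fourier domain: keep only low
# spatial frequencies so that slowly varying (diffuse-scattered)
# information is retained and fine lattice detail is suppressed.
import numpy as np

def low_pass(image, cutoff):
    """Zero all Fourier components farther than `cutoff` bins from DC."""
    f = np.fft.fftshift(np.fft.fft2(image))   # DC component moved to centre
    rows, cols = image.shape
    y, x = np.ogrid[:rows, :cols]
    dist = np.hypot(y - rows // 2, x - cols // 2)
    f[dist > cutoff] = 0.0                    # discard high frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

# Synthetic stand-in for an HRTEM image: two full cycles along one axis.
img = np.add.outer(np.sin(2 * np.pi * 2 * np.arange(64) / 64), np.zeros(64))
smooth = low_pass(img, cutoff=8)
print(smooth.shape)
```

Because the synthetic input contains only a low-frequency component, the filtered output is essentially unchanged; a high-frequency lattice pattern would be removed instead.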

Relevance:

10.00%

Abstract:

Railway bridges deteriorate over time due to different critical factors including flood, wind, earthquake, collision, and environmental factors such as corrosion, wear and termite attack. In current practice, the contributions of these critical factors towards the deterioration of railway bridges, which indicate their criticality, are not appropriately taken into account. In this paper, a new method for quantifying the criticality of these factors is introduced, drawing on available knowledge as well as risk analyses conducted in different Australian standards developed for bridge design. The analytic hierarchy process (AHP) is utilized for prioritising the factors. The method is used in the synthetic rating of railway bridges developed by the authors of this paper. Enhancing the reliability of predicting the vulnerability of railway bridges to the critical factors is the significant achievement of this research.
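The core AHP step named above can be illustrated with a small sketch: derive priority weights for deterioration factors from a pairwise comparison matrix. The matrix entries below are invented for illustration; they are not the paper's actual expert judgements.

```python
# Analytic hierarchy process (AHP) priority weights via the row
# geometric-mean approximation to the principal eigenvector.
import math

factors = ["flood", "wind", "earthquake", "collision", "corrosion"]
# pairwise[i][j]: how much more critical factor i is judged than factor j
# (reciprocal matrix: pairwise[j][i] == 1 / pairwise[i][j])
pairwise = [
    [1,   3,   4,   5,   2],
    [1/3, 1,   2,   3,   1/2],
    [1/4, 1/2, 1,   2,   1/3],
    [1/5, 1/3, 1/2, 1,   1/4],
    [1/2, 2,   3,   4,   1],
]

# Row geometric means, normalised to sum to 1, approximate the weights.
gm = [math.prod(row) ** (1 / len(row)) for row in pairwise]
total = sum(gm)
weights = {f: g / total for f, g in zip(factors, gm)}
print(max(weights, key=weights.get))
```

In a full AHP study the consistency ratio of the comparison matrix would also be checked before the weights are accepted.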

Relevance:

10.00%

Abstract:

The continuous growth of XML data poses a great concern in the area of XML data management. The need for processing large amounts of XML data brings complications to many applications, such as information retrieval, data integration and many others. One way of simplifying this problem is to break the massive amount of data into smaller groups by application of clustering techniques. However, XML clustering is an intricate task that may involve the processing of both the structure and the content of XML data in order to identify similar XML data. This research presents four clustering methods, two utilizing the structure of XML documents and the other two utilizing both the structure and the content. The two structural clustering methods have different data models: one is based on a path model and the other on a tree model. These methods employ rigid similarity measures which aim to identify corresponding elements between documents with different or similar underlying structure. The two clustering methods that utilize both the structural and content information vary in terms of how the structure and content similarity are combined. One clustering method calculates the document similarity by using a linear weighting combination strategy of structure and content similarities. The content similarity in this clustering method is based on a semantic kernel. The other method calculates the distance between documents by a non-linear combination of the structure and content of XML documents using a semantic kernel. Empirical analysis shows that the structure-only clustering method based on the tree model is more scalable than the structure-only clustering method based on the path model, as the tree similarity measure does not need to visit the parents of an element many times. Experimental results also show that the clustering methods perform better with the inclusion of the content information on most test document collections.
To further the research, the structural clustering method based on the tree model is extended and employed in XML transformation. The results from the experiments show that the proposed transformation process is faster than the traditional transformation system that translates and converts the source XML documents sequentially. Also, the schema matching process of XML transformation produces a better matching result in a shorter time.
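The linear weighting combination strategy described above can be sketched minimally: overall document similarity as a weighted sum of a structure score and a content score. The weight alpha and the example scores are illustrative; the thesis's actual measures (tree similarity, semantic kernel) are far richer.

```python
# Linear weighting combination of structure and content similarity,
# with both component similarities assumed to lie in [0, 1].
def combined_similarity(structure_sim, content_sim, alpha=0.5):
    """alpha * structure + (1 - alpha) * content; alpha in [0, 1]."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * structure_sim + (1 - alpha) * content_sim

# Example: structure agrees strongly, content only weakly; alpha favours
# structure, so the combined score leans towards the structural evidence.
print(combined_similarity(0.8, 0.4, alpha=0.7))
```

The non-linear variant in the thesis replaces this weighted sum with a combination computed inside a semantic kernel rather than after it.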

Relevance:

10.00%

Abstract:

This paper presents a new framework for distributed intrusion detection based on taint marking. Our system tracks information flows between applications on multiple hosts gathered in groups (i.e., sets of hosts sharing the same distributed information flow policy) by attaching taint labels to system objects such as files, sockets, Inter-Process Communication (IPC) abstractions, and memory mappings. Labels are carried over the network by tainting network packets. A distributed information flow policy is defined for each group at the host level by labeling information and defining how users and applications can legally access, alter or transfer information towards other trusted or untrusted hosts. As opposed to existing approaches, where information is most often represented by two security levels (low/high, public/private, etc.), our model identifies each piece of information within a distributed system and defines its legal interactions in a fine-grained manner. Hosts store and exchange security labels in a peer-to-peer fashion, and there is no central monitor. Our IDS is implemented in the Linux kernel as a Linux Security Module (LSM) and runs standard software on commodity hardware with no required modification. The only trusted code is our modified operating system kernel. Finally, we present a scenario of intrusion in a web service running on multiple hosts, and show how our distributed IDS is able to report security violations at each host.
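The taint-label mechanism described above can be illustrated with a toy sketch: each system object carries a set of labels, an information flow propagates labels by set union, and a violation is a forbidden label reaching a sink. The object names and policy check below are invented for illustration; the real system works inside the OS kernel, not in application code.

```python
# Toy taint-label propagation: labels are sets attached to objects,
# flows union the source's labels into the sink's label set.
labels = {
    "file:/etc/secret": {"confidential"},  # labelled at policy definition
    "proc:app": set(),
    "socket:web": set(),
}

def flow(source, sink):
    """Propagate taint labels along an information flow source -> sink."""
    labels[sink] |= labels[source]

def violates(obj, forbidden):
    """Policy violation: a forbidden label has reached this object."""
    return bool(labels[obj] & forbidden)

flow("file:/etc/secret", "proc:app")  # the app reads the secret file
flow("proc:app", "socket:web")        # the app writes to the network
print(violates("socket:web", {"confidential"}))
```

In the paper's setting the same idea operates on kernel objects and network packets, so labels follow information across host boundaries without a central monitor.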

Relevance:

10.00%

Abstract:

Early detection, clinical management and disease recurrence monitoring are critical areas in cancer treatment in which specific biomarker panels are likely to be very important. We have previously demonstrated that levels of alpha-2-Heremans-Schmid glycoprotein (AHSG), complement component C3 (C3), clusterin (CLI), haptoglobin (HP) and serum amyloid A (SAA) are significantly altered in serum from patients with squamous cell carcinoma of the lung. Here, we report the abundance levels for these proteins in serum samples from patients with advanced breast cancer, colorectal cancer (CRC) and lung cancer compared to healthy controls (age and gender matched) using commercially available enzyme-linked immunosorbent assay kits. Logistic regression (LR) models were fitted to the resulting data, and the classification ability of the proteins was evaluated using receiver-operating characteristic curves and leave-one-out cross-validation (LOOCV). The most accurate individual candidate biomarkers were C3 for breast cancer [area under the curve (AUC) = 0.89, LOOCV = 73%], CLI for CRC (AUC = 0.98, LOOCV = 90%), HP for small cell lung carcinoma (AUC = 0.97, LOOCV = 88%), C3 for lung adenocarcinoma (AUC = 0.94, LOOCV = 89%) and HP for squamous cell carcinoma of the lung (AUC = 0.94, LOOCV = 87%). The best dual combinations of biomarkers using LR analysis were found to be AHSG + C3 (AUC = 0.91, LOOCV = 83%) for breast cancer, CLI + HP (AUC = 0.98, LOOCV = 92%) for CRC, C3 + SAA (AUC = 0.97, LOOCV = 91%) for small cell lung carcinoma and HP + SAA for both adenocarcinoma (AUC = 0.98, LOOCV = 96%) and squamous cell carcinoma of the lung (AUC = 0.98, LOOCV = 84%). The high AUC values reported here indicate that these candidate biomarkers have the potential to discriminate accurately between control and cancer groups, both individually and in combination with other proteins.
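The AUC figures quoted above summarise how well a single biomarker separates cancer from control samples. A minimal sketch of that computation is shown below, using the rank-based (Mann-Whitney) definition of AUC: the probability that a randomly chosen cancer sample scores above a randomly chosen control, with ties counting half. The serum values are invented for illustration.

```python
# AUC for a single candidate biomarker from control vs. cancer abundances,
# computed directly from its pairwise-comparison (Mann-Whitney) definition.
def auc(controls, cases):
    """P(case score > control score), ties counted as one half."""
    pairs = [(c, x) for c in controls for x in cases]
    wins = sum(1.0 if x > c else 0.5 if x == c else 0.0 for c, x in pairs)
    return wins / len(pairs)

control_c3 = [1.1, 0.9, 1.0, 1.2, 0.8]  # hypothetical C3 levels, controls
cancer_c3 = [1.4, 1.6, 1.1, 1.5, 1.3]   # hypothetical C3 levels, cancer
print(auc(control_c3, cancer_c3))
```

An AUC of 0.5 means no discrimination and 1.0 means perfect separation; the study additionally cross-validated logistic regression models (LOOCV) rather than relying on AUC alone.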

Relevance:

10.00%

Abstract:

Poets have a licence to couch great truths in succinct, emotionally powerful, and perhaps slightly mysterious and ambiguous ways. On the other hand, it is the task of academics to explore such truths intellectually, in depth and detail, identifying the key constructs and their underlying relations and structures, hopefully without impairing the essential truth. So it could be said that in January 2013, around 60 academics gathered at the University of Texas, Austin under the benign and encouraging eye of their own muse, Professor Rod Hart, to play their role in exploring and explaining the underlying truth of Yan Zhen’s words. The goals of this chapter are quite broad. Rod was explicit and yet also somewhat Delphic in his expectations and aspirations for the chapter. Even though DICTION was a key analytic tool in most chapters, this chapter was not to be about DICTION per se, or simply a critique of the individual chapters forming this section of the book. Rather DICTION and these studies, as well as some others that got our attention, were to be more a launching pad for observations on what they revealed about the current state of understanding and research into the language of institutions, as well as some ‘adventurous’, but not too outlandish reflections on future challenges and opportunities.

Relevance:

10.00%

Abstract:

The occurrence of extreme movements in the spot price of electricity represents a significant source of risk to retailers. A range of approaches have been considered with respect to modelling electricity prices; these models, however, have relied on time-series approaches, which typically use restrictive decay schemes placing greater weight on more recent observations. This study develops an alternative, semi-parametric method for forecasting, which uses state-dependent weights derived from a kernel function. The forecasts that are obtained using this method are accurate and therefore potentially useful to electricity retailers in terms of risk management.
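The contrast drawn above, state-dependent kernel weights versus recency-based decay, can be sketched with a Nadaraya-Watson style estimator: past observations are weighted by how similar their state is to the current state, not by how recent they are. The data, the choice of "previous price" as the state variable, and the bandwidth are all illustrative, not the study's specification.

```python
# Kernel-weighted, state-dependent forecast: a Nadaraya-Watson estimate
# of the next price, with Gaussian-kernel weights on state similarity.
import math

def kernel_forecast(history, current_state, bandwidth=1.0):
    """history: list of (state, next_price) pairs from past observations."""
    weights = [math.exp(-((s - current_state) ** 2) / (2 * bandwidth ** 2))
               for s, _ in history]
    return sum(w * p for w, (_, p) in zip(weights, history)) / sum(weights)

# A past price spike (state 80.0) gets near-zero weight when the current
# state (30.0) is far from it, regardless of how recent the spike was.
history = [(30.0, 32.0), (31.0, 33.0), (80.0, 300.0), (29.0, 31.0)]
print(round(kernel_forecast(history, current_state=30.0, bandwidth=2.0), 2))
```

A recency-weighted scheme would instead let the spike dominate if it were the most recent observation, which is exactly the behaviour the semi-parametric approach avoids.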

Relevance:

10.00%

Abstract:

Motivation: Gene silencing, also called RNA interference, requires reliable assessment of silencer impacts. A critical task is to find matches between silencer oligomers and sites in the genome, in accordance with one-to-many matching rules (G-U matching, with provision for mismatches). Fast search algorithms are required to support silencer impact assessments in procedures for designing effective silencer sequences. Results: The article presents a matching algorithm and data structures specialized for matching searches, including a kernel procedure that addresses a Boolean version of the database task called the skyline search. Besides exact matches, the algorithm is extended to allow for the location-specific mismatches applicable in plants. Computational tests show that the algorithm is significantly faster than suffix-tree alternatives.
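The one-to-many matching rule named above can be sketched as a simple position-wise check: a silencer position matches a target position either exactly or via the G-U wobble pair, with a bounded number of outright mismatches tolerated. The wobble convention and the flat mismatch budget below are illustrative simplifications of the paper's location-specific rules, and the search itself would run over the specialized data structures the paper describes rather than a naive scan.

```python
# Silencer-to-site matching with G-U wobble pairs counted as matches
# and a bounded budget of outright mismatches.
def matches(silencer, target, max_mismatches=1):
    """True if silencer aligns to target under exact + G-U matching."""
    if len(silencer) != len(target):
        return False
    wobble = {("G", "U"), ("U", "G")}  # G-U pairs also count as matches
    mismatches = sum(
        1 for s, t in zip(silencer, target)
        if s != t and (s, t) not in wobble
    )
    return mismatches <= max_mismatches

print(matches("GAUC", "GGUC"))  # one mismatch, within the budget
print(matches("GAUC", "GGCC"))  # two mismatches, rejected
```

Under this rule "GUAC" matches "UUAC" even with a zero-mismatch budget, because the leading G-U pair is treated as a match rather than a mismatch.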