102 results for valeur informative


Relevance: 10.00%

Abstract:

Background A complete explanation of the mechanisms by which Pb2+ exerts toxic effects on the developing central nervous system remains elusive. Glutamate is critical to the developing brain, acting through various subtypes of ionotropic and metabotropic glutamate receptors (mGluRs). Ionotropic N-methyl-D-aspartate receptors have been considered a principal target in lead-induced neurotoxicity, and the relationship between mGluR3/mGluR7 and synaptic plasticity has been verified by many recent studies. The present study aimed to examine the role of mGluR3/mGluR7 in lead-induced neurotoxicity. Methods Twenty-four adult female rats were randomly assigned to control or 0.2% lead acetate treatment during gestation and lactation. Blood and hippocampal lead levels of pups were analyzed at weaning to evaluate the actual lead content at the end of the exposure. Impairments of short-term and long-term memory in pups were assessed with the Morris water maze, and hippocampal ultrastructural alterations were examined by electron microscopy. The impact of lead exposure on mGluR3 and mGluR7 mRNA expression in pup hippocampal tissue was investigated by quantitative real-time polymerase chain reaction, and its potential role in lead neurotoxicity is discussed. Results Lead levels in blood and hippocampi of lead-exposed rats were significantly higher than those in controls (P < 0.001). In the Morris water maze tests, controls had shorter goal latencies and swimming distances than lead-exposed rats (P = 0.001 and P < 0.001, repeated-measures analysis of variance). Neuronal ultrastructural alterations were observed on transmission electron microscopy, whereas real-time polymerase chain reaction showed that exposure to 0.2% lead acetate did not substantially change mGluR3 and mGluR7 mRNA expression compared with controls. Conclusion Exposure to lead before and after birth can damage the short-term and long-term memory of young rats and the hippocampal ultrastructure. However, the current study provides no evidence that expression of rat hippocampal mGluR3 and mGluR7 is altered by systemic lead administration during gestation and lactation. This finding is informative for the field of lead-induced developmental neurotoxicity, suggesting that it may not be worthwhile to include mGluR3 and mGluR7 in future studies.

Relevance: 10.00%

Abstract:

Campylobacter jejuni, followed by Campylobacter coli, contributes substantially to the economic and public health burden attributed to food-borne infections in Australia. Genotypic characterisation of isolates has provided new insights into the epidemiology and pathogenesis of C. jejuni and C. coli. However, currently available methods are not conducive to the large-scale epidemiological investigations necessary to elucidate the global epidemiology of these common food-borne pathogens. This research aims to develop high-resolution C. jejuni and C. coli genotyping schemes that are convenient for high-throughput applications. Real-time PCR and high-resolution melt (HRM) analysis are fundamental to the genotyping schemes developed in this study and enable rapid, cost-effective interrogation of a range of polymorphic sites within the Campylobacter genome. While the sources and routes of transmission of campylobacters are unclear, handling and consumption of poultry meat is frequently associated with human campylobacteriosis in Australia; therefore, chicken-derived C. jejuni and C. coli isolates were used to develop and verify the methods described in this study. The first aim of this study describes the application of MLST-SNP (multilocus sequence typing single nucleotide polymorphism) + binary typing to 87 chicken C. jejuni isolates using real-time PCR analysis. These typing schemes were developed previously by our research group using isolates from campylobacteriosis patients. The present study showed that SNP and binary typing, alone or in combination, are effective at detecting epidemiological linkage between chicken-derived Campylobacter isolates and enable data comparison with other MLST-based investigations. SNP + binary types obtained from chicken isolates in this study were compared with a set of human isolates previously typed by SNP + binary typing and MLST. Common genotypes between the two collections were identified, and ST-524 represented a clone that could be worth monitoring in the chicken meat industry. In contrast, ST-48, mainly associated with bovine hosts, was abundant among the human isolates but absent from the chicken isolates, indicating the role of non-poultry sources in human Campylobacter infections. This demonstrates the potential application of SNP + binary typing for epidemiological investigations and source tracing. While MLST SNPs and binary genes comprise the more stable backbone of the Campylobacter genome and are indicative of long-term epidemiological linkage of isolates, Aim 2 of this study describes the development of an HRM curve analysis method to interrogate the hypervariable Campylobacter flagellin-encoding gene (flaA). The flaA gene product appears to be an important pathogenicity determinant of campylobacters and is therefore a popular target for genotyping, especially in short-term epidemiological studies such as outbreak investigations. HRM curve analysis of flaA is a single-step, closed-tube method that provides portable data that can be easily shared and accessed. Critical to the development of flaA HRM was the use of flaA-specific primers that did not amplify the flaB gene. HRM curve analysis of flaA successfully discriminated the 47 sequence variants identified within the 87 C. jejuni and 15 C. coli isolates and correlated with the epidemiological background of the isolates.
In the combinatorial format, the resolving power of flaA was additive to that of SNP + binary typing and CRISPR (clustered regularly interspaced short palindromic repeats) HRM, and fits the PHRANA (progressive hierarchical resolving assays using nucleic acids) approach to genotyping. The use of statistical methods to analyse the HRM data enhanced the sophistication of the method. flaA HRM is therefore a rapid and cost-effective alternative to gel- or sequence-based flaA typing schemes. Aim 3 of this study describes the development of a novel bioinformatics-driven method, called 'SNP-nucleated Minim MLST' or 'Minim typing', to interrogate Campylobacter MLST gene fragments using HRM. The method involves HRM interrogation of MLST fragments that encompass highly informative 'nucleating SNPs' to ensure high resolution. Selection of fragments potentially suited to HRM analysis was conducted in silico using (i) 'Minimum SNPs' and (ii) the new 'HRMtype' software packages. Species-specific sets of six nucleating SNPs and six HRM fragments were identified for both C. jejuni and C. coli to ensure high typeability and resolution relevant to the MLST database. Minim typing was tested empirically by typing 15 C. jejuni and five C. coli isolates. The clonal complexes (CCs) assigned to each isolate by Minim typing and by SNP + binary typing were used to compare the two MLST interrogation schemes; the CCs linked with each C. jejuni isolate were consistent between methods. Thus, Minim typing is an efficient and cost-effective method to interrogate MLST genes. However, it is not expected to be independent of, or to match the resolution of, sequence-based MLST gene interrogation. Minim typing in combination with flaA HRM is envisaged as a highly resolving combinatorial typing scheme built on the HRM platform, amenable to automation and multiplexing. The genotyping techniques described in this thesis involve combinatorial interrogation of differentially evolving genetic markers on a unified real-time PCR and HRM platform. They provide high resolution and are simple, cost-effective and ideally suited to rapid, high-throughput genotyping of these common food-borne pathogens.
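The "resolving power" these combinatorial schemes aim for is conventionally quantified with Simpson's index of diversity. As a minimal illustration (the genotype labels below are invented for the example, not data from the thesis), a short Python sketch shows how combining two markers raises the index:

```python
# Illustrative calculation: Simpson's index of diversity, a standard measure of
# a typing scheme's discriminatory power, for one marker vs. a combined scheme.
from collections import Counter

def simpsons_index(types: list) -> float:
    """D = 1 - sum n_j(n_j - 1) / (N(N - 1)); higher = better discrimination."""
    n = len(types)
    counts = Counter(types)
    return 1.0 - sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

snp_types  = ["ST-524", "ST-524", "ST-48", "ST-50", "ST-50", "ST-50"]  # hypothetical
flaA_types = ["f1", "f2", "f1", "f3", "f4", "f3"]                      # hypothetical
combined   = list(zip(snp_types, flaA_types))   # combinatorial genotype per isolate
print(round(simpsons_index(snp_types), 3), round(simpsons_index(combined), 3))
# 0.733 vs 0.933: the second marker's resolution is additive to the first's
```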

Relevance: 10.00%

Abstract:

This paper is a report of a study to explore what constitutes nurse-patient interactions and to ascertain patients' perceptions of these interactions. BACKGROUND: Nurses maintain patient integrity through caring practices. When patients feel disempowered, or that their integrity is threatened, they are more likely to make a complaint. When nurses develop a meaningful relationship with patients, they recognize and address their concerns. It is increasingly identified in the literature that bureaucratic demands, including increased workloads and reduced staffing levels, create situations in which the development of a 'close' relationship is limited. METHOD: Data collection took two forms: twelve 4-hour observation periods of nurse-patient interactions in one cubicle (of four patients) in a medical and a surgical ward concurrently over a 4-week period; and questionnaires from inpatients of the two wards who were discharged during the 4-week data collection period in 2005. FINDINGS: Observation data showed that nurse-patient interactions were mostly friendly and informative, but opportunities to develop closeness were limited. Patients were mostly satisfied with interactions; the major source of dissatisfaction was the perception that nurses were not readily available to respond to specific requests. Comparison of the observation and survey data indicated that patients still felt 'cared for' even when practices did not culminate in a 'connected' relationship. CONCLUSION: The findings suggest that patients believe caring is demonstrated when nurses respond to specific requests. Patient satisfaction with the service is more likely to improve if nurses can readily adapt their work to accommodate patients' requests or, alternatively, communicate why these requests cannot be immediately addressed.

Relevance: 10.00%

Abstract:

Expert knowledge is valuable in many modelling endeavours, particularly where data are not extensive or sufficiently robust. In Bayesian statistics, expert opinion may be formulated as an informative prior, providing an honest reflection of the current state of knowledge before it is updated with new information. Technology is increasingly being exploited to support the process of eliciting such information. This paper reviews the benefits gained from utilizing technology in this way. These benefits can be structured within a six-step elicitation design framework proposed recently (Low Choy et al., 2009). We assume that the purpose of elicitation is to formulate a Bayesian statistical prior, either to provide a standalone expert-defined model or to be updated with new data in a Bayesian analysis. We also assume that the model has been pre-specified before the software is selected. In this case, technology has the most to offer in targeting what experts know (E2), eliciting and encoding expert opinions (E4), enhancing accuracy (E5), and providing an effective and efficient protocol (E6). Benefits include:
- providing an environment with familiar nuances (to make the expert comfortable) where experts can explore their knowledge from various perspectives (E2);
- automating tedious or repetitive tasks, thereby minimizing calculation errors, as well as encouraging interaction between elicitors and experts (E5);
- cognitive gains from educating users, enabling instant feedback (E2, E4-E5), and providing alternative methods of communicating assessments and feedback, since experts think and learn differently; and
- ensuring a repeatable and transparent protocol is used (E6).
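As a minimal sketch of the statistical idea behind such elicitation tools (an illustration, not the software the paper reviews), the following Python snippet encodes an expert's best guess about a proportion as a Beta prior, using an "equivalent sample size" to express the expert's confidence, and then performs the conjugate update with new data:

```python
# Sketch: encoding an expert's opinion about a proportion as a Beta prior, then
# updating it with binomial data. Interpreting a + b as an "effective sample
# size" is one common way to translate confidence into prior strength.

def elicit_beta(best_guess: float, n_equivalent: float) -> tuple[float, float]:
    """Convert a best guess for a proportion plus the number of hypothetical
    observations the expert's experience is 'worth' into Beta(a, b)."""
    a = best_guess * n_equivalent
    b = (1.0 - best_guess) * n_equivalent
    return a, b

def update_beta(a: float, b: float, successes: int, trials: int) -> tuple[float, float]:
    """Conjugate Bayesian update of a Beta prior with binomial data."""
    return a + successes, b + (trials - successes)

# Expert believes the proportion is about 0.3, with confidence worth ~20 observations.
a0, b0 = elicit_beta(0.30, 20)                       # prior: Beta(6, 14)
a1, b1 = update_beta(a0, b0, successes=9, trials=15)  # posterior: Beta(15, 20)
print(f"prior mean {a0/(a0+b0):.3f} -> posterior mean {a1/(a1+b1):.3f}")
```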

Relevance: 10.00%

Abstract:

Statistical modeling of traffic crashes has been of interest to researchers for decades. Over the most recent decade, many crash models have accounted for extra-variation in crash counts—variation over and above that accounted for by the Poisson density. This extra-variation, or dispersion, is theorized to capture unaccounted-for variation in crashes across sites. The majority of studies have assumed fixed dispersion parameters in over-dispersed crash models—tantamount to assuming that unaccounted-for variation is proportional to the expected crash count. Miaou and Lord [Miaou, S.P., Lord, D., 2003. Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and Bayes versus empirical Bayes methods. Transport. Res. Rec. 1840, 31–40] challenged the fixed dispersion parameter assumption and examined various dispersion parameter relationships when modeling urban signalized intersection accidents in Toronto. They suggested that further work was needed to determine whether the findings hold for rural and other intersection types, to corroborate the findings, and to explore alternative dispersion functions. This study builds upon the work of Miaou and Lord by exploring additional dispersion functions and using an independent data set, presenting an opportunity to corroborate their findings. Data from Georgia are used in this study. A Bayesian modeling approach with non-informative priors is adopted, using sampling-based estimation via Markov chain Monte Carlo (MCMC) and the Gibbs sampler. A total of eight model specifications were developed: four employed traffic flows as explanatory factors in the mean structure, while the remainder included geometric factors in addition to major and minor road traffic flows. The models were compared and contrasted using the significance of coefficients, standard deviance, chi-square goodness-of-fit and deviance information criterion (DIC) statistics. The findings indicate that modeling of the dispersion parameter, which essentially explains the extra-variance structure, depends greatly on how the mean structure is modeled. In the presence of a well-defined mean function, the extra-variance structure generally becomes insignificant, i.e. the variance structure is a simple function of the mean. It appears that extra-variation is a function of covariates when the mean structure (expected crash count) is poorly specified and suffers from omitted variables. In contrast, when sufficient explanatory variables are used to model the mean, extra-Poisson variation is not significantly related to these variables. If these results are generalizable, they suggest that model specification may be improved by testing extra-variation functions for significance. They also suggest that the known influences on expected crash counts are likely to differ from the factors that help explain unaccounted-for variation in crashes across sites.
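The modeling idea can be made concrete with a small sketch. The following Python code (an illustrative assumption, not the authors' implementation) sets up a negative binomial crash model in which both the mean and the dispersion parameter depend on covariates, with flat priors and a random-walk Metropolis sampler standing in for the study's Gibbs-sampler machinery:

```python
# Sketch of a covariate-dependent dispersion model: log(mu_i) = x_i'beta and
# log(alpha_i) = x_i'gamma, so Var(y_i) = mu_i + alpha_i * mu_i^2.
import numpy as np
from scipy.special import gammaln

def nb_loglik(y, X, beta, gamma):
    mu = np.exp(X @ beta)        # mean structure (expected crash count)
    alpha = np.exp(X @ gamma)    # dispersion as a function of the same covariates
    r = 1.0 / alpha              # negative binomial "size" parameter
    return np.sum(gammaln(y + r) - gammaln(r) - gammaln(y + 1)
                  + r * np.log(r / (r + mu)) + y * np.log(mu / (r + mu)))

def metropolis(y, X, n_iter=20000, step=0.02, seed=0):
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    theta = np.zeros(2 * p)      # stacked [beta, gamma]
    ll = nb_loglik(y, X, theta[:p], theta[p:])
    draws = np.empty((n_iter, 2 * p))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(2 * p)
        ll_prop = nb_loglik(y, X, prop[:p], prop[p:])
        if np.log(rng.uniform()) < ll_prop - ll:   # flat priors cancel in the ratio
            theta, ll = prop, ll_prop
        draws[i] = theta
    return draws

# Usage: draws = metropolis(y, X); inspect the posterior of draws[:, p:] to see
# whether the dispersion coefficients (beyond the intercept) differ from zero.
```

Under this parameterization, posterior draws of gamma concentrating near zero (apart from the intercept) would mirror the paper's finding that extra-variation loses its covariate dependence once the mean structure is well specified.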

Relevance: 10.00%

Abstract:

SUMMARY: 1. Factors affecting the informative function of financial statements in family firms. 2. Function, objectives and informational expectations in the external reporting of family firms. 3. The features of "familism" in financial statements. 4. Towards a new financial reporting model for family firms: critical reflections and directions for research.

Relevance: 10.00%

Abstract:

The DNA of three biological variants, G1, Ic and G2, which originated from the same greenhouse isolate of rice tungro bacilliform virus (RTBV) at the International Rice Research Institute (IRRI), was cloned and sequenced. Comparison of the sequences revealed small differences in genome sizes. The variants were between 95 and 99% identical at the nucleotide and amino acid levels. Alignment of the three genome sequences with those of three published RTBV sequences (Phi-1, Phi-2 and Phi-3) revealed numerous nucleotide substitutions and some insertions and deletions. The published RTBV sequences originated from the same greenhouse isolate at IRRI 20, 11 and 9 years ago. All open reading frames (ORFs) and known functional domains were conserved across the six variants. The cysteine-rich region of ORF3 showed the greatest variation. When the six DNA sequences from IRRI were compared with that of an isolate from Malaysia (Serdang), similar changes were observed in the cysteine-rich region in addition to other nucleotide substitutions and deletions across the genome. The aligned nucleotide sequences of the IRRI variants and Serdang were used to analyse phylogenetic relationships by the bootstrapped parsimony, distance and maximum-likelihood methods. The isolates clustered in three groups: Serdang alone; Ic and G1; and Phi-1, Phi-2, Phi-3 and G2. The distribution of phylogenetically informative residues in the IRRI sequences shared with the Serdang sequence and the differing tree topologies for segments of the genome suggested that recombination, as well as substitutions and insertions or deletions, has played a role in the evolution of RTBV variants. The significance and implications of these evolutionary forces are discussed in comparison with badnaviruses and caulimoviruses.
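The notion of a phylogenetically informative residue can be illustrated with a short sketch (the sequences below are invented; this is not code from the study): a site is parsimony-informative when at least two character states each occur in at least two sequences, since only such sites can favour one tree topology over another.

```python
# Illustrative helper: find parsimony-informative sites in an alignment.
from collections import Counter

def informative_sites(alignment: list[str]) -> list[int]:
    positions = []
    for i in range(len(alignment[0])):
        counts = Counter(seq[i] for seq in alignment if seq[i] in "ACGT")
        # Informative: at least two nucleotides each present in >= 2 sequences.
        if sum(1 for c in counts.values() if c >= 2) >= 2:
            positions.append(i)
    return positions

seqs = ["ACGTACGT",
        "ACGTACGA",
        "ATGTACGA",
        "ATGTACGT"]
print(informative_sites(seqs))   # [1, 7]: only these sites can group sequences
```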

Relevance: 10.00%

Abstract:

Recommender systems are widely used online to help users find products and items that may interest them, based on what is known about the user from their profile. Often, however, user profiles are short on information, making it difficult for a recommender system to make quality recommendations; this is known as the cold-start problem. Here we investigate using association rules as a source of information to expand a user profile and thus avoid this problem. Our experiments show that association rules can noticeably improve the performance of a recommender system in the cold-start situation. Furthermore, we show that this improvement can be achieved using non-redundant rule sets. This demonstrates that non-redundant rules cause no loss of information and are just as informative as rule sets that contain redundancy.
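A minimal sketch of the profile-expansion idea (the rule format and item names are illustrative assumptions, not the paper's implementation): consequents of association rules whose antecedents a sparse profile already satisfies are added to the profile before it is passed to the recommender.

```python
# Sketch: expand a sparse user profile with the consequents of association
# rules (antecedent, consequent, confidence) mined from the full database.

def expand_profile(profile: set[str],
                   rules: list[tuple[frozenset, frozenset, float]],
                   min_conf: float = 0.6) -> set[str]:
    expanded = set(profile)
    for antecedent, consequent, conf in rules:
        # Fire a rule only if the profile already contains its antecedent.
        if conf >= min_conf and antecedent <= expanded:
            expanded |= consequent
    return expanded

rules = [(frozenset({"laptop"}), frozenset({"laptop_bag"}), 0.8),
         (frozenset({"laptop", "laptop_bag"}), frozenset({"mouse"}), 0.7)]
print(expand_profile({"laptop"}, rules))
# {'laptop', 'laptop_bag', 'mouse'} -- the enriched profile then feeds the recommender
```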

Relevance: 10.00%

Abstract:

Rethinking Children and Research reflects Mary Kellett's vision as a campaigner and sociologist who has actively worked for and with children for many years. The book is not only visionary; it is informative, thought-provoking and pragmatic. From a contemporary standpoint, it presents a detailed synopsis of the shifts in thinking about research with children and appraises the theoretical movements that have driven a participatory research agenda. A strong theoretical approach combining the lenses of the sociology of childhood and rights discourse is introduced early in the book. From the outset, the reader receives, loud and clear, the key message of the book: that children in research should and can be included as competent members who lead research in the study of their everyday lives. The argument for a more mutual research approach is shaped throughout the book using research examples and practical suggestions on how this might be achieved. Overall, the reader is left feeling compelled to adopt such an approach.

Relevance: 10.00%

Abstract:

The State Library of Queensland is delighted to present Lumia: art/light/motion, the culmination of many years of collaboration by the Kuuki collective led by Priscilla Bracks and Gavin Sade. This extraordinary exhibition not only showcases the unique talent of these Queenslanders, it also opens up a world of future possibilities while re-presenting the past and present. These contemporary new media installations sit comfortably within the walls of the library, as they are the distinctive products of inquisitive and philosophical minds. In a sense, the exhibition highlights the longevity and purposefulness of a cultural learning institution through the non-traditional use of data, information, research and collection interpretation. The exhibition simultaneously articulates one of our key objectives: to progress the state's digital agenda. Two academic essays have been commissioned for this joint Kuuki and State Library of Queensland publication. The first is by artist and writer Paul Brown, who has specialised in art, science and technology since the late 1960s and in computational and generative art since the mid 1970s. Brown investigates the history of new media, which is celebrating its 60th anniversary, and clearly places Sade and Bracks at the forefront of this genre nationally. The second essay is by arts writer Linda Carroli, who has delved deeply into the thoughts and processes of the artists to bring to light the complex workings of their minds. The publication also features an interview Carroli conducted with the artists. This exhibition is playful, informative and contemplative. The audience is invited to play, and consequently to ponder the way we live and the environmental and social implications of our choices. The exhibition tempts us to travel deep into the Antarctic, plunge into the Great Barrier Reef, be swamped by an orchestra of crickets, enter the Charmed world, and travel back in time to a Victorian parlour where you can interact with a 'new-world' lyrebird and consider a brave new world where our only link to the animal world is with robotic representations. In essence this exhibition is about ideas and knowledge, and what better institution than the State Library of Queensland to partner such a project? The State Library is committed to preserving culture, exploring new media and creating new content as a lasting legacy of Queensland for all Queenslanders.

Relevance: 10.00%

Abstract:

The Future of Financial Regulation is an edited collection of papers presented at a major conference at the University of Glasgow in spring 2009. It draws together a variety of perspectives on the international financial crisis which began in August 2007 and turned into a more widespread economic crisis following the collapse of Lehman Brothers in autumn 2008. Spring 2009 was in many respects the nadir: valuations in financial markets had reached their low point, and crisis management rather than regulatory reform was the main focus of attention. The conference and book were deliberately framed as an attempt to refocus attention from the former to the latter. The first part of the book focuses on the context of the crisis, discussing the general characteristics of financial crises and the specific influences at work during this period. The second part focuses more specifically on the regulatory techniques and practices implicated in the crisis, noting in particular an over-reliance on the capacity of regulators and financial institutions to manage risk and on the capacity of markets to self-correct. The third part focuses on the role of governance and ethics in the crisis, and in particular the need for a common ethical framework to underpin governance practices and to provide greater clarity in the design of accountability mechanisms. The final part focuses on the trajectory of regulatory reform, noting the considerable potential for change arising from the state's role in the rescue and recuperation of the financial system, and stressing the need for a fundamental re-appraisal of business and regulatory models. This informative book will be of interest to financial regulators and theorists, commercial and financial law practitioners, and academics involved in the law and economics of regulation.

Relevance: 10.00%

Abstract:

Early detection surveillance programs aim to find invasions of exotic plant pests and diseases before they are too widespread to eradicate. However, the value of these programs can be difficult to justify when no positive detections are made. To demonstrate the value of the pest-absence information provided by these programs, we use a hierarchical Bayesian framework to model estimates of incursion extent with and without surveillance. A model for the latent invasion process provides the baseline against which surveillance data are assessed. Ecological knowledge and pest management criteria are introduced into the model through informative priors on the invasion parameters. Observation models assimilate information from spatio-temporal presence/absence data to accommodate imperfect detection and generate posterior estimates of pest extent. When applied to an early detection program operating in Queensland, Australia, the framework demonstrates that this typical surveillance regime provides a modest reduction in the estimated probability that a surveyed district is infested. More importantly, the model suggests that early detection surveillance programs can dramatically reduce the putative area of incursion and therefore offer a substantial benefit to incursion management. By mapping spatial estimates of the point probability of infestation, the model identifies where future surveillance resources can be most effectively deployed.
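The effect of accumulating absence data can be illustrated with the simplest one-district version of this calculation (the prior and detection values below are illustrative assumptions, not those of the Queensland program):

```python
# Sketch: how repeated negative surveys with imperfect detection shrink the
# posterior probability that a district is infested (Bayes' rule, one district).

def posterior_infested(prior: float, sensitivity: float, n_negative_surveys: int) -> float:
    """P(infested | all surveys negative), assuming independent surveys that each
    detect an infestation present in the district with probability `sensitivity`."""
    miss_all = (1.0 - sensitivity) ** n_negative_surveys
    return prior * miss_all / (prior * miss_all + (1.0 - prior))

for n in (0, 1, 3, 6):
    print(n, round(posterior_infested(prior=0.10, sensitivity=0.5, n_negative_surveys=n), 4))
# 0.10 -> ~0.053 after one negative survey -> ~0.0017 after six:
# absence information steadily sharpens the estimate of pest extent.
```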

Relevance: 10.00%

Abstract:

Robotics and computer vision each involve the application of computational algorithms to data. The research community has developed a very large body of algorithms, but for a newcomer to the field this can be quite daunting. For more than 10 years the author has maintained two open-source MATLAB® Toolboxes, one for robotics and one for vision. They provide implementations of many important algorithms and allow users to work with real problems, not just trivial examples. This new book makes the fundamental algorithms of robotics, vision and control accessible to all. It weaves together theory, algorithms and examples in a narrative that covers robotics and computer vision separately and together. Using the latest versions of the Toolboxes, the author shows how complex problems can be decomposed and solved using just a few simple lines of code. The topics covered are guided by real problems observed by the author over many years as a practitioner of both robotics and computer vision. Written in a light but informative style, the book is easy to read and absorb, and includes over 1000 MATLAB® and Simulink® examples and figures. It is a real walk through the fundamentals of mobile robots, navigation, localization, arm-robot kinematics, dynamics and joint-level control, then camera models, image processing, feature extraction and multi-view geometry, finally bringing it all together with an extensive discussion of visual servo systems.

Relevance: 10.00%

Abstract:

Ocean processes are dynamic, complex, and occur on multiple spatial and temporal scales. To obtain a synoptic view of such processes, ocean scientists collect data over long time periods. Historically, measurements were provided continually by fixed sensors, e.g., moorings, or gathered from ships. Recently, increased use of autonomous underwater vehicles has enabled a more dynamic data acquisition approach. However, we still do not utilize the full capabilities of these vehicles. Here we present algorithms that produce persistent monitoring missions for underwater vehicles by balancing path-following accuracy and sampling resolution for a given region of interest, addressing a pressing need among ocean scientists to collect high-value data efficiently and effectively. More specifically, this paper proposes a path planning algorithm and a speed control algorithm for underwater gliders, which together yield informative trajectories for the glider to persistently monitor a patch of ocean. We optimize a cost function that blends two competing factors: maximizing the information value along the path while minimizing deviation from the planned path due to ocean currents. Speed is controlled along the planned path by adjusting the pitch angle of the underwater glider, so that higher-resolution samples are collected in areas of higher information value. The resulting paths are closed circuits that can be traversed repeatedly to collect long-term ocean data in dynamic environments. The algorithms were tested during sea trials on an underwater glider operating off the coast of southern California, as well as in Monterey Bay, California. The experimental results show significant improvements in data resolution and path reliability compared with previously executed sampling paths used in the respective regions.
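A schematic rendering of the blended objective (the functional form and toy fields below are assumptions for illustration, not the authors' exact cost): candidate closed circuits are scored by the information collected along the path minus a penalty for predicted displacement by currents, and the planner keeps the minimizer.

```python
# Sketch: a blended path cost that trades off information value against
# expected deviation caused by ocean currents.
import numpy as np

def path_cost(waypoints, info_field, current_field, alpha=1.0, beta=1.0):
    """waypoints: iterable of 2D positions; info_field(p) -> scalar information
    value at p; current_field(p) -> 2D ocean current vector at p."""
    info = sum(info_field(p) for p in waypoints)
    drift = sum(np.linalg.norm(current_field(p)) for p in waypoints)
    return -alpha * info + beta * drift   # lower = more informative, less deviation

# Toy fields: an information hotspot at (5, 5) and an eastward shear current.
info_field = lambda p: np.exp(-np.sum((np.asarray(p) - 5.0) ** 2) / 4.0)
current_field = lambda p: np.array([0.1 * p[1], 0.0])

square = [(4, 4), (4, 6), (6, 6), (6, 4)]   # candidate circuit around the hotspot
offset = [(1, 1), (1, 3), (3, 3), (3, 1)]   # candidate circuit away from it
print(path_cost(square, info_field, current_field),   # lower cost: preferred
      path_cost(offset, info_field, current_field))
```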