Abstract:
This study used automated data processing techniques to calculate a set of novel treatment plan accuracy metrics and to investigate their usefulness as predictors of quality assurance (QA) success and failure. 151 beams from 23 prostate and cranial IMRT treatment plans were used in this study. These plans had been evaluated before treatment using measurements with a diode array system. The TADA software suite was adapted to allow automatic batch calculation of several proposed plan accuracy metrics, including mean field area, small-aperture, off-axis and closed-leaf factors. All of these results were compared with the gamma pass rates from the QA measurements, and correlations were investigated. The mean field area factor provided a threshold field size (5 cm2, equivalent to a 2.2 x 2.2 cm square field), below which all beams failed the QA tests. The small-aperture score provided a useful predictor of plan failure when averaged over all beams, despite being weakly correlated with gamma pass rates for individual beams. By contrast, the closed-leaf and off-axis factors provided information about the geometric arrangement of the beam segments but were not useful for distinguishing between plans that passed and failed QA. This study has provided some simple tests for plan accuracy, which may help minimise time spent on QA assessments of treatments that are unlikely to pass.
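The mean-field-area screening rule described above can be sketched as follows. This is a minimal illustration, not the study's implementation: the function names and the way segment areas are supplied are hypothetical, and only the reported 5 cm2 threshold is taken from the abstract.

```python
def mean_field_area(segment_areas_cm2):
    """Mean aperture area (cm^2) over a beam's MLC segments (hypothetical input format)."""
    return sum(segment_areas_cm2) / len(segment_areas_cm2)

def likely_to_fail_qa(segment_areas_cm2, threshold_cm2=5.0):
    """Flag a beam whose mean field area falls below the ~5 cm^2 threshold
    (about a 2.2 x 2.2 cm square field) reported in the study."""
    return mean_field_area(segment_areas_cm2) < threshold_cm2

# A beam dominated by small apertures is flagged as likely to fail QA.
print(likely_to_fail_qa([3.1, 4.2, 2.8]))    # True
print(likely_to_fail_qa([12.0, 9.5, 14.2]))  # False
```

Such a rule is attractive as a pre-QA screen precisely because it requires only plan geometry, not a measurement.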
Abstract:
Design is a ubiquitous, collaborative and highly material activity. Because of the embodied nature of the design profession, designers apply certain collaborative practices to enhance creativity in their everyday work. Within the domain of industrial design, we studied two educational design departments over a period of eight months. Using examples from our fieldwork, we develop our results around three broad themes related to collaborative practices that support the creativity of design professionals: 1) externalization; 2) use of physical space; and 3) use of bodies. We believe that these themes of collaborative practices could provide new insights into designing technologies for supporting a varied set of design activities. We describe two conceptual collaborative systems derived from the results of our study.
Abstract:
Multimedia communication capabilities are rapidly expanding, and visual information is easily shared electronically, yet funding bodies still rely on paper grant proposal submissions. Incorporating modern technologies will streamline the granting process by increasing the fidelity of grant communication, improving the efficiency of review, and reducing the cost of the process.
Abstract:
Objective To evaluate the effectiveness and robustness of Anonym, a tool for de-identifying free-text health records based on conditional random fields classifiers informed by linguistic and lexical features, as well as features extracted by pattern matching techniques. De-identification of personal health information in electronic health records is essential for the sharing and secondary usage of clinical data. De-identification tools that adapt to different sources of clinical data are attractive, as they would require minimal intervention to guarantee high effectiveness. Methods and Materials The effectiveness and robustness of Anonym are evaluated across multiple datasets, including the widely adopted Integrating Biology and the Bedside (i2b2) dataset, used for evaluation in a de-identification challenge. The datasets used here vary in type of health records, source of data, and their quality, with one of the datasets containing optical character recognition errors. Results Anonym identifies and removes up to 96.6% of personal health identifiers (recall) with a precision of up to 98.2% on the i2b2 dataset, outperforming the best system proposed in the i2b2 challenge. The effectiveness of Anonym across datasets is found to depend on the amount of information available for training. Conclusion Findings show that Anonym compares favourably with the best approach from the 2006 i2b2 shared task. It is easy to retrain Anonym with new datasets; if retrained, the system is robust to variations in training size, data type and quality in the presence of sufficient training data.
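The precision and recall figures quoted above can be computed with a standard span-matching evaluation. The sketch below uses hypothetical character-offset spans and an exact-match criterion (the study's actual matching rules are not specified in the abstract):

```python
def precision_recall(predicted, gold):
    """Exact-match evaluation of de-identification output:
    precision = correctly removed identifiers / all removed,
    recall    = correctly removed identifiers / all true identifiers."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)  # spans flagged that are truly identifiers
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

# Hypothetical (start, end) spans flagged as personal health identifiers.
gold = {(0, 8), (15, 27), (40, 52), (60, 70)}
pred = {(0, 8), (15, 27), (40, 52), (80, 90)}
p, r = precision_recall(pred, gold)
print(p, r)  # 0.75 0.75
```

For de-identification, recall is usually the critical figure, since a missed identifier is a privacy leak while a false positive merely redacts extra text.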
Abstract:
The last few years have brought an increasing interest in the chemistry of the interstellar and circumstellar environs. Many of the molecular species discovered in remote galactic regions have been dubbed 'non-terrestrial' because of their unique structures (Thaddeus et al., 1993). These findings have provided a challenge to chemists in many differing fields to attempt to generate these unusual species in the laboratory. Of particular recent interest have been the unsaturated hydrocarbon families, CnH and CnH2, which have been pursued by a number of diverse methodologies. A wide range of heterocumulenes, including CnO, HCnO, CnN, HCnN, CnS, HCnS, CnSi and HCnSi, have also provided intriguing targets for laboratory experiments. Strictly, the term cumulene refers to a class of compounds that possess a series of adjacent double bonds, with allene representing the simplest example (H2C=C=CH2). However, for many of the non-terrestrial molecules presented here, the carbon chain cannot be described in terms of a single simple valence structure, and so we use the terms cumulene and heterocumulene in a more general sense: to describe molecular species that contain an unsaturated polycarbon chain. Mass spectrometry has proved an invaluable tool in the quest for interstellar cumulenes and heterocumulenes in the laboratory; it has the ability, in its many forms, to (i) generate charged analogs of these species in the gas phase, (ii) probe their connectivity, ion chemistry, and thermochemistry, and (iii) in some cases, elucidate the neutrals themselves. Here, we will discuss the progress of these studies to this time. (C) 1999 John Wiley & Sons, Inc.
Abstract:
Analogy plays a central role in legal reasoning, yet how to analogize is poorly taught and poorly practiced. We all recognize when legal analogies are being made: when a law professor suggests a difficult hypothetical in class and a student tentatively guesses at the answer based on the cases she read the night before, when an attorney advises a client to settle because a previous case goes against him, or when a judge adopts one precedent over another on the basis that it better fits the present case. However, when it comes to explaining why certain analogies are compelling, persuasive, or better than the alternative, lawyers usually draw a blank. The purpose of this article is to provide a simple model that can be used to teach and to learn how analogy actually works, and what makes one analogy superior to a competing analogy. The model is drawn from a number of theories of analogy making in cognitive science. Cognitive science is the “long-term enterprise to understand the mind scientifically.” The field studies the mechanisms that are involved in cognitive processes like thinking, memory, learning, and recall; and one of its main foci has been on how people construct analogies. The lessons from cognitive science theories of analogy can be applied to legal analogies to give students and lawyers a better understanding of this fundamental process in legal reasoning.
Abstract:
Human parathyroid hormone (hPTH) is currently the only treatment for osteoporosis that forms new bone. Previously we described a fish equivalent, Fugu parathyroid hormone 1 (fPth1), which has hPTH-like biological activity in vitro despite fPth1(1–34) sharing only 53% identity with hPTH(1–34). Here we demonstrate the in vivo actions of fPth1(1–34) on bone. In study 1, young male rats were injected intermittently for 30 days with fPth1 [30 μg–1000 μg/kg body weight (b.w.), (30fPth1–1000fPth1)] or hPTH [30 μg–100 μg/kg b.w. (30hPTH–100hPTH)]. In proximal tibiae at low doses, fPth1 was positively correlated with trabecular bone volume/total volume (TbBV/TV), while hPTH increased TbBV/TV, trabecular thickness (TbTh) and trabecular number (TbN). 500fPth1 and 1000fPth1 increased TbBV/TV, TbTh, TbN, mineral apposition rate (MAR) and bone formation rate/bone surface (BFR/BS) with a concomitant decrease in osteoclast surface and number. In study 2, ovariectomized (OVX), osteopenic rats and sham-operated (SHAM) rats were injected intermittently with 500 μg/kg b.w. of fPth1 (500fPth1) for 11 weeks. 500fPth1 treatment resulted in increased TbBV/TV (151%) and TbTh (96%) in the proximal tibiae due to increased bone formation as assessed by BFR/BS (490%) and MAR (131%). The effect was restoration of TbBV/TV to SHAM levels without any effect on bone resorption. 500fPth1 also increased TbBV/TV and TbTh in the vertebrae (L6) and cortical thickness in the mid-femora, increasing bone strength at these sites. fPth1 was similarly effective in SHAM rats. Notwithstanding the low amino acid sequence homology with hPTH(1–34), we have clearly established the efficacy of fPth1(1–34) as an anabolic bone agent.
Abstract:
This paper examines the use of connectionism (neural networks) in modelling legal reasoning. I discuss how the implementations of neural networks have failed to account for legal theoretical perspectives on adjudication. I criticise the use of neural networks in law, not because connectionism is inherently unsuitable in law, but rather because it has been done so poorly to date. The paper reviews a number of legal theories which provide a grounding for the use of neural networks in law. It then examines some implementations undertaken in law and criticises their legal theoretical naïveté. It then presents lessons from the implementations which researchers must bear in mind if they wish to build neural networks that are justified by legal theories.
Abstract:
The assumptions underlying the Probability Ranking Principle (PRP) have led to a number of alternative approaches that cater or compensate for the PRP’s limitations. All alternatives deviate from the PRP by incorporating dependencies. This results in a re-ranking that promotes or demotes documents depending upon their relationship with the documents that have been already ranked. In this paper, we compare and contrast the behaviour of state-of-the-art ranking strategies and principles. To do so, we tease out analytical relationships between the ranking approaches and we investigate the document kinematics to visualise the effects of the different approaches on document ranking.
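One concrete instance of such a dependency-aware deviation from the PRP is greedy re-ranking in the style of maximal marginal relevance, where each next document is chosen by trading relevance off against similarity to the documents already ranked. The sketch below is illustrative only; the relevance scores, similarities and the trade-off parameter are invented, and the paper itself compares a broader family of strategies:

```python
def mmr_rerank(scores, sim, lam=0.7):
    """Greedy re-ranking: at each step pick the document maximising
    lam * relevance - (1 - lam) * (max similarity to already-ranked docs).
    Documents similar to those above them are demoted."""
    remaining = set(scores)
    ranked = []
    while remaining:
        best = max(
            remaining,
            key=lambda d: lam * scores[d]
            - (1 - lam) * max((sim[d][r] for r in ranked), default=0.0),
        )
        ranked.append(best)
        remaining.remove(best)
    return ranked

# Illustrative relevance scores and pairwise similarities:
# d2 is a near-duplicate of d1, so it is demoted below the less relevant d3.
scores = {"d1": 0.9, "d2": 0.85, "d3": 0.5}
sim = {
    "d1": {"d2": 0.95, "d3": 0.1},
    "d2": {"d1": 0.95, "d3": 0.1},
    "d3": {"d1": 0.1, "d2": 0.1},
}
print(mmr_rerank(scores, sim))  # ['d1', 'd3', 'd2']
```

A pure PRP ranking by `scores` alone would give d1, d2, d3; the dependency term is exactly what produces the promotion/demotion behaviour the abstract describes.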
Abstract:
Objective To examine the relationship between pubertal timing and physical activity. Study design A longitudinal sample of 143 adolescent girls was assessed at ages 11 and 13 years. Girls' pubertal development was assessed at age 11 with blood estradiol levels, Tanner breast staging criteria, and parental report of pubertal development. Girls were classified as early maturers (n = 41) or later maturers (n = 102) on the basis of their scores on the 3 pubertal development measures. Dependent variables measured at age 13 were average minutes/day of moderate to vigorous and vigorous physical activity as measured by the ActiGraph accelerometer. Results Early-maturing girls had significantly lower self-reported physical activity and accumulated fewer minutes of moderate to vigorous and vigorous physical activity and accelerometer counts per day at age 13 than later maturing girls. These effects were independent of differences in percentage body fat and self-reported physical activity at age 11. Conclusion Girls experiencing early pubertal maturation at age 11 reported lower subsequent physical activity at age 13 than their later maturing peers. Pubertal maturation, in particular early maturation relative to peers, may lead to declines in physical activity among adolescent girls.
Abstract:
Protocols for bioassessment often relate changes in summary metrics that describe aspects of biotic assemblage structure and function to environmental stress. Biotic assessment using multimetric indices now forms the basis for setting regulatory standards for stream quality and a range of other goals related to water resource management in the USA and elsewhere. Biotic metrics are typically interpreted with reference to the expected natural state to evaluate whether a site is degraded. It is critical that natural variation in biotic metrics along environmental gradients is adequately accounted for, in order to quantify human disturbance-induced change. A common approach used in the Index of Biotic Integrity (IBI) is to examine scatter plots of variation in a given metric along a single stream-size surrogate and fit a line (drawn by eye) to form the upper bound, and hence define the maximum likely value of a given metric at a site with a given environmental characteristic (termed the 'maximum species richness line', MSRL). In this paper we examine whether the use of a single environmental descriptor and the MSRL is appropriate for defining the reference condition for a biotic metric (fish species richness) and for detecting human disturbance gradients in rivers of south-eastern Queensland, Australia. We compare the accuracy and precision of the MSRL approach based on single environmental predictors with three regression-based prediction methods (Simple Linear Regression, Generalised Linear Modelling and Regression Tree modelling) that use (either singly or in combination) a set of landscape and local scale environmental variables as predictors of species richness. We compared the frequency of classification errors from each method against set biocriteria and contrast the ability of each method to accurately reflect human disturbance gradients at a large set of test sites.
The results of this study suggest that the MSRL based upon variation in a single environmental descriptor could not accurately predict species richness at minimally disturbed sites when compared with SLRs based on equivalent environmental variables. Regression-based modelling incorporating multiple environmental variables as predictors more accurately explained natural variation in species richness than did simple models using single environmental predictors. Prediction error arising from the MSRL was substantially higher than for the regression methods and led to an increased frequency of Type I errors (incorrectly classing a site as disturbed). We suggest that problems with the MSRL arise from the inherent scoring procedure used and that it is limited to predicting variation in the dependent variable along a single environmental gradient.
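The SLR alternative to the eye-fitted MSRL is ordinary least squares on a single environmental predictor. A minimal sketch with invented data (the variable names, values and biocriterion threshold are hypothetical, not the study's):

```python
def fit_slr(x, y):
    """Ordinary least-squares fit y = a + b*x: the SLR counterpart to an
    eye-fitted maximum species richness line. Returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
        (xi - mx) ** 2 for xi in x
    )
    return my - b * mx, b

# Hypothetical reference data: fish species richness against a stream-size
# surrogate (e.g. log catchment area) at minimally disturbed sites.
size = [1.0, 2.0, 3.0, 4.0, 5.0]
richness = [4.0, 7.0, 9.0, 12.0, 13.0]
a, b = fit_slr(size, richness)

# A test site is then judged against the reference expectation rather than
# against an upper-bound line, reducing the Type I errors noted above.
expected = a + b * 3.5
print(round(expected, 2))  # 10.15
```

Unlike the MSRL, the fitted expectation is reproducible and extends naturally to multiple predictors (the GLM and regression-tree methods compared in the study).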
Abstract:
Since the revisions to the International Health Regulations (IHR) in 2005, much attention has turned to two concerns relating to infectious disease control. The first is how to assist states to strengthen their capacity to identify and verify public health emergencies of international concern (PHEIC). The second is the question of how the World Health Organization (WHO) will operate its expanded mandate under the revised IHR. Very little attention has been paid to the potential individual power that has been afforded under the IHR revisions – primarily through the first inclusion of human rights principles into the instrument and the allowance for the WHO to receive non-state surveillance intelligence and informal reports of health emergencies. These inclusions mark the individual as a powerful actor, but also recognise the vulnerability of the individual to the whim of the state in outbreak response and containment. In this paper we examine why these changes to the IHR occurred and explore the consequence of expanding the sovereignty-as-responsibility concept to disease outbreak response. To this end our paper considers both the strengths and weaknesses of incorporating reports from non-official sources and including human rights principles in the IHR framework.
Abstract:
Sociological approaches to inquiry on emotion in educational settings are growing. Despite a long tradition of research and theory in disciplines such as psychology and sociology, the methods and approaches for naturalistic investigation of emotion are in a developmental phase in educational settings. In this article, recent empirical studies on emotion in educational contexts are canvassed. The discussion focuses on the use of multiple methods within research conducted in high school and university classrooms, highlighting recent methodological progress. The methods discussed include facial expression analysis, verbal and non-verbal conduct, and self-report methods. Analyses drawn from different studies, informed by perspectives from microsociology, highlight the strengths and limitations of any one method. The power and limitations of multi-method approaches are discussed.
Abstract:
The statutory arrangements for the management of natural resources in Australia confer powers of decision-making upon government agencies and, at the same time, restrict how these powers are to be exercised by reference either to stated criteria or in some instances to the public interest. These restrictions perform different functions according to their structure, form and language: for example they may be in the form of jurisdictional, deliberative or purposive rules. This article reviews how the offshore resources legislation of the Commonwealth and some examples of the onshore resources legislation of Queensland address the functions performed by the public interest in determining whether there is compliance with the principle of the rule of law.
Abstract:
Food is a vital foundation of all human life. It is essential to a myriad of political, socio-cultural, economic and environmental practices throughout history. As Kaplan [1] contends, “the scholarship on food has real pedigree.” Today, practices of food production, consumption and distribution have the potential to go through immensely transformative shifts as network technologies become increasingly embedded in every domain of contemporary life. This presents unique opportunities for further scholarly exploration on this topic, which this special issue intends to address. Information and communication technologies (ICTs) are one of the pillars of contemporary global functionality and sustenance and undoubtedly will continue to present new challenges and opportunities for the future. As such, this special issue of Futures has been brought together to address challenges and opportunities at the intersection of food and ICTs. In particular, the edition asks, what are the key roles that network technologies play in re-shaping social and economic networks of food?