Abstract:
Environmental quality monitoring of water resources is challenged with providing the basis for safeguarding the environment against adverse biological effects of anthropogenic chemical contamination from diffuse and point sources. While current regulatory efforts focus on monitoring and assessing a few legacy chemicals, many more anthropogenic chemicals can be detected simultaneously in our aquatic resources. However, exposure to chemical mixtures does not necessarily translate into adverse biological effects, nor does it clearly show whether mitigation measures are needed. Thus, the question of which mixtures are present, and which have associated combined effects, becomes central for defining adequate monitoring and assessment strategies. Here we describe the vision of the international, EU-funded project SOLUTIONS, in which three routes are explored to link the occurrence of chemical mixtures at specific sites to the assessment of adverse biological combination effects. First, multi-residue target and non-target screening techniques covering a broader range of anticipated chemicals co-occurring in the environment are being developed. By improving sensitivity and detection limits for known bioactive compounds of concern, new analytical chemistry data for multiple components can be obtained and used to characterise priority mixtures. This information on chemical occurrence will be used to predict mixture toxicity and to derive combined effect estimates suitable for advancing environmental quality standards. Second, bioanalytical tools will be explored to provide aggregate bioactivity measures that integrate all components producing common (adverse) outcomes, even for mixtures of varying composition. The ambition is to provide comprehensive arrays of effect-based tools and trait-based field observations that link multiple chemical exposures to various environmental protection goals more directly, and to provide improved in situ observations for impact assessment of mixtures. Third, effect-directed analysis (EDA) will be applied to identify major drivers of mixture toxicity. Refinements of EDA include the use of statistical approaches with monitoring information to guide experimental EDA studies. These three approaches will be explored using case studies in the Danube and Rhine river basins as well as rivers of the Iberian Peninsula. The synthesis of findings will be organised to provide guidance for future solution-oriented environmental monitoring and to explore more systematic ways of assessing mixture exposures and combination effects in future water quality monitoring.
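The abstract does not specify how combined effects are estimated from occurrence data; one widely used default is concentration addition, which sums toxic units across the detected compounds. The sketch below illustrates that idea only; the compound names, concentrations and EC50 values are hypothetical placeholders, not SOLUTIONS project data.

```python
# Minimal sketch of a concentration-addition (toxic unit) estimate for a
# chemical mixture. All numbers and compound names are hypothetical.

# Measured environmental concentration and EC50 for each detected compound,
# both expressed in the same units (e.g. µg/L).
mixture = {
    "diclofenac":    {"conc": 0.12, "ec50": 50.0},
    "carbamazepine": {"conc": 0.30, "ec50": 75.0},
    "atrazine":      {"conc": 0.05, "ec50": 20.0},
}


def toxic_unit_sum(components):
    """Sum of toxic units: TU_mix = sum(c_i / EC50_i).

    Under concentration addition, TU_mix >= 1 indicates that the mixture is
    expected to reach the effect level associated with the individual EC50s.
    """
    return sum(c["conc"] / c["ec50"] for c in components.values())


if __name__ == "__main__":
    print(f"Summed toxic units: {toxic_unit_sum(mixture):.4f}")
```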
Abstract:
Beryllium is a widely distributed, highly toxic metal. When beryllium particulates enter the body, the body's defense mechanisms are engaged. When the body's defenses cannot easily remove the particulates, a damage and repair cycle is initiated. This cycle produces chronic beryllium disease (CBD), a progressive, fibrotic respiratory involvement which eventually suffocates exposed individuals. Beryllium disease is an occupational disease, and as such it can be prevented by limiting exposures. In the 1940s journalists reported beryllium deaths at Atomic Energy Commission (AEC) facilities, the Department of Energy's (DOE) predecessor organization. These reports energized public pressure for exposure limits, and in 1949 the AEC implemented a 2 μg/m³ permissible exposure limit (PEL). The limits appeared to stop acute disease. In contrast, CBD has a long latency period between exposure and diagnosable disease, between one and thirty years. The lack of immediate adverse health consequences masked the seriousness of chronic disease and pragmatically removed CBD from AEC/DOE's political concern. Presently the PEL for beryllium at DOE sites remains at 2 μg/m³. This limit does not prevent CBD. This conclusion has long been known, although denied until recently. In 1999 DOE acknowledged the limit's ineffectiveness in its federal regulation governing beryllium exposure, 10 CFR 850. Despite this admission, the PEL has not been reduced. The beryllium manufacturer and AEC/DOE have a history of exerting efforts to maintain and protect the status quo. Primary amongst these efforts has been the creation and promotion of disinformation within the peer-reviewed health literature on beryllium, exposures, health effects and treatment, and the targeting of graduate students so that their perspective is shaped early. Once indoctrinated with incorrect information, professionals tend to overlook aerosol and respiratory mechanics and immunologic and carcinogenic factors. They then apply tools and perspectives derived from the beryllium manufacturer's and DOE's propaganda. The conclusions drawn are incorrect. The result is that health research and associated policy are conducted from incorrect premises, and effective disease management practices are not implemented. Public health protection requires recognition of the disinformation and its implications. When the disinformation is identified, effective health policies and practices can be developed and implemented.
Abstract:
In the complex landscape of public education, participants at all levels are searching for policy and practice levers that can raise overall performance and close achievement gaps. The collection of articles in this edition of the Journal of Applied Research on Children takes a big step toward providing the tools and tactics needed for an evidence-based approach to educational policy and practice.
Abstract:
The influence of respiratory motion on patient anatomy poses a challenge to accurate radiation therapy, especially in lung cancer treatment. Modern radiation therapy planning uses models of tumor respiratory motion to account for target motion during targeting. The tumor motion model can be verified on a per-treatment-session basis with four-dimensional cone-beam computed tomography (4D-CBCT), which acquires an image set of the dynamic target throughout the respiratory cycle during the therapy session. 4D-CBCT is undersampled if the scan time is too short; however, a short scan time is desirable in clinical practice to reduce patient setup time. This dissertation presents the design and optimization of 4D-CBCT to reduce the impact of undersampling artifacts at short scan times. This work measures the impact of undersampling artifacts on the accuracy of target motion measurement under different sampling conditions and for various object sizes and motions. The results provide a minimum scan time such that the target tracking error is less than a specified tolerance. This work also presents new image reconstruction algorithms for reducing undersampling artifacts in undersampled datasets by taking advantage of the assumption that the relevant motion of interest is contained within a volume-of-interest (VOI). It is shown that the VOI-based reconstruction provides more accurate image intensity than standard reconstruction: in a study designed to simulate target motion, the VOI-based reconstruction produced 43% lower least-squares error inside the VOI and 84% lower error throughout the image. The VOI-based reconstruction approach can reduce acquisition time and improve image quality in 4D-CBCT.
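The abstract compares reconstructions by their least-squares error inside and outside a VOI. The sketch below illustrates that kind of comparison on synthetic arrays; the phantom, the noise levels, the VOI placement and the error definition are assumptions for illustration, not the dissertation's actual reconstruction code.

```python
import numpy as np

# Minimal sketch: compare two reconstructions against a ground-truth image by
# sum-of-squared-differences error, evaluated inside a VOI mask and globally.
rng = np.random.default_rng(0)
truth = rng.random((64, 64, 64))                          # synthetic phantom
recon_standard = truth + 0.10 * rng.standard_normal(truth.shape)
recon_voi = truth + 0.05 * rng.standard_normal(truth.shape)

voi = np.zeros(truth.shape, dtype=bool)                   # cubic VOI around the target
voi[24:40, 24:40, 24:40] = True


def lsq_error(recon, reference, mask=None):
    """Sum of squared intensity differences, optionally restricted to a mask."""
    diff = recon - reference
    if mask is not None:
        diff = diff[mask]
    return float(np.sum(diff ** 2))


for name, recon in [("standard", recon_standard), ("VOI-based", recon_voi)]:
    print(name,
          "VOI error:", round(lsq_error(recon, truth, voi), 2),
          "full-image error:", round(lsq_error(recon, truth), 2))
```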
Abstract:
Clinical text understanding (CTU) is of interest to health informatics because critical clinical information, frequently represented as unconstrained text in electronic health records, is extensively used by human experts to guide clinical practice and decision making and to document delivery of care, but is largely unusable by information systems for queries and computations. Recent initiatives advocating for translational research call for technologies that can integrate structured clinical data with unstructured data, provide a unified interface to all data, and contextualize clinical information for reuse in the multidisciplinary and collaborative environment envisioned by the CTSA program. This implies that technologies for the processing and interpretation of clinical text should be evaluated not only in terms of their validity and reliability in their intended environment, but also in light of their interoperability and their ability to support information integration and contextualization in a distributed and dynamic environment. This vision adds a new layer of information representation requirements that need to be accounted for when conceptualizing the implementation or acquisition of clinical text processing tools and technologies for multidisciplinary research. On the other hand, electronic health records frequently contain unconstrained clinical text with high variability in the use of terms and documentation practices, and without commitment to the grammatical or syntactic structure of the language (e.g. triage notes, physician and nurse notes, chief complaints, etc.). This hinders the performance of natural language processing technologies, which typically rely heavily on the syntax of the language and the grammatical structure of the text. This document introduces our method to transform unconstrained clinical text found in electronic health information systems into a formal (computationally understandable) representation that is suitable for querying, integration, contextualization and reuse, and is resilient to the grammatical and syntactic irregularities of clinical text. We present our design rationale, method, and results of evaluation in processing chief complaints and triage notes from 8 different emergency departments in Houston, Texas. Finally, we discuss the significance of our contribution in enabling the use of clinical text in a practical bio-surveillance setting.
Abstract:
The purpose of this study was to design, synthesize and develop novel transporter-targeting agents for image-guided therapy and drug delivery. Two novel agents, N4-guanine (N4amG) and glycopeptide (GP), were synthesized for tumor cell proliferation assessment and as a cancer theranostic platform, respectively. N4amG and GP were synthesized and radiolabeled with 99mTc and 68Ga. The chemical and radiochemical purities as well as the radiochemical stabilities of radiolabeled N4amG and GP were tested. In vitro stability assessment showed both 99mTc-N4amG and 99mTc-GP were stable up to 6 hours, whereas 68Ga-GP was stable up to 2 hours. Cell culture studies confirmed radiolabeled N4amG and GP could penetrate the cell membrane through nucleoside transporters and amino acid transporters, respectively. Up to 40% of intracellular 99mTc-N4amG and 99mTc-GP was found within the cell nucleus following 2 hours of incubation. Flow cytometry analysis revealed 99mTc-N4amG to be a cell cycle S-phase-specific agent. There was a significant difference in the uptake of 99mTc-GP between pre- and post-paclitaxel-treated cells, which suggests that 99mTc-GP may be useful in chemotherapy treatment monitoring. Moreover, radiolabeled N4amG and GP were tested in vivo using tumor-bearing animal models. 99mTc-N4amG showed an increase in tumor-to-muscle count density ratios of up to 5 at 4-hour imaging. Both 99mTc-labeled agents showed decreased tumor uptake after paclitaxel treatment. Immunohistochemistry analysis demonstrated that the uptake of 99mTc-N4amG was correlated with Ki-67 expression. Both 99mTc-N4amG and 99mTc-GP could differentiate between tumor and inflammation in animal studies. Furthermore, 68Ga-GP was compared to 18F-FDG in rabbit PET imaging studies. 68Ga-GP had lower tumor standardized uptake values (SUV) but similar uptake dynamics and a different biodistribution compared with 18F-FDG. Finally, to demonstrate that GP can be a potential drug carrier for cancer theranostics, several drugs, including doxorubicin, were selected to be conjugated to GP. Imaging studies demonstrated that tumor uptake of GP-drug conjugates increased as a function of time. GP-doxorubicin (GP-DOX) showed a slow-release pattern in an in vitro cytotoxicity assay and exhibited anti-cancer efficacy with reduced toxicity in an in vivo tumor growth delay study. In conclusion, both N4amG and GP are transporter-based targeting agents. Radiolabeled N4amG can be used for tumor cell proliferation assessment. GP is a potential agent for image-guided therapy and drug delivery.
Abstract:
Documented risks of physical activity include reduced bone mineral density at high activity volume and sudden cardiac death among adults and adolescents. Further illumination of these risks is needed to inform future public health guidelines. The present research seeks to 1) quantify the association between physical activity and bone mineral density (BMD) across a broad range of activity volume, 2) assess the utility of an existing pre-screening questionnaire among US adults, and 3) determine whether pre-screening risk stratification by questionnaire predicts referral to a physician among Texas adolescents. Among 9,468 adults 20 years of age or older in the National Health and Nutrition Examination Survey (NHANES) 2007-2010, linear regression analyses revealed generally higher BMD at the lumbar spine and proximal femur with greater reported activity volume. Only lumbar BMD in women was unassociated with activity volume. Among men, BMD was similar at activity beyond four times the minimum volume recommended in the Physical Activity Guidelines. These results suggest that the range of activity reported by US adults is not associated with low BMD at either site. The American Heart Association / American College of Sports Medicine Preparticipation Questionnaire (AAPQ) was applied to 6,661 adults 40 years of age or older from NHANES 2001-2004 by using NHANES responses to complete AAPQ items. Following AAPQ referral criteria, 95.5% of women and 93.5% of men would be referred to a physician before exercise initiation, suggesting little utility for the AAPQ among adults aged 40 years or older. Unnecessary referral before exercise initiation may present a barrier to exercise adoption and may strain an already stressed healthcare infrastructure. Among 3,181 athletes in the Texas Adolescent Athlete Heart Screening Registry, 55.2% of boys and 62.2% of girls were classified as high-risk based on questionnaire answers. Using sex-stratified contingency table analyses, risk categories were not significantly associated with referral to a physician based on electrocardiogram or echocardiogram, nor were they associated with confirmed diagnoses on follow-up. Additional research is needed to identify which symptoms are most closely related to sudden cardiac death and to determine the best methods for rapid and reliable assessment. In conclusion, this research suggests that the volume of activity reported by US adults is not associated with low BMD at two clinically relevant sites, casts doubt on the utility of two existing cardiac screening tools, and raises concern about barriers to activity erected through ineffective screening. These findings augment existing research in this area and may inform revisions to the Physical Activity Guidelines regarding risk mitigation.
Abstract:
This cross-sectional analysis of data from the Third National Health and Nutrition Examination Survey was conducted to determine the prevalence and determinants of asthma and wheezing among US adults, and to identify the occupations and industries at high risk of developing work-related asthma and work-related wheezing. Separate logistic models were developed for physician-diagnosed asthma (MD asthma), wheezing in the previous 12 months (wheezing), work-related asthma and work-related wheezing. Major risk factors including demographic, socioeconomic, indoor air quality, allergy, and other characteristics were analyzed. The prevalence of lifetime MD asthma was 7.7% and the prevalence of wheezing was 17.2%. Mexican-Americans exhibited the lowest prevalence of MD asthma (4.8%; 95% confidence interval (CI): 4.2, 5.4) when compared to other race-ethnic groups. The prevalence of MD asthma or wheezing did not vary by gender. Multiple logistic regression analysis showed that Mexican-Americans were less likely to develop MD asthma (adjusted odds ratio (ORa) = 0.64, 95% CI: 0.45, 0.90) and wheezing (ORa = 0.55, 95% CI: 0.44, 0.69) when compared to non-Hispanic whites. Low education level, current and past smoking status, pet ownership, lifetime diagnosis of physician-diagnosed hay fever and obesity were all significantly associated with MD asthma and wheezing. No significant effect of indoor air pollutants on asthma and wheezing was observed in this study. The prevalence of work-related asthma was 3.70% (95% CI: 2.88, 4.52) and the prevalence of work-related wheezing was 11.46% (95% CI: 9.87, 13.05). The major occupations identified as at risk of developing work-related asthma and wheezing were cleaners; farm and agriculture-related occupations; entertainment-related occupations; protective service occupations; construction; mechanics and repairers; textile; fabricators and assemblers; other transportation and material moving occupations; freight, stock and material movers; motor vehicle operators; and equipment cleaners. The population attributable risks for work-related asthma and wheezing were 26% and 27%, respectively, for occupations. The major industries identified as at risk of work-related asthma and wheezing include the entertainment-related industry; agriculture, forestry and fishing; construction; electrical machinery; repair services; and lodging places. For industries, the population attributable risk was 36.5% for work-related asthma and 28.5% for work-related wheezing. Asthma remains an important public health issue in the US and in other regions of the world.
Abstract:
This presentation explains a dozen tools and paradigm shifts that teachers should apply in transformative ways to their work with students, how Web 2.0, tagging, and RSS are crucial to this process, and how teachers can develop their own personal learning networks to practice continuous lifelong learning and 'teacher autonomy' before applying these concepts to students.
A repository for integration of software artifacts with dependency resolution and federation support
Abstract:
While developing new IT products, reusability of existing components is a key aspect that can considerably improve the success rate. This fact has become even more important with the rise of the open source paradigm. However, integrating different products and technologies is not always an easy task. Different communities employ different standards and tools, and most of the time it is not clear which dependencies a particular piece of software has. This is exacerbated by the transitive nature of these dependencies, making component integration a complicated affair. To help reduce this complexity we propose a model-based repository capable of automatically resolving the required dependencies. This repository needs to be expandable, so that new constraints can be analyzed, and to have federation support for integration with other sources of artifacts. The solution we propose achieves these goals by working with OSGi components and by using OSGi itself.
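To make the notion of transitive dependency resolution concrete, the sketch below walks a small dependency graph and collects every artifact reachable from a starting component. The artifact names and the dictionary representation are hypothetical; the actual repository works with OSGi bundle metadata and model-based constraints rather than a plain dict.

```python
# Minimal sketch of transitive dependency resolution over a repository of
# artifacts. Names and graph structure are invented for illustration only.
from collections import deque

repository = {
    "app.ui":          ["app.core", "lib.logging"],
    "app.core":        ["lib.persistence", "lib.logging"],
    "lib.persistence": ["lib.logging"],
    "lib.logging":     [],
}


def resolve(artifact, repo):
    """Return the set of all transitive dependencies of `artifact`."""
    resolved, queue = set(), deque([artifact])
    while queue:
        current = queue.popleft()
        for dep in repo.get(current, []):
            if dep not in resolved:
                resolved.add(dep)
                queue.append(dep)
    return resolved


print(resolve("app.ui", repository))
# e.g. {'app.core', 'lib.persistence', 'lib.logging'}
```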
Abstract:
Profiting from the increasing availability of laser sources delivering intensities above 10⁹ W/cm² with pulse energies in the range of several joules and pulse widths in the range of nanoseconds, laser shock processing (LSP) is being consolidated as an effective technology for improving the surface mechanical and corrosion resistance properties of metals, and is being developed as a practical process amenable to production engineering. The main acknowledged advantage of the laser shock processing technique consists in its capability of inducing a relatively deep compressive residual stress field in metallic alloy pieces, allowing improved mechanical behaviour, specifically improved life of the treated specimens against wear, crack growth and stress corrosion cracking. Following a short description of the theoretical/computational and experimental methods developed by the authors for the predictive assessment and experimental implementation of LSP treatments, experimental results are presented on the residual stress profiles and associated surface property modifications successfully achieved in typical materials (specifically steels and Al and Ti alloys) under different LSP irradiation conditions.
Abstract:
The dramatic impact of degenerative neurological pathologies on quality of life is a growing concern. It is well known that many neurological diseases leave a fingerprint in voice and speech production. Many techniques have been designed for the detection, diagnosis and monitoring of neurological disease. Most of them are costly or difficult to extend to primary care medical services. The present paper shows how some neurological diseases can be traced at the level of phonation. The detection procedure would be based on a simple voice test. The availability of advanced tools and methodologies to monitor the organic pathology of voice would facilitate the implementation of these tests. The paper hypothesizes that some of the underlying mechanisms affecting the production of voice produce measurable correlates in vocal fold biomechanics. A general description of the methodological foundations of the voice analysis system, which can estimate correlates of neurological disease, is given. Some case studies are presented to illustrate the potential of the methodology for monitoring neurological diseases by voice.
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its best-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
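For readers unfamiliar with such modules, the snippet below illustrates the kind of output a POS-tagging module produces. It uses the NLTK toolkit purely as an illustration under that assumption; NLTK is not one of the tools discussed in this work, and the example tags shown are indicative only.

```python
# Minimal illustration of POS-tagger output, using NLTK as an example toolkit.
import nltk

# Required NLTK resources (downloaded once, then cached locally).
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

sentence = "Linguistic annotation tools are important assets."
tokens = nltk.word_tokenize(sentence)
print(nltk.pos_tag(tokens))
# e.g. [('Linguistic', 'JJ'), ('annotation', 'NN'), ('tools', 'NNS'),
#       ('are', 'VBP'), ('important', 'JJ'), ('assets', 'NNS'), ('.', '.')]
```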
However, linguistic annotation tools have still some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
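As a concrete illustration of combining annotations produced for a common level, the sketch below merges the outputs of several taggers over the same tokens by simple majority vote, so that an error made by one tool can be outvoted by the others. The taggers, tag values and voting scheme are invented for illustration; they are not the combination mechanism proposed later in this work.

```python
# Minimal sketch: combine annotations from several tools tagging the same
# text at the same linguistic level, using a majority vote per token.
from collections import Counter

tokens = ["Time", "flies", "like", "an", "arrow"]

# Hypothetical outputs of three POS taggers over the same tokens.
tagger_outputs = [
    ["NN", "VBZ", "IN", "DT", "NN"],   # tagger A
    ["NN", "NNS", "IN", "DT", "NN"],   # tagger B (errs on "flies")
    ["NN", "VBZ", "IN", "DT", "NN"],   # tagger C
]


def combine_by_majority(outputs):
    """For each token position, keep the tag most taggers agree on."""
    combined = []
    for position_tags in zip(*outputs):
        tag, _count = Counter(position_tags).most_common(1)[0]
        combined.append(tag)
    return combined


print(list(zip(tokens, combine_by_majority(tagger_outputs))))
# [('Time', 'NN'), ('flies', 'VBZ'), ('like', 'IN'), ('an', 'DT'), ('arrow', 'NN')]
```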
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by a higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based