937 results for tool skype
Abstract:
What is the relationship between the design of regulations and levels of individual compliance? To answer this question, Crawford and Ostrom's institutional grammar tool is used to deconstruct regulations governing the aquaculture industry in Colorado, USA. Compliance with the deconstructed regulatory components is then assessed based on perceptions of the appropriateness of the regulations, involvement in designing the regulations, and intrinsic and extrinsic motivations. The findings suggest that levels of compliance vary both across and within individuals, depending on the regulatory component in question. As expected, the level of compliance is affected by the perceived appropriateness of the regulations, participation in designing them, and feelings of guilt and fear of social disapproval. Furthermore, there is a strong degree of interdependence among the written components identified by the institutional grammar tool in affecting compliance levels. The paper contributes to the regulation and compliance literature by illustrating the utility of the institutional grammar tool for understanding regulatory content, applying a new Q-sort technique for measuring individual levels of compliance, and providing a rare exploration of feelings of guilt and fear outside the laboratory setting. © 2012 Blackwell Publishing Asia Pty Ltd.
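A Q-sort asks respondents to place statements into a fixed, quasi-normal distribution of ranks rather than rating each statement independently. The sketch below shows only the generic mechanics of validating such a sort; the grid shape and statement labels are invented for illustration, since the abstract does not describe the study's actual sorting grid.

```python
# Forced distribution: rank -> required number of statements at that rank.
# This 9-statement, 5-rank grid is a hypothetical example.
FORCED_SHAPE = {-2: 1, -1: 2, 0: 3, 1: 2, 2: 1}

def valid_q_sort(sort):
    """Check that a respondent's {statement: rank} mapping fills the
    forced distribution exactly (no rank over- or under-used)."""
    counts = {}
    for rank in sort.values():
        counts[rank] = counts.get(rank, 0) + 1
    return counts == FORCED_SHAPE

sort = {"s1": -2, "s2": -1, "s3": -1, "s4": 0, "s5": 0,
        "s6": 0, "s7": 1, "s8": 1, "s9": 2}
print(valid_q_sort(sort))  # True
```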
Abstract:
A significant challenge in environmental toxicology is that many genetic and genomic tools available in laboratory models have not been developed for commonly used environmental models. The Atlantic killifish (Fundulus heteroclitus) is one of the most studied teleost environmental models, yet few genetic or genomic tools have been developed for use in this species. The advancement of genetic and evolutionary toxicology will require that many of the tools developed in laboratory models be transferred into species more applicable to environmental toxicology. Antisense morpholino oligonucleotide (MO) gene knockdown technology has been widely used to study development in zebrafish and has proven to be a powerful tool in toxicological investigations through direct manipulation of molecular pathways. To expand the utility of killifish as an environmental model, MO gene knockdown technology was adapted for use in Fundulus. Morpholino microinjection methods were altered to overcome the significant differences between these two species. Morpholino efficacy and functional duration were evaluated with molecular and phenotypic methods. A cytochrome P450-1A (CYP1A) MO was used to confirm the effectiveness of the methodology. For CYP1A MO-injected embryos, a 70% reduction in CYP1A activity, an 86% reduction in total CYP1A protein, a significant increase in beta-naphthoflavone-induced teratogenicity, and estimates of functional duration (a 50% reduction in activity at 10 dpf and an 86% reduction in total protein at 12 dpf) conclusively demonstrated that MO technologies can be used effectively in killifish and will likely be just as informative as they have been in zebrafish.
Abstract:
Gemstone Team ILL (Interactive Language Learning)
Abstract:
Nolan and Temple Lang argue that “the ability to express statistical computations is an essential skill.” A key related capacity is the ability to conduct and present data analysis in a way that another person can understand and replicate. The copy-and-paste workflow that is an artifact of antiquated user-interface design makes reproducibility of statistical analysis more difficult, especially as data become increasingly complex and statistical methods become increasingly sophisticated. R Markdown is a new technology that makes creating fully reproducible statistical analyses simple and painless. It provides a solution suitable not only for cutting-edge research, but also for use in an introductory statistics course. We present experiential and statistical evidence that R Markdown can be used effectively in introductory statistics courses, and discuss its role in the rapidly changing world of statistical computation.
Abstract:
Family dogs and dog owners offer a potentially powerful way to conduct citizen science to answer questions about animal behavior that are difficult to answer with more conventional approaches. Here we evaluate the quality of the first data on dog cognition collected by citizen scientists using the Dognition.com website. We conducted analyses to determine whether data generated by over 500 citizen scientists replicate internally and against previously published findings. Half of the participants took part for free, while the other half paid for access. The website provided each participant with a temperament questionnaire and instructions on how to conduct a series of ten cognitive tests. Participation required internet access, a dog, and some common household items. Participants could record their responses on any PC, tablet, or smartphone from anywhere in the world, and data were retained on servers. Results from citizen scientists and their dogs replicated a number of previously described phenomena from conventional lab-based research. There was little evidence that citizen scientists manipulated their results. To illustrate the potential uses of relatively large samples of citizen science data, we then used factor analysis to examine individual differences across the cognitive tasks. The data were best explained by multiple factors, in support of the hypothesis that nonhumans, including dogs, can evolve multiple cognitive domains that vary independently. This analysis suggests that, in the future, citizen scientists will generate useful datasets that test hypotheses and answer questions as a complement to the conventional laboratory techniques used to study dog psychology.
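The multiple-factor result above can be illustrated with a simplified stand-in: the sketch below inspects the eigenvalues of the correlation matrix of synthetic task scores. The two latent "abilities", the noise level, and the Kaiser retention rule are illustrative assumptions, not the authors' actual factor analysis or data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_dogs = 500

# Two hypothetical latent abilities, each driving a different
# subset of five task scores.
social = rng.normal(size=n_dogs)
spatial = rng.normal(size=n_dogs)

def noise():
    return rng.normal(scale=0.6, size=n_dogs)

scores = np.column_stack([
    social + noise(), social + noise(), social + noise(),
    spatial + noise(), spatial + noise(),
])

# Correlation matrix of the task scores, then its eigenvalues
# in descending order.
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]

# Kaiser criterion: retain factors with eigenvalue > 1.
n_factors = int(np.sum(eigvals > 1.0))
print(n_factors)  # prints 2: two independent domains recovered
```

Because the two simulated abilities vary independently, no single factor explains all five tasks, which is the structural signature the abstract describes.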
Abstract:
Software-based control of life-critical embedded systems has become increasingly complex and, to a large extent, has come to determine human safety. For example, implantable cardiac pacemakers contain over 80,000 lines of code responsible for maintaining the heart within safe operating limits. As firmware-related recalls accounted for over 41% of the 600,000 devices recalled in the last decade, there is a need for rigorous model-driven design tools that generate verified code from verified software models. To this effect, we have developed the UPP2SF model-translation tool, which facilitates automatic conversion of verified models (in UPPAAL) to models that may be simulated and tested (in Simulink/Stateflow). We describe the translation rules that ensure correct model conversion, applicable to a large class of models. We demonstrate how UPP2SF is used in the model-driven design of a pacemaker whose model is (a) designed and verified in UPPAAL (using timed automata), (b) automatically translated to Stateflow for simulation-based testing, and then (c) automatically generated into modular code for hardware-level integration testing of timing-related errors. In addition, we show how UPP2SF may be used for worst-case execution time estimation early in the design stage. Using UPP2SF, we demonstrate the value of an integrated end-to-end modeling, verification, code-generation, and testing process for complex software-controlled embedded systems. © 2014 ACM.
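The idea of turning a verified model into executable code can be sketched, in a highly simplified form, as a state machine expressed as data and interpreted by a generic step function. The states, events, and transitions below are invented for illustration; they are not UPP2SF's translation rules, and this is not a real pacemaker model (real translations handle timed automata with clocks, which this toy omits).

```python
# A toy pacemaker-like sensing loop as a transition table:
# (state, event) -> next state.
TRANSITIONS = {
    ("Sense", "beat_detected"): "Inhibit",
    ("Sense", "timeout"): "Pace",
    ("Pace", "pulse_done"): "Sense",
    ("Inhibit", "refractory_over"): "Sense",
}

def step(state, event):
    """Advance the machine; undefined (state, event) pairs are rejected,
    mirroring how a verified model forbids unspecified behaviour."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"no transition from {state!r} on {event!r}")

state = "Sense"
for event in ("timeout", "pulse_done", "beat_detected", "refractory_over"):
    state = step(state, event)
print(state)  # prints Sense: the loop returns to its initial state
```

Keeping the model as data and the interpreter generic is what makes the translation mechanical, which is the property a model-to-code tool relies on.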
Abstract:
BACKGROUND: The detection of latent tuberculosis infection (LTBI) is a major component of tuberculosis (TB) control strategies. In addition to the tuberculin skin test (TST), novel blood tests based on the in vitro release of IFN-gamma in response to the Mycobacterium tuberculosis-specific antigens ESAT-6 and CFP-10 (IGRAs) are used for TB diagnosis. However, neither IGRAs nor the TST can separate acute TB from LTBI, and there is concern that responses in IGRAs may decline with time after infection. We have therefore evaluated the potential of the novel antigen heparin-binding hemagglutinin (HBHA) for the in vitro detection of LTBI. METHODOLOGY AND PRINCIPAL FINDINGS: HBHA was compared to purified protein derivative (PPD) and ESAT-6 in IGRAs on lymphocytes drawn from 205 individuals living in Belgium, a country with low TB prevalence where BCG vaccination is not routinely used. Among these subjects, 89 had active TB, 65 had LTBI (based on well-standardized TST reactions), and 51 were negative controls. HBHA was significantly more sensitive than ESAT-6 and more specific than PPD for the detection of LTBI. PPD-based tests yielded 90.00% sensitivity and 70.00% specificity for the detection of LTBI, whereas the sensitivity and specificity of the ESAT-6-based tests were 40.74% and 90.91%, and those of the HBHA-based tests were 92.06% and 93.88%, respectively. The QuantiFERON-TB Gold In-Tube (QFT-IT) test applied to 20 LTBI subjects yielded 50% sensitivity. The HBHA IGRA was not influenced by prior BCG vaccination and, in contrast to the QFT-IT test, detected remote (>2 years) infections as well as recent (<2 years) ones. CONCLUSIONS: The use of ESAT-6- and CFP-10-based IGRAs may underestimate the incidence of LTBI, whereas the use of HBHA may combine the operational advantages of IGRAs with high sensitivity and specificity for latent infection.
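The sensitivity and specificity figures quoted above follow the standard definitions TP / (TP + FN) and TN / (TN + FP). A minimal sketch of the arithmetic, with counts back-calculated to be consistent with the reported HBHA percentages (the raw counts are an assumption for illustration; the abstract reports only percentages):

```python
def sensitivity(tp, fn):
    """Fraction of truly infected subjects the test flags: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of uninfected subjects the test clears: TN / (TN + FP)."""
    return tn / (tn + fp)

# Hypothetical counts chosen so the percentages match the abstract's
# HBHA figures (92.06% sensitivity, 93.88% specificity); they are not
# the study's published raw data.
tp, fn = 58, 5   # 63 evaluable LTBI subjects
tn, fp = 46, 3   # 49 evaluable negative controls

print(round(100 * sensitivity(tp, fn), 2))  # prints 92.06
print(round(100 * specificity(tn, fp), 2))  # prints 93.88
```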
Abstract:
Background: Patients with end-stage renal disease (ESRD) who are latently infected with Mycobacterium tuberculosis (LTBI) are at higher risk of developing tuberculosis (TB) than healthy subjects. Interferon-gamma release assays (IGRAs) have been reported to be more sensitive than tuberculin skin tests for the detection of infected individuals among dialysis patients. Methods: In 143 prospectively enrolled dialysis patients, we compared the results of the QuantiFERON®-TB Gold assay (QFT) with those of an IGRA based on in vitro stimulation of circulating mononuclear cells with the mycobacterial latency antigen heparin-binding haemagglutinin purified from Mycobacterium bovis BCG (native HBHA, nHBHA). Results: Seven patients had a past history of active TB, and 1 had an undetermined result with both IGRAs. Among the other 135 patients, 94 had concordant results with the QFT and the nHBHA-IGRA: 40.0% were negative and therefore not latently infected, and 29.6% were positive and thus LTBI. Discrepant results between the tests were found for 36 patients positive only with the nHBHA-IGRA and 5 positive only with the QFT. Conclusions: The nHBHA-IGRA is more sensitive than the QFT for the detection of LTBI in dialysis patients, and follow-up of the patients will allow us to define the clinical significance of discrepant results between the nHBHA-IGRA and the QFT. © 2013 Dessein et al.
Abstract:
p.103-111
Abstract:
This paper addresses the overlapping of communication with calculation within parallel FORTRAN 77 codes for computational fluid dynamics (CFD) and computational structural dynamics (CSD). The objective is to overlap interprocessor communication with calculation on each processor in a distributed memory parallel system and so improve the efficiency of the parallel implementation. A general strategy for converting synchronous to overlapped communication is presented, together with tools that enable its automatic implementation in FORTRAN 77 codes. This strategy is then implemented within the parallelisation toolkit CAPTools to facilitate the automatic generation of parallel code with overlapped communications. The success of these tools is demonstrated on two codes from the NAS-PAR and PERFECT benchmark suites. In each case, the tools produce parallel code with overlapped communications that is as good as code generated manually. The parallel performance of the codes also improves in line with expectations.
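The overlap strategy can be sketched conceptually: post the communication first, compute on data that does not depend on it, and synchronize only when the communicated data is actually needed. The Python threading analogue below stands in for non-blocking message passing such as MPI_Isend/MPI_Irecv; the halo-exchange function, its latency, and the array contents are invented for illustration.

```python
import threading
import time

def exchange_halos(buf, result):
    """Stand-in for a non-blocking halo exchange: runs concurrently
    with the interior computation below."""
    time.sleep(0.05)  # simulated network latency
    result["halo"] = [x * 2 for x in buf]

halo_in = [1, 2, 3]
received = {}

# Post the "communication" first...
t = threading.Thread(target=exchange_halos, args=(halo_in, received))
t.start()

# ...then compute on interior points that need no remote data,
# hiding the communication latency behind useful work.
interior = [x ** 2 for x in range(4, 10)]

# Wait for the exchange only when boundary points actually need it.
t.join()
boundary = [a + b for a, b in zip(received["halo"], halo_in)]

print(interior[0], boundary)  # prints: 16 [3, 6, 9]
```

The conversion the paper automates is essentially this reordering: moving the communication post ahead of independent computation and deferring the wait until the last possible point.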
Abstract:
We provide a select overview of tools supporting traditional Jewish learning, and then discuss our own HyperJoseph/HyperIsaac project in instructional hypermedia. Its application is to teaching, teacher training, and self-instruction in given Bible passages; the treatment of two narratives has been developed thus far. The tool enables analysis of the text in several respects: linguistic, narratological, and so on. Moreover, the focality of the Scriptures throughout cultural history makes this domain of application particularly challenging, in that the tool must encompass the accretion of receptions in the cultural repertoire, i.e., several layers of textual tradition, either hermeneutic (interpretive) or appropriative, related to the given core passage. This includes "secondary" texts (texts that respond to or derive from the core passage) from realms as disparate as Roman-age and later homiletics, medieval and later commentaries and supercommentaries, literary appropriations, references in the arts, and modern scholarship. In particular, the Midrash (homiletic expansions) is adept at narrative gap-filling, so the narratives mushroom at the interstices where the primary text is silent. The genealogy of the project is rooted in Weiss' index of the novelist Agnon's writings, which was eventually upgraded into a hypertextual tool including Agnon's full text and ancillary materials. As those early tools were intended primarily for reference and research support in literary studies, the Agnon hypertext system was initially emulated in the conception of HyperJoseph, which is applied to the Joseph story from Genesis. The transition from a reference tool to an instructional tool then required a thorough reconception from an educational perspective, which led to HyperIsaac, on the sacrifice of Isaac, and to a redesign and upgrade of HyperJoseph patterned after HyperIsaac.
Abstract:
Daedalus is a computer tool developed by an Italian magistrate, Carmelo Asaro, and integrated into his own daily routine, first as an investigating magistrate conducting inquiries and then as a prosecutor if and when an investigated case goes to court. The tool has recently been adopted by magistrates in judiciary offices throughout Italy and has moreover spawned other related projects. First, this paper describes a sample session with Daedalus. Next, an overview of an array of judicial tools leads to positioning Daedalus within that spectrum.
Abstract:
Despite the apparent simplicity of the OpenMP directive-based shared memory programming model and the sophisticated dependence analysis and code generation capabilities of the ParaWise/CAPO tools, experience shows that a level of expertise is required to produce efficient parallel code. In a real-world application, the investigation of a single loop in generated parallel code can soon become an in-depth inspection of numerous dependencies across many routines. Additional understanding of those dependencies is also needed to effectively interpret the information provided and to supply the required feedback. The ParaWise Expert Assistant has been developed to automate this investigation and to present questions to users about, and in the context of, their application code. In this paper, we demonstrate that knowledge of dependence information and OpenMP is no longer essential to produce efficient parallel code with the Expert Assistant. It is hoped that this will enable a far wider audience to use the tools and subsequently exploit the benefits of large parallel systems.
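The kind of question the Expert Assistant automates, namely whether a loop's iterations depend on one another, can be sketched with a deliberately crude subscript-offset test. This toy check is an illustrative assumption only and bears no relation to ParaWise's actual, far more general analysis.

```python
def has_carried_dependence(reads, write=0):
    """Toy dependence test for a loop that writes a[i] each iteration
    and reads a[i + offset] for each offset in `reads`: any nonzero
    read offset means one iteration touches another iteration's
    element, so the loop cannot safely run in parallel as written."""
    return any(offset != write for offset in reads)

# a[i] = a[i] * 2       -> reads offset 0 only: iterations independent
print(has_carried_dependence(reads=[0]))      # prints False
# a[i] = a[i-1] + a[i]  -> reads offset -1: loop-carried dependence
print(has_carried_dependence(reads=[-1, 0]))  # prints True
```

Real tools must handle symbolic subscripts, inter-procedural effects, and user assertions, which is precisely why the paper argues an automated assistant is needed.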