925 results for domain analysis
Abstract:
The hyphenated technique of high-performance liquid chromatography coupled with inductively coupled plasma mass spectrometry (HPLC-ICP-MS) was applied to the simultaneous determination of five organotin compounds in shellfish samples. An Agilent TC-C18 column was selected; the HPLC mobile phase was CH3CN:H2O:CH3COOH = 65:23:12 (V/V) with 0.05% TEA, pH 3.0, at a flow rate of 0.4 mL/min. Mixed standards of the five organotins from 0.5 µg/L to 100 µg/L were used for method evaluation. The experimental results indicate that the linearity (R²) for each compound was over 0.998. The shellfish samples were treated by ultrasonic extraction with the mobile phase for 30 min. Four organotin compounds, dibutyltin (DBT), tributyltin (TBT), diphenyltin (DPhT) and triphenyltin (TPhT), were detected in shellfish samples with the method described above. The dominant compounds in the samples were found to be tributyltin (TBT) and triphenyltin (TPhT). The standard-addition recoveries for trimethyltin (TMT), tributyltin (TBT) and triphenyltin (TPhT) were over 80%. However, the recoveries for diphenyltin (DPhT) and dibutyltin (DBT) were relatively low, 37.3% and 75.2% respectively, which might be attributed to decomposition of these compounds during the extraction procedure. Further study on this subject is in progress.
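The reported linearity criterion (R² > 0.998 over the 0.5-100 µg/L range) amounts to an ordinary least-squares calibration fit. A minimal sketch, with hypothetical instrument responses standing in for the measured ICP-MS signals:

```python
import numpy as np

# Calibration levels from the method (0.5-100 µg/L); the instrument
# responses below are hypothetical, not data from the paper.
conc = np.array([0.5, 1.0, 5.0, 10.0, 50.0, 100.0])        # µg/L
signal = np.array([0.52, 1.03, 5.10, 9.90, 50.40, 99.80])  # arbitrary counts

slope, intercept = np.polyfit(conc, signal, 1)
residuals = signal - (slope * conc + intercept)
r_squared = 1.0 - residuals.var() / signal.var()

print(f"R^2 = {r_squared:.4f}")  # acceptance criterion: R^2 > 0.998
```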
Abstract:
With the rapid increase in low-cost and sophisticated digital technology, the need for techniques to authenticate digital material will become more urgent. In this paper we address the problem of authenticating digital signals assuming no explicit prior knowledge of the original. The basic approach that we take is to assume that, in the frequency domain, a "natural" signal has weak higher-order statistical correlations. We then show that "un-natural" correlations are introduced if this signal is passed through a non-linearity (which would almost surely occur in the creation of a forgery). Techniques from polyspectral analysis are then used to detect the presence of these correlations. We review the basics of polyspectral analysis, show how and why these tools can be used in detecting forgeries, and show their effectiveness in analyzing human speech.
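The quadratic correlations this kind of analysis looks for can be illustrated with the normalized bispectrum magnitude (bicoherence): for three tones whose phases satisfy φ3 = φ1 + φ2, as a quadratic non-linearity would produce, the bicoherence at (f1, f2) is near 1, while for independent phases it averages out. A minimal sketch of the idea, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_seg, n = 100, 256
t = np.arange(n)
k1, k2 = 12, 31  # FFT bins of the two primary tones

def bicoherence(coupled):
    """Normalized bispectrum magnitude at (k1, k2), averaged over segments."""
    num = 0j
    d1 = d2 = 0.0
    for _ in range(n_seg):
        p1, p2 = rng.uniform(0.0, 2 * np.pi, 2)
        # Quadratic phase coupling ties the third tone's phase to p1 + p2
        p3 = p1 + p2 if coupled else rng.uniform(0.0, 2 * np.pi)
        x = (np.cos(2 * np.pi * k1 * t / n + p1)
             + np.cos(2 * np.pi * k2 * t / n + p2)
             + np.cos(2 * np.pi * (k1 + k2) * t / n + p3)
             + 0.1 * rng.standard_normal(n))
        X = np.fft.fft(x)
        num += X[k1] * X[k2] * np.conj(X[k1 + k2])
        d1 += abs(X[k1] * X[k2]) ** 2
        d2 += abs(X[k1 + k2]) ** 2
    return abs(num) / np.sqrt(d1 * d2)

print(bicoherence(True), bicoherence(False))  # coupled is much larger
```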
Abstract:
This chapter examines the role of the advanced nurse practitioner (ANP) within the domain of practice identified by the Royal College of Nursing (2002) as the teaching and coaching function. (Note that this is referred to by the NMC as the education function.) The chapter approaches the analysis against the backdrop of three policy documents: The Expert Patient: a new approach to chronic disease management for the 21st century (DoH 2001), Choosing Health: making healthy choices easier (DoH 2004) and Our health, our care, our say (DoH 2006). It draws into the frame the experiences of ANP students as they work with patients, clients and carers, with the intention of enabling health and managing illness. It uses examples from a range of everyday practice settings to illustrate the inherent challenges of the teaching and coaching function of the ANP, at the same time as recognising its significance if patients, clients and carers are to be enabled to make choices that might optimise their well-being. Before this, however, some statistics are presented to focus thinking on why education is an invaluable component of advanced nursing practice.
Abstract:
Motivated by accurate average-case analysis, MOdular Quantitative Analysis (MOQA) has been developed at the Centre for Efficiency Oriented Languages (CEOL). In essence, MOQA allows the programmer to determine the average running time of a broad class of programs directly from the code in a (semi-)automated way. The MOQA approach has the property of randomness preservation, which means that applying any operation to a random structure results in an output isomorphic to one or more random structures; this property is key to systematic timing. Building on original MOQA research, we discuss the design and implementation of a new domain-specific scripting language based on randomness-preserving operations and random structures. It is designed to facilitate compositional timing by systematically tracking the distributions of inputs and outputs. The notion of a labelled partial order (LPO) is the basic data type in the language. The programmer uses built-in MOQA operations together with restricted control-flow statements to design MOQA programs. This MOQA language is formally specified both syntactically and semantically in this thesis. A practical language interpreter implementation is provided and discussed. By analysing new algorithms and data-restructuring operations, we demonstrate the wide applicability of the MOQA approach. We also extend MOQA theory to a number of other domains besides average-case analysis, showing the strong connection between MOQA and parallel computing, reversible computing and data-entropy analysis.
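As a rough illustration of the basic data type mentioned above, a labelled partial order pairs an order relation over nodes with a labelling; the representation below is an assumption made for illustration, not the thesis's MOQA implementation:

```python
from dataclasses import dataclass, field

@dataclass
class LPO:
    """A labelled partial order: nodes carry labels, and `below` holds
    pairs (a, b) meaning node a lies below node b in the order."""
    labels: dict
    below: frozenset = field(default_factory=frozenset)

    def respects_order(self) -> bool:
        # The labelling is consistent if it agrees with every order pair.
        return all(self.labels[a] <= self.labels[b] for a, b in self.below)

# A three-node V-shaped order: 'root' below both 'left' and 'right'
v = LPO(labels={"root": 1, "left": 4, "right": 2},
        below=frozenset({("root", "left"), ("root", "right")}))
print(v.respects_order())
```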
Abstract:
Helicobacter pylori is a gastric pathogen which infects ~50% of the global population and can lead to the development of gastritis, gastric and duodenal ulcers, and carcinoma. Genome sequencing of H. pylori revealed high levels of genetic variability; this pathogen is known for its adaptability due to mechanisms including phase variation, recombination and horizontal gene transfer. Motility is essential for efficient colonisation by H. pylori. The flagellum is a complex nanomachine which has been studied in detail in E. coli and Salmonella; in H. pylori, key differences have been identified in the regulation of flagellum biogenesis, warranting further investigation. In this study, the genomes of two H. pylori strains (CCUG 17874 and P79) were sequenced and published as draft genome sequences. Comparative studies identified the potential role of restriction-modification systems and the comB locus in the transformation-efficiency differences between these strains. Core-genome analysis of 43 H. pylori strains, including 17874 and P79, defined a more refined core genome for the species than previously published. Comparative analysis of the genome sequences of strains isolated from individuals suffering from H. pylori-related diseases resulted in the identification of “disease-specific” genes. Structure-function analysis of the essential motility protein HP0958 was performed to elucidate its role during flagellum assembly in H. pylori. The previously reported HP0958-FliH interaction could not be substantiated in this study and appears to be a false positive. Site-directed mutagenesis confirmed that the coiled-coil domain of HP0958 is involved in the interaction with RpoN (74-284), while the Zn-finger domain is required for direct interaction with the full-length flaA mRNA transcript. Complementation of a non-motile hp0958-null derivative strain of P79 with site-directed mutant alleles of hp0958 resulted in cells producing flagellar-type extrusions from non-polar positions. Thus, HP0958 may have a novel function in the spatial localisation of flagella in H. pylori.
Abstract:
It is estimated that the quantity of digital data being transferred, processed or stored at any one time currently stands at 4.4 zettabytes (4.4 × 2⁷⁰ bytes), and this figure is expected to have grown by a factor of 10 to 44 zettabytes by 2020. Exploiting this data is, and will remain, a significant challenge. At present there is the capacity to store 33% of the digital data in existence at any one time; by 2020 this capacity is expected to fall to 15%. These statistics suggest that, in the era of Big Data, the identification of important, exploitable data will need to be done in a timely manner. Systems for the monitoring and analysis of data, e.g. stock markets, smart grids and sensor networks, can be made up of massive numbers of individual components. These components can be geographically distributed yet may interact with one another via continuous data streams, which in turn may affect the state of the sender or receiver. This introduces a dynamic causality, which further complicates the overall system by introducing a temporal constraint that is difficult to accommodate. Practical approaches to realising such systems have led to a multiplicity of analysis techniques, each of which concentrates on specific characteristics of the system being analysed and treats these characteristics as the dominant component affecting the results being sought. This multiplicity of analysis techniques introduces another layer of heterogeneity, namely heterogeneity of approach, partitioning the field to the extent that results from one domain are difficult to exploit in another. The question asked is: can a generic solution for the monitoring and analysis of data be identified that accommodates temporal constraints, bridges the gap between expert knowledge and raw data, and enables data to be effectively interpreted and exploited in a transparent manner?
The approach proposed in this dissertation acquires, analyses and processes data in a manner that is free of the constraints of any particular analysis technique, while at the same time facilitating these techniques where appropriate. Constraints are applied by defining a workflow based on the production, interpretation and consumption of data. This supports the application of different analysis techniques to the same raw data without the danger of incorporating hidden bias. To illustrate and realise this approach, a software platform has been created that allows for the transparent analysis of data, combining analysis techniques with a maintainable record of provenance so that independent third-party analysis can be applied to verify any derived conclusions. To demonstrate these concepts, a complex real-world example involving the near-real-time capture and analysis of neurophysiological data from a neonatal intensive care unit (NICU) was chosen. A system was engineered to gather raw data, analyse that data using different analysis techniques, uncover information, incorporate that information into the system and curate the evolution of the discovered knowledge. The application domain was chosen for three reasons: firstly, because it is complex and no comprehensive solution exists; secondly, because it requires tight interaction with domain experts, thus requiring the handling of subjective knowledge and inference; and thirdly, because, given the dearth of neurophysiologists, there is a real-world need to provide a solution for this domain.
Abstract:
The Leaving Certificate (LC) is the national, standardised state examination in Ireland necessary for entry to third-level education; it presents a massive, raw corpus of data with the potential to yield invaluable insight into the phenomena of learner interlanguage. With samples of official LC Spanish examination data, this project has compiled a digitised corpus of learner Spanish comprising the written and oral production of 100 candidates. This corpus was then analysed using a specific investigative corpus technique, Computer-aided Error Analysis (CEA; Dagneaux et al., 1998). CEA is a powerful apparatus in that it greatly facilitates the quantification and analysis of a large learner corpus in digital format. The corpus was both compiled and analysed with the use of UAM Corpus Tool (O’Donnell 2013). This tool allows for the recording of candidate-specific variables such as grade, examination level, task type and gender, therefore allowing for critical analysis of the corpus as one unit, as separate written and oral subcorpora, and of performance per task, level and gender. This is an interdisciplinary work combining aspects of Applied Linguistics, Learner Corpus Research and Foreign Language (FL) Learning. Beginning with a review of the context of FL learning in Ireland and Europe, I go on to discuss the disciplinary context and theoretical framework for this work and outline the methodology applied. I then perform detailed quantitative and qualitative analyses before combining all research findings to outline the principal conclusions. This investigation does not make a priori assumptions about the data set, the LC Spanish examination, the context of FLs or any aspect of learner competence. It undertakes to provide the linguistic research community and the domain of Spanish language learning and pedagogy in Ireland with an empirical, descriptive profile of real learner performance, characterising learner difficulty.
Abstract:
Somatostatin receptor 2 (SSTR2) is expressed by most medulloblastomas (MEDs). We isolated monoclonal antibodies (MAbs) to the 12-mer QTEPYYDLTSNA (residues 33-44), which resides in the extracellular domain of the SSTR2 amino terminus, screened the peptide-bound MAbs by fluorescence microassay on D341 and D283 MED cells, and demonstrated homogeneous cell-surface binding, indicating that all cells expressed cell surface-detectable epitopes. Five radiolabeled MAbs were tested for immunoreactive fraction (IRF), affinity (KA; Scatchard analysis vs. D341 MED cells), and internalization by MED cells. One IgG3 MAb exhibited a 50-100% IRF but low KA. Four IgG2a MAbs had 46-94% IRFs and modest KAs versus intact cells (0.21-1.2 × 10⁸ M⁻¹). Following binding of radiolabeled MAbs to D341 MED cells at 4 °C, no significant internalization was observed, which is consistent with results obtained in the absence of ligand. However, all MAbs exhibited long-term association with the cells; binding at 37 °C was 65-66% after 2 h and 52-64% after 24 h. In tests with MAbs C10 and H5, the number of cell-surface receptors per cell, estimated by Scatchard and quantitative FACS analyses, was 3.9 × 10⁴ for the "glial"-phenotype DAOY MED cell line and 0.6-8.8 × 10⁵ for four neuronal-phenotype MED cell lines. Our results indicate a potential immunotherapeutic application for these MAbs.
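The Scatchard analysis used here to estimate KA plots bound/free against bound ligand; for a single binding site the plot is linear with slope -KA and x-intercept Bmax. A sketch with synthetic binding data at the order of magnitude quoted above (the numbers are illustrative, not the paper's):

```python
import numpy as np

K_A = 1.0e8    # association constant, M^-1 (order of magnitude from the text)
B_max = 1.0e-9 # receptor-site concentration, M (hypothetical)

free = np.array([0.1, 0.3, 1.0, 3.0, 10.0]) * 1e-8  # free MAb, M
bound = B_max * K_A * free / (1.0 + K_A * free)     # one-site binding isotherm

# Scatchard plot: bound/free vs. bound; slope = -K_A, x-intercept = B_max
slope, intercept = np.polyfit(bound, bound / free, 1)
K_A_est = -slope
B_max_est = intercept / K_A_est

print(f"K_A ~ {K_A_est:.2e} M^-1, B_max ~ {B_max_est:.2e} M")
```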
Abstract:
We present measurements of morphological features in a thick turbid sample using light-scattering spectroscopy (LSS) and Fourier-domain low-coherence interferometry (fLCI) by processing with the dual-window (DW) method. A parallel frequency-domain optical coherence tomography (OCT) system with a white-light source is used to image a two-layer phantom containing polystyrene beads of diameters 4.00 and 6.98 µm in the top and bottom layers, respectively. The DW method decomposes each OCT A-scan into a time-frequency distribution with simultaneously high spectral and spatial resolution. The spectral information from localized regions in the sample is used to determine scatterer structure. The results show that the two scatterer populations can be differentiated using LSS and fLCI.
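The dual-window idea can be sketched as the product of two Gaussian-windowed short-time Fourier transforms, one narrow (good spatial resolution) and one wide (good spectral resolution). This toy version on a synthetic two-tone "A-scan" illustrates the principle only, not the authors' processing chain:

```python
import numpy as np

def stft_mag(x, win_std):
    """Magnitude of a Gaussian-windowed STFT, one row per sample position."""
    n = len(x)
    t = np.arange(n)
    out = np.empty((n, n))
    for c in range(n):
        w = np.exp(-0.5 * ((t - c) / win_std) ** 2)
        out[c] = np.abs(np.fft.fft(x * w))
    return out

# Synthetic A-scan: the local oscillation frequency changes halfway through,
# standing in for the two bead layers with different spectral signatures.
n = 512
t = np.arange(n)
x = np.where(t < n // 2,
             np.cos(2 * np.pi * 0.10 * t),
             np.cos(2 * np.pi * 0.20 * t))

narrow = stft_mag(x, win_std=8)   # sharp in depth, blurred in frequency
wide = stft_mag(x, win_std=64)    # sharp in frequency, blurred in depth
dw = narrow * wide                # dual-window product: sharp in both

peak_early = dw[100, :n // 2].argmax()  # expect near 0.10 * n = 51
peak_late = dw[400, :n // 2].argmax()   # expect near 0.20 * n = 102
```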
Abstract:
Axisymmetric radiating and scattering structures whose rotational invariance is broken by non-axisymmetric excitations present an important class of problems in electromagnetics. For such problems, a cylindrical wave decomposition formalism can be used to efficiently obtain numerical solutions to the full-wave frequency-domain problem. Often the far-field, or Fraunhofer, region is of particular interest in scattering cross-section and radiation-pattern calculations; yet it is usually impractical to compute full-wave solutions for this region. Here, we propose a generalization of the Stratton-Chu far-field integral adapted to the 2.5D formalism. The integration over a closed, axially symmetric surface is analytically reduced to a line integral on a meridional plane. We benchmark this computational technique by comparing it with analytical Mie solutions for a plasmonic nanoparticle, and apply it to the design of a three-dimensional polarization-insensitive cloak.
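The 2.5D reduction rests on expanding all fields in azimuthal harmonics; orthogonality of the harmonics then collapses the closed-surface integral to a line integral over the meridional contour C. Schematically (a standard identity quoted for orientation, not the paper's exact far-field operator):

```latex
\mathbf{E}(\rho,\varphi,z) \;=\; \sum_{m=-\infty}^{\infty} \mathbf{E}_m(\rho,z)\, e^{im\varphi},
\qquad
\int_0^{2\pi} e^{i(m-m')\varphi}\,\mathrm{d}\varphi \;=\; 2\pi\,\delta_{mm'},
```

so that, for an axisymmetric surface $S$ generated by rotating the contour $C$,

```latex
\oint_S f\,\mathrm{d}S
\;=\; \int_C \int_0^{2\pi} f(\rho,\varphi,z)\,\rho\,\mathrm{d}\varphi\,\mathrm{d}\ell
\;=\; 2\pi \int_C f_0(\rho,z)\,\rho\,\mathrm{d}\ell .
```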
Abstract:
© 2013 American Psychological Association. This meta-analysis synthesizes research on the effectiveness of intelligent tutoring systems (ITS) for college students. Thirty-five reports were found containing 39 studies assessing the effectiveness of 22 types of ITS in higher-education settings. Most frequently studied were AutoTutor, Assessment and Learning in Knowledge Spaces, eXtended Tutor-Expert System, and Web Interface for Statistics Education. Major findings include: (a) overall, ITS had a moderate positive effect on college students' academic learning (g = .32 to g = .37); (b) ITS were less effective than human tutoring, but they outperformed all other instruction methods and learning activities, including traditional classroom instruction, reading printed text or computerized materials, computer-assisted instruction, laboratory or homework assignments, and no-treatment controls; (c) ITS effectiveness did not significantly differ by type of ITS, subject domain, or the manner or degree of their involvement in instruction and learning; and (d) effectiveness in earlier studies appeared to be significantly greater than that in more recent studies. In addition, there is some evidence suggesting the importance of teachers and pedagogy in ITS-assisted learning.
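The effect sizes quoted (g = .32 to .37) are standardized mean differences with a small-sample correction (Hedges' g). A sketch of the computation, with invented group statistics:

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference with small-sample correction (Hedges' g)."""
    df = n1 + n2 - 2
    s_pooled = math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / df)
    d = (m1 - m2) / s_pooled           # Cohen's d
    correction = 1 - 3 / (4 * df - 1)  # Hedges' small-sample correction
    return correction * d

# Hypothetical ITS group vs. control group summary statistics
g = hedges_g(m1=75.0, s1=10.0, n1=40, m2=71.5, s2=10.0, n2=40)
print(f"g = {g:.2f}")
```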
Abstract:
Human α-lactalbumin (α-LA), a 123-residue calcium-binding protein, has been studied using ¹⁵N NMR relaxation methods in order to characterize the backbone dynamics of the native state at the level of individual residues. Relaxation data were collected at three magnetic field strengths and analyzed using the model-free formalism of Lipari and Szabo. The order parameters derived from this analysis are generally high, indicating a rigid backbone. A total of 46 residues required an exchange contribution to T2; 43 of these residues are located in the α-domain of the protein. The largest exchange contributions are observed in the A-, B-, D- and C-terminal 3₁₀-helices of the α-domain; these residues have been shown previously to form a highly stable core in the α-LA molten globule. The observed exchange broadening, along with previous hydrogen/deuterium amide exchange data, suggests that this part of the α-domain may undergo a local structural transition between the well-ordered native structure and a less-ordered, molten-globule-like structure.
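In the Lipari-Szabo model-free formalism mentioned above, each residue's internal motion is summarized by an order parameter $S^2$ and an effective correlation time through the spectral density (standard form, quoted here for reference):

```latex
J(\omega) \;=\; \frac{2}{5}\left[ \frac{S^2\,\tau_m}{1 + (\omega\tau_m)^2}
 \;+\; \frac{(1 - S^2)\,\tau}{1 + (\omega\tau)^2} \right],
\qquad \frac{1}{\tau} \;=\; \frac{1}{\tau_m} + \frac{1}{\tau_e},
```

where $\tau_m$ is the overall tumbling time and $\tau_e$ the internal correlation time. The exchange contribution to $T_2$ enters as an additive rate, $R_2^{\mathrm{obs}} = R_2^{\mathrm{dd+CSA}} + R_{\mathrm{ex}}$, which is the term required by the 46 residues noted above.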
Abstract:
Helicobacter pylori is a human pathogen that colonizes about 50% of the world's population, causing chronic gastritis, duodenal ulcers and even gastric cancer. The steady emergence of multiple-antibiotic-resistant strains poses an important public-health threat, and there is an urgent requirement for alternative therapeutics. The blood group antigen-binding adhesin BabA mediates the intimate attachment to the host mucosa and forms a major candidate for novel vaccine and drug development. Here, the recombinant expression and crystallization of a soluble BabA truncation (BabA25-460) corresponding to the predicted extracellular adhesin domain of the protein are reported. X-ray diffraction data for nanobody-stabilized BabA25-460 were collected to 2.25 Å resolution from a crystal that belonged to space group P2₁, with unit-cell parameters a = 50.96, b = 131.41, c = 123.40 Å, α = 90.0, β = 94.8, γ = 90.0°, and which was predicted to contain two BabA25-460-nanobody complexes per asymmetric unit.
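Given the monoclinic cell quoted above (α = γ = 90°), the cell volume follows from V = abc·sin β; a quick check of the reported parameters:

```python
import math

a, b, c = 50.96, 131.41, 123.40  # Å, from the reported unit cell
beta = 94.8                      # degrees

# Monoclinic cell volume: V = a * b * c * sin(beta)
volume = a * b * c * math.sin(math.radians(beta))
print(f"V = {volume:.0f} Å^3")
```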
Abstract:
Immunoglobulin superfamily (IgSF) domains are conserved structures present in many proteins in eukaryotes and prokaryotes. These domains are well capable of facilitating sequence variation, which is most clearly illustrated by the variable regions in immunoglobulins (Igs) and T cell receptors (TRs). We studied an antibody-deficient patient suffering from recurrent respiratory infections and with impaired antibody responses to vaccinations. The patient's B cells showed impaired Ca²⁺ influx upon stimulation with anti-IgM and lacked detectable CD19 membrane expression. CD19 sequence analysis revealed a homozygous missense mutation resulting in a tryptophan-to-cysteine (W52C) amino acid change. The affected tryptophan is CONSERVED-TRP 41, located on the C-strand of the first extracellular IgSF domain of CD19, and was found to be highly conserved, not only in mammalian CD19 proteins but in nearly all characterized IgSF domains. Furthermore, this tryptophan is present in all variable domains in Ig and TR and was not mutated in 117 Ig class-switched transcripts of B cells from controls, despite an overall 10% amino acid change frequency. In vitro complementation studies and CD19 Western blotting of the patient's B cells demonstrated that the mutated protein remained immaturely glycosylated. This first missense mutation resulting in a CD19 deficiency demonstrates the crucial role of a highly conserved tryptophan in the proper folding or stability of IgSF domains.
Abstract:
A parallel time-domain algorithm is described for the time-dependent nonlinear Black-Scholes equation, which may be used to build financial-analysis tools that help traders make rapid and systematic evaluations of buy/sell contracts. The algorithm is particularly suitable for problems that do not require fine detail at each intermediate time step, and hence the method applies well to the present problem.
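For orientation, a serial explicit finite-difference scheme for the linear Black-Scholes equation is sketched below, stepping backward in time from the payoff; the thesis's parallel time-domain algorithm and its nonlinear terms are not reproduced here, and all parameters are hypothetical:

```python
import numpy as np

sigma, r, K, T = 0.2, 0.05, 100.0, 1.0  # volatility, rate, strike, maturity
S_max, M, N = 300.0, 150, 4000          # grid bounds; large N keeps the explicit scheme stable
S = np.linspace(0.0, S_max, M + 1)
dS, dt = S[1] - S[0], T / N

V = np.maximum(S - K, 0.0)  # terminal payoff of a European call
for n in range(1, N + 1):
    delta = (V[2:] - V[:-2]) / (2 * dS)               # dV/dS (central difference)
    gamma = (V[2:] - 2 * V[1:-1] + V[:-2]) / dS ** 2  # d2V/dS2
    V[1:-1] += dt * (0.5 * sigma ** 2 * S[1:-1] ** 2 * gamma
                     + r * S[1:-1] * delta - r * V[1:-1])
    V[0] = 0.0                               # call is worthless at S = 0
    V[-1] = S_max - K * np.exp(-r * n * dt)  # deep in-the-money boundary

price = V[50]  # grid value at S = 100 (at the money, since dS = 2)
print(f"price ~ {price:.2f}")
```

The at-the-money value should land close to the Black-Scholes analytic price (about 10.45 for these parameters).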