83 results for Zero sets of bivariate polynomials
Abstract:
Cryptic plasmids were found in Rhodococcus rhodochrous NCIMB13064 derivatives which had lost the ability to utilize short-chain 1-chloroalkanes (chain length C-3-C-10) and had acquired the ability to degrade naphthalene. The reversions of these derivatives to the original phenotype were accompanied by the loss of the cryptic plasmids. The 4969-bp pKA22 plasmid was cloned in Escherichia coli and sequenced. This plasmid encodes a putative 33,200-Da protein which contains motifs typical of theta replicase proteins and shows a high degree of similarity to a putative theta replicase from Brevibacterium linens plasmid pRBL1 and to a putative protein encoded by ORF1 of the plasmid pAL5000 from Mycobacterium fortuitum. Two sets of long direct repeats were found in pKA22 which may be involved in the replication of the plasmid and recombination processes. (C) 1997 Academic Press.
Abstract:
The majority of reported learning methods for Takagi-Sugeno-Kang fuzzy neural models to date mainly focus on the improvement of their accuracy. However, one of the key design requirements in building an interpretable fuzzy model is that each obtained rule consequent must match well with the system's local behaviour when all the rules are aggregated to produce the overall system output. This is one of the characteristics that distinguishes such models from black-box models such as neural networks. Therefore, how to find a desirable set of fuzzy partitions and, hence, to identify the corresponding consequent models which can be directly explained in terms of system behaviour presents a critical step in fuzzy neural modelling. In this paper, a new learning approach considering both nonlinear parameters in the rule premises and linear parameters in the rule consequents is proposed. Unlike the conventional two-stage optimization procedure widely practised in the field, where the two sets of parameters are optimized separately, the consequent parameters are transformed into a set dependent on the premise parameters, thereby enabling the introduction of a new integrated gradient-descent learning approach. A new Jacobian matrix is thus proposed and efficiently computed to achieve a more accurate approximation of the cost function by using the second-order Levenberg-Marquardt optimization method. Several other interpretability issues concerning the fuzzy neural model are also discussed and integrated into this new learning approach. Numerical examples are presented to illustrate the resultant structure of the fuzzy neural models and the effectiveness of the proposed algorithm, and the results are compared with those from some well-known methods.
Abstract:
Detection of growth-promoter use in animal production systems still proves to be an analytical challenge despite years of activity in the field. This study reports on the capability of NMR metabolomic profiling techniques to discriminate between plasma samples obtained from cattle treated with different groups of growth-promoting hormones (dexamethasone, prednisolone, oestradiol) based on recorded metabolite profiles. Two methods of NMR analysis were investigated: a Carr–Purcell–Meiboom–Gill (CPMG) pulse-sequence technique and a conventional 1H NMR method using pre-extracted plasma. Using the CPMG method, 17 distinct metabolites could be identified from the spectra. 1H NMR analysis of extracted plasma facilitated identification of 23 metabolites, six more than the alternative method, with all of the additional ones within the aromatic region. Multivariate statistical analysis of the acquired data from both forms of NMR analysis separated the plasma metabolite profiles into distinct sample clusters representative of the different animal study groups. Samples from both sets of corticosteroid-treated animals (dexamethasone and prednisolone) were found to cluster relatively closely and showed similar alterations to the identified metabolite panels. Distinctive metabolite profiles, different from those observed in plasma from corticosteroid-treated animals, were observed in oestradiol-treated animals, and samples from these animals formed a cluster spatially isolated from control plasma samples. These findings suggest the potential use of NMR-based plasma metabolite analysis as a high-throughput screening technique to aid detection of growth-promoter use.
Abstract:
In the present study, an experimental investigation of the time-averaged velocity and turbulence intensity distributions from a ship's propeller in the "bollard pull" condition (zero speed of advance) is reported. Previous studies have focused mainly on the velocity profile of a plain jet rather than that of a rotating ship propeller; the velocity profile of a propeller is investigated experimentally in this study.
The velocity measurements were performed in the laboratory using Laser Doppler Anemometry (LDA). The measurements demonstrated a two-peaked velocity profile with a low-velocity core at the centre within the near wake. The two peaks merged into a single peak at 3.68 diameters downstream, indicating the end of the zone of flow establishment. The study provides useful information on the flow from a rotating ship's propeller, rather than a simplified plain jet, to researchers investigating propeller-induced flow velocity and the local scour it may cause.
Abstract:
An important issue in risk analysis is the distinction between epistemic and aleatory uncertainties. In this paper, the use of distinct representation formats for aleatory and epistemic uncertainties is advocated, the latter being modelled by sets of possible values. Modern uncertainty theories based on convex sets of probabilities are known to be instrumental for hybrid representations where aleatory and epistemic components of uncertainty remain distinct. Simple uncertainty representation techniques based on fuzzy intervals and p-boxes are used in practice. This paper outlines a risk analysis methodology from elicitation of knowledge about parameters to decision. It proposes an elicitation methodology where the chosen representation format depends on the nature and the amount of available information. Uncertainty propagation methods then blend Monte Carlo simulation and interval analysis techniques. Nevertheless, results provided by these techniques, often in terms of probability intervals, may be too complex to interpret for a decision-maker and we, therefore, propose to compute a unique indicator of the likelihood of risk, called confidence index. It explicitly accounts for the decision-maker's attitude in the face of ambiguity. This step takes place at the end of the risk analysis process, when no further collection of evidence is possible that might reduce the ambiguity due to epistemic uncertainty. This last feature stands in contrast with the Bayesian methodology, where epistemic uncertainties on input parameters are modelled by single subjective probabilities at the beginning of the risk analysis process.
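A minimal sketch of the hybrid propagation step (Monte Carlo for the aleatory variable, interval analysis for the epistemic one) might look as follows. The model, the lognormal choice, and the threshold are illustrative assumptions, and evaluating only the interval endpoints bounds the output interval only because the toy function is monotone in the epistemic parameter.

```python
import numpy as np

def hybrid_propagate(f, aleatory_sampler, epistemic_interval, n=10000, seed=0):
    """Blend Monte Carlo (aleatory input) with interval analysis
    (epistemic input). For each random sample x, bound f(x, e) over
    e in [lo, hi] via the endpoints -- valid here because the toy f
    is monotone in e."""
    rng = np.random.default_rng(seed)
    lo, hi = epistemic_interval
    xs = aleatory_sampler(rng, n)
    y_lo = np.minimum(f(xs, lo), f(xs, hi))
    y_hi = np.maximum(f(xs, lo), f(xs, hi))
    return y_lo, y_hi      # one output interval per sample: together, a p-box

# illustrative model: exposure = concentration (aleatory, lognormal)
#                     * intake factor (epistemic, known only as an interval)
f = lambda x, e: x * e
y_lo, y_hi = hybrid_propagate(f, lambda rng, n: rng.lognormal(0.0, 0.5, n), (0.8, 1.2))

# a threshold query yields a probability *interval*, not a point value
p_lower = float(np.mean(y_lo > 2.0))
p_upper = float(np.mean(y_hi > 2.0))
```

The interval [p_lower, p_upper] is exactly the kind of output the abstract notes can be hard for a decision-maker to interpret, and which the proposed confidence index is meant to condense into a single indicator.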
Abstract:
Coccidiostats are the only veterinary drugs still permitted to be used as feed additives to treat poultry for coccidiosis. To protect consumers, maximum levels for their presence in food and feed have been set by the European Union (EU). To monitor these coccidiostats, a rapid and inexpensive screening method would be a useful tool. The development of such a screening method, using a flow cytometry-based immunoassay, is described. The assay uses five sets of colour-coded paramagnetic microspheres for the detection of six selected priority coccidiostats. Different coccidiostats, with and without carrier proteins, were covalently coupled onto different bead sets and tested in combination with polyclonal antisera and with a fluorescent-labelled secondary antibody. The five optimal combinations were selected for this multiplex and a simple-to-use sample extraction method was applied for screening blank and spiked eggs and feed samples. A very good correlation (r ranging from 0.995 to 0.999) was obtained with the responses obtained in two different flow cytometers (Luminex 100 and FLEXMAP 3D). The sensitivities obtained were in accordance with the levels set by the EU as the measured limits of detection for narasin/salinomycin, lasalocid, diclazuril, nicarbazin (4,4'-dinitrocarbanilide) and monensin in eggs were 0.01, 0.1, 0.5, 53 and 0.1 µg/kg and in feed 0.1, 0.2, 0.3, 9 and 1.5 µg/kg, respectively.
Abstract:
OBJECTIVE: Laypersons are poor at emergency pulse checks (sensitivity 84%, specificity 36%). Guidelines indicate that pulse checks should not be performed. The impedance cardiogram (dZ/dt) is used to assess stroke volume. Can a novel defibrillator-based impedance cardiogram system be used to distinguish between circulatory arrest and other collapse states?
DESIGN: Animal study.
SETTING: University research laboratory.
SUBJECTS: Twenty anesthetized, mechanically ventilated pigs, weight 50-55 kg.
INTERVENTIONS: Stroke volume was altered by right ventricular pacing (160, 210, 260, and 305 beats/min). Cardiac arrest states were then induced: ventricular fibrillation (by rapid ventricular pacing) and, after successful defibrillation, pulseless electrical activity and asystole (by high-dose intravenous pentobarbitone).
MEASUREMENTS AND MAIN RESULTS: The impedance cardiogram was recorded through electrocardiogram/defibrillator pads in standard cardiac arrest positions. Simultaneously recorded electro- and impedance cardiogram (dZ/dt) along with arterial blood pressure tracings were digitized during each pacing and cardiac arrest protocol. Five-second epochs were analyzed for sinus rhythm (20 before ventricular fibrillation, 20 after successful defibrillation), ventricular fibrillation (40), pulseless electrical activity (20), and asystole (20), in two sets of ten pigs (ten training, ten validation). Standard impedance cardiogram variables were noncontributory in cardiac arrest, so the fast Fourier transform of dZ/dt was assessed. During ventricular pacing, the peak amplitude of the fast Fourier transform of dZ/dt (between 1.5 and 4.5 Hz) correlated with stroke volume (r² = .3, p < .001). In cardiac arrest, a peak amplitude of the fast Fourier transform of dZ/dt of ≤4 dB x ohm x rms indicated no output with high sensitivity (94% training set, 86% validation set) and specificity (98% training set, 90% validation set).
CONCLUSIONS: As a powerful clinical marker of circulatory collapse, the fast Fourier transformation of dZ/dt (impedance cardiogram) has the potential to improve emergency care by laypersons using automated defibrillators.
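The band-limited spectral-peak detection described above can be sketched in a few lines. The sampling rate, signal amplitudes, and the plain linear-magnitude threshold are assumptions of this illustration; the study's actual threshold is expressed in dB x ohm x rms.

```python
import numpy as np

def band_peak_amplitude(signal, fs, f_lo=1.5, f_hi=4.5):
    """Peak of the normalised magnitude spectrum within the 1.5-4.5 Hz
    band (roughly 90-270 beats/min), the analysis window used above."""
    spec = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spec[band].max()

def no_output(signal, fs, threshold=0.05):
    """Flag circulatory arrest when the in-band peak falls at or below
    an (assumed, uncalibrated) threshold."""
    return band_peak_amplitude(signal, fs) <= threshold

fs = 100.0                                      # assumed sampling rate, Hz
t = np.arange(0, 5, 1.0 / fs)                   # one 5-second epoch
pulsatile = 0.5 * np.sin(2 * np.pi * 3.0 * t)   # idealised cardiac dZ/dt at 3 Hz
arrest = np.zeros_like(t)                       # idealised no-output epoch
```

Here `no_output(arrest, fs)` is true and `no_output(pulsatile, fs)` is false; real epochs would of course need the dB-scaled calibration and validation reported in the study.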
Abstract:
Summary: There are substantial variations in the way that applicants are selected for social work programmes in the UK and across the world. This article begins by reviewing the literature in this field, revealing debates about how effective and reliable the methods of assessment used during admission processes are. It then describes a cross-sectional survey of new social work applicants (n = 203) to two programme providers, describing demographic characteristics and their experiences of the admissions process.
Findings: A number of themes emerged from the two sets of findings. There were variations in demographic characteristics, particularly in terms of gender and religion. The study was particularly interested in how students viewed the admissions process. Most students were satisfied with the admissions processes, and there were some differences in views about the methods used. The article concludes by describing changes to the admissions system that were partly informed by the study. The article acknowledges the expected bias in the methodology, given that successful applicants were surveyed and not those who were unsuccessful.
Applications: The authors discuss the study findings in the context of national and international literature and suggest that more rigorous attention should be paid to such evaluations to enable this important area of education and workforce development to be better understood.
Abstract:
The complete sequence of the 46,267 bp genome of the lytic bacteriophage tf specific to Pseudomonas putida PpG1 has been determined. The phage genome has two sets of convergently transcribed genes and 186 bp long direct terminal repeats. The overall genomic architecture of the tf phage is similar to that of the previously described Pseudomonas aeruginosa phages PaP3, LUZ24 and phiMR299-2, and 39 out of the 72 products of predicted tf open reading frames have orthologs in these phages. Accordingly, tf was classified as belonging to the LUZ24-like bacteriophage group. However, taking into account very low homology levels between tf DNA and that of the other phages, tf should be considered as an evolutionary divergent member of the group. Two distinguishing features not reported for other members of the group were found in the tf genome. Firstly, a unique end structure - a blunt right end and a 4-nucleotide 3'-protruding left end - was observed. Secondly, 14 single-chain interruptions (nicks) were found in the top strand of the tf DNA. All nicks were mapped within a consensus sequence 5'-TACT/RTGMC-3'. Two nicks were analyzed in detail and were shown to be present in more than 90% of the phage population. Although localized nicks were previously found only in the DNA of T5-like and phiKMV-like phages, it seems increasingly likely that this enigmatic structural feature is common to various other bacteriophages.
Abstract:
In this paper we seek to explain variations in levels of deprivation between EU countries. The starting-point of our analysis is the finding that the relationship between income and life-style deprivation varies across countries. Given our understanding of the manner in which the income-deprivation mismatch may arise from the limitations of current income as a measure of command over resources, the pattern of variation seems to be consistent with our expectations of the variable degree to which welfare-state regimes achieve 'decommodification' and smooth income flows. This line of reasoning suggests that cross-national differences in deprivation might, in significant part, be due not only to variation in household and individual characteristics that are associated with disadvantage but also to the differential impact of such variables across countries and indeed welfare regimes. To test this hypothesis, we have taken advantage of the ECHP (European Community Household Panel) comparative data set in order to pursue a strategy of substituting variable names for country/welfare regime names. We operated with two broad categories of variables, tapping, respectively, needs and resources. Although both sets of factors contribute independently to our ability to predict deprivation, it is the resource factors that are crucial in reducing country effects. The extent of cross-national heterogeneity depends on specifying the social class and situation in relation to long-term unemployment of the household reference person. The impact of the structural socio-economic variables that we label 'resource factors' varies across countries in a manner that is broadly consistent with welfare regime theory and is the key factor in explaining cross-country differences in deprivation. As a consequence, European homogeneity is a great deal more evident among the advantaged than the disadvantaged.
Abstract:
This paper introduces a logical model of inductive generalization, and specifically of the machine learning task of inductive concept learning (ICL). We argue that some inductive processes, like ICL, can be seen as a form of defeasible reasoning. We define a consequence relation characterizing which hypotheses can be induced from given sets of examples, and study its properties, showing they correspond to a rather well-behaved non-monotonic logic. We will also show that with the addition of a preference relation on inductive theories we can characterize the inductive bias of ICL algorithms. The second part of the paper shows how this logical characterization of inductive generalization can be integrated with another form of non-monotonic reasoning (argumentation), to define a model of multiagent ICL. This integration allows two or more agents to learn, in a consistent way, both from induction and from arguments used in the communication between them. We show that the inductive theories achieved by multiagent induction plus argumentation are sound, i.e. they are precisely the same as the inductive theories built by a single agent with all data. © 2012 Elsevier B.V.
Abstract:
Data obtained with any research tool must be reproducible, a concept referred to as reliability. Three techniques are often used to evaluate reliability of tools using continuous data in aging research: intraclass correlation coefficients (ICC), Pearson correlations, and paired t tests. These are often construed as equivalent when applied to reliability. This is not correct, and may lead researchers to select instruments based on statistics that may not reflect actual reliability. The purpose of this paper is to compare the reliability estimates produced by these three techniques and determine the preferable technique. A hypothetical dataset was produced to evaluate the reliability estimates obtained with ICC, Pearson correlations, and paired t tests in three different situations. For each situation two sets of 20 observations were created to simulate an intrarater or inter-rater paradigm, based on 20 participants with two observations per participant. Situations were designed to demonstrate good agreement, systematic bias, or substantial random measurement error. In the situation demonstrating good agreement, all three techniques supported the conclusion that the data were reliable. In the situation demonstrating systematic bias, the ICC and t test suggested the data were not reliable, whereas the Pearson correlation suggested high reliability despite the systematic discrepancy. In the situation representing substantial random measurement error where low reliability was expected, the ICC and Pearson coefficient accurately illustrated this. The t test suggested the data were reliable. The ICC is the preferred technique to measure reliability. Although there are some limitations associated with the use of this technique, they can be overcome.
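The systematic-bias case is easy to reproduce with synthetic numbers (a sketch, not the paper's dataset): two sets of 20 observations in which the second is offset by a constant, scored with the ICC(2,1) absolute-agreement form. The offset, spread, and random seed are arbitrary assumptions.

```python
import numpy as np
from scipy import stats

def icc_2_1(data):
    """Two-way random, absolute-agreement, single-measure ICC(2,1),
    computed from the classical ANOVA mean squares."""
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)              # per-participant means
    col_means = data.mean(axis=0)              # per-rater/occasion means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)
    sse = np.sum((data - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(42)
first = rng.normal(50.0, 5.0, 20)                  # 20 participants, observation 1
second = first + 5.0 + rng.normal(0.0, 1.0, 20)    # observation 2: +5-unit bias

r, _ = stats.pearsonr(first, second)
t, p = stats.ttest_rel(first, second)
icc = icc_2_1(np.column_stack([first, second]))
```

With these numbers, Pearson's r stays high while the ICC drops and the paired t test flags the offset, mirroring the systematic-bias pattern the abstract reports.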
Abstract:
Older individuals often suffer from multiple co-morbidities and are particularly vulnerable to potentially inappropriate prescribing (PIP). One method of defining instances of PIP is to use validated, evidence-based, explicit criteria. Two sets of criteria have gained international recognition: the Screening Tool of Older Persons' potentially inappropriate Prescriptions (STOPP) and Beers' criteria.
Abstract:
A finite element model of a single cell was created and used to investigate the effects of ageing on biophysical stimuli generated within a cell. Major cellular components were incorporated in the model: the membrane, cytoplasm, nucleus, microtubules, actin filaments, intermediate filaments, nuclear lamina, and chromatin. The model used multiple sets of tensegrity structures. Viscoelastic properties were assigned to the continuum components. To corroborate the model, a simulation of Atomic Force Microscopy (AFM) indentation was performed, and the results showed a force/indentation response within the range of experimental results.
Ageing was simulated by both increasing membrane stiffness (thereby modelling membrane peroxidation with age) and decreasing the density of cytoskeletal elements (thereby modelling reduced actin density with age). Comparing normal and aged cells under indentation predicts that aged cells have a lower membrane area subjected to high strain compared to young cells, but the difference, surprisingly, is very small and would not be measurable experimentally. Ageing is predicted to have a more significant effect on strain deep in the nucleus. These results show that computation of biophysical stimuli within cells is achievable with single-cell computational models whose force/displacement behaviour is within experimentally observed ranges. The models suggest only small, though possibly physiologically significant, differences in internal biophysical stimuli between normal and aged cells.
Abstract:
Advances in the diagnosis and treatment of cancer have resulted in longer survival, meaning that cancer patients are now living with what may be termed a chronic-type condition. As a result, the needs of patients living with a cancer diagnosis have changed, placing a greater emphasis on survivorship, which in turn has an effect on quality of life and sleep patterns. Evidence suggests that counselling and complementary therapies have a positive impact not only on the cancer patient's quality of life but also on family members and friends.
The aim of this study was to determine whether there is an improvement in clients' quality of life and sleep patterns after availing of the counselling and complementary therapy services offered by a local cancer charity.
All clients availing of the counselling or complementary therapies offered by the charity were invited to participate in a service evaluation. The regulations relating to research involving human participants, as outlined by the "Research Governance Framework" at a local university, were also adhered to. A seven-part questionnaire was used for evaluation of the services.
Access to anonymous data from the cancer patients, their families and carers was granted by the Research and Development Officer within Action Cancer.
A total of 507 participants completed the initial questionnaires immediately before therapy and 255 participants completed the questionnaires immediately after therapy, giving a total matched sample of 230. When considering counselling and complementary therapies together (therapeutic services), there were statistically significant results indicating improved quality of life and sleep patterns between the two sets of data. However, this was not the trend when considering counselling and complementary therapies separately.
Some of the findings closely reflect the literature and, on the whole, support the use of therapeutic services as having a positive effect on cancer patients' quality of life and sleep patterns.