17 results for "Direct time integration methods"
in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
Free-roaming dogs (FRD) represent a potential threat to the quality of life in cities from an ecological, social and public health point of view. One of the most urgent concerns is the role of uncontrolled dogs as reservoirs of infectious diseases transmittable to humans and, above all, rabies. An estimate of the FRD population size and characteristics in a given area is the first step for any relevant intervention programme. Direct count methods are still prominent because of their non-invasive approach; information technologies can support such methods by facilitating data collection and allowing for more efficient data handling. This paper presents a new framework for data collection using a topological algorithm implemented as an ArcScript in ESRI® ArcGIS software, which allows for a random selection of the sampling areas. It also supplies a mobile phone application for Android® operating system devices which integrates the Global Positioning System (GPS) and Google Maps™. The potential of such a framework was tested in two Italian regions. Coupling technological and innovative solutions with common counting methods facilitates data collection and transcription. It also paves the way for future applications, which could support dog population management systems.
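The random selection of sampling areas can be illustrated with a simple grid-based draw. This is only a hedged sketch in Python: the paper's actual topological algorithm is an ArcScript for ESRI ArcGIS and is not reproduced here, and the grid size and sample count below are invented.

```python
import random

# Illustrative only: overlay a grid on the survey region and draw
# cells uniformly without replacement. The paper's topological
# ArcScript algorithm is not reproduced; all parameters are invented.
def select_sampling_cells(n_rows, n_cols, n_samples, seed=None):
    """Pick n_samples distinct grid cells at random."""
    rng = random.Random(seed)
    cells = [(r, c) for r in range(n_rows) for c in range(n_cols)]
    return rng.sample(cells, n_samples)

# Draw 5 survey cells from a 10 x 10 grid, reproducibly.
chosen = select_sampling_cells(10, 10, 5, seed=42)
print(chosen)
```

Sampling without replacement keeps the selected areas distinct, which is what a direct-count survey design requires.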
Abstract:
This paper describes the development of a two-dimensional transient catalyst model. Although designed primarily for two-stroke direct injection engines, the model is also applicable to four-stroke lean burn and diesel applications. The first section describes the geometries, properties and chemical processes simulated by the model and discusses the limitations and assumptions applied. A review of the modeling techniques adopted by other researchers is also included. The mathematical relationships which are used to represent the system are then described, together with the finite volume method used in the computer program. The need for a two-dimensional approach is explained and the methods used to model effects such as flow and temperature distribution are presented. The problems associated with developing surface reaction rates are discussed in detail and compared with published research. Validation and calibration of the model is achieved by comparing predictions with measurements from a flow reactor. While an extensive validation process, involving detailed measurements of gas composition and thermal gradients, has been completed, the analysis is too detailed for publication here and is the subject of a separate technical paper.
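The finite volume idea behind such a model can be illustrated with a one-dimensional transient diffusion step. This is a minimal sketch only, not the paper's two-dimensional catalyst model: the grid, time step, diffusivity and boundary treatment are all invented for illustration.

```python
import numpy as np

# Minimal 1-D finite-volume sketch of transient diffusion (one explicit
# time step). Cell values are cell averages; fluxes live on cell faces.
# All numbers are illustrative; boundary cells are held fixed.
def step(T, alpha, dx, dt):
    """Advance cell-averaged temperature T by one explicit step."""
    flux = alpha * np.diff(T) / dx                 # face fluxes between cells
    Tn = T.copy()
    Tn[1:-1] += dt / dx * (flux[1:] - flux[:-1])   # net flux into each interior cell
    return Tn

T0 = np.array([300.0, 300.0, 600.0, 300.0, 300.0])  # a hot interior cell
T1 = step(T0, alpha=1e-5, dx=0.01, dt=1.0)
```

Because each face flux is added to one cell and subtracted from its neighbour, the scheme conserves the transported quantity in the interior, which is the key property of the finite volume method.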
Abstract:
Two major signaling pathways, those triggered by estrogen (E(2)) and by the Wnt family, interact in the breast to cause growth and differentiation. The estrogen receptors ER(alpha) and ER(beta) are activated by binding E(2) and act as ligand-dependent transcription factors. The effector for the Wnt family is the Tcf family of transcription factors. Both sets of transcription factors recognize discrete but different nucleotide sequences in the promoters of their target genes. By using transient transfections of reporter constructs for the osteopontin and thymidine kinase promoters in rat mammary cells, we show that Tcf-4 antagonizes and Tcf-1 stimulates the effects of activated ER/E(2). For mutants of the former promoter, the stimulatory effects of ER(alpha)/E(2) can be made to be dependent on Tcf-1, and for the latter promoter the effects of the T cell factors (TCFs) are dependent on ER/E(2). Direct interaction between ERs and Tcfs either at the Tcf/ER(alpha)-binding site on the DNA or in the absence of DNA is established by gel retardation assays or by coimmunoprecipitation/biosensor methods, respectively. These results show that the two sets of transcription factors can interact directly, the interaction between ERs and Tcf-4 being antagonistic and that between ERs and Tcf-1 being synergistic on the activity of the promoters employed. Since Tcf-4 is the major Tcf family member in the breast, it is suggested that the antagonistic interaction is normally dominant in vivo in this tissue.
Abstract:
Reported mast-cell counts in endobronchial biopsies from asthmatic subjects are conflicting, with different methodologies often being used. This study compared three standard methods of counting mast cells in endobronchial biopsies from asthmatic and normal subjects. Endobronchial biopsies were obtained from atopic asthmatic subjects (n=17), atopic nonasthmatic subjects (n=6), and nonatopic nonasthmatic control subjects (n=5). After overnight fixation in Carnoy's fixative, mast cells were stained by the short and long toluidine blue methods and antitryptase immunohistochemistry and were counted by light microscopy. Method comparison was made according to Bland & Altman. The limits of agreement were unacceptable for each of the comparisons, suggesting that the methods are not interchangeable. Coefficients of repeatability were excellent, and not different for the individual techniques. These results suggest that some of the reported differences in mast-cell numbers in endobronchial biopsies in asthma may be due to the staining method used, making direct comparisons between studies invalid. Agreement on a standard method is required for counting mast cells in bronchial biopsies, and we recommend the immunohistochemical method, since fixation is less critical and the resultant tissue sections facilitate clear, accurate, and rapid counts.
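The Bland & Altman comparison used in the study can be sketched as follows; the paired counts below are invented, and only the limits-of-agreement arithmetic is shown.

```python
import numpy as np

# Hypothetical mast-cell counts from two staining methods applied to
# the same biopsies; values are illustrative only.
method_a = np.array([12.0, 18.5, 25.0, 9.5, 30.0, 22.0, 15.5])
method_b = np.array([10.5, 20.0, 28.5, 8.0, 34.0, 25.5, 14.0])

diff = method_a - method_b   # paired differences
bias = diff.mean()           # mean difference (systematic bias)
sd = diff.std(ddof=1)        # SD of the differences

# 95% limits of agreement: bias +/- 1.96 * SD of the differences
loa_lower = bias - 1.96 * sd
loa_upper = bias + 1.96 * sd
print(f"bias={bias:.2f}, limits of agreement=({loa_lower:.2f}, {loa_upper:.2f})")
```

If the limits of agreement are wider than a clinically acceptable difference, the two methods are not interchangeable, which is the conclusion the abstract reports.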
Abstract:
Computationally efficient sequential learning algorithms are developed for direct-link resource-allocating networks (DRANs). These are achieved by decomposing existing recursive training algorithms on a layer-by-layer and neuron-by-neuron basis. This allows network weights to be updated in an efficient parallel manner and facilitates the implementation of minimal update extensions that yield a significant reduction in computational load per iteration compared to existing sequential learning methods employed in resource-allocating network (RAN) and minimal RAN (MRAN) approaches. The new algorithms, which also incorporate a pruning strategy to control network growth, are evaluated on three different system identification benchmark problems and shown to outperform existing methods both in terms of training error convergence and computational efficiency.
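For context, the growth test that RAN/MRAN-style learners use to decide whether to allocate a new hidden unit (Platt's novelty criteria) can be sketched as follows. This is not the paper's DRAN decomposition; the thresholds and data are invented.

```python
import math

# Sketch of the RAN novelty test: allocate a new hidden unit only when
# the input is far from all existing centres AND the prediction error is
# large. Thresholds and data are illustrative, not from the paper.
def should_add_neuron(x, error, centres, dist_thresh=0.5, err_thresh=0.1):
    """Return True if both novelty criteria hold for input x."""
    if not centres:
        return True                                   # empty network: always grow
    nearest = min(math.dist(x, c) for c in centres)   # distance to closest centre
    return nearest > dist_thresh and abs(error) > err_thresh

centres = [(0.0, 0.0), (1.0, 1.0)]
print(should_add_neuron((0.9, 1.1), 0.5, centres))  # near an existing centre
print(should_add_neuron((3.0, 3.0), 0.5, centres))  # novel input with large error
```

Inputs that fail either test are instead handled by updating the existing weights, which is the recursive part that the paper decomposes for efficiency.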
Claudin-1 Has Tumor Suppressive Activity and Is a Direct Target of RUNX3 in Gastric Epithelial Cells
Abstract:
BACKGROUND & AIMS: The transcription factor RUNX3 is a gastric tumor suppressor. Tumorigenic Runx3(-/-) gastric epithelial cells attach weakly to each other, compared with nontumorigenic Runx3(+/+) cells. We aimed to identify RUNX3 target genes that promote cell-cell contact to improve our understanding of RUNX3's role in suppressing gastric carcinogenesis. METHODS: We compared gene expression profiles of Runx3(+/+) and Runx3(-/-) cells and observed down-regulation of genes associated with cell-cell adhesion in Runx3(-/-) cells. Reporter, mobility shift, and chromatin immunoprecipitation assays were used to examine the regulation of these genes by RUNX3. Tumorigenesis assays and immunohistologic analyses of human gastric tumors were performed to confirm the role of the candidate genes in gastric tumor development. RESULTS: Mobility shift and chromatin immunoprecipitation assays revealed that the promoter activity of the gene that encodes the tight junction protein claudin-1 was up-regulated via the binding of RUNX3 to the RUNX consensus sites. The tumorigenicity of gastric epithelial cells from Runx3(-/-) mice was significantly reduced by restoration of claudin-1 expression, whereas knockdown of claudin-1 increased the tumorigenicity of human gastric cancer cells. Concomitant expression of RUNX3 and claudin-1 was observed in human normal gastric epithelium and cancers. CONCLUSIONS: The tight junction protein claudin-1 has gastric tumor suppressive activity and is a direct transcriptional target of RUNX3. Claudin-1 is down-regulated during the epithelial-mesenchymal transition; RUNX3 might therefore act as a tumor suppressor to antagonize the epithelial-mesenchymal transition.
Abstract:
Electrodeposition of metals onto conductive supports such as graphite potentially provides a lower-waste method to form heterogeneous catalysts than the standard methods such as wet impregnation. Copper electrodeposition onto pressed graphite disc electrodes was investigated from aqueous CuSO4-ethylenediamine solutions by chronoamperometry, with scanning electron microscopy used to ascertain the particle sizes obtained by this method. The particle size was studied as a function of pH, CuSO4-ethylenediamine concentration, and electrodeposition time. Decreasing the pH, the copper-ethylenediamine concentration, and the deposition time each decreased the size of the copper particles observed, with the smallest obtained being around 5-20 nm. Furthermore, electroless aerobic oxidation of copper metal in the presence of ethylenediamine was successfully coupled with the electrodeposition in the same vessel. In this way, deposition was achieved sequentially on up to twenty different graphite discs using the same ethylenediamine solution, demonstrating the recyclability of the ligand. The materials thus prepared were shown to be catalytically active for the mineralisation of phenol by hydrogen peroxide. Overall, the results provide a proof-of-principle that by making use of aerobic oxidation coupled with electrochemical deposition, elemental base metals can be used directly as starting materials to form heterogeneous catalysts without the need to use metal salts as catalyst precursors.
Abstract:
In many coastal areas of North America and Scandinavia, post-glacial clay sediments have emerged above sea level due to isostatic uplift. These clays are often destabilised by fresh water leaching and transformed to so-called quick clays, as at the investigated area at Smørgrav, Norway. Slight mechanical disturbances of these materials may trigger landslides. Since the leaching increases the electrical resistivity of quick clay as compared to normal marine clay, the application of electromagnetic (EM) methods is of particular interest in the study of quick clay structures.
For the first time, single and joint inversions of direct-current resistivity (DCR), radiomagnetotelluric (RMT) and controlled-source audiomagnetotelluric (CSAMT) data were applied to delineate a zone of quick clay. The resulting 2-D models of electrical resistivity correlate excellently with previously published data from a ground conductivity meter and resistivity logs from two resistivity cone penetration tests (RCPT) into marine clay and quick clay. The RCPT log into the central part of the quick clay identifies the electrical resistivity of the quick clay structure to lie between 10 and 80 Ω m. In combination with the 2-D inversion models, it becomes possible to delineate the vertical and horizontal extent of the quick clay zone. As compared to the inversions of single data sets, the joint inversion model exhibits sharper resistivity contrasts and its resistivity values are more characteristic of the expected geology. In our preferred joint inversion model, there is a clear demarcation between dry soil, marine clay, quick clay and bedrock, which consists of alum shale and limestone.
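The benefit of a joint inversion can be illustrated with a toy objective in which several data sets constrain one shared model. This is purely schematic: the forward operators, model values and noise levels below are invented, and it does not reproduce the paper's 2-D DCR/RMT/CSAMT inversion.

```python
import numpy as np

def misfit(pred, obs, noise):
    # normalized chi-square-style data misfit for one data set
    return float(np.mean(((pred - obs) / noise) ** 2))

def joint_objective(model, datasets):
    # sum of normalized misfits over all data sets sharing one model
    return sum(misfit(fwd(model), obs, sig) for fwd, obs, sig in datasets)

# Two synthetic "methods" sensitive to different parts of the model
true_model = np.array([30.0, 500.0])             # [clay, bedrock] resistivity
fwd_a = lambda m: np.array([m[0], 0.5 * m[0]])   # shallow-sensitive method
fwd_b = lambda m: np.array([m[1], 0.2 * m[1]])   # deep-sensitive method
datasets = [(fwd_a, fwd_a(true_model), 1.0),
            (fwd_b, fwd_b(true_model), 5.0)]

print(joint_objective(true_model, datasets))     # zero at the true model
```

Because each data set penalizes a different part of the model, minimizing the combined objective constrains the shared model more tightly than any single data set alone, which is why joint inversions can yield sharper contrasts.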
Abstract:
Background: Resource utilisation and direct costs associated with glaucoma progression in Europe are unknown. As the population progressively ages, the economic impact of the disease will increase. Methods: From a total of 1655 consecutive cases, the records of 194 patients were selected and stratified by disease severity. Record selection was based on diagnoses of primary open-angle glaucoma, glaucoma suspect, ocular hypertension, or normal-tension glaucoma; a minimum of 5 years of follow-up was required. Glaucoma severity was assessed using a six-stage glaucoma staging system based on static threshold visual field parameters. Resource utilisation data were abstracted from the charts and unit costs were applied to estimate direct costs to the payer. Resource utilisation and estimated direct cost of treatment, per person year, were calculated. Results: A statistically significant increasing linear trend (p = 0.018) in direct cost as disease severity worsened was demonstrated. The direct cost of treatment increased by an estimated €86 for each incremental step, ranging from €455 per person year for stage 0 to €969 per person year for stage 4 disease. Medication costs ranged from 42% to 56% of total direct cost for all stages of disease. Conclusions: These results demonstrate for the first time in Europe that resource utilisation and direct medical costs of glaucoma management increase with worsening disease severity. Based on these findings, managing glaucoma and effectively delaying disease progression would be expected to significantly reduce the economic burden of this disease. These data are relevant to general practitioners and healthcare administrators who have a direct influence on the distribution of resources.
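The reported linear trend can be sketched with an ordinary least-squares fit of cost against stage. Only the stage-0 (€455) and stage-4 (€969) endpoints below come from the abstract; the intermediate per-stage costs are invented for illustration.

```python
import numpy as np

# Illustrative per-person-year direct costs (EUR) by glaucoma stage.
# Only the stage-0 (455) and stage-4 (969) values come from the study;
# the intermediate figures are assumptions for this sketch.
stages = np.array([0, 1, 2, 3, 4])
costs = np.array([455.0, 550.0, 640.0, 730.0, 969.0])

# Ordinary least-squares line: cost ~ intercept + slope * stage
slope, intercept = np.polyfit(stages, costs, 1)
print(f"estimated extra direct cost per severity stage: EUR {slope:.0f}")
```

A positive fitted slope is the quantitative statement behind the abstract's "increasing linear trend" in cost with disease severity.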
Abstract:
High-dimensional gene expression data provide a rich source of information because they capture the expression level of genes in dynamic states that reflect the biological functioning of a cell. For this reason, such data are suitable to reveal system-related properties inside a cell, e.g., in order to elucidate molecular mechanisms of complex diseases like breast or prostate cancer. However, this is not only strongly dependent on the sample size and the correlation structure of a data set, but also on the statistical hypotheses tested. Many different approaches have been developed over the years to analyze gene expression data to (I) identify changes in single genes, (II) identify changes in gene sets or pathways, and (III) identify changes in the correlation structure in pathways. In this paper, we review statistical methods for all three types of approaches, including subtypes, in the context of cancer data, provide links to software implementations and tools, and also address the general problem of multiple hypothesis testing. Further, we provide recommendations for the selection of such analysis methods.
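As one concrete instance of the multiple-testing corrections such reviews discuss, the Benjamini-Hochberg false-discovery-rate procedure can be sketched as follows; the p-values below are invented.

```python
import numpy as np

# Benjamini-Hochberg step-up procedure: control the false discovery
# rate at level alpha across m simultaneous tests. P-values are invented.
def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of rejected hypotheses at FDR level alpha."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m          # per-rank thresholds
    passed = p[order] <= thresh
    # reject all hypotheses up to the largest rank that passes its threshold
    k = int(np.max(np.nonzero(passed)[0])) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
print(benjamini_hochberg(pvals))  # [True, True, False, False, False, False]
```

Unlike a Bonferroni correction, the per-rank thresholds grow with rank, which gives substantially more power when many of the thousands of tested genes are truly differentially expressed.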
Abstract:
Background: Successful periodontal treatment requires a commitment to regular lifelong maintenance and may be perceived by patients to be costly. This study calculates the total lifetime cost of periodontal treatment in the setting of a specialist periodontal practice and investigates the cost implications of choosing not to proceed with such treatment. Methods: Data from patients treated in a specialist practice in Norway were used to calculate the total lifetime cost of periodontal treatment that included baseline periodontal treatment, regular maintenance, retreatment, and replacing teeth lost during maintenance. Incremental costs for alternative strategies based on opting to forego periodontal treatment or maintenance and to replace any teeth lost with either bridgework or implants were calculated. Results: Patients who completed baseline periodontal treatment but did not have any additional maintenance or retreatment could replace only three teeth with bridgework or two teeth with implants before the cost of replacing additional teeth would exceed the cost of lifetime periodontal treatment. Patients who did not have any periodontal treatment could replace ≤4 teeth with bridgework or implants before a replacement strategy became more expensive. Conclusions: Within the limits of the assumptions made, periodontal treatment in a Norwegian specialist periodontal practice is cost-effective when compared to an approach that relies on opting to replace teeth lost as a result of progressive periodontitis with fixed restorations. In particular, patients who have initial comprehensive periodontal treatment but do not subsequently comply with maintenance could, on average, replace ≤3 teeth with bridgework or two teeth with implants before this approach would exceed the direct cost of lifetime periodontal treatment in the setting of the specialist practice studied.
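The break-even logic can be sketched with hypothetical figures; the costs below are invented for illustration and are not the study's actual Norwegian data.

```python
# Hypothetical lifetime and per-tooth costs (illustrative only;
# not the figures from the study).
LIFETIME_PERIO_COST = 30000.0   # baseline treatment + lifelong maintenance
COST_PER_BRIDGE_UNIT = 9000.0   # replacing one tooth with bridgework
COST_PER_IMPLANT = 14000.0      # replacing one tooth with an implant

def max_teeth_before_exceeding(lifetime_cost, cost_per_tooth):
    """Largest number of teeth replaceable before the replacement
    strategy costs more than lifetime periodontal treatment."""
    return int(lifetime_cost // cost_per_tooth)

print(max_teeth_before_exceeding(LIFETIME_PERIO_COST, COST_PER_BRIDGE_UNIT))  # 3
print(max_teeth_before_exceeding(LIFETIME_PERIO_COST, COST_PER_IMPLANT))      # 2
```

With these assumed numbers the break-even points happen to match the abstract's "three teeth with bridgework or two with implants"; the real comparison depends on the actual practice fees.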
Abstract:
The construction industry in Northern Ireland is one of the major contributors of construction waste to landfill each year. The aim of this research paper is to identify the core on-site management causes of material waste on construction sites in Northern Ireland and to illustrate various methods of prevention which can be adopted. The research begins with a detailed literature review and is complemented by semi-structured interviews with six professionals who are experienced and active within the Northern Ireland construction industry. Following the literature review and interview analysis, a questionnaire survey is developed to obtain further information in relation to the subject area. The questionnaire is based on the key findings of the previous stages to direct the research towards the most influential factors. The analysis of the survey responses reveals that the core causes of waste generation include a rushed programme, poor handling and on-site damage of materials, while the principal methods of prevention emerge as adequate storage, on-site reuse of materials, and efficient material ordering. Furthermore, the role of professional background in shaping perceptions of waste management is also investigated and significant differences are identified. The findings of this research are beneficial for the industry as they enhance the understanding of construction waste generation causes and highlight the practices required to reduce waste on-site in the context of sustainable development.
Adaptive backstepping droop controller design for multi-terminal high-voltage direct current systems
Abstract:
Wind power is one of the most developed renewable energy resources worldwide. To integrate offshore wind farms into onshore grids, high-voltage direct current (HVDC) transmission cables interfaced with voltage source converters (VSCs) are considered a better solution than conventional approaches. Proper DC voltage indicates successful power transfer. To connect more than one onshore grid, DC voltage droop control is one of the most popular methods to share the control burden between different terminals. However, the challenges are that small droop gains will cause voltage deviations, while higher droop gain settings will cause large oscillations. This study aims to enhance the performance of the traditional droop controller by considering the DC cable dynamics. Based on the backstepping control concept, DC cables are modelled as a series of capacitors and inductors. The final droop control law is deduced step-by-step from the remote side, with the control error from the previous step considered at each step. Simulation results show that both the voltage deviations and oscillations can be effectively reduced using the proposed method. Further, power sharing between different terminals can be effectively simplified such that it correlates linearly with the droop gains, thus enabling simple yet accurate system operation and control.
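The conventional proportional droop law that the paper sets out to improve can be sketched as follows. This is a minimal sketch only, not the backstepping design: the sign convention, per-unit values and gains are illustrative assumptions.

```python
# Minimal sketch of conventional DC voltage droop (NOT the paper's
# backstepping law): each terminal adjusts its power order in
# proportion to the local DC-voltage deviation. All values are
# illustrative, in per-unit.
def droop_power(v_dc, v_ref, p_ref, k_droop):
    """Power order: scheduled power plus droop gain times the
    local DC-voltage deviation from its reference."""
    return p_ref + k_droop * (v_ref - v_dc)

v = 1.02  # measured DC voltage slightly above reference
print(droop_power(v, 1.0, 0.5, 2.0))   # small gain: weak correction
print(droop_power(v, 1.0, 0.5, 20.0))  # large gain: strong correction
```

The two calls illustrate the trade-off the abstract names: a small gain barely corrects the voltage deviation, while a large gain reacts strongly and, once cable dynamics are included, can excite oscillations.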