83 results for level set method
Abstract:
A flexible, mass-conservative numerical technique for solving the advection-dispersion equation for miscible contaminant transport is presented. The method combines features of puff transport models from air pollution studies with features of the random walk particle method used in water resources studies, providing a deterministic time-marching algorithm that is independent of the grid Peclet number and scales simply from one dimension to higher dimensions. The concentration field is discretised into a number of particles, each of which is treated as a point release which advects and disperses over the time interval. The dispersed puff is itself discretised into a spatial distribution of particles whose masses can be pre-calculated. Concentration within the simulation domain is then calculated from the mass distribution as an average over some small volume. Comparisons with analytical solutions for a one-dimensional fixed-duration concentration pulse and for two-dimensional transport in an axisymmetric flow field indicate that the algorithm performs well. For a given level of accuracy the new method has lower computation times than the random walk particle method.
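As an illustration of the puff-particle idea described above, the following Python sketch advects each parent particle deterministically, splits it into child particles whose masses come from pre-calculated Gaussian puff weights, and recovers concentration as a mass average over cells. The fixed offsets, Gaussian weights and absence of a particle-merging step are simplifying assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def precompute_puff_weights(n_offsets, dx, D, dt):
    """Mass fractions for the child particles a dispersed Gaussian puff is split into.
    Offsets are fixed multiples of dx around the advected puff centre."""
    offsets = (np.arange(n_offsets) - n_offsets // 2) * dx
    sigma2 = 2.0 * D * dt                       # puff variance after one step
    w = np.exp(-offsets**2 / (2.0 * sigma2))
    return offsets, w / w.sum()                 # normalised so mass is conserved exactly

def puff_step(positions, masses, v, dt, offsets, weights):
    """One advection-dispersion step: advect each parent particle, then split it
    into weighted child particles at the pre-computed puff offsets."""
    centres = positions + v * dt                # deterministic advection
    new_pos = (centres[:, None] + offsets[None, :]).ravel()
    new_mass = (masses[:, None] * weights[None, :]).ravel()
    return new_pos, new_mass

def concentration(positions, masses, edges):
    """Cell-averaged concentration from the particle mass distribution."""
    hist, _ = np.histogram(positions, bins=edges, weights=masses)
    return hist / np.diff(edges)

# Usage: a unit-mass pulse advected at v = 1 with D = 0.1 for five steps.
dx, dt, v, D = 0.05, 0.1, 1.0, 0.1
offsets, weights = precompute_puff_weights(9, dx, D, dt)
pos, mass = np.array([0.0]), np.array([1.0])
for _ in range(5):
    pos, mass = puff_step(pos, mass, v, dt, offsets, weights)
    # In practice child particles would be re-binned or merged to bound their number.
c = concentration(pos, mass, np.linspace(-1.0, 3.0, 81))
```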
Abstract:
Extending the work presented in Prasad et al. (IEE Proceedings - Control Theory and Applications, 147, 523-37, 2000), this paper reports a hierarchical nonlinear physical model-based control strategy that accounts for the problems arising from the complex dynamics of the drum level and governor valve, and demonstrates its effectiveness in plant-wide disturbance handling. The strategy incorporates a two-level control structure consisting of lower-level conventional PI regulators and a higher-level nonlinear physical model predictive controller (NPMPC) used mainly for set-point manoeuvring. The lower-level PI loops help stabilise the unstable drum-boiler dynamics and allow faster governor valve action for power and grid-frequency regulation. The higher-level NPMPC provides an optimal load demand (or set-point) transition by effective handling of plant-wide interactions and system disturbances. The strategy has been tested in a simulation of a 200-MW oil-fired power plant at Ballylumford in Northern Ireland. A novel approach is devised to test the disturbance rejection capability in severe operating conditions. Low-frequency disturbances were created by making random changes in radiation heat flow on the boiler side, while condenser vacuum fluctuated randomly on the turbine side. To simulate high-frequency disturbances, pulse-type load disturbances were made to strike at instants that are not integral multiples of the NPMPC sampling period. Impressive results have been obtained during both types of system disturbance and extremely high rates of load change, right across the operating range. These results compared favourably with those from a conventional state-space generalized predictive control (GPC) method designed under similar conditions.
Abstract:
This study highlights how heuristic evaluation as a usability evaluation method can feed into current building design practice to conform to universal design principles. It provides a definition of universal usability that is applicable to an architectural design context. It takes the seven universal design principles as a set of heuristics and applies an iterative sequence of heuristic evaluation in a shopping mall, aiming to achieve a cost-effective evaluation process. The evaluation was composed of three consecutive sessions. First, five evaluators from different professions were interviewed regarding the construction drawings in terms of universal design principles. Then, each evaluator was asked to perform the predefined task scenarios. In subsequent interviews, the evaluators were asked to re-analyze the construction drawings. The results showed that heuristic evaluation could successfully integrate universal usability into current building design practice in two ways: (i) it promoted an iterative evaluation process combined with multi-sessions rather than relying on one evaluator and on one evaluation session to find the maximum number of usability problems, and (ii) it highlighted the necessity of an interdisciplinary ad hoc committee regarding the heuristic abilities of each profession. A multi-session and interdisciplinary heuristic evaluation method can save both the project budget and the required time, while ensuring a reduced error rate for the universal usage of the built environments.
Abstract:
Background
Interaction of a drug or chemical with a biological system can result in a gene-expression profile or signature characteristic of the event. Using a suitably robust algorithm, these signatures can potentially be used to connect molecules with similar pharmacological or toxicological properties by gene-expression profile. Lamb et al. first proposed the Connectivity Map [Lamb et al. (2006), Science 313, 1929–1935] to make successful connections among small molecules, genes, and diseases using genomic signatures.
Results
Here we have built on the principles of the Connectivity Map to present a simpler and more robust method for the construction of reference gene-expression profiles and for the connection scoring scheme, which importantly allows the evaluation of the statistical significance of all the connections observed. We tested the new method with two randomly generated gene signatures and three experimentally derived gene signatures (for HDAC inhibitors, estrogens, and immunosuppressive drugs, respectively). Our testing with this method indicates that it achieves a higher level of specificity and sensitivity and so advances the original method.
Conclusion
The method presented here not only offers more principled statistical procedures for testing connections but, more importantly, provides an effective safeguard against false connections while at the same time achieving increased sensitivity. With its robust performance, the method has potential use in the drug development pipeline for the early recognition of pharmacological and toxicological properties in chemicals and new drug candidates, and also more broadly in other 'omics sciences.
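The abstract does not reproduce the scoring formula, so the following Python sketch only illustrates the general idea of a rank-based connection score with permutation-derived significance; the normalisation, the null model of random same-size signatures and all variable names are assumptions rather than the published method.

```python
import numpy as np

def connection_score(reference, signature):
    """Signed-rank connection score between a reference profile and a query signature.

    reference: dict gene -> differential-expression value (e.g. log fold change)
    signature: dict gene -> +1 (up-regulated) or -1 (down-regulated)
    """
    order = sorted(reference, key=lambda g: abs(reference[g]))
    rank = {g: i + 1 for i, g in enumerate(order)}      # strongest gene gets the largest rank
    score = sum(np.sign(reference[g]) * signature[g] * rank[g]
                for g in signature if g in reference)
    max_score = sum(sorted(rank.values(), reverse=True)[:len(signature)])
    return score / max_score                             # normalised to [-1, 1]

def connection_p_value(reference, signature, n_perm=1000, seed=0):
    """Empirical significance from random signatures of the same size and sign pattern."""
    rng = np.random.default_rng(seed)
    obs = connection_score(reference, signature)
    genes, signs = list(reference), list(signature.values())
    null = [connection_score(reference,
                             dict(zip(rng.choice(genes, size=len(signs), replace=False), signs)))
            for _ in range(n_perm)]
    return float(np.mean([abs(s) >= abs(obs) for s in null]))
```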
Abstract:
A rapid liquid chromatographic-tandem mass spectrometric (LC-MS/MS) multi-residue method for the simultaneous quantitation and identification of sixteen synthetic growth promoters and bisphenol A in bovine milk has been developed and validated. Sample preparation was straightforward, efficient and economically advantageous. Milk was extracted with acetonitrile followed by phase separation with NaCl. After centrifugation, the extract was purified by dispersive solid-phase extraction with C18 sorbent material. The compounds were analysed by reversed-phase LC-MS/MS using both positive and negative ionization operated in multiple reaction monitoring (MRM) mode, acquiring two diagnostic product ions from each of the chosen precursor ions for unambiguous confirmation. Total chromatographic run time was less than 10 min for each sample. The method was validated at a level of 1 µg L⁻¹. A wide variety of deuterated internal standards were used to improve method performance. The accuracy and precision of the method were satisfactory for all analytes. The confirmative quantitative LC-MS/MS method was validated according to Commission Decision 2002/657/EC. The decision limit (CCα) and the detection capability (CCβ) were found to be below the chosen validation level of 1 µg L⁻¹ for all compounds.
Abstract:
The article investigates the relationship between technological regimes and firm-level productivity performance, and explores how this relationship differs across Schumpeterian patterns of innovation. The analysis makes use of a rich dataset on innovation and other economic characteristics of a large representative sample of Norwegian firms in manufacturing and service industries for the period 1998–2004. First, we decompose TFP growth into technical progress and efficiency change by means of data envelopment analysis. We then estimate an empirical model that relates these two productivity components to the characteristics of technological regimes and a set of other firm-specific factors. The results indicate that: (i) TFP growth has mainly been achieved through technical progress, while technical efficiency has on average decreased; (ii) the characteristics of technological regimes are important determinants of firm-level productivity growth, but their impacts on technical progress differ from their effects on efficiency change; (iii) the estimated model works differently in the two Schumpeterian regimes: technical progress has been more dynamic in Schumpeter Mark II industries, while efficiency change has been more important in Schumpeter Mark I markets.
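Data envelopment analysis is the workhorse behind the TFP decomposition mentioned above. As a hedged illustration only, the sketch below computes the input-oriented, constant-returns (CCR) DEA efficiency score of a single firm with scipy's linear-programming solver; the toy data are invented, and the single-period score is just the building block that Malmquist-style decompositions compare across periods to separate technical progress from efficiency change.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Input-oriented CCR DEA efficiency of firm o.

    X: (m, n) inputs and Y: (s, n) outputs for n firms.
    Solves: min theta  s.t.  X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],  lam >= 0.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n)
    c[0] = 1.0                                    # decision vector is [theta, lam_1..lam_n]
    A_in = np.hstack([-X[:, [o]], X])             # X @ lam - theta * x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])     # -Y @ lam <= -y_o
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[:, o]]),
                  bounds=[(0, None)] * (1 + n),
                  method="highs")
    return res.x[0]

# Usage: efficiency of each firm in a toy 2-input, 1-output sample.
X = np.array([[2.0, 3.0, 4.0, 5.0],
              [4.0, 2.0, 3.0, 6.0]])
Y = np.array([[1.0, 1.0, 1.2, 1.1]])
scores = [dea_efficiency(X, Y, o) for o in range(X.shape[1])]
```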
Abstract:
This paper presents a feature selection method for data classification which combines a model-based variable selection technique with a fast two-stage subset selection algorithm. The relationship between a specified (and complete) set of candidate features and the class label is modelled using a full non-linear regression model that is linear in the parameters. The performance of a sub-model, measured by the sum of squared errors (SSE), is used to score the informativeness of the subset of features involved in that sub-model. The two-stage subset selection algorithm converges on a solution sub-model at which the SSE is locally minimized. The features involved in the solution sub-model are then selected as inputs to support vector machines (SVMs) for classification. The memory requirement of the algorithm is independent of the number of training patterns, which makes the method suitable for applications executed on mobile devices where physical RAM is very limited. An application was developed for activity recognition that implements the proposed feature selection algorithm and an SVM training procedure. Experiments were carried out with the application running on a PDA for human activity recognition using accelerometer data. A comparison with an information-gain-based feature selection method demonstrates the effectiveness and efficiency of the proposed algorithm.
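As a rough illustration of this kind of SSE-scored subset selection feeding an SVM, the Python sketch below scores feature subsets by the SSE of a least-squares model that is linear in the parameters, runs a greedy forward search, and trains an SVM on the selected columns. The plain-feature basis, the greedy (single-stage) search and the synthetic data are simplifying assumptions; the paper itself uses a non-linear basis expansion and a two-stage subset search.

```python
import numpy as np
from sklearn.svm import SVC

def sse_of_subset(X, y, subset):
    """SSE of a least-squares model that is linear in the parameters.
    Here the basis is just the selected raw features plus a bias term."""
    Phi = np.column_stack([np.ones(len(X))] + [X[:, j] for j in subset])
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    resid = y - Phi @ theta
    return float(resid @ resid)

def forward_select(X, y, k):
    """Greedy forward selection of k features by smallest SSE."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        best = min(remaining, key=lambda j: sse_of_subset(X, y, selected + [j]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Usage: pick 3 features from synthetic data and train an SVM on them.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 3] - X[:, 7] > 0).astype(int)
features = forward_select(X, y.astype(float), k=3)
clf = SVC(kernel="rbf").fit(X[:, features], y)
```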
Abstract:
As a potential alternative to CMOS technology, QCA provides an interesting paradigm for both communication and computation. However, QCA's unique four-phase clocking scheme and timing constraints present serious timing issues for interconnection and feedback. In this work, a cut-set retiming design procedure is proposed to resolve these QCA timing issues. The proposed design procedure accommodates QCA's unique characteristics by performing delay-transfer and time-scaling to reallocate the existing delays so as to achieve efficient clocking-zone assignment. Cut-set retiming makes it possible to effectively design relatively complex QCA circuits that include feedback. It utilizes the characteristics of synchronization, deep pipelining and local interconnection common to both QCA and systolic architectures. As a case study, a systolic Montgomery modular multiplier is designed to illustrate the procedure. Furthermore, a nonsystolic architecture, the S27 benchmark circuit, is designed and compared with previous designs. The comparison shows that the cut-set retiming method achieves a more efficient design, with reductions of 22%, 44%, and 46% in cell count, area, and latency, respectively.
Abstract:
We describe a new ab initio method for solving the time-dependent Schrödinger equation for multi-electron atomic systems exposed to intense short-pulse laser light. We call the method the R-matrix with time-dependence (RMT) method. Our starting point is a finite-difference numerical integrator (HELIUM), which has proved successful at describing few-electron atoms and atomic ions in strong laser fields with high accuracy. By exploiting the R-matrix division-of-space concept, we bring together a numerical method most appropriate to the multi-electron finite inner region (R-matrix basis set) and a different numerical method most appropriate to the one-electron outer region (finite difference). In order to exploit massively parallel supercomputers efficiently, we time-propagate the wavefunction in both regions by employing Arnoldi methods, originally developed for HELIUM.
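Arnoldi time propagation is a standard Krylov-subspace technique; the Python sketch below shows the generic idea of propagating a wavefunction by exponentiating a small Hessenberg matrix. The subspace dimension, the matrix-free Hamiltonian interface and the toy finite-difference Hamiltonian are illustrative assumptions, not the HELIUM/RMT implementation.

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_propagate(H_apply, psi, dt, m=20):
    """Approximate exp(-i*H*dt) @ psi in an m-dimensional Krylov (Arnoldi) subspace.
    H_apply: callable returning H @ v, so H never needs to be stored explicitly."""
    n = psi.size
    V = np.zeros((n, m + 1), dtype=complex)
    h = np.zeros((m + 1, m), dtype=complex)
    beta = np.linalg.norm(psi)
    V[:, 0] = psi / beta
    for j in range(m):
        w = H_apply(V[:, j]).astype(complex)
        for i in range(j + 1):                      # Gram-Schmidt against previous basis vectors
            h[i, j] = np.vdot(V[:, i], w)
            w -= h[i, j] * V[:, i]
        h[j + 1, j] = np.linalg.norm(w)
        if h[j + 1, j] < 1e-12:                     # invariant subspace reached early
            m = j + 1
            break
        V[:, j + 1] = w / h[j + 1, j]
    small = expm(-1j * dt * h[:m, :m])[:, 0]        # exponentiate the small Hessenberg matrix
    return beta * (V[:, :m] @ small)

# Usage: a Gaussian wavepacket on a 1-D grid with a kinetic-energy Hamiltonian.
n, dx = 200, 0.1
x = (np.arange(n) - n / 2) * dx
H = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / (2 * dx**2)   # -(1/2) d^2/dx^2, finite differences
psi0 = np.exp(-x**2).astype(complex)
psi1 = arnoldi_propagate(lambda v: H @ v, psi0, dt=0.01)
```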
Abstract:
Background: Tissue MicroArrays (TMAs) represent a potential high-throughput platform for the analysis and discovery of tissue biomarkers. As TMA slides are produced manually and subject to processing and sectioning artefacts, the layout of TMA cores on the final slide and subsequent digital scan (TMA digital slide) is often disturbed, making it difficult to associate cores with their original position in the planned TMA map. Additionally, the individual cores can be greatly altered, and the grid can contain numerous irregularities such as missing cores, rotation and stretching. These factors demand the development of a robust method for de-arraying TMAs which identifies each TMA core and assigns it to its appropriate coordinates on the constructed TMA slide.
Methodology: This study presents a robust TMA de-arraying method consisting of three functional phases: TMA core segmentation, gridding and mapping. The segmentation of TMA cores uses a set of morphological operations to identify each TMA core. Gridding then utilises a Delaunay Triangulation based method to find the row and column indices of each TMA core. Finally, mapping correlates each TMA core from a high resolution TMA whole slide image with its name within a TMAMap.
Conclusion: This study describes a robust TMA de-arraying algorithm for the rapid identification of TMA cores from digital slides. The result of this de-arraying algorithm allows the easy partitioning of each TMA core for further processing. Based on a test group of 19 TMA slides (3129 cores), 99.84% of cores were segmented successfully, 99.81% of cores were gridded correctly and 99.96% of cores were mapped to their correct names via TMAMaps. The gridding of TMA cores was also extensively tested using a set of 113 pseudo slides (13,536 cores) with a variety of irregular grid layouts including missing cores, rotation and stretching. 100% of these cores were gridded correctly.
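For illustration only, a simplified Python sketch of the gridding step: core centroids produced by segmentation are grouped into rows and columns by 1-D gap clustering. This is an assumption-laden stand-in for the published Delaunay-triangulation-based gridding and, unlike the real method, will not tolerate significant rotation or stretching.

```python
import numpy as np

def cluster_1d(values, split_fraction=0.5):
    """Group 1-D coordinates into bands, splitting wherever the gap between
    neighbouring sorted values exceeds split_fraction of the largest gap
    (assumes the row/column pitch is much larger than within-band jitter)."""
    order = np.argsort(values)
    gaps = np.diff(values[order])
    threshold = split_fraction * gaps.max()
    labels = np.empty(len(values), dtype=int)
    band = 0
    labels[order[0]] = band
    for i, gap in enumerate(gaps):
        if gap > threshold:
            band += 1
        labels[order[i + 1]] = band
    return labels

def grid_cores(centroids):
    """Assign (row, column) indices to segmented TMA core centroids (N x 2 array)."""
    rows = cluster_1d(centroids[:, 1])   # bands of similar y -> rows
    cols = cluster_1d(centroids[:, 0])   # bands of similar x -> columns
    return list(zip(rows, cols))

# Usage with hypothetical centroids (two rows x three columns, slightly jittered):
pts = np.array([[10, 10], [30, 11], [50, 9], [11, 40], [31, 41], [49, 39]], dtype=float)
print(grid_cores(pts))
```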
Abstract:
Background: A suite of 10 online virtual patients developed using the IVIMEDS ‘Riverside’ authoring tool has been introduced into our undergraduate general practice clerkship. These cases provide a multimedia-rich experience to students. Their interactive nature promotes the development of clinical reasoning skills such as discriminating key clinical features, integrating information from a variety of sources and forming diagnoses and management plans.
Aims: To evaluate the usefulness and usability of a set of online virtual patients in an undergraduate general practice clerkship.
Method: An online questionnaire, incorporating the System Usability Scale, completed by students after their general practice placement.
Results: There was a 57% response rate. Ninety-five per cent of students agreed that the online package was a useful learning tool and ranked virtual patients third out of six learning modalities. Questions and answers and the use of images and videos were all rated highly by students as useful learning methods. The package was perceived to have a high level of usability among respondents.
Conclusion: Feedback from students suggests that this implementation of virtual patients, set in primary care, is user friendly and a valuable adjunct to their learning. The cost of producing such learning resources demands close attention to design.
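The System Usability Scale score referred to above follows the standard published formula (odd-numbered items contribute the response minus 1, even-numbered items 5 minus the response, and the sum is scaled by 2.5); a small Python helper, with a purely hypothetical set of responses:

```python
def sus_score(responses):
    """System Usability Scale score (0-100) from the ten 1-5 Likert responses."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly ten items")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # items 1,3,5,7,9 vs items 2,4,6,8,10
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))     # -> 80.0 for this hypothetical respondent
```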
Abstract:
Motivation: We study a stochastic method for approximating the set of local minima in partial RNA folding landscapes associated with a bounded-distance neighbourhood of folding conformations. The conformations are limited to RNA secondary structures without pseudoknots. The method aims at exploring partial energy landscapes pL induced by folding simulations and their underlying neighbourhood relations. It combines an approximation of the number of local optima devised by Garnier and Kallel (2002) with a run-time estimation for identifying sets of local optima established by Reeves and Eremeev (2004).
Results: The method is tested on nine sequences of length between 50 nt and 400 nt, which allows us to compare the results with data generated by RNAsubopt and subsequent barrier tree calculations. On the nine sequences, the method captures on average 92% of local minima with settings designed for a target of 95%. The run-time of the heuristic can be estimated by O(n² D ν ln ν), where n is the sequence length, ν is the number of local minima in the partial landscape pL under consideration and D is the maximum number of steepest descent steps in attraction basins associated with pL.
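As a hedged sketch of the sampling idea, the Python fragment below performs repeated steepest descents from random conformations and extrapolates the total number of local minima with the Chao1 richness estimator. Chao1 is a stand-in chosen for brevity, not the Garnier-Kallel estimator or the Reeves-Eremeev run-time analysis used in the paper, and both callbacks are hypothetical.

```python
import random
from collections import Counter

def estimate_local_minima(random_start, steepest_descent, n_samples=500, seed=0):
    """Sample local minima of a (partial) folding landscape and estimate their total number.

    random_start()      -> a random conformation (e.g. a dot-bracket string)
    steepest_descent(c) -> the local minimum reached from conformation c
    """
    random.seed(seed)
    hits = Counter(steepest_descent(random_start()) for _ in range(n_samples))
    observed = len(hits)
    f1 = sum(1 for c in hits.values() if c == 1)     # minima seen exactly once
    f2 = sum(1 for c in hits.values() if c == 2)     # minima seen exactly twice
    chao1 = observed + (f1 * f1) / (2 * f2) if f2 else observed + f1 * (f1 - 1) / 2
    return observed, chao1
```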
Abstract:
A surface plasmon resonance (SPR) optical biosensor method was developed for the detection of paralytic shellfish poisoning (PSP) toxins in shellfish. This application was transferred in the form of a prototype kit to seven laboratories using Biacore Q SPR optical biosensor instrumentation for interlaboratory evaluation. Each laboratory received 20 shellfish samples across a range of species, including blind duplicates, for analysis. The samples consisted of 4 noncontaminated samples spiked in duplicate with either a low level of PSP toxins (240 µg STXdiHCl equivalents/kg) or a high level of saxitoxin (825 µg STXdiHCl/kg), 2 noncontaminated samples, and 14 naturally contaminated samples. All 7 participating laboratories completed the study, and HorRat values obtained were
Abstract:
A method for the hydrothermal synthesis of a single layer of zeolite Beta crystals on a molybdenum substrate for microreactor applications has been developed. Before the hydrothermal synthesis, the surface of the substrate was modified by an etching procedure that increases the roughness at the nanoscale without completely eliminating the surface lay structure. Then, thin films of Al2O3 (170 nm) and TiO2 (50 nm) were successively deposited on the substrate by atomic layer deposition (ALD). The internal Al2O3 film protects the Mo substrate from oxidation up to 550 °C in an oxidative environment. The high wettability of the external TiO2 film after UV irradiation increases zeolite nucleation on its surface. The roles of the metal precursor (TiCl4 vs TiI4), deposition temperature (300 vs 500 °C), and film thickness (50 vs 100 nm) were investigated to obtain titania films with the slowest decay of the superhydrophilic behavior after UV irradiation. Zeolite Beta coatings with a Si/Al ratio of 23 were grown at 140 °C for 48 h. After ion exchange with a 10⁻⁴ M cobalt acetate solution, the activity of the coatings was determined in the ammoxidation of ethylene to acetonitrile in a microstructured reactor. A maximum reaction rate of 220 µmol C2H3N g⁻¹ s⁻¹ was obtained at 500 °C, with 42% carbon selectivity to acetonitrile.
Abstract:
An indicator ink based on the redox dye 2,6-dichloroindophenol (DCIP) is described, which allows the rapid assessment of the activity of thin, commercial photocatalytic films, such as Activ. The ink works via a photoreductive mechanism, DCIP being reduced to dihydro-DCIP within ca. 7.5 minutes of exposure to UVA irradiation of moderate intensity (ca. 4.8 mW cm⁻²). The kinetics of photoreduction are found to be independent of the level of dye present in the ink formulation, but are highly sensitive to the level of glycerol. This latter observation may be associated with a solvatochromic effect, whereby the microenvironment in which the dye finds itself and, as a consequence, its reactivity are altered significantly by small changes in the glycerol content. The kinetics of photoreduction also appear linearly dependent on the UVA light intensity, with an observed quantum efficiency of ca. 1.8 × 10⁻³.