923 results for mesh-free method
Abstract:
In this paper, a hybrid smoothed finite element method (H-SFEM) is developed for solid mechanics problems by combining techniques of the finite element method (FEM) and the node-based smoothed finite element method (NS-FEM) using a triangular mesh. H-SFEM is equipped with an adjustable parameter, and the strain field is assumed to be a weighted average of the compatible strains from FEM and the smoothed strains from NS-FEM. We prove theoretically that the strain energy obtained from the H-SFEM solution lies between those from the compatible FEM solution and the NS-FEM solution, which guarantees the convergence of H-SFEM. Intensive numerical studies are conducted to verify these theoretical results and show that (1) upper and lower bound solutions can always be obtained by adjusting the parameter; (2) there exists a preferable parameter value at which H-SFEM produces an ultra-accurate solution.
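A minimal sketch of the weighted-average strain field described above, written in LaTeX; the weighting symbol α is a hypothetical label introduced here, since the parameter symbol is missing from the abstract:

```latex
% Assumed form of the H-SFEM strain field; \alpha is a hypothetical label
% for the weighting parameter described in the abstract.
\[
  \bar{\varepsilon}
    = (1-\alpha)\,\varepsilon^{\mathrm{FEM}}
    + \alpha\,\tilde{\varepsilon}^{\mathrm{NS}},
  \qquad 0 \le \alpha \le 1,
\]
% where \varepsilon^{\mathrm{FEM}} denotes the compatible FEM strains and
% \tilde{\varepsilon}^{\mathrm{NS}} the node-based smoothed (NS-FEM) strains,
% so \alpha = 0 recovers FEM and \alpha = 1 recovers NS-FEM.
```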
Abstract:
An efficient numerical method to compute nonlinear solutions for two-dimensional steady free-surface flow over an arbitrary channel bottom topography is presented. The approach is based on a boundary integral equation technique similar to that of Vanden-Broeck (1996, J. Fluid Mech., 330, 339-347). The typical approach to this problem is to prescribe the shape of the channel bottom topography, with the free surface found as part of the solution. Here we take an inverse approach and prescribe the shape of the free surface a priori while solving for the corresponding bottom topography. We show how this inverse approach is particularly useful when studying topographies that give rise to wave-free solutions, allowing us to easily classify eleven basic flow types. Finally, the inverse approach is also adapted to calculate a distribution of pressure on the free surface, given the free-surface shape itself.
Abstract:
Based on theoretical prediction, a g-C3N4@carbon metal-free oxygen reduction reaction (ORR) electrocatalyst was designed and synthesized by uniform incorporation of g-C3N4 into a mesoporous carbon to enhance the electron transfer efficiency of g-C3N4. The resulting g-C3N4@carbon composite exhibited competitive catalytic activity (kinetic-limiting current density of 11.3 mA cm−2 at −0.6 V) and superior methanol tolerance compared to a commercial Pt/C catalyst. Furthermore, it demonstrated significantly higher catalytic efficiency (nearly 100% selectivity for the four-electron ORR pathway) than the Pt/C catalyst. The proposed synthesis route is facile and low-cost, providing a feasible method for the development of highly efficient electrocatalysts.
Abstract:
A high-performance liquid chromatography (HPLC) method coupled with solid-phase extraction was developed for the determination of isofraxidin in rat plasma after oral administration of Acanthopanax senticosus extract (ASE), and the pharmacokinetic parameters of isofraxidin given either in ASE or as the pure compound were measured. The HPLC analysis was performed on a Dikma Diamonsil RP(18) column (4.6 mm x 150 mm, 5 µm) with isocratic elution of solvent A (acetonitrile) and solvent B (0.1% aqueous phosphoric acid, v/v) (A : B = 22 : 78), and the detection wavelength was set at 343 nm. The calibration curve was linear over the range of 0.156-15.625 µg/ml. The limit of detection was 60 ng/ml. The intra-day precision was 5.8%, and the inter-day precision was 6.0%. The recovery was 87.30 ± 1.73%. When ASE was given at a dose equivalent to the pure compound in terms of isofraxidin content, two maximum concentrations were observed in plasma, whereas the pure compound showed only one peak in the plasma concentration-time curve. The isofraxidin content determined in plasma after oral administration of ASE represents the total of free isofraxidin and its precursors originally present in ASE. The pharmacokinetic characteristics of ASE demonstrate the advantage of the extract and the properties of traditional Chinese medicine.
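As a minimal illustration (not taken from the paper), the sketch below shows how a linear calibration curve over the reported range could be fit and used to back-calculate plasma concentrations; the standard concentrations and peak areas are made-up example values.

```python
import numpy as np

# Hypothetical calibration standards spanning the reported linear range
# (0.156-15.625 ug/ml) and their measured peak areas (arbitrary units).
conc_std = np.array([0.156, 0.3125, 0.625, 1.25, 2.5, 5.0, 10.0, 15.625])
peak_area = np.array([1.9, 3.8, 7.4, 15.1, 30.3, 60.8, 121.0, 190.2])

# Least-squares fit: area = slope * concentration + intercept.
slope, intercept = np.polyfit(conc_std, peak_area, 1)

def conc_from_area(area):
    """Back-calculate concentration (ug/ml) from a measured peak area."""
    return (area - intercept) / slope

print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")
print(f"sample with area 45.0 -> {conc_from_area(45.0):.3f} ug/ml")
```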
Abstract:
In this paper, a recently introduced model-based method for precedent-free fault detection and isolation (FDI) is modified to deal with multiple-input, multiple-output (MIMO) systems and is applied to an automotive engine with an exhaust gas recirculation (EGR) system. Using normal-behavior data generated by a high-fidelity engine simulation, the growing structure multiple model system (GSMMS) approach is used to construct dynamic models of normal behavior for the EGR system and its constituent subsystems. Using the GSMMS models as a foundation, anomalous behavior is detected whenever the most recent modeling residuals depart in a statistically significant way from the residuals displayed during normal behavior. By reconnecting the anomaly detectors (ADs) to the constituent subsystems, EGR valve, cooler, and valve controller faults are isolated without the need for prior training on data corresponding to particular faulty system behaviors.
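A deliberately simple sketch of residual-based anomaly detection of this general kind (not the GSMMS implementation), assuming a baseline sample of modeling residuals recorded during normal operation:

```python
import numpy as np

def residual_anomaly(baseline_residuals, recent_residuals, z_threshold=3.0):
    """Flag an anomaly when the mean of the recent residual window departs
    from the baseline residual distribution by more than z_threshold
    standard errors. A simplified stand-in for the statistical test
    described in the abstract."""
    mu = np.mean(baseline_residuals)
    sigma = np.std(baseline_residuals, ddof=1)
    n = len(recent_residuals)
    z = (np.mean(recent_residuals) - mu) / (sigma / np.sqrt(n))
    return abs(z) > z_threshold, z

# Example with synthetic residuals: a normal window vs. a biased fault window.
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, size=5000)
normal_window = rng.normal(0.0, 1.0, size=50)
faulty_window = rng.normal(1.5, 1.0, size=50)   # persistent bias suggests a fault

print(residual_anomaly(baseline, normal_window))  # expected: (False, small z)
print(residual_anomaly(baseline, faulty_window))  # expected: (True, large z)
```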
Abstract:
A rapid electrochemical method based on using a clean hydrogen-bubble template to form a bimetallic porous honeycomb Cu/Pd structure has been investigated. The addition of palladium salt to a copper-plating bath under conditions of vigorous hydrogen evolution was found to influence the pore size and the bulk concentrations of copper and palladium in the honeycomb bimetallic structure. The surface was characterised by X-ray photoelectron spectroscopy, which revealed that the surface of the honeycomb Cu/Pd is rich in a Cu/Pd alloy. The inclusion of palladium in the bimetallic structure not only influenced the pore size, but also modified the dendritic internal wall structure of the parent copper material into small nanometre-sized crystallites. The chemical composition of the bimetallic structure and the substantial morphology changes were found to significantly influence the surface-enhanced Raman spectroscopic response for immobilised rhodamine B and the hydrogen-evolution reaction. The ability to create free-standing films of this honeycomb material may also have many advantages in the areas of gas- and liquid-phase heterogeneous catalysis.
Abstract:
Whole-image descriptors such as GIST have been used successfully for persistent place recognition when combined with temporal or sequential filtering techniques. However, whole-image descriptor localization systems often apply a heuristic rather than a probabilistic approach to place recognition, requiring substantial environment-specific tuning prior to deployment. In this paper we present a novel online solution that uses statistical approaches to calculate place recognition likelihoods for whole-image descriptors, without requiring either environmental tuning or pre-training. Using a real-world benchmark dataset, we show that this method creates distributions appropriate to a specific environment in an online manner. Our method performs comparably to FAB-MAP in raw place recognition performance, and integrates into a state-of-the-art probabilistic mapping system to provide superior performance to whole-image methods that are not based on true probability distributions. The method provides a principled means for combining the powerful change-invariant properties of whole-image descriptors with probabilistic back-end mapping systems, without the need for prior training or system tuning.
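A generic illustration (not the paper's exact model) of how whole-image descriptor distances can be converted into normalized place-recognition likelihoods using statistics estimated online from the comparison set itself, so no environment-specific tuning or pre-training is needed:

```python
import numpy as np
from scipy.stats import norm

def place_likelihoods(query_desc, map_descs):
    """Convert whole-image descriptor distances into place-recognition
    likelihoods. The bulk of distances is treated as a 'non-match' Gaussian
    whose parameters are estimated online from the current comparison set;
    unusually small distances receive high likelihood."""
    d = np.linalg.norm(map_descs - query_desc, axis=1)  # distance to each stored place
    mu, sigma = d.mean(), d.std(ddof=1) + 1e-9          # online non-match statistics
    lik = norm.sf((d - mu) / sigma)                     # survival function: small d -> ~1
    return lik / lik.sum()                              # normalize over candidate places

# Example with random GIST-like descriptors and a noisy revisit of place 42.
rng = np.random.default_rng(1)
map_descs = rng.normal(size=(100, 512))
query = map_descs[42] + rng.normal(scale=0.05, size=512)
print(int(np.argmax(place_likelihoods(query, map_descs))))  # expected: 42
```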
Abstract:
Whole-image descriptors have recently been shown to be remarkably robust to perceptual change, especially compared to local features. However, whole-image-based localization systems typically rely on heuristic methods for determining appropriate matching thresholds in a particular environment. These environment-specific tuning requirements and the lack of a meaningful interpretation of the arbitrary thresholds limit the general applicability of these systems. In this paper we present a Bayesian model of probability for whole-image descriptors that can be seamlessly integrated into localization systems designed for probabilistic visual input. We demonstrate this method using CAT-Graph, an appearance-based visual localization system originally designed for FAB-MAP-style probabilistic input. We show that using whole-image descriptors as visual input extends CAT-Graph’s functionality to environments that experience a greater amount of perceptual change. We also present a method of estimating whole-image probability models in an online manner, removing the need for a prior training phase. We show that this online, automated training method can perform comparably to pre-trained, manually tuned local descriptor methods.
Abstract:
Purpose. To establish a simple and rapid analytical method, based on direct insertion/electron ionization-mass spectrometry (DI/EI-MS), for measuring free cholesterol in tears from humans and rabbits. Methods. A stable-isotope dilution protocol employing DI/EI-MS in selected ion monitoring (SIM) mode was developed and validated. It was used to quantify the free cholesterol content in human and rabbit tear extracts. Tears were collected from adult humans (n = 15) and rabbits (n = 10) and lipids extracted. Results. Screening, full-scan (m/z 40-600) DI/EI-MS analysis of crude tear extracts showed that the diagnostic ions located in the mass range m/z 350 to 400 were derived from free cholesterol, with no contribution from cholesterol esters. DI/EI-MS data acquired using SIM were analyzed for the abundance ratios of the diagnostic ions relative to their stable isotope-labeled analogues arising from the D6-cholesterol internal standard. Standard curves of good linearity were produced, with an on-probe limit of detection of 3 ng (at 3:1 signal to noise) and a limit of quantification of 8 ng (at 10:1 signal to noise). The concentration of free cholesterol in human tears was 15 ± 6 μg/g, which was higher than in rabbit tears (10 ± 5 μg/g). Conclusions. A stable-isotope dilution DI/EI-SIM method for free cholesterol quantification without prior chromatographic separation was established. Using this method demonstrated that humans have higher free cholesterol levels in their tears than rabbits, in agreement with previous reports. This paper provides a rapid and reliable method to measure free cholesterol in small-volume clinical samples. © 2013 The Association for Research in Vision and Ophthalmology, Inc.
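A minimal illustration (not the paper's exact calculation) of the general stable-isotope dilution arithmetic: the analyte amount follows from the abundance ratio of a diagnostic cholesterol ion to its D6-labelled analogue, scaled by the known internal-standard spike and a calibration slope; all numbers below are hypothetical.

```python
def cholesterol_ng(ion_ratio, d6_spike_ng, response_factor=1.0):
    """Estimate free cholesterol (ng) by stable-isotope dilution.
    ion_ratio: abundance(unlabelled diagnostic ion) / abundance(D6-labelled ion)
    d6_spike_ng: amount of D6-cholesterol internal standard added (ng)
    response_factor: slope of the standard curve (assumed ~1 here)."""
    return ion_ratio * d6_spike_ng / response_factor

# Hypothetical tear extract: ratio 0.75 against a 20 ng D6-cholesterol spike.
print(f"{cholesterol_ng(0.75, 20.0):.1f} ng free cholesterol")  # 15.0 ng
```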
Abstract:
The aim of this research is to report initial experimental results and evaluation of a clinician-driven automated method that can address the issue of misdiagnosis from unstructured radiology reports. Timely diagnosis and reporting of patient symptoms in hospital emergency departments (ED) is a critical component of health services delivery. However, due to dispersed information resources and vast amounts of manual processing of unstructured information, an accurate point-of-care diagnosis is often difficult. A rule-based method that considers the occurrence of clinician-specified keywords related to radiological findings was developed to identify limb abnormalities, such as fractures. A dataset containing 99 narrative reports of radiological findings was sourced from a tertiary hospital. The rule-based method achieved an F-measure of 0.80 and an accuracy of 0.80. While the method achieves promising performance, a number of avenues for improvement using advanced natural language processing (NLP) techniques were identified.
Abstract:
Background Timely diagnosis and reporting of patient symptoms in hospital emergency departments (ED) is a critical component of health services delivery. However, due to dispersed information resources and a vast amount of manual processing of unstructured information, accurate point-of-care diagnosis is often difficult. Aims The aim of this research is to report an initial experimental evaluation of a clinician-informed automated method addressing initial misdiagnoses associated with delayed receipt of unstructured radiology reports. Method A method was developed that resembles clinical reasoning for identifying limb abnormalities. The method consists of a gazetteer of keywords related to radiological findings; an X-ray report is classified as abnormal if it contains evidence matching the gazetteer. A set of 99 narrative reports of radiological findings was sourced from a tertiary hospital. Reports were manually assessed by two clinicians, and discrepancies were validated by a third expert ED clinician; the final manual classification generated by the expert ED clinician was used as the ground truth to empirically evaluate the approach. Results The automated method, which identifies limb abnormalities by searching for keywords specified by clinicians, achieved an F-measure of 0.80 and an accuracy of 0.80. Conclusion While the automated clinician-driven method achieved promising performance, a number of avenues for improvement were identified using advanced natural language processing (NLP) and machine learning techniques.
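A minimal sketch of this style of gazetteer-based classification and its F-measure evaluation; the keyword list, reports, and labels below are hypothetical stand-ins, not the study's actual data.

```python
import re

# Hypothetical gazetteer of clinician-specified keywords indicating a limb
# abnormality; the study's actual keyword list is not given in the abstract.
GAZETTEER = ["fracture", "dislocation", "avulsion", "displaced", "comminuted"]

def classify_report(report_text):
    """Label an X-ray report abnormal if any gazetteer keyword occurs."""
    text = report_text.lower()
    return any(re.search(r"\b" + kw + r"\b", text) for kw in GAZETTEER)

def f_measure(predictions, truth):
    """Standard F1 score over boolean predictions vs. ground truth."""
    tp = sum(p and t for p, t in zip(predictions, truth))
    fp = sum(p and not t for p, t in zip(predictions, truth))
    fn = sum(t and not p for p, t in zip(predictions, truth))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

reports = ["Undisplaced fracture of the distal radius.", "No bony abnormality seen."]
truth = [True, False]
preds = [classify_report(r) for r in reports]
print(preds, f_measure(preds, truth))
```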
Abstract:
Background Accelerometers have become one of the most common methods of measuring physical activity (PA). Thus, the validity of accelerometer data reduction approaches remains an important research area. Yet, few studies directly compare data reduction approaches and other PA measures in free-living samples. Objective To compare PA estimates provided by 3 accelerometer data reduction approaches, steps, and 2 self-reported estimates: Crouter's 2-regression model, Crouter's refined 2-regression model, the weighted cut-point method adopted in the National Health and Nutrition Examination Survey (NHANES; 2003-2004 and 2005-2006 cycles), steps, IPAQ, and 7-day PA recall. Methods A worksite sample (N = 87) completed online surveys and wore ActiGraph GT1M accelerometers and pedometers (SW-200) during waking hours for 7 consecutive days. Daily time spent in sedentary, light, moderate, and vigorous intensity activity and the percentage of participants meeting PA recommendations were calculated and compared. Results Crouter's 2-regression (161.8 ± 52.3 minutes/day) and refined 2-regression (137.6 ± 40.3 minutes/day) models provided significantly higher estimates of moderate and vigorous PA and proportions of participants meeting PA recommendations (91% and 92%, respectively) compared with the NHANES weighted cut-point method (39.5 ± 20.2 minutes/day, 18%). Differences between other measures were also significant. Conclusions When comparing 3 accelerometer cut-point methods, steps, and self-report measures, estimates of PA participation vary substantially.
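As a rough illustration of what a cut-point data reduction approach does, the sketch below classifies one-minute accelerometer counts into intensity categories; the thresholds shown are the commonly cited NHANES/Troiano values and are an assumption for illustration, not taken from this abstract.

```python
import numpy as np

# Commonly cited NHANES/Troiano cut-points (counts per minute); these exact
# thresholds are an assumption for illustration, not stated in the abstract.
SEDENTARY_MAX, LIGHT_MAX, MODERATE_MAX = 99, 2019, 5998

def classify_minutes(counts_per_min):
    """Classify each 1-minute epoch of accelerometer counts into an
    intensity category and return minutes spent in each category."""
    c = np.asarray(counts_per_min)
    return {
        "sedentary": int(np.sum(c <= SEDENTARY_MAX)),
        "light": int(np.sum((c > SEDENTARY_MAX) & (c <= LIGHT_MAX))),
        "moderate": int(np.sum((c > LIGHT_MAX) & (c <= MODERATE_MAX))),
        "vigorous": int(np.sum(c > MODERATE_MAX)),
    }

# One simulated hour of wear time.
rng = np.random.default_rng(2)
print(classify_minutes(rng.integers(0, 8000, size=60)))
```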
Abstract:
Introduction Natural product provenance is important in the food, beverage, and pharmaceutical industries, for consumer confidence and for its health implications. Raman spectroscopy has powerful molecular fingerprinting abilities, and the sharp peaks of surface-enhanced Raman spectroscopy (SERS) allow distinction between minimally different molecules, so it should be suitable for this purpose. Methods Naturally caffeinated beverages containing Guarana extract, coffee, and (as a synthetic caffeinated beverage for comparison) Red Bull energy drink (20 µL each) were reacted 1:1 for 10 minutes with gold nanoparticles functionalised with anti-caffeine antibody (ab15221), air-dried, and analysed in a micro-Raman instrument. The spectral data were processed using Principal Component Analysis (PCA). Results The PCA showed that Guarana-sourced caffeine varied significantly from synthetic caffeine (Red Bull) on component 1 (containing 76.4% of the variance in the data); see Figure 1. The coffee-containing beverages, in particular Robert Timms (instant coffee), were very similar on component 1, although the barista espresso showed minor variance on component 1. Both coffee-sourced caffeine samples differed from Red Bull on component 2 (20% of the variance). [Figure 1: PCA comparing a naturally caffeinated beverage containing Guarana with coffee.] Discussion PCA is an unsupervised multivariate statistical method that determines patterns within data. Figure 1 shows that caffeine in Guarana is notably different from synthetic caffeine. Other researchers have shown that caffeine in Guarana plants is complexed with tannins. In Figure 1, naturally sourced/lightly processed caffeine (Monster Energy, espresso) is more inherently different from synthetic (Red Bull)/highly processed (Robert Timms) caffeine, which is consistent with this finding and demonstrates the technique's applicability. Guarana provenance is important because it is still largely hand-produced and its demand is escalating with recognition of its benefits. This could be a powerful technique for Guarana provenance, and may extend to other industries where provenance/authentication is required, e.g. the wine or natural pharmaceuticals industries.
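An illustrative sketch of PCA applied to spectra of this kind; the "spectra" below are synthetic stand-ins (a shared background plus a sample-specific peak), not the study's SERS data.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic 'SERS spectra' (samples x wavenumber bins) standing in for the
# Guarana and Red Bull measurements described in the abstract.
rng = np.random.default_rng(3)
n_bins = 600
base = np.abs(np.sin(np.linspace(0, 20, n_bins)))                     # shared background
guarana = base + 0.4 * np.exp(-((np.arange(n_bins) - 200) ** 2) / 50)  # peak at bin 200
redbull = base + 0.4 * np.exp(-((np.arange(n_bins) - 350) ** 2) / 50)  # peak at bin 350
spectra = np.vstack([s + rng.normal(0, 0.02, n_bins)
                     for s in [guarana] * 5 + [redbull] * 5])

pca = PCA(n_components=2)
scores = pca.fit_transform(spectra)                 # component scores per sample
print("explained variance ratio:", pca.explained_variance_ratio_.round(3))
print("PC1 scores:", scores[:, 0].round(2))         # the two groups separate on PC1
```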
Abstract:
This thesis developed a new method for measuring extremely low amounts of organic and biological molecules using surface-enhanced Raman spectroscopy (SERS). The method has many potential applications, e.g. medical diagnosis, public health, food provenance, anti-doping, forensics, and homeland security. The method development used caffeine as the small-molecule example and erythropoietin (EPO) as the large-molecule example. The method is much more sensitive and specific than currently used methods, as well as rapid, simple, and cost-effective. It can be used to detect target molecules in beverages and biological fluids without the usual preparation steps.
Abstract:
Vertical graphene nanosheets (VGNS) hold great promise for high-performance supercapacitors owing to their excellent electrical transport properties, large surface area and, in particular, an inherent three-dimensional, open network structure. However, it remains challenging to materialise VGNS-based supercapacitors because of poor specific capacitance, high-temperature processing, poor binding to electrode support materials, uncontrollable microstructure, and non-cost-effective fabrication. Here we use a single-step, fast, scalable, and environmentally benign plasma-enabled method to fabricate VGNS from a cheap and spreadable natural fatty precursor, butter, and demonstrate controllability over the degree of graphitization and the density of VGNS edge planes. Our VGNS, employed as binder-free supercapacitor electrodes, exhibit a high specific capacitance of up to 230 F g−1 at a scan rate of 10 mV s−1 and >99% capacitance retention after 1,500 charge-discharge cycles at a high current density, when the optimum combination of graphitic structure and edge-plane effects is utilised. The energy storage performance can be further enhanced by forming stable hybrid MnO2/VGNS nano-architectures which synergistically combine the advantages of both VGNS and MnO2. This deterministic, plasma-unique way of fabricating VGNS may open a new avenue for producing functional nanomaterials for advanced energy storage devices.
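For context on the headline figure, the sketch below evaluates the standard cyclic-voltammetry relation for specific capacitance, C = ∫|i| dV / (ν · ΔV · m) over a single sweep branch; the electrode mass, potential window, and plateau current are hypothetical example values, not taken from the paper.

```python
import numpy as np

def specific_capacitance_cv(voltage, current, scan_rate, mass):
    """Specific capacitance (F/g) from one branch of a cyclic voltammogram,
    using C = integral(|i| dV) / (scan_rate * delta_V * mass).
    voltage in V, current in A, scan_rate in V/s, mass in g."""
    dv = np.diff(voltage)
    i_mid = 0.5 * (np.abs(current[1:]) + np.abs(current[:-1]))
    charge = np.sum(i_mid * dv)            # trapezoidal integral of |i| over V
    delta_v = voltage.max() - voltage.min()
    return charge / (scan_rate * delta_v * mass)

# Hypothetical near-rectangular CV branch at 10 mV/s for a 1 mg electrode.
v = np.linspace(0.0, 0.8, 400)
i = np.full_like(v, 2.3e-3)                # ~2.3 mA plateau current
print(f"{specific_capacitance_cv(v, i, scan_rate=0.01, mass=1e-3):.0f} F/g")
```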