285 results for "Computationally efficient"
Abstract:
Efficient yet inexpensive electrocatalysts for the oxygen reduction reaction (ORR) are an essential component of renewable energy devices such as fuel cells and metal-air batteries. We herein interleaved novel Co3O4 nanosheets with graphene to develop the first sheet-on-sheet heterostructured electrocatalyst for the ORR, whose electrocatalytic activity outperformed state-of-the-art commercial Pt/C with exceptional durability in alkaline solution. The composite demonstrates the highest activity of all nonprecious-metal electrocatalysts, such as those derived from Co3O4 nanoparticle/nitrogen-doped graphene hybrids and carbon nanotube/nanoparticle composites. Density functional theory (DFT) calculations indicated that the outstanding performance originates from significant charge transfer from graphene to the Co3O4 nanosheets, which promotes electron transport through the whole structure. Theoretical calculations revealed that the enhanced stability can be ascribed to the strong interaction between the two types of sheets.
Abstract:
Purified proteins are mandatory for molecular, immunological and cellular studies. However, purification of proteins from complex mixtures requires specialised chromatography methods (e.g., gel filtration, ion exchange) using fast protein liquid chromatography (FPLC) or high-performance liquid chromatography (HPLC) systems. Such systems are expensive, certain proteins require two or more different steps to reach sufficient purity, and recovery is generally low. The aim of this study was to develop a rapid, inexpensive and efficient gel-electrophoresis-based protein purification method using basic and readily available laboratory equipment. We used crude rye grass pollen extract to purify the major allergens Lol p 1 and Lol p 5 as the model protein candidates. Total proteins were resolved on a large primary gel, and Coomassie Brilliant Blue (CBB)-stained Lol p 1/5 allergens were excised and purified on a secondary "mini"-gel. Purified proteins were extracted from unstained separating gels and subjected to sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and immunoblot analyses. Silver-stained SDS-PAGE gels resolved pure proteins (i.e., 875 μg of Lol p 1 recovered from 8 mg of crude starting material), while immunoblot analysis confirmed the immunological reactivity of the purified proteins. This purification method is rapid, inexpensive, and efficient in generating proteins of sufficient purity for use in monoclonal antibody (mAb) production, protein sequencing and general molecular, immunological, and cellular studies.
Abstract:
Background Genetic testing is recommended when the probability of a disease-associated germline mutation exceeds 10%. Germline mutations are found in approximately 25% of individuals with phaeochromocytoma (PCC) or paraganglioma (PGL); however, genetic heterogeneity for PCC/PGL means many genes may require sequencing. A phenotype-directed iterative approach may limit costs but may also delay diagnosis, and will not detect mutations in genes not previously associated with PCC/PGL. Objective To assess whether whole exome sequencing (WES) was efficient and sensitive for mutation detection in PCC/PGL. Methods Whole exome sequencing was performed on blinded samples from eleven individuals with PCC/PGL and known mutations. Illumina TruSeq™ (Illumina Inc, San Diego, CA, USA) was used for exome capture of seven samples, and NimbleGen SeqCap EZ v3.0 (Roche NimbleGen Inc, Basel, Switzerland) for five samples (one sample was repeated). Massive parallel sequencing was performed on multiplexed samples. Sequencing data were called using the Genome Analysis Toolkit and annotated using ANNOVAR. Data were assessed for coding variants in RET, NF1, VHL, SDHD, SDHB, SDHC, SDHA, SDHAF2, KIF1B, TMEM127, EGLN1 and MAX. Target capture of five exome capture platforms was compared. Results Six of seven mutations were detected using Illumina TruSeq™ exome capture. All five mutations were detected using the NimbleGen SeqCap EZ v3.0 platform, including the mutation missed using Illumina TruSeq™ capture. Target capture for exons in known PCC/PGL genes differs substantially between platforms. Exome sequencing was inexpensive (<$A800 per sample for reagents) and rapid (results <5 weeks from sample reception). Conclusion Whole exome sequencing is sensitive, rapid and efficient for detection of PCC/PGL germline mutations. However, capture platform selection is critical to maximize sensitivity.
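The final analysis step described above, restricting called variants to coding changes in the known PCC/PGL gene panel, can be sketched as a simple post-annotation filter. The record fields and effect labels below are hypothetical stand-ins, not ANNOVAR's actual output columns.

```python
# Known PCC/PGL susceptibility genes assessed in the study.
PCC_PGL_GENES = {"RET", "NF1", "VHL", "SDHD", "SDHB", "SDHC", "SDHA",
                 "SDHAF2", "KIF1B", "TMEM127", "EGLN1", "MAX"}

def candidate_variants(variants):
    """Keep coding, non-synonymous variants that fall in the gene panel."""
    return [v for v in variants
            if v["gene"] in PCC_PGL_GENES
            and v["func"] == "exonic"
            and v["effect"] != "synonymous SNV"]

# Toy annotated variants (field names are illustrative only).
variants = [
    {"gene": "VHL",  "func": "exonic",   "effect": "nonsynonymous SNV"},
    {"gene": "TP53", "func": "exonic",   "effect": "nonsynonymous SNV"},
    {"gene": "SDHB", "func": "intronic", "effect": "."},
]
hits = candidate_variants(variants)  # only the coding VHL variant remains
```

In practice such a filter runs over the full annotated exome, so panel membership checks against a set keep the pass inexpensive even for tens of thousands of variants.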
Abstract:
Copper is a low-cost plasmonic metal. Efficient photocatalysts of copper nanoparticles on a graphene support were successfully developed for controllably catalyzing the coupling reactions of aromatic nitro compounds to the corresponding azoxy or azo compounds under visible-light irradiation. The coupling of nitrobenzene produces azoxybenzene with a yield of 90 % at 60 °C, but azobenzene with a yield of 96 % at 90 °C. When irradiated with natural sunlight (mean light intensity of 0.044 W cm−2) at about 35 °C, 70 % of the nitrobenzene is converted and 57 % of the product is azobenzene. The electrons of the copper nanoparticles gain the energy of the incident light through a localized surface plasmon resonance effect and photoexcitation of the bound electrons. The excited energetic electrons at the surface of the copper nanoparticles facilitate the cleavage of the N–O bonds in the aromatic nitro compounds. Hence, the catalyzed coupling reaction can proceed under light irradiation and mild conditions. This study provides a green photocatalytic route for the production of azo compounds and highlights a potential application for graphene.
Abstract:
The "third-generation" 3D graphene structures, T-junction graphene micro-wells (T-GMWs), are produced on cheap polycrystalline Cu foils in a single-step, low-temperature (270 °C), energy-efficient, and environmentally friendly dry plasma-enabled process. T-GMWs comprise vertical graphene (VG) petal-like sheets that seamlessly integrate with each other and with the underlying horizontal graphene sheets by forming T-junctions. The microwells have pico- to femtoliter storage capacity and precipitate compartmentalized PBS crystals. The T-GMW films are transferred from the Cu substrates, without damage to either, in de-ionized or tap water, at room temperature, and without commonly used sacrificial materials or hazardous chemicals. The Cu substrates are then re-used to produce similar-quality T-GMWs after a simple plasma conditioning. The isolated T-GMW films are transferred to diverse substrates and devices and show remarkable recovery of their electrical, optical, and hazardous NO2 gas sensing properties upon repeated bending (down to a 1 mm radius) and release of flexible transparent display plastic substrates. The plasma-enabled mechanism of T-GMW isolation in water is proposed and supported by analysis of the plasma surface modification of the Cu. Our GMWs are suitable for various optoelectronic, sensing, energy, and biomedical applications, while the growth approach is potentially scalable for future pilot-scale industrial production.
Abstract:
We report herein highly efficient photocatalysts comprising supported nanoparticles (NPs) of gold (Au) and palladium (Pd) alloys, which utilize visible light to catalyse Suzuki cross-coupling reactions at ambient temperature. The alloy NPs strongly absorb visible light, energizing the conduction electrons of the NPs and producing highly energetic electrons at the surface sites. The surface of the energized NPs activates the substrates, and these particles exhibit good activity over a range of typical Suzuki reaction combinations. The photocatalytic efficiencies strongly depend on the Au:Pd ratio of the alloy NPs and on the irradiation light intensity and wavelength. The results show that the alloy nanoparticles efficiently couple thermal and photonic energy sources to drive Suzuki reactions. Density functional theory (DFT) calculations indicate that transfer of the light-excited electrons from the nanoparticle surface to the reactant molecules adsorbed on that surface activates the reactants. The knowledge acquired in this study may inspire further studies of new efficient photocatalysts and a wide range of organic syntheses driven by sunlight.
Abstract:
Bird species richness surveys are one of the most intriguing ecological topics for evaluating environmental health. Here, bird species richness denotes the number of unique bird species in a particular area. Factors affecting the investigation of bird species richness include weather, observation bias, and, most importantly, the prohibitive costs of conducting surveys at large spatiotemporal scales. Thanks to advances in recording techniques, these problems have been alleviated by deploying sensors for acoustic data collection. Although automated detection techniques have been introduced to identify various bird species, the innate complexity of bird vocalizations, the background noise present in the recordings, and the escalating volumes of acoustic data make determining bird species richness challenging. In this paper we propose a two-step computer-assisted sampling approach for determining bird species richness in one-day acoustic data. First, a classification model based on acoustic indices is built to filter out minutes that contain few bird species. Then the classified bird minutes are ordered by an acoustic index, and redundant temporal minutes are removed from the ranked minute sequence. The experimental results show that our method directs experts to minutes for bird species determination more efficiently than previous methods.
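The two-step selection above can be sketched as follows. The boolean keep mask stands in for the output of the trained acoustic-index classifier, and the temporal window used to drop redundant minutes is an illustrative choice, not the paper's actual parameter.

```python
def select_minutes(index_values, keep_mask, window=5):
    """Rank classifier-kept minutes by an acoustic index, then drop any
    minute within `window` minutes of an already-selected one."""
    kept = [m for m in range(len(index_values)) if keep_mask[m]]
    # Step 2a: order surviving minutes by the acoustic index, best first.
    ranked = sorted(kept, key=lambda m: index_values[m], reverse=True)
    selected = []
    for m in ranked:
        # Step 2b: skip temporally redundant minutes.
        if all(abs(m - s) > window for s in selected):
            selected.append(m)
    return selected

# Six one-minute segments, all passing the classifier, 1-minute window.
picks = select_minutes([0.1, 0.9, 0.8, 0.2, 0.95, 0.3],
                       [True] * 6, window=1)  # → [4, 1]
```

The experts would then listen only to the selected minutes, which is where the cost saving over exhaustive review comes from.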
Abstract:
Lentiviral vectors pseudotyped with vesicular stomatitis virus glycoprotein (VSV-G) are emerging as the vectors of choice for in vitro and in vivo gene therapy studies. However, the current method for harvesting lentivectors relies upon ultracentrifugation at 50 000 g for 2 h. At this ultra-high speed, rotors currently in use generally have small volume capacity. Therefore, preparations of large volumes of high-titre vectors are time-consuming and laborious to perform. In the present study, viral vector supernatant harvests from vector-producing cells (VPCs) were pre-treated with various amounts of poly-L-lysine (PLL) and concentrated by low-speed centrifugation. Optimal conditions were established when 0.005% PLL (w/v) was added to vector supernatant harvests, followed by incubation for 30 min and centrifugation at 10 000 g for 2 h at 4 °C. Direct comparison with ultracentrifugation demonstrated that the new method consistently produced larger volumes (6 ml) of high-titre viral vector at 1 × 10^8 transduction units (TU)/ml (from about 3000 ml of supernatant) in one round of concentration. Electron microscopic analysis showed that PLL and viral vectors formed complexes, which probably facilitated precipitation at low-speed concentration (10 000 g), a speed which does not usually precipitate viral particles efficiently. Transfection of several cell lines in vitro and transduction in vivo in the liver with the lentivector/PLL complexes demonstrated efficient gene transfer without any significant signs of toxicity. These results suggest that the new method provides a convenient means of harvesting large volumes of high-titre lentivectors, facilitating gene therapy experiments in large animals or human gene therapy trials, in which large amounts of lentiviral vectors are a prerequisite.
Abstract:
Rank-based inference is widely used because of its robustness. This article provides optimal rank-based estimating functions for the analysis of clustered data with random cluster effects. The extensive simulation studies carried out to evaluate the performance of the proposed method demonstrate that it is robust to outliers and highly efficient in the presence of strong cluster correlations. The performance of the proposed method is satisfactory even when the correlation structure is misspecified or when heteroscedasticity is present. Finally, a real dataset is analyzed for illustration.
Abstract:
Sampling strategies are developed based on the idea of ranked set sampling (RSS) to increase efficiency and therefore to reduce the cost of sampling in fishery research. RSS incorporates information on concomitant variables that are correlated with the variable of interest into the selection of samples. For example, estimating a monitoring survey abundance index would be more efficient if the sampling sites were selected based on information from previous surveys or catch rates of the fishery. We use two practical fishery examples to demonstrate the approach: site selection for a fishery-independent monitoring survey in the Australian northern prawn fishery (NPF), and fish age prediction by simple linear regression modelling for a short-lived tropical clupeoid. The relative efficiencies of the new designs were derived analytically and compared with traditional simple random sampling (SRS). Optimal sampling schemes were determined under different optimality criteria. For the NPF monitoring survey, the efficiency in terms of variance or mean squared error of the estimated mean abundance index ranged from 114 to 199% relative to SRS. In the case of a fish ageing study for Tenualosa ilisha in Bangladesh, the efficiency of age prediction from fish body weight reached 140%.
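One cycle of the RSS idea can be sketched numerically: rank sets of units on a cheap concomitant variable (here a stand-in for a previous survey's catch rate), then fully measure only one order statistic per set. The simulated data and correlation strength are illustrative, not the paper's.

```python
import numpy as np

def rss_sample(y, x, set_size, rng):
    """One RSS cycle: set_size**2 units ranked on the cheap concomitant x,
    only set_size units measured on the expensive variable y."""
    idx = rng.choice(len(y), size=(set_size, set_size), replace=False)
    chosen = []
    for i, row in enumerate(idx):
        ranked = row[np.argsort(x[row])]  # rank set i on the concomitant
        chosen.append(ranked[i])          # measure the i-th order statistic
    return y[np.array(chosen)]

rng = np.random.default_rng(0)
y = rng.normal(size=1000)                  # expensive variable of interest
x = y + rng.normal(scale=0.3, size=1000)   # cheap, correlated concomitant
sample = rss_sample(y, x, set_size=5, rng=rng)  # 5 measured from 25 ranked
```

Because the measured units are spread across the order statistics, the RSS mean typically has lower variance than an SRS mean of the same size, which is the source of the 114-199% efficiencies reported above.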
Abstract:
The method of generalized estimating equations (GEEs) provides consistent estimates of the regression parameters in a marginal regression model for longitudinal data, even when the working correlation model is misspecified (Liang and Zeger, 1986). However, the efficiency of a GEE estimate can be seriously affected by the choice of the working correlation model. This study addresses this problem by proposing a hybrid method that combines multiple GEEs based on different working correlation models, using the empirical likelihood method (Qin and Lawless, 1994). Analyses show that this hybrid method is more efficient than a GEE using a misspecified working correlation model. Furthermore, if one of the working correlation structures correctly models the within-subject correlations, then this hybrid method provides the most efficient parameter estimates. In simulations, the hybrid method's finite-sample performance is superior to a GEE under any of the commonly used working correlation models and is almost fully efficient in all scenarios studied. The hybrid method is illustrated using data from a longitudinal study of the respiratory infection rates in 275 Indonesian children.
Abstract:
In treatment comparison experiments, the treatment responses are often correlated with some concomitant variables which can be measured before or at the beginning of the experiments. In this article, we propose schemes for the assignment of experimental units that may greatly improve the efficiency of the comparison in such situations. The proposed schemes are based on general ranked set sampling. The relative efficiency and cost-effectiveness of the proposed schemes are studied and compared. It is found that some proposed schemes are always more efficient than the traditional simple random assignment scheme when the total cost is the same. Numerical studies show promising results using the proposed schemes.
Abstract:
This article is motivated by a lung cancer study where a regression model is involved and the response variable is too expensive to measure but the predictor variable can be measured easily at relatively negligible cost. This situation occurs quite often in medical studies, quantitative genetics, and ecological and environmental studies. In this article, using the idea of ranked-set sampling (RSS), we develop sampling strategies that can reduce cost and increase the efficiency of the regression analysis in the above-mentioned situation. The developed method is applied retrospectively to a lung cancer study, where the interest is to investigate the association between smoking status and three biomarkers: polyphenol DNA adducts, micronuclei, and sister chromatid exchanges. Optimal sampling schemes with different optimality criteria, such as A-, D-, and integrated mean square error (IMSE)-optimality, are considered in the application. With a set size of 10 in RSS, the improvement of the optimal schemes over simple random sampling (SRS) is substantial. For instance, using the optimal scheme with IMSE-optimality, the IMSEs of the estimated regression functions for the three biomarkers are reduced to about half of those incurred using SRS.
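The A- and D-optimality criteria mentioned above can be illustrated for simple linear regression, where a design is judged through its information matrix X'X. The two toy covariate designs below are made up purely for the comparison.

```python
import numpy as np

def information(x):
    """Information matrix X'X for an intercept-plus-slope design."""
    X = np.column_stack([np.ones_like(x), x])
    return X.T @ X

def a_criterion(x):
    """A-optimality: minimise the trace of (X'X)^-1
    (the summed variances of the coefficient estimates)."""
    return np.trace(np.linalg.inv(information(x)))

def d_criterion(x):
    """D-optimality: maximise det(X'X)
    (shrinks the confidence ellipsoid's volume)."""
    return np.linalg.det(information(x))

spread  = np.array([0.0, 1.0, 9.0, 10.0])  # covariate values spread out
bunched = np.array([4.0, 5.0, 5.0, 6.0])   # covariate values bunched up
# The spread design wins under both criteria: larger det(X'X),
# smaller trace of (X'X)^-1.
```

RSS-based schemes improve regression designs in essentially this way: ranking on a cheap variable lets the expensive measurements land at informative, spread-out covariate values.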
Abstract:
In this paper we tackle the problem of efficient video event detection. We argue that linear detection functions should be preferred due to their scalability and efficiency during estimation and evaluation. A popular approach is to represent a sequence using a bag-of-words (BOW) representation because of (i) its fixed dimensionality irrespective of the sequence length, and (ii) its ability to compactly model the statistics of the sequence. A drawback of the BOW representation, however, is the intrinsic destruction of temporal ordering information. In this paper we propose a new representation that leverages the uncertainty in relative temporal alignments between pairs of sequences while not destroying temporal ordering. Our representation, like BOW, is of a fixed dimensionality, making it easily integrated with a linear detection function. Extensive experiments on the CK+, 6DMG, and UvA-NEMO databases show significant performance improvements across both isolated and continuous event detection tasks.
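A minimal numeric sketch of the BOW encoding makes both properties concrete, including the drawback the paper addresses: because frames are pooled into a histogram, reversing the sequence yields an identical representation. The two-codeword codebook and frame features are toy data.

```python
import numpy as np

def bow_encode(frames, codebook):
    """Quantise each frame to its nearest codeword and return a
    normalised histogram: fixed length regardless of sequence length."""
    d = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
    words = d.argmin(axis=1)  # nearest codeword index per frame
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])            # 2 visual words
seq = np.array([[0.1, 0.0], [0.9, 1.0], [0.2, 0.1]])     # 3 frames, 2-D
h_fwd = bow_encode(seq, codebook)
h_rev = bow_encode(seq[::-1], codebook)  # temporal order destroyed:
                                         # identical histogram
```

A linear detector then scores the fixed-length histogram directly, which is what makes the representation cheap at both training and test time.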
Abstract:
Between-subject and within-subject variability is ubiquitous in biology and physiology, and understanding and accounting for it is one of the biggest challenges in medicine. At the same time, it is difficult to investigate this variability by experiments alone. A recent modelling and simulation approach, known as population of models (POM), allows this exploration to take place by building a mathematical model with multiple parameter sets calibrated against experimental data. However, finding such sets within the high-dimensional parameter space of complex electrophysiological models is computationally challenging. By placing the POM approach within a statistical framework, we develop a novel and efficient algorithm based on sequential Monte Carlo (SMC). We compare the SMC approach with Latin hypercube sampling (LHS), a method commonly adopted in the literature for obtaining the POM, in terms of efficiency and output variability in the presence of a drug block, through an in-depth investigation via the Beeler-Reuter cardiac electrophysiological model. We show improved efficiency via SMC, and that it produces responses similar to LHS when making out-of-sample predictions in the presence of a simulated drug block.
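Latin hypercube sampling, the baseline the SMC approach is compared against, can be sketched in a few lines: candidate parameter sets are drawn so that every dimension is stratified evenly. The sample count and dimensionality below are arbitrary; in the POM setting each row would be mapped onto the model's parameter ranges and then calibrated against data.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n points in [0,1)^d with exactly one point per 1/n stratum
    in every dimension."""
    # Place point i in stratum [i/n, (i+1)/n) with uniform jitter.
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n
    # Permute strata independently per dimension to decouple coordinates.
    for j in range(d):
        u[:, j] = u[rng.permutation(n), j]
    return u

rng = np.random.default_rng(1)
pts = latin_hypercube(8, 3, rng)  # e.g. 8 candidate sets of 3 parameters
```

The stratification is what LHS buys over plain uniform sampling; the SMC approach instead concentrates the candidate sets adaptively in regions consistent with the calibration data.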