143 results for Gaussian extended cubature formula
Abstract:
A numerical study was performed to investigate turbulent convection heat transfer on a rectangular plate mounted over a flat surface. The thermal and fluid dynamic performance of extended surfaces with various types of lateral perforations of square, circular, triangular and hexagonal cross section is investigated. A RANS (Reynolds-averaged Navier–Stokes) based modified k–ω turbulence model is used to calculate the fluid flow and heat transfer parameters. The numerical results are compared with previously published experimental data and are in reasonable agreement. Flow and heat transfer parameters are presented for Reynolds numbers from 2000 to 5000 based on the fin thickness.
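As a rough illustration of the flow parameterisation quoted above, the sketch below (a hypothetical Python snippet, not taken from the study) evaluates a thickness-based Reynolds number Re = ρUt/μ; the air properties and the 3 mm fin thickness are assumed values chosen only to show how free-stream velocities map onto the quoted Re range of 2000 to 5000.

```python
# Thickness-based Reynolds number Re = rho * U * t / mu.
# The property values and fin thickness are illustrative assumptions,
# not parameters reported in the cited study.

rho = 1.184      # air density at ~25 degC [kg/m^3]
mu = 1.85e-5     # dynamic viscosity of air [Pa*s]
t = 0.003        # assumed fin thickness [m]

def reynolds_number(velocity_m_per_s: float) -> float:
    """Reynolds number with the fin thickness as the length scale."""
    return rho * velocity_m_per_s * t / mu

for u in (10.0, 20.0, 26.0):
    print(f"U = {u:5.1f} m/s -> Re = {reynolds_number(u):7.0f}")
```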
Abstract:
One of the main challenges facing online and offline path planners is uncertainty in the magnitude and direction of environmental energy, which is dynamic, changes with time, and is hard to forecast. This thesis develops an artificial intelligence approach that enables a mobile robot to learn from historical or forecast data on the environmental energy available in the area of interest, supporting persistent monitoring under uncertainty with the developed algorithm.
Abstract:
A crucial issue with hybrid quantum secret sharing schemes is the amount of data allocated to the participants: the smaller the amount of allocated data, the better the performance of a scheme. Moreover, quantum data is very hard and expensive to handle, so it is desirable to use as little quantum data as possible. To achieve this goal, we first construct extended unitary operations as the tensor product of n (n ≥ 2) basic unitary operations, and then use those extended operations to design two quantum secret sharing schemes. The resulting dual compressible hybrid quantum secret sharing schemes, in which classical data play a complementary role to quantum data, range from threshold to access structure. Compared with existing hybrid quantum secret sharing schemes, our proposed schemes reduce not only the number of quantum participants but also the number of particles and the size of the classical shares. To be exact, the number of particles used to carry quantum data is reduced to 1, while the size of the classical secret shares is also reduced to (l−2)/(m−1) under the ((m+1, n′)) threshold scheme and to (l−2)/r2 (where r2 is the number of maximal unqualified sets) under the adversary-structure scheme. Consequently, our proposed schemes can greatly reduce the cost and difficulty of generating and storing EPR pairs and lower the risk of transmitting encoded particles.
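To make the tensor-product construction above concrete, here is a minimal sketch; the basic unitary operations used in the actual schemes are not specified in the abstract, so Pauli and identity matrices stand in for them purely as an illustration.

```python
# Minimal sketch: an "extended" unitary built as the tensor product of n
# basic unitary operations (Pauli/identity matrices used as stand-ins;
# the scheme's actual basic unitaries are not given in the abstract).
import numpy as np
from functools import reduce

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def extended_unitary(ops):
    """Tensor (Kronecker) product of a list of basic unitaries."""
    return reduce(np.kron, ops)

U = extended_unitary([X, Z, I])            # n = 3 basic operations
# A tensor product of unitaries is itself unitary: U @ U^dagger = identity.
assert np.allclose(U @ U.conj().T, np.eye(U.shape[0]))
print(U.shape)                             # (8, 8): acts on 3 qubits
```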
Abstract:
The Advanced Pharmacy Practice Framework Steering Committee (now replaced by the Pharmacy Practitioner Development Committee) undertook work to develop an advanced pharmacy practice recognition model. As part of that work, and to assure clarity and consistency in the terminology it uses, the Committee collated the definitions used in literature sources consulted. Most recently, this involved a review of the meaning attributed to the terms ‘advanced’ and ‘extended’ when used in the context of describing aspects of professional practice. Both terms encompass the acquisition of additional expertise. While ‘advanced’ practice involves the acquisition of additional expertise to achieve a higher performance level, ‘extended’ practice relates specifically to scope of practice and involves the acquisition of additional expertise sufficient to provide services or perform tasks that are outside the usual scope of practice of the profession. Performance level operates independently of scope of practice but both must be elucidated to fully describe the professional practice of an individual practitioner.
Abstract:
This paper presents an efficient noniterative method for distribution state estimation using the conditional multivariate complex Gaussian distribution (CMCGD). In the proposed method, the mean and standard deviation (SD) of the state variables are obtained in one step, accounting for load uncertainties, measurement errors, and load correlations. First, the bus voltages, branch currents, and injection currents are represented by an MCGD using direct load flow and a linear transformation. Then, the mean and SD of the bus voltages, or other states, are calculated using the CMCGD and an estimation-of-variance method. The mean and SD of pseudo measurements, as well as spatial correlations between pseudo measurements, are modeled from historical data for different levels of the load duration curve. The proposed method can handle load uncertainties without resorting to time-consuming approaches such as Monte Carlo simulation. Simulation results for two case studies, a six-bus network and a realistic 747-bus distribution network, show the effectiveness of the proposed method in terms of speed, accuracy, and quality compared with the conventional approach.
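The conditioning step at the heart of such an approach can be sketched with the standard multivariate-Gaussian formulas. The snippet below is a hypothetical illustration with made-up numbers and a real-valued covariance; it does not reproduce the paper's complex-valued formulation or its direct-load-flow linearisation.

```python
# Conditional mean/SD of jointly Gaussian states given measurements:
#   mu_{1|2} = mu1 + S12 S22^-1 (x2 - mu2),  S_{1|2} = S11 - S12 S22^-1 S21.
# Numbers below are illustrative placeholders only.
import numpy as np

def gaussian_condition(mu, cov, idx_state, idx_meas, meas_values):
    """Return conditional mean and SD of x[idx_state] given x[idx_meas] = meas_values."""
    mu1, mu2 = mu[idx_state], mu[idx_meas]
    S11 = cov[np.ix_(idx_state, idx_state)]
    S12 = cov[np.ix_(idx_state, idx_meas)]
    S22 = cov[np.ix_(idx_meas, idx_meas)]
    gain = S12 @ np.linalg.inv(S22)
    mean_c = mu1 + gain @ (meas_values - mu2)
    cov_c = S11 - gain @ S12.T
    return mean_c, np.sqrt(np.diag(cov_c))

mu = np.array([1.00, 0.98, 0.99])            # e.g. bus-voltage means (p.u.)
cov = np.array([[0.0004, 0.0002, 0.0001],
                [0.0002, 0.0005, 0.0002],
                [0.0001, 0.0002, 0.0003]])
mean_c, sd_c = gaussian_condition(mu, cov, [0, 1], [2], np.array([1.01]))
print(mean_c, sd_c)
```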
Abstract:
To report the outcomes of a randomised educational trial of a new methodology for extended immersion in medical simulation for senior medical students. Clinical Learning through Extended Immersion in Medical Simulation (CLEIMS) is a new methodology for medical student learning. It involves senior students working in teams of 4-5 through the clinical progress of one or more patients over a week, utilising a range of simulation methodologies (simulated patient assessment, simulated significant other briefing, virtual story continuations, pig-trotter wound repair, online simulated on-call modules, interprofessional simulated ward rounds and high fidelity mannequin-based emergency simulations), to enhance learning in associated workshops and seminars. A randomised educational trial comparing the methodology to seminars and workshops alone began in 2010 and interim results were reported at last year’s conference. Updated results are presented here and final primary endpoint outcomes will be available by the time of the conference.
Abstract:
In this paper we describe the design of DNA Jewelry, which is a wearable tangible data representation of personal DNA profile data. An iterative design process was followed to develop a 3D form-language that could be mapped to standard DNA profile data, with the aim of retaining readability of data while also producing an aesthetically pleasing and unique result in the area of personalised design. The work explores design issues with the production of data tangibles, contributes to a growing body of research exploring tangible representations of data and highlights the importance of approaches that move between technology, art and design.
Abstract:
Background Demand for essential plasma-derived products is increasing. Purpose This prospective study aims to identify predictors of voluntary non-remunerated whole blood (WB) donors becoming plasmapheresis donors. Methods Surveys were sent to WB donors who had donated recently (recent; n = 1,957) or not recently (distant; n = 1,012). Theory of Planned Behavior (TPB) constructs (attitude, subjective norm, self-efficacy) were extended with moral norm, anticipatory regret, and donor identity. Intentions and objective plasmapheresis donation were assessed for 527 recent and 166 distant participants. Results Multi-group analysis revealed that the model was a good fit. Moral norm and self-efficacy were positively associated with plasmapheresis intentions, while role identity (suppressed by moral norm) was negatively associated. Conclusions The extended TPB was useful in identifying factors that facilitate conversion from WB to plasmapheresis donation. A superordinate donor identity may be synonymous with WB donation and, for donors with a strong moral norm for plasmapheresis, may inhibit conversion.
Abstract:
We extended genetic linkage analysis - an analysis widely used in quantitative genetics - to 3D images to analyze single gene effects on brain fiber architecture. We collected 4 Tesla diffusion tensor images (DTI) and genotype data from 258 healthy adult twins and their non-twin siblings. After high-dimensional fluid registration, at each voxel we estimated the genetic linkage between the single nucleotide polymorphism (SNP), Val66Met (dbSNP number rs6265), of the BDNF gene (brain-derived neurotrophic factor) with fractional anisotropy (FA) derived from each subject's DTI scan, by fitting structural equation models (SEM) from quantitative genetics. We also examined how image filtering affects the effect sizes for genetic linkage by examining how the overall significance of voxelwise effects varied with respect to full width at half maximum (FWHM) of the Gaussian smoothing applied to the FA images. Raw FA maps with no smoothing yielded the greatest sensitivity to detect gene effects, when corrected for multiple comparisons using the false discovery rate (FDR) procedure. The BDNF polymorphism significantly contributed to the variation in FA in the posterior cingulate gyrus, where it accounted for around 90-95% of the total variance in FA. Our study generated the first maps to visualize the effect of the BDNF gene on brain fiber integrity, suggesting that common genetic variants may strongly determine white matter integrity.
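Two of the processing choices mentioned above, the FWHM of the Gaussian smoothing kernel and the FDR correction, can be illustrated with a short sketch. The p-values below are synthetic and the 6 mm kernel is an assumed example; this is not the study's data or pipeline.

```python
# (i) Convert a smoothing kernel's FWHM to the Gaussian sigma.
# (ii) Benjamini-Hochberg FDR threshold over voxelwise p-values.
import numpy as np

def fwhm_to_sigma(fwhm_mm: float) -> float:
    """FWHM = sigma * 2*sqrt(2*ln 2) for a Gaussian kernel."""
    return fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def fdr_threshold(pvals: np.ndarray, q: float = 0.05) -> float:
    """Largest sorted p(k) with p(k) <= (k/m)*q; 0.0 if none pass."""
    p = np.sort(pvals)
    m = p.size
    below = p <= (np.arange(1, m + 1) / m) * q
    return p[below].max() if below.any() else 0.0

print(fwhm_to_sigma(6.0))                 # sigma in mm for an assumed 6 mm FWHM
rng = np.random.default_rng(0)
pvals = rng.uniform(size=10_000)          # synthetic voxelwise p-values
print(fdr_threshold(pvals, q=0.05))
```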
Abstract:
Aim The composition of the faecal microbiota of babies is known to be influenced by diet. Faecal calprotectin and α1-antitrypsin concentrations may be associated with mucosal permeability and inflammation. We aimed to assess whether consumption of a probiotic/prebiotic formula affected faecal microbiota composition, calprotectin and α1-antitrypsin levels, and diarrhoea in Indonesian infants in comparison with breast milk feeding. Methods One hundred sixty infants, 2 to 6 weeks old, were recruited to the study. They were either breastfed or formula fed (80 per group). Faecal samples were collected at recruitment and 3 months later. Bacterial groups characteristic of the human faecal microbiota were quantified in faeces by quantitative polymerase chain reaction. Calprotectin and α1-antitrypsin concentrations were measured using commercial kits. Details of diarrhoeal morbidity were documented and rated for severity. Results The composition of the faecal microbiota of formula-fed and breast milk-fed children was similar, except that the probiotic strain Bifidobacterium animalis subsp. lactis DR10 was more abundant after 3 months' consumption of the formula. Alpha1-antitrypsin levels were higher in breastfed compared with formula-fed infants. The occurrence of diarrhoea did not differ between the groups of babies. Conclusion Feeding Indonesian babies with a probiotic/prebiotic formula did not produce marked differences in the composition of the faecal microbiota in comparison with breast milk. Detrimental effects of formula feeding on biomarkers of mucosal health were not observed.
Abstract:
A highly extended dithienothiophene comonomer building block was used in combination with a highly fused, aromatic, furan-substituted diketopyrrolopyrrole to synthesise the novel donor–acceptor alternating copolymer PDPPF-DTT. When tested as the channel semiconductor in top-contact, bottom-gate organic field-effect transistors (OFETs), PDPPF-DTT exhibited p-channel behaviour, with a highest hole mobility of 3.56 cm² V⁻¹ s⁻¹. To our knowledge, this is the highest mobility reported so far for the furan-flanked diketopyrrolopyrrole class of copolymers using a conventional device geometry with straightforward processing.
Abstract:
The Codex Alimentarius Commission of the Food and Agriculture Organization of the United Nations (FAO) and the World Health Organization (WHO) develops food standards, guidelines and related texts for protecting consumer health and ensuring fair trade practices globally. The major part of the world's population lives in more than 160 countries that are members of the Codex Alimentarius. The Codex Standard on Infant Formula was adopted in 1981 based on scientific knowledge available in the 1970s and is currently being revised. As part of this process, the Codex Committee on Nutrition and Foods for Special Dietary Uses asked the ESPGHAN Committee on Nutrition to initiate a consultation process with the international scientific community to provide a proposal on nutrient levels in infant formulae, based on scientific analysis and taking into account existing scientific reports on the subject. ESPGHAN accepted the request and, in collaboration with its sister societies in the Federation of International Societies on Pediatric Gastroenterology, Hepatology and Nutrition, invited highly qualified experts in the area of infant nutrition to form an International Expert Group (IEG) to review the issues raised. The group arrived at recommendations on the compositional requirements for a global infant formula standard which are reported here.
Abstract:
There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and makes it impossible to assess trends or determine optimal sampling regimes. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimizing the biases and making use of possible predictive variables. The load estimation procedure can be summarized by the following four steps:
- (i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows;
- (ii) output the predicted flow rates as in (i) at the concentration sampling times, if the corresponding flow rates are not collected;
- (iii) establish a predictive model for the concentration data, which incorporates all possible predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and
- (iv) obtain the sum of all the products of the predicted flow and the predicted concentration over the regular time intervals to represent an estimate of the load.
The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized regression (rating-curve) approach with additional predictors that capture unique features in the flow data, namely the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and cumulative discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. The model also has the capacity to accommodate autocorrelation in model errors, which result from intensive sampling during floods. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach using concentrations of total suspended sediment (TSS) and nitrogen oxides (NOx) and gauged flow data from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling biases for NOx concentrations range from a factor of 2 to 10, indicating severe bias. As expected, the traditional averaging and extrapolation methods produce much higher estimates than those obtained when sampling bias is taken into account.
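A minimal sketch of step (iv), the load computed as the sum of predicted flow times predicted concentration over regular intervals, is given below. The rating-curve coefficients and the flow series are assumed placeholder values, and the additional predictors described above (first flush, hydrograph limb, cumulative discounted flow) are omitted.

```python
# Step (iv): load estimate = sum over regular time steps of
# (predicted flow) x (predicted concentration) x (step length).
# Coefficients and flows are illustrative, not fitted Burdekin River values.
import numpy as np

dt_seconds = 600.0                                        # 10-minute steps, as in (i)
flow = np.array([12.0, 35.0, 80.0, 150.0, 90.0, 40.0])    # predicted flow [m^3/s]

def predict_concentration(q, a=5.0, b=0.4):
    """Toy rating curve C = a * Q**b [mg/L]; the paper's model adds
    further predictors that are left out of this sketch."""
    return a * np.power(q, b)

conc = predict_concentration(flow)                 # [mg/L] == [g/m^3]
load_grams = np.sum(flow * conc * dt_seconds)      # flow * conc * dt -> grams
print(f"estimated load ~ {load_grams / 1e6:.2f} tonnes")
```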
Abstract:
This study examined an aspect of adolescent writing development, specifically whether teaching secondary school students to use strategies to enhance succinctness in their essays changed the grammatical sophistication of their sentences. A quasi-experimental intervention was used to compare changes in syntactic complexity and lexical density between one-draft and polished essays. No link was demonstrated between the intervention and the changes. A thematic analysis of teacher interviews explored links between changes to student texts and teaching approaches. The study has implications for making syntactic complexity an explicit goal of student drafting.
Abstract:
Pseudo-marginal methods such as the grouped independence Metropolis-Hastings (GIMH) and Markov chain within Metropolis (MCWM) algorithms have been introduced in the literature as an approach to performing Bayesian inference in latent variable models. These methods replace intractable likelihood calculations with unbiased estimates within Markov chain Monte Carlo algorithms. The GIMH method has the posterior of interest as its limiting distribution, but suffers from poor mixing if it is too computationally intensive to obtain high-precision likelihood estimates. The MCWM algorithm has better mixing properties but less theoretical support. In this paper we propose to use Gaussian processes (GPs) to accelerate the GIMH method, using a short pilot run of MCWM to train the GP. Our new method, GP-GIMH, is illustrated on simulated data from a stochastic volatility model and a gene network model.
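A toy sketch of the general idea, a GP surrogate trained on noisy log-likelihood estimates from a pilot run and then used inside a Metropolis-Hastings step, is given below. The 1-D target, the pilot design and the scikit-learn surrogate are assumptions made for illustration; this does not reproduce the paper's GP-GIMH algorithm.

```python
# Sketch: train a GP on noisy log-likelihood estimates from a short pilot run,
# then use the GP prediction inside a Metropolis-Hastings accept/reject step.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

def noisy_loglik_estimate(theta):
    """Stand-in for an unbiased but noisy log-likelihood estimator."""
    return -0.5 * theta**2 + rng.normal(scale=0.3)

# "Pilot run": evaluate the noisy estimator on a grid of parameter values.
theta_pilot = np.linspace(-3, 3, 25).reshape(-1, 1)
loglik_pilot = np.array([noisy_loglik_estimate(t[0]) for t in theta_pilot])
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(theta_pilot, loglik_pilot)

# Metropolis-Hastings using the GP surrogate in place of fresh estimates.
theta, samples = 0.0, []
for _ in range(2000):
    prop = theta + rng.normal(scale=0.5)
    log_ratio = gp.predict([[prop]])[0] - gp.predict([[theta]])[0]
    if np.log(rng.uniform()) < log_ratio:
        theta = prop
    samples.append(theta)
print(np.mean(samples), np.std(samples))
```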