945 results for Carbothermal Reduction Method
Abstract:
Recently, many new applications in engineering and science are governed by a series of fractional partial differential equations (FPDEs). Unlike normal partial differential equations (PDEs), a FPDE has a fractional differential order, which leads to new challenges for numerical simulation, because most existing numerical simulation techniques are developed for PDEs with an integer differential order. The currently dominant numerical method for FPDEs is the Finite Difference Method (FDM), which usually has difficulty handling complex problem domains and irregular nodal distributions. This paper aims to develop an implicit meshless approach based on the moving least squares (MLS) approximation for numerical simulation of the fractional advection-diffusion equation (FADE), a typical FPDE. The discrete system of equations is obtained by using the MLS meshless shape functions and the meshless strong forms. The stability and convergence of the time discretization of this approach are then discussed and theoretically proven. Several numerical examples with different problem domains and different nodal distributions are used to validate and investigate the accuracy and efficiency of the newly developed meshless formulation. It is concluded that the present meshless formulation is very effective for the modeling and simulation of the FADE.
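For readers unfamiliar with the MLS approximation underpinning this approach, the following minimal 1-D sketch shows how MLS shape functions can be built at an evaluation point and used to interpolate nodal values on an irregular node set. The linear basis, Gaussian weight, support radius and node set are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def mls_shape_functions(x_eval, nodes, support_radius):
    """Moving least squares shape functions in 1-D with a linear basis
    p(x) = [1, x] and a truncated Gaussian weight (illustrative choices)."""
    p = lambda x: np.array([1.0, x])
    r = np.abs(nodes - x_eval) / support_radius
    w = np.where(r <= 1.0, np.exp(-(r / 0.4) ** 2), 0.0)   # weight per node

    P = np.array([p(xi) for xi in nodes])                  # (n_nodes, 2) basis at nodes
    A = (P * w[:, None]).T @ P                              # moment matrix A(x)
    B = (P * w[:, None]).T                                  # B(x), shape (2, n_nodes)
    return p(x_eval) @ np.linalg.solve(A, B)                # shape functions phi_i(x)

# Usage: approximate u(x) = x^2 at x = 0.5 from values on an irregular node set.
nodes = np.sort(np.concatenate([np.linspace(0.0, 1.0, 11), [0.03, 0.27, 0.58, 0.86]]))
phi = mls_shape_functions(0.5, nodes, support_radius=0.3)
print(phi @ nodes ** 2)   # close to 0.25
```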
Abstract:
In the multi-view approach to semi-supervised learning, we choose one predictor from each of multiple hypothesis classes, and we co-regularize our choices by penalizing disagreement among the predictors on the unlabeled data. We examine the co-regularization method used in the co-regularized least squares (CoRLS) algorithm, in which the views are reproducing kernel Hilbert spaces (RKHSs), and the disagreement penalty is the average squared difference in predictions. The final predictor is the pointwise average of the predictors from each view. We call the set of predictors that can result from this procedure the co-regularized hypothesis class. Our main result is a tight bound on the Rademacher complexity of the co-regularized hypothesis class in terms of the kernel matrices of each RKHS. We find that co-regularization reduces the Rademacher complexity by an amount that depends on the distance between the two views, as measured by a data-dependent metric. We then use standard techniques to bound the gap between training error and test error for the CoRLS algorithm. Experimentally, we find that the amount of reduction in complexity introduced by co-regularization correlates with the amount of improvement that co-regularization gives in the CoRLS algorithm.
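As a concrete illustration of the co-regularization idea, the sketch below numerically solves one common CoRLS formulation: a kernel expansion per view, squared loss on the labeled points, an RKHS-norm penalty per view, and a squared disagreement penalty on the unlabeled points, with the final predictor taken as the pointwise average. The Gaussian kernels, toy two-view data and regularization weights are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gaussian RBF kernel matrix K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

def corls_fit(K1, K2, y, labeled, gam1=0.1, gam2=0.1, mu=1.0):
    """Solve the joint quadratic for the two kernel-expansion coefficient vectors."""
    n = len(y)
    L = np.diag(labeled.astype(float))    # labeled-point mask
    U = np.eye(n) - L                     # unlabeled-point mask
    yl = L @ y                            # targets, zeroed on unlabeled points
    A = np.block([
        [K1 @ L @ K1 + gam1 * K1 + mu * K1 @ U @ K1, -mu * K1 @ U @ K2],
        [-mu * K2 @ U @ K1, K2 @ L @ K2 + gam2 * K2 + mu * K2 @ U @ K2]])
    b = np.concatenate([K1 @ yl, K2 @ yl])
    a = np.linalg.solve(A + 1e-8 * np.eye(2 * n), b)   # tiny ridge for stability
    return a[:n], a[n:]

# Usage: two synthetic views of the same 60 points, only 6 of them labeled.
rng = np.random.default_rng(0)
y_true = np.repeat([1.0, -1.0], 30)
X1 = rng.normal(size=(60, 2)) + y_true[:, None]         # view 1
X2 = rng.normal(size=(60, 2)) + 0.8 * y_true[:, None]   # view 2
labeled = np.zeros(60, bool)
labeled[[0, 1, 2, 30, 31, 32]] = True
a1, a2 = corls_fit(rbf_kernel(X1), rbf_kernel(X2), y_true * labeled, labeled)
f = 0.5 * (rbf_kernel(X1) @ a1 + rbf_kernel(X2) @ a2)   # pointwise-average predictor
print(np.mean(np.sign(f) == y_true))                     # training-set agreement
```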
Abstract:
The antiretroviral therapy (ART) program for People Living with HIV/AIDS (PLHIV) in Vietnam has been scaled up rapidly in recent years (from 50 clients in 2003 to almost 38,000 in 2009). ART success is highly dependent on the ability of patients to fully adhere to the prescribed treatment regimen. Despite the remarkable extension of ART programs in Vietnam, HIV/AIDS program managers still have little reliable data on levels of ART adherence and the factors that might promote or reduce adherence. Several previous studies in Vietnam estimated extremely high levels of ART adherence among their samples, although there are reasons to question the veracity of the conclusion that adherence is nearly perfect. Further, no study has quantitatively assessed the factors influencing ART adherence. To reduce these gaps, this study was designed to include several phases and used a multi-method approach to examine levels of ART non-adherence and their relationship to a range of demographic, clinical, social and psychological factors. The study began with an exploratory qualitative phase employing four focus group discussions and 30 in-depth interviews with PLHIV, peer educators, carers and health care providers (HCPs). Survey interviews were completed with 615 PLHIV in five rural and urban out-patient clinics in northern Vietnam using an Audio Computer-Assisted Self-Interview (ACASI) and clinical records extraction. The survey instrument was carefully developed through a systematic procedure to ensure its reliability and validity. Cultural appropriateness was considered in the design and implementation of both the qualitative study and the cross-sectional survey. The qualitative study uncovered several contrasting perceptions between health care providers and HIV/AIDS patients regarding the true levels of ART adherence. Health care providers often stated that most of their patients closely adhered to their regimens, while PLHIV and their peers reported that “it is not easy” to do so. The quantitative survey findings supported the view of the PLHIV and their peers in the qualitative study, because non-adherence to ART was relatively common among the study sample. Using the ACASI technique, the estimated prevalence of one-month non-adherence measured by the Visual Analogue Scale (VAS) was 24.9%, and the prevalence of four-day not-on-time adherence using the modified Adult AIDS Clinical Trials Group (AACTG) instrument was 29%. Observed agreement between the two measures was 84% and the kappa coefficient was 0.60 (SE = 0.04, p < 0.0001). The good agreement between the two measures in the current study is consistent with that found in previous research and provides evidence of cross-validation of the estimated adherence levels. The qualitative study was also valuable in suggesting important variables for the survey conceptual framework and instrument development. The survey confirmed significant correlations between the two measures of ART adherence (i.e. dose adherence and time adherence) and many factors identified in the qualitative study, but failed to find evidence of significant correlations between some other factors and ART adherence. Non-adherence to ART was significantly associated with untreated depression, heavy alcohol use, illicit drug use, experiences with medication side-effects, chance health locus of control, low quality of information from HCPs, low satisfaction with received support and poor social connectedness.
No multivariate association was observed between ART adherence and age, gender, education, duration of ART, the use of adherence aids, disclosure of ART, patients’ ability to initiate communication with HCPs, or distance between clinic and patients’ residence. This is the largest study yet reported in Asia to examine non-adherence to ART and its possible determinants. The evidence strongly supports recent calls from other developing nations for HIV/AIDS services to provide screening, counseling and treatment for patients with depressive symptoms, heavy alcohol use and substance use. Counseling should also address fatalistic beliefs about chance or luck determining health outcomes. The data suggest that adherence could be enhanced by regularly providing information on ART and assisting patients to maintain social connectedness with their family and the community. This study highlights the benefits of using a multi-method approach in examining complex barriers to and facilitators of medication adherence. It also demonstrated the utility of the ACASI interview method to enhance open disclosure by people living with HIV/AIDS and thus increase the veracity of self-reported data.
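For reference, the agreement statistics cited above (observed agreement and Cohen's kappa between two binary non-adherence classifications) can be computed as in the short sketch below; the VAS and AACTG vectors shown are hypothetical, not the study's data.

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa for two binary classifications (1 = non-adherent, 0 = adherent)."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                                                 # observed agreement
    pe = np.mean(a) * np.mean(b) + (1 - np.mean(a)) * (1 - np.mean(b))   # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical one-month VAS vs. four-day AACTG classifications for 20 patients.
vas   = np.array([0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0])
aactg = np.array([0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1])
print(np.mean(vas == aactg), cohens_kappa(vas, aactg))   # observed agreement and kappa
```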
Abstract:
This paper reports on the feasibility and methodological considerations of using the Short Message System Experience Sampling (SMS-ES) Method, an experience sampling research method developed to assist researchers in collecting repeated measures of consumers’ affective experiences. The method combines SMS with web-based technology in a simple yet effective way. It is described using a practical implementation study that collected consumers’ emotions in response to using mobile phones in everyday situations. The method is further evaluated in terms of the quality of the data collected in the study, as well as against the methodological considerations for experience sampling studies. These two evaluations suggest that the SMS-ES Method is both a valid and reliable approach for collecting consumers’ affective experiences. Moreover, the method can be applied across a range of for-profit and not-for-profit contexts where researchers want to capture repeated measures of consumers’ affective experiences occurring over a period of time. The benefits of the method are discussed to assist researchers who wish to apply the SMS-ES Method in their own research designs.
Abstract:
The stochastic simulation algorithm was introduced by Gillespie and, in a different form, by Kurtz. There have been many attempts at accelerating the algorithm without deviating from the behavior of the simulated system. The crux of the explicit τ-leaping procedure is the use of Poisson random variables to approximate the number of occurrences of each type of reaction event during a carefully selected time period, τ. This method is acceptable provided the leap condition is met, namely that no propensity function changes “significantly” during any time-step. With this method there is a possibility that species numbers can artificially become negative. Several recent papers have demonstrated methods that avoid this situation. One such method classifies as critical those reactions in danger of sending species populations negative. At most one of these critical reactions is allowed to occur in the next time-step. We argue that the criticality of a reactant species and its dependent reaction channels should be related to the probability of the species number becoming negative. This way, only reactions that, if fired, produce a high probability of driving a reactant population negative are labeled critical. The number of firings of the remaining reaction channels can then be approximated using Poisson random variables, thus speeding up the simulation while maintaining accuracy. In implementing this revised method of criticality selection we make use of the probability distribution from which the random variable describing the change in species number is drawn. We give several numerical examples to demonstrate the effectiveness of our new method.
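To make the τ-leaping idea concrete, the sketch below performs explicit τ-leap steps on a toy two-reaction network, drawing Poisson numbers of firings for the non-critical channels and allowing at most one firing of a critical channel per step. The network, rate constants, and the simple population-threshold criticality rule are illustrative assumptions; the abstract's proposal is to base criticality on the probability of a population going negative, which is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy network (illustrative): R1: A -> B with propensity c1*A,
#                             R2: A + B -> C with propensity c2*A*B.
V = np.array([[-1, +1, 0],     # state change caused by one firing of R1
              [-1, -1, +1]])   # state change caused by one firing of R2

def propensities(x, c):
    A, B, _ = x
    return np.array([c[0] * A, c[1] * A * B])

def tau_leap_step(x, c, tau, n_crit=10):
    """One explicit tau-leap with a simple critical-reaction guard."""
    a = propensities(x, c)
    # A channel is 'critical' here if any reactant it consumes is below n_crit.
    critical = np.array([(x[V[j] < 0] < n_crit).any() for j in range(len(a))])
    k = np.where(critical, 0, rng.poisson(a * tau))   # Poisson firings, non-critical only
    if critical.any() and rng.random() < 1.0 - np.exp(-a[critical].sum() * tau):
        j = rng.choice(np.flatnonzero(critical), p=a[critical] / a[critical].sum())
        k[j] += 1                                     # at most one critical firing
    return np.maximum(x + k @ V, 0)                   # clamp as a last-resort guard

x = np.array([1000, 5, 0])                            # initial copy numbers of A, B, C
for _ in range(100):
    x = tau_leap_step(x, c=(0.1, 0.001), tau=0.05)
print(x)
```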
Abstract:
We consider a stochastic regularization method for solving the backward Cauchy problem in Banach spaces. An order of convergence is obtained on sourcewise representative elements.
Abstract:
Graphene, functionalized with oleylamine (OA) and soluble in non-polar organic solvents, was produced on a large scale with a high yield by combining the Hummers process for graphite oxidation, an amine-coupling process to make OA-functionalized graphite oxide (OA-GO), and a novel reduction process using trioctylphosphine (TOP). TOP acts as both a reducing agent and an aggregation-preventing surfactant in the reduction of OA-GO in 1,2-dichlorobenzene (DCB). The reduction of OA-GO is confirmed by X-ray photoelectron spectroscopy, Fourier-transform infrared spectroscopy, X-ray diffraction, thermogravimetric analysis, and Raman spectroscopy. The exfoliation of GO, OA-GO, and OA-functionalized graphene (OA-G) is verified by atomic force microscopy. The conductivity of TOP-reduced OA-G, which is deduced from the current–voltage characteristics of a vacuum-filtered thin film, shows that the reduction of functionalized GO by TOP is as effective as the reduction of GO by hydrazine.
Abstract:
This study investigated the Kinaesthetic Fusion Effect (KFE) first described by Craske and Kenny in 1981. The current study did not replicate these findings following a change in the reporting method used by participants. Participants did not perceive any reduction in the sagittal separation of a button pressed by the index finger of one arm and a probe touching the other, following repeated exposure to the tactile stimuli present on both unseen arms. This study’s failure to replicate the widely-cited KFE as described by Craske et al. (1984) suggests that it may be contingent on several aspects of visual information, especially the availability of a specific visual reference, the role of instructions regarding gaze direction, and the potential use of a line-of-sight strategy when referring felt positions to an interposed surface. In addition, a foreshortening effect was found; this may result from a line-of-sight judgment and represent a feature of the reporting method used. Finally, this research will benefit future studies that require participants to report the perceived locations of unseen limbs.
Abstract:
Current knowledge about the relationship between transport disadvantage and activity space size is limited to urban areas, and as a result, very little is known about this link in a rural context. In addition, although research has identified transport-disadvantaged groups based on the size of their activity spaces, these studies have not empirically explained such differences, and the result is often a poor identification of the problems facing disadvantaged groups. Research has shown that transport disadvantage varies over time, but the static analyses of activity space used in previous studies have been unable to identify transport disadvantage in time. Activity space is a dynamic concept and therefore has great potential for capturing temporal variations in behaviour and access to opportunities. This research derives measures of the size and fullness of activity spaces for 157 individuals for weekdays, weekends, and for a week, using weekly activity-travel diary data from three case study areas located in rural Northern Ireland. Four focus groups were also conducted in order to triangulate the quantitative findings and to explain the differences between different socio-spatial groups. The findings of this research show that, despite having smaller activity spaces, individuals were not disadvantaged because they were able to access their required activities locally. Car ownership was found to be an important lifeline in rural areas. Temporal disaggregation of the data reveals that this is true only on weekends, due to a lack of public transport services. In addition, despite activity spaces being of a similar size, the fullness of the activity spaces of low-income individuals was found to be significantly lower compared to their high-income counterparts. Focus group data show that financial constraints and poor connections, both between public transport services and between transport routes and opportunities, forced individuals to participate in activities located along the main transport corridors.
Abstract:
Damage detection in structures has become increasingly important in recent years. While a number of damage detection and localization methods have been proposed, few attempts have been made to explore structural damage with frequency response functions (FRFs). This paper illustrates the damage identification and condition assessment of a beam structure using a new FRF-based damage index and Artificial Neural Networks (ANNs). In practice, using all available FRF data as input to artificial neural networks makes training and convergence impossible. Therefore, a data reduction technique, Principal Component Analysis (PCA), is introduced into the algorithm. In the proposed procedure, a large set of FRFs is divided into sub-sets in order to find the damage indices for different frequency points of different damage scenarios. The basic idea of this method is to establish features of the damaged structure using FRFs from different measurement points in different sub-sets of the intact structure. Using these features, damage indices for different damage cases of the structure are identified after reconstructing the available FRF data using PCA. The obtained damage indices, corresponding to different damage locations and severities, are introduced as input variables to the developed artificial neural networks. Finally, the effectiveness of the proposed method is illustrated and validated using the finite element model of a beam structure. The results show that the PCA-based damage index is suitable and effective for structural damage detection and condition assessment of building structures.
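The following short sketch illustrates the general idea of PCA-based reduction of FRF data: principal directions are learned from intact-structure FRFs, and a reconstruction residual serves as a simple damage indicator that could feed an ANN. The residual index and the synthetic FRFs are illustrative assumptions, not the specific FRF-based damage index or the sub-set procedure developed in the paper.

```python
import numpy as np

def pca_damage_index(frf_intact, frf_test, n_components=5):
    """Project FRFs onto the principal subspace learned from the intact structure
    and use the reconstruction residual as a simple (illustrative) damage index."""
    mean = frf_intact.mean(axis=0)
    _, _, Vt = np.linalg.svd(frf_intact - mean, full_matrices=False)
    basis = Vt[:n_components]                                   # principal directions
    centred = frf_test - mean
    resid = centred - (centred @ basis.T) @ basis               # part not explained by PCA
    return np.linalg.norm(resid, axis=1)                        # one index per test FRF

# Usage with synthetic FRF magnitudes (rows = measurements, columns = frequency points).
rng = np.random.default_rng(0)
modes = rng.normal(size=(3, 200))                               # three underlying "modes"
intact = rng.normal(size=(40, 3)) @ modes + 0.05 * rng.normal(size=(40, 200))
damaged = intact[:5] + 0.5 * np.sin(np.linspace(0, 3, 200))     # perturbed copies
print(pca_damage_index(intact, intact[:5]))                     # small residuals
print(pca_damage_index(intact, damaged))                        # clearly larger residuals
```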
Abstract:
Compressive Sensing (CS) is a popular signal processing technique that can exactly reconstruct a signal from a small number of random projections of the original signal, provided that the signal is sufficiently sparse. We demonstrate the applicability of CS in the field of gait recognition as a very effective dimensionality reduction technique, using the gait energy image (GEI) as the feature extraction process. We compare the CS-based approach to principal component analysis (PCA) and show that the proposed method outperforms this baseline, particularly under situations where there are appearance changes in the subject. Applying CS to the gait features, via a generalised random projection, also avoids the need to train the models.
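A minimal sketch of the dimensionality reduction step is given below: flattened GEI feature vectors are compressed with a Gaussian random projection, the kind of untrained, generalised projection used in compressive sensing. The GEI size, projection dimension and toy data are illustrative assumptions; recognition (e.g. nearest-neighbour matching or sparse reconstruction) would then operate on the compressed features.

```python
import numpy as np

def random_projection(features, k, seed=0):
    """Compress row-wise feature vectors with a Gaussian random measurement matrix."""
    rng = np.random.default_rng(seed)
    d = features.shape[1]
    Phi = rng.normal(scale=1.0 / np.sqrt(k), size=(k, d))   # random measurement matrix
    return features @ Phi.T                                  # k random projections per row

# Usage: project 128x88 GEIs (flattened to 11264-dimensional vectors) down to 300 dims.
geis = np.random.default_rng(1).random((10, 128 * 88))
compressed = random_projection(geis, k=300)
print(compressed.shape)   # (10, 300)
```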
Abstract:
This study investigated the Kinaesthetic Fusion Effect (KFE) first described by Craske and Kenny in 1981. In Experiment 1 the study did not replicate these findings following a change in the reporting method used by participants. Participants did not perceive any reduction in the sagittal separation of a button pressed by the index finger of one arm and a probe touching the other, following repeated exposure to the tactile stimuli present on both unseen arms. This study’s failure to replicate the widely-cited KFE as described by Craske et al. (1984) suggests that it may be contingent on several aspects of visual information, especially the availability of a specific visual reference, the role of instructions regarding gaze direction, and the potential use of a line-of-sight strategy when referring felt positions to an interposed surface. In addition, a foreshortening effect was found; this may result from a line-of-sight judgment and represent a feature of the reporting method used. Finally, this research will benefit future studies that require participants to report the perceived locations of unseen limbs. Experiment 2 investigated the KFE when the visual reference was removed and participants, blindfolded, reported the touched positions. A number of interesting outcomes arose from this change and may help to clarify the phenomenon.
Abstract:
The World Health Organization recommends that data on mortality in its member countries be collected using the Medical Certificate of Cause of Death published in the instruction volume of the ICD-10. However, investment in the health information processes necessary to promote the use of this certificate and improve mortality information is lacking in many countries. An appeal for support to make improvements has been launched through the Health Metrics Network’s MOVE-IT strategy (Monitoring of Vital Events – Information Technology) [World Health Organization, 2011]. Despite this international spotlight on the need to capture mortality data and to use the ICD-10 to code the data reported on such certificates, there is little cohesion in the way that certifiers of deaths receive instruction in how to complete the death certificate, which is the main source document for mortality statistics. Complete and accurate documentation of the immediate, underlying and contributory causes of death of the decedent on the death certificate is a requirement for producing standardised statistical information and cause-specific mortality statistics that can be compared between populations and across time. This paper reports on a research project conducted to determine the efficacy and accessibility of the certification module of the WHO’s newly developed web-based training tool for coders and certifiers of deaths. Involving a population of medical students from the Fiji School of Medicine and a pre- and post-test research design, the study entailed completion of death certificates based on vignettes before and after access to the training tool. The ability of the participants to complete the death certificates, and analysis of the completeness and specificity of the ICD-10 coding of the reported causes of death, were used to measure the effect of the students’ learning from the training tool. The quality of death certificate completion was assessed using a Quality Index before and after the participants accessed the training tool. In addition, the views of the participants about the accessibility and use of the training tool were elicited using a supplementary questionnaire. The results of the study demonstrated improvement in the ability of the participants to complete death certificates completely and accurately according to best practice. The training tool was viewed very positively and its implementation in the curriculum for medical students was encouraged. Participants also suggested that interactive discussions of the certification exercises would be an advantage.
Abstract:
In this paper, a new practical method based on graph theory and an improved genetic algorithm is employed to solve the optimal sectionalizer switch placement problem. The proposed method determines the best locations of sectionalizer switching devices in distribution networks, considering the effects of the presence of distributed generation (DG) in the fitness functions and other optimization constraints, so that the maximum number of customers can be supplied by distributed generation sources in islanded distribution systems after possible faults. The proposed method is simulated and tested on several distribution test systems, both with and without DG. The results of the simulations validate the proposed method for switch placement in distribution networks in the presence of distributed generation.
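As a rough illustration of the genetic-algorithm side of such a method, the sketch below evolves a binary switch-placement chromosome on a toy radial feeder, where the fitness rewards customers that can be kept in service (from the substation upstream of a fault, or from a DG-supplied island downstream of it) and penalises each installed switch. The feeder data, restoration model and GA parameters are illustrative assumptions and do not reproduce the paper's graph-theory formulation or its constraints.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy radial feeder: N sections in a line, fed from the substation at the left end.
# Candidate sectionalizer location i sits between sections i and i+1.
customers = np.array([120, 80, 60, 200, 40, 150, 90, 70])       # customers per section
has_dg    = np.array([0,   0,  0,  1,   0,  0,   1,  0], bool)  # sections with DG
N, switch_cost = len(customers), 50.0

def fitness(chrom):
    """Average customers kept in service over single-section faults, minus switch cost
    (a deliberately simplified restoration model)."""
    switches = np.flatnonzero(chrom)
    total = 0.0
    for f in range(N):                          # one permanent fault per section
        up = switches[switches < f]
        if up.size:                             # substation keeps sections 0..up.max()
            total += customers[: up.max() + 1].sum()
        down = switches[switches >= f]
        if down.size and has_dg[down.min() + 1:].any():
            total += customers[down.min() + 1:].sum()   # DG-supplied island
    return total / N - switch_cost * chrom.sum()

def genetic_algorithm(pop_size=40, generations=60, p_mut=0.05):
    pop = rng.integers(0, 2, size=(pop_size, N - 1))
    for _ in range(generations):
        fit = np.array([fitness(c) for c in pop])
        pairs = rng.integers(0, pop_size, size=(pop_size, 2))            # tournament selection
        parents = pop[np.where(fit[pairs[:, 0]] > fit[pairs[:, 1]], pairs[:, 0], pairs[:, 1])]
        children = parents.copy()
        for i, cut in enumerate(rng.integers(1, N - 1, size=pop_size // 2)):  # one-point crossover
            children[2 * i, cut:] = parents[2 * i + 1, cut:]
            children[2 * i + 1, cut:] = parents[2 * i, cut:]
        children ^= rng.random(children.shape) < p_mut                   # bit-flip mutation
        pop = children
    best = max(pop, key=fitness)
    return best, fitness(best)

print(genetic_algorithm())
```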