947 results for method of successive averages


Relevance:

100.00%

Publisher:

Abstract:

An experimental investigation of the thermocapillary motion of two bubbles will be performed aboard a Chinese recoverable satellite. The experiment will study the migration of bubbles driven by the thermocapillary effect in a microgravity environment, as well as the interaction between the two bubbles. Each bubble is driven by the thermocapillary stress on its surface, which arises from the variation of surface tension with temperature. The interaction between two bubbles becomes significant as their separation distance decreases, so that the bubble interaction must be considered. Recently, the problem has been analysed by the method of successive reflections, and accurate migration velocities of two arbitrarily oriented bubbles were derived in the limit of small Marangoni and Reynolds numbers. Numerical results show that the interaction between the two bubbles significantly influences their thermocapillary migration velocities as one bubble approaches the other. However, experimental validation of these theoretical results is lacking. The experimental facility is therefore designed for repeated experiments. A cone-shaped top cover is used to expel the bubbles from the cell after each experiment; however, such a cover can cause temperature non-uniformity on horizontal planes throughout the cell. Therefore, a perforated metal plate is fixed under the top cover: its high thermal conductivity keeps the temperature distribution on the plate uniform, while the bubbles can still pass through it. In the system, the two bubbles are injected into the test cell by two cylinder assemblies, and the bubble sizes are controlled by two stepper motors. Detaching a bubble from the injection nozzle is a critical problem in a microgravity environment; thus, two additional devices that inject mother liquid are used to push the bubbles off. 
The working principle of the mother-liquid injection is to exploit directly the pressure difference between the test cell and the reservoir.
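For orientation, the classical single-bubble result to which the two-bubble theory reduces at large separation is the Young-Goldstein-Block (YGB) migration velocity in the limit of vanishing Marangoni and Reynolds numbers. The sketch below is a standard-formula illustration with invented sample values; it is not part of the flight experiment described above.

```python
def ygb_bubble_velocity(dsigma_dT, grad_T, radius, viscosity):
    """Classical YGB thermocapillary migration speed of a single gas
    bubble (zero Marangoni/Reynolds limit):
        V = |dsigma/dT| * |grad T| * R / (2 * mu).
    The bubble migrates toward the warm side because surface tension
    decreases with temperature."""
    return abs(dsigma_dT) * abs(grad_T) * radius / (2.0 * viscosity)

# Illustrative values only (not measured data from the experiment):
v = ygb_bubble_velocity(dsigma_dT=-1e-4,  # N/(m*K)
                        grad_T=100.0,     # K/m
                        radius=0.5e-3,    # m
                        viscosity=1e-3)   # Pa*s
```

At small separations, the interaction corrections obtained by the method of successive reflections modify this speed, which is what the experiment is designed to measure.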

Relevance:

100.00%

Publisher:

Abstract:

Biological control of plant diseases caused by fungi is carried out using other organisms (predators, parasites or pathogens). Among the possible biocontrol agents, the fungus Clonostachys rosea, the asexual form of Bionectria ochroleuca, has stood out as promising. Its use requires the in vitro production of spores of this fungus. In this study, several culture media were tested to select those with the best conidia production. The study was conducted at the Plant Protection Division of the Plant Production Department, FCA - UNESP, Botucatu campus, São Paulo state, Brazil. We used isolate CCR64 (Empresa Brasileira de Pesquisa Agropecuária (EMBRAPA)-CNPMA). The culture media were: BDA; Oat-Agar; Maizena-Agar; Rice-Agar; V8 at 5%, 10% and 20%; and TJ at 5%, 10% and 20%. Sporulation of the fungus on the different culture media was estimated 8 days after incubation. The data, transformed as (x + 1)^0.5, were analysed by comparison of means using the Tukey test at 5% probability. All culture media tested produced conidia. The best culture medium for conidia production of Bionectria ochroleuca was TJ-5%, followed by TJ-20%, with a mean sporulation of 3.5 x 10^6 conidia/ml.

Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Publisher:

Abstract:

Pós-graduação em Docência para a Educação Básica - FC

Relevance:

100.00%

Publisher:

Abstract:

Polar codes are one of the most recent advances in coding theory and have attracted significant interest. While they are provably capacity-achieving over various channels, they have seen limited practical application. Unfortunately, the sequential nature of successive-cancellation-based decoders hinders fine-grained adaptation of the decoding complexity to design constraints and operating conditions. In this paper, we propose a systematic method for enabling complexity-performance trade-offs by constructing polar codes based on an optimization problem that minimizes complexity under a suitably defined mutual-information-based performance constraint. Moreover, a low-complexity greedy algorithm is proposed to solve the optimization problem efficiently for very large code lengths.
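As a toy illustration of the kind of constrained greedy construction the abstract describes, the sketch below picks bit-channels so as to keep a summed complexity proxy low subject to a total mutual-information constraint. The cost model, the mutual-information values, the repair heuristic and all names here are invented for illustration and are not the paper's actual formulation.

```python
def greedy_polar_select(mi, cost, k, mi_target):
    """Pick k bit-channel indices with low total cost whose summed
    mutual information meets mi_target; return sorted indices or
    None if the target cannot be met."""
    order = sorted(range(len(mi)), key=lambda j: (cost[j], -mi[j]))
    chosen = order[:k]                       # cheapest channels first
    if sum(mi[j] for j in chosen) >= mi_target:
        return sorted(chosen)
    # repair step: swap in higher-MI channels until the constraint holds
    rest = sorted((j for j in range(len(mi)) if j not in chosen),
                  key=lambda j: -mi[j])
    for j in rest:
        worst = min(chosen, key=lambda t: mi[t])
        if mi[j] <= mi[worst]:
            break                            # no further improvement possible
        chosen.remove(worst)
        chosen.append(j)
        if sum(mi[t] for t in chosen) >= mi_target:
            return sorted(chosen)
    return None
```

The real construction in the paper operates on the polarized bit-channels of very long codes; this sketch only conveys the greedy cost-versus-reliability trade-off.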

Relevance:

100.00%

Publisher:

Abstract:

Four problems of physical interest have been solved in this thesis using the path integral formalism. Using the trigonometric expansion method of Burton and de Borde (1955), we found the kernel for two interacting one-dimensional oscillators. The result is the same as one would obtain using a normal coordinate transformation. We next introduced the method of Papadopoulos (1969), a systematic perturbation-type method specifically geared to finding the partition function Z, or equivalently the Helmholtz free energy F, of a system of interacting oscillators. We applied this method to the remaining three problems. First, by summing the perturbation expansion, we found F for a system of N interacting Einstein oscillators. The result obtained is the same as the usual result obtained by Shukla and Muller (1972). Next, we found F to O(λ²), where λ is the usual Van Hove ordering parameter. The results obtained are the same as those of Shukla and Cowley (1971), who used a diagrammatic procedure and performed the necessary sums in Fourier space; we performed the work in temperature space. Finally, slightly modifying the method of Papadopoulos, we found the finite-temperature expressions for the Debye-Waller factor in Bravais lattices, to O(λ²) and O(|K|⁴), where K is the scattering vector. The high-temperature limits of the expressions obtained here are in complete agreement with the classical results of Maradudin and Flinn (1963).
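For context, the quantities Z and F named in the abstract are related by the standard statistical-mechanics definitions below, together with the unperturbed Einstein-oscillator free energy to which the perturbative corrections are added. These are textbook relations, not expressions taken from the thesis itself.

```latex
% Standard relations (textbook definitions, not from the thesis):
Z = \operatorname{Tr}\, e^{-\beta H}, \qquad
F = -k_B T \ln Z, \qquad
\beta = \frac{1}{k_B T}
% Unperturbed free energy of N independent Einstein oscillators of frequency \omega:
F_0 = N k_B T \,\ln\!\left[\, 2 \sinh\!\left( \frac{\hbar\omega}{2 k_B T} \right) \right]
```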

Relevance:

100.00%

Publisher:

Abstract:

To understand the diffusion of high-technology products such as PCs, digital cameras and DVD players, it is necessary to consider the dynamics of successive generations of technology. From the consumer's perspective, these technology changes may manifest themselves either as a new generation product substituting for the old (for instance, digital cameras) or as multiple generations of a single product (for example, PCs). To date, research has been confined to aggregate-level sales models. These models consider the demand relationship between one generation of a product and its successor. However, they do not give insights into the disaggregate-level decisions of individual households: whether to adopt the newer generation, and if so, when. This paper makes two contributions. It is the first large-scale empirical study to collect household data for successive generations of technologies in an effort to understand the drivers of adoption. Second, in contrast to traditional analysis in diffusion research, which conceptualizes technology substitution as an "adoption of innovation" type process, we propose that from a consumer's perspective technology substitution combines elements of both adoption (adopting the new generation technology) and replacement (replacing the generation I product with generation II).
Key Propositions: In some cases, successive generations are clear "substitutes" for the earlier generation (e.g. PCs: Pentium I to II to III). More commonly, the new generation II technology is a "partial substitute" for the existing generation I technology (e.g. DVD players and VCRs). Some consumers will purchase generation II products as substitutes for their generation I product, while other consumers will purchase generation II products as additional products to be used alongside their generation I product. We propose that substitute generation II purchases combine elements of both adoption and replacement, whereas additional generation II purchases are a solely adoption-driven process. Moreover, drawing on adoption theory, consumer innovativeness is the most important consumer characteristic for the adoption timing of new products. Hence, we hypothesize that consumer innovativeness influences the timing of both additional and substitute generation II purchases but has a stronger impact on additional generation II purchases. We further propose that substitute generation II purchases act partially as a replacement purchase for the generation I product; thus, we hypothesize that households with older generation I products will make substitute generation II purchases earlier.
Methods: We employ Cox hazard modeling to study the factors influencing the timing of a household's adoption of generation II products. A separate hazard model is estimated for additional and for substitute purchases. The age of the generation I product is calculated from the most recent household purchase of that product. Control variables include the size and income of the household and the age and education of the decision-maker.
Results and Implications: Our preliminary results confirm both our hypotheses. Consumer innovativeness has a strong influence on both additional purchases and substitute purchases. Also consistent with our hypotheses, the age of the generation I product has a dramatic influence on substitute purchases of VCR/DVD players and a strong influence for PCs/notebooks; yet, as hypothesized, there was no influence on additional purchases. This implies a clear distinction between additional and substitute purchases of generation II products, each with different drivers. For substitute purchases, product age is a key driver. Therefore, marketers of high-technology products can use data on generation I product age (e.g. from warranty or loyalty programs) to target customers who are more likely to make a purchase.

Relevance:

100.00%

Publisher:

Abstract:

Purpose: The cornea is known to be susceptible to forces exerted by eyelids. There have been previous attempts to quantify eyelid pressure but the reliability of the results is unclear. The purpose of this study was to develop a technique using piezoresistive pressure sensors to measure upper eyelid pressure on the cornea. Methods: The technique was based on the use of thin (0.18 mm) tactile piezoresistive pressure sensors, which generate a signal related to the applied pressure. A range of factors that influence the response of this pressure sensor were investigated along with the optimal method of placing the sensor in the eye. Results: Curvature of the pressure sensor was found to impart force, so the sensor needed to remain flat during measurements. A large rigid contact lens was designed to have a flat region to which the sensor was attached. To stabilise the contact lens during measurement, an apparatus was designed to hold and position the sensor and contact lens combination on the eye. A calibration system was designed to apply even pressure to the sensor when attached to the contact lens, so the raw digital output could be converted to actual pressure units. Conclusions: Several novel procedures were developed to use tactile sensors to measure eyelid pressure. The quantification of eyelid pressure has a number of applications including eyelid reconstructive surgery and the design of soft and rigid contact lenses.

Relevance:

100.00%

Publisher:

Abstract:

The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by the increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains and sudden cardiac death continues to be a presenting feature for some subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia related complications. Stress electrocardiography/exercise testing is predictive of 10 year risk of CVD events and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to this data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers, calculation of moving averages as well as data summarisation and data abstraction methods. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads). 
Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with misclassification rate and area under the ROC curve (AUC), are used to evaluate the models generated with different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be the most consistently effective, although Consistency-derived subsets tended to slightly increase accuracy at the cost of markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution; this can be mitigated by considering the AUC or Kappa statistic, as well as by evaluating subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, the lowest value being for data from which both outliers and noise were removed (MR 10.69); for the raw time-series dataset, MR is 12.34. Feature selection reduces MR to between 9.8 and 10.16, with time-segmented summary data (dataset F) at 9.8 and raw time-series summary data (dataset A) at 9.92. However, for all datasets based on time-series data alone, model complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-alone datasets, but models derived from these subsets consist of a single leaf. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method. For models based on Cfs-selected time-series-derived and risk factor (RF) variables, MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) at 8.85 and dataset RF_F (time-segmented time-series variables and RF) at 9.09. 
The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on variables derived from time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform the least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MR, at 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The increase in predictive accuracy achieved by adding risk factor variables to time-series-based models is significant, while adding time-series-derived variables to models based on risk factor variables alone is associated with a trend towards improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables compared with risk factors alone is consistent with recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables are used together as model input. 
In the absence of risk factor input, using time-series variables after outlier removal, and time-series variables based on physiological values falling outside the accepted normal range, is associated with some improvement in model performance.
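The Kappa statistic used above for evaluation under class imbalance is Cohen's kappa: observed agreement corrected for the agreement expected by chance from the marginal label frequencies. A minimal pure-Python sketch (not code from the study):

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa between two label sequences: (po - pe) / (1 - pe),
    where po is observed agreement and pe is chance agreement."""
    assert len(y_true) == len(y_pred)
    n = len(y_true)
    labels = sorted(set(y_true) | set(y_pred))
    # observed agreement
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # chance agreement from the marginal label frequencies
    pe = sum((sum(t == c for t in y_true) / n) *
             (sum(p == c for p in y_pred) / n) for c in labels)
    return (po - pe) / (1 - pe)
```

Unlike raw misclassification rate, kappa stays near zero for a classifier that merely predicts the majority class, which is why it is preferred on imbalanced anaesthesia data.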

Relevance:

100.00%

Publisher:

Abstract:

Green energy is a key factor in driving down electricity bills while generating zero-carbon electricity for green buildings. Climate change and environmental policies are accelerating the shift from coal-fired (conventional) energy, which is not environmentally friendly, towards renewable energy for green buildings. Solar energy is one such clean energy, mitigating environmental impact while reducing electricity costs. A solar energy system collects sunlight with a solar array and stores the energy in a battery, from which it supplies the electricity needed by the whole house with zero carbon emissions. Since the market offers many solar array suppliers, this paper applies the superiority and inferiority multi-criteria ranking (SIR) method with 13 criteria, establishing I-flow and S-flow matrices, to evaluate four alternative solar energy systems and determine which alternative is best for powering a sustainable building. SIR is a well-known structured multi-criteria decision-support tool that is increasingly used in construction and building. The outcome of this paper gives users a meaningful indication for selecting solar energy systems.
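The S-flows and I-flows at the heart of SIR can be sketched in a heavily simplified form: for each alternative, count weighted pairwise wins (superiority) and losses (inferiority) across the criteria. This sketch uses a plain true-criterion comparison and invented data; the paper's actual SIR formulation (preference functions, 13 criteria, four alternatives) is richer than this.

```python
def sir_flows(scores, weights):
    """Simplified superiority (S) and inferiority (I) flows.
    scores[a][c]: performance of alternative a on criterion c
    (higher is better); weights[c]: weight of criterion c."""
    n, m = len(scores), len(weights)
    S = [0.0] * n
    I = [0.0] * n
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            for c in range(m):
                if scores[a][c] > scores[b][c]:
                    S[a] += weights[c]      # a beats b on criterion c
                elif scores[a][c] < scores[b][c]:
                    I[a] += weights[c]      # a loses to b on criterion c
    return S, I
```

Alternatives are then ranked by high S-flow and low I-flow.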

Relevance:

100.00%

Publisher:

Abstract:

Establishing age-at-death for skeletal remains is a vital component of forensic anthropology. The Suchey-Brooks (S-B) method of age estimation has been widely used since 1986 and relies on a visual assessment of the pubic symphyseal surface in comparison with a series of casts. Inter-population studies (Kimmerle et al., 2005; Djuric et al., 2007; Sakaue, 2006) demonstrate limitations of the S-B method; however, no assessment of this technique specific to Australian populations has been published. Aim: This investigation assessed the accuracy and applicability of the S-B method to an adult Australian Caucasian population by quantifying the error rates associated with the technique. Methods: Computed tomography (CT) and contact scans of the S-B casts were performed; each geometrically modelled surface was extracted and quantified for reference purposes. A Queensland skeletal database for Caucasian remains aged 15-70 years was initiated at the Queensland Health Forensic and Scientific Services - Forensic Pathology Mortuary (n=350). Three-dimensional reconstruction of the bone surface was performed using innovative volume visualisation protocols on the Amira® and Rapidform® platforms. Samples were allocated into 11 sub-sets of 5-year age intervals, and changes in surface geometry were quantified in relation to age, gender and asymmetry. Results: Preliminary results indicate that computational analysis was successfully applied to model morphological surface changes. Significant differences between observed and actual ages were noted. Furthermore, initial morphological assessment demonstrates significant bilateral asymmetry of the pubic symphysis, which is unaccounted for in the S-B method. These results suggest refinements to the S-B method when applied to Australian casework. Conclusion: This investigation promises to make anthropological analysis more quantitative and less invasive through CT imaging. 
The overarching goal is to improve skeletal identification and medico-legal death investigation in the coronial process by narrowing the range of age-at-death estimation in a biological profile.

Relevance:

100.00%

Publisher:

Abstract:

A method of producing porous complex oxides includes the steps of providing a mixture of (a) precursor elements suitable to produce the complex oxide, or (b) one or more precursor elements suitable to produce particles of the complex oxide and one or more metal oxide particles; and (c) a particulate carbon-containing pore-forming material selected to provide pore sizes in the range of 7-250 nm; and treating the mixture to (i) form the porous complex oxide, in which two or more of the precursor elements from (a) above, or one or more of the precursor elements and one or more of the metals in the metal oxide particles from (b) above, are incorporated into a phase of the complex metal oxide, the complex metal oxide having grain sizes in the range of 1-150 nm, and (ii) remove the pore-forming material under conditions such that the porous structure and composition of the complex oxide are substantially preserved. The method may also be used to produce non-refractory metal oxides. The mixture further includes a surfactant or a polymer. [on SciFinder(R)]

Relevance:

100.00%

Publisher:

Abstract:

Reported homocysteine (HCY) concentrations in human serum show poor concordance amongst laboratories due to endogenous HCY in the matrices used for assay calibrators and QCs. Hence, we have developed a fully validated LC–MS/MS method for measurement of HCY concentrations in human serum samples that addresses this issue by minimising matrix effects. We used small volumes (20 μL) of 2% Bovine Serum Albumin (BSA) as surrogate matrix for making calibrators and QCs with concentrations adjusted for the endogenous HCY concentration in the surrogate matrix using the method of standard additions. To aliquots (20 μL) of human serum samples, calibrators or QCs, were added HCY-d4 (internal standard) and tris-(2-carboxyethyl) phosphine hydrochloride (TCEP) as reducing agent. After protein precipitation, diluted supernatants were injected into the LC–MS/MS. Calibration curves were linear; QCs were accurate (5.6% deviation from nominal), precise (CV% ≤ 9.6%), stable for four freeze–thaw cycles, and when stored at room temperature for 5 h or at −80 °C (27 days). Recoveries from QCs in surrogate matrix or pooled human serum were 91.9 and 95.9%, respectively. There was no matrix effect using 6 different individual serum samples including one that was haemolysed. Our LC–MS/MS method has satisfied all of the validation criteria of the 2012 EMA guideline.
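The method of standard additions mentioned above determines an endogenous concentration by spiking known amounts of analyte into the matrix, fitting a response line, and extrapolating back to zero response: the endogenous concentration is intercept/slope. The sketch below is a generic least-squares illustration with made-up numbers, not the validated LC-MS/MS procedure itself.

```python
def standard_additions(added, response):
    """Fit a least-squares line through (added concentration, response)
    pairs; the endogenous concentration equals intercept / slope
    (the magnitude of the x-intercept)."""
    n = len(added)
    mx = sum(added) / n
    my = sum(response) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(added, response))
    sxx = sum((x - mx) ** 2 for x in added)
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept / slope  # endogenous concentration

# Illustrative: spikes of 0-3 units into a matrix containing 2 units
endogenous = standard_additions([0, 1, 2, 3], [6, 9, 12, 15])
```

This is how the calibrator concentrations in the BSA surrogate matrix can be corrected for the HCY already present in that matrix.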

Relevance:

100.00%

Publisher:

Abstract:

Porosity is one of the key parameters of the macroscopic structure of porous media, generally defined as the ratio of the free space (the volume occupied by air) within the material to the total volume of the material. Porosity is determined by measuring the skeletal volume and the envelope volume. The solid displacement method is an inexpensive and easy way to determine the envelope volume of a sample with an irregular shape. In this method, glass beads are generally used as the solid because of their uniform size, compactness and fluidity. However, beads smaller than an open pore's diameter will enter that pore. Although extensive research has been carried out on porosity determination using the displacement method, no study adequately reports micro-level observation of the sample during measurement. This study set out to assess the accuracy of the solid displacement method for bulk density measurement of dried foods through micro-level observation. Solid displacement measurements were conducted using a cylindrical vial (a cylindrical plastic container) and 57 µm glass beads to measure the bulk density of apple slices at different moisture contents. A scanning electron microscope (SEM), a profilometer and ImageJ software were used to investigate the penetration of glass beads into the surface pores during the porosity determination of dried food. A helium pycnometer was used to measure the particle density of the sample. Results show that a significant number of pores were large enough to allow the glass beads to enter, thereby causing some erroneous results. It was also found that coating the dried sample with an appropriate coating material prior to measurement can resolve this problem.
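The volume relations the abstract relies on can be written as a short sketch. These are the generic definitions (pore fraction of the envelope volume, and envelope volume from bead displacement); the variable names and numbers are illustrative, not measured values from the study.

```python
def porosity(envelope_volume, skeletal_volume):
    """Porosity = pore volume / envelope (bulk) volume."""
    return (envelope_volume - skeletal_volume) / envelope_volume

def envelope_volume_by_displacement(container_volume, beads_volume):
    """Envelope volume of the sample = container volume minus the
    volume of glass beads filling the remaining free space."""
    return container_volume - beads_volume
```

Bead intrusion into large surface pores underestimates the envelope volume and hence the porosity, which is the error the surface coating is intended to prevent.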

Relevance:

100.00%

Publisher:

Abstract:

A continuum method of analysis is presented in this paper for the problem of a smooth rigid pin in a finite composite plate subjected to uniaxial loading. The pin could be of interference, push or clearance fit. The plate is idealized to an orthotropic sheet. As the load on the plate is progressively increased, the contact along the pin-hole interface is partial above certain load levels in all three types of fit. In misfit pins (interference or clearance), such situations result in mixed boundary value problems with moving boundaries and in all of them the arc of contact and the stress and displacement fields vary nonlinearly with the applied load. In infinite domains similar problems were analysed earlier by ‘inverse formulation’ and, now, the same approach is selected for finite plates. Finite outer domains introduce analytical complexities in the satisfaction of boundary conditions. These problems are circumvented by adopting a method in which the successive integrals of boundary error functions are equated to zero. Numerical results are presented which bring out the effects of the rectangular geometry and the orthotropic property of the plate. The present solutions are the first step towards the development of special finite elements for fastener joints.