914 results for purely sequential procedure
Abstract:
A new procedure for spectrofluorimetric determination of free and total glycerol in biodiesel samples is presented. It is based on the oxidation of glycerol by periodate, forming formaldehyde, which reacts with acetylacetone, producing the luminescent 3,5-diacetyl-1,4-dihydrolutidine. A flow system with solenoid micro-pumps is proposed for solution handling. Free glycerol was extracted off-line from biodiesel samples with water, and total glycerol was converted to free glycerol by saponification with sodium ethylate under sonication. For free glycerol, a linear response was observed from 5 to 70 mg L-1 with a detection limit of 0.5 mg L-1, which corresponds to 2 mg kg-1 in biodiesel. The coefficient of variation was 0.9% (20 mg L-1, n = 10). For total glycerol, samples were diluted on-line, and the linear response range was 25 to 300 mg L-1. The detection limit was 1.4 mg L-1 (2.8 mg kg-1 in biodiesel) with a coefficient of variation of 1.4% (200 mg L-1, n = 10). The sampling rate was ca. 35 samples h-1, and the procedure was applied to the determination of free and total glycerol in biodiesel samples from soybean, cottonseed, and castor beans.
Abstract:
An improved flow-based procedure is proposed for turbidimetric sulphate determination in waters. The flow system was designed with solenoid micro-pumps in order to improve mixing conditions and minimize reagent consumption as well as waste generation. Stable baselines were observed in view of the pulsed flow characteristic of systems designed with solenoid micro-pumps, thus making the use of washing solutions unnecessary. The nucleation process was improved by stopping the flow prior to the measurement, thus avoiding the need for sulphate addition. When a 1-cm optical path flow cell was employed, linear response was achieved within 20-200 mg L-1, described by the equation S = -0.0767 + 0.00438C (C in mg L-1), r = 0.999. The detection limit was estimated as 3 mg L-1 at the 99.7% confidence level and the coefficient of variation was 2.4% (n = 20). The sampling rate was estimated as 33 determinations per hour. A long pathlength (100-cm) flow cell based on a liquid core waveguide was exploited to increase sensitivity in turbidimetry. Baseline drifts were avoided by a periodical washing step with EDTA in alkaline medium. Linear response was observed within 7-16 mg L-1, described by the equation S = -0.865 + 0.132C (C in mg L-1), r = 0.999. The detection limit was estimated as 150 µg L-1 at the 99.7% confidence level and the coefficient of variation was 3.0% (n = 20). The sampling rate was estimated as 25 determinations per hour. The results obtained for freshwater and rain water samples were in agreement with those achieved by batch turbidimetry at the 95% confidence level. (C) 2008 Elsevier B.V. All rights reserved.
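As a worked illustration of the calibration figures quoted above, the sketch below inverts the reported 1-cm-cell equation to convert a turbidance reading into a sulphate concentration and estimates a detection limit from blank readings; the blank values and the 3·sd/slope rule are assumptions for illustration, not data or code from the paper.

```python
# Minimal calibration sketch for the 1-cm cell data quoted above:
# S = -0.0767 + 0.00438*C  (C in mg L-1).  The blank readings are
# hypothetical placeholders; LOD = 3*sd(blank)/slope is the common
# 99.7%-confidence convention, not a value taken from the paper.
import numpy as np

intercept, slope = -0.0767, 0.00438

def concentration(signal):
    """Invert the linear calibration to get sulphate in mg L-1."""
    return (signal - intercept) / slope

blank_signals = np.array([0.001, 0.003, 0.002, 0.004, 0.002])  # placeholders
lod = 3 * blank_signals.std(ddof=1) / slope

print(f"S = 0.20 -> {concentration(0.20):.1f} mg L-1")
print(f"estimated LOD ~ {lod:.1f} mg L-1")
```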
Diagnostic errors and repetitive sequential classifications in on-line process control by attributes
Abstract:
The procedure of on-line process control by attributes, known as Taguchi's on-line process control, consists of inspecting the m-th item (a single item) at every m produced items and deciding, at each inspection, whether the fraction of conforming items was reduced or not. If the inspected item is non-conforming, the production is stopped for adjustment. As the inspection system can be subject to diagnosis errors, a probabilistic model is developed that classifies the examined item repeatedly until either a conforming or b non-conforming classifications are observed. The first event to occur (a conforming classifications or b non-conforming classifications) determines the final classification of the examined item. Properties of an ergodic Markov chain were used to obtain the expression for the average cost of the control system, which can be optimized over three parameters: the sampling interval of the inspections (m); the number of repeated conforming classifications (a); and the number of repeated non-conforming classifications (b). The optimum design is compared with two alternative approaches. The first consists of a simple preventive policy: the production system is adjusted at every n produced items (no inspection is performed). The second classifies the examined item repeatedly a fixed number of times, r, and considers it conforming if most classification results are conforming. Results indicate that the current proposal performs better than the procedure that fixes the number of repeated classifications and classifies the examined item as conforming if most classifications are conforming. On the other hand, depending on the degree of errors and the costs involved, the preventive policy can on average be more economical than the alternatives that require inspection. A numerical example illustrates the proposed procedure. (C) 2009 Elsevier B. V. All rights reserved.
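The repeated-classification rule described in this abstract can be illustrated with a small Monte Carlo sketch; the diagnosis-error rates below are hypothetical, and the exact Markov-chain cost expression derived in the paper is not reproduced.

```python
# Monte Carlo sketch of the repeated-classification rule described above:
# an item is classified repeatedly until either `a` "conforming" or
# `b` "non-conforming" results accumulate; the first event decides the
# final classification.  alpha/beta are hypothetical diagnosis-error
# rates; the paper's ergodic-Markov-chain cost model is not reproduced.
import random

def final_classification(item_conforming, a, b, alpha, beta):
    """Return True if the item is finally declared conforming."""
    n_conf = n_nonconf = 0
    while n_conf < a and n_nonconf < b:
        if item_conforming:
            observed_conf = random.random() >= alpha  # alpha: false alarm
        else:
            observed_conf = random.random() < beta    # beta: missed defect
        if observed_conf:
            n_conf += 1
        else:
            n_nonconf += 1
    return n_conf >= a

def acceptance_rate(item_conforming, a, b, alpha, beta, trials=100_000):
    hits = sum(final_classification(item_conforming, a, b, alpha, beta)
               for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    for a, b in [(1, 1), (2, 2), (3, 2)]:
        p_ok = acceptance_rate(True, a, b, alpha=0.05, beta=0.10)
        p_bad = acceptance_rate(False, a, b, alpha=0.05, beta=0.10)
        print(f"a={a}, b={b}: P(accept|conforming)={p_ok:.3f}, "
              f"P(accept|non-conforming)={p_bad:.3f}")
```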
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the scale of a field site represents a major, and as-of-yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The main objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity logged at collocated wells and to surface resistivity measurements, which are available throughout the studied site. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. Then a stochastic integration of low-resolution, large-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities is applied. The overall viability of this downscaling approach is tested and validated by comparing flow and transport simulations through the original and the upscaled hydraulic conductivity fields. Our results indicate that the proposed procedure allows us to obtain remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale for the purpose of improving predictions of groundwater flow and solute transport. However, extending corresponding approaches to the regional scale still represents one of the major challenges in the domain of hydrogeophysics. To address this problem, we have developed a regional-scale data integration methodology based on a two-step Bayesian sequential simulation approach. Our objective is to generate high-resolution stochastic realizations of the regional-scale hydraulic conductivity field in the common case where there exist spatially exhaustive but poorly resolved measurements of a related geophysical parameter, as well as highly resolved but spatially sparse collocated measurements of this geophysical parameter and the hydraulic conductivity. To integrate this multi-scale, multi-parameter database, we first link the low- and high-resolution geophysical data via a stochastic downscaling procedure. This is followed by relating the downscaled geophysical data to the high-resolution hydraulic conductivity distribution. After outlining the general methodology of the approach, we demonstrate its application to a realistic synthetic example where we consider as data high-resolution measurements of the hydraulic and electrical conductivities at a small number of borehole locations, as well as spatially exhaustive, low-resolution estimates of the electrical conductivity obtained from surface-based electrical resistivity tomography. The different stochastic realizations of the hydraulic conductivity field obtained using our procedure are validated by comparing their solute transport behaviour with that of the underlying "true" hydraulic conductivity field. We find that, even in the presence of strong subsurface heterogeneity, our proposed procedure allows for the generation of faithful representations of the regional-scale hydraulic conductivity structure and reliable predictions of solute transport over long, regional-scale distances.
Abstract:
In experiments with two-person sequential games we analyze whether responses to favorable and unfavorable actions depend on the elicitation procedure. In our hot treatment the second player responds to the first player's observed action, while in our cold treatment we follow the strategy method and have the second player decide on a contingent action for each and every possible first-player move, without first observing this move. Our analysis centers on the degree to which subjects deviate from the maximization of their pecuniary rewards as a response to others' actions. Our results show no difference in behavior between the two treatments. We also find evidence of the stability of subjects' preferences with respect to their behavior over time and to the consistency of their choices as first and second mover.
Abstract:
Calcitic nanofibres are ubiquitous habits of secondary calcium carbonate (CaCO3) accumulations observed in calcareous vadose environments. Despite their widespread occurrence, the origin of these nanofeatures remains enigmatic. Three possible mechanisms fuel the debate: (i) purely physicochemical processes, (ii) mineralization of rod-shaped bacteria, and (iii) crystal precipitation on organic templates. Nanofibres can be either mineral (calcitic) or organic in nature. They are very often observed in association with needle fibre calcite (NFC), another typical secondary CaCO3 habit in terrestrial environments. This association has contributed to some confusion between the two habits; however, they are truly two distinct calcitic features, and their recurrent association is likely to be an important fact for understanding the origin of nanofibres. In this paper the different hypotheses that currently exist to explain the origin of calcitic nanofibres are critically reviewed. In addition, a new hypothesis for the origin of nanofibres is proposed, based on the fact that current knowledge attributes a fungal origin to NFC. As this feature and nanofibres are recurrently observed together, a possible fungal origin for nanofibres associated with NFC is investigated. Sequential enzymatic digestion of the fungal cell wall of selected fungal species demonstrates that the fungal cell wall can be a source of organic nanofibres. The obtained organic nanofibres show a striking morphological resemblance to their natural counterparts, emphasizing a fungal origin for part of the organic nanofibres observed in association with NFC. It is further hypothesized that these organic nanofibres may act as templates for calcite nucleation in a biologically influenced mineralization process, generating calcitic nanofibres. This highlights the possible involvement of fungi in CaCO3 biomineralization processes, a role that is still poorly documented. Moreover, on a global scale, the organomineralization of organic nanofibres into calcitic nanofibres might be an overlooked process deserving more attention to specify its impact on the biogeochemical cycles of both Ca and C.
Abstract:
We present a simple randomized procedure for the prediction of a binary sequence. The algorithm uses ideas from recent developments of the theory of the prediction of individual sequences. We show that if the sequence is a realization of a stationary and ergodic random process, then the average number of mistakes converges, almost surely, to that of the optimum, given by the Bayes predictor.
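Loosely in the spirit of this abstract (and not the authors' construction), the sketch below combines order-k Markov "experts" with exponential weights and predicts the next bit by thresholding the aggregated forecast; the paper's randomization device and its almost-sure optimality argument are omitted.

```python
# Toy sketch loosely in the spirit of the abstract (NOT the authors'
# algorithm): order-k Markov "experts" are combined with exponential
# weights and the next bit is predicted by thresholding the aggregated
# forecast.  Expert k predicts the empirical frequency of 1 following
# the current length-k context.
import math
import random
from collections import defaultdict

MAX_ORDER = 3   # largest Markov context length used by the experts
ETA = 0.5       # learning rate for the exponential weights

def online_mistake_rate(bits):
    counts = [defaultdict(lambda: [0, 0]) for _ in range(MAX_ORDER + 1)]
    log_w = [0.0] * (MAX_ORDER + 1)
    mistakes = 0
    for t, y in enumerate(bits):
        probs = []
        for k in range(MAX_ORDER + 1):
            ctx = tuple(bits[max(0, t - k):t])
            zeros, ones = counts[k][ctx]
            probs.append((ones + 1) / (zeros + ones + 2))  # Laplace smoothing
        w = [math.exp(lw - max(log_w)) for lw in log_w]
        p1 = sum(wi * pi for wi, pi in zip(w, probs)) / sum(w)
        mistakes += int((p1 >= 0.5) != y)       # thresholded prediction
        for k in range(MAX_ORDER + 1):          # update losses and counts
            ctx = tuple(bits[max(0, t - k):t])
            log_w[k] -= ETA * abs(probs[k] - y)
            counts[k][ctx][y] += 1
    return mistakes / len(bits)

# Example: a two-state Markov source with flip probability 0.2; the
# online mistake rate should settle near that value.
bits, state = [], 0
for _ in range(20000):
    state = state if random.random() > 0.2 else 1 - state
    bits.append(state)
print("average mistakes:", round(online_mistake_rate(bits), 3))
```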
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the regional scale represents a major, and as-of-yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The basic objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity, which is available throughout the model space. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. This method is then applied to the stochastic integration of low-resolution, regional-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities. Finally, the overall viability of this downscaling approach is tested and verified by performing and comparing flow and transport simulations through the original and the downscaled hydraulic conductivity fields. Our results indicate that the proposed procedure does indeed allow for obtaining remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
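The non-parametric link between collocated electrical and hydraulic conductivities on which these downscaling abstracts rely can be sketched with synthetic data: a bivariate Gaussian kernel density is fitted to the collocated log-conductivity pairs, and the conditional mean of log-K given a sigma value is read off that density. The full Bayesian sequential simulation (random visiting path, conditioning on already-simulated nodes) is not reproduced.

```python
# Minimal sketch (synthetic data) of the non-parametric link described
# above: collocated log10(electrical conductivity) / log10(hydraulic
# conductivity) pairs define a bivariate Gaussian kernel density, and
# the conditional mean of log-K given a logged or ERT-derived sigma
# value is read off that density.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# synthetic collocated borehole data: log-K loosely correlated with log-sigma
log_sigma = rng.normal(-2.0, 0.4, size=200)
log_k = 1.5 * log_sigma - 3.0 + rng.normal(0.0, 0.3, size=200)

kde = gaussian_kde(np.vstack([log_sigma, log_k]))

def conditional_mean_log_k(sigma_value, grid=np.linspace(-8.0, -2.0, 400)):
    """E[log K | log sigma = sigma_value] from the joint kernel density."""
    pts = np.vstack([np.full_like(grid, sigma_value), grid])
    density = kde(pts)
    return np.sum(grid * density) / np.sum(density)

# estimate log-K at "non-sampled" locations where only sigma is available
for s in (-2.6, -2.0, -1.4):
    print(f"log10(sigma) = {s:+.1f} -> log10(K) ~ {conditional_mean_log_k(s):.2f}")
```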
Abstract:
(from the journal abstract) Scientific interest in the concept of alliance has been maintained and stimulated by repeated findings that a strong alliance is associated with a facilitative treatment process and favourable treatment outcome. However, because the alliance is not in itself a therapeutic technique, these findings were unsuccessful in bringing about significant improvements in clinical practice. An essential issue in modern psychotherapeutic research concerns the relation between common factors, which are known to explain great variance in empirical results, and the specific therapeutic techniques that are the primary basis of clinical training and practice. This pilot study explored sequences in therapist interventions over four sessions of brief psychodynamic investigation. It aims to determine whether patterns of interventions can be found during brief psychodynamic investigation and whether these patterns can be associated with differences in the therapeutic alliance. Therapist interventions were coded using the Psychodynamic Intervention Rating Scale (PIRS), which enables the classification of each therapist utterance into one of 9 categories of interpretive interventions (defence interpretation, transference interpretation), supportive interventions (question, clarification, association, reflection, supportive strategy) or interventions about the therapeutic frame (work-enhancing statement, contractual arrangement). Data analysis was done using lag sequential analysis, a statistical procedure that identifies contingent relationships in time among a large number of behaviours. The sample includes N = 20 therapist-patient dyads assigned to three groups: (1) a high and stable alliance profile, (2) a low and stable alliance profile and (3) an improving alliance profile. Results suggest that therapists most often have one single intention when interacting with patients. Large sequences of questions, associations and clarifications were found, which indicates that if a therapist asks a question, clarifies or associates, there is a significant probability that he will continue doing so. A single-theme sequence involving frame interventions was also observed. These sequences were found in all three alliance groups. One exception was found for mixed sequences of interpretations and supportive interventions. The simultaneous use of these two interventions was associated with a high or an improving alliance over the course of treatment, but not with a low and stable alliance, where only single-theme sequences of interpretations were found. In other words, in this last group, therapists were either supportive or interpretative, whereas with a high or improving alliance, interpretations were always given along with supportive interventions. This finding provides evidence that examining therapist interpretations individually can only yield incomplete findings. How interpretations are given is important for alliance building. It also suggests that therapists should carefully dose their interpretations and be supportive when necessary in order to build a strong therapeutic alliance. From a research point of view, to study technical interventions we must look into dynamic variables such as dosage, the supportive quality of an intervention, and timing. (PsycINFO Database Record (c) 2005 APA, all rights reserved)
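The core bookkeeping of the lag sequential analysis mentioned above can be sketched in a few lines: transitions between coded interventions are tabulated, and each conditional probability is compared with the overall base rate through a crude normal approximation. The codes below are hypothetical placeholders, not PIRS data.

```python
# Minimal sketch of a lag-1 sequential analysis on a coded intervention
# stream (hypothetical codes, not PIRS data): transition counts are
# tabulated and each conditional probability P(next | prev) is compared
# with the base rate of the following code via a crude z-score.
import numpy as np

codes = ["Q", "Q", "C", "A", "Q", "F", "Q", "C", "C", "A", "Q", "Q", "F", "C"]
labels = sorted(set(codes))
idx = {c: i for i, c in enumerate(labels)}

n = len(labels)
counts = np.zeros((n, n))
for prev, nxt in zip(codes, codes[1:]):
    counts[idx[prev], idx[nxt]] += 1

row_totals = counts.sum(axis=1, keepdims=True)
safe_rows = np.where(row_totals == 0, 1, row_totals)
p_cond = counts / safe_rows                    # P(next | prev)
p_base = counts.sum(axis=0) / counts.sum()     # P(next) overall

# crude z-score per cell: does "next" follow "prev" more often than chance?
z = (p_cond - p_base) / np.sqrt(p_base * (1 - p_base) / safe_rows)

for i, prev in enumerate(labels):
    for j, nxt in enumerate(labels):
        print(f"P({nxt} | {prev}) = {p_cond[i, j]:.2f}  z = {z[i, j]:+.2f}")
```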
Abstract:
In this paper we propose a method for computing JPEG quantization matrices for a given mean square error (MSE) or PSNR. We then employ our method to compute JPEG standard progressive operation mode definition scripts using a quantization-based approach. Therefore, it is no longer necessary to use a trial-and-error procedure to obtain a desired PSNR and/or definition script, reducing cost. Firstly, we establish a relationship between a Laplacian source and its uniform quantization error. We apply this model to the coefficients obtained in the discrete cosine transform stage of the JPEG standard. Then, an image may be compressed using the JPEG standard under a global MSE (or PSNR) constraint and a set of local constraints determined by the JPEG standard and visual criteria. Secondly, we study the JPEG standard progressive operation mode from a quantization-based approach. A relationship between the measured image quality at a given stage of the coding process and a quantization matrix is found. Thus, the definition-script construction problem can be reduced to a quantization problem. Simulations show that our method generates better quantization matrices than the classical method based on scaling the JPEG default quantization matrix. The estimation of PSNR usually has an error smaller than 1 dB, and this figure decreases for high PSNR values. Definition scripts may be generated that avoid an excessive number of stages and remove small stages that do not contribute a noticeable image-quality improvement during decoding.
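A rough numerical sketch of the model-based idea: treating each DCT coefficient as Laplacian, the expected uniform-quantization error is estimated by Monte Carlo, the per-pixel MSE follows from the orthonormality of the DCT, and a global scale factor on a base quantization matrix is bisected to reach a target PSNR. The Laplacian scales and the base matrix below are placeholders, not the paper's values or its exact procedure.

```python
# Rough numerical sketch of the model-based idea in the abstract: each
# DCT coefficient is treated as Laplacian, the expected squared error of
# uniform quantization with step q is estimated by Monte Carlo, the
# per-pixel MSE follows from DCT orthonormality, and a global scale on a
# base quantization matrix is bisected to reach a target PSNR.
import numpy as np

rng = np.random.default_rng(1)

# hypothetical Laplacian scale per DCT coefficient (larger at low
# frequencies) and a flat placeholder base quantization matrix
u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
lap_scale = 60.0 / (1.0 + u + v)
base_q = np.full((8, 8), 16.0)

def quant_mse(step, scale, n=20_000):
    """Monte Carlo E[(x - q*round(x/q))^2] for x ~ Laplace(0, scale)."""
    x = rng.laplace(0.0, scale, size=n)
    return np.mean((x - step * np.round(x / step)) ** 2)

def model_psnr(qmatrix):
    mse = np.mean([quant_mse(qmatrix[i, j], lap_scale[i, j])
                   for i in range(8) for j in range(8)])
    return 10.0 * np.log10(255.0 ** 2 / mse)

def scale_for_target_psnr(target, lo=0.05, hi=10.0, iters=20):
    """Bisect a global scale on base_q so the model PSNR hits target."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if model_psnr(mid * base_q) > target:
            lo = mid   # PSNR still too high -> coarser quantization allowed
        else:
            hi = mid
    return 0.5 * (lo + hi)

s = scale_for_target_psnr(35.0)
print(f"scale = {s:.2f}, model PSNR = {model_psnr(s * base_q):.2f} dB")
```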
Abstract:
This work proposes a sequential injection analysis (SIA) system for the spectrophotometric determination of norfloxacin (NOR) and ciprofloxacin (CIP) in pharmaceutical formulations. The methodology was based on the reaction of these drugs with p-(dimethylamino)cinnamaldehyde in micellar medium, producing orange colored products (λmax = 495 nm). Beer's law was obeyed in the concentration ranges from 2.75x10-5 to 3.44x10-4 mol L-1 and from 3.26x10-5 to 3.54x10-4 mol L-1 for NOR and CIP, respectively, and the sampling rate was 25 h-1. Commercial samples were analyzed and the results obtained through the proposed method were in good agreement with those obtained using the reference procedure at the 95% confidence level.
Abstract:
A qualitative spot test and tandem quantitative analysis of dipyrone in the bulk drug and in pharmaceutical preparations are proposed. The formation of a reddish-violet color indicates a positive result. In sequence, a quantitative procedure can be performed in the same flask. The quantitative results obtained were statistically compared with those obtained with the method indicated by the Brazilian Pharmacopoeia, using the Student's t and the F tests. Considering the concentration in a 100 µL aliquot, the qualitative visual limit of detection is about 5×10-6 g; the instrumental LOD ≅ 1.4×10-4 mol L-1 and LOQ ≅ 4.5×10-4 mol L-1.
Abstract:
This paper describes a new statistical, model-based approach to building a contact state observer. The observer uses measurements of the contact force and position, along with prior information about the task encoded in a graph, to determine the current location of the robot in the task configuration space. Each node represents what the measurements will look like in a small region of configuration space by storing a predictive statistical measurement model. This approach assumes that the measurements are statistically block-independent conditioned on knowledge of the model, which is a fairly good model of the actual process. Arcs in the graph represent possible transitions between models. Beam Viterbi search is used to match the measurement history against possible paths through the model graph in order to estimate the most likely path for the robot. The resulting approach provides a new decision process that can be used as an observer for event-driven manipulation programming. The decision procedure is significantly more robust than simple threshold decisions because the measurement history is used to make decisions. The approach can be used to enhance the capabilities of autonomous assembly machines and in quality control applications.
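The decision core described above can be sketched as a standard Viterbi decoder over a small hypothetical task graph in which each node carries a Gaussian measurement model; beam pruning and the paper's actual models are omitted, and all numbers are illustrative.

```python
# Compact sketch of the decision core described above: each node of a
# hypothetical task graph carries a Gaussian model of the force/position
# measurements expected there, arcs define the allowed transitions, and
# Viterbi search over the measurement history returns the most likely
# node sequence.  Beam pruning is omitted; all numbers are illustrative.
import numpy as np
from scipy.stats import multivariate_normal

# nodes: 0 = free motion, 1 = single contact, 2 = fully seated
means = [np.array([0.0, 0.0]), np.array([5.0, 0.5]), np.array([12.0, 0.0])]
covs = [np.eye(2) * 1.0, np.eye(2) * 2.0, np.eye(2) * 0.5]
# log transition matrix: a state may persist or advance to the next one
trans = np.log(np.array([[0.8, 0.2, 0.0],
                         [0.0, 0.8, 0.2],
                         [0.0, 0.0, 1.0]]) + 1e-12)

def viterbi(measurements):
    n_states = len(means)
    log_b = np.array([[multivariate_normal(means[s], covs[s]).logpdf(z)
                       for s in range(n_states)] for z in measurements])
    delta = np.full((len(measurements), n_states), -np.inf)
    back = np.zeros((len(measurements), n_states), dtype=int)
    delta[0] = np.log([1.0, 1e-12, 1e-12]) + log_b[0]   # start in free motion
    for t in range(1, len(measurements)):
        scores = delta[t - 1][:, None] + trans           # (from, to)
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_b[t]
    path = [int(delta[-1].argmax())]
    for t in range(len(measurements) - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# simulated force-magnitude / lateral-force readings during an insertion
z = [[0.3, 0.1], [0.5, -0.2], [4.8, 0.7], [5.2, 0.3], [11.5, 0.1], [12.2, -0.1]]
print(viterbi([np.array(m) for m in z]))  # expected: [0, 0, 1, 1, 2, 2]
```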
Abstract:
Pharmacovigilance, the monitoring of adverse events (AEs), is an integral part of the clinical evaluation of a new drug. Until recently, attempts to relate the incidence of AEs to putative causes have been restricted to the evaluation of simple demographic and environmental factors. The advent of large-scale genotyping, however, provides an opportunity to look for associations between AEs and genetic markers, such as single nucleotide polymorphisms (SNPs). It is envisaged that a very large number of SNPs, possibly over 500 000, will be used in pharmacovigilance in an attempt to identify any genetic difference between patients who have experienced an AE and those who have not. We propose a sequential genome-wide association test for analysing AEs as they arise, allowing evidence-based decision-making at the earliest opportunity. This gives us the capability of quickly establishing whether there is a group of patients at high risk of an AE based upon their DNA. Our method provides a valid test that takes account of linkage disequilibrium and allows for the sequential nature of the procedure. The method is more powerful than using a correction, such as Šidák's, that assumes that the tests are independent. Copyright © 2006 John Wiley & Sons, Ltd.
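For contrast with the abstract's point, a naive monitoring scheme can be sketched as follows: after each interim look, an allelic chi-square test is recomputed for every SNP against a Šidák-corrected threshold. This sketch ignores both linkage disequilibrium and the repeated looks, which is precisely the conservatism the proposed sequential test is designed to improve on; all data are synthetic.

```python
# Naive monitoring sketch (not the authors' method): after each interim
# look, an allelic 2x2 chi-square test is recomputed for every SNP and
# compared with a Sidak-corrected threshold.  The paper's test
# additionally accounts for linkage disequilibrium and for the repeated
# looks at the data; this sketch does neither.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(2)
n_snps, n_patients = 1000, 400

# 0/1/2 minor-allele counts; SNP 0 truly raises the AE risk (synthetic)
genotypes = rng.binomial(2, 0.3, size=(n_patients, n_snps))
risk = 0.05 + 0.15 * genotypes[:, 0]
adverse = rng.random(n_patients) < risk

sidak_alpha = 1 - (1 - 0.05) ** (1 / n_snps)

def scan(n_seen):
    """Test every SNP on the first n_seen patients; return flagged SNPs."""
    flagged = []
    for j in range(n_snps):
        table = np.zeros((2, 2))
        for g, ae in zip(genotypes[:n_seen, j], adverse[:n_seen]):
            table[int(ae), 0] += g          # minor alleles
            table[int(ae), 1] += 2 - g      # major alleles
        if table.sum(axis=1).all() and table.sum(axis=0).all():
            _, p, _, _ = chi2_contingency(table)
            if p < sidak_alpha:
                flagged.append(j)
    return flagged

for n_seen in (100, 200, 400):   # interim looks as adverse events accrue
    print(n_seen, "patients:", scan(n_seen))
```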