967 results for Sequential analysis
Abstract:
Thermal degradation of PLA is a complex process, since it comprises many simultaneous reactions. The use of analytical techniques such as differential scanning calorimetry (DSC) and thermogravimetry (TGA) yields useful information, but a more sensitive analytical technique is necessary to identify and quantify the PLA degradation products. In this work the thermal degradation of PLA at high temperatures was studied using a pyrolyzer coupled to a gas chromatograph with mass spectrometry detection (Py-GC/MS). Pyrolysis conditions (temperature and time) were optimized to obtain an adequate chromatographic separation of the compounds formed during heating. The best resolution of chromatographic peaks was obtained by pyrolyzing the material from room temperature to 600 °C for 0.5 s. These conditions allowed the major compounds produced during PLA thermal degradation in an inert atmosphere to be identified and quantified. The operating parameters were selected using sequential pyrolysis combined with the fitting of mathematical models. This strategy demonstrated that PLA degrades at high temperatures following a non-linear behaviour. Both the logistic and Boltzmann models fitted the experimental results well, although the Boltzmann model provided the better estimate of the time at which 50% of the PLA was degraded. In conclusion, the Boltzmann model can be applied as a tool for simulating PLA thermal degradation.
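The abstract does not reproduce the model equations; as a minimal sketch of the fitting step it describes, the following assumes the standard four-parameter Boltzmann sigmoid and synthetic mass-loss data (all values illustrative, not the paper's):

```python
# Minimal sketch: fitting a Boltzmann sigmoid to hypothetical "fraction degraded"
# data to estimate t50, the time at which 50% of the material has degraded.
# The functional form and the data are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(t, a1, a2, t0, dt):
    """Standard Boltzmann sigmoid: transitions from a1 to a2 around t0."""
    return a2 + (a1 - a2) / (1.0 + np.exp((t - t0) / dt))

# Synthetic fraction-degraded measurements versus pyrolysis time (arbitrary units)
t_data = np.linspace(0, 10, 25)
y_data = boltzmann(t_data, 0.0, 1.0, 4.5, 0.8) + np.random.normal(0, 0.02, t_data.size)

popt, _ = curve_fit(boltzmann, t_data, y_data, p0=[0.0, 1.0, 5.0, 1.0])
a1, a2, t0, dt = popt
print(f"Estimated t50 (inflection point) ≈ {t0:.2f}")  # 50% degraded when a1=0, a2=1
```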
Abstract:
By switching the level of analysis and aggregating data from the micro-level of individual cases to the macro-level, quantitative data can be analysed within a more case-based approach. This paper presents such an approach in two steps: In a first step, it discusses the combination of Social Network Analysis (SNA) and Qualitative Comparative Analysis (QCA) in a sequential mixed-methods research design. In such a design, quantitative social network data on individual cases and their relations at the micro-level are used to describe the structure of the network that these cases constitute at the macro-level. Different network structures can then be compared by QCA. This strategy allows adding an element of potential causal explanation to SNA, while SNA-indicators allow for a systematic description of the cases to be compared by QCA. Because mixing methods can be a promising, but also a risky endeavour, the methodological part also discusses the possibility that underlying assumptions of both methods could clash. In a second step, the research design presented beforehand is applied to an empirical study of policy network structures in Swiss politics. Through a comparison of 11 policy networks, causal paths that lead to a conflictual or consensual policy network structure are identified and discussed. The analysis reveals that different theoretical factors matter and that multiple conjunctural causation is at work. Based on both the methodological discussion and the empirical application, it appears that a combination of SNA and QCA can represent a helpful methodological design for social science research and a possibility of using quantitative data with a more case-based approach.
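As an illustration of the aggregation step described above (micro-level network data summarized into macro-level, case-level indicators that can then feed a QCA), the following sketch uses networkx and pandas with hypothetical case names and randomly generated networks:

```python
# Minimal sketch of the micro-to-macro aggregation: individual-level network data
# are collapsed into case-level SNA indicators, one row per policy network, which
# can then serve as conditions/outcome in a QCA. All names are illustrative.
import networkx as nx
import pandas as pd

def macro_indicators(name, graph):
    """Collapse an individual-level network into case-level indicators."""
    n = graph.number_of_nodes()
    return {
        "case": name,
        "density": nx.density(graph),
        "n_actors": n,
        "avg_degree": sum(d for _, d in graph.degree()) / max(n, 1),
    }

# Hypothetical policy networks (randomly generated here for illustration only)
networks = {f"policy_{i}": nx.gnp_random_graph(20, p) for i, p in enumerate([0.1, 0.3, 0.5])}

qca_table = pd.DataFrame([macro_indicators(n, g) for n, g in networks.items()])
print(qca_table)  # each row is one case for the cross-case (QCA) comparison
```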
Abstract:
We consider a buying-selling problem in which two stops of a sequence of independent random variables are required. An optimal stopping rule and the value of the game are obtained.
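The abstract does not state the stopping rule itself; as general background only, a finite-horizon single-stop problem over observations $X_1,\dots,X_N$ is characterized by the backward-induction recursion below, and the two-stop (buy, then sell) problem nests one such recursion inside another:

```latex
% Standard backward-induction recursion for finite-horizon optimal stopping
% (general background, not the specific rule derived in the paper).
V_N = X_N, \qquad
V_n = \max\bigl(X_n,\ \mathbb{E}[V_{n+1} \mid \mathcal{F}_n]\bigr), \quad n = N-1,\dots,1 .
```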
Abstract:
A strategy for the production and subsequent characterization of biofunctionalized silica particles is presented. The particles were engineered to produce a bifunctional material capable of both (a) the attachment of fluorescent dyes for particle encoding and (b) the sequential modification of the surface of the particles to couple oligonucleotide probes. A combination of microscopic and analytical methods is used to demonstrate that modification of the particles with 3-aminopropyl trimethoxysilane results in an even distribution of amine groups across the particle surface. Evidence is provided to indicate that there are negligible interactions between the bound fluorescent dyes and the attached biomolecules. A unique approach was adopted to provide direct quantification of the oligonucleotide probe loading on the particle surface through X-ray photoelectron spectroscopy, a technique that may have a major impact on current researchers and users of bead-based technologies. A simple hybridization assay showing high sequence specificity is included to demonstrate the applicability of these particles to DNA screening.
Abstract:
A program can be decomposed into a set of possible execution paths. These can be described in terms of primitives such as assignments, assumptions and coercions, and composition operators such as sequential composition and nondeterministic choice as well as finitely or infinitely iterated sequential composition. Some of these paths cannot possibly be followed (they are dead or infeasible), and they may or may not terminate. Decomposing programs into paths provides a foundation for analyzing properties of programs. Our motivation is timing constraint analysis of real-time programs, but the same techniques can be applied in other areas such as program testing. In general the set of execution paths for a program is infinite. For timing analysis we would like to decompose a program into a finite set of subpaths that covers all possible execution paths, in the sense that we only have to analyze the subpaths in order to determine suitable timing constraints that cover all execution paths.
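As an illustrative sketch (not the paper's formalism), the path primitives and composition operators mentioned above can be represented as a small algebraic data type, with nondeterministic choices expanded into a finite set of straight-line subpaths; iterated composition is omitted here:

```python
# Illustrative sketch: execution-path primitives (assignment, assumption) and
# composition operators (sequential composition, nondeterministic choice) as a
# small algebraic data type. Iteration is deliberately omitted.
from dataclasses import dataclass
from typing import Tuple

class Path:  # base class for path expressions
    pass

@dataclass(frozen=True)
class Assign(Path):   # primitive: x := e
    var: str
    expr: str

@dataclass(frozen=True)
class Assume(Path):   # primitive: proceed only if the condition holds
    cond: str

@dataclass(frozen=True)
class Seq(Path):      # sequential composition p ; q
    first: Path
    second: Path

@dataclass(frozen=True)
class Choice(Path):   # nondeterministic choice p [] q
    left: Path
    right: Path

def enumerate_subpaths(p: Path) -> Tuple[Tuple[Path, ...], ...]:
    """Expand nondeterministic choices into a set of straight-line subpaths."""
    if isinstance(p, (Assign, Assume)):
        return ((p,),)
    if isinstance(p, Seq):
        return tuple(a + b
                     for a in enumerate_subpaths(p.first)
                     for b in enumerate_subpaths(p.second))
    if isinstance(p, Choice):
        return enumerate_subpaths(p.left) + enumerate_subpaths(p.right)
    raise TypeError(p)

# Example: (x := 0 ; (assume x > 0 [] x := 1)) yields two straight-line subpaths
prog = Seq(Assign("x", "0"), Choice(Assume("x > 0"), Assign("x", "1")))
for sub in enumerate_subpaths(prog):
    print(sub)
```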
Abstract:
The main aim of this work is to investigate sequential pyrolysis of willow SRC using two different heating rates (25 and 1500 °C/min) between 320 and 520 °C. Thermogravimetric analysis (TGA) and pyrolysis-gas chromatography-mass spectrometry (Py-GC-MS) have been used for this analysis. In addition, laboratory-scale processing has been undertaken to compare product distributions from fast and slow pyrolysis at 500 °C. Fast pyrolysis was carried out using a 1 kg/h continuous bubbling fluidized bed reactor, and slow pyrolysis using a 100 g batch reactor. Findings from this study show that heating rate and pyrolysis temperature have a significant influence on the chemical content of the decomposition products. From the analytical sequential pyrolysis, an inverse relationship was seen between the total yield of furfural (at high heating rates) and 2-furanmethanol (at low heating rates). The total yield of 1,2-dihydroxybenzene (catechol) was found to be significantly higher at low heating rates. The catechol intermediates 2-methoxy-4-(2-propenyl)phenol (eugenol), 2-methoxyphenol (guaiacol), 4-hydroxy-3,5-dimethoxybenzaldehyde (syringaldehyde) and 4-hydroxy-3-methoxybenzaldehyde (vanillin) were found to be highest at high heating rates. It was also found that laboratory-scale processing alters the chemical composition of the pyrolysis bio-oil and the proportions of pyrolysis product yields. The GC-MS/FID analysis of the fast and slow pyrolysis bio-oils reveals significant differences. © 2011 Elsevier Ltd. All rights reserved.
Abstract:
Grafting of antioxidants and other modifiers onto polymers by reactive extrusion has been performed successfully by the Polymer Processing and Performance Group at Aston University. Traditionally, the optimum conditions for the grafting process have been established within a Brabender internal mixer. Transfer of this batch process to a continuous processor, such as an extruder, has typically been empirical. To have more confidence in the success of direct transfer of the process requires knowledge of, and comparison between, residence times, mixing intensities, shear rates and flow regimes in the internal mixer and in the continuous processor. The continuous processor chosen for the current work is the closely intermeshing, co-rotating twin-screw extruder (CICo-TSE). CICo-TSEs contain screw elements that convey material with a self-wiping action and are widely used for polymer compounding and blending. Of the different mixing modules contained within the CICo-TSE, the trilobal elements, which impose intensive mixing, and the mixing discs, which impose extensive mixing, are of importance when establishing the intensity of mixing. In this thesis, the flow patterns within the various regions of the single-flighted conveying screw elements and within both the trilobal element and mixing disc zones of a Betol BTS40 CICo-TSE have been modelled using the computational fluid dynamics package Polyflow. A major obstacle encountered when solving the flow problem within all of these sets of elements arises from both the complex geometry and the time-dependent flow boundaries as the elements rotate about their fixed axes. Simulation of the time-dependent boundaries was overcome by selecting a number of sequential 2D and 3D geometries, used to represent partial mixing cycles. The flow fields were simulated using the ideal rheological properties of polypropylene and characterised in terms of velocity vectors, shear stresses generated and a parameter known as the mixing efficiency. The majority of the large 3D simulations were performed on the Cray J90 supercomputer at the Rutherford-Appleton laboratories, with pre- and post-processing operations achieved via a Silicon Graphics Indy workstation. A mechanical model was constructed consisting of various CICo-TSE elements rotating within a transparent outer barrel. A technique was developed using coloured viscous clays whereby the flow patterns and mixing characteristics within the CICo-TSE may be visualised. In order to test and verify the simulated predictions, the patterns observed within the mechanical model were compared with the flow patterns predicted by the computational model. The flow patterns within the single-flighted conveying screw elements in particular showed good agreement between the experimental and simulated results.
Abstract:
The principled statistical application of Gaussian random field models used in geostatistics has historically been limited to data sets of a small size. This limitation is imposed by the requirement to store and invert the covariance matrix of all the samples to obtain a predictive distribution at unsampled locations, or to use likelihood-based covariance estimation. Various ad hoc approaches to solve this problem have been adopted, such as selecting a neighborhood region and/or a small number of observations to use in the kriging process, but these have no sound theoretical basis and it is unclear what information is being lost. In this article, we present a Bayesian method for estimating the posterior mean and covariance structures of a Gaussian random field using a sequential estimation algorithm. By imposing sparsity in a well-defined framework, the algorithm retains a subset of “basis vectors” that best represent the “true” posterior Gaussian random field model in the relative entropy sense. This allows a principled treatment of Gaussian random field models on very large data sets. The method is particularly appropriate when the Gaussian random field model is regarded as a latent variable model, which may be nonlinearly related to the observations. We show the application of the sequential, sparse Bayesian estimation in Gaussian random field models and discuss its merits and drawbacks.
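For background on the limitation the article addresses, the following sketch shows exact Gaussian-process prediction, in which the Cholesky factorization of the full n x n covariance matrix is the O(n^3) step that sparse, sequential approximations avoid; the kernel and data are illustrative assumptions, and this is not the article's sparse algorithm:

```python
# Background sketch: exact GP posterior mean/variance. The Cholesky of the full
# covariance matrix is the O(n^3) cost that motivates sparse, sequential methods.
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    """Exact GP prediction; the Cholesky factorization is the O(n^3) step."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    L = np.linalg.cholesky(K)                      # O(n^3) in the training-set size
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    K_s = rbf_kernel(x_train, x_test)
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(rbf_kernel(x_test, x_test)) - np.sum(v**2, axis=0)
    return mean, var

x = np.random.uniform(-3, 3, 200)
y = np.sin(x) + 0.1 * np.random.randn(200)
mu, var = gp_predict(x, y, np.linspace(-3, 3, 5))
print(mu, var)
```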
Abstract:
The retrieval of wind vectors from satellite scatterometer observations is a non-linear inverse problem. A common approach to solving inverse problems is to adopt a Bayesian framework and to infer the posterior distribution of the parameters of interest given the observations by using a likelihood model relating the observations to the parameters, and a prior distribution over the parameters. We show how Gaussian process priors can be used efficiently with a variety of likelihood models, using local forward (observation) models and direct inverse models for the scatterometer. We present an enhanced Markov chain Monte Carlo method to sample from the resulting multimodal posterior distribution. We go on to show how the computational complexity of the inference can be controlled by using a sparse, sequential Bayes algorithm for estimation with Gaussian processes. This helps to overcome the most serious barrier to the use of probabilistic, Gaussian process methods in remote sensing inverse problems, which is the prohibitively large size of the data sets. We contrast the sampling results with the approximations that are found by using the sparse, sequential Bayes algorithm.
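The Bayesian framework referred to above combines the likelihood model and the prior into a posterior over wind vectors; in generic notation (illustrative, not the paper's own symbols):

```latex
% Generic Bayesian formulation of the retrieval problem: posterior over wind
% vectors w given scatterometer observations o (notation illustrative).
p(\mathbf{w} \mid \mathbf{o}) \;\propto\; p(\mathbf{o} \mid \mathbf{w})\, p(\mathbf{w})
```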
Abstract:
The reliability of printed circuit board assemblies under dynamic environments, such as those found onboard airplanes, ships and land vehicles, is receiving more attention. This research analyses the dynamic characteristics of a printed circuit board (PCB) supported by edge retainers and plug-in connectors. By modelling the wedge retainers and connector as simply supported boundary conditions with appropriate rotational spring stiffnesses along their respective edges, with the aid of finite element codes, natural frequencies for the board that agree well with experimental natural frequencies are obtained. For a PCB supported by two opposite wedge retainers and a plug-in connector, with its remaining edge free of any restraint, it is found that these real supports behave somewhere between the simply supported and clamped boundary conditions, providing 39.5% more fixity than the classical simply supported case. By using an eigensensitivity method, the rotational stiffnesses representing the boundary supports of the PCB can be updated effectively, so that the model represents the dynamics of the PCB accurately. The result shows that the percentage error in the fundamental frequency of the PCB finite element model is substantially reduced from 22.3% to 1.3%. The procedure demonstrates the effectiveness of using only the vibration test frequencies as reference data when the mode shapes of the original untuned model are almost identical to the referenced modes/experimental data. When only modal frequencies are used in model improvement, the analysis is very much simplified, and the time taken to obtain the experimental data is substantially reduced because the experimental mode shapes are not required. In addition, this thesis advocates a relatively simple method for determining the support locations that maximise the fundamental frequency of vibrating structures. The technique is simple and does not require any optimisation or sequential search algorithm in the analysis. The key to the procedure is to position the necessary supports so as to eliminate the lower modes of the original configuration. This is accomplished by introducing point supports along the nodal lines of the highest possible mode of the original configuration, so that all the lower modes are eliminated by the new or extra supports. It is also proposed to inspect the average driving point residues along the nodal lines of vibrating plates to find the optimal locations of the supports. Numerical examples are provided to demonstrate the validity of the method. When applied to the PCB supported on three sides by two wedge retainers and a connector, it is found that the single point constraint that yields the maximum fundamental frequency is located at the mid-point of the nodal line, namely node 39. This point support has the effect of increasing the structure's fundamental frequency from 68.4 Hz to 146.9 Hz, or 115% higher.
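As a one-dimensional illustration of the support-placement idea described above, the following sketch discretizes a taut string and then adds a stiff point support at the node of the second mode; the first mode is thereby eliminated and the fundamental frequency roughly doubles. The geometry and stiffness values are illustrative assumptions, not the thesis's PCB model:

```python
# Illustrative 1-D analogue: a taut string discretized by finite differences.
# Placing a stiff point support at the node of mode 2 (the midpoint) suppresses
# mode 1, so the new fundamental jumps to roughly the old second frequency.
import numpy as np

n = 199                              # interior grid points on a unit-length string
h = 1.0 / (n + 1)
main = 2.0 * np.ones(n) / h**2
off = -1.0 * np.ones(n - 1) / h**2
K = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)   # stiffness (unit tension)

def fundamental(Kmat):
    eigvals = np.linalg.eigvalsh(Kmat)     # unit mass matrix, ordinary eigenproblem
    return np.sqrt(eigvals[0]) / (2 * np.pi)

print("original fundamental:", fundamental(K))             # ~0.5 (analytic value 1/2)
K_supported = K.copy()
K_supported[n // 2, n // 2] += 1e6         # stiff point support at the mode-2 node
print("with midpoint support:", fundamental(K_supported))  # ~1.0, roughly doubled
```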
Abstract:
The trend in modal extraction algorithms is to use all the available frequency response function data to obtain global estimates of the natural frequencies, damping ratios and mode shapes. Improvements in transducer and signal processing technology allow the simultaneous measurement of many hundreds of channels of response data. The quantity of data available and the complexity of the extraction algorithms make considerable demands on the available computer power and require a powerful computer or dedicated workstation to perform satisfactorily. An alternative to waiting for faster sequential processors is to implement the algorithm in parallel, for example on a network of Transputers. Parallel architectures are a cost-effective means of increasing computational power, and a larger number of response channels would simply require more processors. This thesis considers how two typical modal extraction algorithms, the Rational Fraction Polynomial method and the Ibrahim Time Domain method, may be implemented on a network of Transputers. The Rational Fraction Polynomial method is a well-known and robust frequency-domain 'curve fitting' algorithm. The Ibrahim Time Domain method is an efficient algorithm that 'curve fits' in the time domain. This thesis reviews the algorithms, considers the problems involved in a parallel implementation, and shows how they were implemented on a real Transputer network.
Abstract:
An essential stage in endocytic coated vesicle recycling is the dissociation of clathrin from the vesicle coat by the molecular chaperone, 70-kDa heat-shock cognate protein (Hsc70), and the J-domain-containing protein, auxilin, in an ATP-dependent process. We present a detailed mechanistic analysis of clathrin disassembly catalyzed by Hsc70 and auxilin, using loss of perpendicular light scattering to monitor the process. We report that a single auxilin per clathrin triskelion is required for maximal rate of disassembly, that ATP is hydrolyzed at the same rate that disassembly occurs, and that three ATP molecules are hydrolyzed per clathrin triskelion released. Stopped-flow measurements revealed a lag phase in which the scattering intensity increased owing to association of Hsc70 with clathrin cages followed by serial rounds of ATP hydrolysis prior to triskelion removal. Global fit of stopped-flow data to several physically plausible mechanisms showed the best fit to a model in which sequential hydrolysis of three separate ATP molecules is required for the eventual release of a triskelion from the clathrin-auxilin cage.
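The following is a minimal sketch of the kind of sequential kinetic scheme described above, in which a cage-bound triskelion passes through three hydrolysis steps before release; the rate constants are illustrative assumptions, not the fitted values from the stopped-flow analysis:

```python
# Minimal sketch of a sequential three-step scheme (C0 -> C1 -> C2 -> released):
# three ATP-hydrolysis steps precede triskelion release, producing a lag before
# release becomes appreciable. Rate constants are illustrative, not fitted values.
import numpy as np
from scipy.integrate import solve_ivp

k = 0.5  # per-step rate constant (s^-1), assumed equal for all three steps

def sequential_model(t, y):
    c0, c1, c2, released = y
    return [-k * c0,
            k * c0 - k * c1,
            k * c1 - k * c2,
            k * c2]

sol = solve_ivp(sequential_model, (0, 20), [1.0, 0.0, 0.0, 0.0], dense_output=True)
t = np.linspace(0, 20, 5)
print(sol.sol(t)[3])  # fraction of triskelia released over time (sigmoidal, with a lag)
```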
Abstract:
Transition P Systems are a parallel and distributed computational model based on the notion of the cellular membrane structure. Each membrane determines a region that encloses a multiset of objects and evolution rules. Transition P Systems evolve through transitions between two consecutive configurations that are determined by the membrane structure and the multisets present inside the membranes. Moreover, transitions between two consecutive configurations are produced by an exhaustive, non-deterministic and parallel application of evolution rules. However, to establish the rules to be applied, the useful, applicable and active rules must first be computed. Hence, the computation of useful evolution rules is critical to the efficiency of the whole evolution process, because it is performed in parallel inside each membrane at every evolution step. This work defines usefulness states through an exhaustive analysis of the P system for every membrane and for every possible configuration of the membrane structure during the computation. Moreover, this analysis can be done statically; membranes therefore only have to check their usefulness states to obtain their set of useful rules during execution.
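As a simplified illustration (not the paper's formal construction of usefulness states), a rule can be treated as useful for the current membrane structure only if every child membrane it targets is actually present; the representation below is an assumption made for the sketch:

```python
# Simplified sketch: a rule inside a membrane is "useful" for the current
# membrane structure if every child membrane it sends objects into is present.
# The rule and membrane representations are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Rule:
    lhs: dict                                     # multiset consumed, e.g. {"a": 2}
    targets: set = field(default_factory=set)     # labels of child membranes targeted

@dataclass
class Membrane:
    label: str
    rules: list
    children: list = field(default_factory=list)

    def useful_rules(self):
        present = {child.label for child in self.children}
        return [r for r in self.rules if r.targets <= present]

inner = Membrane("2", rules=[])
skin = Membrane("1",
                rules=[Rule({"a": 1}, targets={"2"}),    # useful: membrane 2 exists
                       Rule({"b": 1}, targets={"3"})],   # not useful: membrane 3 absent
                children=[inner])
print([r.targets for r in skin.useful_rules()])  # -> [{'2'}]
```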
Abstract:
Given the growing number of wrongful convictions involving faulty eyewitness evidence and the strong reliance by jurors on eyewitness testimony, researchers have sought to develop safeguards to decrease erroneous identifications. While decades of eyewitness research have led to numerous recommendations for the collection of eyewitness evidence, less is known regarding the psychological processes that govern identification responses. The purpose of the current research was to expand the theoretical knowledge of eyewitness identification decisions by exploring two separate memory theories: signal detection theory and dual-process theory. This was accomplished by examining both system and estimator variables in the context of a novel lineup recognition paradigm. Both theories were also examined in conjunction with confidence to determine whether it might add significantly to the understanding of eyewitness memory.
In two separate experiments, both an encoding and a retrieval-based manipulation were chosen to examine the application of theory to eyewitness identification decisions. Dual-process estimates were measured through the use of remember-know judgments (Gardiner & Richardson-Klavehn, 2000). In Experiment 1, the effects of divided attention and lineup presentation format (simultaneous vs. sequential) were examined. In Experiment 2, perceptual distance and lineup response deadline were examined. Overall, the results indicated that discrimination and remember judgments (recollection) were generally affected by variations in encoding quality, while response criterion and know judgments (familiarity) were generally affected by variations in retrieval options. Specifically, as encoding quality improved, discrimination ability and judgments of recollection increased; and as the retrieval task became more difficult there was a shift toward lenient choosing and more reliance on familiarity.
The application of signal detection theory and dual-process theory in the current experiments produced predictable results on both system and estimator variables. These theories were also compared to measures of general confidence, calibration, and diagnosticity. The application of the additional confidence measures in conjunction with signal detection theory and dual-process theory gave a more in-depth explanation than either theory alone. Therefore, the general conclusion is that eyewitness identifications can be understood in a more complete manner by applying theory and examining confidence. Future directions and policy implications are discussed.
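For background on the signal-detection quantities referred to above, the following sketch computes discrimination (d') and response criterion (c) from hit and false-alarm rates under the standard equal-variance model; the rates are illustrative, not the study's data:

```python
# Background sketch of standard equal-variance signal detection measures:
# discrimination (d') and response criterion (c). Rates below are illustrative.
from scipy.stats import norm

def dprime_and_criterion(hit_rate, fa_rate):
    """d' = z(H) - z(FA); c = -(z(H) + z(FA)) / 2."""
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa                 # discrimination ability
    criterion = -0.5 * (z_hit + z_fa)      # response bias (positive = conservative)
    return d_prime, criterion

print(dprime_and_criterion(hit_rate=0.80, fa_rate=0.25))
```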
Abstract:
The successful performance of a hydrological model is usually challenged by the quality of the sensitivity analysis, calibration and uncertainty analysis carried out in the modeling exercise and the subsequent simulation results. This is especially important under changing climatic conditions, where additional uncertainties associated with climate models and downscaling processes increase the complexity of the hydrological modeling system. In response to these challenges and to improve the performance of hydrological models under changing climatic conditions, this research proposed five new methods for supporting hydrological modeling. First, a design-of-experiment-aided sensitivity analysis and parameterization (DOE-SAP) method was proposed to investigate the significant parameters and provide more reliable sensitivity analysis for improving parameterization during hydrological modeling. Better calibration results, along with advanced sensitivity analysis of the significant parameters and their interactions, were achieved in the case study. Second, a comprehensive uncertainty evaluation scheme was developed to evaluate three uncertainty analysis methods: sequential uncertainty fitting version 2 (SUFI-2), generalized likelihood uncertainty estimation (GLUE) and parameter solution (ParaSol). The results showed that SUFI-2 performed better than the other two methods based on calibration and uncertainty analysis results, and the proposed evaluation scheme demonstrated that it is capable of selecting the most suitable uncertainty method for a case study. Third, a novel sequential multi-criteria based calibration and uncertainty analysis (SMC-CUA) method was proposed to improve the efficiency of calibration and uncertainty analysis and to control the phenomenon of equifinality. The results showed that the SMC-CUA method provided better uncertainty analysis results with higher computational efficiency than the SUFI-2 and GLUE methods, and controlled parameter uncertainty and the equifinality effect without sacrificing simulation performance. Fourth, an innovative response-based statistical evaluation method (RESEM) was proposed for estimating uncertainty propagation effects and providing long-term predictions of hydrological responses under changing climatic conditions. Using RESEM, the uncertainty propagated from statistical downscaling to hydrological modeling can be evaluated. Fifth, an integrated simulation-based evaluation system for uncertainty propagation analysis (ISES-UPA) was proposed for investigating the effects and contributions of different uncertainty components to the total propagated uncertainty from statistical downscaling. Using ISES-UPA, the uncertainty from statistical downscaling, the uncertainty from hydrological modeling, and the total uncertainty from the two uncertainty sources can be compared and quantified. The feasibility of all the methods has been tested using hypothetical and real-world case studies. The proposed methods can also be integrated into a hydrological modeling system to better support hydrological studies under changing climatic conditions. The results from the proposed integrated hydrological modeling system can be used as scientific references for decision makers to reduce the potential risk of damage caused by extreme events in long-term water resource management and planning.
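As a minimal sketch of the GLUE idea referenced above (not the thesis's implementation), the following samples parameter sets for a toy model, scores them with Nash-Sutcliffe efficiency, and keeps the behavioural sets above a threshold to form prediction bounds; the model, parameter ranges and threshold are illustrative assumptions:

```python
# Minimal GLUE sketch: sample many parameter sets, score each with a likelihood
# measure (Nash-Sutcliffe efficiency here), keep the "behavioural" sets above a
# threshold, and summarize their simulations as prediction bounds. The toy model,
# threshold and parameter ranges are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(50)
observed = 3.0 * np.exp(-0.1 * t) + rng.normal(0, 0.1, t.size)  # synthetic "flows"

def toy_model(a, k):
    return a * np.exp(-k * t)

def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

samples = rng.uniform([1.0, 0.01], [5.0, 0.5], size=(5000, 2))   # (a, k) draws
scores = np.array([nse(toy_model(a, k), observed) for a, k in samples])

behavioural = samples[scores > 0.7]          # threshold is a subjective GLUE choice
sims = np.array([toy_model(a, k) for a, k in behavioural])
lower, upper = np.percentile(sims, [5, 95], axis=0)
print(f"{len(behavioural)} behavioural sets; 90% bounds at t=0: [{lower[0]:.2f}, {upper[0]:.2f}]")
```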