997 results for phase-error
Abstract:
ABSTRACT: The present work uses multivariate statistical analysis to establish the main sources of error in Quantitative Phase Analysis (QPA) using the Rietveld method. The quantitative determination of crystalline phases by X-ray powder diffraction is a complex measurement process whose results are influenced by several factors. Ternary mixtures of Al2O3, MgO and NiO were prepared under controlled conditions and the diffraction patterns were collected in the Bragg-Brentano geometry. Four critical sources of variation were identified: the experimental absorption and the scale factor of NiO, the phase with the greatest linear absorption coefficient in the ternary mixture; the instrumental characteristics, represented by mechanical errors of the goniometer and sample displacement; the other two phases (Al2O3 and MgO); and the temperature and relative humidity of the air in the laboratory. These error sources severely impair QPA with the Rietveld method and must therefore be controlled during the measurement procedure.
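For context, Rietveld-based QPA typically converts refined scale factors into weight fractions through the standard scale-factor relation (a general formula, not specific to this study):

\[
W_p = \frac{S_p\,(ZMV)_p}{\sum_i S_i\,(ZMV)_i},
\]

where \(S_i\) is the refined scale factor, \(Z_i\) the number of formula units per unit cell, \(M_i\) the mass of the formula unit and \(V_i\) the unit-cell volume of phase \(i\). An error in the NiO scale factor, for example one induced by absorption effects, therefore propagates directly into all three weight fractions.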
Abstract:
In this paper, we perform a thorough analysis of a spectral phase-encoded time spreading optical code division multiple access (SPECTS-OCDMA) system based on Walsh-Hadamard (W-H) codes, aiming not only at finding optimal code-set selections but also at assessing the loss of security due to crosstalk. We prove that an inadequate choice of codes can make the crosstalk between active users large enough for the data of the user of interest to be detected by another user. The proposed algorithm for code optimization targets code sets that produce the minimum bit error rate (BER) among all codes for a specific number of simultaneous users. This methodology allows us to find optimal code sets for any OCDMA system, regardless of the code family used and the number of active users. This procedure is crucial for circumventing the unexpected lack of security due to crosstalk. We also show that a SPECTS-OCDMA system based on W-H 32 (64) fundamentally limits the number of simultaneous users to 4 (8) with no security violation due to crosstalk. More importantly, we prove that only a small fraction of the available code sets is actually immune to crosstalk with acceptable BER (< 10^-9): approximately 0.5% for W-H 32 with four simultaneous users, and about 1 × 10^-4 % for W-H 64 with eight simultaneous users.
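As an illustration of the code-set search idea, the sketch below ranks W-H code subsets by a simple crosstalk proxy, namely the peak residual waveform obtained when one user's spectrum is decoded with another user's phase code. This proxy and all names are illustrative assumptions, not the paper's BER-based criterion.

from itertools import combinations
import numpy as np

def hadamard(n):
    # Sylvester construction of the n x n Walsh-Hadamard matrix (n a power of two)
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def crosstalk_peak(ci, cj):
    # peak residual amplitude when a spectrum encoded with cj is decoded with ci
    return np.abs(np.fft.ifft(ci * cj)).max()

def worst_pairwise_crosstalk(codes):
    return max(crosstalk_peak(a, b) for a, b in combinations(codes, 2))

H = hadamard(32)                               # W-H 32 family of +/-1 spectral phase codes
candidates = combinations(range(1, 32), 4)     # 4 simultaneous users; row 0 (all ones) excluded
best = min(candidates, key=lambda rows: worst_pairwise_crosstalk(H[list(rows)]))
print("least-crosstalk code set (row indices):", best)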
Abstract:
Experimental two-phase frictional pressure drop and flow boiling heat transfer results are presented for a horizontal 2.32-mm-ID stainless-steel tube using R245fa as the working fluid. The frictional pressure drop data were obtained under adiabatic and diabatic conditions. Experiments were performed for mass velocities ranging from 100 to 700 kg m⁻² s⁻¹, heat fluxes from 0 to 55 kW m⁻², exit saturation temperatures of 31 and 41 °C, and vapor qualities from 0.10 to 0.99. Pressure drop gradients from 1 to 70 kPa m⁻¹ and heat transfer coefficients from 1 to 7 kW m⁻² K⁻¹ were measured. The heat transfer coefficient was found to be a strong function of heat flux, mass velocity and vapor quality. Five frictional pressure drop predictive methods were compared against the experimental database; the Cioncolini et al. (2009) method performed best. Six flow boiling heat transfer predictive methods were also compared against the present database. Liu and Winterton (1991), Zhang et al. (2004) and Saitoh et al. (2007) were ranked as the best methods, predicting the experimental flow boiling heat transfer data with an average error of around 19%.
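A minimal sketch of how such predictive methods might be ranked against a database, assuming the reported "average error" is a mean absolute percentage deviation; the arrays and values below are placeholders, not the study's data.

import numpy as np

def mape(predicted, measured):
    # mean absolute percentage error between predicted and measured values
    predicted, measured = np.asarray(predicted, dtype=float), np.asarray(measured, dtype=float)
    return 100.0 * np.mean(np.abs(predicted - measured) / measured)

h_measured = np.array([2.1, 3.4, 5.0, 6.2])      # measured heat transfer coefficients, kW m^-2 K^-1 (placeholders)
predictions = {
    "Liu-Winterton (1991)": np.array([2.4, 3.1, 5.6, 5.5]),
    "Zhang et al. (2004)":  np.array([1.9, 3.9, 4.6, 6.9]),
}
for name in sorted(predictions, key=lambda k: mape(predictions[k], h_measured)):
    print(f"{name}: average error = {mape(predictions[name], h_measured):.1f}%")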
Abstract:
In technical design processes in the automotive industry, digital prototypes rapidly gain importance because they allow design errors to be detected in early development stages. The technical design process includes the computation of swept volumes for maintainability analysis and clearance checks. The swept volume is very useful, for example, to identify problem areas where a safety distance might not be kept. With an explicit construction of the swept volume, an engineer obtains evidence on how the shape of components that come too close has to be modified.
In this thesis, a concept for approximating the outer boundary of a swept volume is developed. For safety reasons, it is essential that the approximation is conservative, i.e., that the swept volume is completely enclosed by the approximation. On the other hand, one wishes to approximate the swept volume as precisely as possible. We show that the one-sided Hausdorff distance is the adequate error measure for the approximation when the intended usage is clearance checks, continuous collision detection and maintainability analysis in CAD. We present two implementations that apply the concept and generate a manifold triangle mesh approximating the outer boundary of a swept volume. Both algorithms have two phases: a sweeping phase, which generates a conservative voxelization of the swept volume, and the actual mesh generation, which is based on restricted Delaunay refinement. This approach ensures a high precision of the approximation while respecting conservativeness.
The benchmarks for our tests include, among others, real-world scenarios from the automotive industry.
Further, we introduce a method to relate parts of an already computed swept volume boundary to those triangles of the generator that come closest during the sweep. We use this to verify as well as to colorize meshes resulting from our implementations.
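For illustration, the error measure named above can be estimated on point samples of the two boundaries; a minimal sketch using a k-d tree, with synthetic placeholder point sets.

import numpy as np
from scipy.spatial import cKDTree

def one_sided_hausdorff(samples_a, samples_b):
    # sup over points of A of the distance to the nearest point of B
    distances, _ = cKDTree(samples_b).query(samples_a)
    return float(distances.max())

rng = np.random.default_rng(0)
exact_boundary = rng.random((5000, 3))                           # samples on the exact swept volume boundary (placeholder)
approx_boundary = exact_boundary + 0.01 * rng.random((5000, 3))  # slightly inflated conservative approximation (placeholder)
print("approximation error:", one_sided_hausdorff(approx_boundary, exact_boundary))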
Abstract:
Percutaneous needle intervention based on PET/CT images is effective, but exposes the patient to unnecessary radiation due to the increased number of CT scans required. Computer assisted intervention can reduce the number of scans, but requires handling, matching and visualization of two different datasets. While one dataset is used for target definition according to metabolism, the other is used for instrument guidance according to anatomical structures. No navigation systems capable of handling such data and performing PET/CT image-based procedures while following clinically approved protocols for oncologic percutaneous interventions are available. The need for such systems is emphasized in scenarios where the target can be located in different types of tissue such as bone and soft tissue. These two tissues require different clinical protocols for puncturing and may therefore give rise to different problems during the navigated intervention. Studies comparing the performance of navigated needle interventions targeting lesions located in these two types of tissue are not often found in the literature. Hence, this paper presents an optical navigation system for percutaneous needle interventions based on PET/CT images. The system provides viewers for guiding the physician to the target with real-time visualization of PET/CT datasets, and is able to handle targets located in both bone and soft tissue. The navigation system and the required clinical workflow were designed taking into consideration clinical protocols and requirements, and the system is thus operable by a single person, even during transition to the sterile phase. Both the system and the workflow were evaluated in an initial set of experiments simulating 41 lesions (23 located in bone tissue and 18 in soft tissue) in swine cadavers. We also measured and decomposed the overall system error into distinct error sources, which allowed for the identification of particularities involved in the process as well as highlighting the differences between bone and soft tissue punctures. An overall average error of 4.23 mm and 3.07 mm for bone and soft tissue punctures, respectively, demonstrated the feasibility of using this system for such interventions. The proposed system workflow was shown to be effective in separating the preparation from the sterile phase, as well as in keeping the system manageable by a single operator. Among the distinct sources of error, the user error based on the system accuracy (defined as the distance from the planned target to the actual needle tip) appeared to be the most significant. Bone punctures showed higher user error, whereas soft tissue punctures showed higher tissue deformation error.
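A small sketch of the per-puncture error computation described above, assuming the user error is the Euclidean distance from the planned target to the final needle tip, averaged separately over bone and soft-tissue punctures; the coordinates are placeholders, not the study data.

import numpy as np

def puncture_error(planned_target, needle_tip):
    # Euclidean distance (mm) from the planned target to the final needle tip
    return float(np.linalg.norm(np.asarray(planned_target) - np.asarray(needle_tip)))

punctures = [                                        # (tissue, planned target, needle tip) -- placeholder values
    ("bone", (10.0, 22.0, 5.0), (12.5, 24.0, 6.5)),
    ("soft", (40.0, 18.0, 9.0), (41.0, 19.5, 10.0)),
]
for tissue in ("bone", "soft"):
    errors = [puncture_error(t, tip) for kind, t, tip in punctures if kind == tissue]
    print(f"{tissue} punctures: mean user error = {np.mean(errors):.2f} mm")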
Abstract:
PURPOSE: We assessed the safety of the multikinase inhibitor regorafenib in patients with hepatocellular carcinoma (HCC) that had progressed following first-line sorafenib. PATIENTS AND METHODS: Thirty-six patients with Barcelona Clinic Liver Cancer stage B or C HCC and preserved to mildly impaired liver function (Child-Pugh class A) received regorafenib 160 mg once daily in cycles of 3 weeks on/1 week off treatment until disease progression, unacceptable toxicity, death or patient/physician decision to discontinue. The primary end-point was safety; secondary end-points included efficacy (including time to progression and overall survival). RESULTS: The median treatment duration was 19.5 weeks (range 2-103). At data cutoff, three patients remained on treatment. Reasons for discontinuation were adverse events (n=20), disease progression (n=10), consent withdrawal (n=2) and death (n=1). Seventeen patients required dose reductions (mostly for adverse events [n=15]); 35 patients had treatment interruption (mostly for adverse events [n=32] or patient error [n=11]). The most frequent treatment-related adverse events were hand-foot skin reaction (any grade n=19; grade ≥3 n=5), diarrhoea (n=19; n=2), fatigue (n=19; n=6), hypothyroidism (n=15; n=0), anorexia (n=13; n=0), hypertension (n=13; n=1), nausea (n=12; n=0) and voice changes (n=10; n=0). Disease control was achieved in 26 patients (partial response n=1; stable disease n=25). Median time to progression was 4.3 months. Median overall survival was 13.8 months. CONCLUSION: Regorafenib had acceptable tolerability and evidence of antitumour activity in patients with intermediate or advanced HCC that progressed following first-line sorafenib.
Abstract:
There are two practical challenges in phase I clinical trial conduct: lack of transparency to physicians and late-onset toxicity. In this dissertation, Bayesian approaches are used to address these two problems in clinical trial designs. The proposed simple optimal designs cast the dose-finding problem as a decision-making process for dose escalation and de-escalation, and they minimize the incorrect decision error rate in finding the maximum tolerated dose (MTD). For the late-onset toxicity problem, a Bayesian adaptive dose-finding design for drug combinations is proposed. The dose-toxicity relationship is modeled using the Finney model. The unobserved delayed toxicity outcomes are treated as missing data, and Bayesian data augmentation is employed to handle the resulting missing data. Extensive simulation studies have been conducted to examine the operating characteristics of the proposed designs and demonstrate their good performance in various practical scenarios.
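To illustrate the decision-process view of dose finding, here is a hedged sketch only: the prior, thresholds and interval rule below are illustrative assumptions, not the dissertation's exact design.

from scipy.stats import beta

def dose_decision(n_toxic, n_treated, target=0.30, prior=(1.0, 1.0)):
    # Beta-Binomial posterior for the toxicity rate at the current dose
    a = prior[0] + n_toxic
    b = prior[1] + n_treated - n_toxic
    p_over_target = 1.0 - beta.cdf(target, a, b)      # posterior P(toxicity rate > target)
    if p_over_target > 0.80:                          # dose likely too toxic
        return "de-escalate"
    if p_over_target < 0.20:                          # dose likely safe
        return "escalate"
    return "stay"

print(dose_decision(n_toxic=1, n_treated=6))          # e.g. 1/6 toxicities observed at the current dose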
Abstract:
Background: For most cytotoxic and biologic anti-cancer agents, the response rate of the drug is commonly assumed to be non-decreasing with increasing dose. However, an increasing dose does not always result in an appreciable increase in the response rate; this may especially be true at high doses for a biologic agent. Therefore, in a phase II trial the investigators may be interested in testing the anti-tumor activity of a drug at more than one dose (often two), instead of only at the maximum tolerated dose (MTD). This way, when the lower dose appears equally effective, that dose can be recommended for further confirmatory testing in a phase III trial under potential long-term toxicity and cost considerations. A common approach to designing such a phase II trial has been to use an independent (e.g., Simon's two-stage) design at each dose, ignoring the prior knowledge about the ordering of the response probabilities at the different doses. However, failure to account for this ordering constraint when estimating the response probabilities may result in an inefficient design. In this dissertation, we developed extensions of Simon's optimal and minimax two-stage designs, including both frequentist and Bayesian methods, for two doses with ordered response rates. Methods: Optimal and minimax two-stage designs are proposed for phase II clinical trials in settings where the true response rates at two dose levels are ordered. We borrow strength between doses using isotonic regression and control the joint and/or marginal error probabilities. Bayesian two-stage designs are also proposed under a stochastic ordering constraint. Results: Compared to Simon's designs, when controlling the power and type I error at the same levels, the proposed frequentist and Bayesian designs reduce the maximum and expected sample sizes. Most of the proposed designs also increase the probability of early termination when the true response rates are poor. Conclusion: The proposed frequentist and Bayesian designs are superior to Simon's designs in terms of operating characteristics (expected sample size and probability of early termination when the response rates are poor). Thus, the proposed designs lead to more cost-efficient and ethical trials, and may consequently improve and expedite the drug discovery process. The proposed designs may be extended to multiple-group trials and drug-combination trials.
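For reference, the single-dose Simon two-stage design that these proposals extend has operating characteristics that follow directly from binomial probabilities; a minimal sketch, with example design parameters rather than any design from the dissertation.

from scipy.stats import binom

def simon_oc(p, r1, n1, r, n):
    # Operating characteristics of a single-dose Simon two-stage design:
    # stop after stage 1 if responses <= r1; otherwise accrue to n and
    # declare the drug not promising if total responses <= r.
    pet = binom.cdf(r1, n1, p)                                   # probability of early termination
    reject = pet + sum(binom.pmf(x1, n1, p) * binom.cdf(r - x1, n - n1, p)
                       for x1 in range(r1 + 1, min(n1, r) + 1))  # total P(reject the drug)
    expected_n = n1 + (1.0 - pet) * (n - n1)
    return pet, reject, expected_n

for p in (0.10, 0.30):                                           # e.g. null and alternative response rates
    pet, reject, en = simon_oc(p, r1=1, n1=10, r=5, n=29)        # example design parameters
    print(f"p = {p:.2f}: PET = {pet:.3f}, P(reject drug) = {reject:.3f}, E[N] = {en:.1f}")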
Abstract:
RMS voltage regulation may be an attractive possibility for controlling power inverters. Combined with a Hall-effect sensor for current control, it keeps its parallel operation capability while increasing its noise immunity, which may lead to a reduction of the Total Harmonic Distortion (THD). Moreover, since the regulated quantity is a DC signal, a simple PI regulator can provide accurate voltage tracking. Nevertheless, this approach is not without drawbacks: its narrow voltage bandwidth makes transients last longer, and it increases the voltage THD when feeding non-linear loads such as rectifying stages. In addition, the implementation is prone to offset voltage error, and the output voltage phase information is hidden from the control, making the synchronization of a 3-phase setup non-trivial. This paper explains the concept, design and implementation of the whole control scheme in an on-board inverter able to run in parallel and within a 3-phase setup. Special attention is paid to solving the problems foreseen at the implementation level: a third analog loop is added to account for the offset level, and a digital algorithm guarantees 3-phase voltage synchronization.
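A hedged sketch of the DC-side regulation idea: because the controlled quantity is the rms value of the output voltage, a plain discrete PI regulator suffices. The gains, limits and the crude first-order plant below are illustrative assumptions, not the paper's implementation.

class PIRegulator:
    # discrete PI regulator acting on the rms voltage error (a DC quantity)
    def __init__(self, kp, ki, ts, u_min, u_max):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0

    def update(self, v_ref_rms, v_meas_rms):
        error = v_ref_rms - v_meas_rms
        self.integral += self.ki * error * self.ts
        self.integral = min(self.u_max, max(self.u_min, self.integral))   # anti-windup clamp
        u = self.kp * error + self.integral
        return min(self.u_max, max(self.u_min, u))                        # modulation amplitude command

pi = PIRegulator(kp=0.002, ki=0.2, ts=0.01, u_min=0.0, u_max=1.0)
v_rms = 0.0
for _ in range(400):                          # crude first-order placeholder plant: rms voltage -> 400 V * u
    u = pi.update(v_ref_rms=230.0, v_meas_rms=v_rms)
    v_rms += 0.2 * (400.0 * u - v_rms)
print(f"settled rms voltage: {v_rms:.1f} V")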
Abstract:
Several microbial systems have been shown to yield advantageous mutations in slowly growing or nongrowing cultures. In one assay system, the stationary-phase mutation mechanism differs from growth-dependent mutation, demonstrating that the two are different processes. This system assays reversion of a lac frameshift allele on an F′ plasmid in Escherichia coli. The stationary-phase mutation mechanism at lac requires recombination proteins of the RecBCD double-strand-break repair system and the inducible error-prone DNA polymerase IV, and the mutations are mostly −1 deletions in small mononucleotide repeats. This mutation mechanism is proposed to occur by DNA polymerase errors made during replication primed by recombinational double-strand-break repair. It has been suggested that this mechanism is confined to the F plasmid. However, the cells that acquire the adaptive mutations show hypermutation of unrelated chromosomal genes, suggesting that chromosomal sites also might experience recombination protein-dependent stationary-phase mutation. Here we test directly whether the stationary-phase mutations in the bacterial chromosome also occur via a recombination protein- and pol IV-dependent mechanism. We describe an assay for chromosomal mutation in cells carrying the F′ lac. We show that the chromosomal mutation is recombination protein- and pol IV-dependent and also is associated with general hypermutation. The data indicate that, at least in these male cells, recombination protein-dependent stationary-phase mutation is a mechanism of general inducible genetic change capable of affecting genes in the bacterial chromosome.
Abstract:
Error rates of a Boolean perceptron with a threshold and either a spherical or an Ising constraint on the weight vector are calculated, for patterns drawn from biased input and output distributions, within a one-step replica symmetry breaking (RSB) treatment. For an unbiased output distribution and non-zero stability of the patterns, we find a critical load, α_p, above which two solutions to the saddle-point equations appear: one with higher free energy and zero threshold, and a dominant solution with non-zero threshold. We examine this second-order phase transition and the dependence of α_p on the required pattern stability, κ, for both one-step RSB and replica symmetry (RS) in the spherical case, and for one-step RSB in the Ising case.
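For reference, the pattern stability entering such calculations is the usual Gardner-type measure (a standard definition, not specific to this paper): pattern μ is stored with stability at least κ when

\[
\frac{\sigma^{\mu}\,\bigl(\mathbf{w}\cdot\boldsymbol{\xi}^{\mu}-\theta\bigr)}{\lVert\mathbf{w}\rVert} \;\ge\; \kappa ,
\]

where \(\boldsymbol{\xi}^{\mu}\) is the input pattern, \(\sigma^{\mu}=\pm 1\) the required output, \(\mathbf{w}\) the weight vector and \(\theta\) the threshold; the load is \(\alpha = p/N\) for \(p\) patterns and \(N\) weights.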
Abstract:
An exact solution to a family of parity check error-correcting codes is provided by mapping the problem onto a Husimi cactus. The solution obtained in the thermodynamic limit recovers the replica-symmetric theory results and provides a very good approximation to finite systems of moderate size. The probability propagation decoding algorithm emerges naturally from the analysis. A phase transition between decoding success and failure phases is found to coincide with an information-theoretic upper bound. The method is employed to compare Gallager and MN codes.
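For reference, the probability propagation (sum-product) updates for a binary parity-check code, written in log-likelihood-ratio form, are the standard algorithm the analysis recovers (not the cactus-specific derivation):

\[
m_{c\to v} \;=\; 2\,\tanh^{-1}\!\Bigl(\prod_{v'\in\mathcal{N}(c)\setminus v}\tanh\frac{m_{v'\to c}}{2}\Bigr),
\qquad
m_{v\to c} \;=\; h_v \;+\; \sum_{c'\in\mathcal{N}(v)\setminus c} m_{c'\to v},
\]

where \(h_v\) is the channel log-likelihood ratio of bit \(v\) and \(\mathcal{N}(\cdot)\) denotes graph neighbourhoods; the posterior ratio of each bit is \(h_v\) plus the sum of all incoming check messages.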
Abstract:
A variation of low-density parity check (LDPC) error-correcting codes defined over Galois fields (GF(q)) is investigated using statistical physics. A code of this type is characterised by a sparse random parity check matrix composed of C non-zero elements per column. We examine the dependence of the code performance on the value of q, for finite and infinite C values, both in terms of the thermodynamical transition point and the practical decoding phase characterised by the existence of a unique (ferromagnetic) solution. We find different q-dependence in the cases of C = 2 and C ≥ 3; the analytical solutions are in agreement with simulation results, providing a quantitative measure to the improvement in performance obtained using non-binary alphabets.
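A small sketch of the code family described above: a sparse random parity-check matrix over GF(q) with exactly C non-zero entries per column. Row weights are left unconstrained here, which is one possible convention and not necessarily the authors'.

import numpy as np

def random_ldpc_gfq(n_rows, n_cols, C, q, seed=None):
    # sparse random parity-check matrix over GF(q) with exactly C non-zero entries per column
    rng = np.random.default_rng(seed)
    H = np.zeros((n_rows, n_cols), dtype=int)
    for col in range(n_cols):
        rows = rng.choice(n_rows, size=C, replace=False)   # C distinct parity checks for this symbol
        H[rows, col] = rng.integers(1, q, size=C)          # non-zero coefficients from GF(q)
    return H

H = random_ldpc_gfq(n_rows=6, n_cols=12, C=3, q=8, seed=0)
print(H)
print("non-zeros per column:", (H != 0).sum(axis=0))       # each entry should equal C = 3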
Abstract:
We analyse Gallager codes by employing a simple mean-field approximation that distorts the model geometry while preserving important interactions between sites. The method naturally recovers the probability propagation decoding algorithm as the minimization of a proper free energy. We find a thermodynamic phase transition that coincides with information-theoretic upper bounds and explain the practical code performance in terms of the free-energy landscape.