965 results for "The bilinear method"
Abstract:
Biological markers for the status of vitamins B12 and D: the importance of some analytical aspects in relation to clinical interpretation of results. When vitamin B12 deficiency is expressed clinically, the diagnostic performance of total cobalamin is identical to that of holotranscobalamin II. In subclinical B12 deficiency, the two aforementioned markers perform less well, and additional analysis of a second, functional marker (methylmalonate or homocysteine) is recommended. The different analytical approaches for quantifying 25-hydroxyvitamin D, the marker of vitamin D deficiency, are not yet standardized; measurement biases of up to ±20% compared with the original method used to establish threshold values are still observed.
Abstract:
This thesis presents three empirical studies in the field of health insurance in Switzerland. First, we investigate the link between health insurance coverage and health care expenditures. We use claims data for over 60 000 adult individuals covered by a major Swiss Health Insurance Fund, followed for four years; the data show a strong positive correlation between coverage and expenditures. Two methods are developed and estimated in order to separate selection effects (due to individual choice of coverage) from incentive effects ("ex post moral hazard"). The first method uses the comparison between inpatient and outpatient expenditures to identify both effects, and we conclude that both selection and incentive effects are significantly present in our data. The second method is based on a structural model of the joint demand for health care and health insurance and exploits the change in the marginal cost of health care to identify selection and incentive effects. We conclude that the correlation between insurance coverage and health care expenditures may be decomposed into the two effects: 75% may be attributed to selection and 25% to incentive effects. Moreover, we estimate that a decrease in the coinsurance rate from 100% to 10% increases the marginal demand for health care by about 90%, and from 100% to 0% by about 150%. Secondly, having shown that selection and incentive effects exist in the Swiss health insurance market, we present the consequences of this result in the context of risk adjustment. We show that if individuals choose their insurance coverage as a function of their health status (selection effect), the optimal compensations should be a function of the selection and incentive effects. Therefore, a risk-adjustment mechanism that ignores these effects, as is currently the case in Switzerland, will miss its main goal of eliminating incentives for sickness funds to select risks. Using a simplified model, we show that, in the case of self-selection, the optimal compensations have to take into account the distribution of risks across the insurance plans in order to avoid incentives to select risks. We then apply our propositions to Swiss data and propose a simple econometric procedure to control for self-selection in the estimation of the risk-adjustment formula in order to compute the optimal compensations.
Abstract:
The least limiting water range (LLWR) has been used as an indicator of soil physical quality, as it represents, in a single parameter, the soil physical properties directly linked to plant growth, with the exception of temperature. The usual procedure for obtaining the LLWR involves determination of the water retention curve (WRC) and the soil resistance to penetration curve (SRC) in soil samples with undisturbed structure in the laboratory. Determination of the WRC and SRC using field measurements (in situ) is preferable, but requires appropriate instrumentation. The objective of this study was to determine the LLWR from the data collected for determination of the WRC and SRC in situ using portable electronic instruments, and to compare those determinations with the ones made in the laboratory. Samples were taken from the 0.0-0.1 m layer of a Latossolo Vermelho distrófico (Oxisol). Two methods were used for quantification of the LLWR: the traditional one, with measurements made in soil samples with undisturbed structure; and the in situ one, with measurements of water content (θ), soil water potential (Ψ), and soil resistance to penetration (SR) through the use of sensors. The in situ measurements of θ, Ψ and SR were taken over a period of four days of soil drying. At the same time, samples with undisturbed structure were collected for determination of bulk density (BD). Due to the limitations of measurement of Ψ by tensiometer, additional determinations of θ were made with a psychrometer (in the laboratory) at a Ψ of -1500 kPa. The results show that it is possible to determine the LLWR from the θ, Ψ and SR measurements using the suggested approach and instrumentation. The quality of fit of the SRC was similar in both strategies. In contrast, the θ and Ψ in situ measurements, associated with those measured with a psychrometer, produced a better WRC description. The estimates of the LLWR were similar in both methodological strategies. The quantification of the LLWR in situ can be achieved in 10% of the time required for the traditional method.
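As a rough illustration of how an LLWR value is obtained once the two curves are fitted, the sketch below combines a Silva-type water retention model with a Busscher-type penetration resistance model; the functional forms, coefficients, and critical limits (10 and 1500 kPa, 10% air-filled porosity, 2 MPa) are common assumptions used here for illustration only, not the parameters fitted in this study.

```python
import numpy as np

# Illustrative LLWR computation (not the authors' code).
# Assumed models: Silva-type WRC  theta = exp(a + b*BD) * psi**c
# and Busscher-type SRC  SR = d * theta**e * BD**f, with hypothetical coefficients.
def llwr(bd, a=-1.0, b=-0.5, c=-0.15, d=0.03, e=-1.6, f=3.0):
    theta_fc = np.exp(a + b * bd) * 10.0 ** c       # water content at field capacity (10 kPa)
    theta_wp = np.exp(a + b * bd) * 1500.0 ** c     # water content at wilting point (1500 kPa)
    theta_afp = 1.0 - bd / 2.65 - 0.10              # limit for 10 % air-filled porosity
    theta_sr = (2.0 / (d * bd ** f)) ** (1.0 / e)   # water content at SR = 2 MPa
    upper = min(theta_fc, theta_afp)
    lower = max(theta_wp, theta_sr)
    return max(upper - lower, 0.0)

print(llwr(bd=1.3))  # LLWR (m3 m-3) for a hypothetical bulk density
```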
Abstract:
Odorant receptor (OR) genes constitute, with 1200 members, the largest gene family in the mouse genome. A mature olfactory sensory neuron (OSN) is thought to express just one OR gene, and from one allele. The cell bodies of OSNs that express a given OR gene display a mosaic pattern within a particular region of the main olfactory epithelium. The mechanisms and cis-acting DNA elements that regulate the expression of one OR gene per OSN - OR gene choice - remain poorly understood. Here, we describe a reporter assay to identify minimal promoters for OR genes in transgenic mice, which are produced by the conventional method of pronuclear injection of DNA. The promoter transgenes are devoid of an OR coding sequence and instead drive expression of the axonal marker tau-β-galactosidase. For four mouse OR genes (M71, M72, MOR23, and P3) and one human OR gene (hM72), a mosaic, OSN-specific pattern of reporter expression can be obtained in transgenic mice with contiguous DNA segments of only ~300 bp that are centered around the transcription start site (TSS). The ~150 bp region upstream of the TSS contains three conserved sequence motifs, including homeodomain (HD) binding sites. Such HD binding sites are also present in the H and P elements, DNA sequences that are known to strongly influence OR gene expression. When a 19mer encompassing an HD binding site from the P element is multimerized nine times and added upstream of a MOR23 minigene that contains the MOR23 coding region, we observe a dramatic increase in the number of transgene-expressing founders and lines and in the number of labeled OSNs. By contrast, a nine-times multimerized 19mer with a mutant HD binding site does not have these effects. We hypothesize that HD binding sites in the H and P elements and in OR promoters modulate the probability of OR gene choice.
Abstract:
The Organization of the Thesis. The remainder of the thesis comprises five chapters and a conclusion. The next chapter formalizes the envisioned theory into a tractable model. Section 2.2 presents a formal description of the model economy: the individual heterogeneity, the individual objective, the UI setting, the population dynamics, and the equilibrium. The welfare and efficiency criteria for qualifying various equilibrium outcomes are proposed in section 2.3. The fourth section shows how the model-generated information can be computed. Chapter 3 transposes the model from chapter 2 into conditions that enable its use in the analysis of individual labor market strategies and their implications for the labor market equilibrium. In section 3.2 the Swiss labor market data sets, stylized facts, and the UI system are presented. The third section outlines and motivates the parameterization method. In section 3.4 the model's replication ability is evaluated and some aspects of the parameter choice are discussed. Numerical solution issues can be found in the appendix. Chapter 4 examines the determinants of search-strategic behavior in the model economy and its implications for the labor market aggregates. In section 4.2, the unemployment duration distribution is examined and related to search strategies. Section 4.3 shows how the search-strategic behavior is influenced by UI eligibility, and section 4.4 how it is determined by individual heterogeneity. The composition effects generated by search strategies in labor market aggregates are examined in section 4.5. The last section evaluates the model's replication of the empirical unemployment escape frequencies reported in Sheldon [67]. Chapter 5 applies the model economy to examine the effects on the labor market equilibrium of shocks to the labor market risk structure, to the deep underlying labor market structure, and to the UI setting. Section 5.2 examines the effects of the labor market risk structure on the labor market equilibrium and on labor market strategic behavior. The effects of alterations in the deep economic structural parameters of the labor market, i.e. individual preferences and production technology, are shown in section 5.3. Finally, the impacts of the UI setting on the labor market are studied in section 5.4. This section also evaluates the role of UI authority monitoring and the differences in the way changes in the replacement rate and in the UI benefit duration affect the labor market. In chapter 6 the model economy is applied in counterfactual experiments to assess several aspects of the Swiss labor market movements in the nineties. Section 6.2 examines the two equilibria characterizing the Swiss labor market in the nineties: the "growth" equilibrium with a "moderate" UI regime and the "recession" equilibrium with a more "generous" UI. Section 6.3 evaluates the isolated effects of the structural shocks, while the isolated effects of the UI reforms are analyzed in section 6.4. Particular dimensions of the UI reforms, namely the duration, replacement rate, and tax rate effects, are studied in section 6.5, while labor market equilibria without benefits are evaluated in section 6.6. In section 6.7 the structural and institutional interactions that may act as unemployment amplifiers are discussed in view of the obtained results. A welfare analysis based on individual welfare in different structural and UI settings is presented in the eighth section. Finally, the results are related to the more favorable unemployment trends after 1997.
The conclusion evaluates the features embodied in the model economy with respect to the resulting model dynamics, in order to derive lessons from the model design. The thesis ends by proposing guidelines for future improvements of the model and directions for further research.
Abstract:
OBJECTIVE: The purpose of the present study was to submit the same materials that were tested in the round robin wear test of 2002/2003 to the Alabama wear method. METHODS: Nine restorative materials were submitted to the Alabama wear method for localized and generalized wear: seven composites (belleGlass, Chromasit, Estenia, Heliomolar, SureFil, Targis, Tetric Ceram), an amalgam (Amalcap), and a ceramic (IPS Empress). The test centre did not know which brands it was testing. Both volumetric and vertical loss were determined with an optical sensor. After completion of the wear test, the raw data were sent to IVOCLAR for further analysis. The statistical analysis of the data included logarithmic transformation of the data, calculation of the relative ranks of each material within each test centre, measures of agreement between methods, the discrimination power and coefficient of variation of each method, and measures of the consistency and global performance of each material. RESULTS: The relative ranks of the materials varied tremendously between the test centres. When all materials were taken into account and the test methods compared with each other, only ACTA agreed reasonably well with two other methods, i.e. OHSU and ZURICH. On the other hand, MUNICH did not agree with the other methods at all. The ZURICH method showed the lowest discrimination power; ACTA, IVOCLAR, and ALABAMA localized showed the highest. Material-wise, the best global performance was achieved by the leucite-reinforced ceramic material Empress, which was clearly ahead of belleGlass, SureFil, and Estenia. In contrast, Heliomolar, Tetric Ceram, and especially Chromasit demonstrated a poor global performance. The best consistency was achieved by SureFil, Tetric Ceram, and Chromasit, whereas the consistency of Amalcap and Heliomolar was poor. When comparing the laboratory data with clinical data, a significant agreement was found for the IVOCLAR and ALABAMA generalized wear methods. SIGNIFICANCE: As the different wear simulator settings measure different wear mechanisms, it seems reasonable to combine at least two different wear settings to assess the wear resistance of a new material.
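A hypothetical sketch of the kind of statistical summary described (log transformation, within-centre relative ranks, coefficient of variation per method, and rank agreement between methods); the wear values below are randomly generated placeholders, not the round-robin data.

```python
import numpy as np
import pandas as pd

# Placeholder wear data: rows are materials, columns are test centres/methods.
rng = np.random.default_rng(0)
materials = ["belleGlass", "Chromasit", "Estenia", "Heliomolar", "SureFil",
             "Targis", "Tetric Ceram", "Amalcap", "IPS Empress"]
centres = ["ACTA", "ALABAMA", "IVOCLAR", "MUNICH", "OHSU", "ZURICH"]
wear = pd.DataFrame(rng.lognormal(3.0, 0.5, (len(materials), len(centres))),
                    index=materials, columns=centres)

log_wear = np.log(wear)                    # logarithmic transformation
ranks = log_wear.rank(axis=0)              # relative rank of each material within each centre
cv = wear.std() / wear.mean()              # coefficient of variation of each method
agreement = ranks.corr(method="spearman")  # rank agreement between methods

print(ranks.round(1), cv.round(2), agreement.round(2), sep="\n\n")
```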
Abstract:
A 6008 base pair fragment of the vaccinia virus DNA containing the gene for the precursor of the major core protein 4a, which has been designated P4a, was sequenced. A long open reading frame (ORF) encoding a protein of molecular weight 102,157 started close to the position where the P4a mRNA had been mapped. Analysis of the mRNA by S1 nuclease mapping and primer extension indicated that the 5' end defined by the former method is not the true 5' end. This suggests that the P4a coding region is preceded by leader sequences that are not derived from the immediate vicinity of the gene, similar to what has been reported for another late vaccinia virus mRNA. The sequenced DNA contained several further ORFs on the same or the opposite DNA strand, providing further evidence for the close spacing of protein-coding sequences in the viral genome.
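For readers unfamiliar with ORF scanning, the short sketch below illustrates how long open reading frames can be located on both strands of a sequenced fragment; the minimum-length threshold and the input string are placeholders, not the 6008 bp vaccinia fragment.

```python
# Illustrative ORF scan on both strands; threshold and input are placeholders.
START, STOPS = "ATG", {"TAA", "TAG", "TGA"}
COMP = str.maketrans("ACGT", "TGCA")

def orfs(seq, min_codons=100):
    found = []
    for strand, s in (("+", seq), ("-", seq.translate(COMP)[::-1])):
        for frame in range(3):
            start = None
            for i in range(frame, len(s) - 2, 3):
                codon = s[i:i + 3]
                if codon == START and start is None:
                    start = i                                  # open a candidate ORF
                elif codon in STOPS and start is not None:
                    if (i - start) // 3 >= min_codons:
                        found.append((strand, start, i + 3))   # keep long ORFs only
                    start = None
    return found

print(orfs("ATG" + "GCT" * 120 + "TAA", min_codons=100))
```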
Abstract:
Groundwater management depends on knowledge of recharge rates and water fluxes within aquifers. Recharge is one of the most difficult water cycle components to estimate. As a result, regardless of the chosen method, the estimates are subject to uncertainties that can be identified by means of comparison with other approaches. In this study, groundwater recharge estimation based on the water balance in the unsaturated zone is assessed. Firstly, the approach is evaluated by comparing its results with those of another method. Then, the estimates are used as inputs to a transient groundwater flow model in order to assess how the water table would respond to the obtained recharge rates compared with measured levels. The results suggest a good performance of the adopted approach; despite some inherent limitations, it has advantages over other methods since the data required are easier to obtain.
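A schematic sketch of a daily water balance in the unsaturated zone of the kind referred to above; the precipitation and evapotranspiration series and the storage threshold are hypothetical, and the accounting is deliberately simplified.

```python
# Schematic daily water balance: recharge is taken as the drainage that occurs
# once soil storage exceeds an assumed field-capacity storage. All inputs
# (P, ET, storage capacity, initial storage, in mm) are hypothetical placeholders.
def recharge(precip, evapotranspiration, storage_capacity=100.0, storage=50.0):
    total = 0.0
    for p, et in zip(precip, evapotranspiration):
        storage = max(storage + p - et, 0.0)   # wet or dry the soil column
        if storage > storage_capacity:         # excess drains below the root zone
            total += storage - storage_capacity
            storage = storage_capacity
    return total

print(recharge([12, 0, 35, 5, 60, 0], [4, 4, 3, 5, 3, 4]))  # mm of recharge
```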
Abstract:
Univariate methods for diagnosing the nutritional status of garlic crops, such as the sufficiency range and the critical level, are very susceptible to the effects of dilution and accumulation of nutrients. Therefore, this study aimed to establish bivariate and multivariate norms for this crop using the Diagnosis and Recommendation Integrated System (DRIS) and Nutritional Composition Diagnosis (CND), respectively. The criteria used were nutritional status and the sufficiency range, and the resulting diagnoses were compared. The study was performed in the region of Alto Paranaíba, MG, Brazil, during the 2012 and 2013 crop seasons. Samples came from 99 commercial garlic fields, cultivated with the cultivar "Ito" and mostly established in a Latossolo Vermelho-Amarelo Distrófico (Oxisol). Copper and K were the nutrients with the highest number of fields diagnosed as limiting by lack (LF) and limiting by excess (LE), respectively. The DRIS method showed a greater tendency to diagnose LF, while the CND tended towards LE. The sufficiency ranges of both methods were narrow in relation to those suggested in the literature. Moreover, all ranges produced by the CND method were narrower than those of the DRIS method. The CND method showed better performance than DRIS in distinguishing crop yields associated with different diagnoses. Regarding the evaluation criterion, nutritional status performed better than the sufficiency range in distinguishing diagnoses with respect to yield.
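The sketch below illustrates, under stated assumptions, how CND-style indices are typically computed from a leaf composition via centered log-ratios compared against reference norms; the composition and the norms are invented placeholders and do not reproduce the DRIS/CND norms derived in this study.

```python
import numpy as np

# Simplified CND-style nutrient indices: centered log-ratio of a sample's
# composition scored against reference norms (mean, sd). Hypothetical values.
def cnd_indices(sample, norm_mean, norm_sd):
    comp = np.asarray(sample, dtype=float)
    comp = np.append(comp, 100.0 - comp.sum())           # filling value R (rest of dry matter)
    clr = np.log(comp / np.exp(np.mean(np.log(comp))))   # centered log-ratio transform
    return (clr[:-1] - norm_mean) / norm_sd               # standardized indices per nutrient

sample = [3.2, 0.4, 2.8, 1.1]                 # e.g. N, P, K, Ca contents (% dry matter)
norm_mean = np.array([0.9, -1.2, 0.7, -0.1])  # hypothetical clr norms
norm_sd = np.array([0.2, 0.3, 0.25, 0.2])
print(cnd_indices(sample, norm_mean, norm_sd).round(2))
```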
Abstract:
Accurate modeling of flow instabilities requires computational tools able to deal with several interacting scales, from the scale at which fingers are triggered up to the scale at which their effects need to be described. The Multiscale Finite Volume (MsFV) method offers a framework to couple fine- and coarse-scale features by solving a set of localized problems which are used both to define a coarse-scale problem and to reconstruct the fine-scale details of the flow. The MsFV method can be seen as an upscaling-downscaling technique, which is computationally more efficient than standard discretization schemes and more accurate than traditional upscaling techniques. We show that, although the method has proven accurate in modeling density-driven flow under stable conditions, the accuracy of the MsFV method deteriorates in the case of unstable flow, and an iterative scheme is required to control the localization error. To avoid large computational overhead due to the iterative scheme, we suggest several adaptive strategies for both flow and transport. In particular, the concentration gradient is used to identify a front region where instabilities are triggered and an accurate (iteratively improved) solution is required. Outside the front region the problem is upscaled and both flow and transport are solved only at the coarse scale. This adaptive strategy leads to very accurate solutions at roughly the same computational cost as the non-iterative MsFV method. In many circumstances, however, an accurate description of flow instabilities requires a refinement of the computational grid rather than a coarsening. For these problems, we propose a modified iterative MsFV, which can be used as a downscaling method (DMsFV). Compared to other grid refinement techniques, the DMsFV clearly separates the computational domain into refined and non-refined regions, which can be treated separately and matched later. This gives great flexibility to employ different physical descriptions in different regions, where different equations could be solved, offering an excellent framework to construct hybrid methods.
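A minimal sketch of the adaptive criterion described above, in which the concentration gradient flags a front region for iterative fine-scale correction while the rest of the domain stays at the coarse scale; the concentration field and the threshold are illustrative assumptions, not one of the paper's test cases.

```python
import numpy as np

# Flag cells where the concentration gradient exceeds a threshold: only these
# "front" cells would receive the iteratively improved fine-scale solution.
def front_cells(concentration, threshold=0.05):
    grad = np.abs(np.gradient(concentration))
    return grad > threshold                      # True where instabilities may grow

x = np.linspace(0.0, 1.0, 101)
c = 0.5 * (1.0 - np.tanh((x - 0.4) / 0.02))      # hypothetical sharp displacement front
mask = front_cells(c)
print(f"{mask.sum()} of {mask.size} cells marked for iterative fine-scale solve")
```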
Abstract:
Little information is currently available from the various societies of cardiology on primary percutaneous coronary intervention (PCI) for acute myocardial infarction (AMI). Since primary PCI is the main method of reperfusion in AMI in many centres, and since of all cardiac emergencies AMI represents the most urgent situation for PCI, recommendations based on scientific evidence and expert experience would be useful for centres practising primary PCI, or those looking to establish a primary PCI programme. To this aim, a task force for primary PCI in AMI was formed to develop a set of recommendations to complement and assist clinical judgment. This paper represents the product of their recommendations.
Abstract:
Oxalic and oxamic acids are the ultimate and most persistent by-products of the degradation of N-aromatics by electrochemical advanced oxidation processes (EAOPs). In this paper, the kinetics and oxidative paths of these acids have been studied for several EAOPs using a boron-doped diamond (BDD) anode and a stainless steel or an air-diffusion cathode. Anodic oxidation (AO-BDD), AO-BDD in the presence of Fe2+ (AO-BDD-Fe2+) and under UVA irradiation (AO-BDD-Fe2+-UVA), along with electro-Fenton (EF-BDD), were tested. The oxidation of both acids and their iron complexes on BDD was clarified by cyclic voltammetry. AO-BDD allowed the overall mineralization of oxalic acid, but oxamic acid was removed much more slowly. Each acid underwent a similar decay in AO-BDD-Fe2+ and EF-BDD, as expected if its iron complexes were not attacked by hydroxyl radicals in the bulk. The fastest and total mineralization of both acids was achieved in AO-BDD-Fe2+-UVA, due to the high photoactivity of their Fe(III) complexes, which were continuously regenerated by oxidation of their Fe(II) complexes. Oxamic acid always released a larger proportion of NH4+ than NO3- ions, as well as volatile NOx species. Both acids were independently oxidized at the anode in AO-BDD, but in AO-BDD-Fe2+-UVA oxamic acid was more slowly degraded as its content decreased, without significant effect on the oxalic acid decay. The increase in current density enhanced the oxidation power of the latter method, although with a loss of efficiency. High Fe2+ contents inhibited the oxidation of Fe(II) complexes through the competitive oxidation of Fe2+ to Fe3+. Low current densities and low Fe2+ contents are therefore preferable to remove these acids more efficiently by the most potent method, AO-BDD-Fe2+-UVA.
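As an illustration of how such decay kinetics are commonly summarized, the sketch below fits an apparent first-order rate constant to a hypothetical concentration-time series; the data are placeholders, not the measured oxalic or oxamic acid decays.

```python
import numpy as np

# Apparent pseudo-first-order fit: ln(c) vs t gives the rate constant.
# Times and concentrations are invented placeholders.
t = np.array([0.0, 30.0, 60.0, 120.0, 180.0, 240.0])   # min
c = np.array([1.04, 0.71, 0.48, 0.22, 0.11, 0.05])      # mM
k = -np.polyfit(t, np.log(c), 1)[0]                      # apparent rate constant (min^-1)
print(f"apparent k = {k:.4f} min^-1, half-life = {np.log(2) / k:.1f} min")
```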
Abstract:
The Multiscale Finite Volume (MsFV) method has been developed to efficiently solve reservoir-scale problems while conserving fine-scale details. The method employs two grid levels: a fine grid and a coarse grid. The latter is used to calculate a coarse solution to the original problem, which is interpolated to the fine mesh. The coarse system is constructed from the fine-scale problem using restriction and prolongation operators that are obtained by introducing appropriate localization assumptions. Through a successive reconstruction step, the MsFV method is able to provide an approximate, but fully conservative, fine-scale velocity field. For very large problems (e.g. a one-billion-cell model), a two-level algorithm can remain computationally expensive. Depending on the upscaling factor, the computational expense comes either from the costs associated with the solution of the coarse problem or from the construction of the local interpolators (basis functions). To ensure numerical efficiency in the former case, the MsFV concept can be reapplied to the coarse problem, leading to a new, coarser level of discretization. One challenge in the use of a multilevel MsFV technique is to find an efficient reconstruction step to obtain a conservative fine-scale velocity field. In this work, we introduce a three-level Multiscale Finite Volume method (MlMsFV) and give a detailed description of the reconstruction step. Complexity analyses of the original MsFV method and the new MlMsFV method are discussed, and their performances in terms of accuracy and efficiency are compared.
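A minimal sketch of the restriction/prolongation idea behind such a multilevel scheme, using simple piecewise-constant aggregation on a small 1D operator; these operators are not the MsFV basis functions of the paper, and reapplying the same construction to the coarse system is what would yield a third, coarser level.

```python
import numpy as np

# Aggregate fine cells into coarse cells, solve the much smaller coarse system,
# and interpolate back. Operators here are plain piecewise-constant aggregates.
def coarsen(A, b, ratio):
    n = A.shape[0]
    nc = n // ratio
    P = np.zeros((n, nc))
    for j in range(nc):
        P[j * ratio:(j + 1) * ratio, j] = 1.0   # prolongation (piecewise-constant injection)
    R = P.T / ratio                             # restriction (cell averaging)
    return R @ A @ P, R @ b, P

n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D "pressure" operator (toy example)
b = np.ones(n)
A_c, b_c, P = coarsen(A, b, ratio=8)                    # coarse-level system (8 x 8)
x_approx = P @ np.linalg.solve(A_c, b_c)                # coarse solution, interpolated to fine grid
print(x_approx[:8].round(2))
```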
Abstract:
High performance liquid chromatography (HPLC) is the reference method for measuring concentrations of antimicrobials in blood. This technique requires careful sample preparation. Protocols using organic solvents and/or solid extraction phases are time-consuming and entail several manipulations, which can lead to partial loss of the determined compound and increased analytical variability. Moreover, to obtain sufficient material for analysis, at least 1 ml of plasma is required. This constraint makes it difficult to determine drug levels when blood sample volumes are limited. However, drugs with low plasma-protein binding can be reliably extracted from plasma by ultra-filtration, with minimal loss due to the protein-bound fraction. This study validated a single-step ultra-filtration method for extracting fluconazole (FLC), a first-line antifungal agent with weak plasma-protein binding, from plasma to determine its concentration by HPLC. Spiked FLC standards and unknowns were prepared in human and rat plasma. Samples (240 µl) were transferred into disposable microtube filtration units containing cellulose or polysulfone filters with a 5 kDa cut-off. After centrifugation for 60 min at 15 000 g, FLC concentrations were measured by direct injection of the filtrate into the HPLC. Using cellulose filters, low molecular weight proteins were eluted early in the chromatogram and well separated from FLC, which eluted at 8.40 min as a sharp single peak. In contrast, with polysulfone filters several additional peaks interfering with the FLC peak were observed. Moreover, FLC recovery using cellulose filters was higher and more reproducible than with polysulfone filters. Cellulose filters were therefore used for the subsequent validation procedure. The quantification limit was 0.195 mg/l. Standard curves with a quadratic regression coefficient ≥ 0.9999 were obtained in the concentration range of 0.195-100 mg/l. The inter- and intra-run accuracies and precisions over the clinically relevant concentration range, 1.875-60 mg/l, fell well within the ±15% variation recommended by the current guidelines for the validation of analytical methods. Furthermore, no analytical interference was observed with commonly used antibiotics, antifungals, antivirals, and immunosuppressive agents. Ultra-filtration of plasma with cellulose filters permits the extraction of FLC from small volumes (240 µl). The determination of FLC concentrations by HPLC after this single-step procedure is selective, precise, and accurate.
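A sketch of the calibration and accuracy arithmetic described above: fitting a quadratic standard curve, back-calculating a quality-control sample, and checking it against the ±15% acceptance window; the peak areas are invented placeholders, not the validated FLC data.

```python
import numpy as np

# Hypothetical calibration standards (mg/l) and detector responses (peak area).
conc = np.array([0.195, 1.875, 7.5, 15.0, 30.0, 60.0, 100.0])
area = np.array([0.9, 8.6, 34.1, 67.5, 133.0, 261.0, 428.0])
c2, c1, c0 = np.polyfit(conc, area, 2)          # quadratic standard curve

def back_calculate(a):
    # Invert the quadratic and keep the root inside the calibrated range.
    roots = np.roots([c2, c1, c0 - a])
    return next(r.real for r in roots if abs(r.imag) < 1e-9 and 0.0 < r.real < 120.0)

nominal = 30.0
found = back_calculate(130.5)                   # QC sample peak area (placeholder)
print(f"found {found:.2f} mg/l, accuracy {100 * found / nominal:.1f}% "
      f"(acceptance window 85-115%)")
```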
Abstract:
This study proposes a new concept for upscaling local information on failure surfaces derived from geophysical data, in order to develop the spatial information and quickly estimate the magnitude and intensity of a landslide. A new vision of seismic interpretation on landslides is also demonstrated by taking into account basic geomorphic information with a numerical method based on the Sloping Local Base Level (SLBL). The SLBL is a generalization of the base level defined in geomorphology applied to landslides, and allows the calculation of the potential geometry of the landslide failure surface. This approach was applied to a large-scale landslide formed mainly in gypsum and situated in a former glacial valley along the Rhone within the Western European Alps. Previous studies identified the existence of two sliding surfaces that may continue below the level of the valley. In this study, seismic refraction-reflection surveys were carried out to verify the existence of these failure surfaces. The analysis of the seismic data provides a four-layer model in which three velocity layers (<1000 m s-1, 1500 m s-1 and 3000 m s-1) are interpreted as the mobilized mass at different levels of weathering and compaction. The highest velocity layer (>4000 m s-1), with a maximum depth of ~58 m, is interpreted as the stable anhydrite bedrock. Two failure surfaces were interpreted from the seismic surveys: an upper one and a much deeper one (25 and 50 m deep, respectively). The upper failure surface depth deduced from geophysics is slightly different from the results obtained using the SLBL, and the deeper failure surface depth calculated with the SLBL method is underestimated in comparison with the geophysical interpretations. Optimal results were therefore obtained by including the seismic data in the SLBL calculations according to the geomorphic limits of the landslide (maximal volume of mobilized mass = 7.5 × 10^6 m^3).
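A minimal 1D sketch of the SLBL principle, in which each node is iteratively lowered toward the mean of its neighbours (plus a tolerance) to carve a potential failure surface beneath the topography; the profile, tolerance, and resulting volume are illustrative assumptions, not the Rhone valley case study.

```python
import numpy as np

# Iteratively replace each interior node by the mean of its neighbours (plus a
# tolerance) whenever that value lies below the current surface; the surface
# never rises above the topography, so it carves a potential failure surface.
def slbl(topo, tolerance=0.0, n_iter=20000):
    z = topo.astype(float).copy()
    for _ in range(n_iter):
        neigh = 0.5 * (z[:-2] + z[2:]) + tolerance
        z[1:-1] = np.minimum(z[1:-1], neigh)
    return z

x = np.linspace(0.0, 500.0, 101)                   # horizontal distance (m)
topo = 100.0 + 0.2 * x + 15.0 * np.sin(x / 40.0)   # hypothetical slope profile (m)
surface = slbl(topo, tolerance=-0.02)              # negative tolerance deepens the surface
volume_per_m = np.sum(topo - surface) * (x[1] - x[0])
print(f"potential mobilized volume: {volume_per_m:.0f} m3 per metre of width")
```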