998 results for Anelastic relaxation methods
Abstract:
The State of Iowa currently has approximately 69,000 miles of unpaved secondary roads. Due to the low traffic count on these unpaved roads, paving with asphalt or Portland cement concrete is not economical. Therefore, to reduce dust production, dust suppressants have been used for decades. This study was conducted to evaluate the effectiveness of several widely used dust suppressants through quantitative field testing on two of Iowa's most widely used secondary road surface treatments: crushed limestone rock and alluvial sand/gravel. The commercially available dust suppressants included lignin sulfonate, calcium chloride, and soybean oil soapstock. These suppressants were applied to 1,000-ft test sections on four unpaved roads in Story County, Iowa. To duplicate field conditions, the suppressants were applied as a surface spray once in early June and again in late August or early September. The four unpaved roads included two with crushed limestone rock and two with alluvial sand/gravel surface treatments, as well as high and low traffic counts. The effectiveness of the dust suppressants was evaluated by comparing the dust produced on treated and untreated test sections. Dust collection was scheduled for 1, 2, 4, 6, and 8 weeks after each application, for a total testing period of 16 weeks. Results of a cost analysis comparing annual dust suppressant application with biennial aggregate replacement indicated that the cost of the dust suppressant, its transportation, and its application was relatively high compared to that of the aggregate types. Therefore, biennial aggregate replacement is considered more economical than annual dust suppressant application, although annual dust suppressant application reduced the cost of road maintenance by 75%. Results of the dust collection indicated that the lignin sulfonate suppressant outperformed calcium chloride and soybean oil soapstock on all four unpaved roads; that the effect of the suppressants on the alluvial sand/gravel surface treatment was less than that on the crushed limestone rock; that the residual effects of all the products appeared reasonably good after blading; and that the combination of alluvial sand/gravel surface treatment and high traffic count caused dust reduction to decrease dramatically.
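A simple worked comparison, using hypothetical per-mile costs (the abstract does not report the underlying figures, so every number below is an assumption), illustrates how annual suppressant application can cut routine maintenance sharply yet still be less economical than biennial aggregate replacement:

```python
# Hypothetical per-mile annual costs, for illustration only -- not the study's figures.
aggregate_replacement = 4000 / 2                    # biennial regraveling, annualized ($/mile/yr)
blading_untreated = 1200                            # routine blading on an untreated road ($/mile/yr)
blading_treated = blading_untreated * 0.25          # 75% maintenance reduction reported in the abstract
suppressant_applied = 5000                          # product + transport + two applications ($/mile/yr)

untreated_total = aggregate_replacement + blading_untreated
treated_total = suppressant_applied + blading_treated

print(f"Biennial aggregate replacement: ${untreated_total:,.0f}/mile/yr")
print(f"Annual dust suppressant:        ${treated_total:,.0f}/mile/yr")
# Even with blading cut by 75%, the suppressant's own cost dominates, which is the
# pattern behind the study's conclusion that aggregate replacement is more economical.
```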
Abstract:
The purpose of this research was to summarize existing nondestructive test methods that have the potential to be used to detect materials-related distress (MRD) in concrete pavements. The various nondestructive test methods were then subjected to selection criteria that helped to reduce the size of the list so that specific techniques could be investigated in more detail. The main test methods that were determined to be applicable to this study included two stress-wave propagation techniques (impact-echo and spectral analysis of surface waves techniques), infrared thermography, ground penetrating radar (GPR), and visual inspection. The GPR technique was selected for a preliminary round of “proof of concept” trials. GPR surveys were carried out over a variety of portland cement concrete pavements for this study using two different systems. One of the systems was a state-of-the-art GPR system that allowed data to be collected at highway speeds. The other system was a less sophisticated system that was commercially available. Surveys conducted with both sets of equipment have produced test results capable of identifying subsurface distress in two of the three sites that exhibited internal cracking due to MRD. Both systems failed to detect distress in a single pavement that exhibited extensive cracking. Both systems correctly indicated that the control pavement exhibited negligible evidence of distress. The initial positive results presented here indicate that a more thorough study (incorporating refinements to the system, data collection, and analysis) is needed. Improvements in the results will be dependent upon defining the optimum number and arrangement of GPR antennas to detect the most common problems in Iowa pavements. In addition, refining high-frequency antenna response characteristics will be a crucial step toward providing an optimum GPR system for detecting materials-related distress.
Abstract:
This study investigates the harmonisation of analytical results as an alternative to the more restrictive harmonisation of analytical methods currently recommended for enabling the exchange of information and thus supporting the fight against illicit drug trafficking. The main goal is to demonstrate that a common database can be fed by a range of different analytical methods, regardless of differences in their analytical parameters. For this purpose, a methodology was developed for estimating, and even optimising, the similarity of results obtained with different analytical methods. In particular, the possibility of introducing chemical profiles obtained with fast GC-FID into a GC-MS database is studied in this paper. Using this methodology, the similarity of results from different analytical methods can be objectively assessed, and the practical utility of sharing a database across these methods can be evaluated according to the profiling purpose (evidential vs. operational). This methodology offers a relevant approach to feeding a database from different analytical methods and calls into question the need to analyse all illicit drug seizures in a single laboratory or to harmonise analytical methods in each participating laboratory.
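As a hedged illustration of how cross-method profile similarity might be quantified (the paper does not state its metric, so the cosine/Pearson measures, the compound set, and the normalisation below are assumptions), a minimal sketch:

```python
import numpy as np

# Hypothetical relative peak areas for the same specimen measured on two instruments
# (compound set and values are illustrative, not from the study).
gcms_profile = np.array([12.1, 30.5, 8.2, 49.2])     # GC-MS target-compound areas
fastgc_profile = np.array([11.4, 32.0, 7.9, 48.7])   # fast GC-FID areas, same compounds

def normalise(p):
    """Scale a profile to unit sum so instruments with different absolute
    responses become comparable."""
    return p / p.sum()

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

a, b = normalise(gcms_profile), normalise(fastgc_profile)
print("cosine similarity:", round(cosine_similarity(a, b), 4))
print("Pearson r:", round(np.corrcoef(a, b)[0, 1], 4))
# High similarity between linked specimens (and low similarity between unlinked ones)
# is the kind of evidence needed before pooling both methods in one shared database.
```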
Abstract:
Voriconazole (VRC) is a broad-spectrum antifungal triazole with nonlinear pharmacokinetics. The utility of measurement of voriconazole blood levels for optimizing therapy is a matter of debate. Available high-performance liquid chromatography (HPLC) and bioassay methods are technically complex, time-consuming, or have a narrow analytical range. Objectives of the present study were to develop new, simple analytical methods and to assess variability of voriconazole blood levels in patients with invasive mycoses. Acetonitrile precipitation, reverse-phase separation, and UV detection were used for HPLC. A voriconazole-hypersusceptible Candida albicans mutant lacking multidrug efflux transporters (cdr1Δ/cdr1Δ, cdr2Δ/cdr2Δ, flu1Δ/flu1Δ, and mdr1Δ/mdr1Δ) and calcineurin subunit A (cnaΔ/cnaΔ) was used for bioassay. Mean intra-/interrun accuracies over the VRC concentration range from 0.25 to 16 mg/liter were 93.7% ± 5.0%/96.5% ± 2.4% (HPLC) and 94.9% ± 6.1%/94.7% ± 3.3% (bioassay). Mean intra-/interrun coefficients of variation were 5.2% ± 1.5%/5.4% ± 0.9% and 6.5% ± 2.5%/4.0% ± 1.6% for HPLC and bioassay, respectively. The coefficient of concordance between HPLC and bioassay was 0.96. Sequential measurements in 10 patients with invasive mycoses showed important inter- and intraindividual variations of estimated voriconazole area under the concentration-time curve (AUC): median, 43.9 mg·h/liter (range, 12.9 to 71.1) on the first and 27.4 mg·h/liter (range, 2.9 to 93.1) on the last day of therapy. During therapy, AUC decreased in five patients, increased in three, and remained unchanged in two. A toxic encephalopathy probably related to the increase of the VRC AUC (from 71.1 to 93.1 mg·h/liter) was observed. The VRC AUC decreased (from 12.9 to 2.9 mg·h/liter) in a patient with persistent signs of invasive aspergillosis. These preliminary observations suggest that voriconazole over- or underexposure resulting from variability of blood levels might have clinical implications. Simple HPLC and bioassay methods offer new tools for monitoring voriconazole therapy.
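The abstract reports AUC estimates derived from sequential blood-level measurements; a minimal sketch of how an AUC over one dosing interval could be estimated with the linear trapezoidal rule (sampling times, concentrations, and the twice-daily scaling below are hypothetical assumptions, not the study's pharmacokinetic method):

```python
import numpy as np

# Hypothetical voriconazole plasma concentrations (mg/liter) at sampling times (h)
# over one 12-h dosing interval -- illustrative values only.
times = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 12.0])
conc = np.array([1.8, 4.2, 3.6, 2.9, 2.1, 1.7])

# Linear trapezoidal rule: AUC over 0-12 h in mg·h/liter.
auc_0_12 = np.trapz(conc, times)

# Rough 24-h exposure for twice-daily dosing, assuming steady state.
auc_0_24 = 2 * auc_0_12
print(f"AUC(0-12 h) = {auc_0_12:.1f} mg·h/liter, AUC(0-24 h) ≈ {auc_0_24:.1f} mg·h/liter")
```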
Abstract:
We evaluated 25 protocol variants of 14 independent computational methods for exon identification, transcript reconstruction and expression-level quantification from RNA-seq data. Our results show that most algorithms are able to identify discrete transcript components with high success rates but that assembly of complete isoform structures poses a major challenge even when all constituent elements are identified. Expression-level estimates also varied widely across methods, even when based on similar transcript models. Consequently, the complexity of higher eukaryotic genomes imposes severe limitations on transcript recall and splice product discrimination that are likely to remain limiting factors for the analysis of current-generation RNA-seq data.
Abstract:
Aim: Recently developed parametric methods in historical biogeography allow researchers to integrate temporal and palaeogeographical information into the reconstruction of biogeographical scenarios, thus overcoming a known bias of parsimony-based approaches. Here, we compare a parametric method, dispersal-extinction-cladogenesis (DEC), against a parsimony-based method, dispersal-vicariance analysis (DIVA), which does not incorporate branch lengths but accounts for phylogenetic uncertainty through a Bayesian empirical approach (Bayes-DIVA). We analyse the benefits and limitations of each method using the cosmopolitan plant family Sapindaceae as a case study. Location: World-wide. Methods: Phylogenetic relationships were estimated by Bayesian inference on a large dataset representing generic diversity within Sapindaceae. Lineage divergence times were estimated by penalized likelihood over a sample of trees from the posterior distribution of the phylogeny to account for dating uncertainty in biogeographical reconstructions. We compared biogeographical scenarios between Bayes-DIVA and two different DEC models: one with no geological constraints and another that employed a stratified palaeogeographical model in which dispersal rates were scaled according to area connectivity across four time slices, reflecting the changing continental configuration over the last 110 million years. Results: Despite differences in the underlying biogeographical model, Bayes-DIVA and DEC inferred similar biogeographical scenarios. The main differences were: (1) in the timing of dispersal events - which in Bayes-DIVA sometimes conflicts with palaeogeographical information, and (2) in the lower frequency of terminal dispersal events inferred by DEC. Uncertainty in divergence time estimations influenced both the inference of ancestral ranges and the decisiveness with which an area can be assigned to a node. Main conclusions: By considering lineage divergence times, the DEC method gives more accurate reconstructions that are in agreement with palaeogeographical evidence. In contrast, Bayes-DIVA showed the highest decisiveness in unequivocally reconstructing ancestral ranges, probably reflecting its ability to integrate phylogenetic uncertainty. Care should be taken in defining the palaeogeographical model in DEC because of the possibility of overestimating the frequency of extinction events, or of inferring ancestral ranges that are outside the extant species ranges, owing to dispersal constraints enforced by the model. The wide-spanning spatial and temporal model proposed here could prove useful for testing large-scale biogeographical patterns in plants.
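As an illustration of the kind of time-stratified dispersal scaling described in the Methods above (the areas, time-slice boundaries, and multiplier values below are invented for the sketch and are not the paper's actual model), a DEC-style setup might encode connectivity like this:

```python
# Illustrative time-stratified dispersal multipliers for a DEC-style analysis.
# Areas and values are hypothetical; a real analysis would derive them from
# palaeogeographical reconstructions.
areas = ["Neotropics", "Africa", "Eurasia", "Australasia"]

# One multiplier matrix per time slice (ages in Ma); 1.0 = well connected,
# small values = dispersal strongly penalised.
time_slices = {
    (110, 90): [[1.0, 0.5, 0.1, 0.1],
                [0.5, 1.0, 0.5, 0.1],
                [0.1, 0.5, 1.0, 0.5],
                [0.1, 0.1, 0.5, 1.0]],
    (90, 60):  [[1.0, 0.2, 0.2, 0.1],
                [0.2, 1.0, 0.5, 0.1],
                [0.2, 0.5, 1.0, 0.2],
                [0.1, 0.1, 0.2, 1.0]],
    # further slices down to the present would complete the stratified model
}

def dispersal_rate(base_rate, src, dst, age):
    """Scale a baseline dispersal rate by the connectivity multiplier that applies
    at a given branch age."""
    for (old, young), m in time_slices.items():
        if young <= age < old:
            return base_rate * m[areas.index(src)][areas.index(dst)]
    return base_rate

print(dispersal_rate(0.01, "Neotropics", "Eurasia", age=100))  # strongly penalised route
```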
Abstract:
For a wide range of environmental, hydrological, and engineering applications there is a fast-growing need for high-resolution imaging. In this context, waveform tomographic imaging of crosshole georadar data is a powerful method able to provide images of pertinent electrical properties in near-surface environments with unprecedented spatial resolution. In contrast, conventional ray-based tomographic methods, which consider only a very limited part of the recorded signal (first-arrival traveltimes and maximum first-cycle amplitudes), suffer from inherent limitations in resolution and may prove inadequate in complex environments. For a typical crosshole georadar survey, the potential improvement in resolution when using waveform-based approaches instead of ray-based approaches is roughly one order of magnitude. Moreover, the spatial resolution of waveform-based inversions is comparable to that of common logging methods. While waveform tomographic imaging has become well established in exploration seismology over the past two decades, it is still comparatively underdeveloped in the georadar domain despite corresponding needs. Recently, different groups have presented finite-difference time-domain waveform inversion schemes for crosshole georadar data, which are adaptations and extensions of Tarantola's seminal nonlinear generalized least-squares approach developed for the seismic case. First applications of these new crosshole georadar waveform inversion schemes to synthetic and field data have shown promising results. However, little is known about the limits and performance of such schemes in complex environments. To this end, the general motivation of my thesis is the evaluation of the robustness and limitations of waveform inversion algorithms for crosshole georadar data in order to apply such schemes to a wide range of real-world problems. One crucial issue in making any waveform scheme applicable and effective for real-world crosshole georadar problems is the accurate estimation of the source wavelet, which is unknown in practice. Waveform inversion schemes for crosshole georadar data require forward simulations of the wavefield in order to iteratively solve the inverse problem. Therefore, accurate knowledge of the source wavelet is critically important for successful application of such schemes. Relatively small differences in the estimated source wavelet shape can lead to large differences in the resulting tomograms. In the first part of my thesis, I explore the viability and robustness of a relatively simple iterative deconvolution technique that incorporates the estimation of the source wavelet into the waveform inversion procedure rather than adding additional model parameters to the inversion problem. Extensive tests indicate that this source wavelet estimation technique is simple yet effective, and is able to provide remarkably accurate and robust estimates of the source wavelet in the presence of strong heterogeneity in both the dielectric permittivity and electrical conductivity, as well as significant ambient noise in the recorded data.
Furthermore, our tests also indicate that the approach is insensitive to the phase characteristics of the starting wavelet, which is not the case when the wavelet estimation is directly incorporated into the inverse problem. Another critical issue with crosshole georadar waveform inversion schemes that clearly needs to be investigated is the consequence of the common assumption of frequency-independent electromagnetic constitutive parameters. This is crucial since, in reality, these parameters are known to be frequency dependent and complex, and thus recorded georadar data may show significant dispersive behaviour. In particular, in the presence of water, there is a wide body of evidence showing that the dielectric permittivity can be significantly frequency dependent over the GPR frequency range, due to a variety of relaxation processes. The second part of my thesis is therefore dedicated to evaluating the reconstruction limits of a non-dispersive crosshole georadar waveform inversion scheme in the presence of varying degrees of dielectric dispersion. I show that the inversion algorithm, combined with the iterative deconvolution-based source wavelet estimation procedure, which is partially able to account for frequency-dependent effects through an "effective" wavelet, performs remarkably well in weakly to moderately dispersive environments and has the ability to provide adequate tomographic reconstructions.
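A minimal sketch of the general idea behind deconvolution-based source wavelet estimation: a water-level-regularised spectral division of observed by simulated traces. The function name, regularisation choice, and update scheme below are assumptions for illustration, not the thesis's exact algorithm:

```python
import numpy as np

def wavelet_correction(observed, simulated, water_level=1e-2):
    """Frequency-domain correction filter for an iterative, deconvolution-based
    source wavelet estimate. 'observed' and 'simulated' are (n_traces, n_samples)
    arrays, with the simulated data computed from the current model and the
    current trial wavelet. Convolving the trial wavelet with the returned filter
    gives the updated 'effective' wavelet. Illustrative sketch only.
    """
    n = observed.shape[1]
    O = np.fft.rfft(observed, axis=1)
    S = np.fft.rfft(simulated, axis=1)

    # Least-squares spectral ratio stacked over all traces, stabilised with a
    # water level where the simulated spectrum is weak.
    num = np.sum(np.conj(S) * O, axis=0)
    den = np.sum(np.abs(S) ** 2, axis=0)
    den = np.maximum(den, water_level * den.max())
    return np.fft.irfft(num / den, n=n)

# In an iterative scheme, the updated wavelet is used to re-run the forward
# simulation and the procedure is repeated until the wavelet stabilises.
```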
Abstract:
Deterioration in portland cement concrete (PCC) pavements can occur due to distresses caused by a combination of traffic loads and weather conditions. Hot mix asphalt (HMA) overlay is the most commonly used rehabilitation technique for such deteriorated PCC pavements. However, the performance of these HMA-overlaid pavements is hindered by the occurrence of reflective cracking, resulting in a significant reduction of pavement serviceability. Various fractured slab techniques, including rubblization, crack and seat, and break and seat, are used to minimize reflective cracking by reducing the slab action. However, the design of structural overlay thickness for cracked and seated and rubblized pavements is difficult because the resulting structure is neither a “true” rigid pavement nor a “true” flexible pavement. Existing design methodologies use empirical procedures based on the AASHO Road Test conducted in 1961. However, the AASHO Road Test did not employ any fractured slab technique, and there are numerous limitations associated with extrapolating its results to HMA overlay thickness design for fractured PCC pavements. The main objective of this project is to develop a mechanistic-empirical (ME) design approach for HMA overlay thickness design for fractured PCC pavements. In this design procedure, failure criteria such as the tensile strain at the bottom of the HMA layer and the vertical compressive strain on the surface of the subgrade are used to address HMA fatigue and subgrade rutting, respectively. The developed ME design system is also implemented in a Visual Basic computer program. A partial validation of the design method with reference to an instrumented trial project (IA-141, Polk County) in Iowa is provided in this report. Tensile strain values at the bottom of the HMA layer collected from FWD testing at this project site are in agreement with the results obtained from the developed computer program.
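A hedged sketch of the kind of mechanistic-empirical check described above, using Asphalt Institute-style transfer functions for HMA fatigue and subgrade rutting. The coefficients are commonly cited textbook forms and the strain inputs are illustrative; the report may use different calibrations:

```python
# Illustrative ME damage check using Asphalt Institute-style transfer functions
# (coefficients as commonly cited in pavement texts, not necessarily the report's).

def fatigue_repetitions(eps_t, E_psi):
    """Allowable load repetitions to HMA fatigue cracking.
    eps_t: tensile strain at the bottom of the HMA layer (in/in); E_psi: HMA modulus (psi)."""
    return 0.0796 * (1.0 / eps_t) ** 3.291 * (1.0 / E_psi) ** 0.854

def rutting_repetitions(eps_v):
    """Allowable load repetitions to subgrade rutting failure.
    eps_v: vertical compressive strain at the top of the subgrade (in/in)."""
    return 1.365e-9 * (1.0 / eps_v) ** 4.477

# Hypothetical strains from a layered-elastic analysis of a trial overlay thickness.
eps_t, eps_v, E = 120e-6, 300e-6, 500_000
design_ESALs = 2.0e6

Nf, Nd = fatigue_repetitions(eps_t, E), rutting_repetitions(eps_v)
print(f"fatigue damage ratio: {design_ESALs / Nf:.2f}")
print(f"rutting damage ratio: {design_ESALs / Nd:.2f}")
# If either damage ratio exceeds 1.0, the trial HMA overlay thickness is increased and
# the strains recomputed -- the iterative core of an ME thickness design.
```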
Abstract:
The members of the Iowa Concrete Paving Association, the National Concrete Pavement Technology Center Research Committee, and the Iowa Highway Research Board commissioned a study to examine alternative ways of developing transverse joints in portland cement concrete pavements. The present study investigated six separate variations of vertical metal strips placed above and below the dowels in conventional baskets. In addition, the study investigated existing patented assemblies and a new assembly developed in Spain and used in Australia. The metal assemblies were placed in a new pavement and allowed to stay in place for 30 days before the Iowa Department of Transportation staff terminated the test by directing the contractor to saw and seal the joints. This report describes the design, construction, testing, and conclusions of the project.
Abstract:
We present a polyhedral framework for establishing general structural properties of optimal solutions to stochastic scheduling problems in which multiple job classes vie for service resources: the existence of an optimal priority policy in a given family, characterized by a greedoid (whose feasible class subsets may receive higher priority), with optimal priorities determined by class-ranking indices, under restricted linear performance objectives (partial indexability). This framework extends that of Bertsimas and Niño-Mora (1996), which explained the optimality of priority-index policies under all linear objectives (general indexability). We show that, if performance measures satisfy partial conservation laws (with respect to the greedoid), which extend previous generalized conservation laws, then the problem admits a strong LP relaxation over a so-called extended greedoid polytope, which has strong structural and algorithmic properties. We present an adaptive-greedy algorithm (which extends Klimov's), taking as input the linear objective coefficients, that (1) determines whether the optimal LP solution is achievable by a policy in the given family; and (2) if so, computes a set of class-ranking indices that characterize optimal priority policies in the family. In the special case of project scheduling, we show that, under additional conditions, the optimal indices can be computed separately for each project (index decomposition). We further apply the framework to the important restless bandit model (two-action Markov decision chains), obtaining new index policies that extend Whittle's (1988), and simple sufficient conditions for their validity. These results highlight the power of polyhedral methods (the so-called achievable region approach) in dynamic and stochastic optimization.
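For intuition about class-ranking index policies, a minimal sketch of the classic cμ rule for a multiclass single-server queue, which is a far simpler special case than the paper's adaptive-greedy algorithm and is included only as an illustration of the index-policy idea:

```python
# Minimal illustration of a class-ranking index policy: the classic c-mu rule.
# This is a simple special case, not the paper's adaptive-greedy algorithm for
# greedoid-constrained priority families.
from dataclasses import dataclass

@dataclass
class JobClass:
    name: str
    holding_cost: float   # c_k: cost per job per unit time in system
    service_rate: float   # mu_k: completions per unit time when served

    @property
    def index(self) -> float:
        return self.holding_cost * self.service_rate  # c_k * mu_k

classes = [
    JobClass("interactive", holding_cost=5.0, service_rate=2.0),
    JobClass("batch", holding_cost=1.0, service_rate=4.0),
    JobClass("reporting", holding_cost=2.0, service_rate=1.0),
]

# Serve whichever non-empty class has the largest index; this priority order minimizes
# long-run holding cost in the non-preemptive M/G/1 setting without feedback.
priority_order = sorted(classes, key=lambda c: c.index, reverse=True)
print([(c.name, c.index) for c in priority_order])
```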
Abstract:
General Introduction This thesis can be divided into two main parts: the first, corresponding to the first three chapters, studies Rules of Origin (RoOs) in Preferential Trade Agreements (PTAs); the second, the fourth chapter, is concerned with Anti-Dumping (AD) measures. Despite the wide-ranging preferential access granted to developing countries by industrial ones under North-South trade agreements (whether reciprocal, like the Europe Agreements (EAs) or NAFTA, or not, such as the GSP, AGOA, or EBA), it has been claimed that the benefits from improved market access keep falling short of the full potential benefits. RoOs are largely regarded as a primary cause of the under-utilization of the improved market access of PTAs. RoOs are the rules that determine the eligibility of goods for preferential treatment. Their economic justification is to prevent trade deflection, i.e. to prevent non-preferred exporters from using the tariff preferences. However, they are complex, cost-raising, and cumbersome, and can be manipulated by organised special interest groups. As a result, RoOs can restrain trade beyond what is needed to prevent trade deflection and hence restrict market access in a statistically significant and quantitatively large proportion. Part I In order to further our understanding of the effects of RoOs in PTAs, the first chapter, written with Pr. Olivier Cadot, Céline Carrère and Pr. Jaime de Melo, describes and evaluates the RoOs governing EU and US PTAs. It draws on utilization-rate data for Mexican exports to the US in 2001 and on similar data for ACP exports to the EU in 2002. The paper makes two contributions. First, we construct an R-index of restrictiveness of RoOs along the lines first proposed by Estevadeordal (2000) for NAFTA, modifying it and extending it for the EU's single list (SL). This synthetic R-index is then used to compare RoOs under NAFTA and PANEURO. The two main findings of the chapter are as follows. First, it shows, in the case of PANEURO, that the R-index is useful for summarizing how countries are differently affected by the same set of RoOs because of their different export baskets to the EU. Second, it is shown that the R-index is a relatively reliable statistic in the sense that, subject to caveats, after controlling for the extent of tariff preference at the tariff-line level, it accounts for differences in utilization rates at the tariff-line level. Finally, together with utilization rates, the index can be used to estimate the total compliance costs of RoOs. The second chapter proposes a reform of preferential RoOs with the aim of making them more transparent and less discriminatory. Such a reform would make preferential blocs more "cross-compatible" and would therefore facilitate cumulation. It would also contribute to moving regionalism toward more openness and hence to making it more compatible with the multilateral trading system. It focuses on NAFTA, one of the most restrictive FTAs (see Estevadeordal and Suominen 2006), and proposes a way forward that is close in spirit to what the EU Commission is considering for the PANEURO system. In a nutshell, the idea is to replace the current array of RoOs by a single instrument: Maximum Foreign Content (MFC). An MFC is a conceptually clear and transparent instrument, like a tariff. Therefore, changing all instruments into an MFC would bring improved transparency, much like the "tariffication" of NTBs.
The methodology for this exercise is as follows. In step 1, I estimate the relationship between utilization rates, tariff preferences and RoOs. In step 2, I retrieve the estimates and invert the relationship to get a simulated MFC that gives, line by line, the same utilization rate as the old array of RoOs. In step 3, I calculate the trade-weighted average of the simulated MFC across all lines to get an overall equivalent of the current system and explore the possibility of setting this unique instrument at a uniform rate across lines. This would have two advantages. First, like a uniform tariff, a uniform MFC would make it difficult for lobbies to manipulate the instrument at the margin. This argument is standard in the political-economy literature and has been used time and again in support of reductions in the variance of tariffs (together with standard welfare considerations). Second, uniformity across lines is the only way to eliminate the indirect source of discrimination alluded to earlier. Only if two countries face uniform RoOs and tariff preferences will they face uniform incentives irrespective of their initial export structure. The result of this exercise is striking: the average simulated MFC is 25% of the good's value, a very low (i.e. restrictive) level, confirming Estevadeordal and Suominen's critical assessment of NAFTA's RoOs. Adopting a uniform MFC would imply a relaxation from the benchmark level for sectors like chemicals or textiles & apparel, and a stiffening for wood products, paper and base metals. Overall, however, the changes are not drastic, suggesting perhaps only moderate resistance to change from special interests. The third chapter of the thesis considers whether the Europe Agreements of the EU, with their current sets of RoOs, could be the potential model for future EU-centered PTAs. First, I have studied and coded, at the six-digit level of the Harmonised System (HS), both the old RoOs (used before 1997) and the "Single List" RoOs (used since 1997). Second, using a constant elasticity of transformation function in which CEEC exporters smoothly mix sales between the EU and the rest of the world by comparing producer prices on each market, I have estimated the trade effects of the EU RoOs. The estimates suggest that much of the market access conferred by the EAs, outside sensitive sectors, was undone by the cost-raising effects of RoOs. The chapter also contains an analysis of the evolution of the CEECs' trade with the EU from post-communism to accession. Part II The last chapter of the thesis is concerned with anti-dumping, another trade-policy instrument having the effect of reducing market access. In 1995, the Uruguay Round introduced into the Anti-Dumping Agreement (ADA) a mandatory "sunset review" clause (Article 11.3 ADA) under which anti-dumping measures should be reviewed no later than five years from their imposition and terminated unless there was a serious risk of resumption of injurious dumping. The last chapter, written with Pr. Olivier Cadot and Pr. Jaime de Melo, uses a new database on Anti-Dumping (AD) measures worldwide to assess whether the sunset-review agreement had any effect. The question we address is whether the WTO Agreement succeeded in imposing the discipline of a five-year cycle on AD measures and, ultimately, in curbing their length. Two methods are used: count-data analysis and survival analysis.
First, using Poisson and Negative Binomial regressions, the count of AD measure revocations is regressed on (inter alia) the count of "initiations" lagged five years. The analysis yields a coefficient on measures' initiations lagged five years that is larger and more precisely estimated after the agreement than before, suggesting some effect. However, the coefficient estimate is nowhere near the value that would give a one-for-one relationship between initiations and revocations after five years. We also find that (i) if the agreement affected EU AD practices, the effect went the wrong way, the five-year cycle being quantitatively weaker after the agreement than before; and (ii) the agreement had no visible effect on the United States except for a one-time peak in 2000, suggesting a mopping-up of old cases. Second, the survival analysis of AD measures around the world suggests a shortening of their expected lifetime after the agreement, and this shortening effect (a downward shift in the survival function post-agreement) was larger and more significant for measures targeted at WTO members than for those targeted at non-members (for which WTO disciplines do not bind), suggesting that compliance was de jure. A difference-in-differences Cox regression confirms this diagnosis: controlling for the countries imposing the measures, for the investigated countries, and for the product sector, we find a larger increase in the hazard rate of AD measures covered by the Agreement than for other measures.
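A hedged sketch of the count-data step described above (yearly revocation counts regressed on initiations lagged five years); the data are simulated and the specification is deliberately simplified relative to the chapter's:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated yearly series for one reporting country: AD initiations and, five years
# later, revocations that partially track them (illustrative data, not the thesis's).
years = np.arange(1980, 2005)
initiations = rng.poisson(10, size=years.size)
lagged_init = np.roll(initiations, 5)            # initiations five years earlier
revocations = rng.poisson(0.5 * lagged_init + 2)

# Drop the first five years, for which no five-year lag exists.
y = revocations[5:]
X = sm.add_constant(lagged_init[5:])

poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(poisson_fit.summary())
# A coefficient close to the value implying a one-for-one initiation-to-revocation
# mapping after five years would indicate a binding five-year sunset cycle; the
# chapter finds the post-agreement coefficient well short of that benchmark.
```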
Abstract:
Highway noise is one of the most pressing of the surface characteristics issues facing the concrete paving industry. This is particularly true in urban areas, where not only is there a higher population density near major thoroughfares, but also a greater volume of commuter traffic (Sandberg and Ejsmont 2002; van Keulen 2004). To help address this issue, the National Concrete Pavement Technology Center (CP Tech Center) at Iowa State University (ISU), the Federal Highway Administration (FHWA), the American Concrete Pavement Association (ACPA), and other organizations have partnered to conduct a multi-part, seven-year Concrete Pavement Surface Characteristics Project. This document contains the results of Part 1, Task 2, of the ISU-FHWA project, addressing the noise issue by evaluating conventional and innovative concrete pavement noise reduction methods. The first objective of this task was to determine what, if any, concrete surface textures currently constructed in the United States or Europe were considered quiet, had long-term friction characteristics, could be consistently built, and were cost-effective. Any specifications for such concrete textures would be included in this report. The second objective was to determine whether any promising new concrete pavement surfaces to control tire-pavement noise and friction were in the development stage and, if so, what further research was necessary. The final objective was to identify the measurement techniques used in the evaluation.
Abstract:
Integrative review (IR) has an international reputation in nursing research and evidence-based practice. This IR aimed to identify and analyze the concepts and methods recommended for undertaking IR in nursing. Nine information resources, including electronic databases and grey literature, were searched. Seventeen studies were included. The results indicate that primary studies were mostly from the USA; that a review may have several research questions or hypotheses and include primary studies from different theoretical and methodological approaches; and that this type of review can go beyond the analysis and synthesis of findings from primary studies, allowing other research dimensions to be explored and offering potential for the development of new theories and new research problems. Conclusion: IR is understood as a very complex type of review, and it is expected to be conducted using standardized and systematic methods to ensure the rigor required of scientific research and thus the legitimacy of the evidence established.