952 results for Non destructive testing


Relevance: 30.00%

Abstract:

Recursive Learning Control (RLC) has the potential to significantly reduce the tracking error in many repetitive-trajectory applications. This paper presents an application of RLC to a soil-testing load frame, where non-adaptive techniques struggle with the highly nonlinear behaviour of soil. The main purpose of the controller is to apply a sinusoidal force reference trajectory to a soil sample with a high degree of accuracy and repeatability. The controller combines a feedforward control structure, a recursive least squares (RLS) adaptation algorithm, and RLC to compensate for periodic errors. Tracking error is reduced and stability is maintained across various soil sample responses.
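The abstract does not spell out the adaptation law; as a minimal sketch (assuming a standard textbook RLS update with a forgetting factor — all names here are hypothetical, not the authors' code), one step of the parameter adaptation behind such a controller might look like:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One textbook recursive least squares step (illustrative sketch).

    theta : current feedforward parameter estimate (n,)
    P     : covariance matrix of the estimate (n, n)
    phi   : regressor vector, e.g. reference force and its lags (n,)
    y     : measured force on the soil sample
    lam   : forgetting factor; values < 1 discount old data
    """
    e = y - phi @ theta                   # prediction error
    k = P @ phi / (lam + phi @ P @ phi)   # gain vector
    theta = theta + k * e                 # parameter update
    P = (P - np.outer(k, phi @ P)) / lam  # covariance update
    return theta, P
```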

Relevance: 30.00%

Abstract:

We consider the general response theory recently proposed by Ruelle for describing the impact of small perturbations on the non-equilibrium steady states resulting from Axiom A dynamical systems. We show that the causality of the response functions entails the possibility of writing a set of Kramers-Kronig (K-K) relations for the corresponding susceptibilities at all orders of nonlinearity. Nonetheless, only a special class of directly observable susceptibilities obeys K-K relations. Specific results are provided for the case of arbitrary-order harmonic response, which allows for a very comprehensive K-K analysis and for the establishment of sum rules connecting the asymptotic behavior of the harmonic generation susceptibility to the short-time response of the perturbed system. These results place previous findings for optical systems and simple mechanical models in a more general theoretical framework, and shed light on the very general impact of considering the principle of causality for testing self-consistency: the described dispersion relations constitute unavoidable benchmarks that any experimental and model-generated dataset must obey. The theory exposed in the present paper is dual to the time-dependent theory of perturbations to equilibrium states and to non-equilibrium steady states, and has in principle a similar range of applicability and similar limitations. In order to connect the equilibrium and the non-equilibrium steady state cases, we show how to rewrite the classical response theory by Kubo so that response functions formally identical to those proposed by Ruelle, apart from the measure involved in the phase-space integration, are obtained. These results, taking into account the chaotic hypothesis by Gallavotti and Cohen, might be relevant in several fields, including climate research. In particular, whereas the fluctuation-dissipation theorem does not work for non-equilibrium systems, because of the non-equivalence between internal and external fluctuations, K-K relations might be robust tools for the definition of a self-consistent theory of climate change.
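For reference, at first order the Kramers-Kronig relations mentioned here take the standard form (χ the susceptibility, P the Cauchy principal value); the paper's contribution is to extend relations of this type to susceptibilities of arbitrary nonlinear order:

\[
\mathrm{Re}\,\chi(\omega) = \frac{1}{\pi}\,\mathrm{P}\!\int_{-\infty}^{+\infty}\frac{\mathrm{Im}\,\chi(\omega')}{\omega'-\omega}\,d\omega',
\qquad
\mathrm{Im}\,\chi(\omega) = -\frac{1}{\pi}\,\mathrm{P}\!\int_{-\infty}^{+\infty}\frac{\mathrm{Re}\,\chi(\omega')}{\omega'-\omega}\,d\omega' .
\]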

Relevance: 30.00%

Abstract:

Within generative L2 acquisition research there is a longstanding debate as to what underlies observable differences in L1/L2 knowledge/performance. On the one hand, Full Accessibility approaches maintain that target L2 syntactic representations (new functional categories and features) are acquirable (e.g., Schwartz & Sprouse, 1996). Conversely, Partial Accessibility approaches claim that L2 variability and/or optionality, even at advanced levels, obtains as a result of inevitable deficits in L2 narrow syntax and is conditioned upon a maturational failure in adulthood to acquire (some) new functional features (e.g., Beck, 1998; Hawkins & Chan, 1997; Hawkins & Hattori, 2006; Tsimpli & Dimitrakopoulou, 2007). The present study tests the predictions of these two sets of approaches with advanced English learners of L2 Brazilian Portuguese (n = 21) in the domain of inflected infinitives. These advanced L2 learners reliably differentiate syntactically among finite verbs, uninflected infinitives, and inflected infinitives, which, as argued, only supports Full Accessibility approaches. Moreover, we discuss how testing the domain of inflected infinitives is especially interesting in light of recent proposals that Brazilian Portuguese colloquial dialects no longer actively instantiate them (Lightfoot, 1991; Pires, 2002, 2006; Pires & Rothman, 2009; Rothman, 2007).

Relevance: 30.00%

Abstract:

The orographic gravity wave drag produced in flow over an axisymmetric mountain when both vertical wind shear and non-hydrostatic effects are important was calculated using a semi-analytical two-layer linear model, including unidirectional or directional constant wind shear in a layer near the surface, above which the wind is constant. The drag behaviour is determined by partial wave reflection at the shear discontinuity, wave absorption at critical levels (both of which exist in hydrostatic flow), and total wave reflection at levels where the waves become evanescent (an intrinsically non-hydrostatic effect), which produces resonant trapped lee wave modes. As a result of constructive or destructive wave interference, the drag oscillates with the thickness of the constant-shear layer and the Richardson number within it (Ri), generally decreasing at low Ri and when the flow is strongly non-hydrostatic. Critical level absorption, which increases with the angle spanned by the wind velocity in the constant-shear layer, shields the surface from reflected waves, keeping the drag closer to its hydrostatic limit. While, for the parameter range considered here, the drag seldom exceeds this limit, a substantial drag fraction may be produced by trapped lee waves, particularly when the flow is strongly non-hydrostatic, the lower layer is thick and Ri is relatively high. In directionally sheared flows with Ri = O(1), the drag may be misaligned with the surface wind in a direction opposite to the shear, a behaviour which is totally due to non-trapped waves. The trapped lee wave drag, whose reaction force on the atmosphere is felt at low levels, may therefore have a distinctly different direction from the drag associated with vertically propagating waves, which acts on the atmosphere at higher levels.
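The Richardson number governing the drag oscillation is the usual shear-flow definition (N the buoyancy frequency, U the background wind in the constant-shear layer; notation ours):

\[
\mathrm{Ri} = \frac{N^2}{\left|\,\partial \mathbf{U}/\partial z\,\right|^{2}} .
\]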

Relevance: 30.00%

Abstract:

Among all the paradigms in economic theory, the theoretical predictions of oligopoly were the first to be examined in the laboratory. In this chapter, instead of surveying all the experiments with few sellers, we adopt a narrower definition of the term "oligopoly" and focus on the experiments that were directly inspired by the basic oligopolistic models of Cournot, Bertrand, Hotelling, Stackelberg, and some extensions. Most of the experiments we consider in this chapter have been run in the last three decades. This literature can be considered a new wave of experimental work aiming at representing basic oligopolistic markets and testing their properties. The chapter is divided into independent sections referring to different parts of oligopoly theory, covering monopoly as well as a number of extensions of the basic models, chosen with the aim of providing a representative list of the relevant experimental findings.

Relevance: 30.00%

Abstract:

The correlation between the microdilution (MD), Etest® (ET), and disk diffusion (DD) methods was determined for amphotericin B, itraconazole, and fluconazole. The minimal inhibitory concentration (MIC) of these antifungal agents was established for a total of 70 Candida spp. isolates from colonization and infection. The species distribution was: Candida albicans (n = 27), C. tropicalis (n = 17), C. glabrata (n = 16), C. parapsilosis (n = 8), and C. lusitaniae (n = 2). Non-Candida albicans Candida species showed higher MICs for the three antifungal agents when compared with C. albicans isolates. The overall concordance (based on the MIC value obtained within two dilutions) between the ET and MD methods was 83% for amphotericin B, 63% for itraconazole, and 64% for fluconazole. Considering the breakpoint, the agreement between the DD and MD methods was 71% for itraconazole and 67% for fluconazole. The DD zone diameters are highly reproducible and correlate well with the MD method, making agar-based methods a viable alternative to MD for susceptibility testing. However, data on agar-based tests for itraconazole and amphotericin B are still scarce, so further research must be carried out to extend standardization to other antifungal agents. J. Clin. Lab. Anal. 23:324–330, 2009. © 2009 Wiley-Liss, Inc.
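As an illustration of the "within two dilutions" concordance criterion (a sketch in our own notation, not the authors' procedure), agreement between two MIC readings on a two-fold dilution scale can be counted like this:

```python
import numpy as np

def concordance(mic_a, mic_b, dilutions=2):
    """Percent of isolates whose two MIC readings agree within
    `dilutions` two-fold dilution steps (illustrative sketch)."""
    steps = np.abs(np.log2(np.asarray(mic_a, float)) -
                   np.log2(np.asarray(mic_b, float)))
    return 100.0 * np.mean(steps <= dilutions)

# e.g. Etest vs. microdilution MICs (mg/L) for five hypothetical isolates
print(concordance([0.25, 0.5, 1, 8, 16], [0.5, 0.5, 8, 8, 64]))  # -> 80.0
```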

Relevance: 30.00%

Abstract:

Classical hypothesis testing focuses on testing whether treatments have differential effects on outcome. However, sometimes clinicians may be more interested in determining whether treatments are equivalent or whether one has noninferior outcomes. We review the hypotheses for these noninferiority and equivalence research questions, consider power and sample size issues, and discuss how to perform such a test for both binary and survival outcomes. The methods are illustrated on 2 recent studies in hematopoietic cell transplantation.
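To make the hypotheses concrete: for a binary (failure) outcome with true probabilities p_T under the new treatment and p_C under control, and a prespecified noninferiority margin Δ > 0, the one-sided noninferiority test is conventionally framed as (notation ours; the paper's exact parameterization may differ)

\[
H_0:\; p_T - p_C \ge \Delta \qquad \text{vs.} \qquad H_1:\; p_T - p_C < \Delta,
\]

while equivalence tightens this to the two-sided band \(H_1: |p_T - p_C| < \Delta\), typically assessed via two one-sided tests.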

Relevance: 30.00%

Abstract:

Regarding the location of a facility, the presumption in the widely used p-median model is that the customer opts for the shortest route to the nearest facility. However, this assumption is problematic in free markets, since the customer is presumed to gravitate to a facility according to both its distance and its attractiveness. The recently introduced gravity p-median model offers an extension of the p-median model that accounts for this. The model is therefore potentially interesting, although it has not yet been implemented and tested empirically. In this paper, we implement the model in an empirical problem of locating vehicle inspections, locksmiths, and retail stores of vehicle spare parts, for the purpose of investigating its superiority to the p-median model. We found, however, the gravity p-median model to be of limited use for the problem of locating facilities, as it either gives solutions similar to the p-median model or gives unstable solutions due to a non-concave objective function.
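The allocation rule behind the gravity p-median model is Huff-type; a common formulation (notation ours — the exact decay function varies across the literature) has customer i patronize facility j with probability

\[
P_{ij} \;=\; \frac{A_j\,e^{-\beta d_{ij}}}{\sum_{k\in J} A_k\,e^{-\beta d_{ik}}},
\]

where A_j is the attractiveness of facility j, d_ij the travel distance, and β a distance-decay parameter; the p facilities are then chosen to minimize expected weighted distance under this probabilistic assignment. The smooth, non-concave objective this produces is the source of the unstable solutions reported above.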

Relevance: 30.00%

Abstract:

The goal of this paper is to show the possibility of a non-monotone relation between coverage and risk, which has been considered in the literature on insurance models since the work of Rothschild and Stiglitz (1976). We present an insurance model where the insured agents have heterogeneity in risk aversion and in lenience (a prevention cost parameter). Risk aversion is described by a continuous parameter which is correlated with lenience and, for the sake of simplicity, we assume perfect correlation. In the case of positive correlation, the more risk averse agent has a higher cost of prevention, leading to a higher demand for coverage. Equivalently, the single crossing property (SCP) is valid and implies a positive correlation between coverage and risk in equilibrium. On the other hand, if the correlation between risk aversion and lenience is negative, not only may the SCP be broken, but also the monotonicity of contracts, i.e., the prediction that high (low) risk averse types choose full (partial) insurance. In both cases riskiness is monotonic in risk aversion, but in the latter case there are some coverage levels associated with two different risks (low and high), which implies that the ex-ante (with respect to the risk aversion distribution) correlation between coverage and riskiness may have any sign (even though the ex-post correlation is always positive). Moreover, using another instrument (a proxy for riskiness), we give a testable implication to disentangle single crossing and non single crossing under an ex-post zero correlation result: the monotonicity of coverage as a function of riskiness. Since, controlling for risk aversion (no asymmetric information), coverage is a monotone function of riskiness, this also gives a test for asymmetric information. Finally, we relate these theoretical results to empirical tests in the recent literature, especially the work of Dionne, Gouriéroux and Vanasse (2001). In particular, they found empirical evidence that seems to be compatible with asymmetric information and non single crossing in our framework. More generally, we build a hidden information model showing how omitted variables (asymmetric information) can bias the sign of the correlation of equilibrium variables conditioning on all observable variables. We show that this may be the case when the omitted variables have a non-monotonic relation with the observable ones. Moreover, because this non-monotonic relation is deeply related to the failure of the SCP in one-dimensional screening problems, the existing literature on asymmetric information does not capture this feature. Hence, our main result is to point out the importance of the SCP in testing predictions of hidden information models.
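For reference, the single crossing property (SCP) invoked here is the Spence-Mirrlees condition. In generic screening notation (ours, not the paper's exact primitives), with agent utility U(q, t; θ) over coverage q and premium t, SCP requires the marginal rate of substitution between coverage and premium to be monotone in the type θ,

\[
\frac{\partial}{\partial\theta}\!\left(-\,\frac{\partial U/\partial q}{\partial U/\partial t}\right) > 0 \quad \text{for all } (q,t),
\]

so that indifference curves of any two types cross at most once; the paper's point is that negative correlation between risk aversion and lenience can break exactly this condition.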

Relevance: 30.00%

Abstract:

With ever increasing demands for high-complexity consumer electronic products, market pressures demand faster product development and lower cost. SoC-based design can provide the required design flexibility and speed by allowing the use of IP cores. However, testing costs in the SoC environment can reach a substantial percentage of the total production cost. Analog testing costs may dominate the total test cost, as testing of analog circuits usually requires functional verification of the circuit and special testing procedures. For RF analog circuits commonly used in wireless applications, testing is further complicated by the high frequencies involved. In summary, reducing analog test cost is of major importance in the electronics industry today. BIST techniques for analog circuits, though potentially able to solve the analog test cost problem, have some limitations. Some techniques are circuit-dependent, requiring reconfiguration of the circuit being tested, and are generally not usable in RF circuits. In the SoC environment, as processing and memory resources are available, they could be used in the test. However, the overhead of adding additional AD and DA converters may be too costly for most systems, and analog routing of signals may not be feasible and may introduce signal distortion. In this work a simple and low-cost digitizer is used instead of an ADC in order to enable analog testing strategies to be implemented in a SoC environment. Thanks to the low analog area overhead of the converter, multiple analog test points can be observed and specific analog test strategies can be enabled. As the digitizer is always connected to the analog test point, it is not necessary to include muxes and switches that would degrade the signal path. For RF analog circuits this is especially useful, as the circuit impedance is fixed and the influence of the digitizer can be accounted for in the design phase. Thanks to the simplicity of the converter, it is able to reach higher frequencies and enables the implementation of low-cost RF test strategies. The digitizer has been applied successfully in the testing of both low-frequency and RF analog circuits. Also, as testing is based on frequency-domain characteristics, nonlinear characteristics like intermodulation products can also be evaluated. Specifically, practical results were obtained for prototyped baseband filters and a 100 MHz mixer. The application of the converter to noise figure evaluation was also addressed, and experimental results for low-frequency amplifiers using conventional opamps were obtained. The proposed method is able to enhance the testability of current mixed-signal designs, being suitable for the SoC environment used in many industrial products nowadays.
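The abstract does not detail the frequency-domain post-processing; as a rough sketch (all parameters hypothetical, not the thesis' actual procedure), once the digitizer output is in memory, tone and intermodulation levels can be read off an FFT along these lines:

```python
import numpy as np

def tone_power_db(samples, fs, f_tone, nfft=4096):
    """Relative power (dB) of the FFT bin nearest f_tone in a
    Hann-windowed record of the digitized signal (illustrative)."""
    w = np.hanning(len(samples))
    spec = np.abs(np.fft.rfft(samples * w, nfft)) ** 2
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    return 10 * np.log10(spec[np.argmin(np.abs(freqs - f_tone))])

# Two-tone test: third-order intermodulation products fall at 2*f1 - f2
# and 2*f2 - f1. This ideal sum has no IM3, so the reading is just the
# window leakage floor; a real mixer record would show distinct products.
fs, f1, f2 = 1e6, 100e3, 110e3
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
im3_rel = tone_power_db(x, fs, 2 * f1 - f2) - tone_power_db(x, fs, f1)
```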

Relevance: 30.00%

Abstract:

It is well known that cointegration between the levels of two variables (labeled Yt and yt in this paper) is a necessary condition to assess the empirical validity of a present-value model (PV and PVM, respectively, hereafter) linking them. The work on cointegration has been so prevalent that it is often overlooked that another necessary condition for the PVM to hold is that the forecast error entailed by the model be orthogonal to the past. The basis of this result is the use of rational expectations in forecasting future values of variables in the PVM. If this condition fails, the present-value equation will not be valid, since it will contain an additional term capturing the (non-zero) conditional expected value of future error terms. Our article has a few novel contributions, but two stand out. First, in testing for PVMs, we advise splitting the restrictions implied by PV relationships into orthogonality conditions (or reduced rank restrictions) before additional tests on the values of parameters. We show that PV relationships entail a weak-form common feature relationship, as in Hecq, Palm, and Urbain (2006) and in Athanasopoulos, Guillén, Issler and Vahid (2011), and also a polynomial serial-correlation common feature relationship, as in Cubadda and Hecq (2001); these represent restrictions on dynamic models which allow several tests for the existence of PV relationships to be used. Because these relationships occur mostly with financial data, we propose tests based on generalized method of moments (GMM) estimates, where it is straightforward to propose robust tests in the presence of heteroskedasticity. We also propose a robust Wald test developed to investigate the presence of reduced rank models. Their performance is evaluated in a Monte Carlo exercise. Second, in the context of asset pricing, we propose applying a permanent-transitory (PT) decomposition based on Beveridge and Nelson (1981), which focuses on extracting the long-run component of asset prices, a key concept in modern financial theory as discussed in Alvarez and Jermann (2005), Hansen and Scheinkman (2009), and Nieuwerburgh, Lustig, Verdelhan (2010). Here again we can exploit results developed in the common cycle literature to easily extract permanent and transitory components under both long- and short-run restrictions. The techniques discussed herein are applied to long-span annual data on long- and short-term interest rates and on prices and dividends for the U.S. economy. In both applications we do not reject the existence of a common cyclical feature vector linking these two series. Extracting the long-run component shows the usefulness of our approach and highlights the presence of asset-pricing bubbles.
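For concreteness, a standard linear present-value relation of the kind tested here is (notation ours, up to constants)

\[
Y_t = \theta(1-\delta)\sum_{i=0}^{\infty}\delta^{\,i}\,\mathbb{E}_t\,y_{t+i},
\]

where δ is a discount factor. Cointegration of Y_t and y_t is the first necessary condition; rational expectations additionally require the model's forecast error u_{t+1} to satisfy \(\mathbb{E}_t[u_{t+1}] = 0\) — the orthogonality condition the authors propose testing before any restrictions on parameter values.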

Relevance: 30.00%

Abstract:

This thesis comprises three essays on empirical tests of Phillips curves, IS curves, and the interaction between fiscal and monetary policy. The first essay ("Phillips Curves: A Comprehensive Test") tests Phillips curves using an autoregressive distributed lag (ADL) specification that encompasses the accelerationist Phillips curve (APC), the New Keynesian Phillips curve (NKPC), the hybrid Phillips curve (HPC), and the sticky-information Phillips curve (SIPC). We use data for the United States (1985Q1–2007Q4) and Brazil (1996Q1–2012Q2), with the output gap and, alternatively, real marginal cost as the measure of inflationary pressure. The empirical evidence rejects the restrictions implied by the NKPC, the HPC, and the SIPC, but does not reject those of the APC. The second essay ("IS Curves: A Comprehensive Test") tests IS curves using an ADL specification that encompasses the traditional Keynesian IS curve (KISC), the New Keynesian IS curve (NKISC), and the hybrid IS curve (HISC). We use data for the United States (1985Q1–2007Q4) and Brazil (1996Q1–2012Q2). The empirical evidence rejects the restrictions implied by the NKISC and the HISC, but does not reject those of the KISC. The third essay ("The Effects of Fiscal Policy and its Interactions with Monetary Policy") analyzes the effects of fiscal policy shocks on the dynamics of the economy and the interaction between fiscal and monetary policy using SVAR models. We test the Fiscal Theory of the Price Level for Brazil by analyzing the response of public-sector liabilities to primary-surplus shocks. Under hybrid identification, we find that it is not possible to distinguish empirically between the Ricardian (monetary dominance) and non-Ricardian (fiscal dominance) regimes. However, using sign-restriction identification, there is evidence that the government followed a Ricardian (monetary dominance) regime from January 2000 to June 2008.
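As a reference point for the encompassing ADL test, the hybrid specification nests the others; one standard form (notation ours) is

\[
\pi_t = \gamma_b\,\pi_{t-1} + \gamma_f\,\mathbb{E}_t\,\pi_{t+1} + \lambda x_t + \varepsilon_t,
\]

where x_t is the output gap or real marginal cost; the pure NKPC corresponds to γ_b = 0 and the accelerationist curve to γ_f = 0 (with a suitable lag structure).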


Relevance: 30.00%

Abstract:

This paper deals with the estimation and testing of conditional duration models by looking at the density and baseline hazard rate functions. More precisely, we focus on the distance between the parametric density (or hazard rate) function implied by the duration process and its non-parametric estimate. Asymptotic justification is derived using the functional delta method for fixed and gamma kernels, whereas finite-sample properties are investigated through Monte Carlo simulations. Finally, we show the practical usefulness of such testing procedures by carrying out an empirical assessment of whether autoregressive conditional duration models are appropriate tools for modelling price durations of stocks traded at the New York Stock Exchange.
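For reference, the baseline autoregressive conditional duration (ACD) model of Engle and Russell, one member of the class being tested, specifies durations x_i as (notation ours)

\[
x_i = \psi_i\,\varepsilon_i, \qquad \psi_i = \omega + \alpha\,x_{i-1} + \beta\,\psi_{i-1},
\]

with ε_i i.i.d., positive, and of unit mean; the distributional assumption on ε_i pins down the parametric density and hazard that the proposed tests compare against kernel estimates.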

Relevance: 30.00%

Abstract:

The 2024 and 7050 aluminium alloys used as aircraft components were subjected to laboratory corrosion tests in sodium chloride solution. Light-microscope examinations made it possible to characterise morphological aspects of the localised corrosion. Image analysis was used to determine both the depth and width of pits over the corroded surfaces. It was concluded that annealing could reduce pit growth in both alloys by means of grain recrystallization or recovery. The 2024 alloy also tends to present an exfoliation mechanism, mainly along non-recrystallized and recrystallized grain boundaries, increasing the width and sustaining the depth of pit cavities during exposure to a saline atmosphere. SEM and EDS analyses reveal the morphology and elemental distribution of the corrosion products formed after the immersion corrosion test. Some of these products were identified by X-ray diffraction analysis: for 2024, Al(OH)3, Mg(OH)2 and Cu2O were found; Al(OH)3 and Cu2O were also found in 7050 samples.