997 results for Parameter testing
Abstract:
There has been great interest in deciding whether a combinatorial structure satisfies some property, or in estimating the value of some numerical function associated with this combinatorial structure, by considering only a randomly chosen substructure of sufficiently large, but constant, size. These problems are called property testing and parameter testing, where a property or parameter is said to be testable if it can be estimated accurately in this way. The algorithmic appeal is evident, as, conditional on sampling, this leads to reliable constant-time randomized estimators. Our paper addresses property testing and parameter testing for permutations from a subpermutation perspective; more precisely, we investigate permutation properties and parameters that can be well approximated based on a randomly chosen subpermutation of much smaller size. In this context, we use a theory of convergence of permutation sequences developed by the present authors [C. Hoppen, Y. Kohayakawa, C.G. Moreira, R.M. Sampaio, Limits of permutation sequences through permutation regularity, Manuscript, 2010, 34pp.] to characterize testable permutation parameters, along the lines of the work of Borgs et al. [C. Borgs, J. Chayes, L. Lovász, V.T. Sós, B. Szegedy, K. Vesztergombi, Graph limits and parameter testing, in: STOC '06: Proceedings of the 38th Annual ACM Symposium on Theory of Computing, ACM, New York, 2006, pp. 261-270] in the case of graphs. Moreover, we obtain a permutation result in the direction of a famous result of Alon and Shapira [N. Alon, A. Shapira, A characterization of the (natural) graph properties testable with one-sided error, SIAM J. Comput. 37 (6) (2008) 1703-1727] stating that every hereditary graph property is testable.
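To make the sampling paradigm concrete, here is a minimal Python sketch of parameter testing for permutations, using inversion density as an illustrative testable parameter; the function names are ours, not the paper's.

```python
import random

def subpermutation(perm, k):
    """Pattern induced by k uniformly chosen positions of perm."""
    idx = sorted(random.sample(range(len(perm)), k))
    vals = [perm[i] for i in idx]
    rank = {v: r for r, v in enumerate(sorted(vals))}
    return [rank[v] for v in vals]

def inversion_density(p):
    """Fraction of pairs (i, j), i < j, with p[i] > p[j]."""
    n = len(p)
    inv = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
    return inv / (n * (n - 1) / 2)

# Inversion density is testable: the estimate from a random
# subpermutation of constant size k concentrates around the true
# value, independently of the size of the host permutation.
perm = random.sample(range(10_000), 10_000)   # a large random permutation
estimate = inversion_density(subpermutation(perm, 50))
print(f"estimated inversion density: {estimate:.3f}")  # ~0.5 here
```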
Abstract:
BACKGROUND: Peripheral arterial disease (PAD) is a progressive vascular disease associated with a high risk of cardiovascular morbidity and death. Antithrombotic prevention is usually applied by prescribing the antiplatelet agent aspirin. However, in patients with PAD, aspirin fails to provide protection against myocardial infarction and death, reducing only the risk of ischemic stroke. Platelets may play a role in disease development, but this has not been tested by proper mechanistic studies. In the present study, we performed a systematic evaluation of platelet reactivity in whole blood from patients with PAD using two high-throughput assays, i.e., multi-agonist testing of platelet activation by flow cytometry and multi-parameter testing of thrombus formation on spotted microarrays. METHODS: Blood was obtained from 40 patients (38 on aspirin) with PAD, the majority in class IIa/IIb, and from 40 age-matched control subjects. Whole-blood flow cytometry and multiparameter thrombus formation under high-shear flow conditions were determined using recently developed and validated assays. RESULTS: Flow cytometry of whole-blood samples from aspirin-treated patients demonstrated unchanged high platelet responsiveness towards ADP, slightly elevated responsiveness after glycoprotein VI stimulation, and decreased responsiveness after PAR1 thrombin receptor stimulation, compared to the control subjects. Most parameters of thrombus formation under flow were similarly high for the patient and control groups. However, in vitro aspirin treatment caused a marked reduction in thrombus formation, especially on collagen surfaces. When compared per subject, markers of ADP- and collagen-induced integrin activation (flow cytometry) correlated strongly with parameters of collagen-dependent thrombus formation under flow, indicative of a common, subject-dependent regulation of both processes. CONCLUSION: Despite the use of aspirin, most platelet activation properties were in the normal range in whole blood from class II PAD patients. These data underline the need for more effective antithrombotic pharmacoprotection in PAD.
Abstract:
Phyllotaxis patterns in plants, or the arrangement of leaves and flowers radially around the shoot, have fascinated both biologists and mathematicians for centuries. The current model of this process involves the lateral transport of the hormone auxin through the first layer of cells in the shoot apical meristem via the auxin efflux carrier protein PIN1. Locations around the meristem with high auxin concentration are sites of organ formation and differentiation. Many of the molecular players in this process are well known and characterized, and computer models composed of all these components are able to reproduce many of the observed phyllotaxis patterns. To understand which parts of this model have a large effect on the phenotype, I automated parameter testing and tried many different parameter combinations. The results showed that cell size and meristem size should have the largest effect on phyllotaxis. This led to three questions: (1) How is cell geometry regulated? (2) Does cell size affect auxin distribution? (3) Does meristem size affect phyllotaxis? To answer the first question, I tracked cell divisions in live meristems and quantified the geometry of the cells and the division planes using advanced image processing techniques. The results show that cell shape is maintained by minimizing the length of the new wall and by minimizing the difference in area of the daughter cells. To answer the second question, I observed auxin patterning in the meristem, shoot, leaves, and roots of Arabidopsis mutants with larger and smaller cell sizes. In the meristem and shoot, cell size plays an important role in determining the distribution of auxin; observations of auxin in the root and leaves are less definitive. To answer the third question, I measured meristem sizes and phyllotaxis patterns in mutants with altered meristem sizes. These results show that there is no correlation between meristem size and average divergence angle, although in an extreme case, making the meristem very small does lead to a switch in the observed phyllotaxis, in accordance with the model.
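As an illustration of the automated parameter testing described above, the following Python sketch sweeps a small grid of parameter combinations through a stand-in model; `simulate_phyllotaxis` and its parameters are hypothetical placeholders, not the actual auxin-transport model.

```python
from itertools import product

def simulate_phyllotaxis(cell_size, meristem_size, pin1_rate):
    """Hypothetical stand-in for the auxin-transport model; returns a
    mock divergence angle so that the sweep below is runnable."""
    return 137.5 * meristem_size / (meristem_size + cell_size * pin1_rate)

# Enumerate all parameter combinations, as in an automated sweep.
grid = {
    "cell_size": [1.0, 2.0, 4.0],
    "meristem_size": [20.0, 40.0, 80.0],
    "pin1_rate": [0.5, 1.0],
}
results = [
    (dict(zip(grid, combo)), simulate_phyllotaxis(*combo))
    for combo in product(*grid.values())
]
print(f"{len(results)} parameter combinations tested")

# Crude sensitivity measure: range of phenotypes when one parameter
# varies and the others are held at their first (baseline) value.
baseline = {k: v[0] for k, v in grid.items()}
for name, values in grid.items():
    angles = [simulate_phyllotaxis(**{**baseline, name: v}) for v in values]
    print(f"{name}: phenotype range {max(angles) - min(angles):.1f} degrees")
```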
Abstract:
INTRODUCTION: A growing body of evidence shows the prognostic value of the oxygen uptake efficiency slope (OUES), a cardiopulmonary exercise test (CPET) parameter derived from the logarithmic relationship between O2 consumption (VO2) and minute ventilation (VE), in patients with chronic heart failure (CHF). OBJECTIVE: To evaluate the prognostic value of a new CPET parameter - peak oxygen uptake efficiency (POUE) - and to compare it with OUES in patients with CHF. METHODS: We prospectively studied 206 consecutive patients with stable CHF due to dilated cardiomyopathy - 153 male, aged 53.3±13.0 years, 35.4% of ischemic etiology, left ventricular ejection fraction 27.7±8.0%, 81.1% in sinus rhythm, 97.1% receiving ACE-Is or ARBs, 78.2% beta-blockers and 60.2% spironolactone - who performed a first maximal symptom-limited treadmill CPET using the modified Bruce protocol. In 33% of patients, an implantable cardioverter-defibrillator (ICD) or cardiac resynchronization therapy device (CRT-D) was implanted during follow-up. Peak VO2, percentage of predicted peak VO2, VE/VCO2 slope, OUES and POUE were analyzed. OUES was calculated from the regression VO2 (l/min) = OUES × log10(VE) + b. POUE was calculated as peak VO2 (l/min) / log10(peak VE (l/min)). Correlation coefficients between the studied parameters were obtained. The prognostic value of each variable, adjusted for age, was evaluated through Cox proportional hazards models, and R2 percent (R2%) and the V index were used as measures of the predictive accuracy of each of these variables. Receiver operating characteristic (ROC) curves from logistic regression models were used to determine the cut-offs for OUES and POUE. RESULTS: Peak VO2: 20.5±5.9; percentage of predicted peak VO2: 68.6±18.2; VE/VCO2 slope: 30.6±8.3; OUES: 1.85±0.61; POUE: 0.88±0.27. During a mean follow-up of 33.1±14.8 months, 45 (21.8%) patients died, 10 (4.9%) underwent urgent heart transplantation, and in three patients (1.5%) a left ventricular assist device was implanted. All variables proved to be independent predictors of this combined event; however, the VE/VCO2 slope was most strongly associated with events (HR 11.14). In this population, POUE was associated with a higher risk of events than OUES (HR 9.61 vs. 7.01) and was also a better predictor of events (R2: 28.91 vs. 22.37). CONCLUSION: POUE was more strongly associated with death, urgent heart transplantation and implantation of a left ventricular assist device, and proved to be a better predictor of events than OUES. These results suggest that this new parameter can increase the prognostic value of CPET in patients with CHF.
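The two indices are simple to compute from the formulas given above; the Python sketch below uses made-up CPET data purely for illustration.

```python
import numpy as np

# Illustrative (made-up) CPET data: minute ventilation VE (l/min) and
# oxygen uptake VO2 (l/min) sampled over an incremental exercise test.
ve  = np.array([15.0, 25.0, 40.0, 60.0, 85.0, 110.0])
vo2 = np.array([0.5, 0.9, 1.3, 1.7, 2.0, 2.2])

# OUES: slope of the least-squares fit VO2 = OUES * log10(VE) + b.
oues, b = np.polyfit(np.log10(ve), vo2, 1)

# POUE: peak VO2 divided by log10 of peak VE.
poue = vo2[-1] / np.log10(ve[-1])

print(f"OUES = {oues:.2f}, POUE = {poue:.2f}")
```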
Abstract:
The primary objective of this research study is to determine which form of testing, the PEST algorithm or an operator-controlled condition, is more accurate and time-efficient for administration of the gaze stabilization test.
Abstract:
Federal Highway Administration, Office of Research and Development, Washington, D.C.
Abstract:
In this Letter, we propose a new and model-independent cosmological test for the distance-duality (DD) relation, eta = D_L(z)(1 + z)^(-2)/D_A(z) = 1, where D_L and D_A are, respectively, the luminosity and angular diameter distances. For D_L we consider two sub-samples of Type Ia supernovae (SNe Ia) taken from Constitution data, whereas D_A distances are provided by two samples of galaxy clusters compiled by De Filippis et al. and Bonamente et al. by combining the Sunyaev-Zeldovich effect and X-ray surface brightness. The SNe Ia redshifts of each sub-sample were carefully chosen to coincide with those of the associated galaxy cluster sample (Delta z < 0.005), thereby allowing a direct test of the DD relation. Since for very low redshifts D_A(z) is approximately D_L(z), we have tested the DD relation by assuming that eta is a function of the redshift, parameterized by two different expressions: eta(z) = 1 + eta_0 z and eta(z) = 1 + eta_0 z/(1 + z), where eta_0 is a constant parameter quantifying a possible departure from the strict validity of the reciprocity relation (eta_0 = 0). In the best scenario (linear parameterization), we obtain eta_0 = -0.28 ± 0.44 (2 sigma, statistical + systematic errors) for the De Filippis et al. sample (elliptical geometry), a result only marginally compatible with the DD relation. However, for the Bonamente et al. sample (spherical geometry) the constraint is eta_0 = -0.42 ± 0.34 (3 sigma, statistical + systematic errors), which is clearly incompatible with the distance-duality relation.
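As an illustration of the test, the following Python sketch computes eta(z) from paired distances and fits eta_0 for the linear parameterization; the distance values are made up for the example.

```python
import numpy as np

# Illustrative, made-up paired distances at matched redshifts:
# D_L from SNe Ia and D_A from galaxy clusters (arbitrary units).
z   = np.array([0.10, 0.20, 0.30, 0.45])
d_l = np.array([450.0, 980.0, 1550.0, 2500.0])
d_a = np.array([370.0, 670.0, 900.0, 1170.0])

# Distance-duality ratio eta(z) = D_L(z) (1+z)^(-2) / D_A(z);
# the reciprocity relation holds when eta = 1 at all z.
eta = d_l * (1 + z) ** -2 / d_a

# Linear parameterization eta(z) = 1 + eta_0 z: least-squares fit
# of (eta - 1) against z through the origin.
eta0 = np.sum(z * (eta - 1)) / np.sum(z ** 2)
print(f"best-fit eta_0 = {eta0:+.3f} (eta_0 = 0 means strict duality)")
```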
Abstract:
Modern Integrated Circuit (IC) design is characterized by a strong trend of Intellectual Property (IP) core integration into complex system-on-chip (SoC) architectures. These cores require thorough verification of their functionality to avoid erroneous behavior in the final device. Formal verification methods are capable of detecting any design bug; however, due to state explosion, their use remains limited to small circuits. Alternatively, simulation-based verification can explore hardware descriptions of any size, although the corresponding stimulus generation, as well as the functional coverage definition, must be carefully planned to guarantee its efficacy. In general, static input space optimization methodologies have shown better efficiency and results than, for instance, Coverage Directed Verification (CDV) techniques, although they act on different facets of the monitored system and are not mutually exclusive. This work presents a constrained-random simulation-based functional verification methodology in which, on the basis of the Parameter Domains (PD) formalism, irrelevant and invalid test case scenarios are removed from the input space. For this purpose, a tool to automatically generate PD-based stimuli sources was developed. Additionally, we developed a second tool to generate functional coverage models that fit exactly the PD-based input space. Both enhancements, to the input stimuli and to the coverage model, resulted in a notable increase in testbench efficiency compared to testbenches with traditional stimulation and coverage scenarios: a 22% simulation time reduction when generating stimuli with our PD-based stimuli sources (still with a conventional coverage model), and a 56% simulation time reduction when combining our stimuli sources with their corresponding, automatically generated, coverage models.
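The following Python sketch illustrates the general idea of restricting constrained-random stimulus generation and the coverage model to the same valid input space; the parameter domains and constraint here are hypothetical and unrelated to the tools described in the paper.

```python
import random
from itertools import product

# Hypothetical parameter domains: each input parameter is restricted
# to its relevant values, so irrelevant scenarios never enter the space.
domains = {
    "burst_len": [1, 4, 8, 16],
    "addr_mode": ["aligned", "unaligned"],
    "resp_type": ["okay", "error"],
}

# A constraint removing an invalid cross (made-up example).
def valid(stim):
    return not (stim["addr_mode"] == "unaligned"
                and stim["resp_type"] == "error")

# Coverage model fitted exactly to the constrained input space.
coverage = {c for c in product(*domains.values())
            if valid(dict(zip(domains, c)))}
hit = set()

# Constrained-random stimulus generation until full coverage.
while hit != coverage:
    stim = {k: random.choice(v) for k, v in domains.items()}
    if valid(stim):
        hit.add(tuple(stim.values()))
print(f"covered {len(hit)} of {len(coverage)} valid scenarios")
```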
Abstract:
The main goals of the present work are the evaluation of the influence of several variables and test parameters on the melt flow index (MFI) of thermoplastics, and the determination of the uncertainty associated with the measurements. To evaluate the influence of test parameters on the measurement of MFI, the design of experiments (DOE) approach has been used. The uncertainty has been calculated using the "bottom-up" approach given in the "Guide to the Expression of Uncertainty in Measurement" (GUM). Since an analytical expression relating the output response (MFI) to the input parameters does not exist, it has been necessary to build mathematical models by fitting the experimental observations of the response variable with respect to each input parameter. Subsequently, the uncertainty associated with the measurement of MFI has been determined by applying the law of propagation of uncertainty to the uncertainty values of the input parameters. Finally, the activation energy (Ea) of the melt flow at around 200 °C and its respective uncertainty have also been determined.
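A minimal numerical sketch of this "bottom-up" uncertainty evaluation, assuming a hypothetical fitted model `mfi`: sensitivity coefficients are obtained by finite differences and combined via the law of propagation of uncertainty for uncorrelated inputs.

```python
import math

def mfi(temperature, load, die_diameter):
    """Hypothetical fitted model MFI = f(inputs), standing in for the
    regression models built from the DOE observations."""
    return 0.05 * temperature + 2.0 * load - 30.0 * die_diameter

# Input estimates and their standard uncertainties (illustrative).
x = {"temperature": 200.0, "load": 2.16, "die_diameter": 2.095}
u = {"temperature": 0.5, "load": 0.01, "die_diameter": 0.005}

# Law of propagation of uncertainty (uncorrelated inputs):
# u_c^2 = sum_i (df/dx_i)^2 * u(x_i)^2, derivatives by central differences.
uc2 = 0.0
for name in x:
    h = 1e-6 * max(abs(x[name]), 1.0)
    hi = {**x, name: x[name] + h}
    lo = {**x, name: x[name] - h}
    ci = (mfi(**hi) - mfi(**lo)) / (2 * h)  # sensitivity coefficient
    uc2 += (ci * u[name]) ** 2

print(f"MFI = {mfi(**x):.2f} g/10 min, u_c = {math.sqrt(uc2):.3f}")
```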
Abstract:
This paper focuses on a PV system linked to the electric grid by power electronic converters, on the identification of the five-parameter model for photovoltaic systems, and on the assessment of the shading effect. Normally, the technical information for photovoltaic panels is too restricted to identify the five parameters. An undemanding heuristic method is used to find the five parameters for photovoltaic systems, requiring only the open circuit, maximum power, and short circuit data. The I-V and P-V curves for monocrystalline, polycrystalline and amorphous photovoltaic systems are computed from the identified parameters and validated by comparison with experimental curves. Also, the I-V and P-V curves under the effect of partial shading are obtained from those parameters. The modeling of the converters emulates the association of a DC-DC boost converter with a two-level power inverter in order to follow the performance of a commercial inverter under test employed on an experimental system.
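For reference, the five parameters enter the standard single-diode PV model; the Python sketch below evaluates the implicit I-V equation for an illustrative (not identified) parameter set and locates the maximum power point.

```python
import numpy as np
from scipy.optimize import brentq

# Five parameters of the single-diode model (illustrative values only):
# photocurrent Iph, diode saturation current I0, ideality factor n,
# series resistance Rs, shunt resistance Rsh.
Iph, I0, n, Rs, Rsh = 5.0, 1e-9, 1.3, 0.2, 300.0
Ns, Vt = 60, 0.02585              # cells in series, thermal voltage (V)

def current(v):
    """Solve the implicit single-diode equation for the current at voltage v:
    I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh."""
    def f(i):
        vd = v + i * Rs           # voltage across diode and shunt branch
        return Iph - I0 * (np.exp(vd / (n * Ns * Vt)) - 1) - vd / Rsh - i
    return brentq(f, -1.0, Iph + 1.0)

volts = np.linspace(0.0, 44.0, 45)
curve = [(v, current(v)) for v in volts]
v_mp, i_mp = max(curve, key=lambda p: p[0] * p[1])   # maximum power point
print(f"MPP: {v_mp:.1f} V, {i_mp:.2f} A, {v_mp * i_mp:.1f} W")
```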
Abstract:
A Work Project, presented as part of the requirements for the Award of a Master's Degree in Finance from the NOVA – School of Business and Economics.
Abstract:
In this article, we develop a specification technique for building the multiplicative time-varying GARCH models of Amado and Teräsvirta (2008, 2013). The variance is decomposed into an unconditional and a conditional component, such that the unconditional variance component is allowed to evolve smoothly over time. This nonstationary component is defined as a linear combination of logistic transition functions with time as the transition variable. The appropriate number of transition functions is determined by a sequence of specification tests. For that purpose, a coherent modelling strategy based on statistical inference is presented. It relies heavily on Lagrange multiplier-type misspecification tests, which are easily implemented as they are entirely based on auxiliary regressions. Finite-sample properties of the strategy and tests are examined by simulation. The modelling strategy is illustrated in practice with two real examples: an application to daily exchange rate returns and another to daily coffee futures returns.
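A minimal simulation sketch of the multiplicative decomposition, with a single logistic transition function for illustration; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2000
t = np.arange(T) / T              # rescaled time, the transition variable

# Unconditional component g(t): a linear combination of logistic
# transition functions of time (one transition here, for illustration).
gamma, c, delta0, delta1 = 25.0, 0.5, 1.0, 2.0
g = delta0 + delta1 / (1 + np.exp(-gamma * (t - c)))

# Conditional GARCH(1,1) component h_t; total variance is h_t * g(t).
omega, alpha, beta = 0.05, 0.08, 0.90
h = np.empty(T)
eps = np.empty(T)
h[0] = omega / (1 - alpha - beta)            # unconditional level of h
for i in range(T):
    eps[i] = rng.standard_normal() * np.sqrt(h[i] * g[i])  # return
    if i + 1 < T:
        phi = eps[i] ** 2 / g[i]             # de-trended squared shock
        h[i + 1] = omega + alpha * phi + beta * h[i]

print(f"sample variance, first half: {eps[:T//2].var():.3f}, "
      f"second half: {eps[T//2:].var():.3f}")
```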
Abstract:
The aim of this study was to determine whether responses of myocardial blood flow (MBF) to cold pressor testing (CPT), measured noninvasively with PET, correlate with an established and validated index of flow-dependent coronary vasomotion on quantitative angiography. METHODS: Fifty-six patients (57 ± 6 y; 16 with hypertension, 10 with hypercholesterolemia, 8 smokers, and 22 without coronary risk factors) with normal coronary angiograms were studied. Biplanar end-diastolic images of a selected proximal segment of the left anterior descending artery (LAD) (n = 27) or left circumflex artery (LCx) (n = 29) were evaluated with quantitative coronary angiography in order to determine the CPT-induced changes of epicardial luminal area (LA, mm²). Within 20 d of coronary angiography, MBF in the LAD, LCx, and right coronary artery territory was measured with ¹³N-ammonia and PET at baseline and during CPT. RESULTS: CPT induced comparable percent changes in the rate × pressure product on both study days (%ΔRPP, 37% ± 13% and 40% ± 17%; P = not significant [NS]). For the entire study group, the epicardial LA decreased from 5.07 ± 1.02 to 4.88 ± 1.04 mm² (ΔLA, -0.20 ± 0.89 mm²), or by -2.19% ± 17%, while MBF in the corresponding epicardial vessel segment increased from 0.76 ± 0.16 to 1.03 ± 0.33 mL·min⁻¹·g⁻¹ (ΔMBF, 0.27 ± 0.25 mL·min⁻¹·g⁻¹), or 36% ± 31% (P ≤ 0.0001). However, in normal controls without coronary risk factors (n = 22), the epicardial LA increased from 5.01 ± 1.07 to 5.88 ± 0.89 mm² (19.06% ± 8.9%) and MBF increased from 0.77 ± 0.16 to 1.34 ± 0.34 mL·min⁻¹·g⁻¹ (74.08% ± 23.5%) during CPT, whereas patients with coronary risk factors (n = 34) showed a decrease of epicardial LA from 5.13 ± 1.48 to 4.24 ± 1.12 mm² (-15.94% ± 12.2%) and a diminished MBF increase (from 0.76 ± 0.20 to 0.83 ± 0.25 mL·min⁻¹·g⁻¹, or 10.91% ± 19.8%) as compared with controls (P < 0.0001, respectively), despite comparable changes in the RPP (P = NS). In addition, there was a significant correlation (r = 0.87; P ≤ 0.0001) between CPT-related percent changes in LA on quantitative angiography and in MBF as measured with PET. CONCLUSION: The observed close correlation between an angiographically established parameter of flow-dependent and, most likely, endothelium-mediated coronary vasomotion and PET-measured MBF further supports the validity and value of MBF responses to CPT as a noninvasively available index of coronary circulatory function.
Abstract:
In the first part of the study, nine estimators of the first-order autoregressive parameter are reviewed and a new estimator is proposed. The relationships and discrepancies between the estimators are discussed in order to achieve a clear differentiation. In the second part of the study, the precision of the estimation of autocorrelation is studied. The performance of the ten lag-one autocorrelation estimators is compared in terms of Mean Square Error (combining bias and variance) using data series generated by Monte Carlo simulation. The results show that there is no single optimal estimator for all conditions, suggesting that the estimator ought to be chosen according to the sample size and to the information available about the possible direction of the serial dependence. Additionally, the probability of labelling an actually existing autocorrelation as statistically significant is explored using Monte Carlo sampling. The power estimates obtained are quite similar among the tests associated with the different estimators, and they highlight the small probability of detecting autocorrelation in series with fewer than 20 measurement times.
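As an illustration of the Monte Carlo comparison, the Python sketch below estimates the MSE of the conventional lag-one estimator and of a simple bias-adjusted variant (r1 + 1/n, one common small-sample correction; not necessarily the new estimator proposed in the study).

```python
import numpy as np

rng = np.random.default_rng(0)

def r1_conventional(x):
    """Conventional lag-one autocorrelation estimator."""
    d = x - x.mean()
    return (d[:-1] * d[1:]).sum() / (d * d).sum()

def r1_adjusted(x):
    """Bias-adjusted variant r1 + 1/n (a common small-sample fix)."""
    return r1_conventional(x) + 1.0 / len(x)

# Monte Carlo comparison in terms of Mean Square Error (bias + variance).
phi, n, reps = 0.3, 15, 20_000      # AR(1) parameter, series length, runs
ests = {"conventional": [], "bias-adjusted": []}
for _ in range(reps):
    x = np.empty(n)
    x[0] = rng.standard_normal() / np.sqrt(1 - phi ** 2)  # stationary start
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.standard_normal()
    ests["conventional"].append(r1_conventional(x))
    ests["bias-adjusted"].append(r1_adjusted(x))

for name, vals in ests.items():
    mse = np.mean((np.array(vals) - phi) ** 2)
    print(f"{name}: MSE = {mse:.4f}")
```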