959 results for testing method
Abstract:
The quality of an astronomical site is the first factor to consider in obtaining the best performance from a telescope. In particular, the efficiency of large telescopes in the UV, IR, and radio bands depends critically on atmospheric transparency. It is well known that the random optical effects induced on light propagation by the turbulent atmosphere also limit telescope performance. The importance of correlating the main atmospheric physical parameters with the optical quality achievable by large-aperture telescopes is now clear. Sky-quality evaluation has improved with the introduction of new techniques and instrumentation, and with the understanding of the link between meteorological (or synoptic) parameters and observing conditions made possible by the theory of electromagnetic wave propagation in turbulent media: what we now call astroclimatology. Present-day site campaigns have evolved and follow the classical scheme of optical seeing properties, meteorological parameters, sky transparency, sky darkness, and cloudiness. New concepts have been added, related to geophysical properties such as seismicity and microseismicity, local climate variability, atmospheric conditions related to ground-level optical turbulence and ground wind regimes, aerosol presence, and the use of satellite data. The purpose of this project is to provide reliable methods for analyzing the atmospheric properties that affect ground-based optical astronomical observations and to correlate them with the main atmospheric parameters that generate turbulence and affect photometric accuracy. The first part of the research concerns the analysis and interpretation of long- and short-time-scale meteorological data at two of the most important astronomical sites, located in very different environments: the Paranal Observatory in the Atacama Desert (Chile) and the Observatorio del Roque de los Muchachos (ORM) on La Palma (Canary Islands, Spain). The optical properties of airborne dust at ORM have been investigated by collecting outdoor data with a ground-based dust monitor. Because of its dryness, Paranal is well suited to near-IR observations, so the extinction properties in the spectral range 1.00-2.30 μm have been investigated using an empirical method. Furthermore, this PhD research made use of several turbulence profilers during the site selection for the European Extremely Large Telescope (E-ELT). During these campaigns the properties of the turbulence at different heights were studied at Paranal and at sites in northern Chile and Argentina. This made it possible to characterize the surface-layer turbulence at Paranal and its connection with local meteorological conditions.
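As a rough illustration of the standard astroclimatology relations that underlie such turbulence-profiling campaigns, the sketch below integrates a Cn²(h) profile into the Fried parameter r0 and the seeing FWHM (0.98 λ/r0). The profile values are hypothetical, not data from the thesis.

```python
import numpy as np

def seeing_from_cn2(heights_m, cn2, wavelength_m=500e-9):
    """Fried parameter r0 and seeing FWHM from a Cn^2(h) profile,
    using the standard Kolmogorov-turbulence relations."""
    k = 2.0 * np.pi / wavelength_m                         # optical wavenumber
    # Trapezoidal integral of Cn^2 over height [m^(1/3)].
    J = np.sum(0.5 * (cn2[1:] + cn2[:-1]) * np.diff(heights_m))
    r0 = (0.423 * k**2 * J) ** (-3.0 / 5.0)                # Fried parameter [m]
    fwhm_rad = 0.98 * wavelength_m / r0                    # seeing FWHM [rad]
    return r0, np.degrees(fwhm_rad) * 3600.0               # [m], [arcsec]

# Hypothetical profile: strong surface layer, weaker free atmosphere.
h = np.array([0.0, 30.0, 100.0, 500.0, 1000.0, 5000.0, 10000.0])
cn2 = np.array([5e-15, 2e-15, 8e-16, 3e-16, 1e-16, 3e-17, 1e-17])
r0, fwhm = seeing_from_cn2(h, cn2)
print(f"r0 = {100 * r0:.1f} cm, seeing FWHM = {fwhm:.2f} arcsec")
```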
Abstract:
Weak-lensing experiments such as Euclid, the future ESA-accepted mission, aim to measure cosmological parameters with unprecedented accuracy. It is important to assess the precision that can be obtained in these measurements by applying analysis software to mock images that contain the many sources of noise present in real data. In this thesis, we present a method for simulating observations that produces realistic images of the sky according to the characteristics of the instrument and of the survey. We then use these images to test the performance of the Euclid mission. In particular, we concentrate on the precision of the photometric redshift measurements, which are key data for cosmic shear tomography. We calculate the fraction of the total observed sample that must be discarded to reach the required level of precision, equal to 0.05(1+z) for a galaxy with measured redshift z, for different ancillary ground-based observations. The results highlight the importance of u-band observations, especially to discriminate between low (z < 0.5) and high (z ~ 3) redshifts, and the need for good observing sites, with seeing FWHM < 1 arcsec. We then construct an optimal filter to detect galaxy clusters in photometric galaxy catalogues, and we test it on the COSMOS field, obtaining 27 lensing-confirmed detections. Applying this algorithm to mock Euclid data, we verify the possibility of detecting clusters with mass above 10^14.2 solar masses with a low rate of false detections.
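A minimal sketch of how such a precision requirement is typically checked on a mock catalogue (the catalogue below is synthetic and purely illustrative): the normalized residuals (z_phot − z_spec)/(1 + z_spec) are summarized by the robust NMAD scatter, and the fraction of objects violating the 0.05(1+z) tolerance is counted.

```python
import numpy as np

def photoz_quality(z_spec, z_phot, tol=0.05):
    """Normalized photo-z residuals, NMAD scatter, and the outlier
    fraction exceeding the tol*(1+z) requirement."""
    dz = (z_phot - z_spec) / (1.0 + z_spec)
    nmad = 1.4826 * np.median(np.abs(dz - np.median(dz)))  # robust sigma
    outlier_frac = np.mean(np.abs(dz) > tol)
    return nmad, outlier_frac

# Hypothetical mock catalogue: Gaussian photo-z errors of width 0.04*(1+z).
rng = np.random.default_rng(0)
z_spec = rng.uniform(0.1, 3.0, 100_000)
z_phot = z_spec + 0.04 * (1 + z_spec) * rng.standard_normal(z_spec.size)
sigma, f_out = photoz_quality(z_spec, z_phot)
print(f"NMAD = {sigma:.3f}, fraction outside 0.05(1+z): {f_out:.1%}")
```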
Abstract:
This thesis is divided into three chapters. In the first chapter we analyse the results of the worldwide forecasting experiment run by the Collaboratory for the Study of Earthquake Predictability (CSEP). We take the opportunity of this experiment to contribute to the definition of a more robust and reliable statistical procedure for evaluating earthquake forecasting models. We first present the models and the target earthquakes to be forecast, then explain the consistency and comparison tests used in CSEP experiments to evaluate model performance. Introducing a methodology to create ensemble forecasting models, we show that models, when properly combined, almost always perform better than any single model. In the second chapter we discuss in depth one of the basic features of probabilistic seismic hazard analysis (PSHA): the declustering of seismicity rates. We first introduce the Cornell-McGuire method for PSHA and present the different motivations behind the need to decluster seismic catalogs. Using a theorem of modern probability theory (Le Cam's theorem), we show that declustering is not necessary to obtain the Poissonian behaviour of exceedances that is usually considered fundamental for transforming exceedance rates into exceedance probabilities in the PSHA framework. We present a method to correct PSHA for declustering, yielding a more realistic PSHA. In the last chapter we explore the methods commonly used to account for epistemic uncertainty in PSHA. The most widely used is the logic tree, which stands at the basis of the most advanced seismic hazard maps. We illustrate the probabilistic structure of the logic tree and show that this structure is not adequate to describe epistemic uncertainty. We then propose a new probabilistic framework, based on ensemble modelling, that properly accounts for epistemic uncertainties in PSHA.
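As a minimal sketch of two quantitative ingredients mentioned here, under the usual PSHA assumptions: exceedance rates are turned into exceedance probabilities through the Poisson model, and an ensemble forecast is built as a weighted average of the competing models' rates. The rates and weights below are hypothetical.

```python
import numpy as np

def exceedance_probability(rate_per_year, t_years=50.0):
    """Poisson probability of at least one exceedance in t years."""
    return 1.0 - np.exp(-rate_per_year * t_years)

def ensemble_rate(model_rates, weights):
    """Score-based weighted average of the competing models' rates."""
    return np.average(np.asarray(model_rates, dtype=float), weights=weights)

# Hypothetical annual rates of exceeding a given ground-motion level at one
# site, from three competing hazard models, with illustrative weights.
rates = [2.1e-3, 1.4e-3, 3.0e-3]
lam = ensemble_rate(rates, weights=[0.5, 0.3, 0.2])
print(f"P(exceedance in 50 yr) = {exceedance_probability(lam):.3f}")
```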
Abstract:
This thesis fits within the broad context of computer security; in particular, it addresses the problem of testing the security systems designed to counter today's threats: targeted attacks and, more generally, advanced persistent threats (APTs). The main objective of the work is the development and discussion of a testing methodology for security systems focused on this class of problems. The proposed guidelines aim to help close the gap between what is tested and what must actually be faced in practice. The activities carried out during the preparation of the thesis were both theoretical, concerning the development of a methodology for best addressing the testing of security systems against targeted attacks, and experimental, in that these concepts were applied to carry out tests on several defence tools in a realistic scenario of interest.
Abstract:
In areas of seasonal frost, frost susceptibility, comprising frost heave during the winter and thaw softening during the spring, is one of the most dangerous phenomena for transportation, road, and railway infrastructure. The need for a frost protection layer therefore becomes imperative. The purpose of the frost protection layer is to prevent frost from penetrating down through the pavement and into the sub-soils. Frost-susceptible soils under a road can damage the road or other structures through frost heave, or through reduced bearing capacity during the thaw period. "Frost heave" is the term given to the upward displacement of the ground surface caused by the formation of ice within soils or aggregates (Rempel et al., 2004). Nowadays in Scandinavia the most common materials used in the frost protection layer of road pavement structures and in the ballast of railway tracks are coarse-grained crushed-rock aggregates. The mechanics of frost heave is based on capillary rise, that is, on the interaction between aggregates and water; Konrad and Lemieux (2005) suggested that the fraction of material below the 0.063 mm sieve in coarse-grained soils must be controlled so as to reduce frost-heave sensitivity. The study conducted in this thesis project is divided into two parts: the analysis of the coarse-grained aggregates used in frost protection layers in Norway, and the analysis of the frost heave phenomenon in the laboratory under known boundary conditions, using the most widely used method, the frost heave test, in a "closed system" (without access to water).
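As a small illustration of the fines-content check implied by the Konrad and Lemieux criterion, the sketch below interpolates the percent passing at 0.063 mm from a grading curve. The curve is hypothetical; actual acceptance limits come from the relevant Norwegian specifications.

```python
import numpy as np

def fines_content(openings_mm, pct_passing, size_mm=0.063):
    """Percent passing at size_mm, interpolated on a log grain-size scale."""
    x = np.log10(np.asarray(openings_mm, dtype=float))
    order = np.argsort(x)                       # np.interp needs ascending x
    return np.interp(np.log10(size_mm), x[order],
                     np.asarray(pct_passing, dtype=float)[order])

# Hypothetical grading curve for a crushed-rock aggregate (mm, % passing).
openings = [31.5, 16.0, 8.0, 4.0, 2.0, 1.0, 0.5, 0.25, 0.125, 0.063]
passing = [100, 82, 60, 45, 33, 25, 18, 12, 8, 4.5]
print(f"fines (< 0.063 mm): {fines_content(openings, passing):.1f} % by mass")
```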
Abstract:
A liquid chromatography-tandem mass spectrometry (LC-MS/MS) confirmatory method was developed for the simultaneous determination of nine corticosteroids in liver, including the four MRL compounds listed in Council Regulation 37/2010. After enzymatic deconjugation and solvent extraction of the liver tissue, the resulting solution was cleaned up on an Oasis HLB SPE cartridge. The analytes were then detected by liquid chromatography-negative-ion electrospray tandem mass spectrometry, using deuterium-labelled internal standards. The procedure was validated as a quantitative confirmatory method according to the Commission Decision 2002/657/EC criteria. The results showed that the method was suitable for statutory residue testing with respect to the following performance characteristics: instrumental linearity, specificity, precision (repeatability and intra-laboratory reproducibility), recovery, decision limit (CCα), detection capability (CCβ), and ruggedness. All the corticosteroids can be detected at concentrations around 1 μg kg⁻¹; recoveries were above 62% for all analytes. Repeatability and within-laboratory reproducibility for all analytes were below 7.65% and 15.5%, respectively.
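For substances with an established permitted limit, Commission Decision 2002/657/EC derives the decision limit and detection capability from the within-laboratory reproducibility at that limit; a minimal sketch of that calculation route, with hypothetical numbers, is:

```python
def cc_alpha_beta(mrl, s_reproducibility):
    """Decision limit (CCalpha) and detection capability (CCbeta) for a
    substance with an MRL, per the 2002/657/EC route for permitted-limit
    substances (alpha = beta = 5%, hence the one-sided factor 1.64)."""
    cc_alpha = mrl + 1.64 * s_reproducibility
    cc_beta = cc_alpha + 1.64 * s_reproducibility
    return cc_alpha, cc_beta

# Hypothetical example: MRL of 10 ug/kg and a within-laboratory
# reproducibility standard deviation of 0.8 ug/kg at the MRL.
cca, ccb = cc_alpha_beta(10.0, 0.8)
print(f"CCalpha = {cca:.2f} ug/kg, CCbeta = {ccb:.2f} ug/kg")
```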
Abstract:
Using an in silico allergen clustering method, we have recently shown that allergen extracts are highly cross-reactive. Here we used serological data from a multi-array IgE test based on recombinant or highly purified natural allergens to evaluate whether co-reactions are true cross-reactions or co-sensitizations by allergens with the same motifs.
Abstract:
In the past few decades, integrated circuits have become a major part of everyday life. Every circuit that is created needs to be tested for faults so that faulty circuits are not sent to end users. The creation of these tests is time-consuming, costly, and difficult to perform on larger circuits. This research presents a novel method for fault detection and test-pattern reduction in integrated circuitry under test. By leveraging the FPGA's reconfigurability and parallel processing capabilities, a speed-up in fault detection can be achieved over previous computer simulation techniques. This work presents the following contributions to the field of stuck-at fault detection. First, a new method for inserting faults into a circuit netlist: given any circuit netlist, our tool can insert multiplexers at the correct internal nodes to aid in fault emulation on reconfigurable hardware. Second, a parallel method of fault emulation: the benefit of the FPGA is not only its ability to implement any circuit but also its ability to process data in parallel, and this research exploits that capability to create a more efficient emulation method implementing numerous copies of the same circuit in the FPGA. Third, a new method for organizing the most efficient faults: most methods for determining the minimum number of inputs that cover the most faults require sophisticated software programs that use heuristics, whereas by utilizing hardware this research is able to process data faster and use a simpler method for minimizing inputs.
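A minimal sketch of the multiplexer-insertion idea on a toy netlist representation (the data structure and names are illustrative, not the tool's actual format): each faulty net is re-driven by a 2:1 mux whose select line switches between the original driver and a stuck-at constant.

```python
def insert_fault_mux(netlist, node, stuck_value):
    """Replace `node` with a 2:1 multiplexer so the original signal can be
    overridden with a stuck-at value when the fault-enable select is high.
    The netlist is a dict: output net -> (gate_type, [input nets])."""
    original = f"{node}__orig"
    netlist[original] = netlist.pop(node)       # keep the real driver
    enable = f"fault_en_{node}"                 # per-fault select line
    const = "const_1" if stuck_value else "const_0"  # assumed tied-off nets
    # node = enable ? const : original  (emulated stuck-at fault)
    netlist[node] = ("MUX2", [enable, const, original])
    return enable

# Hypothetical 2-gate circuit: y = (a AND b) OR c, fault injected on net "n1".
netlist = {"n1": ("AND", ["a", "b"]), "y": ("OR", ["n1", "c"])}
insert_fault_mux(netlist, "n1", stuck_value=0)
for net, gate in netlist.items():
    print(net, gate)
```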
Abstract:
BACKGROUND: Chronic neck pain after whiplash injury is caused by the cervical zygapophysial joints in 50% of patients. Diagnostic blocks of the nerves supplying these joints are performed using fluoroscopy. The authors' hypothesis was that the third occipital nerve can be visualized and blocked using an ultrasound-guided technique. METHODS: In 14 volunteers, the authors placed a needle under ultrasound guidance to the third occipital nerve on both sides of the neck, puncturing caudal and perpendicular to the 14-MHz transducer. In 11 volunteers, 0.9 ml of either local anesthetic or normal saline was applied in a randomized, double-blind, crossover manner. Anesthesia was assessed in the corresponding skin area by pinprick and cold testing. The position of the needle was verified by fluoroscopy. RESULTS: The third occipital nerve could be visualized in all subjects and showed a median diameter of 2.0 mm. Anesthesia was absent after local anesthetic in only one case. There was neither anesthesia nor hyposensitivity after any of the saline injections. The C2-C3 joint, visualized in a transversal plane as a convex density, was identified correctly by ultrasound in 27 of 28 cases, and 23 needles were placed correctly in the target zone. CONCLUSIONS: The third occipital nerve can be visualized and blocked using an ultrasound-guided technique. The needles were positioned accurately in 82% of cases as confirmed by fluoroscopy; the nerve was blocked in 90% of cases. Because ultrasound is currently the only available technique for visualizing this nerve, it seems to be a promising alternative to fluoroscopy for block guidance.
Abstract:
Equivalence testing is growing in use in scientific research outside of its traditional role in the drug approval process. Largely owing to its ease of use and its recommendation in United States Food and Drug Administration guidance, the most common statistical method for testing (bio)equivalence is the two one-sided tests procedure (TOST). Like classical point-null hypothesis testing, TOST is subject to multiplicity concerns as more comparisons are made. In this manuscript, a condition that bounds the family-wise error rate (FWER) using TOST is given. This condition leads to a simple solution for controlling the FWER. Specifically, we demonstrate that if all pairwise comparisons of k independent groups are being evaluated for equivalence, then simply scaling the nominal Type I error rate down by (k - 1) is sufficient to maintain the family-wise error rate at the desired value or less. The resulting rule is much less conservative than the equally simple Bonferroni correction. An example of equivalence testing in a non-drug-development setting is given.
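A minimal sketch of the resulting procedure, assuming Welch-type TOST for independent samples and hypothetical data: each of the k(k-1)/2 pairwise comparisons is run at the scaled level α/(k - 1) rather than at Bonferroni's α/[k(k-1)/2].

```python
import itertools
import numpy as np
from scipy import stats

def tost_equivalent(x, y, delta, alpha):
    """Welch-type two one-sided tests: are the means of x and y
    equivalent within (-delta, +delta) at level alpha?"""
    vx, vy = np.var(x, ddof=1), np.var(y, ddof=1)
    nx, ny = len(x), len(y)
    d = np.mean(x) - np.mean(y)
    se = np.sqrt(vx / nx + vy / ny)
    df = se**4 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    p_lower = stats.t.sf((d + delta) / se, df)    # H1: diff > -delta
    p_upper = stats.t.cdf((d - delta) / se, df)   # H1: diff < +delta
    return max(p_lower, p_upper) < alpha

# All pairwise comparisons of k independent groups: run each TOST at
# alpha/(k - 1), the FWER-controlling scaling described in the abstract.
rng = np.random.default_rng(1)
groups = [rng.normal(0.0, 1.0, 40) for _ in range(4)]   # k = 4, synthetic data
k, alpha, delta = len(groups), 0.05, 0.5
for (i, x), (j, y) in itertools.combinations(enumerate(groups), 2):
    verdict = tost_equivalent(x, y, delta, alpha / (k - 1))
    print(f"groups {i} vs {j}: {'equivalent' if verdict else 'inconclusive'}")
```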
Testing the structural and cross-cultural validity of the KIDSCREEN-27 quality of life questionnaire
Abstract:
OBJECTIVES: The aim of this study is to assess the structural and cross-cultural validity of the KIDSCREEN-27 questionnaire. METHODS: The 27-item version of the KIDSCREEN instrument was derived from the longer 52-item version and was administered to young people aged 8-18 years in 13 European countries in a cross-sectional survey. Structural and cross-cultural validity were tested using multitrait multi-item analysis, exploratory and confirmatory factor analysis, and Rasch analyses. Zumbo's logistic regression method was applied to assess differential item functioning (DIF) across countries. Reliability was assessed using Cronbach's alpha. RESULTS: Responses were obtained from n = 22,827 respondents (response rate 68.9%). For the combined sample from all countries, exploratory factor analysis with procrustean rotations revealed a five-factor structure which explained 56.9% of the variance. Confirmatory factor analysis indicated an acceptable model fit (RMSEA = 0.068, CFI = 0.960). The unidimensionality of all dimensions was confirmed (INFIT: 0.81-1.15). DIF results across the 13 countries showed that 5 items presented uniform DIF whereas 10 displayed non-uniform DIF. Reliability was acceptable (Cronbach's alpha = 0.78-0.84 for individual dimensions). CONCLUSIONS: There was substantial evidence for the cross-cultural equivalence of the KIDSCREEN-27 across the countries studied, and the factor structure was highly replicable in individual countries. Further research is needed to correct scores based on the DIF results. The KIDSCREEN-27 is a short, promising new tool for use in clinical and epidemiological studies.
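As a small illustration of the reliability step, a minimal Cronbach's alpha computation on synthetic item scores (the data below are simulated, not KIDSCREEN responses):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var_sum / total_var)

# Hypothetical 5-item dimension scored 1-5 by 200 respondents, driven by
# a shared latent trait plus item noise.
rng = np.random.default_rng(2)
latent = rng.normal(3.0, 0.8, (200, 1))
scores = np.clip(np.round(latent + rng.normal(0.0, 0.7, (200, 5))), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```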
Abstract:
AIMS: To determine whether the current practice of sweat testing in Swiss hospitals is consistent with current international guidelines. METHODS: A questionnaire was mailed to all children's hospitals (n = 8), regional paediatric sections of general hospitals (n = 28), and all adult pulmonology centres (n = 8) in Switzerland that care for patients with cystic fibrosis (CF). The results were compared with the published "guidelines 2000" of the American National Committee for Clinical Laboratory Standards (NCCLS) and the UK guidelines of 2003. RESULTS: The response rate was 89%. All 8 children's hospitals and 18 of the 23 answering paediatric sections performed sweat tests, but none of the adult pulmonology centres did. In total, 1560 sweat tests (range: 5-200 tests/centre/year, median 40) were done per year. 88% (23/26) used Wescor systems, 73% (19/26) the Macroduct system for collecting sweat, and 31% (8/26) the Nanoduct system. Sweat chloride was determined by only 62% (16/26) of the centres; of these, only 63% (10/16) reported using the recommended diagnostic chloride CF reference value of >60 mmol/l. Osmolality was measured in 35%, sodium in 42%, and conductivity in 62% of the hospitals. Sweat was collected for a maximum of 30-120 (median 55) minutes; only three centres used the maximum 30-minute sample time recommended by the international guidelines. CONCLUSIONS: Sweat testing practice in Swiss hospitals was inconsistent and seldom followed the current international guidelines for sweat collection, analysis method, and reference values. Only 62% used the chloride concentration as a diagnostic reference, the only diagnostic measurement accepted by the NCCLS or UK guidelines.
Abstract:
The penetration, translocation, and distribution of ultrafine and nanoparticles in tissues and cells are challenging issues in aerosol research. This article describes a set of novel quantitative microscopic methods for evaluating particle distributions within sectional images of tissues and cells by addressing the following questions: (1) is the observed distribution of particles between spatial compartments random? (2) Which compartments are preferentially targeted by particles? and (3) Does the observed particle distribution shift between different experimental groups? Each of these questions can be addressed by testing an appropriate null hypothesis. The methods all require observed particle distributions to be estimated by counting the number of particles associated with each defined compartment. For studying preferential labeling of compartments, the size of each of the compartments must also be estimated by counting the number of points of a randomly superimposed test grid that hit the different compartments. The latter provides information about the particle distribution that would be expected if the particles were randomly distributed, that is, the expected number of particles. From these data, we can calculate a relative deposition index (RDI) by dividing the observed number of particles by the expected number of particles. The RDI indicates whether the observed number of particles corresponds to that predicted solely by compartment size (for which RDI = 1). Within one group, the observed and expected particle distributions are compared by chi-squared analysis. The total chi-squared value indicates whether an observed distribution is random. If not, the partial chi-squared values help to identify those compartments that are preferential targets of the particles (RDI > 1). Particle distributions between different groups can be compared in a similar way by contingency table analysis. We first describe the preconditions and the way to implement these methods, then provide three worked examples, and finally discuss the advantages, pitfalls, and limitations of this method.
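A minimal sketch of the core computation, with hypothetical counts: the expected numbers follow from compartment size (grid-point fractions), the RDI is observed/expected, and the total chi-squared value tests the null hypothesis of a random distribution.

```python
import numpy as np
from scipy.stats import chi2

def rdi_analysis(observed_particles, grid_points):
    """Relative deposition index and chi-squared test for a random
    particle distribution across compartments."""
    obs = np.asarray(observed_particles, dtype=float)
    pts = np.asarray(grid_points, dtype=float)
    expected = obs.sum() * pts / pts.sum()   # random expectation ~ compartment size
    rdi = obs / expected                     # RDI = 1 means "as expected by size"
    chi2_total = ((obs - expected) ** 2 / expected).sum()
    p_value = chi2.sf(chi2_total, df=len(obs) - 1)
    return rdi, chi2_total, p_value

# Hypothetical counts: particles observed in 4 compartments, and test-grid
# points hitting each compartment (a stereological estimate of its size).
rdi, x2, p = rdi_analysis([120, 30, 40, 10], [300, 250, 100, 350])
print("RDI:", np.round(rdi, 2), f"chi2 = {x2:.1f}, p = {p:.2g}")
```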
Abstract:
Since the introduction of the rope pump in Nicaragua in the 1990s, the dependence on wells in rural areas has grown steadily. However, little or no attention is paid to rope-pump well performance after installation. Due to financial constraints, groundwater resource monitoring using conventional testing methods is too costly and out of reach of rural municipalities. Nonetheless, there is widespread agreement that without a way to quantify changes in well performance over time, prioritizing regulatory actions is impossible. A manual pumping-test method is presented which, at a fraction of the cost of a conventional pumping test, measures the specific capacity of rope-pump wells. The method requires only slight modifications to the well and reasonable limitations on well usage prior to testing. The pumping test was performed a minimum of 33 times in three wells over an eight-month period in a small rural community in Chontales, Nicaragua. The data were used to measure seasonal variations in specific capacity for three rope-pump wells completed in fractured crystalline basalt. Data collected from the tests were analyzed using four methods (equilibrium approximation, time-drawdown during pumping, time-drawdown during recovery, and time-drawdown during late-time recovery) to determine the best data-analysis method. One conventional pumping test was performed to aid in evaluating the manual method. The equilibrium approximation can be performed in the field with only a calculator and is the most technologically appropriate method for analyzing the data. Results from this method overestimate specific capacity by 41% when compared to results from the conventional pumping test. The other analysis methods, requiring more sophisticated tools and higher-level interpretation skills, yielded results that agree to within 14% (pumping phase), 31% (recovery phase), and 133% (late-time recovery) of the conventional-test productivity value. The wide variability in accuracy results principally from difficulties in achieving an equilibrated pumping level and from casing-storage effects in the pumping/recovery data. Decreases in well productivity resulting from naturally occurring seasonal water-table drops varied from insignificant in two wells to 80% in the third. Despite practical and theoretical limitations of the method, the collected data may be useful for municipal institutions to track changes in well behavior, eventually developing a database for planning future groundwater development projects. Furthermore, the data could improve well users' ability to self-regulate well usage without expensive aquifer characterization.
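The equilibrium approximation reduces to dividing the stabilized discharge by the stabilized drawdown; a minimal sketch with hypothetical field readings:

```python
def specific_capacity(discharge_lpm, static_level_m, pumping_level_m):
    """Equilibrium approximation: specific capacity = Q / drawdown,
    using the stabilized pumping water level."""
    drawdown = pumping_level_m - static_level_m   # depths below casing top [m]
    return discharge_lpm / drawdown               # [L/min per m of drawdown]

# Hypothetical field readings from a rope-pump well test.
q = 18.0        # stabilized pumping rate, L/min
static = 6.40   # static water level, m below casing top
pumping = 9.10  # stabilized pumping level, m below casing top
print(f"specific capacity = {specific_capacity(q, static, pumping):.1f} L/min/m")
```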
Abstract:
There has been a continuous evolutionary process in asphalt pavement design. In the beginning it was crude and based on past experience. Through research, empirical methods were developed based on material response to specific loading at the AASHO Road Test. Today, pavement design has progressed to a mechanistic-empirical method. This methodology takes into account the mechanical properties of the individual layers and uses empirical relationships to relate them to performance. The mechanical tests used as part of this methodology include dynamic modulus and flow number, which have been shown to correlate with field pavement performance. This thesis was based on a portion of a research project conducted at Michigan Technological University (MTU) for the Wisconsin Department of Transportation (WisDOT). The global scope of the project was the development of a library of values for the mechanical properties of the asphalt pavement mixtures paved in Wisconsin, together with a comparison of the current associated pavement design with that of the new AASHTO Design Guide. This thesis describes the development of the current pavement design methodology and the associated tests as part of a literature review, and details the materials sampled from field operations around the state of Wisconsin along with their testing preparation and procedures. Testing was conducted on available round-robin and three Wisconsin mixtures, and the main results of the research were:
- The test history of the Superpave SPT (fatigue and permanent-deformation dynamic modulus) does not affect the mean response for either dynamic modulus or flow number, but does increase the variability in the flow number test results.
- The method of specimen preparation, compacting to test geometry versus sawing/coring to test geometry, does not statistically appear to affect the intermediate- and high-temperature dynamic modulus and flow number test results.
- The 2002 AASHTO Design Guide simulations support the findings of the statistical analyses that the method of specimen preparation did not impact the performance of the HMA as a structural layer as predicted by the Design Guide software.
- The methodologies for determining the temperature-viscosity relationship as stipulated by Witczak are sensitive to the viscosity test temperatures employed.
- An increase in asphalt binder content of 0.3% was found to actually increase the dynamic modulus at the intermediate and high test temperatures, as well as the flow number. This result was based on the testing that was conducted and was contradictory to previous research and to the hypothesis put forth in this thesis; it should be used with caution and requires further review.
- Based on the limited results presented herein, the asphalt binder grade appears to have a greater impact on performance in the Superpave SPT than aggregate angularity.
- Dynamic modulus and flow number were shown to increase with traffic level (requiring an increase in aggregate angularity) and with a decrease in air voids, confirming the hypotheses regarding these two factors.
- Accumulated micro-strain at flow number, as opposed to the flow number itself, appeared to be a promising measure for comparing the quality of specimens within a specific mixture.
At the current time the Design Guide and its associated software need further improvement prior to implementation by owners/agencies.
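For orientation, a minimal sketch of the two measures named above, under common definitions: |E*| as the ratio of stress amplitude to strain amplitude, and the flow number as the cycle at which the permanent-strain rate reaches its minimum (onset of tertiary flow). All numbers below are hypothetical.

```python
import numpy as np

def dynamic_modulus_mpa(stress_amp_kpa, strain_amp_microstrain):
    """|E*| = stress amplitude / strain amplitude, reported in MPa."""
    return stress_amp_kpa / (strain_amp_microstrain * 1e-6) / 1e3

def flow_number(permanent_strain):
    """Cycle at which the permanent-strain rate is minimal
    (onset of tertiary flow in a repeated-load test)."""
    rate = np.gradient(np.asarray(permanent_strain, dtype=float))
    return int(np.argmin(rate)) + 1          # cycle numbers are 1-indexed

# Hypothetical repeated-load curve: primary/secondary creep plus tertiary flow.
cycles = np.arange(1, 2001)
strain = 800.0 * cycles**0.35 + 50.0 * np.exp(0.004 * (cycles - 1500))
print(f"|E*| = {dynamic_modulus_mpa(600.0, 85.0):.0f} MPa, "
      f"flow number = {flow_number(strain)}")
```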