879 results for model testing
Abstract:
Various unification schemes interpret the complex phenomenology of quasars and luminous active galactic nuclei (AGN) in terms of a simple picture involving a central black hole, an accretion disc and an associated outflow. Here, we continue our tests of this paradigm by comparing quasar spectra to synthetic spectra of biconical disc wind models, produced with our state-of-the-art Monte Carlo radiative transfer code. Previously, we have shown that we could produce synthetic spectra resembling those of observed broad absorption line (BAL) quasars, but only if the X-ray luminosity was limited to 10^{43} erg s^{-1}. Here, we introduce a simple treatment of clumping, and find that a filling factor of ~0.01 moderates the ionization state sufficiently for BAL features to form in the rest-frame UV at more realistic X-ray luminosities. Our fiducial model shows good agreement with AGN X-ray properties and the wind produces strong line emission in, e.g., Lyα and C IV 1550 Å at low inclinations. At high inclinations, the spectra possess prominent LoBAL features. Despite these successes, we cannot reproduce all emission lines seen in quasar spectra with the correct equivalent-width ratios, and we find an angular dependence of emission line equivalent width despite the similarities in the observed emission line properties of BAL and non-BAL quasars. Overall, our work suggests that biconical winds can reproduce much of the qualitative behaviour expected from a unified model, but we cannot yet provide quantitative matches with quasar properties at all viewing angles. Whether disc winds can successfully unify quasars is therefore still an open question.
Abstract:
We propose a mechanism for testing the theory of collapse models such as continuous spontaneous localization (CSL) by examining the parametric heating rate of a trapped nanosphere. The random localizations of the center of mass of a given particle predicted by the CSL model can be understood as a stochastic force embodying a source of heating for the nanosphere. We show that by using a Paul trap to levitate the particle, combined with optical cooling, it is possible to reduce environmental decoherence to such a level that CSL dominates the dynamics and contributes the main source of heating. We show that this approach allows measurements to be made on the timescale of seconds, and that the free parameter λ_CSL which characterises the model ought to be testable to values as low as 10^{−12} Hz.
Abstract:
The application of custom classification techniques and posterior probability modeling (PPM) using Worldview-2 multispectral imagery to archaeological field survey is presented in this paper. Research is focused on the identification of Neolithic felsite stone tool workshops in the North Mavine region of the Shetland Islands in Northern Scotland. Sample data from known workshops surveyed using differential GPS are used alongside known non-sites to train a linear discriminant analysis (LDA) classifier based on a combination of datasets including Worldview-2 bands, band difference ratios (BDR) and topographical derivatives. Principal components analysis is further used to test for and reduce dimensionality caused by redundant datasets. Probability models were generated by LDA using principal components and tested with sites identified through geological field survey. Testing demonstrates the prospective ability of this technique, with significance between 0.05 and 0.01 and gain statistics between 0.90 and 0.94, higher than those obtained using maximum likelihood and random forest classifiers. Results suggest that this approach is best suited to relatively homogeneous site types, and performs better with correlated data sources. Finally, by combining posterior probability models and least-cost analysis, a survey least-cost efficacy model is generated, showing the utility of such approaches to archaeological field survey.
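To illustrate the kind of workflow described above, here is a minimal, hypothetical Python (scikit-learn) sketch of training an LDA classifier on principal components of stacked image-derived features and producing posterior probabilities per pixel; the feature values, sample sizes and thresholds are invented and do not come from the study.

```python
# Illustrative sketch only: LDA posterior-probability mapping on hypothetical
# Worldview-2 band / band-difference-ratio / terrain features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X_sites = rng.normal(1.0, 0.5, size=(40, 12))       # pixels from surveyed workshops
X_nonsites = rng.normal(0.0, 0.5, size=(200, 12))    # pixels from known non-sites
X = np.vstack([X_sites, X_nonsites])
y = np.array([1] * 40 + [0] * 200)

# PCA first reduces redundancy among the correlated bands and ratios, then LDA
# separates site from non-site pixels in the reduced space.
model = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
model.fit(X, y)

X_scene = rng.normal(0.2, 0.7, size=(1000, 12))      # every pixel in the survey area
posterior = model.predict_proba(X_scene)[:, 1]        # P(workshop | pixel features)
print("pixels with posterior probability > 0.9:", int((posterior > 0.9).sum()))
```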
Abstract:
Li-ion batteries have been widely used in electric vehicles, and battery internal state estimation plays an important role in the battery management system. However, it is technically challenging, in particular for the estimation of the battery internal temperature and state-of-charge (SOC), which are two key state variables affecting the battery performance. In this paper, a novel method is proposed for real-time simultaneous estimation of these two internal states, thus leading to a significantly improved battery model for real-time SOC estimation. To achieve this, a simplified battery thermoelectric model is first built, which couples a thermal submodel and an electrical submodel. The interactions between the battery thermal and electrical behaviours are captured, thus offering a comprehensive description of the battery thermal and electrical behaviour. To achieve more accurate internal state estimation, the model is trained by the simulation error minimization method, and model parameters are optimized by a hybrid optimization method combining a meta-heuristic algorithm and the least-squares approach. Further, time-varying model parameters under different heat dissipation conditions are considered, and a joint extended Kalman filter is used to simultaneously estimate both the battery internal states and the time-varying model parameters in real time. Experimental results based on the testing data of LiFePO4 batteries confirm the efficacy of the proposed method.
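As a rough illustration of the joint estimation idea (not the paper's actual thermoelectric model or parameter values), the sketch below augments the state vector with an internal-resistance parameter and runs an extended Kalman filter over a crude coupled electro-thermal model; all dynamics, noise levels and the OCV curve are assumed for demonstration.

```python
# Illustrative joint EKF sketch: augmented state x = [SOC, T_core, R_int].
import numpy as np

dt, Q_cap, C_th, h_conv, T_amb = 1.0, 3600.0 * 2.3, 200.0, 0.5, 25.0

def ocv(soc):
    return 3.2 + 0.8 * soc            # hypothetical open-circuit-voltage curve

def f(x, i_load):                     # coupled electro-thermal state transition
    soc, T, R = x
    soc_n = soc - i_load * dt / Q_cap
    T_n = T + dt * (i_load ** 2 * R - h_conv * (T - T_amb)) / C_th
    return np.array([soc_n, T_n, R])  # R_int is modelled as a random walk

def h_meas(x, i_load):                # terminal voltage and surface temperature
    soc, T, R = x
    return np.array([ocv(soc) - i_load * R, T])

def jacobians(i_load):
    F = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0 - dt * h_conv / C_th, dt * i_load ** 2 / C_th],
                  [0.0, 0.0, 1.0]])
    H = np.array([[0.8, 0.0, -i_load],
                  [0.0, 1.0, 0.0]])
    return F, H

x_true = np.array([0.9, 25.0, 0.08])  # "true" plant, R_int = 80 mOhm
x = np.array([0.9, 25.0, 0.05])       # filter starts with a wrong R_int
P = np.diag([1e-2, 1.0, 1e-3])
Qn = np.diag([1e-7, 1e-3, 1e-8])      # process noise
Rn = np.diag([1e-4, 0.25])            # measurement noise
rng = np.random.default_rng(1)

for _ in range(600):
    i_load = 2.3                                             # constant discharge
    x_true = f(x_true, i_load)                               # true dynamics
    z = h_meas(x_true, i_load) + rng.normal(0, [0.01, 0.5])  # noisy sensors
    F, H = jacobians(i_load)
    x, P = f(x, i_load), F @ P @ F.T + Qn                    # EKF predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Rn)            # Kalman gain
    x = x + K @ (z - h_meas(x, i_load))                      # EKF update
    P = (np.eye(3) - K @ H) @ P

print("estimated [SOC, T_core(C), R_int(Ohm)]:", np.round(x, 4))
```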
Testing the psychometric properties of Kidscreen-27 with Irish children of low socio-economic status
Abstract:
Background
Kidscreen-27 was developed as part of a cross-cultural European Union-funded project to standardise the measurement of children’s health-related quality of life. Yet, research has reported mixed evidence for the hypothesised 5-factor model, and no confirmatory factor analysis (CFA) has been conducted on the instrument with children of low socio-economic status (SES) across Ireland (Northern and Republic).
Method
The data for this study were collected as part of a clustered randomised controlled trial. A total of 663 (347 male, 315 female) 8–9-year-old children (M = 8.74, SD = .50) of low SES took part. A 5-factor and a modified 7-factor CFA model were specified using maximum likelihood estimation. A nested Chi-square difference test was conducted to compare the fit of the models. Internal consistency and floor and ceiling effects were also examined.
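For readers unfamiliar with the nested Chi-square difference test, the short sketch below shows how it is computed in principle; the chi-square values and degrees of freedom are placeholders, not the values obtained in this study.

```python
# Hedged illustration of a nested chi-square difference test comparing a
# 5-factor and a modified 7-factor CFA model; all numbers are hypothetical.
from scipy.stats import chi2

chisq_5f, df_5f = 980.0, 314      # hypothetical fit of the 5-factor model
chisq_7f, df_7f = 610.0, 303      # hypothetical fit of the 7-factor model

delta_chisq = chisq_5f - chisq_7f
delta_df = df_5f - df_7f
p_value = chi2.sf(delta_chisq, delta_df)   # smaller chi-square = better fit
print(f"delta chi2({delta_df}) = {delta_chisq:.1f}, p = {p_value:.3g}")
```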
Results
CFA found that the hypothesised 5-factor model was an unacceptable fit. However, the modified 7-factor model was supported. A nested Chi-square difference test confirmed that the fit of the 7-factor model was significantly better than that of the 5-factor model. Internal consistency was unacceptable for just one scale. Ceiling effects were present in all but one of the factors.
Conclusions
Future research should apply the 7-factor model with children of low socio-economic status. Such efforts would help monitor the health status of the population.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
The erosion processes resulting from the flow of fluids (gas-solid or liquid-solid) are encountered in nature and in many industrial processes. The common feature of these erosion processes is the interaction of the fluid (particle) with its boundary, resulting in the loss of material from the surface. This type of erosion is detrimental to the equipment used in pneumatic conveying systems. The puncture of pneumatic conveyor bends in industry causes several problems: (1) escape of the conveyed product, causing health and dust hazards; (2) repairing and cleaning up after punctures necessitates shutting down conveyors, which affects the operation of the plant, thus reducing profitability. The most common process failure in pneumatic conveying systems occurs when pipe sections at the bends wear away and puncture. The reason for this is that particles of varying speed, shape, size and material properties strike the bend wall with greater intensity than in straight sections of the pipe. Currently available models for predicting the lifetime of bends are inaccurate (over-predicting by 80%). The provision of an accurate predictive method would lead to improvements in the structure of planned maintenance programmes, thus reducing unplanned shutdowns and ultimately the downtime costs associated with them. This is the main motivation behind the current research. The paper reports on two aspects of the first phase of the study undertaken for the current project: (1) development and implementation, and (2) testing of the modelling environment. The model framework encompasses Computational Fluid Dynamics (CFD) related engineering tools, based on Eulerian (gas) and Lagrangian (particle) approaches to represent the two distinct conveyed phases, to predict the lifetime of conveyor bends. The method attempts to account for the effect of erosion on the pipe wall via particle impacts, taking into account the angle of attack, impact velocity, shape/size and material properties of the wall and conveyed material, within a CFD framework. Only a handful of researchers use CFD as the basis for predicting the particle motion, see for example [1-4]. It is hoped that this will lead to more realistic predictions of the wear profile. Results for two three-dimensional test cases using the commercially available CFD code PHOENICS are presented. These are reported in relation to the impact intensity and sensitivity to the inlet particle distributions.
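As a generic illustration of the kind of impact-wear law that can be evaluated per particle strike within such a CFD framework (not the specific model developed in this work), the sketch below uses a Finnie-style angle function and a power-law velocity dependence with hypothetical constants.

```python
# Generic illustration: wear per particle impact as a function of impact angle
# and velocity; K and n are hypothetical material parameters.
import math

def angle_function(alpha_rad):
    """Finnie-style ductile-material angle dependence (peaks at shallow impact)."""
    if math.tan(alpha_rad) <= 1.0 / 3.0:
        return math.sin(2 * alpha_rad) - 3 * math.sin(alpha_rad) ** 2
    return math.cos(alpha_rad) ** 2 / 3.0

def erosion_per_impact(mass_kg, velocity_ms, alpha_rad, K=2.0e-8, n=2.4):
    """Mass of wall material removed by a single particle impact (kg)."""
    return K * mass_kg * velocity_ms ** n * angle_function(alpha_rad)

# Accumulate wear over a set of Lagrangian particle impacts on one bend cell.
impacts = [(1e-9, 22.0, math.radians(25)), (1e-9, 18.0, math.radians(40))]
total = sum(erosion_per_impact(m, v, a) for m, v, a in impacts)
print(f"material removed in this cell: {total:.3e} kg")
```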
Abstract:
Statistical association between a single nucleotide polymorphism (SNP) genotype and a quantitative trait in genome-wide association studies is usually assessed using a linear regression model or, in the case of non-normally distributed trait values, the Kruskal-Wallis test. While linear regression models assume an additive mode of inheritance via equidistant genotype scores, the Kruskal-Wallis test merely tests for global differences in trait values associated with the three genotype groups. Both approaches thus exhibit suboptimal power when the underlying inheritance mode is dominant or recessive. Furthermore, these tests do not perform well in the common situations when only a few trait values are available in a rare genotype category (imbalance), or when the values associated with the three genotype categories exhibit unequal variance (variance heterogeneity). We propose a maximum test based on a Marcus-type multiple contrast test for relative effect sizes. This test allows model-specific testing of a dominant, additive or recessive mode of inheritance, and it is robust against variance heterogeneity. We show how to obtain mode-specific simultaneous confidence intervals for the relative effect sizes to aid in interpreting the biological relevance of the results. Further, we discuss the use of a related all-pairwise-comparisons contrast test with range-preserving confidence intervals as an alternative to the Kruskal-Wallis heterogeneity test. We applied the proposed maximum test to the Bogalusa Heart Study dataset, and gained a remarkable increase in the power to detect association, particularly for rare genotypes. Our simulation study also demonstrated that the proposed non-parametric tests control the family-wise error rate in the presence of non-normality and variance heterogeneity, contrary to the standard parametric approaches. We provide a publicly available R library, nparcomp, that can be used to estimate simultaneous confidence intervals or compatible multiplicity-adjusted p-values associated with the proposed maximum test.
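The following is a deliberately simplified Python illustration of the maximum-test idea (it is not the nparcomp implementation and omits the multiple-contrast machinery and simultaneous confidence intervals): genotypes are scored under dominant, additive and recessive codings, the strongest rank-based association is taken as the test statistic, and a permutation null provides the p-value.

```python
# Simplified mode-specific maximum test, illustrative only.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(7)
geno = rng.choice([0, 1, 2], size=300, p=[0.49, 0.42, 0.09])   # SNP genotypes
trait = 0.4 * (geno >= 1) + rng.standard_normal(300)           # dominant effect

codings = {"dominant": (geno >= 1).astype(float),
           "additive": geno.astype(float),
           "recessive": (geno == 2).astype(float)}

def max_stat(y):
    ranks = rankdata(y)           # rank-based, in the spirit of relative effects
    return max(abs(np.corrcoef(ranks, x)[0, 1]) for x in codings.values())

obs = max_stat(trait)
null = np.array([max_stat(rng.permutation(trait)) for _ in range(2000)])
p_value = (1 + (null >= obs).sum()) / (1 + len(null))
print(f"max-test statistic = {obs:.3f}, permutation p = {p_value:.4f}")
```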
Abstract:
Adult anchovies in the Bay of Biscay perform a north-to-south migration from late winter to early summer for spawning. However, what triggers and drives this geographic shift of the population remains unclear and poorly understood. An individual-based fish model has been implemented to explore the potential mechanisms that control anchovy's movement routes toward its spawning habitats. To achieve this goal, two fish movement behaviors – gradient detection through restricted area search and kinesis – simulated fish response to its dynamic environment. A bioenergetics model was used to represent individual growth and reproduction along the fish trajectory. The environmental forcing (food, temperature) of the model was provided by a coupled physical–biogeochemical model. We followed a hypothesis-testing strategy to carry out a series of simulations using different cues and computational assumptions. The gradient detection behavior was found to be the most suitable mechanism to recreate the observed shift of anchovy distribution under the combined effect of sea-surface temperature and zooplankton. In addition, our results suggested that southward movement occurred more actively from early April to mid-May, closely following the spatio-temporal evolution of zooplankton and temperature. In terms of fish bioenergetics, individuals that ended up in the southern part of the bay were in better condition based on energy content, suggesting the resulting energy gain as an ecological explanation for this migration. The kinesis approach resulted in moderate performance, producing the distribution pattern with the highest spread. Finally, model performance was not significantly affected by changes in the starting date, initial fish distribution and number of particles used in the simulations, whereas it was drastically influenced by the adopted cues.
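The gradient-detection rule can be pictured with a toy example: the sketch below (purely illustrative, with an invented habitat-suitability field standing in for the temperature and zooplankton cues) moves a virtual fish to the best-scoring cell within a small search radius at every step.

```python
# Toy "gradient detection through restricted area search" movement rule.
import numpy as np

ny, nx = 100, 100
yy, xx = np.mgrid[0:ny, 0:nx]
suitability = np.exp(-((xx - 30) ** 2 + (yy - 75) ** 2) / 800.0)  # hypothetical cue

def step(pos, radius=3):
    """Move to the highest-suitability cell within the search radius."""
    y, x = pos
    best, best_val = pos, suitability[y, x]
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < ny and 0 <= x2 < nx and suitability[y2, x2] > best_val:
                best, best_val = (y2, x2), suitability[y2, x2]
    return best

pos = (10, 80)                     # starting cell in the north of the toy domain
track = [pos]
for _ in range(60):
    pos = step(pos)
    track.append(pos)
print("final position:", track[-1], "suitability:", round(float(suitability[track[-1]]), 3))
```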
Abstract:
In this thesis, tool support is addressed for the combined disciplines of model-based testing and performance testing. Model-based testing (MBT) utilizes abstract behavioral models to automate test generation, thus decreasing the time and cost of test creation. MBT is a functional testing technique, focusing on output, behavior, and functionality. Performance testing, however, is non-functional and is concerned with responsiveness and stability under various load conditions. MBPeT (Model-Based Performance evaluation Tool) is one such tool, which utilizes probabilistic models, representing dynamic real-world user behavior patterns, to generate synthetic workload against a System Under Test and in turn carry out performance analysis based on key performance indicators (KPIs). Developed at Åbo Akademi University, the MBPeT tool currently comprises a downloadable command-line tool as well as a graphical user interface. The goal of this thesis project is two-fold: 1) to extend the existing MBPeT tool by deploying it as a web-based application, thereby removing the requirement of local installation, and 2) to design a user interface for this web application which adds new user interaction paradigms to the existing feature set of the tool. All phases of the MBPeT process will be realized via this single web deployment location, including probabilistic model creation, test configuration, test session execution against a SUT with real-time monitoring of user-configurable metrics, and final test report generation and display. This web application (MBPeT Dashboard) is implemented in the Java programming language on top of the Vaadin framework for rich internet application development. The Vaadin framework handles the complicated web communication processes and front-end technologies, freeing developers to implement the business logic as well as the user interface in pure Java. A number of experiments are run in a case study environment to validate the functionality of the newly developed Dashboard application as well as the scalability of the solution in handling multiple concurrent users. The results support a successful solution with regard to the functional and performance criteria defined, while improvements and optimizations are suggested to further increase both.
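As a conceptual sketch of driving synthetic workload from a probabilistic user model, in the spirit of MBPeT but using an invented model format and stand-in requests rather than the tool's actual API:

```python
# Illustrative probabilistic-workload sketch: a Markov chain of user actions
# drives fake requests, and response times are collected as a simple KPI.
import random, time

transitions = {                      # hypothetical user behaviour model
    "browse":   [("browse", 0.5), ("search", 0.3), ("checkout", 0.2)],
    "search":   [("browse", 0.6), ("checkout", 0.4)],
    "checkout": [("browse", 1.0)],
}

def perform(action):
    """Stand-in for an HTTP call against the system under test."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.01, 0.05))      # pretend network latency
    return action, time.perf_counter() - start

def simulate_user(steps=20):
    state, kpis = "browse", []
    for _ in range(steps):
        kpis.append(perform(state))
        actions, weights = zip(*transitions[state])
        state = random.choices(actions, weights=weights)[0]
    return kpis

results = simulate_user()
slowest = max(results, key=lambda r: r[1])
print(f"requests: {len(results)}, slowest action: {slowest[0]} ({slowest[1] * 1000:.1f} ms)")
```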
Abstract:
In the landslide-prone area near Nice international airport, southeastern France, an interdisciplinary approach is applied to develop realistic lithological/geometrical profiles and geotechnical/strength sub-seafloor models. Such models are indispensable for slope stability assessments using limit equilibrium or finite element methods. Regression analyses based on the undrained shear strength (s_u) of intact gassy sediments are used to generate a sub-seafloor strength model from 37 short dynamic and eight long static piezocone penetration tests, and laboratory experiments on one Calypso piston core and 10 gravity cores. Significant strength variations were detected when comparing measurements from the shelf and the shelf break, with a significant drop in s_u to 5.5 kPa being interpreted as a weak zone at a depth between 6.5 and 8.5 m below seafloor (mbsf). Here, a 10% reduction of the in situ total unit weight compared to the surrounding sediments is found to coincide with coarse-grained layers that act as a weak zone and detachment plane for former and present-day gravitational, retrogressive slide events, as seen in 2D chirp profiles. The combination of high-resolution chirp profiles and comprehensive geotechnical information allows us to compute enhanced 2D finite element slope stability analyses with undrained sediment response, compared to previous 2D numerical and 3D limit equilibrium assessments. Those models suggest that significant portions (detachment planes at 20 m or even 55 mbsf) of the Quaternary delta and slope apron deposits may be mobilized. Given that factors of safety are equal to or less than 1 when the effect of free gas is further considered, a high risk of a landslide event of considerable size off Nice international airport is identified.
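For intuition only, a back-of-the-envelope undrained infinite-slope check can be written around the reported weak-zone strength of about 5.5 kPa; the buoyant unit weight, depth and slope angle below are assumed values, and this is not a substitute for the finite element analyses described above.

```python
# Undrained infinite-slope factor of safety: FS = s_u / (gamma' * z * sin(b) * cos(b)).
# All geometry and unit-weight values are assumptions for illustration.
import math

def infinite_slope_fs(s_u_kpa, buoyant_unit_weight_kn_m3, depth_m, slope_deg):
    beta = math.radians(slope_deg)
    driving_stress = buoyant_unit_weight_kn_m3 * depth_m * math.sin(beta) * math.cos(beta)
    return s_u_kpa / driving_stress

fs = infinite_slope_fs(s_u_kpa=5.5, buoyant_unit_weight_kn_m3=6.0,
                       depth_m=7.5, slope_deg=8.0)
print(f"factor of safety ~ {fs:.2f}")
```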
Abstract:
Thin film adhesion often determines microelectronic device reliability, and it is therefore essential to have experimental techniques that accurately and efficiently characterize it. Laser-induced delamination is a novel technique that uses laser-generated stress waves to load thin films at high strain rates and extract the fracture toughness of the film/substrate interface. The effectiveness of the technique in measuring the interface properties of metallic films has been documented in previous studies. The objective of the current effort is to model the effect of residual stresses on the dynamic delamination of thin films. Residual stresses can be high enough to affect the crack advance and the mode mixity of the delamination event, and must therefore be adequately modeled to make accurate and repeatable predictions of fracture toughness. The equivalent axial force and bending moment generated by the residual stresses are included in a dynamic, nonlinear finite element model of the delaminating film, and the impact of residual stresses on the final extent of the interfacial crack, the relative contribution of shear failure, and the deformed shape of the delaminated film is studied in detail. Another objective of the study is to develop techniques to address issues related to the testing of polymeric films. These types of films adhere well to silicon, and the resulting crack advance is often much smaller than for metallic films, making the extraction of the interface fracture toughness more difficult. The use of an inertial layer, which enhances the amount of kinetic energy trapped in the film and thus the crack advance, is examined. It is determined that the inertial layer does improve the crack advance, although in a relatively limited fashion. The high interface toughness of polymer films often causes the film to fail cohesively when the crack front leaves the weakly bonded region and enters the strong interface. The use of a tapered pre-crack region that provides a more gradual transition to the strong interface is examined. The tapered triangular pre-crack geometry is found to be effective in reducing the induced stresses, thereby making it an attractive option. We conclude by studying the impact of modifying the pre-crack geometry to enable the testing of multiple polymer films.
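The equivalent loads mentioned above follow from simple statics: a uniform residual stress σ_r acting through a film of thickness h is equivalent to an axial force per unit width N = σ_r h at the film mid-plane, or to N applied at the film/substrate interface together with a bending moment M = σ_r h²/2. The sketch below evaluates these with illustrative, non-measured numbers.

```python
# Equivalent axial force and bending moment from a uniform residual stress;
# the stress and thickness values are illustrative, not measured.
sigma_r = 100e6        # residual stress, Pa (assumed tensile)
h = 1.0e-6             # film thickness, m

N = sigma_r * h                    # axial force per unit width, N/m
M = sigma_r * h ** 2 / 2.0         # moment per unit width about the interface, N*m/m
print(f"N = {N:.3e} N/m, M = {M:.3e} N*m/m")
```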
Abstract:
Background: Among other causes, the long-term outcome of hip prostheses in dogs is determined by aseptic loosening. Prevention of prosthesis complications can be achieved by optimization of the tribological system, which ultimately results in improved implant duration. In this context, a computerized model for the calculation of hip joint loadings during different motions would be of benefit. As a first step in the development of such an inverse dynamic multi-body simulation (MBS) model, we here present the setup of a canine hind limb model applicable for the calculation of ground reaction forces. Methods: The anatomical geometries of the MBS model were established using computer tomography (CT) and magnetic resonance imaging (MRI) data. The CT data were collected from the pelvis, femora, tibiae and pads of a mixed-breed adult dog. Geometric information about 22 muscles of the pelvic extremity of 4 mixed-breed adult dogs was determined using MRI. Kinematic and kinetic data obtained by motion analysis of a clinically healthy dog during a gait cycle (1 m/s) on an instrumented treadmill were used to drive the model in the multi-body simulation. Results and Discussion: The vertical ground reaction forces (z-direction) calculated by the MBS system show a maximum deviation of 1.75% BW for the left and 4.65% BW for the right hind limb from the treadmill measurements. The calculated peak ground reaction forces in the z- and y-directions were found to be comparable to the treadmill measurements, whereas the curve characteristics of the forces in the y-direction were not in complete alignment. Conclusion: In conclusion, it could be demonstrated that the developed MBS model is suitable for simulating ground reaction forces of dogs during walking. In forthcoming investigations the model will be developed further for the calculation of forces and moments acting on the hip joint during different movements, which can be of help in the context of the in silico development and testing of hip prostheses.
Abstract:
Doctorate in Economics
Abstract:
Generating sample models for testing a model transformation is no easy task. This paper explores the use of classifying terms and stratified sampling for developing richer test cases for model transformations. Classifying terms are used to define the equivalence classes that characterize the relevant subgroups for the test cases. From each equivalence class of object models, several representative models are chosen depending on the required sample size. We compare our results with test suites developed using random sampling, and conclude that by using an ordered and stratified approach the coverage and effectiveness of the test suite can be significantly improved.
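A hedged sketch of the stratified-sampling idea (with an invented model representation and classifying terms, not the paper's notation or tooling) is:

```python
# Illustrative stratified sampling of test models: classifying terms partition
# candidate object models into equivalence classes, and a fixed number of
# representatives is drawn from every class instead of sampling at random.
import random
from collections import defaultdict

random.seed(42)
# Hypothetical candidate instance models of a small metamodel, as dictionaries.
candidates = [{"classes": random.randint(0, 10), "abstract": random.random() < 0.3}
              for _ in range(500)]

def classifying_term(model):
    """Maps a model to its equivalence class, e.g. empty/small/large x abstract?."""
    size = "empty" if model["classes"] == 0 else "small" if model["classes"] <= 5 else "large"
    return size, model["abstract"]

strata = defaultdict(list)
for m in candidates:
    strata[classifying_term(m)].append(m)

per_stratum = 3                     # required sample size per equivalence class
test_suite = [m for members in strata.values()
              for m in random.sample(members, min(per_stratum, len(members)))]
print(f"{len(strata)} equivalence classes, {len(test_suite)} test models selected")
```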