958 results for Calibration estimators
Peer reviewed
Abstract:
Current interest in measuring quality of life is generating interest in the construction of computerized adaptive tests (CATs) with Likert-type items. Calibration of an item bank for use in CAT requires collecting responses to a large number of candidate items. However, the number is usually too large to administer to each subject in the calibration sample. The concurrent anchor-item design solves this problem by splitting the items into separate subtests, with some common items across subtests; then administering each subtest to a different sample; and finally running estimation algorithms once on the aggregated data array, from which a substantial number of responses are then missing. Although the use of anchor-item designs is widespread, the consequences of several configuration decisions on the accuracy of parameter estimates have never been studied in the polytomous case. The present study addresses this question by simulation, comparing the outcomes of several alternatives on the configuration of the anchor-item design. The factors defining variants of the anchor-item design are (a) subtest size, (b) balance of common and unique items per subtest, (c) characteristics of the common items, and (d) criteria for the distribution of unique items across subtests. The results of this study indicate that maximizing accuracy in item parameter recovery requires subtests of the largest possible number of items and the smallest possible number of common items; the characteristics of the common items and the criterion for distribution of unique items do not affect accuracy.
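As an illustration of the design described above, the following minimal Python sketch (not from the paper; all function and parameter names are hypothetical, and the random Likert responses are a placeholder for a real response model) assembles the aggregated data array of a concurrent anchor-item design: a block of common anchor items shared by every subtest, a block of unique items per subtest, and structurally missing responses for the items a calibration sample never saw.

```python
import numpy as np

def build_anchor_item_design(n_subtests=4, n_common=5, n_unique=20,
                             n_per_sample=200, n_categories=5, seed=0):
    """Assemble the aggregated data array of a concurrent anchor-item design.

    Every subtest contains the same `n_common` anchor items plus its own block
    of `n_unique` items; each sample answers only its subtest, so the
    aggregated array contains structurally missing responses (NaN).
    """
    rng = np.random.default_rng(seed)
    n_items = n_common + n_subtests * n_unique        # total item bank size
    n_subjects = n_subtests * n_per_sample
    data = np.full((n_subjects, n_items), np.nan)     # aggregated data array

    for s in range(n_subtests):
        rows = slice(s * n_per_sample, (s + 1) * n_per_sample)
        # columns of this subtest: the shared anchors plus its unique block
        unique_cols = np.arange(n_common + s * n_unique,
                                n_common + (s + 1) * n_unique)
        cols = np.concatenate([np.arange(n_common), unique_cols])
        # placeholder Likert responses (1..n_categories); a real study would
        # generate these from a polytomous IRT model
        data[rows, cols] = rng.integers(1, n_categories + 1,
                                        size=(n_per_sample, cols.size))
    return data

responses = build_anchor_item_design()
print(responses.shape, np.isnan(responses).mean())    # array size, missing rate
```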
Abstract:
We provide a new multivariate calibration function based on South Atlantic modern assemblages of planktonic foraminifera and atlas water-column parameters from the Antarctic Circumpolar Current to the Subtropical Gyre and tropical warm waters (i.e., 60°S to 0°S). For this purpose, we used a dataset with the abundance patterns of 35 taxonomic groups of planktonic foraminifera in 141 surface sediment samples. Five factors, which account for 93% of the total variance of the original data and represent the main regional oceanographic fronts, were retained for the analysis. The new calibration function F141-35-5 enables the reconstruction of Late Quaternary summer and winter sea-surface temperatures with a statistical error of ~0.5°C. Our function was verified by applying it to a sediment core from the western South Atlantic. The downcore reconstruction shows negative sea-surface temperature anomalies during the early-mid Holocene and temperatures within the range of modern values during the late Holocene. This pattern is consistent with available reconstructions.
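The workflow described above (141 core-top samples, 35 taxa, 5 factors, regression onto observed sea-surface temperature) could be sketched as below. This is not the authors' code: the factor extraction via SVD, the least-squares regression, and the synthetic stand-in data are all assumptions chosen only to illustrate a transfer-function-style calibration.

```python
import numpy as np

def calibrate_transfer_function(abundances, sst, n_factors=5):
    """Reduce the species-abundance matrix to a few factors, then regress
    observed sea-surface temperature on the sample factor loadings."""
    # factor extraction via SVD of the row-normalised abundance matrix
    norm = abundances / np.linalg.norm(abundances, axis=1, keepdims=True)
    u, s, vt = np.linalg.svd(norm, full_matrices=False)
    loadings = u[:, :n_factors] * s[:n_factors]        # sample factor loadings

    # least-squares regression of SST onto the loadings (plus intercept)
    design = np.column_stack([loadings, np.ones(len(sst))])
    coeffs, *_ = np.linalg.lstsq(design, sst, rcond=None)
    rmse = np.sqrt(np.mean((sst - design @ coeffs) ** 2))   # calibration error
    return coeffs, vt[:n_factors], rmse

def reconstruct_sst(downcore_abundances, coeffs, taxa_factors):
    """Project downcore samples onto the calibration factors and apply the
    regression to estimate past sea-surface temperature."""
    norm = downcore_abundances / np.linalg.norm(downcore_abundances, axis=1,
                                                keepdims=True)
    design = np.column_stack([norm @ taxa_factors.T, np.ones(len(norm))])
    return design @ coeffs

# synthetic stand-in data only, to show the calling sequence
rng = np.random.default_rng(0)
core_tops = rng.random((141, 35))
modern_sst = rng.uniform(0.0, 28.0, 141)
coeffs, taxa_factors, rmse = calibrate_transfer_function(core_tops, modern_sst)
print(rmse, reconstruct_sst(core_tops[:3], coeffs, taxa_factors))
```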
Abstract:
PEDRINI, Aldomar; WESTPHAL, F. S.; LAMBERT, R. A methodology for building energy modelling and calibration in warm climates. Building and Environment, v. 37, p. 903-912, 2002. Available at:
Abstract:
The work presented in this paper is related to Depth Recovery from Focus. The approach starts by calibrating the focal length of the camera using the Gaussian lens law for the thin-lens camera model. Two approaches are presented, based on the availability of the internal distance of the lens.
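For reference, the Gaussian (thin-lens) law relates focal length f, object distance u, and image distance v by 1/f = 1/u + 1/v. The short Python sketch below, with purely hypothetical numbers, shows the textbook form of that calibration and the inverse step that recovers depth once the internal lens-to-sensor distance at best focus is known; it is not the paper's actual procedure.

```python
def calibrate_focal_length(object_distance, image_distance):
    """Gaussian lens law 1/f = 1/u + 1/v: recover the focal length f from one
    measurement with known object distance u and image distance v."""
    u, v = object_distance, image_distance
    return (u * v) / (u + v)

def depth_from_focus(focal_length, image_distance):
    """Invert the thin-lens law to get object depth u once the camera is in
    best focus at internal lens-to-sensor distance v."""
    f, v = focal_length, image_distance
    return (f * v) / (v - f)

# hypothetical values in metres: a target 2.0 m away focused at v = 0.0513 m
f = calibrate_focal_length(2.0, 0.0513)
print(f, depth_from_focus(f, 0.0513))   # ~50 mm lens, depth recovered as ~2.0 m
```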
Abstract:
Resources created at the University of Southampton for the module Remote Sensing for Earth Observation
Abstract:
In urban areas, interchange spacing and the adequacy of design for weaving, merge, and diverge areas can significantly influence available capacity. Traffic microsimulation tools allow detailed analyses of these critical areas in complex locations that often yield results that differ from the generalized approach of the Highway Capacity Manual. In order to obtain valid results, various inputs should be calibrated to local conditions. This project investigated basic calibration factors for the simulation of traffic conditions within an urban freeway merge/diverge environment. By collecting and analyzing urban freeway traffic data from multiple sources, specific Iowa-based calibration factors for use in VISSIM were developed. In particular, a repeatable methodology for collecting standstill distance and headway/time gap data on urban freeways was applied to locations throughout the state of Iowa. This collection process relies on the manual processing of video for standstill distances and individual vehicle data from radar detectors to measure the headways/time gaps. By comparing the data collected from different locations, it was found that standstill distances vary by location and lead-follow vehicle types. Headways and time gaps were found to be consistent within the same driver population and across different driver populations when the conditions were similar. Both standstill distance and headway/time gap were found to follow fairly dispersed and skewed distributions. Therefore, it is recommended that microsimulation models be modified to include the option for standstill distance and headway/time gap to follow distributions as well as be set separately for different vehicle classes. In addition, for the driving behavior parameters that cannot be easily collected, a sensitivity analysis was conducted to examine the impact of these parameters on the capacity of the facility. The sensitivity analysis results can be used as a reference to manually adjust parameters to match the simulation results to the observed traffic conditions. A well-calibrated microsimulation model can enable a higher level of fidelity in modeling traffic behavior and serve to improve decision making in balancing need with investment.
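The recommendation above, letting standstill distance and headway/time gap follow distributions and differ by vehicle class rather than using single fixed values, could be prototyped along the lines of the Python sketch below. This is not the project's code: the lognormal shape and every field name are assumptions chosen to match the "dispersed and skewed" description of the measured data.

```python
import numpy as np

def fit_lognormal(samples):
    """Fit a lognormal by matching the mean/std of the log-values; a simple
    choice for skewed, dispersed field measurements such as time gaps."""
    logs = np.log(np.asarray(samples, dtype=float))
    return logs.mean(), logs.std(ddof=1)

def summarize_by_class(gaps_by_class):
    """Per vehicle-class (lead-follow pairing) calibration summary that could
    feed class-specific, distribution-based car-following parameters."""
    params = {}
    for veh_class, gaps in gaps_by_class.items():
        mu, sigma = fit_lognormal(gaps)
        params[veh_class] = {"log_mean": mu, "log_std": sigma,
                             "median_gap_s": float(np.exp(mu))}
    return params

# hypothetical field-measured time gaps (seconds) for two lead-follow pairings
measured = {"car_following_car":   [1.2, 1.5, 0.9, 2.1, 1.8, 1.3],
            "car_following_truck": [1.9, 2.4, 2.0, 3.1, 2.6, 2.2]}
print(summarize_by_class(measured))
```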
Abstract:
Many-core systems are emerging from the need for more computational power and better power efficiency. However, many issues still surround many-core systems: they need specialized software before they can be fully utilized, and the hardware itself may differ from conventional computational systems. To gain efficiency from a many-core system, programs need to be parallelized. In many-core systems the cores are small and less powerful than those used in traditional computing, so running a conventional program is not an efficient option. Also, in Network-on-Chip based processors the network might become congested and the cores might work at different speeds. In this thesis, a dynamic load balancing method is proposed and tested on the Intel 48-core Single-Chip Cloud Computer by parallelizing a fault simulator. The maximum speedup is difficult to obtain due to severe bottlenecks in the system. In order to exploit all the available parallelism of the Single-Chip Cloud Computer, a runtime approach capable of dynamically balancing the load during the fault simulation process is used. The proposed dynamic fault simulation approach on the Single-Chip Cloud Computer shows up to 45X speedup compared to a serial fault simulation approach.

Many-core systems can also draw enormous amounts of power, and if this power is not controlled properly, the system might get damaged. One way to manage power is to set a power budget for the system. But if this power is drawn by just a few of the many cores, those few cores become extremely hot and might get damaged. Due to the increase in power density, multiple thermal sensors are deployed across the chip area to provide real-time temperature feedback for thermal management techniques. Thermal sensor accuracy is extremely prone to intra-die process variation and aging phenomena. These factors cause thermal sensor values to drift from their nominal values, which necessitates efficient calibration techniques before the sensor values are used. In addition, cores in modern many-core systems support dynamic voltage and frequency scaling, and thermal sensors located on the cores are sensitive to the core's current voltage level, meaning that dedicated calibration is needed for each voltage level. In this thesis, a general-purpose software-based auto-calibration approach for thermal sensors is also proposed, which calibrates the sensors across a range of voltage levels.
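A per-voltage-level sensor calibration of the kind described above could look roughly like the following Python sketch. It is an assumption-laden illustration (a linear gain/offset correction fitted per DVFS voltage level against reference temperatures), not the thesis implementation, and all names are hypothetical.

```python
import numpy as np

class ThermalSensorCalibrator:
    """Stores a separate linear correction (gain, offset) for each supported
    voltage level, since an on-core thermal sensor is sensitive to the core's
    current voltage.  Reference temperatures would come from a trusted source
    (e.g. an external measurement or an already-calibrated sensor)."""

    def __init__(self):
        self._corrections = {}   # voltage level -> (gain, offset)

    def calibrate(self, voltage, raw_readings, reference_temps):
        """Least-squares fit of reference = gain * raw + offset for one level."""
        gain, offset = np.polyfit(np.asarray(raw_readings, dtype=float),
                                  np.asarray(reference_temps, dtype=float), deg=1)
        self._corrections[voltage] = (gain, offset)

    def corrected(self, voltage, raw_reading):
        """Apply the correction stored for the core's current voltage level."""
        gain, offset = self._corrections[voltage]
        return gain * raw_reading + offset

# hypothetical calibration run for one voltage level, then a corrected reading
cal = ThermalSensorCalibrator()
cal.calibrate(voltage=1.1, raw_readings=[40, 55, 70], reference_temps=[42, 58, 74])
print(cal.corrected(1.1, 60))   # drift-corrected temperature at the 1.1 V level
```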