19 results for Test data


Relevance: 30.00%

Abstract:

Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis, including survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias they cause in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of the data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility to use combined longitudinal survey and register data: the Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at the person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted in order to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey and register data can be used to analyse and compare the non-response and attrition processes, test the type of missingness mechanism and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, over-reporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data. Neither the Missing At Random (MAR) assumption about the non-response and attrition mechanisms, nor the classical assumptions about measurement errors, turned out to be valid. Measurement errors in both spell durations and spell outcomes were found to cause bias in estimates from event history models. Low measurement accuracy affected the estimates of the baseline hazard most. The design-based estimates based on data from respondents to all waves of interest, weighted by the last-wave weights, displayed the largest bias. Using all the available data, including the spells of attriters up to the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to design weights reduces bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazards model estimators. The study discusses the implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers developing methods to correct for non-sampling biases in event history data.
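
To illustrate the weighting approach mentioned above, here is a minimal sketch (not the thesis code) of an IPCW-adjusted, design-weighted Kaplan-Meier estimator for unemployment spell durations. The column names, covariates and toy data are illustrative assumptions, not taken from the FI ECHP data.

```python
# Sketch: IPCW correction applied to design weights before a weighted
# Kaplan-Meier estimate. All names and data below are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def ipcw_weights(df, covariates):
    """Model the probability of NOT being lost to attrition and
    return inverse-probability-of-censoring weights."""
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], 1 - df["attrited"])        # 1 = still observed
    p_observed = model.predict_proba(df[covariates])[:, 1]
    return 1.0 / np.clip(p_observed, 1e-3, 1.0)          # truncate extreme weights

def weighted_kaplan_meier(durations, events, weights):
    """Design- and IPCW-weighted Kaplan-Meier survival curve."""
    surv = 1.0
    times, curve = [], []
    for time in np.unique(durations[events == 1]):
        d = weights[(durations == time) & (events == 1)].sum()  # weighted events
        n = weights[durations >= time].sum()                    # weighted risk set
        surv *= 1.0 - d / n
        times.append(time)
        curve.append(surv)
    return np.array(times), np.array(curve)

# Toy data standing in for linked survey-register spells.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "duration": rng.exponential(10, n).round() + 1,   # spell length in months
    "event": rng.integers(0, 2, n),                   # 1 = spell ended
    "attrited": rng.integers(0, 2, n),                # 1 = lost to attrition
    "design_weight": rng.uniform(0.5, 2.0, n),
    "age": rng.integers(20, 60, n),
    "educ": rng.integers(1, 4, n),
})
w = df["design_weight"].to_numpy() * ipcw_weights(df, ["age", "educ"])
times, surv = weighted_kaplan_meier(df["duration"].to_numpy(),
                                    df["event"].to_numpy(), w)
print(list(zip(times[:5], surv[:5].round(3))))
```

In this sketch the attrition model is a simple logistic regression; the key point is that the fitted non-attrition probabilities are inverted, truncated and multiplied into the design weights before the survival curve is computed.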

Relevance: 30.00%

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance: 30.00%

Abstract:

This Master's thesis project concerns Big Data transfer over parallel data links, and my main objective was to assist the research team at Saint-Petersburg National Research University ITMO in accomplishing the project and applying Green IT methods to the data transfer system. The goal of the team is to transfer Big Data over parallel data links using an SDN OpenFlow approach. My task as a team member was to compare existing data transfer applications, determine which achieves the highest transfer speed under which conditions, and explain the reasons. In the context of this thesis work, five utilities were compared: Fast Data Transfer (FDT), BBCP, BBFTP, GridFTP and FTS3. A number of scripts were developed to create random binary data (incompressible, to ensure a fair comparison between utilities), execute the utilities with specified parameters, log the results and system parameters, and plot graphs comparing the results. Transferring such enormous volumes of data can take a long time, hence the need to reduce energy consumption and make the transfers greener. In the context of the Green IT approach, our team used a cloud computing infrastructure, OpenStack. It is more efficient to allocate a specific amount of hardware resources for testing different scenarios than to use all the resources of our testbed. Testing our implementation on the OpenStack infrastructure ensured that the virtual channel carried no other traffic, so the highest possible throughput could be achieved. With the final results we can identify which utilities provide faster data transfer in different scenarios with specific TCP parameters, and these can then be used on real network data links.
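
The following is a minimal sketch of the benchmarking workflow described above, not the actual thesis scripts: generate incompressible random test data, run one transfer utility, and log the achieved throughput. The file paths, destination host and the example bbcp invocation are placeholders.

```python
# Sketch: generate random (incompressible) data, time a transfer, log throughput.
import os
import subprocess
import time
import csv

def make_random_file(path: str, size_mb: int) -> None:
    """Write size_mb megabytes of random, hence incompressible, data."""
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(os.urandom(1024 * 1024))

def run_transfer(cmd: list[str], size_mb: int) -> float:
    """Run one transfer command and return throughput in MB/s."""
    start = time.monotonic()
    subprocess.run(cmd, check=True)
    elapsed = time.monotonic() - start
    return size_mb / elapsed

if __name__ == "__main__":
    size_mb = 1024
    make_random_file("/tmp/testfile.bin", size_mb)

    # Placeholder invocation: each utility (FDT, BBCP, BBFTP, GridFTP, FTS3)
    # would get its own command list and TCP-related parameters here.
    cmd = ["bbcp", "-s", "8", "/tmp/testfile.bin", "dest-host:/tmp/"]

    throughput = run_transfer(cmd, size_mb)
    with open("results.csv", "a", newline="") as f:
        csv.writer(f).writerow(["bbcp", size_mb, f"{throughput:.1f}"])
    print(f"bbcp: {throughput:.1f} MB/s")
```

Repeating this loop over each utility and each parameter set, with freshly generated random data, gives comparable throughput figures across scenarios.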

Relevance: 30.00%

Abstract:

Most applications of airborne laser scanner data to forestry require that the point cloud be normalized, i.e., that each point represent height above the ground instead of elevation. To normalize the point cloud, a digital terrain model (DTM), derived from the ground returns in the point cloud, is employed. Unfortunately, extracting accurate DTMs from airborne laser scanner data is a challenging task, especially in tropical forests where the canopy is normally very thick (partially closed), so that only a limited number of laser pulses reach the ground. Therefore, robust algorithms for extracting accurate DTMs in low-ground-point-density situations are needed in order to realize the full potential of airborne laser scanner data in forestry. The objective of this thesis is to develop algorithms for processing airborne laser scanner data in order to: (1) extract DTMs in demanding forest conditions (complex terrain and a low number of ground points) for applications in forestry; (2) estimate canopy base height (CBH) for forest fire behavior modeling; and (3) assess the robustness of LiDAR-based high-resolution biomass estimation models against different field plot designs. Here, the aim is to find out whether field plot data gathered by professional foresters can be combined with field plot data gathered by professionally trained community foresters and used in LiDAR-based high-resolution biomass estimation modeling without affecting prediction performance. The question of interest in this case is whether the local forest communities can achieve the level of technical proficiency required for accurate forest monitoring. The algorithms for extracting DTMs from LiDAR point clouds presented in this thesis address the challenges of extracting DTMs in low-ground-point situations and in complex terrain, while the algorithm for CBH estimation addresses the challenge of variations in the distribution of points in the LiDAR point cloud caused by, for example, variations in tree species and the season of data acquisition. These algorithms are adaptive with respect to point cloud characteristics and exhibit a high degree of tolerance to variations in the density and distribution of points in the LiDAR point cloud. Comparisons with existing DTM extraction algorithms showed that the algorithms proposed in this thesis performed better with respect to the accuracy of estimating tree heights from airborne laser scanner data. On the other hand, the proposed DTM extraction algorithms, being mostly based on trend surface interpolation, cannot retain small terrain features (e.g., bumps, small hills and depressions). Therefore, the DTMs generated by these algorithms are only suitable for forestry applications whose primary objective is to estimate tree heights from normalized airborne laser scanner data. The algorithm for estimating CBH proposed in this thesis, in turn, is based on the idea of a moving voxel, in which gaps (openings in the canopy) that act as fuel breaks are located and their height is estimated. Test results showed a slight improvement in CBH estimation accuracy over existing CBH estimation methods based on height percentiles of the airborne laser scanner data. However, being based on the moving-voxel idea, this algorithm has one main advantage over existing CBH estimation methods in the context of forest fire modeling: it has great potential for providing information about vertical fuel continuity. This information can be used to create vertical fuel continuity maps, which can provide more realistic information on the risk of crown fires than CBH alone.
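
As a concrete illustration of the normalization step described above (not of the thesis algorithms themselves), the following sketch interpolates a DTM surface from already-classified ground returns and subtracts it from each point's elevation. The ground classification, which is the hard part in dense tropical canopy, is assumed to be given, and the toy point cloud is hypothetical.

```python
# Sketch: point-cloud normalization by subtracting an interpolated DTM
# so that z becomes height above ground instead of elevation.
import numpy as np
from scipy.interpolate import griddata

def normalize_point_cloud(points: np.ndarray, ground: np.ndarray) -> np.ndarray:
    """points, ground: (N, 3) arrays of x, y, elevation.
    Returns a copy of points with z replaced by height above ground."""
    dtm_z = griddata(ground[:, :2], ground[:, 2], points[:, :2], method="linear")
    # Fall back to nearest-neighbour where linear interpolation is undefined
    # (points outside the convex hull of the ground returns).
    nearest = griddata(ground[:, :2], ground[:, 2], points[:, :2], method="nearest")
    dtm_z = np.where(np.isnan(dtm_z), nearest, dtm_z)
    out = points.copy()
    out[:, 2] = points[:, 2] - dtm_z
    return out

# Toy example: a gently sloping ground surface with vegetation returns above it.
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(0, 100, 200),
                          rng.uniform(0, 100, 200),
                          np.zeros(200)])
ground[:, 2] = 100 + 0.05 * ground[:, 0]          # sloping terrain elevation
veg = ground[:50].copy()
veg[:, 2] += rng.uniform(5, 30, 50)               # canopy returns above ground
normalized = normalize_point_cloud(veg, ground)
print(normalized[:3, 2].round(1))                 # heights above ground, not elevations
```

Tree heights and height percentiles for CBH or biomass modeling are then computed from the normalized heights, which is why the accuracy of the underlying DTM matters so much.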