922 results for Error Correction Coding, Error Resilience, MPEG-4, Video Coding
Abstract:
All social surveys suffer from different types of errors, of which one of the most studied is non-response bias. Non-response bias is a systematic error that occurs because individuals differ in their accessibility and propensity to participate in a survey according to their own characteristics as well as characteristics of the survey itself. The extent of the problem depends heavily on the correlation between response mechanisms and key survey variables. However, non-response bias is difficult to measure or correct for, owing to the lack of relevant data about the whole target population or sample. In this paper, non-response follow-up surveys are considered as a possible source of information about non-respondents. Non-response follow-ups, however, suffer from two methodological issues: they themselves operate through a response mechanism that can cause potential non-response bias, and they pose a problem of comparability of measurement, mostly because the survey design differs between the main survey and the non-response follow-up. In order to detect possible bias, the survey variables included in non-response surveys must be related to the mechanism of participation, yet not be sensitive to measurement effects due to the different designs. Based on the accumulated experience of four similar non-response follow-ups, we studied the survey variables that fulfill these conditions. We differentiated socio-demographic variables, which are measurement-invariant but have a lower correlation with non-response, from variables that measure attitudes, such as trust, social participation, or integration in the public sphere, which are more sensitive to measurement effects but potentially more appropriate for accounting for the non-response mechanism. Our results show that education level, work status, and living alone, as well as political interest, satisfaction with democracy, and trust in institutions, are pertinent variables to include in non-response follow-ups of general social surveys.
- See more at: https://ojs.ub.uni-konstanz.de/srm/article/view/6138
Abnormal Error Monitoring in Math-Anxious Individuals: Evidence from Error-Related Brain Potentials.
Abstract:
This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error monitoring processing. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found an enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures, or post-error measures. The amplitude of the ERN was negatively related to participants' math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low-resolution electromagnetic tomography (sLORETA), we found greater activation of the insula for errors on the numerical task than for errors on the nonnumerical task, only in the HMA group. The results were interpreted according to the motivational significance theory of the ERN.
Abstract:
Adjusting behavior following the detection of inappropriate actions allows flexible adaptation to task demands and environmental contingencies during goal-directed behaviors. Post-error behavioral adjustments typically consist in adopting a more cautious response mode, which manifests as a slowing down of response speed. Although converging evidence implicates the dorsolateral prefrontal cortex (DLPFC) in post-error behavioral adjustment, whether and when the left or right DLPFC is critical for post-error slowing (PES), as well as the underlying brain mechanisms, remain highly debated. To resolve these issues, we used single-pulse transcranial magnetic stimulation (TMS) in healthy human adults to disrupt the left or right DLPFC selectively at various delays within the 30–180 ms interval following the commission of false alarms (FAs), while participants performed a standard visual Go/NoGo task. PES significantly increased after TMS disruption of the right, but not the left, DLPFC at 150 ms post-FA response. We discuss these results in terms of an involvement of the right DLPFC in reducing the detrimental effects of error detection on subsequent behavioral performance, as opposed to implementing an adaptive error-induced slowing down of response speed.
Abstract:
Fourier analysis makes it possible to characterise the outline of a tooth and obtain a series of parameters for subsequent multivariate analysis. However, the great complexity of some shapes makes it necessary to determine the intrinsic measurement error that this produces. The aim of this work is to apply and validate Fourier analysis in the study of the dental shape of the lower second molar (M2) of four species of Hominoidea primates, in order to explore interspecific morphometric variability and to determine measurement error at the intra- and inter-observer level. The outline of the occlusal surface of the tooth was digitised, and the functions derived from the Fourier analysis were used in Discriminant Analyses and Mantel tests (Pearson correlations) to determine shape differences from the measurements taken. The results indicate that Fourier analysis captures the shape variability of molar teeth in Hominoidea primate species. In addition, the high levels of both intra-observer (r>0.9) and inter-observer (r>0.7) correlation suggest that morphometric descriptions of the tooth obtained from Fourier methods by different observers can be pooled for subsequent analyses.
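The abstract describes deriving functions from a Fourier analysis of the digitised occlusal outline. As a rough illustration of the idea only (a minimal radial-Fourier sketch, not the authors' exact pipeline; the sampling and harmonic count are arbitrary choices), scale-invariant shape descriptors of a closed contour can be computed like this:

```python
import math

def fourier_descriptors(contour, n_harmonics=8):
    """Radial Fourier descriptors of a closed 2-D contour.

    contour: list of (x, y) points sampled evenly along the outline.
    Returns harmonic amplitudes normalised by the 0th (size) term,
    so the descriptors are scale-invariant.
    """
    n = len(contour)
    cx = sum(p[0] for p in contour) / n
    cy = sum(p[1] for p in contour) / n
    # Distance of each outline point from the centroid ("radial function").
    r = [math.hypot(x - cx, y - cy) for x, y in contour]
    amps = []
    for k in range(n_harmonics + 1):
        a = sum(r[i] * math.cos(2 * math.pi * k * i / n) for i in range(n)) / n
        b = sum(r[i] * math.sin(2 * math.pi * k * i / n) for i in range(n)) / n
        amps.append(math.hypot(a, b))
    # Divide by the mean radius (k = 0 term) to remove size.
    return [a / amps[0] for a in amps[1:]]

# Sanity check: a circle has no shape variation, so all harmonics ~ 0.
circle = [(math.cos(t), math.sin(t))
          for t in (2 * math.pi * i / 64 for i in range(64))]
print(fourier_descriptors(circle, 4))
```

Descriptor vectors of this kind are what would then feed a discriminant analysis or a Mantel test across observers.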
Abstract:
Performance standards for positron emission tomography (PET) were developed to make it possible to compare systems from different generations and manufacturers. This resulted in the NEMA methodology in North America and the IEC methodology in Europe. In practice, NEMA NU 2-2001 is the method of choice today. These standardized methods allow assessment of the physical performance of new commercial dedicated PET/CT tomographs. The point spread in image formation is one of the factors that blur the image; the phenomenon is often called the partial volume effect. Several methods for correcting for partial volume are under research, but no real agreement exists on how to solve it. The influence of the effect varies in different clinical settings, and it is likely that new methods are needed to solve this problem. Most clinical PET work is done in the field of oncology, where whole-body PET combined with CT is the standard investigation today. Despite the progress in PET imaging techniques, the visualization, and especially the quantification, of small lesions remains a challenge. In addition to partial volume, movement of the object is a significant source of error. The main causes of movement are respiratory and cardiac motion. Most new commercial scanners are capable of respiratory gating in addition to cardiac gating, and this technique has been used in patients with cancer of the thoracic region and in patients being studied for the planning of radiation therapy. For routine cardiac applications, such as assessment of viability and perfusion, only cardiac gating has been used. However, new targets such as plaque imaging or molecular imaging of new therapies require better control of cardiac motion, including that caused by respiratory motion. To overcome these problems in cardiac work, a dual gating approach has been proposed.
In this study we investigated the physical performance of a new whole-body PET/CT scanner using the NEMA standard, compared methods for partial volume correction in PET studies of the brain, and developed and tested a new robust method for dual cardiac-respiratory gated PET with phantom, animal, and human data. Results from the performance measurements showed the feasibility of the new scanner design in 2D and 3D whole-body studies. Partial volume was corrected, but there is no single best method among those tested, as the correction also depends on the radiotracer and its distribution; new methods need to be developed for proper correction. The dual gating algorithm developed is shown to handle dual-gated data, preserving quantification and clearly eliminating the majority of contraction and respiration movement.
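At its core, dual cardiac-respiratory gating amounts to sorting coincidence events into joint cardiac x respiratory bins before reconstruction. A minimal, hypothetical sketch of that binning logic (the function names, bin counts, and phase model are illustrative assumptions, not the algorithm developed in the thesis):

```python
import bisect

def dual_gate(event_times, ecg_triggers, resp_phase, n_cardiac=8, n_resp=4):
    """Assign each coincidence event to a (cardiac, respiratory) gate.

    event_times : event timestamps in seconds
    ecg_triggers: sorted R-peak times (s) from the ECG signal
    resp_phase  : function t -> respiratory phase in [0, 1)
    Returns a dict mapping (cardiac_bin, resp_bin) -> list of event times.
    """
    gates = {}
    for t in event_times:
        i = bisect.bisect_right(ecg_triggers, t) - 1
        if i < 0 or i + 1 >= len(ecg_triggers):
            continue  # event outside a complete cardiac cycle: discard
        cycle = ecg_triggers[i + 1] - ecg_triggers[i]
        frac = (t - ecg_triggers[i]) / cycle  # position within the R-R interval
        c_bin = min(int(frac * n_cardiac), n_cardiac - 1)
        r_bin = min(int(resp_phase(t) * n_resp), n_resp - 1)
        gates.setdefault((c_bin, r_bin), []).append(t)
    return gates

# Toy example: 1 Hz heart rate, 0.25 Hz breathing.
g = dual_gate([0.1, 0.6, 1.1], [0.0, 1.0, 2.0], lambda t: (t * 0.25) % 1.0)
print(g)
```

Each gate is then reconstructed separately, which is why gating trades motion blur for count statistics per image.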
Abstract:
From time to time, matters of high social interest come before the courts. In recent years, coinciding with the international financial crisis, cases concerning complex banking contracts, above all interest rate swaps, have been highly prominent. In a context of international, and also national, financial crisis, the number of claims brought against banks and financial institutions has grown. These are claims seeking a declaration that the contracts in question are void, based mainly on an error of consent, a nullity that entails the return of the amounts invested, of the expected returns, or of the penalties applied upon early termination of such contracts by clients whose expectations were defrauded. These pages aim to study the litigation on swaps, mainly Interest Rate Swaps, and to identify which contracts count as complex banking contracts and which rules of contractual consent govern them. As regards the former, it should be noted that the high number of cases brought before our courts has not produced as broad a body of case law as might be imagined. The great majority concern petitions for (total or partial) nullity of the contract brought by clients at the time the Euribor fell: many who had taken out what they understood to be insurance cover against the high interest rates they had to pay on their mortgages found that, under the terms agreed in the contract, they had to pay a settlement to their counterparty (a credit institution).
Abstract:
The directional consistency and skew-symmetry statistics have been proposed as global measurements of social reciprocity. Although both measures can be useful for quantifying social reciprocity, researchers need to know whether these estimators are biased in order to assess descriptive results properly. That is, if estimators are biased, researchers should compare actual values with expected values under the specified null hypothesis. Furthermore, standard errors are needed to enable suitable assessment of discrepancies between actual and expected values. This paper aims to derive some exact and approximate expressions in order to obtain bias and standard error values for both estimators for round-robin designs, although the results can also be extended to other reciprocal designs.
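One common formulation of the directional consistency (DC) index for a round-robin sociomatrix can be sketched as follows. This is only an illustration of the statistic being discussed; formulations vary slightly across authors, and the paper's bias and standard-error expressions are not reproduced here:

```python
def directional_consistency(m):
    """Directional consistency (DC) index for a round-robin sociomatrix.

    m[i][j] = number of behaviours directed from individual i to j.
    For each dyad, H and L are the counts in the more- and less-frequent
    direction; DC = sum(H - L) / sum(H + L) over all dyads.
    DC = 0 means fully reciprocal interactions; DC = 1 fully one-directional.
    """
    n = len(m)
    num = den = 0
    for i in range(n):
        for j in range(i + 1, n):
            h, l = max(m[i][j], m[j][i]), min(m[i][j], m[j][i])
            num += h - l
            den += h + l
    return num / den if den else 0.0

# Fully one-directional matrix -> DC = 1.0
print(directional_consistency([[0, 5, 3], [0, 0, 2], [0, 0, 0]]))  # 1.0
# Fully reciprocal matrix -> DC = 0.0
print(directional_consistency([[0, 2], [2, 0]]))  # 0.0
```

Comparing such an observed value with its expectation and standard error under a null model is exactly the assessment step the paper argues for.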
Abstract:
Interview with Manuel Villegas
Abstract:
This study investigated the surface hardening of steels via experimental tests using a multi-kilowatt fiber laser as the laser source. The influence of laser power and laser power density on the hardening effect was investigated, and a microhardness analysis of various laser-hardened steels was performed. A thermodynamic model was developed to evaluate the thermal process of the surface treatment of a wide thin steel plate with a Gaussian laser beam, and the effect of laser linear oscillation hardening (LLOS) of steel was examined. An as-rolled ferritic-pearlitic steel and a tempered martensitic steel with 0.37 wt% C content were hardened under various laser power levels and laser power densities. The optimum power density that produced the maximum hardness was found to depend on the laser power, and the effect of laser power density on the produced hardness was revealed. The surface hardness, hardened depth, and required laser power density were compared between the samples. The fiber laser was briefly compared with a high-power diode laser in hardening medium-carbon steel. Microhardness (HV0.01) testing was performed on seven different laser-hardened steels, including rolled steel, quenched and tempered steel, soft-annealed alloyed steel, and conventionally through-hardened steel with different carbon and alloy contents. The surface hardness and hardened depth were compared among the samples. The effect of grain size on the surface hardness of ferritic-pearlitic steel and pearlitic-cementite steel was evaluated, and in-grain indentation was used to measure the hardness of pearlitic and cementite structures. The macrohardness of the base material was found to be related to the microhardness of the softer phase structure, and the measured microhardness values were compared with conventional macrohardness (HV5) results. The thermodynamic model was used to calculate the temperature cycle, the Ac1 and Ac3 boundaries, the homogenization time, and the cooling rate.
The equations were solved numerically with an error of less than 10⁻⁸. The temperature distributions for various thicknesses were compared under different laser traverse speeds. The lag of the Ac1 and Ac3 boundaries was verified by experiments done on six different steels. The calculated thermal cycle and hardened depth were compared with measured data, and correction coefficients were applied to the model for AISI 4340 steel. AISI 4340 steel was hardened by laser linear oscillation hardening (LLOS). Equations were derived to calculate the overlapped width of adjacent tracks and the number of overlapped scans in the center of the scanned track. The effect of oscillation frequency on the hardened depth was investigated by microscopic evaluation and hardness measurement. The homogeneity of hardness and hardened depth with different processing parameters was investigated, and the hardness profiles were compared with the results obtained with conventional single-track hardening. LLOS proved well suited to surface hardening over a relatively large rectangular area with considerable depth of hardening. Compared with conventional single-track scanning, LLOS produced notably smaller hardened depths, while at 40 and 100 Hz LLOS resulted in higher hardness within a depth of about 0.6 mm.
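The thesis' thermodynamic model for a Gaussian beam on a wide thin plate is not reproduced in the abstract. As a simpler textbook building block of the same kind of calculation, the classical Carslaw-Jaeger solution for a semi-infinite solid under a constant surface heat flux gives the temperature rise analytically; the material values in the demo are rough steel-like assumptions, not data from the study:

```python
import math

def temperature_rise(q, k, alpha, t, z=0.0):
    """Temperature rise in a semi-infinite solid under constant surface flux.

    Classical solution:  dT(z, t) = (2*q/k) * sqrt(alpha*t) * ierfc(x),
    with x = z / (2*sqrt(alpha*t)) and ierfc(x) = exp(-x^2)/sqrt(pi) - x*erfc(x).

    q     : absorbed surface heat flux (W/m^2)
    k     : thermal conductivity (W/m K)
    alpha : thermal diffusivity (m^2/s)
    t     : heating time (s), z: depth below the surface (m)
    """
    if t <= 0:
        return 0.0
    x = z / (2.0 * math.sqrt(alpha * t))
    ierfc = math.exp(-x * x) / math.sqrt(math.pi) - x * math.erfc(x)
    return (2.0 * q / k) * math.sqrt(alpha * t) * ierfc

# Rough steel-like values: k ~ 40 W/m K, alpha ~ 1e-5 m^2/s, flux 5e6 W/m^2.
for t in (0.01, 0.05, 0.1):
    print(t, temperature_rise(5e6, 40.0, 1e-5, t))
```

A full hardening model would add the moving Gaussian beam, the finite plate thickness, and the Ac1/Ac3 criteria; this fragment only shows the kind of temperature-cycle calculation involved.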
Abstract:
This article deals with a contour error controller (CEC) applied to a high-speed biaxial table. It works simultaneously with the table's axis controllers, assisting them. In the early stages of the investigation, it was observed that its main problem is imprecision when tracking non-linear contours at high speeds. The objectives of this work are to show that this problem is caused by the lack of exactness of the contour error mathematical model and to propose modifications to it. An additional term is included, resulting in a more accurate value of the contour error and enabling the use of this type of motion controller at higher feedrates. Results from simulated and experimental tests are compared with those of a common PID controller and a non-corrected CEC in order to analyse the effectiveness of this controller on the system. The main conclusions are that the proposed contour error mathematical model is simple, accurate, and almost insensitive to the feedrate, and that a 20:1 reduction of the integral absolute contour error is possible.
Abstract:
Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis. These challenges include survey errors due to sampling, non-response, attrition, and measurement. This study deals with non-response, attrition, and measurement errors in event history data and the bias they cause in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of data to use, and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility of using combined longitudinal survey-register data. The Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at the person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted in order to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey-register data can be used to analyse and compare the non-response and attrition processes, test the type of missingness mechanism, and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions, and, to a lesser extent, overreporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data.
Neither the Missing At Random (MAR) assumption about non-response and attrition mechanisms, nor the classical assumptions about measurement errors, turned out to be valid. Both measurement errors in spell durations and spell outcomes were found to cause bias in estimates from event history models. Low measurement accuracy affected the estimates of baseline hazard most. The design-based estimates based on data from respondents to all waves of interest and weighted by the last wave weights displayed the largest bias. Using all the available data, including the spells by attriters until the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to design weights reduces bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazard model estimators. The study discusses implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers who develop methods to correct for non-sampling biases in event history data.
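The IPCW idea underlying the simulation study is to reweight subjects by the inverse of their probability of remaining under observation, so that dependent censoring (such as attrition) no longer biases survival estimates. A minimal sketch, assuming the censoring weights have already been estimated elsewhere (e.g. from a model of the drop-out process); the function name and interface are illustrative, not from the study:

```python
def ipcw_kaplan_meier(times, events, weights):
    """Weighted Kaplan-Meier survival estimate.

    times  : observed spell durations
    events : 1 if the spell ended (event observed), 0 if censored
    weights: IPCW weight per subject, typically 1 / P(still observed)
    Returns a list of (t, S(t)) pairs at each event time.
    """
    data = sorted(zip(times, events, weights))
    at_risk = sum(weights)  # weighted number at risk
    s, out, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        d_w = 0.0  # weighted events at time t
        c_w = 0.0  # weighted removals at time t (events + censorings)
        while i < len(data) and data[i][0] == t:
            _, e, w = data[i]
            if e:
                d_w += w
            c_w += w
            i += 1
        if d_w > 0:
            s *= 1.0 - d_w / at_risk
            out.append((t, s))
        at_risk -= c_w
    return out

# With all weights equal to 1 this reduces to the ordinary Kaplan-Meier.
print(ipcw_kaplan_meier([2, 3, 3, 5], [1, 1, 0, 1], [1, 1, 1, 1]))
```

The same weighting scheme carries over to a weighted Cox partial likelihood, which is the other estimator the simulation study examines.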