970 results for Statistical performance indexes
Abstract:
With the progress of computer technology, computers are expected to interact with humans more intelligently, presenting information according to the user's psychological and physiological characteristics. However, computer users with visual problems may have difficulty perceiving icons, menus, and other graphical information displayed on the screen, limiting the efficiency of their interaction with computers. In this dissertation, a personalized and dynamic image precompensation method was developed to improve the visual performance of computer users with ocular aberrations. The precompensation was applied to graphical targets before presenting them on the screen, aiming to counteract the visual blurring caused by the aberration of the user's eye. A complete and systematic modeling approach to retinal image formation in the computer user's eye was presented, drawing on tools such as Zernike polynomials, wavefront aberration, the Point Spread Function, and the Modulation Transfer Function. The user's ocular aberration was first measured with a wavefront aberrometer, serving as the reference for the precompensation model. The dynamic precompensation was generated from the aberration rescaled to the pupil diameter monitored in real time. The potential visual benefit of the dynamic precompensation method was explored through software simulation using aberration data from a real human subject. An "artificial eye" experiment was conducted by simulating the human eye with a high-definition camera, providing an objective evaluation of image quality after precompensation. In addition, an empirical evaluation with 20 human participants was designed and implemented, involving image recognition tests performed under a more realistic viewing environment of computer use.
The statistical analysis of the empirical experiment confirmed the effectiveness of the dynamic precompensation method, showing significant improvement in recognition accuracy. The merit and necessity of dynamic precompensation were also substantiated by comparison with static precompensation. The visual benefit of dynamic precompensation was further confirmed by the subjective assessments collected from the participants.
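The core idea of precompensation — inverse-filtering an image so that subsequent blurring by the eye's point spread function approximately restores it — can be illustrated with a minimal sketch. This is not the dissertation's implementation: the Gaussian optical transfer function below is a crude stand-in for a real ocular PSF derived from wavefront data, and the regularization constant `k` is an assumed value.

```python
import numpy as np

def wiener_precompensate(img, otf, k=1e-2):
    """Precompensate an image so that blurring by the given OTF
    approximately restores the original (Wiener-style inverse filter)."""
    F = np.fft.fft2(img)
    G = F * np.conj(otf) / (np.abs(otf) ** 2 + k)
    return np.real(np.fft.ifft2(G))

# Stand-in OTF: a Gaussian low-pass, a crude model of ocular blur.
n = 64
fx = np.fft.fftfreq(n)[:, None]
fy = np.fft.fftfreq(n)[None, :]
otf = np.exp(-(fx ** 2 + fy ** 2) / (2 * 0.05 ** 2))

img = np.zeros((n, n))
img[24:40, 24:40] = 1.0                  # a simple square target

blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * otf))
pre = wiener_precompensate(img, otf)
seen = np.real(np.fft.ifft2(np.fft.fft2(pre) * otf))  # what the "eye" perceives

err_plain = np.mean((blurred - img) ** 2)   # blur applied to the raw target
err_pre = np.mean((seen - img) ** 2)        # blur applied after precompensation
```

Under these assumptions, the precompensated image lands closer to the intended target after blurring than the unprocessed image does.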
Abstract:
Colleges base their admission decisions on a number of factors to determine which applicants have the potential to succeed. This study utilized data for students who graduated from Florida International University between 2006 and 2012. Two models were developed (one using SAT and the other using ACT as the principal explanatory variable) to predict college success, measured by the student's college grade point average at graduation. Other factors used in these predictions included high school performance, socioeconomic status, major, gender, and ethnicity. The model using ACT had a higher R^2, but the model using SAT had a lower mean square error. African Americans had a significantly lower college grade point average than graduates of other ethnicities. Females had a significantly higher college grade point average than males.
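The model comparison described above — fitting a linear regression and reporting both R^2 and mean square error — can be sketched as follows. The data here are synthetic and the coefficients are invented for illustration; they are not the study's estimates.

```python
import numpy as np

def fit_ols(X, y):
    """Least-squares fit; returns coefficients, R^2, and mean squared error."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    mse = np.mean(resid ** 2)
    r2 = 1 - resid.var() / y.var()
    return beta, r2, mse

rng = np.random.default_rng(0)
n = 200
sat = rng.normal(1100, 150, n)       # synthetic test scores
hs_gpa = rng.normal(3.2, 0.4, n)     # synthetic high-school GPA
# Invented relationship for illustration only:
college_gpa = 0.6 + 0.0012 * sat + 0.45 * hs_gpa + rng.normal(0, 0.3, n)

beta, r2, mse = fit_ols(np.column_stack([sat, hs_gpa]), college_gpa)
```

Fitting the same response with two different principal explanatory variables (SAT vs. ACT) and comparing the resulting `r2` and `mse` values reproduces the study's comparison logic.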
Abstract:
Archival research was conducted on the inception of preemployment psychological testing, as part of the background screening process, to select police officers for a local police department. Various issues and incidents were analyzed to help explain why this police department progressed from an abbreviated psychological battery to a much more sophisticated and comprehensive set of instruments. While doubts about psychological exams do exist, research has shown that many are valid and reliable in predicting the job performance of police candidates. Over a three-year period, the department hired 162 candidates (133 males and 29 females) who received "acceptable" psychological ratings and 71 candidates (58 males and 13 females) who received "marginal" psychological ratings. A document analysis examined variables identified as job performance indicators that police psychological testing tries to predict in order to "screen in" or "screen out" appropriate applicants. The areas of focus comprised the 6-month police academy, the 4-month Field Training Officer (FTO) Program, the remaining probationary period, and yearly performance up to five years of employment. Specific job performance variables were the final academy grade average, supervisors' evaluation ratings, reprimands, commendations, awards, citizen complaints, time losses, sick time usage, reassignments, promotions, and separations. A causal-comparative research design was used to determine whether there were statistically significant differences in these job performance variables between officers with "acceptable" and officers with "marginal" psychological ratings. The results of multivariate analyses of variance, t-tests, and chi-square procedures, as applicable, showed no significant differences between the two groups on any of the job performance variables.
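A t-test comparing the two screening groups on a continuous performance variable can be sketched as below, using the study's group sizes (162 vs. 71) but entirely synthetic scores drawn from identical distributions, so that — as in the study's finding — no true difference exists.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic final-academy grade averages for the two screening groups
# (illustrative numbers only; not the study's data).
acceptable = rng.normal(85, 5, 162)   # "acceptable" psychological rating
marginal = rng.normal(85, 5, 71)      # "marginal" rating, same distribution

# Welch's t-test (unequal variances not assumed equal).
t, p = stats.ttest_ind(acceptable, marginal, equal_var=False)
significant = p < 0.05
```

The same pattern extends to chi-square tests for categorical variables (e.g., promotions, separations) via `scipy.stats.chi2_contingency`.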
Abstract:
To support the evaluation of policies and strategies formulated by development agencies, indexes have been created with the aim of expressing the multiple dimensions of water resources in an easily interpretable form. Use of the Water Poverty Index (WPI) is spreading worldwide; it is formed by the combination of five sub-indices: Resources, Access, Capacity, Use, and Environment. Some criticisms of its construction have emerged, chief among them the allocation of the sub-index weights by an arbitrary process, which introduces subjectivity into the selection criteria. Principal Component Analysis (PCA), a statistical technique that accounts for the characteristics of the variables, is able to address this problem. The objective of this study is to compare the results of the original WPI with a version whose sub-index weights are generated by PCA, applied to the Seridó River hydrographic basin. We conclude that using PCA to allocate the weights of the Water Poverty Index identified the sub-indices Resources, Access, and Environment as the most representative for the Seridó river basin, and that this new index, WPI', presented wider ranges of values, making it easier to identify disparities among municipalities. In addition, the evaluation of the sub-indices in the study area has great potential to inform decision-makers in water resources management about the most critical locations deserving greater investment in the aspects analyzed, since the composite index itself cannot capture this information.
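Deriving composite-index weights from the first principal component can be sketched as follows. This is a generic PCA-weighting scheme, not necessarily the exact procedure of the study; the municipality scores are random placeholders.

```python
import numpy as np

def pca_weights(X):
    """Derive sub-index weights from the first principal component
    of the standardized data (absolute loadings rescaled to sum to 1)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    corr = np.cov(Z, rowvar=False)              # correlation-like matrix
    vals, vecs = np.linalg.eigh(corr)
    first = np.abs(vecs[:, np.argmax(vals)])    # loadings of PC1
    return first / first.sum()

rng = np.random.default_rng(2)
# Synthetic scores (0-100) for five WPI sub-indices across 20 municipalities.
X = rng.uniform(0, 100, size=(20, 5))
w = pca_weights(X)
wpi = X @ w                                     # weighted composite index
```

Sub-indices with the largest PC1 loadings (here, whichever dominate the synthetic data) receive the largest weights — the mechanism by which Resources, Access, and Environment emerged as most representative in the study.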
Abstract:
The main focus of this research is to design and develop a high-performance linear actuator based on a four-bar mechanism. The present work includes the detailed analysis (kinematics and dynamics), design, implementation, and experimental validation of the newly designed actuator. High performance is characterized by the acceleration of the actuator end effector. The principle of the newly designed actuator is to network the four-bar rhombus configuration (where some bars are extended to form an X shape) to attain high acceleration. First, a detailed kinematic analysis of the actuator is presented and its kinematic performance is evaluated through MATLAB simulations. The dynamic equation of the actuator is derived using the Lagrangian formulation, and a SIMULINK control model of the actuator is developed from it. In addition, Bond Graph methodology is used for dynamic simulation; the Bond Graph model comprises individual component models of the actuator along with the control, and the required torque was simulated with it. Results indicate that high acceleration (around 20g) can be achieved with modest (3 N-m or less) torque input. A practical prototype of the actuator was designed in SOLIDWORKS and then fabricated to verify the proof of concept. The design goal was to achieve a peak acceleration of more than 10g at the middle of the travel length, when the end effector travels the stroke length (around 1 m). The actuator is primarily designed to operate standalone and later for use in a 3RPR parallel robot. A DC motor drives the actuator, with a quadrature encoder attached to the motor to control the end effector. The associated control scheme was analyzed and integrated with the physical prototype. In standalone experiments, around 17g acceleration was achieved by the end effector (over a stroke from 0.2 m to 0.78 m).
Results indicate that the simulated and experimental responses of the developed dynamic model are in good agreement. Finally, a Design of Experiments (DOE) based statistical approach is introduced to identify the parametric combination that yields the greatest performance, with data collected using the Bond Graph model. This approach helps in designing the actuator without much complexity.
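The DOE step above can be sketched with a two-level full factorial design and main-effect estimation. The `accel` response function below is hypothetical (a linear stand-in for the Bond Graph simulation), and the factor names are assumptions, not the study's actual parameters.

```python
import itertools
import numpy as np

def main_effects(n_factors, response):
    """Main effect of each factor in a 2-level full factorial design:
    mean response at the high (+1) level minus mean at the low (-1) level."""
    runs = np.array(list(itertools.product([-1, 1], repeat=n_factors)))
    y = np.array([response(r) for r in runs])
    return np.array([y[runs[:, j] == 1].mean() - y[runs[:, j] == -1].mean()
                     for j in range(n_factors)])

# Hypothetical response: peak acceleration as a function of coded
# link-length, motor-torque, and mass settings (illustrative only).
def accel(x):
    link, torque, mass = x
    return 10 + 0.5 * link + 2.0 * torque - 1.5 * mass

effects = main_effects(3, accel)   # one main effect per factor
```

The factor with the largest absolute effect (here, torque) is the one most worth tuning — the kind of conclusion a DOE screening study yields.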
Abstract:
Dynamic positron emission tomography (PET) imaging can be used to track the distribution of injected radio-labelled molecules over time in vivo. This is a powerful technique, which provides researchers and clinicians the opportunity to study the status of healthy and pathological tissue by examining how it processes substances of interest. Widely used tracers include 18F-fluorodeoxyglucose, an analog of glucose, which is used as the radiotracer in over ninety percent of PET scans. This radiotracer provides a way of quantifying the distribution of glucose utilisation in vivo. The interpretation of PET time-course data is complicated because the measured signal is a combination of vascular delivery and tissue retention effects. If the arterial time-course is known, the tissue time-course can typically be expressed in terms of a linear convolution between the arterial time-course and the tissue residue function. As the residue represents the amount of tracer remaining in the tissue, it can be thought of as a survival function; such functions have been examined in great detail by the statistics community. Kinetic analysis of PET data is concerned with estimation of the residue and associated functionals such as flow, flux, and volume of distribution. This thesis presents a Markov chain formulation of blood-tissue exchange and explores how this relates to established compartmental forms. A nonparametric approach to the estimation of the residue is examined and the improvement in this model relative to the compartmental model is evaluated using simulations and cross-validation techniques. The reference distribution of the test statistics, generated in comparing the models, is also studied. We explore these models further with simulated studies and an FDG-PET dataset from subjects with gliomas, which has previously been analysed with compartmental modelling. We also consider the performance of a recently proposed mixture modelling technique in this study.
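The convolution relationship described above can be sketched for the simplest one-compartment case, where the residue function is a scaled exponential. The input function, rate constants, and time grid below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

t = np.linspace(0.0, 60.0, 601)        # minutes
dt = t[1] - t[0]

# Illustrative arterial input: a gamma-variate-like bolus.
cp = t * np.exp(-t / 2.0)

# One-compartment residue function: R(t) = K1 * exp(-k2 * t).
K1, k2 = 0.1, 0.05                     # assumed rate constants
residue = K1 * np.exp(-k2 * t)

# Tissue time-course = arterial input convolved with the residue.
ct = np.convolve(cp, residue)[: len(t)] * dt

# For this model, the volume of distribution is K1 / k2.
vd = K1 / k2
```

In the nonparametric approach, the exponential `residue` would instead be estimated directly from data, subject only to survival-function constraints (monotone non-increasing, starting at its maximum).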
Abstract:
To evaluate the performance of ocean-colour retrievals of total chlorophyll-a concentration requires direct comparison with concomitant and co-located in situ data. For global comparisons, these in situ match-ups should ideally be representative of the distribution of total chlorophyll-a concentration in the global ocean. The oligotrophic gyres constitute the majority of oceanic water, yet are under-sampled due to their inaccessibility and under-represented in global in situ databases. The Atlantic Meridional Transect (AMT) is one of only a few programmes that consistently sample oligotrophic waters. In this paper, we used a spectrophotometer on two AMT cruises (AMT19 and AMT22) to continuously measure absorption by particles in the water of the ship's flow-through system. From these optical data, continuous total chlorophyll-a concentrations were estimated with high precision and accuracy along each cruise and used to evaluate the performance of ocean-colour algorithms. We conducted the evaluation using level 3 binned ocean-colour products, and used the high spatial and temporal resolution of the underway system to maximise the number of match-ups on each cruise. Statistical comparisons show a significant improvement in the performance of satellite chlorophyll algorithms over previous studies, with root mean square errors on average less than half (~ 0.16 in log10 space) those reported previously using global datasets (~ 0.34 in log10 space). This improved performance is likely due to the use of continuous absorption-based chlorophyll estimates, which are highly accurate, sample spatial scales more comparable with satellite pixels, and minimise human errors. Previous comparisons might have reported higher errors due to regional biases in datasets and methodological inconsistencies between investigators.
Furthermore, our comparison showed an underestimate in satellite chlorophyll at low concentrations in 2012 (AMT22), likely due to a small bias in satellite remote-sensing reflectance data. Our results highlight the benefits of using underway spectrophotometric systems for evaluating satellite ocean-colour data and underline the importance of maintaining in situ observatories that sample the oligotrophic gyres.
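The match-up statistic quoted above — root mean square error in log10 space — can be sketched as follows. The match-up values are invented for illustration and are not data from the cruises.

```python
import numpy as np

def log10_rmse(sat_chl, insitu_chl):
    """Root mean square error in log10 space, the metric commonly used
    for chlorophyll-a match-up statistics."""
    d = np.log10(sat_chl) - np.log10(insitu_chl)
    return np.sqrt(np.mean(d ** 2))

# Illustrative match-ups (mg m^-3); not data from AMT19/AMT22.
insitu = np.array([0.03, 0.05, 0.1, 0.3, 1.0])
sat = insitu * 10 ** np.array([0.1, -0.1, 0.2, -0.2, 0.0])

rmse = log10_rmse(sat, insitu)
```

Log space is used because chlorophyll concentrations span several orders of magnitude, so errors are closer to multiplicative than additive.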
Abstract:
Social media tools are increasingly popular in Computer Supported Collaborative Learning, and the analysis of students' contributions on these tools is an emerging research direction. Previous studies have mainly focused on examining quantitative behavior indicators on social media tools. In contrast, the approach proposed in this paper relies on the actual content analysis of each student's contributions in a learning environment. More specifically, in this study, textual complexity analysis is applied to investigate how students' writing style on social media tools can be used to predict their academic performance and learning style. Multiple textual complexity indices are used for analyzing the blog and microblog posts of 27 students engaged in a project-based learning activity. The preliminary results of this pilot study are encouraging, with several indices predictive of student grades and/or learning styles.
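Two of the simplest textual-complexity indices — mean sentence length and type-token ratio — can be sketched as below. These are generic indices, not necessarily the ones used in the paper, and the sample posts are invented.

```python
def complexity_indices(text):
    """Two simple textual-complexity indices: mean sentence length
    (in words) and type-token ratio (distinct words / total words)."""
    sentences = [s for s in text.replace('!', '.').replace('?', '.').split('.')
                 if s.strip()]
    words = text.lower().split()
    mean_sent_len = len(words) / max(len(sentences), 1)
    ttr = len(set(words)) / max(len(words), 1)
    return mean_sent_len, ttr

# Invented sample posts of clearly different complexity.
posts = [
    "Short post. Nice.",
    "This longer post develops an argument across several connected clauses.",
]
indices = [complexity_indices(p) for p in posts]
```

Per-student index values computed this way can then be correlated with grades or learning-style classifications, as in the study's prediction step.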
Abstract:
Reliability has emerged as a critical design constraint, especially in memories. Designers go to great lengths to guarantee fault-free operation of the underlying silicon by adopting redundancy-based techniques, which essentially try to detect and correct every single error. However, such techniques come at the cost of large area, power, and performance overheads, leading many researchers to doubt their efficiency, especially for error-resilient systems where 100% accuracy is not always required. In this paper, we present an alternative method focusing on confining the output error induced by reliability issues. Focusing on memory faults, rather than correcting every single error the proposed method exploits the statistical characteristics of the target application and replaces any erroneous data with the best available estimate of that data. To realize the proposed method, a RISC processor is augmented with custom instructions and special-purpose functional units. We apply the method on the enhanced processor by studying the statistical characteristics of the various algorithms involved in a popular multimedia application. Our experimental results show that, in contrast to state-of-the-art fault tolerance approaches, we are able to reduce runtime and area overhead by 71.3% and 83.3%, respectively.
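The error-confinement idea — replace a detected-faulty word with a statistical estimate rather than reconstructing its exact value — can be sketched in a few lines. This is a conceptual model only: in the paper the mechanism is realized in hardware via custom instructions, and the choice of median-of-neighbours as the estimate is an assumption for illustration.

```python
import numpy as np

def confine_errors(block, fault_mask, estimate):
    """Instead of correcting faulty words, overwrite them with the best
    available statistical estimate of the data (caller-supplied, e.g.
    the median of the fault-free samples)."""
    out = block.copy()
    out[fault_mask] = estimate
    return out

# Illustrative image row with two faulty (bit-flipped) pixel values.
row = np.array([120, 122, 121, 255, 119, 0, 123])
faults = np.array([False, False, False, True, False, True, False])

clean = confine_errors(row, faults,
                       estimate=int(np.median(row[~faults])))
```

For error-resilient workloads (e.g., multimedia), the residual error of such an estimate is perceptually negligible, while the hardware cost is far below full detect-and-correct redundancy.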
Abstract:
There has been increasing interest in the development of new methods using Pareto optimality to deal with multi-objective criteria (for example, accuracy and time complexity). Once one has developed an approach to a problem of interest, the question is then how to compare it with the state of the art. In machine learning, algorithms are typically evaluated by comparing their performance on different data sets by means of statistical tests. The standard tests used for this purpose can consider neither multiple performance measures jointly nor multiple competitors at once. The aim of this paper is to resolve these issues by developing statistical procedures that account for multiple competing measures at the same time and compare multiple algorithms altogether. In particular, we develop two tests: a frequentist procedure based on the generalized likelihood-ratio test and a Bayesian procedure based on a multinomial-Dirichlet conjugate model. We further extend them by discovering conditional independences among measures to reduce the number of parameters of such models, since the number of studied cases is usually small in such comparisons. Data from a comparison among general-purpose classifiers is used to show a practical application of our tests.
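The Bayesian multinomial-Dirichlet idea can be sketched for the simplest case: two algorithms compared jointly on two measures, giving four joint win/loss categories per data set. The counts below are invented, and this sketch omits the paper's conditional-independence extension.

```python
import numpy as np

# Joint outcomes of algorithm A vs. B over 20 data sets, on two measures
# at once: (wins accuracy?, wins runtime?) -> 4 joint categories.
# Counts are illustrative: (win,win), (win,lose), (lose,win), (lose,lose).
counts = np.array([8, 5, 4, 3])

# Multinomial-Dirichlet conjugate model with a uniform Dirichlet(1,..,1)
# prior: the posterior is Dirichlet(alpha + counts).
alpha = np.ones(4)
posterior = alpha + counts

# Posterior mean probability that A wins on both measures simultaneously.
p_win_both = posterior[0] / posterior.sum()

# Posterior samples support credible statements about joint dominance.
rng = np.random.default_rng(3)
samples = rng.dirichlet(posterior, size=10_000)
p_dominates = np.mean(samples[:, 0] > samples[:, 3])
```

Conjugacy makes the update a single vector addition, which is why the multinomial-Dirichlet model suits comparisons with few studied cases.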
Abstract:
The main objective of this work was to develop a novel dimensionality reduction technique as part of an integrated pattern recognition solution capable of identifying adulterants such as hazelnut oil in extra virgin olive oil at low percentages, based on spectroscopic chemical fingerprints. A novel Continuous Locality Preserving Projections (CLPP) technique is proposed, which allows the continuous nature of the in-house-produced admixtures to be modelled as data series instead of discrete points. Maintaining the continuous structure of the data manifold enables better visualisation of the classification problem examined and facilitates more accurate use of the manifold for detecting the adulterants. The performance of the proposed technique is validated with two different spectroscopic techniques (Raman and Fourier transform infrared, FT-IR). In all cases studied, CLPP combined with the k-Nearest Neighbours (kNN) algorithm was found to outperform the other state-of-the-art pattern recognition techniques.
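The classification step — a kNN vote in the reduced feature space — can be sketched as below. CLPP itself is the paper's novel contribution and is not reproduced here; the 2-D points stand in for CLPP-projected spectra, and the two clusters and their positions are invented.

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Plain k-nearest-neighbour majority vote in the feature space."""
    d = np.linalg.norm(train_X - query, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    vals, cnt = np.unique(nearest, return_counts=True)
    return vals[np.argmax(cnt)]

rng = np.random.default_rng(4)
# Toy 2-D embeddings standing in for CLPP-projected spectra:
# class 0 = pure olive oil, class 1 = adulterated admixture.
pure = rng.normal([0.0, 0.0], 0.3, size=(20, 2))
adulterated = rng.normal([2.0, 2.0], 0.3, size=(20, 2))
X = np.vstack([pure, adulterated])
y = np.array([0] * 20 + [1] * 20)

label = knn_predict(X, y, np.array([1.9, 2.1]))
```

Because CLPP preserves the continuous adulteration-level structure of the manifold, neighbours in the projected space correspond to similar admixture compositions, which is what makes the kNN vote meaningful here.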
Abstract:
Innovation is a strategic necessity for the survival of today's organizations. The wide recognition of innovation as a competitive necessity, particularly in dynamic market environments, makes it an evergreen domain for research. This dissertation deals with innovation in small Information Technology (IT) firms in India. The IT industry in India has been a phenomenal success story of the last three decades, and is today facing a crucial phase in its history characterized by the need for fundamental changes in strategies, driven by innovation. This study, while motivated by the dynamics of changing times, importantly addresses the research gap on small firm innovation in Indian IT. This study addresses three main objectives: (a) drivers of innovation in small IT firms in India, (b) impact of innovation on firm performance, and (c) variation in the extent of innovation adoption in small firms. Product and process innovation were identified as the two most contextually relevant types of innovation for small IT firms. The antecedents of innovation were identified as Intellectual Capital, Creative Capability, Top Management Support, Organization Learning Capability, Customer Involvement, External Networking, and Employee Involvement. The survey method was adopted for data collection, and the study unit was the firm. Surveys were conducted in 2014 across five South Indian cities. A small firm was defined as one with 10-499 employees. Responses from 205 firms were chosen for analysis. Rigorous statistical analysis was done to generate meaningful insights. The set of drivers of product innovation (Intellectual Capital, Creative Capability, Top Management Support, Customer Involvement, External Networking, and Employee Involvement) was different from that of process innovation (Creative Capability, Organization Learning Capability, External Networking, and Employee Involvement). Both product and process innovation had a strong impact on firm performance.
It was found that firms that adopted a combination of product innovation and process innovation had the highest levels of firm performance. Product innovation and process innovation fully mediated the relationship between all seven antecedents and firm performance. The results of this study have several important theoretical and practical implications. To the best of the researcher's knowledge, this is the first time that an empirical study of firm-level innovation of this kind has been undertaken in India. A measurement model for product and process innovation was developed, and the drivers of innovation were established statistically. Customer Involvement, External Networking, and Employee Involvement are elements of Open Innovation; all three had a strong association with product innovation, and the latter two had a strong association with process innovation. The results showed that the proclivity for Open Innovation is healthy in the Indian context. Practical implications have been outlined regarding how firms can organize themselves for innovation, the human talent for innovation, and the right culture for innovation and for open innovation. While some specific examples of possible future studies have been recommended, the researcher believes that the study provides numerous opportunities to further this line of enquiry.
Abstract:
This work presents a computational code, called MOMENTS, developed for use in process control to determine a characteristic transfer function of industrial units when radiotracer techniques are applied to study the unit's performance. The methodology is based on measuring the residence time distribution (RTD) function and calculating the first and second temporal moments of the tracer data obtained by two NaI scintillation detectors positioned to register the complete tracer movement inside the unit. A nonlinear regression technique is used to fit various mathematical models, and a statistical test selects the best result for the transfer function. Using the MOMENTS code, twelve different models can be fitted to a curve to calculate technical parameters of the unit.
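The moment calculation at the core of this methodology can be sketched as below, using an ideal perfectly-mixed-vessel RTD (an exponential) as the test curve. This is a generic moments computation, not the MOMENTS code itself; the time constant and grid are assumed values.

```python
import numpy as np

def trapezoid(y, t):
    """Trapezoidal integration (kept explicit for portability)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

def rtd_moments(t, c):
    """First temporal moment (mean residence time) and second central
    moment (variance) of a measured tracer concentration curve."""
    e = c / trapezoid(c, t)                 # normalized RTD, E(t)
    mean = trapezoid(t * e, t)
    var = trapezoid((t - mean) ** 2 * e, t)
    return mean, var

t = np.linspace(0.0, 50.0, 2001)
tau = 5.0
c = np.exp(-t / tau)    # ideal perfectly mixed vessel: E(t) = e^(-t/tau)/tau

mean, var = rtd_moments(t, c)
```

For the exponential RTD, the mean residence time equals tau and the variance equals tau squared, which provides a built-in check when validating such a code against known models.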
Abstract:
The quality of a heuristic solution to an NP-hard combinatorial problem is hard to assess. A few studies have advocated and tested statistical bounds as a method for assessment. These studies indicate that statistical bounds are superior to the more widely known and used deterministic bounds. However, the previous studies have been limited to a few metaheuristics and combinatorial problems and, hence, the general performance of statistical bounds in combinatorial optimization remains an open question. This work complements the existing literature on statistical bounds by testing them with the metaheuristic Greedy Randomized Adaptive Search Procedures (GRASP) on four combinatorial problems. Our findings confirm previous results that statistical bounds are reliable for the p-median problem, and we note that they also seem reliable for the set covering problem. For the quadratic assignment problem, statistical bounds had previously been found reliable when obtained from the Genetic Algorithm, whereas in this work they were found to be less reliable. Finally, we provide statistical bounds for four 2-path network design problem instances for which the optimum is currently unknown.
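The general flavour of a statistical bound can be sketched with one of the simplest extreme-value point estimators of an unknown minimum, computed from repeated heuristic runs. This is an illustrative estimator (a Robson-Whitlock-style endpoint estimate), not necessarily the one used in the study, and the objective values are invented.

```python
import numpy as np

def statistical_lower_estimate(values):
    """Extreme-value point estimate of the unknown minimum from a sample
    of heuristic solution values: 2*y(1) - y(2), where y(1) <= y(2) are
    the two smallest observed values."""
    y = np.sort(np.asarray(values, dtype=float))
    return 2 * y[0] - y[1]

# Illustrative objective values from repeated GRASP runs (not real data).
runs = [1032, 1018, 1041, 1019, 1025, 1017, 1050, 1021]
estimate = statistical_lower_estimate(runs)
best = min(runs)
gap_bound = best - estimate     # estimated gap between incumbent and optimum
```

Unlike a deterministic bound, such an estimate can be computed for any problem from the run data alone, which is what makes statistical bounds attractive for instances whose optimum is unknown.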