14 results for computer-aided qualitative data analysis software
in Digital Commons at Florida International University
Abstract:
This study examined the motivation of college and university faculty to implement service-learning into their traditional courses. The benefits derived by faculty, as well as issues of maintenance, including supports and obstacles, were also investigated in relation to their impact on motivation. The focus was on generating theory from the emerging data.

Data were collected from interviews with 17 faculty teaching courses that included a component of service-learning. A maximum variation sampling of participants from six South Florida colleges and universities was utilized. Faculty participants represented a wide range of academic disciplines, faculty ranks, and years of experience in teaching and using service-learning, as well as gender and ethnic diversity. For data triangulation, a focus group with eight additional college faculty was conducted, and documents collected during the interviews, including course syllabi and institutional service-learning handbooks, were examined. The interviews were transcribed and coded using traditional methods as well as with the assistance of the computer-assisted qualitative data analysis software Atlas.ti. The data were organized into five major categories, with themes and sub-themes emerging for each.

While intrinsic or personal factors along with extrinsic factors all serve to influence faculty motivation, the study's findings revealed that the primary factors influencing faculty motivation to adopt service-learning were those that were intrinsic or personal in nature. These factors included: (a) past experiences, (b) personal characteristics, including the value of serving, (c) involvement with community service, (d) interactions and relationships with peers, (e) benefits to students, (f) benefits to teaching, and (g) perceived career benefits. Implications and recommendations from the study encompass suggestions for administrators in higher education institutions for supporting and encouraging faculty adoption of service-learning, including a well-developed infrastructure; incentives, particularly during the initial implementation period; rewards recognizing the academic nature of service-learning; and support for the development of peer relationships among service-learning faculty.
Abstract:
This study explored the strategies that community-based, consumer-focused advocacy, alternative service organizations (ASOs) implemented to adapt to changes in the nonprofit funding environment (Oliver & McShane, 1979; Perlmutter, 1988a, 1994). It is not clear to what extent current funding trends have influenced ASOs, as little empirical research has been conducted in this area (Magnus, 2001; Marquez, 2003; Powell, 1986).

This study used a qualitative research design to investigate strategies implemented by these organizations to adapt to changes such as decreasing government, foundation, and corporate funding and an increasing number of nonprofit organizations. More than 20 community informants helped to identify, locate, and provide information about ASOs. Semi-structured interviews were conducted with a sample of 30 ASO executive directors from diverse organizations in Miami-Dade and Broward Counties in South Florida.

Data analysis was facilitated by the use of ATLAS.ti, version 5, a qualitative data analysis computer software program designed for grounded theory research. This process generated five major themes: Funding Environment; Internal Structure; Strategies for Survival; Sustainability; and Committing to the Cause, Mission, and Vision.

The results indicate that ASOs are struggling to survive financially by cutting programs, decreasing staff, and limiting service to consumers. They are also exploring ways to develop fundraising strategies, for example, increasing the number of grant proposals written, focusing on fund development, and establishing for-profit ventures. Even organizations that describe themselves as currently financially stable are concerned about their financial vulnerability; there is little flexibility or cushioning to adjust to "funding jolts." The fear of losing current funding levels and being placed in a tenuous financial situation is a constant concern for these ASOs.

Further data collected from the self-administered Funding Checklist and demographic forms were coded and analyzed using the Statistical Package for the Social Sciences (SPSS). Descriptive statistics and frequencies generated findings regarding revenue, staff complement, use of volunteers and fundraising consultants, and fundraising practices. The study proposes a model of funding relationships and presents implications for social work practice and policy, along with recommendations for future research.
Abstract:
Thanks to advanced technologies and social networks that allow data to be widely shared across the Internet, there is an explosion of pervasive multimedia data, generating high demand for multimedia services and applications that let people easily access and manage multimedia data. Toward such demands, multimedia big data analysis has become an emerging hot topic in both industry and academia, ranging from basic infrastructure, management, search, and mining to security, privacy, and applications. Within the scope of this dissertation, a multimedia big data analysis framework is proposed for semantic information management and retrieval, with a focus on rare event detection in videos. The proposed framework is able to explore hidden semantic feature groups in multimedia data and incorporate temporal semantics, especially for video event detection.

First, a hierarchical semantic data representation is presented to alleviate the semantic gap issue, and the Hidden Coherent Feature Group (HCFG) analysis method is proposed to capture the correlation between features and separate the original feature set into semantic groups, seamlessly integrating multimedia data in multiple modalities. Next, an Importance Factor based Temporal Multiple Correspondence Analysis (IF-TMCA) approach is presented for effective event detection. Specifically, the HCFG algorithm is integrated with the Hierarchical Information Gain Analysis (HIGA) method to generate the Importance Factor (IF) for producing the initial detection results. The TMCA algorithm is then proposed to efficiently incorporate temporal semantics for re-ranking and improving the final performance. Finally, a sampling-based ensemble learning mechanism is applied to further accommodate imbalanced datasets.

In addition to the multimedia semantic representation and class imbalance problems, lack of organization is another critical issue for multimedia big data analysis. In this framework, an affinity propagation-based summarization method is also proposed to transform unorganized data into a better structure with clean and well-organized information. The whole framework has been thoroughly evaluated across multiple domains, such as soccer goal event detection and disaster information management.
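To make the summarization step concrete, the following is a minimal sketch of affinity propagation picking exemplars from unorganized feature vectors. It uses scikit-learn's AffinityPropagation as a stand-in for the dissertation's own method, and the feature matrix is synthetic.

    # Illustrative sketch: affinity propagation selects exemplar items from
    # unorganized multimedia feature vectors (scikit-learn stands in for the
    # dissertation's summarization method; the data here is synthetic).
    import numpy as np
    from sklearn.cluster import AffinityPropagation

    rng = np.random.default_rng(0)
    features = rng.normal(size=(200, 32))     # pretend rows are video-shot features

    ap = AffinityPropagation(damping=0.9, random_state=0).fit(features)
    exemplars = ap.cluster_centers_indices_   # indices of representative shots
    print(f"{len(exemplars)} exemplar shots summarize {len(features)} shots")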
Abstract:
One of the major problems in the analysis of beams with moment of inertia varying along their length is finding the fixed-end moments, stiffness, and carry-over factors. In order to determine fixed-end moments, it is necessary to treat the non-prismatic member as composed of a large number of small sections with constant moment of inertia and to find the M/EI values for each individual section. This process demands considerable time from designers and structural engineers. The object of this thesis is to design a computer program that simplifies this repetitive process, obtaining the final moments and shears in continuous non-prismatic beams rapidly and effectively. For this purpose, the column analogy and moment distribution methods of Professor Hardy Cross have been utilized as the principles behind the methodical computer solutions. The program has been specifically designed to analyze continuous beams of up to four spans of any length, composed of symmetrical members with rectangular cross sections and rectilinear variation of the moment of inertia. Any load or combination of uniform and concentrated loads can be considered. Finally, sample problems are solved with the new computer program and with traditional methods to determine the accuracy and applicability of the program.
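As an illustration of the discretization the thesis describes, the sketch below splits a non-prismatic member into constant-I slices and numerically accumulates the 1/(EI) properties of the analogous column used by the column analogy method; the section dimensions, modulus, and linear depth variation are assumed values.

    # Illustrative sketch: discretize a non-prismatic member into constant-I
    # slices and accumulate the 1/(EI) integrals used by the column analogy.
    # All numeric values and the linear depth variation are assumptions.
    import numpy as np

    E = 200e9            # Pa, elastic modulus (assumed)
    L = 8.0              # m, span length
    b = 0.3              # m, width of the rectangular cross section
    h0, h1 = 0.4, 0.8    # m, depth varying rectilinearly along the span

    n = 1000                                   # number of constant-I slices
    x = (np.arange(n) + 0.5) * L / n           # slice midpoints
    h = h0 + (h1 - h0) * x / L                 # depth at each midpoint
    I = b * h**3 / 12.0                        # moment of inertia per slice
    w = 1.0 / (E * I)                          # width of the analogous column

    dx = L / n
    area = np.sum(w) * dx                      # area of the analogous column
    x_bar = np.sum(w * x) * dx / area          # centroid of the analogous column
    I_col = np.sum(w * (x - x_bar)**2) * dx    # second moment about the centroid
    print(f"area={area:.3e}, centroid={x_bar:.3f} m, I={I_col:.3e}")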
Abstract:
The purpose of the study was to measure gains in the development of elementary education teachers' reading expertise, to determine if there was a differential gain in reading expertise, and, last, to examine their perceptions of acquiring reading expertise. This research is needed in the field of teacher education, specifically in the field of reading. A quasi-experimental, pretest-posttest, mixed-method, repeated-measures design with a comparison group was utilized. Quantitative data analysis measured the development of reading expertise of elementary preservice teachers compared to early childhood preservice teachers and examined the differential gains in reading expertise. A multivariate analysis of variance (MANOVA) was conducted on pre- and posttest responses on a Protocol of Questions. Further analysis was conducted on five variables (miscue analysis, fluency analysis, data analysis, inquiry orientation, and intelligent action) using univariate analyses of variance (ANOVA). A one-way ANOVA was carried out on the gain scores of the low and middle groups of elementary education preservice teachers. Qualitative data analysis as suggested by Merriam (1989) and Miles and Huberman (1994) was used to determine whether the elementary education preservice teachers perceived they had acquired the expertise to teach reading. Elementary education preservice teachers who participated in a supervised clinical practicum made significant gains in their development of reading expertise, whereas early childhood preservice teachers did not make significant gains. Elementary education preservice teachers who were in the low and middle third levels of expertise at pretest demonstrated significant gains in reading expertise. Last, elementary education preservice teachers perceived they had acquired the expertise to teach reading. The study concluded that reading expertise can be developed in elementary education preservice teachers through participation in a supervised clinical practicum. The findings support the idea that preservice teachers who will be teaching reading to elementary students would benefit from a supervised clinical practicum.
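For readers unfamiliar with the gain-score comparison, here is a minimal sketch of a one-way ANOVA on gain scores for two groups, in the spirit of the analysis described; the scores are synthetic, not the study's data.

    # Illustrative sketch: one-way ANOVA on gain scores (posttest minus
    # pretest) for the low and middle expertise groups; data is synthetic.
    import numpy as np
    from scipy.stats import f_oneway

    rng = np.random.default_rng(1)
    pre_low, post_low = rng.normal(40, 5, 20), rng.normal(55, 5, 20)
    pre_mid, post_mid = rng.normal(55, 5, 20), rng.normal(65, 5, 20)

    f_stat, p_value = f_oneway(post_low - pre_low, post_mid - pre_mid)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")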
Abstract:
Convergence among treatment, prevention, and developmental intervention approaches has led to recognition of the need for evaluation models and research designs that employ a full range of evaluation information to provide an empirical basis for enhancing the efficiency, efficacy, and effectiveness of prevention and positive development interventions. This study reports an investigation of a positive youth development program using an Outcome Mediation Cascade (OMC) evaluation model, an integrated model for evaluating the empirical intersection between intervention and developmental processes. The Changing Lives Program (CLP) is a community-supported positive youth development intervention implemented in a practice setting as a selective/indicated program for multi-ethnic, multi-problem, at-risk youth in urban alternative high schools. This study used a Relational Data Analysis integration of quantitative and qualitative data analysis strategies, including the use of both fixed and free response measures and a structural equation modeling approach, to construct and evaluate the hypothesized OMC model. Findings indicated that the hypothesized model fit the data (χ²(7) = 6.991, p = .43; RMSEA = .00; CFI = 1.00; WRMR = .459). Findings also provided preliminary evidence consistent with the hypothesis that, in addition to having effects on targeted positive outcomes, PYD interventions are likely to have progressive cascading effects on untargeted problem outcomes that operate through effects on positive outcomes. Furthermore, the general pattern of findings suggested the need to use methods capable of capturing both quantitative and qualitative change in order to increase the likelihood of identifying more complete, theory-informed, empirically supported models of developmental intervention change processes.
Abstract:
The purpose of this ethnographic study was to describe and explain the congruency of psychological preferences identified by the Myers-Briggs Type Indicator (MBTI) and the human resource development (HRD) role of instructor/facilitator. This investigation was conducted with 23 HRD professionals who worked in the Miami, Florida area as instructors/facilitators with adult learners in job-related contexts.

The study was conducted using qualitative strategies of data collection and analysis. The research participants were selected through a purposive sampling strategy. Data collection strategies included: (a) administration and scoring of the MBTI, Form G; (b) open-ended and semi-structured interviews; (c) participant observations of the research subjects at their respective work sites and while conducting training sessions; (d) field notes; and (e) contact summary sheets to record field research encounters. Data analysis was conducted with the use of a computer program for qualitative analysis called FolioViews 3.1 for Windows. This included: (a) coding of transcribed interviews and field notes, (b) theme analysis, (c) memoing, and (d) cross-case analysis.

The three major themes that emerged in relation to the congruency of psychological preferences and the role of instructor/facilitator were: (1) designing and preparing instruction/facilitation, (2) conducting training and managing group process, and (3) interpersonal relations and perspectives among instructors/facilitators.

The first two themes were analyzed through combinations of the four Jungian personality functions: sensing-thinking (ST), sensing-feeling (SF), intuition-thinking (NT), and intuition-feeling (NF). The third theme was analyzed through combinations of the attitudes, or energy focus, and the judgment function: extraversion-thinking (ET), extraversion-feeling (EF), introversion-thinking (IT), and introversion-feeling (IF).

A final area uncovered by this ethnographic study was the influence exerted by a training and development culture on the instructor/facilitator role. This professional culture is described and explained in terms of the shared values and expectations reported by the study respondents.
Abstract:
The purpose of this study was to document and critically analyze the lived experience of selected nursing staff developers in the process of moving toward a new model for hospital nursing education. Eleven respondents were drawn from a nationwide population of about two hundred individuals involved in nursing staff development. These subjects were responsible for the implementation of the Performance Based Development System (PBDS) in their institutions.

A purposive, criterion-based sampling technique was used, with respondents selected according to size of hospital, primary responsibility for orchestration of the change, influence over budgetary factors, and managerial responsibility for PBDS. Data were gathered by the researcher through both in-person and telephone interviews. A semi-structured interview guide designed by the researcher was used, and respondents were encouraged to amplify on their recollections as desired. Audiotapes were transcribed, and the resulting computer files were analyzed using the program "Martin". Answers to interview questions were compiled and reported across cases. The data were then reviewed a second time and interpreted for emerging themes and patterns.

Two types of verification were used in the study. Internal verification was done through interview transcript review and feedback by respondents. External verification was done through review of and feedback on the data analysis by readers experienced in the management of staff development departments.

All respondents were female, so Gilligan's concept of the "ethic of care" was examined as a decision-making strategy. Three levels of caring that influenced decision making were found: caring (a) for the organization, (b) for the employee, and (c) for the patient. The four existentials of the lived experience (relationality, corporeality, temporality, and spatiality) were also examined to reveal the everydayness of making change.
Abstract:
An Automatic Vehicle Location (AVL) system is a computer-based vehicle tracking system capable of determining a vehicle's location in real time. As a major technology of the Advanced Public Transportation System (APTS), AVL systems have been widely deployed by transit agencies for purposes such as real-time operation monitoring, computer-aided dispatching, and arrival time prediction. AVL systems make available a large amount of transit performance data that are valuable for transit performance management and planning purposes. However, the difficulty of extracting useful information from the huge spatial-temporal database has hindered off-line applications of AVL data.

In this study, a data mining process, including data integration, cluster analysis, and multiple regression, is proposed. The AVL-generated data are first integrated into a Geographic Information System (GIS) platform. A model-based cluster method is employed to investigate the spatial and temporal patterns of transit travel speeds, which may be easily translated into travel time. The transit speed variations along route segments are identified. Transit service periods, such as morning peak, mid-day, afternoon peak, and evening, are determined based on analyses of transit travel speed variations for different times of day. The seasonal patterns of transit performance are investigated using analysis of variance (ANOVA). Travel speed models based on the clustered time-of-day intervals are developed using factors identified as having significant effects on speed for different time-of-day periods.

Transit performance was found to vary across seasons and time-of-day periods. The geographic location of a transit route segment also plays a role in the variation of transit performance. The results of this research indicate that advanced data mining techniques have good potential to provide automated means of assisting transit agencies in service planning, scheduling, and operations control.
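As a sketch of the model-based clustering idea, a Gaussian mixture (a common model-based cluster method; the dissertation's exact formulation may differ) can separate AVL travel speeds into peak and off-peak regimes; the speed profile below is synthetic.

    # Illustrative sketch: Gaussian-mixture clustering of AVL travel speeds
    # to recover time-of-day service periods; the speeds are synthetic.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(2)
    hour = rng.uniform(6, 22, size=2000)                    # observation times
    peak = ((7 <= hour) & (hour <= 9)) | ((16 <= hour) & (hour <= 18))
    speed = np.where(peak, rng.normal(18, 3, hour.size),    # km/h in peaks
                           rng.normal(30, 4, hour.size))    # km/h off-peak

    gmm = GaussianMixture(n_components=2, random_state=0).fit(speed.reshape(-1, 1))
    label = gmm.predict(speed.reshape(-1, 1))
    for k in range(2):
        print(f"cluster {k}: mean speed {speed[label == k].mean():.1f} km/h, "
              f"peak-hour share {peak[label == k].mean():.0%}")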
Abstract:
With advances in science and technology, computing and business intelligence (BI) systems are steadily becoming more complex, with an increasing variety of heterogeneous software and hardware components. They are thus becoming progressively more difficult to monitor, manage, and maintain. Traditional approaches to system management have largely relied on domain experts through a knowledge acquisition process that translates domain knowledge into operating rules and policies. This is widely acknowledged to be a cumbersome, labor-intensive, and error-prone process that is also difficult to keep up to date in rapidly changing environments. In addition, many traditional business systems deliver primarily pre-defined historic metrics for long-term strategic or mid-term tactical analysis, and lack the flexibility to support evolving metrics or data collection for real-time operational analysis. There is thus a pressing need for automatic and efficient approaches to monitoring and managing complex computing and BI systems. To realize the goal of autonomic management and enable self-management capabilities, we propose to mine the historical log data generated by computing and BI systems and automatically extract actionable patterns from this data. This dissertation focuses on the development of data mining techniques to extract actionable patterns from various types of log data in computing and BI systems. Four key problems are studied: log data categorization and event summarization, leading indicator identification, pattern prioritization by exploring link structures, and tensor modeling of three-way log data. Case studies and comprehensive experiments on real application scenarios and datasets are conducted to show the effectiveness of the proposed approaches.
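To give a flavor of event summarization and leading indicator identification, the sketch below counts which event types tend to precede a target event within a lead window; the log records and window length are invented for illustration.

    # Illustrative sketch: flag event types that tend to precede a target
    # event ("leading indicators"); records and lead window are assumptions.
    from collections import Counter

    # (timestamp_seconds, event_type) records, as parsed from a raw log.
    log = [(10, "disk_warn"), (12, "disk_warn"), (15, "crash"),
           (40, "net_retry"), (70, "disk_warn"), (74, "crash"),
           (90, "net_retry")]

    LEAD = 10  # seconds before a target event that count as "leading"
    crashes = [t for t, e in log if e == "crash"]
    leading = Counter(e for t, e in log if e != "crash"
                      and any(0 < c - t <= LEAD for c in crashes))
    print(leading.most_common())  # disk_warn precedes crashes most often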
Abstract:
During the past three decades, the use of roundabouts has increased throughout the world due to their greater benefits in comparison with intersections controlled by traditional means. Roundabouts are often chosen because they are widely associated with low accident rates, lower construction and operating costs, and reasonable capacities and delay.

In the planning and design of roundabouts, special attention should be given to the movement of pedestrians and bicycles. As a result, there are several guidelines for the design of pedestrian and bicycle treatments at roundabouts that increase the safety of both pedestrians and bicyclists at existing and proposed roundabout locations. Different design guidelines have differing criteria for handling pedestrians and bicyclists at roundabout locations. Although all of the investigated guidelines provide better safety (depending on the traffic conditions at a specific location), their effects on the performance of the roundabout had not yet been examined.

Existing roundabout analysis software packages provide estimates of capacity and performance characteristics such as delay, queue lengths, stop rates, effects of heavy vehicles, crash frequencies, and geometric delays, as well as fuel consumption, pollutant emissions, and operating costs. None of these software packages, however, is capable of determining the effects of various pedestrian crossing locations or of different bicycle treatments on the performance of roundabouts.

The objective of this research is to develop simulation models capable of determining the effect of various pedestrian and bicycle treatments at single-lane roundabouts. To achieve this, four models were developed. The first simulates a single-lane roundabout without bicycle and pedestrian traffic. The second simulates a single-lane roundabout with a pedestrian crossing and mixed-flow bicyclists. The third simulates a single-lane roundabout with a combined pedestrian and bicycle crossing, while the fourth simulates a single-lane roundabout with a pedestrian crossing and a bicycle lane at the outer perimeter of the roundabout. Traffic data were collected at a modern roundabout in Boca Raton, Florida.

The results of this effort show that installing a pedestrian crossing on the roundabout approach has a negative impact on the entry flow, while the downstream approach benefits from the gaps newly created by pedestrians. A bicycle lane configuration is more beneficial for all users of the roundabout than mixed flow or a combined crossing. Installing the pedestrian crossing at a one-car length from the roundabout is more beneficial for pedestrians than at two or three car lengths. Finally, the effect of the pedestrian crossing on vehicle queues diminishes as the distance between the crossing and the roundabout increases.
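The core mechanism, an entering driver waiting for an acceptable gap while a pedestrian crossing occasionally blocks the approach, can be sketched with a simple Monte-Carlo model; all rates, the critical gap, and the blockage time below are illustrative assumptions, not the dissertation's calibrated simulation.

    # Illustrative sketch: how a pedestrian crossing on the approach can
    # reduce entry flow at a single-lane roundabout; values are assumptions.
    import random

    random.seed(0)
    SIM_TIME = 3600.0        # s, one simulated hour
    CIRC_RATE = 500 / 3600   # circulating vehicles per second
    CRITICAL_GAP = 4.5       # s, smallest gap an entering driver accepts
    PED_RATE = 60 / 3600     # pedestrians per second at the crossing
    PED_BLOCK = 8.0          # s, approach blocked per crossing pedestrian

    def entries_per_hour(ped_rate):
        t, entered, blocked_until = 0.0, 0.0, -1.0
        while t < SIM_TIME:
            gap = random.expovariate(CIRC_RATE)      # next circulating headway
            if ped_rate and random.random() < ped_rate * gap:
                blocked_until = t + PED_BLOCK        # a pedestrian blocks entry
            if gap >= CRITICAL_GAP and t >= blocked_until:
                entered += gap // CRITICAL_GAP       # vehicles using this gap
            t += gap
        return int(entered)

    print("entries/h without crossing:", entries_per_hour(0.0))
    print("entries/h with crossing:   ", entries_per_hour(PED_RATE))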
Abstract:
With the progress of computer technology, computers are expected to be more intelligent in their interaction with humans, presenting information according to the user's psychological and physiological characteristics. However, computer users with visual problems may encounter difficulties in perceiving icons, menus, and other graphical information displayed on the screen, limiting the efficiency of their interaction with computers. In this dissertation, a personalized and dynamic image precompensation method was developed to improve the visual performance of computer users with ocular aberrations. The precompensation was applied to the graphical targets before presenting them on the screen, aiming to counteract the visual blurring caused by the ocular aberration of the user's eye. A complete and systematic modeling approach to describe the retinal image formation of the computer user was presented, taking advantage of modeling tools such as Zernike polynomials, wavefront aberration, the Point Spread Function, and the Modulation Transfer Function. The ocular aberration of the computer user was first measured by a wavefront aberrometer, as a reference for the precompensation model. The dynamic precompensation was generated based on the resized aberration, with the real-time pupil diameter monitored. The potential visual benefit of the dynamic precompensation method was explored through software simulation, using aberration data from a real human subject. An "artificial eye" experiment was conducted by simulating the human eye with a high-definition camera, providing an objective evaluation of the image quality after precompensation. In addition, an empirical evaluation with 20 human participants was designed and implemented, involving image recognition tests performed under a more realistic viewing environment of computer use. The statistical analysis of the empirical experiment confirmed the effectiveness of the dynamic precompensation method, showing significant improvement in recognition accuracy. The merit and necessity of the dynamic precompensation were also substantiated by comparing it with static precompensation. The visual benefit of the dynamic precompensation was further confirmed by the subjective assessments collected from the evaluation participants.
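The modeling chain the abstract names (Zernike polynomials, wavefront aberration, Point Spread Function, Modulation Transfer Function) can be sketched with standard Fourier optics; the pupil grid, wavelength, and single defocus term below are assumptions, not the dissertation's measured aberrations.

    # Illustrative sketch: a Zernike defocus term defines the wavefront over
    # the pupil; the PSF and MTF follow from Fourier optics. All values are
    # assumptions for illustration.
    import numpy as np

    N = 256
    y, x = np.mgrid[-1:1:N*1j, -1:1:N*1j]
    rho = np.hypot(x, y)
    pupil = rho <= 1.0                        # circular pupil aperture

    wavelength = 0.55e-6                      # m, green light
    z20 = 0.25e-6                             # m, Zernike defocus coefficient
    W = z20 * np.sqrt(3) * (2 * rho**2 - 1)   # Zernike Z(2,0) wavefront, in m

    field = pupil * np.exp(1j * 2 * np.pi * W / wavelength)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    psf /= psf.sum()                          # normalized point spread function
    mtf = np.abs(np.fft.fft2(psf))            # modulation transfer function
    print("PSF peak:", psf.max(), "MTF sample:", mtf[0, 2] / mtf[0, 0])

Precompensation then amounts to filtering the on-screen image so that the eye's blurring, modeled by this PSF, is counteracted.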
Abstract:
With the exponential growth in the usage of web-based map services, web GIS applications have become more and more popular. Spatial data indexing, search, analysis, visualization, and the resource management of such services are becoming increasingly important for delivering the user-desired Quality of Service (QoS).

First, spatial indexing is typically time-consuming and is not available to end users. To address this, we introduce TerraFly sksOpen, an open-source online indexing and querying system for big geospatial data. Integrated with the TerraFly geospatial database [1-9], sksOpen is an efficient indexing and query engine for processing top-k spatial Boolean queries. Further, we provide ergonomic visualization of query results on interactive maps to facilitate the user's data analysis.

Second, due to the highly complex and dynamic nature of GIS systems, it is quite challenging for end users to quickly understand and analyze spatial data, and to efficiently share their own data and analysis results with others. Built on the TerraFly geospatial database, TerraFly GeoCloud is an extra layer running upon the TerraFly map that can efficiently support many different visualization functions and spatial data analysis models. Furthermore, users can create unique URLs to visualize and share their analysis results. TerraFly GeoCloud also provides the MapQL technology to customize map visualization using SQL-like statements [10].

Third, map systems often serve dynamic web workloads and involve multiple CPU- and I/O-intensive tiers, which makes it challenging to meet the response-time targets of map requests while using resources efficiently. Virtualization facilitates the deployment of web map services and improves their resource utilization through encapsulation and consolidation. Autonomic resource management allows resources to be automatically provisioned to a map service and its internal tiers on demand. v-TerraFly is a set of techniques to predict the demand of map workloads online and optimize resource allocations, considering both response time and data freshness as the QoS targets. The proposed v-TerraFly system is prototyped on TerraFly, a production web map service, and evaluated using real TerraFly workloads. The results show that v-TerraFly predicts workload demands 18.91% more accurately and allocates resources efficiently to meet the QoS target, improving QoS by 26.19% and saving 20.83% in resource usage compared to traditional peak-load-based resource allocation.
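As a minimal illustration of the top-k spatial Boolean queries that sksOpen serves (brute force stands in for its index, and the points, tags, and filter below are invented), consider:

    # Illustrative sketch: return the k nearest points whose tags satisfy a
    # Boolean filter; brute force stands in for sksOpen's spatial index.
    import heapq, math

    points = [
        {"name": "cafe_a", "lat": 25.76, "lon": -80.19, "tags": {"cafe", "wifi"}},
        {"name": "cafe_b", "lat": 25.77, "lon": -80.20, "tags": {"cafe"}},
        {"name": "lib_c",  "lat": 25.75, "lon": -80.22, "tags": {"library", "wifi"}},
        {"name": "cafe_d", "lat": 25.90, "lon": -80.10, "tags": {"cafe", "wifi"}},
    ]

    def top_k(qlat, qlon, must_have, must_not, k):
        """k nearest points with all must_have tags and no must_not tags."""
        hits = [p for p in points
                if must_have <= p["tags"] and not (must_not & p["tags"])]
        return heapq.nsmallest(k, hits,
                               key=lambda p: math.hypot(p["lat"] - qlat,
                                                        p["lon"] - qlon))

    for p in top_k(25.76, -80.19, {"cafe", "wifi"}, {"library"}, k=2):
        print(p["name"])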