24 results for Hierarchical task analysis

in Digital Commons at Florida International University


Relevance:

80.00%

Publisher:

Abstract:

This study examined the construct validity of the Choices questionnaire, which purports to support the theory of Learning Agility. Specifically, Learning Agility attempts to predict an individual's potential performance in new tasks. Construct validity was assessed by examining the convergent/discriminant validity of the Choices Questionnaire against a cognitive ability measure and two personality measures. The Choices Questionnaire tapped a construct distinct from the cognitive ability and personality measures, suggesting that this measure may have considerable value in personnel selection. This study also examined the relationship of this new measure to job performance and job promotability. Results showed that the Choices Questionnaire predicted job performance and job promotability above and beyond cognitive ability and personality. Data from 107 law enforcement officers, along with ratings from two of their co-workers and a supervisor, yielded a correlation of .08 between Learning Agility and cognitive ability. Learning Agility correlated .07 with Learning Goal Orientation and .17 with Performance Goal Orientation. Correlations with the Big Five personality factors ranged from −.06 to .13, for Conscientiousness and Openness to Experience, respectively. Learning Agility correlated .40 with supervisory ratings of job promotability and .37 with supervisory ratings of overall job performance. Hierarchical regression analysis found incremental validity for Learning Agility over cognitive ability and the Big Five factors of personality for supervisory ratings of both promotability and overall job performance. A literature review was completed to integrate the Learning Agility construct into a nomological net of personnel selection research. Additionally, practical applications and future research directions are discussed.
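
As an illustration of the incremental-validity test reported above, here is a minimal sketch in which step 1 enters cognitive ability and the Big Five and step 2 adds Learning Agility, with the R² change tested by a nested-model F-test. The column names and randomly generated data are placeholders, not the study's measures.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 107  # mirrors the study's sample size
df = pd.DataFrame(
    rng.standard_normal((n, 8)),
    columns=["cog", "O", "C", "E", "A", "N", "agility", "promotability"],
)

X1 = sm.add_constant(df[["cog", "O", "C", "E", "A", "N"]])             # step 1
X2 = sm.add_constant(df[["cog", "O", "C", "E", "A", "N", "agility"]])  # step 2
y = df["promotability"]

step1 = sm.OLS(y, X1).fit()
step2 = sm.OLS(y, X2).fit()

# Nested-model F-test for the increment in explained variance
f_stat, p_value, df_diff = step2.compare_f_test(step1)
print(f"Delta R^2 = {step2.rsquared - step1.rsquared:.3f}, "
      f"F = {f_stat:.2f}, p = {p_value:.4f}")
```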

Relevance:

80.00%

Publisher:

Abstract:

This quantitative study investigated the predictive relationships and interaction among factors such as work-related social behaviors (WRSB), self-determination (SD), person-job congruency (PJC), job performance (JP), job satisfaction (JS), and job retention (JR). A convenience sample of 100 working adults with MR was selected from supported employment agencies. Data were collected using a survey test battery of standardized instruments. The hypotheses were tested using three multiple regression analyses to identify significant relationships. Beta weights and hierarchical regression analysis determined each predictor variable's contribution to the total variance of the criterion variables JR, JP, and JS. The findings highlight the importance of self-determination skills in predicting job retention, satisfaction, and performance for employees with MR. Consistent with the literature and the hypothesized model, there was a predictive relationship among SD, JS, and JR. Furthermore, SD and PJC were predictors of JP, and SD and JR were predictors of JS. Interestingly, the results indicated no significant relationship between JR and JP, between JP and JS, or between PJC and JS, suggesting a limited fit between the hypothesized model and the study's findings. However, the theoretical contribution of this study is that self-determination is a particularly relevant predictor of important work outcomes, including JR, JP, and JS. This finding is consistent with Deci's (1992) Self-Determination Theory and Wehmeyer's (1996) argument that SD skills in individuals with disabilities have important consequences for success in transitioning from school to adult and work life. This study provides job retention strategies that offer rehabilitation and HR professionals a useful structure for understanding and implementing job retention interventions for people with MR. The study concluded that workers with mental retardation who had more self-determination skills were employed longer, were more satisfied, and performed better on the job. Also, individuals whose jobs were matched to their interests and abilities (person-job congruency) had stronger self-determination skills.

Relevance:

80.00%

Publisher:

Abstract:

This nonexperimental, correlational study (N = 283) examined the relations among job fit, affective commitment, psychological climate, discretionary effort, intention to turnover, and employee engagement. An internet-based self-report survey battery of six scales was administered to a heterogeneous sample of organizations from the fields of service, technology, healthcare, retail, banking, nonprofit, and hospitality. Hypotheses were tested through correlational and hierarchical regression analytic procedures. Job fit, affective commitment, and psychological climate were all significantly related to employee engagement, and employee engagement was significantly related to both discretionary effort and intention to turnover. For the discretionary effort model, the hierarchical regression results suggested that employees who reported experiencing a positive psychological climate were more likely to report higher levels of discretionary effort. For the intention to turnover model, the hierarchical regression results indicated that affective commitment and employee engagement predicted lower levels of an employee's intention to turnover. The regression beta weights ranged from .43 to .78, supporting the theoretical, empirical, and practical relevance of understanding the impact of employee engagement on organizational outcomes. Implications for HRD theory, research, and practice are highlighted as possible strategic leverage points for creating conditions that facilitate the development of employee engagement as a means of improving organizational performance.

Relevance:

80.00%

Publisher:

Abstract:

Fluorescence properties of whole water samples and molecular characteristics of ultrafiltrated dissolved organic matter (UDOM > 1,000 Da), such as lignin phenol and neutral sugar compositions and 13C nuclear magnetic resonance (NMR) spectra, were determined along a freshwater-to-marine gradient in Everglades National Park. Furthermore, UDOM samples were categorized by hierarchical cluster analysis based on their pyrolysis gas chromatography/mass spectrometry products. Fluorescence properties suggest that autochthonous DOM leached/exuded from biomass is quantitatively important in this system. 13C NMR spectra showed that UDOM from the oligotrophic Taylor Slough (TS) and Florida Bay (FB) ecosystems has low aromatic C (13% ± 3% for TS; 2% ± 2% for FB) and very high O-alkyl C (54% ± 4% for TS; 75% ± 4% for FB) concentrations. High O-alkyl C concentrations in FB suggest seagrass/phytoplankton communities as dominant sources of UDOM. The amount of neutral sugars was not appreciably different between the TS and FB sites (115 ± 12 mg C g C⁻¹ UDOM), but their concentrations suggest a low level of diagenesis and high production rates of this material in this oligotrophic environment. The total yield of lignin phenols (vanillyl + syringyl phenols) in TS was low (0.20–0.39 mg 100 mg C⁻¹ UDOM) compared with other riverine environments, and even lower in FB (0.04–0.07 mg 100 mg C⁻¹ UDOM), which could be a result of photodegradation and/or dilution by other autochthonous DOM. The high O-alkyl and low aromatic nature of this UDOM suggests significant biogenic inputs (as compared with soils) and limited bioavailability in this ecosystem.
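
For readers unfamiliar with the clustering step, the sketch below shows how samples might be grouped by hierarchical cluster analysis of pyrolysis GC/MS product abundances using SciPy; the sample-by-product matrix is randomly generated, not the study's data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
abundances = rng.random((12, 40))  # placeholder: 12 UDOM samples x 40 pyrolysis products

# Normalize each sample's product profile, then apply Ward linkage
profiles = abundances / abundances.sum(axis=1, keepdims=True)
Z = linkage(profiles, method="ward")

# Cut the dendrogram into, e.g., three sample categories
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)
```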

Relevance:

80.00%

Publisher:

Abstract:

Database design is a difficult problem for non-expert designers. It is desirable to assist such designers during the problem-solving process by means of a knowledge-based (KB) system. Although a number of prototype KB systems have been proposed, they have several shortcomings. First, few have incorporated sufficient expertise in modeling relationships, particularly higher-order relationships. Second, there does not appear to be any published empirical study that experimentally tested the effectiveness of these KB tools. Third, the problem-solving behavior of non-experts, whom the systems were intended to assist, has not been one of the bases for system design. In this project, a consulting system for conceptual database design called CODA, which addresses the above shortcomings, was developed and empirically validated. More specifically, the CODA system incorporates (a) findings on why non-experts commit errors and (b) heuristics for modeling relationships. Two approaches to knowledge base implementation were used and compared in this project, namely system restrictiveness and decisional guidance (Silver, 1990). The Restrictive system uses a proscriptive approach and limits the designer's choices at various design phases by forcing him or her to follow a specific design path. The Guidance system, which is less restrictive, provides context-specific, informative, and suggestive guidance throughout the design process. Both approaches aim to prevent erroneous design decisions. The main objectives of the study were to evaluate (1) whether a knowledge-based system is more effective than a system without a knowledge base and (2) which approach to knowledge implementation, Restrictive or Guidance, is more effective. To evaluate the effectiveness of the knowledge base itself, the systems were compared with a system that does not incorporate the expertise (Control). An experimental procedure using student subjects was used to test the effectiveness of the systems. The subjects solved one task without using a system (pre-treatment task) and another task using one of the three systems, viz. Control, Guidance, or Restrictive (experimental task). Analysis of the experimental task scores of those subjects who performed satisfactorily on the pre-treatment task revealed that the knowledge-based approach to database design support led to more accurate solutions than the control system. Of the two KB approaches, the Guidance approach was found to lead to better performance when compared with the Control system. The subjects also perceived the Restrictive system to be easier to use than the Guidance system.

Relevance:

40.00%

Publisher:

Abstract:

For years, researchers and human resources specialists have been searching for predictors of performance as well as for relevant performance dimensions (Barrick & Mount, 1991; Borman & Motowidlo, 1993; Campbell, 1990; Viswesvaran et al., 1996). In 1993, Borman and Motowidlo provided a framework by which traditional predictors such as cognitive ability and the Big Five personality factors predicted two different facets of performance: (1) task performance and (2) contextual performance. A meta-analysis was conducted to assess the validity of this model as well as that of other modified models. The relationships between predictors such as cognitive ability and personality variables and the two outcome variables were assessed. It was determined that even though the two facets of performance may be conceptually different, empirically they overlapped substantially (ρ = .75). Finally, results show some evidence for both cognitive ability and conscientiousness as predictors of both task and contextual performance. The possible mediation of predictor–criterion relationships was also assessed: the relationship between cognitive ability and contextual performance vanished when task performance was controlled.
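
A first-order partial correlation is one standard way to perform the control analysis described in the last sentence. The sketch below uses illustrative correlations (not the meta-analytic estimates) chosen so that the cognitive ability-contextual performance relationship vanishes once task performance is partialed out.

```python
import math

def partial_r(r_xy, r_xz, r_yz):
    """First-order partial correlation r_xy.z: the x-y relation with z controlled."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# x = cognitive ability, y = contextual performance, z = task performance
r_xy, r_xz, r_yz = 0.30, 0.50, 0.60  # illustrative values only
print(f"r_xy.z = {partial_r(r_xy, r_xz, r_yz):.3f}")  # -> 0.000
```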

Relevance:

30.00%

Publisher:

Abstract:

Microarray technology provides a high-throughput technique for studying gene expression. Microarrays can help us diagnose different types of cancers, understand biological processes, assess host responses to drugs and pathogens, find markers for specific diseases, and much more. Microarray experiments generate large amounts of data; thus, effective data processing and analysis are critical for making reliable inferences from the data. The first part of the dissertation addresses the problem of finding an optimal set of genes (biomarkers) to classify a set of samples as diseased or normal. Three statistical gene selection methods (GS, GS-NR, and GS-PCA) were developed to identify a set of genes that best differentiate between samples. A comparative study of different classification tools was performed, and the best combinations of gene selection and classifiers for multi-class cancer classification were identified. For most of the benchmark cancer data sets, the gene selection method proposed in this dissertation, GS, outperformed the other gene selection methods. Classifiers based on Random Forests, neural network ensembles, and K-nearest neighbors (KNN) showed consistently good performance. A striking commonality among these classifiers is that they all use a committee-based approach, suggesting that ensemble classification methods are superior. The same biological problem may be studied at different research labs and/or performed using different lab protocols or samples. In such situations, it is important to combine results from these efforts. The second part of the dissertation addresses the problem of pooling the results from different independent experiments to obtain improved results. Four statistical pooling techniques (Fisher's inverse chi-square method, the Logit method, Stouffer's Z transform method, and the Liptak-Stouffer weighted Z-method) were investigated in this dissertation. These pooling techniques were applied to the problem of identifying cell cycle-regulated genes in two different yeast species, and improved sets of cell cycle-regulated genes were identified. The last part of the dissertation explores the effectiveness of wavelet data transforms for the task of clustering. Discrete wavelet transforms, with an appropriate choice of wavelet bases, were shown to be effective in producing clusters that were biologically more meaningful.
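
Two of the pooling techniques named above are available directly in SciPy, as the sketch below shows: Fisher's inverse chi-square method and Stouffer's Z, where supplying weights (e.g., sample sizes) gives the Liptak-Stouffer weighted variant. The per-gene p-values are illustrative.

```python
from scipy.stats import combine_pvalues

p_per_experiment = [0.04, 0.10, 0.20]  # one gene's p-values from three experiments

stat_f, p_fisher = combine_pvalues(p_per_experiment, method="fisher")
stat_s, p_liptak = combine_pvalues(
    p_per_experiment, method="stouffer", weights=[30, 20, 10]  # e.g., sample sizes
)
print(f"Fisher pooled p:          {p_fisher:.4f}")
print(f"Liptak-Stouffer pooled p: {p_liptak:.4f}")
```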

Relevance:

30.00%

Publisher:

Abstract:

This dissertation establishes a novel data-driven method to identify language network activation patterns in pediatric epilepsy through the use of Principal Component Analysis (PCA) on functional magnetic resonance imaging (fMRI). A total of 122 subjects' data sets from five different hospitals were included in the study through a web-based repository site designed at FIU. Research was conducted to evaluate different classification and clustering techniques for identifying hidden activation patterns and their associations with meaningful clinical variables. The results were assessed through agreement analysis with the conventional methods of the lateralization index (LI) and visual rating. What is unique in this approach is the new mechanism designed for projecting language network patterns into the PCA-based decisional space. Synthetic activation maps were randomly generated from real data sets to establish nonlinear decision functions (NDF), which are then used to classify any new fMRI activation map as typical or atypical. The best nonlinear classifier was obtained in a 4D space with a complexity (nonlinearity) degree of 7. Based on the significant association of language dominance and intensities with the top eigenvectors of the PCA decisional space, a new algorithm was deployed to delineate primary cluster members without intensity normalization. Three distinct activation patterns (groups) were identified (averaged kappa of 0.65 with visual rating, 0.76 with LI) and were characterized by: (1) the left inferior frontal gyrus (IFG) and left superior temporal gyrus (STG), considered typical for the language task; (2) the IFG, left mesial frontal lobe, and right cerebellum regions, representing a variant left-dominant pattern with higher activation; and (3) the right homologues of the first pattern in Broca's and Wernicke's language areas. Interestingly, group 2 was found to reflect a language compensation mechanism different from reorganization: its high-intensity activation suggests a possible remote effect of a right-hemisphere focus on traditionally left-lateralized functions. In retrospect, this data-driven method provides new insights into mechanisms of brain compensation/reorganization and neural plasticity in pediatric epilepsy.
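
A rough sketch of the pipeline's core idea follows: vectorized activation maps are projected into a 4D PCA decisional space, and a nonlinear classifier separates typical from atypical patterns. The random data, labels, and the choice of a degree-7 polynomial SVM (echoing the reported complexity degree of 7) are assumptions for illustration, not the dissertation's exact decision functions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
maps = rng.standard_normal((122, 5000))   # placeholder: 122 subjects x 5000 voxels
typical = rng.integers(0, 2, size=122)    # placeholder labels: 1 = typical

pca = PCA(n_components=4)                 # 4D decisional space, as in the study
scores = pca.fit_transform(maps)

# Nonlinear decision function; degree 7 echoes the reported nonlinearity degree
clf = SVC(kernel="poly", degree=7).fit(scores, typical)

new_map = rng.standard_normal((1, 5000))  # an unseen activation map
print(clf.predict(pca.transform(new_map)))
```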

Relevance:

30.00%

Publisher:

Abstract:

Each disaster presents a unique set of characteristics that are hard to determine a priori. Disaster management tasks are therefore inherently uncertain, requiring knowledge sharing and quick decision making that involve coordination across different levels and collaborators. While there has been increasing interest among both researchers and practitioners in utilizing knowledge management to improve disaster management, little research has been reported on how to assess the dynamic nature of disaster management tasks and what kinds of knowledge sharing are appropriate for different dimensions of task uncertainty. Using a combination of qualitative and quantitative methods, this study developed dimensions, and corresponding measures, of the uncertain dynamic characteristics of disaster management tasks and tested the relationships between those dimensions and task performance through the moderating and mediating effects of knowledge sharing. The research conceptualized and assessed task uncertainty along three dimensions: novelty, unanalyzability, and significance; knowledge sharing along two dimensions: knowledge sharing purposes and knowledge sharing mechanisms; and task performance along two dimensions: task effectiveness and task efficiency. Analysis of survey data collected from Miami-Dade County emergency managers suggested that knowledge sharing purposes and knowledge sharing mechanisms moderate and mediate the relationship between uncertain dynamic disaster management tasks and task performance. Implications for research and practice, as well as directions for future research, are discussed.
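
As a hedged sketch of the moderation test described above, the regression below includes a task-uncertainty dimension, a knowledge-sharing measure, and their product term; the interaction coefficient carries the moderating effect. Variable names and the simulated data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 80  # placeholder sample of emergency managers
novelty = rng.standard_normal(n)   # task uncertainty: novelty dimension
ks_mech = rng.standard_normal(n)   # knowledge sharing mechanisms
effectiveness = (0.3 * novelty + 0.4 * ks_mech
                 + 0.2 * novelty * ks_mech + rng.standard_normal(n))

df = pd.DataFrame({"novelty": novelty, "ks_mech": ks_mech,
                   "effectiveness": effectiveness})

# "novelty * ks_mech" expands to both main effects plus their product;
# the novelty:ks_mech coefficient tests moderation
model = smf.ols("effectiveness ~ novelty * ks_mech", data=df).fit()
print(model.params)
```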

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this study was to better understand the study behaviors and habits of university undergraduate students. It was designed to determine whether undergraduate students could be grouped based on their self-reported study behaviors and, if any grouping system could be determined, whether group membership was related to students’ academic achievement. A total of 152 undergraduate students voluntarily participated in the study by completing the Study Behavior Inventory instrument. All participants were enrolled in the fall semester of 2010 at Florida International University. The Q factor analysis technique, using principal components extraction and a varimax rotation, was used to examine the participants in relation to each other and to detect a pattern of intercorrelations among participants based on their self-reported study behaviors. The Q factor analysis yielded a two-factor structure representing two distinct student types regarding study behaviors. The first student type (i.e., Factor 1) describes proactive learners who organize both their study materials and study time well. Type 1 students are labeled “Proactive Learners with Well-Organized Study Behaviors”. The second type (i.e., Factor 2) represents students who are poorly organized and very likely to procrastinate. Type 2 students are labeled “Disorganized Procrastinators”. Hierarchical linear regression was employed to examine the relationship between student type and academic achievement as measured by current grade point averages (GPAs). The results showed significant differences in GPAs between Type 1 and Type 2 students at the .05 significance level. Furthermore, student type was found to be a significant predictor of academic achievement above and beyond students’ attribute variables, including sex, age, major, and enrollment status. The study has several implications for educational researchers, practitioners, and policy makers in terms of improving college students' learning behaviors and outcomes.
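
The defining move of the Q technique is factoring persons rather than items. The sketch below transposes a students-by-items matrix so that students load on the factors, with scikit-learn's FactorAnalysis (varimax rotation) standing in for the principal components extraction the study reports; the data and the item count are placeholders.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
sbi = rng.random((152, 46))  # placeholder: 152 students x 46 inventory items

# Q factor analysis: factor persons, not items -> fit on the transposed matrix
fa = FactorAnalysis(n_components=2, rotation="varimax")
fa.fit(sbi.T)

loadings = fa.components_.T                     # one loading row per student
student_type = np.abs(loadings).argmax(axis=1)  # assign students to factors
print(np.bincount(student_type))                # size of each student type
```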

Relevance:

30.00%

Publisher:

Abstract:

Inverters play a key role in connecting sustainable energy (SE) sources to local loads and the ac grid. Although there has been rapid expansion in the use of renewable sources in recent years, fundamental research on the design of inverters specialized for these systems is still needed. Recent advances in power electronics have led to new topologies and switching patterns for single-stage power conversion appropriate for SE sources and energy storage devices. The current source inverter (CSI) topology, along with a newly proposed switching pattern, is capable of converting a low dc voltage to line ac in only one stage. Simple implementation and high reliability, together with the potential advantages of higher efficiency and lower cost, make the so-called single-stage boost inverter (SSBI) a viable competitor to existing SE-based power conversion technologies. The dynamic model is one of the most essential requirements for performance analysis and control design of any engineering system. Thus, in order to achieve satisfactory operation, it is necessary to derive a dynamic model for the SSBI system. However, because of the switching behavior and nonlinear elements involved, analysis of the SSBI is a complicated task. This research applies the state-space averaging technique to the SSBI to develop state-space-averaged models under stand-alone and grid-connected modes of operation. A small-signal model is then derived by means of the perturbation and linearization method. An experimental hardware setup, including a laboratory-scale prototype SSBI, was built, and the validity of the obtained models is verified through simulation and experiments. Finally, an eigenvalue sensitivity analysis is performed to investigate the stability and dynamic behavior of the SSBI system over a typical range of operation.
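
The averaging step at the heart of the technique can be stated compactly: the state matrices of the two switch states are weighted by the duty cycle to form one averaged model, whose eigenvalues indicate small-signal behavior near an operating point. The matrices below are illustrative placeholders, not the SSBI's actual dynamics.

```python
import numpy as np

# Placeholder switched-state models (state 1: switch on, state 2: switch off)
A1 = np.array([[0.0, -1.0], [1.0, -0.5]])
A2 = np.array([[0.0, -2.0], [2.0, -0.5]])
B1 = np.array([[1.0], [0.0]])
B2 = np.array([[0.5], [0.0]])

d = 0.6  # duty cycle

# State-space-averaged model over one switching period: dx/dt = A_avg x + B_avg u
A_avg = d * A1 + (1 - d) * A2
B_avg = d * B1 + (1 - d) * B2

# Eigenvalues of A_avg (negative real parts suggest local stability)
print(np.linalg.eigvals(A_avg))
```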

Relevance:

30.00%

Publisher:

Abstract:

In their dialogue entitled - The Food Service Industry Environment: Market Volatility Analysis - Alex F. De Noble, Assistant Professor of Management, San Diego State University, and Michael D. Olsen, Associate Professor and Director, Division of Hotel, Restaurant & Institutional Management at Virginia Polytechnic Institute and State University, preface the discussion by saying: “Hospitality executives, as a whole, do not believe they exist in a volatile environment and spend little time or effort in assessing how current and future activity in the environment will affect their success or failure. The authors highlight potential differences that may exist between executives' perceptions and objective indicators of environmental volatility within the hospitality industry and suggest that executives change these perceptions by incorporating the assumption of a much more dynamic environment into their future strategic planning efforts. Objective, empirical evidence of the dynamic nature of the hospitality environment is presented and compared to several studies pertaining to environmental perceptions of the industry.” That weighty thesis statement presumes that hospitality executives/managers do not fully comprehend the environment in which they operate. The authors provide a contrast, which conventional wisdom would seem to support and satisfy. “Broadly speaking, the operating environment of an organization is represented by its task domain,” say the authors. “This task domain consists of such elements as a firm's customers, suppliers, competitors, and regulatory groups.” These are dynamic actors and the underpinnings of change, say the authors by way of citation. “The most difficult aspect for management in this regard tends to be the development of a proper definition of the environment of their particular firm. Being able to precisely define who the customers, competitors, suppliers, and regulatory groups are within the environment of the firm is no easy task, yet is imperative if proper planning is to occur,” De Noble and Olsen add in support of their thesis statement. The article is packed, and that's not necessarily a bad thing, with tables, both survey-based and empirically driven, to illustrate market volatility. One such table is the Bates and Eldredge outline, Table 6 in the article. “This comprehensive outline…should prove to be useful to most executives in expanding their perception of the environment of their firm,” say De Noble and Olsen. “It is, however, only a suggested outline,” they advise. “…risk should be incorporated into every investment decision, especially in a volatile environment,” say the authors. De Noble and Olsen close with an intriguing formula for gauging volatility in an environment.

Relevance:

30.00%

Publisher:

Abstract:

Investors and developers are often faced with the task of determining the worth or value of a real estate entity that presently exists or is proposed for development. This article explains the process for determining the value of a proposed project and, subsequently, the maximum investment dollars the project can cover, while at the same time producing a reasonable return for the investor. A proposed 300-room hotel serves as the real estate entity to be analyzed.
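
A minimal sketch of the kind of arithmetic such an analysis involves is below: capitalize stabilized net operating income to estimate value, which also bounds the investment the project can support at the investor's required return. Every figure is an illustrative assumption, not the article's 300-room example.

```python
rooms = 300
adr = 120.0        # average daily rate, assumed
occupancy = 0.70   # assumed stabilized occupancy
rooms_revenue = rooms * adr * occupancy * 365

# NOI as a share of rooms revenue (a full pro forma would add other revenue lines)
noi_margin = 0.30
noi = rooms_revenue * noi_margin

cap_rate = 0.10         # investor's required overall return, assumed
value = noi / cap_rate  # direct capitalization

print(f"Estimated value / maximum supportable investment: ${value:,.0f}")
```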

Relevance:

30.00%

Publisher:

Abstract:

This is an empirical study whose purpose was to examine the process of innovation adoption as an adaptive response by a public organization and its subunits operating under varying degrees of environmental uncertainty. Meshing organizational innovation research and contingency theory to form a theoretical framework, an exploratory case study was undertaken in a large metropolitan government located in an area with the fourth-highest prevalence rate of HIV/AIDS in the country. A number of environmental and organizational factors were examined for their influence on decision making in the adoption or non-adoption, as well as implementation, of AIDS-related policies, practices, and programs. The major findings of the study are as follows. For the county government itself (macro level), no AIDS-specific workplace policies have been adopted. AIDS activities (AIDS education, an AIDS Task Force, an AIDS Coordinator, etc.), adopted county-wide early in the epidemic, have all been abandoned. Worker infection rates, in the aggregate and throughout the epidemic, have been small. As a result, with co-worker conflict absent (isolated and negligible), no increase in employee health care costs, no litigation regarding discrimination, and no major impact on workforce productivity, AIDS has essentially become a non-issue at the strategic core of the organization. At the departmental level, policy adoption decisions varied widely. Here the predominant issue is occupational risk, both objective and perceived. As expected, more AIDS-related activities (policies, practices, and programs) were found in departments with workers known to have significant risk of exposure to the AIDS virus (fire rescue, medical examiner, police, etc.). AIDS-specific policies, in the form of OSHA's Bloodborne Pathogens Standard, were adopted primarily because they were legislatively mandated. Union participation varied widely, although not necessarily based on worker risk; in several departments, the union was a primary factor in bringing about adoption decisions. Additional factors identified include the organizational presence of AIDS expertise, the availability of slack resources, and the existence of a policy champion. Other variables, such as subunit size, centralization of decision making, and formalization, were not consistent factors explaining adoption decisions.

Relevance:

30.00%

Publisher:

Abstract:

Given the growing number of wrongful convictions involving faulty eyewitness evidence and the strong reliance by jurors on eyewitness testimony, researchers have sought to develop safeguards to decrease erroneous identifications. While decades of eyewitness research have led to numerous recommendations for the collection of eyewitness evidence, less is known about the psychological processes that govern identification responses. The purpose of the current research was to expand theoretical knowledge of eyewitness identification decisions by exploring two separate memory theories: signal detection theory and dual-process theory. This was accomplished by examining both system and estimator variables in the context of a novel lineup recognition paradigm. Both theories were also examined in conjunction with confidence to determine whether it might add significantly to the understanding of eyewitness memory. In two separate experiments, both an encoding and a retrieval-based manipulation were chosen to examine the application of theory to eyewitness identification decisions. Dual-process estimates were measured through the use of remember-know judgments (Gardiner & Richardson-Klavehn, 2000). In Experiment 1, the effects of divided attention and lineup presentation format (simultaneous vs. sequential) were examined. In Experiment 2, perceptual distance and lineup response deadline were examined. Overall, the results indicated that discrimination and remember judgments (recollection) were generally affected by variations in encoding quality, while response criterion and know judgments (familiarity) were generally affected by variations in retrieval options. Specifically, as encoding quality improved, discrimination ability and judgments of recollection increased; and as the retrieval task became more difficult, there was a shift toward lenient choosing and greater reliance on familiarity. The application of signal detection theory and dual-process theory in the current experiments produced predictable results for both system and estimator variables. These theories were also compared with measures of general confidence, calibration, and diagnosticity. Applying the additional confidence measures in conjunction with signal detection theory and dual-process theory gave a more in-depth explanation than either theory alone. Therefore, the general conclusion is that eyewitness identifications can be understood more completely by applying theory and examining confidence. Future directions and policy implications are discussed.
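
For readers new to the framework, the sketch below computes the two signal detection quantities the abstract's findings turn on: discrimination (d') and response criterion (c), from hit and false-alarm rates. The rates are illustrative.

```python
from scipy.stats import norm

hit_rate = 0.80          # culprit identified in target-present lineups
false_alarm_rate = 0.10  # innocent suspect identified in target-absent lineups

z_hit = norm.ppf(hit_rate)
z_fa = norm.ppf(false_alarm_rate)

d_prime = z_hit - z_fa             # discrimination ability
criterion = -0.5 * (z_hit + z_fa)  # positive = conservative choosing

print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```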