20 results for Mixed network former effect
in Digital Commons at Florida International University
Abstract:
This ex post facto study (N = 209) examined the relationships between employer job strategies and job retention among organizations participating in Florida welfare-to-work network programs and associated the strategies with job retention data to determine best practices. An internet-based self-report survey battery was administered to a heterogeneous sample of organizations participating in the Florida welfare-to-work network program. Hypotheses were tested through correlational and hierarchical regression analytic procedures. The partial correlation results linked each of the job retention strategies to job retention. Wages, benefits, training and supervision, communication, job growth, work/life balance, fairness, and respect were all significantly related to job retention. Hierarchical regression results indicated that the training and supervision variable was the best predictor of job retention in the regression equation. The size of the organization was also a significant predictor of job retention. Large organizations reported higher job retention rates than small organizations. There was no statistical difference between the types of organizations (profit-making and non-profit) in job retention. The standardized betas ranged from .26 to .41 in the regression equation. Twenty percent of the variance in job retention was explained by the combination of demographic and job retention strategy predictors, supporting the theoretical, empirical, and practical relevance of understanding the association between employer job strategies and job retention outcomes. Implications for adult education and human resource development theory, research, and practice are highlighted as possible strategic leverage points for creating conditions that facilitate the development of job strategies as a means for improving former welfare workers' job retention.
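The hierarchical regression step described above, entering a demographic block first and a strategy block second and reading off the incremental variance explained, can be sketched with synthetic data. All variable names, effect sizes, and values below are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 209   # sample size matching the study

# Hypothetical variables: one demographic predictor (organization size)
# and one strategy predictor (training and supervision).
org_size = rng.normal(size=n)
training = rng.normal(size=n)
retention = 0.3 * org_size + 0.4 * training + rng.normal(size=n)

def r_squared(X, y):
    """R^2 of an ordinary least squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    tot = y - y.mean()
    return 1.0 - (resid @ resid) / (tot @ tot)

# Step 1 enters the demographic block; step 2 adds the strategy block.
r2_step1 = r_squared(org_size[:, None], retention)
r2_step2 = r_squared(np.column_stack([org_size, training]), retention)
delta_r2 = r2_step2 - r2_step1   # incremental variance explained
```

The increment `delta_r2` plays the role of the study's reported share of variance explained by adding the strategy predictors over the demographics alone.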
Abstract:
As traffic congestion worsens and new roadway construction is severely constrained by the limited availability of land, the high cost of land acquisition, and communities' opposition to the building of major roads, new solutions have to be sought to either make roadway use more efficient or reduce travel demand. There is general agreement that travel demand is affected by land use patterns. However, traditional aggregate four-step models, which remain the prevailing modeling approach, assume when estimating trip generation that traffic conditions will not affect people's decisions on whether to make a trip. Existing survey data indicate, however, that trip rates differ across geographic areas. The reasons for such differences have not been carefully studied, and attempts to quantify the influence of land use on travel demand beyond employment, households, and their characteristics have had limited success in producing results useful to the traditional four-step models. There may be a number of reasons: the representation of the influence of land use on travel demand is aggregate rather than explicit, and land use variables such as density and mix, and accessibility as measured by travel time and congestion, have not been adequately considered. This research employs the artificial neural network technique to investigate the potential effects of land use and accessibility on trip productions. Sixty-two variables that may potentially influence trip production are studied. These variables include demographic, socioeconomic, land use, and accessibility variables. Different architectures of ANN models are tested. Sensitivity analysis of the models shows that land use does have an effect on trip production, as does traffic condition. The ANN models are compared with linear regression models and cross-classification models using the same data.
The results show that the ANN models outperform the linear regression and cross-classification models in terms of RMSE. Future work may focus on finding a representation of traffic condition based on existing network and population data that would be available when the variables are needed for prediction.
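As a rough illustration of the model comparison described above, the sketch below fits a linear regression and a small one-hidden-layer network to the same synthetic zonal data and compares RMSE. The data, network size, and training schedule are assumptions for illustration, not the study's 62-variable models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical zonal data with two of the many candidate variables
# (say, households and an accessibility index).
X = rng.uniform(0.0, 1.0, size=(200, 2))
# Trip productions with a mild nonlinearity, so the ANN has something
# a linear model cannot capture.
y = 3.0 * X[:, 0] + np.sin(3.0 * X[:, 1]) + rng.normal(0.0, 0.1, 200)

def rmse(pred, target):
    return float(np.sqrt(np.mean((pred - target) ** 2)))

# Baseline: ordinary linear regression with an intercept.
A = np.column_stack([np.ones(len(y)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
rmse_lin = rmse(A @ beta, y)

# One hidden layer of tanh units, trained by full-batch gradient descent.
h, lr = 8, 0.05
W1 = rng.normal(0.0, 0.5, (2, h)); b1 = np.zeros(h)
W2 = rng.normal(0.0, 0.5, h);      b2 = 0.0
for _ in range(3000):
    Z = np.tanh(X @ W1 + b1)                 # hidden activations
    err = Z @ W2 + b2 - y                    # prediction error
    dZ = np.outer(err, W2) * (1.0 - Z ** 2)  # backprop through tanh
    W2 -= lr * (Z.T @ err) / len(y); b2 -= lr * err.mean()
    W1 -= lr * (X.T @ dZ) / len(y);  b1 -= lr * dZ.mean(axis=0)
rmse_ann = rmse(np.tanh(X @ W1 + b1) @ W2 + b2, y)
```

On data with a nonlinear component like this, `rmse_ann` comes out below `rmse_lin`, mirroring the abstract's finding.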
Abstract:
As traffic congestion continues to worsen in large urban areas, solutions are urgently sought. However, transportation planning models, which estimate traffic volumes on transportation network links, are often unable to realistically consider travel time delays at intersections. Introducing signal controls in models often results in significant and unstable changes in network attributes, which, in turn, leads to instability of the models. Ignoring the effect of delays at intersections makes the model output inaccurate and unable to predict travel time. To represent traffic conditions in a network more accurately, planning models should be capable of arriving at a network solution based on travel costs that are consistent with the intersection delays due to signal controls. This research attempts to achieve this goal by optimizing signal controls and estimating intersection delays accordingly, which are then used in traffic assignment. Simultaneous optimization of traffic routing and signal controls has not been accomplished in real-world applications of traffic assignment. To this end, a delay model dealing with five major types of intersections has been developed using artificial neural networks (ANNs). An ANN architecture consists of interconnected artificial neurons and may be used either to gain an understanding of biological neural networks or to solve artificial intelligence problems without necessarily creating a model of a real biological system. The ANN delay model has been trained using extensive simulations based on TRANSYT-7F signal optimizations. The delay estimates produced by the ANN delay model have percentage root-mean-squared errors (%RMSE) of less than 25.6%, which is satisfactory for planning purposes. Larger prediction errors are typically associated with severely oversaturated conditions.
A combined system has also been developed that includes the artificial neural network (ANN) delay estimating model and a user-equilibrium (UE) traffic assignment model. The combined system employs the Frank-Wolfe method to achieve a convergent solution. Because the ANN delay model provides no derivatives of the delay function, a Mesh Adaptive Direct Search (MADS) method is applied to assist in and expedite the iterative process of the Frank-Wolfe method. The performance of the combined system confirms that the convergence of the solution is achieved, although the global optimum may not be guaranteed.
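A minimal sketch of the combined-assignment idea on a toy two-route network: route times come from a black-box delay function (standing in for the ANN model, which supplies no derivatives), and the Frank-Wolfe iteration uses a predetermined step size rather than a derivative-based line search (the actual system uses MADS to assist that step). All numbers and curves below are hypothetical.

```python
# Demand between one origin-destination pair, split over two routes.
d = 10.0

def delay(route, flow):
    # Hypothetical BPR-style congestion curves, standing in for the
    # trained ANN delay model (treated as a black box, no derivatives).
    free, cap = [(1.0, 6.0), (1.5, 8.0)][route]
    return free * (1.0 + 0.15 * (flow / cap) ** 4)

x = [d, 0.0]                    # start with all demand on route 0
for k in range(200):
    t = [delay(0, x[0]), delay(1, x[1])]
    # All-or-nothing assignment toward the currently cheaper route.
    y = [d, 0.0] if t[0] <= t[1] else [0.0, d]
    step = 2.0 / (k + 2)        # predetermined step size: no derivatives
    x = [xi + step * (yi - xi) for xi, yi in zip(x, y)]

# At user equilibrium, both used routes have (nearly) equal travel times.
t0, t1 = delay(0, x[0]), delay(1, x[1])
```

The near-equality of `t0` and `t1` at convergence is the user-equilibrium condition the combined system seeks; as the abstract notes, convergence does not imply a global optimum.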
Abstract:
The purpose of the study was to measure gains in the development of elementary education teachers' reading expertise, to determine if there was a differential gain in reading expertise, and, last, to examine their perceptions of acquiring reading expertise. This research is needed in the field of teacher education, specifically in the field of reading. A quasi-experimental, pretest-posttest, repeated-measures design with a comparison group, using mixed methods, was utilized. Quantitative data analysis measured the development of reading expertise of elementary preservice teachers compared to early childhood preservice teachers and was used to examine the differential gains in reading expertise. A multivariate analysis of variance (MANOVA) was conducted on pre- and posttest responses to a Protocol of Questions. Further analysis was conducted on five variables (miscue analysis, fluency analysis, data analysis, inquiry orientation, and intelligent action) using univariate analyses of variance (ANOVA). A one-way ANOVA was carried out on gain scores of the low and middle groups of elementary education preservice teachers. Qualitative data analysis as suggested by Merriam (1989) and Miles and Huberman (1994) was used to determine if the elementary education preservice teachers perceived they had acquired the expertise to teach reading. Elementary education preservice teachers who participated in a supervised clinical practicum made significant gains in their development of reading expertise, whereas early childhood preservice teachers did not make significant gains. Elementary education preservice teachers who were in the low and middle third levels of expertise at pretest demonstrated significant gains in reading expertise. Last, elementary education preservice teachers perceived they had acquired the expertise to teach reading.
The study concluded that reading expertise can be developed in elementary education preservice teachers through participation in a supervised clinical practicum. The findings support the idea that preservice teachers who will be teaching reading to elementary students would benefit from a supervised clinical practicum.
Abstract:
This dissertation establishes a novel data-driven method to identify language network activation patterns in pediatric epilepsy through the use of Principal Component Analysis (PCA) on functional magnetic resonance imaging (fMRI). A total of 122 subjects' data sets from five different hospitals were included in the study through a web-based repository site designed at FIU. Research was conducted to evaluate different classification and clustering techniques in identifying hidden activation patterns and their associations with meaningful clinical variables. The results were assessed through agreement analysis with the conventional methods of lateralization index (LI) and visual rating. What is unique in this approach is the new mechanism designed for projecting language network patterns in the PCA-based decisional space. Synthetic activation maps were randomly generated from real data sets to uniquely establish nonlinear decision functions (NDF), which are then used to classify any new fMRI activation map as typical or atypical. The best nonlinear classifier was obtained in a 4D space with a complexity (nonlinearity) degree of 7. Based on the significant association of language dominance and intensities with the top eigenvectors of the PCA decisional space, a new algorithm was deployed to delineate primary cluster members without intensity normalization. In this case, three distinct activation patterns (groups) were identified (averaged kappa with rating 0.65, with LI 0.76) and were characterized by the regions of: (1) the left inferior frontal gyrus (IFG) and left superior temporal gyrus (STG), considered typical for the language task; (2) the IFG, left mesial frontal lobe, and right cerebellum regions, representing a variant left-dominant pattern with higher activation; and (3) the right homologues of the first pattern in Broca's and Wernicke's language areas.
Interestingly, group 2 was found to reflect a language compensation mechanism different from reorganization. Its high-intensity activation suggests a possible remote effect of the right-hemisphere focus on traditionally left-lateralized functions. Overall, this data-driven method provides new insights into mechanisms of brain compensation/reorganization and neural plasticity in pediatric epilepsy.
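The projection step of the PCA-based decisional space can be sketched on synthetic data: flattened "activation maps" are projected onto the top eigenvectors, and group membership is read off the leading component. The data, dimensions, and simple sign test below are illustrative assumptions; the study's actual classifier was a degree-7 nonlinear decision function in a 4D space.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical flattened "activation maps": 60 subjects x 500 voxels.
# Two synthetic groups share a latent spatial pattern with opposite
# sign, a crude stand-in for typical vs. atypical lateralization.
pattern = rng.normal(size=500)
group_a = rng.normal(0.0, 1.0, (30, 500)) + 2.0 * pattern
group_b = rng.normal(0.0, 1.0, (30, 500)) - 2.0 * pattern
maps = np.vstack([group_a, group_b])

# PCA via SVD of the centered data; rows of Vt are the principal axes
# (eigenvectors of the voxel-space covariance).
centered = maps - maps.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ Vt[:4].T   # coordinates in a top-4 "decisional space"

# The leading component recovers the latent pattern, so a sign test
# on it separates the two synthetic groups.
side = scores[:, 0] > 0
separation = abs(side[:30].mean() - side[30:].mean())
```

With a strong shared pattern like this, the first component separates the groups completely; the study's harder, real-data version is what motivates the nonlinear decision functions.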
Abstract:
We investigated the combined effects of salinity and hydroperiod on seedlings of Rhizophora mangle and Laguncularia racemosa grown under experimental conditions of monoculture and mixed culture using a simulated tidal system. The objective was to test hypotheses about species interactions under either tidal or permanent flooding at salinities of 10 or 40 g/l. Four-month-old seedlings were experimentally manipulated under these environmental conditions in two types of species interactions: (1) seedlings of the same species were grown separately in containers from September 2000 to August 2001 to evaluate intraspecific responses, and (2) seedlings of each species were mixed in containers from August 2002 to April 2003 to evaluate interspecific, competitive responses. Overall, L. racemosa was strongly sensitive to treatment combinations while R. mangle showed little effect. Most plant responses of L. racemosa were affected by both salinity and hydroperiod, with hydroperiod inducing more effects than salinity. Compared to R. mangle, L. racemosa in all treatment combinations had higher relative growth rate, leaf area ratio, specific leaf area, stem elongation, total length of branches, net primary production, and stem height. Rhizophora mangle had higher biomass allocation to roots. Species growth differentiation was more pronounced at low salinity, with few species differences at high salinity under permanent flooding. These results suggest that under low to mild stress from hydroperiod and salinity, L. racemosa exhibits responses that favor its competitive dominance over R. mangle. This advantage, however, is strongly reduced as stress from salinity and hydroperiod increases.
Abstract:
This dissertation explored the capacity of business group diversification to generate value for affiliates in an institutional environment characterized by the adoption of structural pro-market reforms. In particular, the three empirical essays explored the impact of business group diversification on the internationalization process of the affiliates. The first essay examined the direct effect of business group diversification on firm performance and its moderating effect on the multinationality-performance relationship. It further explored whether such a moderating effect varies depending upon whether the focal affiliate is a manufacturing or service firm. The findings suggested that the benefits of business group diversification on firm performance have a threshold, that those benefits are significant at earlier stages of internationalization, and that these benefits are stronger for service firms. The second essay studied the capacity of business group diversification to ameliorate the negative effects of the added complexity faced by affiliates when they internationalize. The essay explored this capacity across different dimensions of international complexity. The results indicated that business group diversification effectively ameliorated the effects of the added international complexity. This positive effect is stronger in the institutional voids dimension rather than the societal complexity dimension. In the former dimension, diversified business groups can use both their non-market resources and previous experience to ameliorate the effects of complexity on firm performance. The last essay explored whether the benefits of business group diversification on the scope-performance relationship vary depending on the level of development of the network of subsidiaries and the region of operation of the focal firm.
The results suggested that the benefits of business group diversification are location-bound within the region but are not related to the level of development of the targeted countries. The three essays use longitudinal analyses of a sample of Latin American firms to test the hypotheses. While the first essay used multilevel models and fixed-effects models, the last two essays used exclusively fixed-effects models to assess the impact of business group diversification. In conclusion, this dissertation aimed to explain the capacity of business group diversification to generate value under conditions of institutional change.
Abstract:
The trend of green consumerism and the increased standardization of environmental regulations have driven multinational corporations (MNCs) to seek standardization of environmental practices or at least to be associated with such behavior. In fact, many firms are seeking to free ride on this global green movement without having the actual ecological footprint to substantiate their environmental claims. While scholars have articulated the benefits of such optimization of uniform global green operations, the challenges MNCs face in controlling and implementing such operations are understudied. For firms to translate environmental commitment into actual performance, the obstacles are substantial, particularly for the MNC. This is attributed to headquarters' (HQ) control challenges in (1) managing core elements of the corporate environmental management (CEM) process, specifically matching verbal commitment and policy with ecological performance, and (2) the fact that the MNC operates in multiple markets and the HQ must implement policy across complex subsidiary networks consisting of diverse and distant units. Drawing from the literature on HQ challenges of MNC management and control, this study examines (1) how core components of the CEM process affect optimization of global environmental performance (GEP) and then uses network theory to examine (2) how a subsidiary network's dimensions can present challenges to the implementation of green management policies. It presents a framework for CEM which includes (1) MNCs' Verbal environmental commitment, (2) green policy Management, which guides standards for operations, (3) actual environmental Performance, reflected in a firm's ecological footprint, and (4) corporate environmental Reputation (VMPR). It then explains how an MNC's key subsidiary network dimensions (density, diversity, and dispersion) create challenges that hinder the relationship between green policy management and actual environmental performance.
It combines content analysis, multiple regression, and post-hoc hierarchical cluster analysis to study US manufacturing MNCs. The findings support a positive significant effect of verbal environmental commitment and green policy management on actual global environmental performance and environmental reputation, as well as a direct impact of verbal environmental commitment on green policy management. Unexpectedly, network dimensions were not found to moderate the relationship between green management policy and GEP.
Abstract:
The two-photon exchange phenomenon is believed to be responsible for the discrepancy between the ratio of proton electric and magnetic form factors measured by the Rosenbluth method and that measured by polarization transfer. This disagreement is about a factor of three at Q2 of 5.6 GeV2. Precise knowledge of the proton form factors is of critical importance in understanding the structure of this nucleon. The theoretical models that estimate the size of the two-photon exchange (TPE) radiative correction are poorly constrained. This factor was found to be directly measurable by taking the ratio of the electron-proton and positron-proton elastic scattering cross sections, as the TPE effect changes sign with the charge of the incident particle. A test run of a modified beamline was conducted with the CEBAF Large Acceptance Spectrometer (CLAS) at the Thomas Jefferson National Accelerator Facility. This test run demonstrated the feasibility of producing a mixed electron/positron beam of good quality. Extensive simulations performed prior to the run were used to reduce the background rate that limits the production luminosity. A 3.3 GeV primary electron beam was used, resulting in an average secondary lepton beam energy of 1 GeV. As a result, elastic scattering data for both lepton types were obtained at scattering angles up to 40 degrees for Q2 up to 1.5 GeV2. The cross section ratio displayed an ε dependence that varied with Q2 at smaller Q2. The magnitude of the average ratio as a function of ε was consistent with previous measurements and with the elastic (Blunden) model to within the experimental uncertainties. Ultimately, higher luminosity is needed to extend the data range to lower ε, where the TPE effect is predicted to be largest.
Abstract:
With hundreds of millions of users reporting locations and embracing mobile technologies, Location Based Services (LBSs) are raising new challenges. In this dissertation, we address three emerging problems in location services, where geolocation data plays a central role. First, to handle the unprecedented growth of generated geolocation data, existing location services rely on geospatial database systems. However, their inability to leverage combined geographical and textual information in analytical queries (e.g. spatial similarity joins) remains an open problem. To address this, we introduce SpsJoin, a framework for computing spatial set-similarity joins. SpsJoin handles combined similarity queries that involve textual and spatial constraints simultaneously. LBSs use this system to tackle different types of problems, such as deduplication, geolocation enhancement and record linkage. We define the spatial set-similarity join problem in a general case and propose an algorithm for its efficient computation. Our solution utilizes parallel computing with MapReduce to handle scalability issues in large geospatial databases. Second, applications that use geolocation data are seldom concerned with ensuring the privacy of participating users. To motivate participation and address privacy concerns, we propose iSafe, a privacy preserving algorithm for computing safety snapshots of co-located mobile devices as well as geosocial network users. iSafe combines geolocation data extracted from crime datasets and geosocial networks such as Yelp. In order to enhance iSafe's ability to compute safety recommendations, even when crime information is incomplete or sparse, we need to identify relationships between Yelp venues and crime indices at their locations. To achieve this, we use SpsJoin on two datasets (Yelp venues and geolocated businesses) to find venues that have not been reviewed and to further compute the crime indices of their locations. 
Our results show a statistically significant dependence between location crime indices and Yelp features. Third, review centered LBSs (e.g., Yelp) are increasingly becoming targets of malicious campaigns that aim to bias the public image of represented businesses. Although Yelp actively attempts to detect and filter fraudulent reviews, our experiments showed that Yelp is still vulnerable. Fraudulent LBS information also impacts the ability of iSafe to provide correct safety values. We take steps toward addressing this problem by proposing SpiDeR, an algorithm that takes advantage of the richness of information available in Yelp to detect abnormal review patterns. We propose a fake venue detection solution that applies SpsJoin on Yelp and U.S. housing datasets. We validate the proposed solutions using ground truth data extracted by our experiments and reviews filtered by Yelp.
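The core predicate of the spatial set-similarity join described above (a pair must pass both a textual similarity threshold and a spatial distance threshold) can be sketched as a single-machine nested loop. The records, thresholds, and distance approximation below are illustrative assumptions; SpsJoin itself distributes this computation with MapReduce to scale to large geospatial databases.

```python
import math

# Hypothetical records: (name tokens, latitude, longitude).
yelp_venues = [
    ({"joes", "coffee"}, 25.770, -80.190),
    ({"sunset", "grill"}, 25.760, -80.200),
]
businesses = [
    ({"joes", "coffee", "shop"}, 25.771, -80.191),
    ({"harbor", "view"}, 25.900, -80.120),
]

def jaccard(a, b):
    """Textual set similarity of two token sets."""
    return len(a & b) / len(a | b)

def dist_km(lat1, lon1, lat2, lon2):
    """Equirectangular distance approximation, adequate at city scale."""
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return 6371.0 * math.hypot(x, y)

def sps_join(left, right, sim_min=0.5, max_km=1.0):
    """Index pairs satisfying BOTH the textual and spatial constraints."""
    return [(i, j)
            for i, (ta, la1, lo1) in enumerate(left)
            for j, (tb, la2, lo2) in enumerate(right)
            if jaccard(ta, tb) >= sim_min
            and dist_km(la1, lo1, la2, lo2) <= max_km]

pairs = sps_join(yelp_venues, businesses)   # -> [(0, 0)] for this data
```

Only the first venue pair passes both constraints here; it is exactly this kind of match that supports the deduplication, record-linkage, and fake-venue detection uses described in the abstract.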
Abstract:
The purpose of this study was to determine the effects of participating in an existing study skills course, developed for use with a general college population, on the study strategies and attitudes of college students with learning disabilities. This study further investigated whether there would be differential effectiveness for segregated and mainstreamed sections of the course. The sample consisted of 42 students with learning disabilities attending a southeastern university. Students were randomly assigned to either a segregated or mainstreamed section of the study skills course. In addition, a control group consisted of students with learning disabilities who received no study skills instruction. All subjects completed the Learning and Study Strategies Inventory (LASSI) before and after the study skills course. The subjects in the segregated group showed significant improvement on six of the 10 scales of the LASSI: Time Management, Concentration, Information Processing, Selecting Main Ideas, Study Aids, and Self Testing. Subjects in the mainstreamed section showed significant improvement on five scales: Anxiety, Selecting Main Ideas, Study Aids, Self Testing, and Test Strategies. The subjects in the control group did not significantly improve on any of the scales. This study showed that college students with learning disabilities improved their study strategies and attitudes by participating in a study skills course designed for a general student population. Further, these students benefited whether they took the course only with other students with learning disabilities or in a mixed group of students with and without learning disabilities. These results have important practical implications in that it appears that colleges can use existing study skills courses without having to develop special courses and schedules of course offerings targeted specifically for students with learning disabilities.
Abstract:
English has been taught as a core and compulsory subject in China for decades. Recently, the demand for English in China has increased dramatically. China now has the world's largest English-learning population. The traditional English-teaching method cannot continue to be the only approach because it merely focuses on reading, grammar, and translation, which cannot meet English learners' and users' needs (i.e., communicative competence and skills in speaking and writing). This study was conducted to investigate whether the Picture-Word Inductive Model (PWIM), a new pedagogical method using pictures and inductive thinking, would benefit English learners in China in terms of potentially higher output in speaking and writing. With the gauge of Cognitive Load Theory (CLT), specifically its redundancy effect, I investigated whether processing words and a picture concurrently would present a cognitive overload for English learners in China. I conducted a mixed methods research study. A quasi-experiment (pretest, intervention for seven weeks, and posttest) was conducted using 234 students in four groups in Lianyungang, China (58 fourth graders and 57 seventh graders as an experimental group with PWIM, and 59 fourth graders and 60 seventh graders as a control group with the traditional method). No significant difference in the effects of PWIM on vocabulary acquisition was found across grade levels. Observations, questionnaires with open-ended questions, and interviews were deployed to answer the three remaining research questions. A few students felt cognitively overloaded when they encountered too many writing samples, too many new words at one time, repeated words, mismatches between words and pictures, and so on. Many students listed and exemplified numerous strengths of PWIM, but a few mentioned weaknesses of PWIM. The students expressed the idea that PWIM had a positive effect on their English learning.
As integrated inferences, qualitative findings were used to explain the quantitative result that there were no significant differences in the effects of the PWIM between the experimental and control groups at either grade level, from four contextual aspects: time constraints on PWIM implementation, teachers' resistance, how PWIM was used, and PWIM being implemented in classrooms of over 55 students.
Abstract:
The purpose of this study was to analyze network performance by observing the effect of varying network size and data link rate on one of the most commonly found network configurations. Computer networks have been growing explosively. Networking is used in every aspect of business, including advertising, production, shipping, planning, billing, and accounting. Communication takes place through networks that form the basis of the transfer of information. The number and type of components may vary from network to network depending on several factors, such as requirements and the actual physical placement of the networks. There is no fixed network size: networks can be very small, consisting of, say, five to six nodes, or very large, consisting of over two thousand nodes. The varying network sizes make it very important to study network performance so as to be able to predict the functioning and suitability of a network. The findings demonstrated that network performance parameters such as global delay, load, router processor utilization, and router processor delay are affected significantly by increases in the size of the network, and that there exists a correlation between the various parameters and the size of the network. These variations depend not only on the magnitude of the change in the actual physical area of the network but also on the data link rate used to connect the various components of the network.
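As a textbook illustration of why the delay and utilization metrics above respond to the data link rate, the sketch below applies the standard M/M/1 mean-delay formula to a single router link at two link rates. This is a generic queueing approximation with illustrative numbers, not the simulation model used in the study.

```python
# Standard M/M/1 approximation for a single router link, illustrating
# how mean delay responds to offered load and data link rate.

def link_delay(arrival_pps, link_bps, packet_bits=8000):
    """Mean packet delay in seconds for an M/M/1 link; None if saturated."""
    service_rate = link_bps / packet_bits       # packets per second
    if arrival_pps >= service_rate:
        return None                             # utilization >= 1: unstable
    return 1.0 / (service_rate - arrival_pps)

# The same 1000 pkt/s offered load on a 10 Mb/s vs. a 100 Mb/s link:
slow = link_delay(1000, 10_000_000)    # utilization 0.80 -> 4 ms
fast = link_delay(1000, 100_000_000)   # utilization 0.08 -> ~0.087 ms
```

The sharp drop in delay at the higher link rate mirrors the study's observation that performance variations depend on the data link rate as well as on network size.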