24 results for automatic affect analysis
in Digital Commons at Florida International University
Abstract:
The FHA program to insure reverse mortgages has brought additional attention to the use of home equity conversion to increase income for the elderly. Using simulation, this study compares the economic consequences of the FHA reverse mortgage with two alternative conversion vehicles: sale of a remainder interest and sale-leaseback. An FHA-insured plan is devised for each vehicle, structured to represent a fair substitute for the FHA mortgage. In addition, the FHA mortgage is adjusted to allow for a 4 percent annual increase in distributions to the homeowner. The viability of each plan for the homeowner, the financial institution, and the FHA is investigated using different assumptions for house appreciation, tax rates, and homeowners' initial ages. For the homeowner, the return of each vehicle is compared with the choice of not employing home equity conversion. The study examines the impact of tax and accounting rules on the selection of alternatives and investigates the sensitivity of the FHA model to some of its assumptions. Although none of the vehicles is Pareto optimal, the study shows that neither the sale of a remainder interest nor the sale-leaseback is a viable alternative vehicle for the homeowner. While each of these vehicles is profitable to the financial institution, the profits are not high enough to transfer benefits to the homeowner and still be workable. The effects of tax rate, house appreciation rate, and homeowner's initial age are surprisingly small. As a general rule, none of these factors materially impacts the decision of either the homeowner or the financial institution. Tax and accounting rules were found to have minimal impact on the selection of vehicles. The sensitivity analysis indicates that none of the variables studied alone is likely to materially affect the FHA's profitability.
Abstract:
Federal transportation legislation in effect since 1991 was examined to determine outcomes in two areas: (1) the effect of organizational and fiscal structures on the implementation of multimodal transportation infrastructure, and (2) the effect of multimodal transportation infrastructure on sustainability. Triangulation of methods was employed through qualitative analysis (including key informant interviews, focus groups, and case studies) as well as quantitative analysis (including one-sample t-tests, regression analysis, and factor analysis). Four hypotheses were directly tested: (1) Regions with consolidated government structures will build more multimodal transportation miles: the results of the qualitative analysis do not lend support, while the quantitative findings support this hypothesis, possibly due to differences in the definitions of agencies/jurisdictions between the two methods. (2) Regions in which more locally dedicated or flexed funding is applied to the transportation system will build a greater number of multimodal transportation miles: both quantitative and qualitative research clearly support this hypothesis. (3) Cooperation and coordination, or, conversely, competition will determine the number of multimodal transportation miles: participants tended to agree that cooperation, coordination, and leadership are imperative to achieving transportation goals and objectives, including targeted multimodal miles, but also stressed the importance of political and financial elements in determining what ultimately will be funded and implemented. (4) The modal outcomes of transportation systems will affect the overall health of a region in terms of sustainability/quality-of-life indicators: both the qualitative and the quantitative analyses provide evidence that they do. This study finds that federal legislation has had an effect on the modal outcomes of transportation infrastructure and that there are links between these modal outcomes and the sustainability of a region. It is recommended that agencies further consider consolidation, strengthen cooperation efforts, and modify fiscal regulations to reflect the problems cited in the qualitative analysis. Limitations of this legislation include, in particular, the inability to measure sustainability; several measures are recommended.
Abstract:
Science professional development, which is fundamental to science education improvement, has been described as weak and fragmentary. The purpose of this study was to investigate teachers' perceptions of informal science professional development to gain an in-depth understanding of the essence of the phenomenon and related science-teaching dispositions. Based on the frameworks of phenomenology, constructivism, and adult learning theory, the focus was on understanding how the phenomenon was experienced within the context of teachers' everyday world. Data were collected from eight middle-school teachers purposefully selected because they had participated in informal programs during Project TRIPS (Teaching Revitalized Through Informal Programs in Science), a collaboration between the Miami-Dade school district, government agencies (including NASA), and non-profit organizations (including Audubon of Florida). In addition, the teachers experienced hands-on labs offered through universities (including the University of Arizona), field sites, and other agencies. The study employed Seidman's (1991) three-interview series to collect the data. Several methods were used to enhance the credibility of the research, including triangulation of the data. The interviews were transcribed, color-coded, and organized into six themes that emerged from the data: (a) internalized content knowledge, (b) correlated hands-on activities, (c) enhanced science-teaching disposition, (d) networking/camaraderie, (e) change of context, and (f) acknowledgment as professionals. The teachers identified supportive elements and constraints related to each theme. The results indicated that informal programs offering experiential learning opportunities strengthened understanding of content knowledge. Teachers implemented hands-on activities that were explicitly correlated to their curriculum. Programs that were conducted in a relaxed context enhanced teachers' science-teaching dispositions. However, a lack of financial and administrative support, perceived safety risks, insufficient reflection time, and unclear itineraries impeded program implementation. The results illustrate how informal educators can use this cohesive model as they develop programs that address the supports and constraints on teachers' science instruction needs. This, in turn, can aid teachers as they strive to provide effective science instruction to students, a notion embedded in current reforms. Ultimately, this can affect how learners develop the ability to make informed science decisions that impact the quality of life on a global scale.
Abstract:
This dissertation addressed two broad problems in international macroeconomics and conflict analysis. The first chapter looked at the behavior of exchange rates and their interaction with industry-level tradable goods prices for three countries: the USA, the UK, and Japan. This question has important monetary policy implications. Here, I computed to what extent changes in exchange rates affected prices of consumer, producer, and export goods, and I also studied the timing of these price changes. My results, based on thirty-four industrial prices for the USA, UK, and Japan, supported the view that changes in exchange rates significantly affect prices of industrial and consumer goods. They also provided insight into the underlying economic process that led to changes in relative prices. In the second chapter, I explored the predictability of future inflation by incorporating shocks to exchange rates and clearly specified the transmission mechanisms that link exchange rates to industry-level consumer and producer prices. Employing a variety of linear and state-of-the-art nonlinear models, I also predicted growth rates of future prices. Comparing levels of inflation obtained from the above approaches showed the superiority of the structural model incorporating the exchange rate pass-through effect. The third chapter investigated the economic motives for conflict, manifested by rebellion and civil war, for seventeen Latin American countries. Based on the analytical framework of Garfinkel, Skaperdas and Syropoulos (2004), I employed ordinal regressions and Markov switching for a panel of seventeen countries to identify the trade and openness factors responsible for conflict occurrence and intensity. The results suggested that increased trade openness reduced high-intensity domestic conflicts, but that overdependence on agricultural exports, along with a lack of income-earning opportunities, led to more conflicts. Thereafter, using the Cox proportional hazards model, I studied conflict duration and found that over-reliance on agricultural exports explained a major part of the length of conflicts in addition to various socio-political factors.
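Note: the conflict-duration analysis above relies on a Cox proportional hazards model. As a minimal sketch only (the panel below is invented, with hypothetical covariates such as agricultural-export share and trade openness, not the dissertation's data), the fit might look like this with the lifelines package:

```python
# Sketch: Cox proportional hazards model of conflict duration (hypothetical data).
import pandas as pd
from lifelines import CoxPHFitter

conflicts = pd.DataFrame({
    "duration_months":   [14, 36, 8, 60, 22, 5, 48, 30],
    "ended":             [1, 1, 1, 0, 1, 1, 0, 1],   # 0 = ongoing at last observation (censored)
    "agri_export_share": [0.55, 0.70, 0.20, 0.80, 0.40, 0.15, 0.65, 0.50],
    "trade_openness":    [0.30, 0.25, 0.60, 0.20, 0.45, 0.70, 0.35, 0.40],
})

cph = CoxPHFitter()
cph.fit(conflicts, duration_col="duration_months", event_col="ended")
cph.print_summary()   # a hazard ratio below 1 implies a longer expected conflict
```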
Abstract:
The rate of fatal crashes in Florida has remained significantly higher than the national average for the last several years. The 2003 statistics from the National Highway Traffic Safety Administration (NHTSA), the latest available, show a fatality rate in Florida of 1.71 per 100 million vehicle-miles traveled compared to the national average of 1.48 per 100 million vehicle-miles traveled. The objective of this research is to better understand the driver, environmental, and roadway factors that affect injury severity in Florida crashes. In this research, the ordered logit model was used to develop six injury severity models: single-vehicle and two-vehicle crashes on urban freeways, single-vehicle and two-vehicle crashes on urban principal arterials, and two-vehicle crashes at urban signalized and unsignalized intersections. The data used in this research included all crashes that occurred on the state highway system from 2001 to 2003 in the Southeast Florida region, which includes Miami-Dade, Broward, and Palm Beach counties. The results of the analysis indicate that the age group and gender of the driver at fault were significant factors in injury severity risk across all models. The greatest risk of severe injury was observed for the age groups 55 to 65 and 66 and older. A positive association between injury severity and the race of the driver at fault was also found. An at-fault driver of Hispanic origin was associated with a higher risk of severe injury in both freeway models and in the two-vehicle crash model on arterial roads. A higher risk of severe injury was also found when an African-American was the at-fault driver in two-vehicle crashes on freeways. In addition, arterial class was found to be positively associated with a higher risk of severe crashes: six-lane divided arterials exhibited the highest injury severity risk of all arterial classes, while the lowest risk was found for one-way roads. Alcohol involvement by the driver at fault was also found to be a significant risk factor for severe injury in the single-vehicle crash model on freeways.
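Note: as an illustrative sketch of the ordered logit approach described above (hypothetical variables and data, not the dissertation's crash records), a proportional-odds model can be fit with statsmodels:

```python
# Sketch: ordered logit model of crash injury severity (hypothetical data).
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical crash records; severity is ordinal (0 = no injury ... 4 = fatal).
crashes = pd.DataFrame({
    "severity": [0, 1, 2, 4, 3, 1, 0, 2, 1, 3, 0, 2],
    "age_66up": [0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0],   # at-fault driver 66 or older
    "male":     [1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0],   # at-fault driver gender
    "alcohol":  [0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0],   # alcohol involvement
})

model = OrderedModel(
    crashes["severity"],
    crashes[["age_66up", "male", "alcohol"]],
    distr="logit",                      # ordered logit (proportional odds)
)
result = model.fit(method="bfgs", disp=False)
print(result.summary())                 # coefficients shift the odds toward higher severity
```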
Abstract:
This dissertation is a study of customer relationship management theory and practice. Customer Relationship Management (CRM) is a business strategy whereby companies build strong relationships with existing and prospective customers with the goal of increasing organizational profitability. It is also a learning process involving the management of change in processes, people, and technology. CRM implementation and its ramifications are not yet completely understood, as evidenced by the high number of failed CRM implementations in organizations and the resulting disappointments. The goal of this dissertation is to study emerging issues and trends in CRM, including the effect of computer software and the accompanying new management processes on organizations, and the dynamics of the alignment of marketing, sales and services, and all other functions responsible for delivering customers a satisfying experience. In order to understand CRM better, a content analysis of more than a hundred articles and documents from academic and industry sources was undertaken, using a new methodological twist on the traditional method. An Internet domain (http://crm.fiu.edu) was created for this research, and an initial one hundred plus abstracts of articles and documents were uploaded to it to form a knowledge database. Once the database was formed, a search engine was developed to enable searching the abstracts with relevant CRM keywords in order to reveal the emergent dominant CRM topics. The ultimate aim of this website is to serve as an information hub for CRM research, as well as a search engine where interested parties can enter CRM-relevant keywords or phrases to access abstracts and can submit abstracts to enrich the knowledge hub. Research questions were investigated and answered by content-analyzing the interpretation and discussion of dominant CRM topics and then amalgamating the findings. This was supported by comparisons within and across individual, paired, and sets-of-three occurrences of CRM keywords in the article abstracts. Results show that both academia and industry lack the holistic thinking and discussion of CRM required to understand how the people, processes, and technology in CRM impact each other to affect successful implementation. Industry has to come to grips with CRM and holistically understand how these important dimensions affect each other. Only then will organizational learning occur and, over time, result in superior processes leading to strong, profitable customer relationships and a hard-to-imitate competitive advantage.
Abstract:
An Automatic Vehicle Location (AVL) system is a computer-based vehicle tracking system that is capable of determining a vehicle's location in real time. As a major technology of the Advanced Public Transportation System (APTS), AVL systems have been widely deployed by transit agencies for purposes such as real-time operation monitoring, computer-aided dispatching, and arrival time prediction. AVL systems make available a large amount of transit performance data that are valuable for transit performance management and planning purposes. However, the difficulty of extracting useful information from the huge spatial-temporal database has hindered off-line applications of the AVL data. In this study, a data mining process, including data integration, cluster analysis, and multiple regression, is proposed. The AVL-generated data are first integrated into a Geographic Information System (GIS) platform. A model-based clustering method is employed to investigate the spatial and temporal patterns of transit travel speeds, which may be easily translated into travel times. The transit speed variations along route segments are identified. Transit service periods such as morning peak, mid-day, afternoon peak, and evening are determined based on analyses of transit travel speed variations for different times of day. The seasonal patterns of transit performance are investigated using analysis of variance (ANOVA). Travel speed models based on the clustered time-of-day intervals are developed using the factors identified as having significant effects on speed for different time-of-day periods. Transit performance was found to vary across seasons and time-of-day periods, and the geographic location of a transit route segment also plays a role in this variation. The results of this research indicate that advanced data mining techniques have good potential for providing automated means of assisting transit agencies in service planning, scheduling, and operations control.
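Note: the model-based clustering step described above could be sketched as below, using a Gaussian mixture with BIC-based model selection to group hourly link speeds into time-of-day periods (the speeds and hours here are simulated for illustration, not AVL data):

```python
# Sketch: cluster hourly transit speeds into time-of-day periods (simulated data).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
hours = np.arange(6, 23)                          # service hours of the day
base = np.where((hours >= 7) & (hours <= 9), 12,  # slower AM peak
       np.where((hours >= 16) & (hours <= 18), 13, 18))
speeds = base + rng.normal(0, 1.0, size=hours.size)
X = np.column_stack([hours, speeds])

# Model-based clustering: pick the number of components by BIC.
candidates = [GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(2, 6)]
best = min(candidates, key=lambda m: m.bic(X))
for hour, speed, period in zip(hours, speeds, best.predict(X)):
    print(f"hour {hour:2d}  speed {speed:5.1f} mph  period {period}")
```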
Abstract:
The purpose of this qualitative study was to explore the academic and nonacademic experiences of self-identified first-generation college students who left college before their second year. The study sought to find how these experiences might have affected the students' decision to depart. The case study method was used to investigate these college students, who attended Florida International University. Semi-structured interviews were conducted with six former students who identified themselves as first-generation college students. The narrative data from the interviews were transcribed, coded, and analyzed. Analysis was informed by Pascarella, Pierson, Wolniak, and Terenzini's (2004) theoretical framework of important college academic and nonacademic experiences. An audit trail was kept, and the data were triangulated by using multiple sources to establish certain findings. The most critical tool for enhancing trustworthiness was the use of member checking. I also received ongoing feedback from my major professor and committee throughout the dissertation process. The participants reported the following academic experiences: (a) patterns of coursework; (b) course-related interactions with peers; (c) relationships with faculty; (d) class size; (e) academic advisement; (f) orientation and peer advisors; and (g) financial aid. The participants reported the following nonacademic experiences: (a) on- or off-campus employment; (b) on- or off-campus residence; (c) participation in extracurricular activities; (d) non-course-related peer relationships; (e) commuting and parking; and (f) FIU as an HSI. Isolation and poor fit with the university were the most prevalent reasons for departure. The reported experiences of these first-generation college students shed light on the experiences that contributed to their departure. University administrators should give additional attention to these stories in an effort to improve retention strategies for this population. All but two of the participants went on to enroll in other institutions and reported good experiences with their new institutions. Recommendations are provided for continued research concerning how best to meet the needs of college students like the participants: students who have not learned from their parents about higher-education financial aid, academic advisement, and orientation.
Abstract:
With advances in science and technology, computing and business intelligence (BI) systems are steadily becoming more complex, with an increasing variety of heterogeneous software and hardware components. They are thus becoming progressively more difficult to monitor, manage, and maintain. Traditional approaches to system management have largely relied on domain experts through a knowledge acquisition process that translates domain knowledge into operating rules and policies. This is widely acknowledged as a cumbersome, labor-intensive, and error-prone process that also struggles to keep up with rapidly changing environments. In addition, many traditional business systems deliver primarily pre-defined historical metrics for long-term strategic or mid-term tactical analysis, and lack the flexibility to support evolving metrics or data collection for real-time operational analysis. There is thus a pressing need for automatic and efficient approaches to monitor and manage complex computing and BI systems. To realize the goal of autonomic management and enable self-management capabilities, we propose to mine the historical log data generated by computing and BI systems and automatically extract actionable patterns from these data. This dissertation focuses on the development of data mining techniques to extract actionable patterns from various types of log data in computing and BI systems. Four key problems are studied: log data categorization and event summarization, leading indicator identification, pattern prioritization by exploring link structures, and tensor modeling of three-way log data. Case studies and comprehensive experiments on real application scenarios and datasets are conducted to show the effectiveness of the proposed approaches.
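Note: the log categorization and event summarization problem mentioned above can be illustrated with a toy sketch; the log format, keyword rules, and 30-minute windows below are assumptions made for illustration only:

```python
# Sketch: categorize raw log lines and summarize event counts per time window.
from collections import Counter
from datetime import datetime

raw_logs = [
    "2024-01-01 10:00:03 ERROR disk quota exceeded on /data",
    "2024-01-01 10:02:41 WARN  slow query detected (812 ms)",
    "2024-01-01 10:14:09 ERROR disk quota exceeded on /data",
    "2024-01-01 10:31:55 INFO  nightly report generated",
]

def categorize(message: str) -> str:
    """Map a free-text log message to a coarse event category via keyword rules."""
    if "disk" in message:
        return "storage"
    if "query" in message:
        return "database"
    return "other"

# Count events per (30-minute window, category) pair.
summary = Counter()
for line in raw_logs:
    ts = datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")
    window = ts.replace(minute=(ts.minute // 30) * 30, second=0)
    summary[(window.isoformat(), categorize(line[20:]))] += 1

for (window, category), count in sorted(summary.items()):
    print(window, category, count)
```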
Abstract:
This study explained the diversity of corporate financial practices in two nations. Existing studies have emphasized the reliance on equity finance in U.S. firms and on bank loans in Japanese firms; in fact, patterns of corporate finance were much more complex. Financial institutions, which were created by national economic policy and regulation, affected corporate financial practices, but corporate financial practices often differed from what policymakers expected. Differences in corporate financial practices between nations also reflected differences in the mixture of industries in each nation. Many factors, such as the amount of fixed capital, the process of production, the level of risk, the degree of innovation, and the importance of the industry in the national economy, affected corporate financial practices. In addition, corporate financial practices within each nation differed from firm to firm owing to managers' considerations about stock ownership, which would affect their power of control; corporate finance was closely related to control over management through ownership. To explain these complexities of corporate financial practices, the study linked corporate finance with the development of financial institutions in the United States and in Japan. While financial institutions affected corporate financial practices, the responses of firms to financial institutions and opportunities were diverse. The study also attempted to grasp variations in corporate financial practices by examining companies in three sectors: railroads, public utilities, and manufacturing. Finally, the study examined the structure of firm ownership. Contrary to the widely held belief that U.S. firms distributed securities more widely to the public than did Japanese firms, many large American firms remained closely held, while some Japanese counterparts built publicly held corporations.
Abstract:
Since the 1990s, scholars have paid special attention to public management's role in theory and research under the assumption that effective management is one of the primary means for achieving superior performance. To some extent, this was influenced by popular business writings of the 1980s as well as the reinventing-government literature of the 1990s. A number of case studies, but only limited quantitative research, have been published showing that management matters in the performance of public organizations. My study used quantitative techniques to examine whether management capacity increased organizational performance. The specific research problem analyzed was whether significant differences existed between high- and average-performing public housing agencies on select criteria identified in the Government Performance Project (GPP) management capacity model, and whether this model could predict outcome performance measures in a statistically significant manner while controlling for exogenous influences. My model included two of the four GPP management subsystems (human resources and information technology), integration and alignment of subsystems, and an overall managing-for-results framework. It also included environmental and client control variables that were hypothesized to affect performance independent of management action. Descriptive results of survey responses showed that high-performing agencies scored better on most high-performance dimensions of the individual criteria, suggesting support for the model; however, quantitative analysis found limited statistically significant differences between high and average performers and limited predictive power of the model. My analysis led to the following major conclusions: past performance was the strongest predictor of present performance; high unionization hurt performance; and budget-related criteria mattered more for high performance than other model factors. As to the specific research question, management capacity may be necessary, but it is not sufficient, to increase performance. The research suggests that managers may benefit from implementing best practices identified through the GPP model. The usefulness of the model could be improved by adding direct service delivery to it, which may also improve its predictive power. Finally, abundant tested concepts and tools are available to practitioners for improving management subsystem support of direct service delivery.
Abstract:
Genetic diversity can be used to describe patterns of gene flow within and between local and regional populations. The Florida Everglades experiences seasonal fluctuations in water level that can influence local population extinction and recolonization dynamics. In addition, this expansive wetland has been divided into water management regions by canals and levees. These combined factors can affect the genetic diversity and population structure of aquatic organisms in the Everglades. We analyzed allelic variation at six DNA microsatellite loci to examine the population structure of spotted sunfish (Lepomis punctatus) from the Everglades. We tested the hypothesis that recurrent local extinction and recent regional divisions have had an effect on patterns of genetic diversity. No marked differences were observed in comparisons of the heterozygosity values of sites within and among water management units. No evidence of isolation by distance was detected in the correlation between gene flow and distance among subpopulations. Confidence intervals for the estimated F-statistic values crossed zero, indicating that there was no significant genetic difference between subpopulations within a region or between regions. Notably, the genetic variation among subpopulations within a water conservation area was greater than the variation among regions (FSP > FPT). These data indicate that the spatial scale of recolonization following local extinction appears to be most important within water management units.
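Note: the among-subpopulation versus among-region comparison above rests on F-statistics; as a toy illustration only (invented allele frequencies and the textbook formula F_ST = (H_T - H_S) / H_T, not the estimator used in the study):

```python
# Toy F_ST from allele frequencies at one biallelic locus in two subpopulations.
def expected_het(p: float) -> float:
    """Expected heterozygosity for a biallelic locus with allele frequency p."""
    return 2 * p * (1 - p)

# Hypothetical frequencies of one allele in two spotted sunfish subpopulations.
p1, p2 = 0.60, 0.45

h_s = (expected_het(p1) + expected_het(p2)) / 2  # mean within-subpopulation heterozygosity
h_t = expected_het((p1 + p2) / 2)                # heterozygosity of the pooled population

f_st = (h_t - h_s) / h_t
print(f"F_ST = {f_st:.4f}")                      # values near zero imply little differentiation
```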
Abstract:
In their dialogue entitled - The Food Service Industry Environment: Market Volatility Analysis - by Alex F. De Noble, Assistant Professor of Management, San Diego State University, and Michael D. Olsen, Associate Professor and Director, Division of Hotel, Restaurant & Institutional Management at Virginia Polytechnic Institute and State University, De Noble and Olsen preface the discussion by saying: “Hospitality executives, as a whole, do not believe they exist in a volatile environment and spend little time or effort in assessing how current and future activity in the environment will affect their success or failure. The authors highlight potential differences that may exist between executives' perceptions and objective indicators of environmental volatility within the hospitality industry and suggest that executives change these perceptions by incorporating the assumption of a much more dynamic environment into their future strategic planning efforts. Objective, empirical evidence of the dynamic nature of the hospitality environment is presented and compared to several studies pertaining to environmental perceptions of the industry.” That weighty thesis statement presumes that hospitality executives/managers do not fully comprehend the environment in which they operate. The authors provide a contrast, which conventional wisdom would seem to support and satisfy. “Broadly speaking, the operating environment of an organization is represented by its task domain,” say the authors. “This task domain consists of such elements as a firm's customers, suppliers, competitors, and regulatory groups.” These are dynamic actors and the underpinnings of change, say the authors by way of citation. “The most difficult aspect for management in this regard tends to be the development of a proper definition of the environment of their particular firm. Being able to precisely define who the customers, competitors, suppliers, and regulatory groups are within the environment of the firm is no easy task, yet is imperative if proper planning is to occur,” De Noble and Olsen further contribute to support their thesis statement. The article is bloated - and that's not necessarily a bad thing - with tables, both survey-based and empirically driven, that illustrate market volatility. One such table is the Bates and Eldredge outline, Table 6 in the article. “This comprehensive outline…should prove to be useful to most executives in expanding their perception of the environment of their firm,” say De Noble and Olsen. “It is, however, only a suggested outline,” they advise. “…risk should be incorporated into every investment decision, especially in a volatile environment,” say the authors. De Noble and Olsen close with an intriguing formula to gauge volatility in an environment.
Abstract:
The need for elemental analysis of biological matrices such as bone, teeth, and plant matter for sourcing purposes has emerged within forensic and geochemical laboratories. Trace elemental analyses for the comparison of materials such as glass by inductively coupled plasma mass spectrometry (ICP-MS) and laser ablation ICP-MS have been shown to offer a high degree of discrimination between different manufacturing sources. Unit-resolution ICP-MS instruments may suffer from polyatomic interferences, including 40Ar16O+, 40Ar16O1H+, and 40Ca16O+, that affect iron measurement at trace levels. Iron is an important element in the analysis of glass and is also of interest for the analysis of several biological matrices. A comparison of the analytical performance of two different ICP-MS systems for iron analysis in glass is presented, determining the method detection limits (MDLs), accuracy, and precision of the measurement. Acid digestion and laser ablation methods are also compared. Iron polyatomic interferences were reduced or resolved by using a dynamic reaction cell and high-resolution ICP-MS. MDLs as low as 0.03 μg g-1 for laser ablation and 0.14 μg g-1 for solution-based analyses were achieved. The use of helium as a carrier gas improved the detection limits of both iron isotopes (56Fe and 57Fe) in medium resolution for the HR-ICP-MS and with a dynamic reaction cell (DRC) coupled to a quadrupole ICP-MS system. The development and application of robust analytical methods for the quantification of trace elements in biological matrices has led to a better understanding of the potential utility of these measurements in forensic chemical analyses. Standard reference materials (SRMs) were used in the development of an analytical method using HR-ICP-MS and LA-HR-ICP-MS that was subsequently applied to the analysis of real samples. Bone, teeth, and ashed marijuana samples were analyzed with the developed method. Elemental analysis of bone samples from 12 different individuals provided discrimination between individuals when femur and humerus bones were considered separately. Discrimination of 14 teeth samples based on elemental composition was achieved, with the exception of one case where samples from the same individual were not associated with each other. Discrimination of 49 different ashed plant (cannabis) samples was also achieved using the developed method.
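Note: the method detection limits quoted above are commonly derived from replicate low-level measurements; one standard convention (assumed here for illustration, not necessarily the exact procedure used in this work) is MDL = t(n-1, 0.99) x s:

```python
# Sketch: method detection limit (MDL) from replicate low-concentration spikes.
import statistics
from scipy.stats import t

# Hypothetical replicate Fe measurements (ug/g) of a low-level spiked blank.
replicates = [0.031, 0.028, 0.035, 0.030, 0.033, 0.029, 0.032]

s = statistics.stdev(replicates)        # sample standard deviation of the replicates
n = len(replicates)
t_crit = t.ppf(0.99, df=n - 1)          # one-sided 99% Student's t critical value
mdl = t_crit * s
print(f"MDL = {mdl:.3f} ug/g from {n} replicates (s = {s:.4f})")
```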
Abstract:
Voice communication systems such as Voice-over-IP (VoIP), Public Switched Telephone Networks, and Mobile Telephone Networks are an integral means of human tele-interaction. These systems pose distinctive challenges due to their unique characteristics, such as low volume, burstiness, and stringent delay/loss requirements across heterogeneous underlying network technologies. Effective quality evaluation methodologies are important for system development and refinement, particularly those adopting user-feedback-based measurement. Presently, most evaluation models are system-centric (Quality of Service, or QoS-based), which prompted us to explore a user-centric (Quality of Experience, or QoE-based) approach as a step towards a human-centric paradigm of system design. We investigate an affect-based QoE evaluation framework that attempts to capture users' perception while they are engaged in voice communication. Our modular approach consists of feature extraction from multiple information sources, including various affective cues, and different classification procedures such as Support Vector Machines (SVM) and k-Nearest Neighbors (kNN). The experimental study is illustrated in depth with a detailed analysis of results. The evidence collected demonstrates the potential feasibility of our approach for QoE evaluation and suggests that human affective attributes should be considered in modeling user experience.
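Note: the classification stage of the framework described above (SVM and kNN over affect-derived features) might be sketched as follows; the feature names, labels, and synthetic data are placeholders, not the study's actual feature set:

```python
# Sketch: classify per-call affect features into QoE classes with SVM and kNN.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Hypothetical affective features per call: [mean pitch, speech rate, arousal score].
X = rng.normal(size=(200, 3))
# Hypothetical QoE labels derived from the features: 0 = poor, 1 = acceptable, 2 = good.
y = (X[:, 2] + 0.3 * X[:, 0] > 0).astype(int) + (X[:, 2] > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [
    ("SVM", make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))),
    ("kNN", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))),
]:
    clf.fit(X_tr, y_tr)
    print(name, "test accuracy:", round(clf.score(X_te, y_te), 3))
```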