12 results for real world context

in Digital Commons at Florida International University


Relevance:

90.00%

Publisher:

Abstract:

Social responsibility (SR) is becoming an increasingly significant component of many firms’ strategic planning decisions. Research has shown that consumers tend to reward socially responsible behavior. However, there has been little testing of the construct in the hospitality industry. Additionally, when other important variables that influence consumer brand loyalty are considered, will brand social responsibility image (BSRI) still play a significant role? This study investigates the importance of SR and its impact on brand loyalty, relative to product quality and service quality, in the quick-service restaurant industry. The authors were also interested in learning whether BSRI influenced consumers' perceptions of product and service quality. It was found that BSRI had a positive impact on brand loyalty, product quality, and service quality. However, product quality was a significantly stronger predictor of brand loyalty than BSRI. Whereas the vast majority of SR studies have utilized scenario analysis of hypothetical firms, this study utilizes consumers' perceptions of a real-world firm.

Relevance:

90.00%

Publisher:

Abstract:

Blotto in the Lifeboat is a book of poems that investigates natural processes and the idiosyncrasies of human societies. Ranging from the absurd to the scientific in tone, the poems in Blotto in the Lifeboat situate themselves on the blurry line between fact and imagination, employing a style that Thomas Lux describes as “imaginative realism.” The middle of its three sections consists solely of the long poem “A Compendium of the True and Wondrous,” which collages remarkable facts and anecdotes to highlight the strange realities of the world and the rapidity of change. The first and third sections contain shorter, narrative poems in which the surreal or comic is often employed. The language of the poems in Blotto in the Lifeboat reflects a similar desire to affix the fantastic to the familiar. Metaphors in the tradition of Elizabeth Bishop and Charles Simic rely on wild leaps of imagination to illuminate the real world.

Relevance:

90.00%

Publisher:

Abstract:

As traffic congestion continues to worsen in large urban areas, solutions are urgently sought. However, transportation planning models, which estimate traffic volumes on transportation network links, are often unable to realistically consider travel time delays at intersections. Introducing signal controls into models often results in significant and unstable changes in network attributes, which, in turn, destabilizes the models. Ignoring the effect of delays at intersections makes the model output inaccurate and unable to predict travel time. To represent traffic conditions in a network more accurately, planning models should be capable of arriving at a network solution based on travel costs that are consistent with the intersection delays due to signal controls. This research attempts to achieve this goal by optimizing signal controls and estimating intersection delays accordingly, which are then used in traffic assignment. Simultaneous optimization of traffic routing and signal controls has not been accomplished in real-world applications of traffic assignment. To this end, a delay model dealing with five major types of intersections has been developed using artificial neural networks (ANNs). An ANN architecture consists of interconnected artificial neurons. The architecture may either be used to gain an understanding of biological neural networks, or to solve artificial intelligence problems without necessarily creating a model of a real biological system. The ANN delay model has been trained using extensive simulations based on TRANSYT-7F signal optimizations. The delay estimates produced by the ANN delay model have percentage root-mean-squared errors (%RMSE) of less than 25.6%, which is satisfactory for planning purposes. Larger prediction errors are typically associated with severely oversaturated conditions. A combined system has also been developed that includes the ANN delay estimation model and a user-equilibrium (UE) traffic assignment model. The combined system employs the Frank-Wolfe method to achieve a convergent solution. Because the ANN delay model provides no derivatives of the delay function, a Mesh Adaptive Direct Search (MADS) method is applied to assist in and expedite the iterative process of the Frank-Wolfe method. The performance of the combined system confirms that convergence of the solution is achieved, although a global optimum may not be guaranteed.
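
The structure of the combined delay-estimation and assignment loop can be illustrated with a minimal sketch. The toy network, the BPR-shaped stand-in for the ANN delay estimator, and the grid-search step size (standing in for MADS) are illustrative assumptions only, not the dissertation's data, trained model, or algorithmic details:

```python
import heapq

# Toy network: two routes from node 0 to node 2 (via node 1, or directly).
links = {                     # link_id: (from_node, to_node, free_flow_time, capacity)
    "a": (0, 1, 10.0, 400.0),
    "b": (1, 2, 10.0, 400.0),
    "c": (0, 2, 25.0, 600.0),
}
demand = 500.0                # trips from node 0 to node 2

def link_delay(link_id, flow):
    """Black-box travel time estimator (stand-in for the trained ANN delay model)."""
    _, _, t0, cap = links[link_id]
    return t0 * (1.0 + 0.15 * (flow / cap) ** 4)

def shortest_path(times):
    """Dijkstra on current link times; returns the link IDs of the best 0 -> 2 path."""
    adj = {}
    for lid, (u, v, _t0, _cap) in links.items():
        adj.setdefault(u, []).append((v, lid))
    heap, best = [(0.0, 0, [])], {}
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == 2:
            return path
        for nxt, lid in adj.get(node, []):
            c = cost + times[lid]
            if nxt not in best or c < best[nxt]:
                best[nxt] = c
                heapq.heappush(heap, (c, nxt, path + [lid]))
    return []

flows = {lid: 0.0 for lid in links}
for _ in range(50):                       # Frank-Wolfe iterations
    times = {lid: link_delay(lid, flows[lid]) for lid in links}
    aon = {lid: 0.0 for lid in links}     # all-or-nothing assignment onto the shortest path
    for lid in shortest_path(times):
        aon[lid] = demand
    # Derivative-free step-size search (a crude stand-in for MADS, since the
    # black-box delay model provides no gradients); total travel cost is used
    # as a simple surrogate objective for the line search.
    def total_cost(step):
        mix = {l: flows[l] + step * (aon[l] - flows[l]) for l in links}
        return sum(link_delay(l, mix[l]) * mix[l] for l in links)
    step = min((i / 20.0 for i in range(1, 21)), key=total_cost)
    flows = {l: flows[l] + step * (aon[l] - flows[l]) for l in links}

print({l: round(flows[l], 1) for l in links})
```

In the dissertation's system, the ANN-based estimator would take the place of link_delay and a MADS search would replace the grid search, while the flow-averaging structure of the Frank-Wolfe iterations stays the same.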

Relevance:

90.00%

Publisher:

Abstract:

Moving objects database systems are the most challenging sub-category of spatio-temporal database systems. A database system that updates the location information of GPS-equipped moving vehicles in real time has to meet even stricter requirements. Currently existing data storage models and indexing mechanisms work well only when the number of moving objects in the system is relatively small. This dissertation research aimed at the real-time tracking and history retrieval of massive numbers of vehicles moving on road networks. A total solution has been provided for the real-time update of the vehicles’ location and motion information, range queries on current and historical data, and prediction of vehicles’ movement in the near future. To achieve these goals, a new approach called Segmented Time Associated to Partitioned Space (STAPS) was first proposed in this dissertation for building and manipulating the indexing structures for moving objects databases. Applying the STAPS approach, an indexing structure that associates a time interval tree with each road segment was developed for real-time database systems of vehicles moving on road networks. The indexing structure uses affordable storage to support real-time data updates and efficient query processing. The data update and query processing performance it provides is consistent, without restrictions such as a time window or an assumption of linear moving trajectories. An application system design based on a distributed system architecture with centralized organization was developed to maximally support the proposed data and indexing structures. The suggested system architecture is highly scalable and flexible. Finally, based on a real-world application model of vehicles moving region-wide, the main issues in implementing such a system were addressed.
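
The core indexing idea, keeping occupancy time intervals per road segment so that current and historical range queries reduce to interval overlap tests, can be sketched as follows. A plain per-segment list stands in for the balanced interval tree, and the class, method names, and segment IDs are illustrative assumptions rather than the dissertation's actual STAPS structures:

```python
from collections import defaultdict

class SegmentIndex:
    def __init__(self):
        # segment_id -> list of [t_enter, t_exit, vehicle_id]; t_exit is None while
        # the vehicle is still on the segment (supports "current position" queries).
        self.intervals = defaultdict(list)

    def enter(self, segment_id, vehicle_id, t):
        self.intervals[segment_id].append([t, None, vehicle_id])

    def leave(self, segment_id, vehicle_id, t):
        for rec in self.intervals[segment_id]:
            if rec[2] == vehicle_id and rec[1] is None:
                rec[1] = t
                return

    def vehicles_on(self, segment_id, t_from, t_to):
        """History/range query: vehicles present on the segment during [t_from, t_to]."""
        hits = []
        for t_enter, t_exit, vid in self.intervals[segment_id]:
            if t_enter <= t_to and (t_exit is None or t_exit >= t_from):
                hits.append(vid)
        return hits

idx = SegmentIndex()
idx.enter("I-95:seg42", "car-1", t=100)
idx.leave("I-95:seg42", "car-1", t=160)
idx.enter("I-95:seg42", "car-2", t=150)
print(idx.vehicles_on("I-95:seg42", 155, 200))   # -> ['car-1', 'car-2']
```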

Relevance:

90.00%

Publisher:

Abstract:

This dissertation develops a new mathematical approach that overcomes the effect of a data processing phenomenon known as "histogram binning" inherent to flow cytometry data. A real-time procedure is introduced to prove the effectiveness and fast implementation of such an approach on real-world data. The histogram binning effect is a dilemma posed by two seemingly antagonistic developments: (1) flow cytometry data in its histogram form is extended in its dynamic range to improve its analysis and interpretation, and (2) the inevitable dynamic range extension introduces an unwelcome side effect, the binning effect, which skews the statistics of the data, undermining as a consequence the accuracy of the analysis and the eventual interpretation of the data. Researchers in the field contended with this dilemma for many years, resorting either to hardware approaches, which are rather costly and have inherent calibration and noise effects, or to software techniques based on filtering out the binning effect, which fail to preserve the statistical content of the original data. The mathematical approach introduced in this dissertation is so appealing that a patent application has been filed. The contribution of this dissertation is an incremental scientific innovation based on a mathematical framework that will allow researchers in the field of flow cytometry to improve the interpretation of data, knowing that its statistical meaning has been faithfully preserved for optimized analysis. Furthermore, with the same mathematical foundation, proof of the origin of this inherent artifact is provided. These results are unique in that new mathematical derivations are established to define and solve the critical problem of the binning effect faced at the experimental assessment level, providing a data platform that preserves its statistical content. In addition, a novel method for accumulating the log-transformed data was developed. This new method uses the properties of the transformation of statistical distributions to accumulate the output histogram in a non-integer, multi-channel fashion. Although the mathematics of this new mapping technique seems intricate, the concise nature of the derivations allows for an implementation procedure that lends itself to a real-time implementation using lookup tables, a task that is also introduced in this dissertation.
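
A minimal sketch of the non-integer, multi-channel accumulation idea is given below: each raw value is mapped through a precomputed lookup table to a fractional log-scale channel position, and its unit count is split between the two neighboring channels so that hard binning does not skew the statistics. The channel counts, dynamic range, and linear splitting rule are illustrative assumptions, not the dissertation's actual derivation:

```python
import math

ADC_LEVELS = 1024      # raw linear channels from the instrument (assumed)
LOG_CHANNELS = 256     # channels of the log-scale output histogram (assumed)
DECADES = 4.0          # dynamic range, in decades, covered by the log display (assumed)

# Precomputed lookup table: raw ADC value -> fractional log-scale channel position.
lut = []
for adc in range(ADC_LEVELS):
    frac = max(adc, 1) / ADC_LEVELS
    chan = (LOG_CHANNELS / DECADES) * (math.log10(frac) + DECADES)
    lut.append(min(max(chan, 0.0), LOG_CHANNELS - 1e-9))

def accumulate(raw_events):
    """Accumulate events into the log-scale histogram with fractional channel splitting."""
    hist = [0.0] * (LOG_CHANNELS + 1)
    for adc in raw_events:
        pos = lut[adc]
        lo = int(pos)
        w_hi = pos - lo            # fractional part of the channel position
        hist[lo] += 1.0 - w_hi     # share of the count for the lower channel
        hist[lo + 1] += w_hi       # share of the count for the upper channel
    return hist

hist = accumulate([3, 50, 51, 700, 1023])
print(sum(hist))                   # total event count is preserved: 5.0
```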

Relevance:

90.00%

Publisher:

Abstract:

Disk drives are the bottleneck in the processing of the large amounts of data used in almost all common applications. File systems attempt to reduce this bottleneck by storing data sequentially on the disk drives, thereby reducing access latencies. Although this strategy is useful when data is retrieved sequentially, the access patterns in real-world workloads are not necessarily sequential, and this mismatch results in storage I/O performance degradation. This thesis demonstrates that one way to improve storage performance is to reorganize data on disk drives in the same way in which it is mostly accessed. We identify two classes of accesses: static, where access patterns do not change over the lifetime of the data, and dynamic, where access patterns frequently change over short durations of time, and we propose, implement, and evaluate layout strategies for each of these. Our strategies are implemented in such a way that they can be seamlessly integrated into or removed from the system as desired. We evaluate our layout strategies for static policies using tree-structured XML data, where accesses to the storage device are mostly of two kinds: parent-to-child or child-to-sibling. Our results show that for a specific class of deep-focused queries, the existing file system layout policy performs better by 5-54X. For non-deep-focused queries, our native layout mechanism shows an improvement of 3-127X. To improve the performance of dynamic access patterns, we implement a self-optimizing storage system that rearranges popular blocks on a dedicated partition based on the observed workload characteristics. Our evaluation shows an improvement of over 80% in disk busy times over a range of workloads. These results show that applying knowledge of data access patterns to allocation decisions can substantially improve I/O performance.
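
The dynamic-layout idea can be sketched as follows: count block accesses over an observation window, then remap the hottest blocks into contiguous slots of a dedicated partition so that future accesses to them are mostly sequential. The counter, the remapping table, and the window policy are illustrative; the thesis's actual policy and on-disk mechanism are more involved:

```python
from collections import Counter

class HotBlockRemapper:
    def __init__(self, hot_slots):
        self.hot_slots = hot_slots   # contiguous slots available in the dedicated partition
        self.counts = Counter()      # block -> accesses observed in the current window
        self.remap = {}              # original block -> slot in the dedicated partition

    def record_access(self, block):
        self.counts[block] += 1

    def reorganize(self):
        """Place the most popular blocks, in popularity order, into contiguous slots."""
        hottest = [b for b, _ in self.counts.most_common(self.hot_slots)]
        self.remap = {block: slot for slot, block in enumerate(hottest)}
        self.counts.clear()          # start a fresh observation window

    def resolve(self, block):
        """Translate a logical block to its (possibly remapped) physical location."""
        if block in self.remap:
            return ("hot_partition", self.remap[block])
        return ("original", block)

r = HotBlockRemapper(hot_slots=2)
for b in [17, 42, 42, 42, 99, 17, 17, 17]:
    r.record_access(b)
r.reorganize()
print(r.resolve(17), r.resolve(42), r.resolve(5))
# ('hot_partition', 0) ('hot_partition', 1) ('original', 5)
```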

Relevance:

90.00%

Publisher:

Abstract:

The purpose of the study was to explore the geography literacy, attitudes, and experiences of Florida International University (FIU) freshman students scoring at the low and high ends of a geography literacy survey. The Geography Literacy and ABC Models formed the conceptual framework. Participants were freshman students enrolled in the Finite Math course at FIU. Since it is assumed that students who perform poorly on geography assessments do not have an interest in the subject, testing and interviewing students allowed the researcher to explore this assumption. In Phase I, participants completed the Geography Literacy Survey (GLS) with items taken from the 2010 NAEP Geography Subject Area Assessment. The lowest-performing 35% and highest-performing 20% were invited for Phase II, which consisted of semi-structured interviews. A total of 187 students participated in Phase I and 12 in Phase II. The primary research question was: What are the geography attitudes and experiences of freshman students scoring at the low and high ends of a geographical literacy survey? The students had positive attitudes regardless of how they performed on the GLS. The study included a quantitative sub-question regarding the performance of the students on the GLS. The students’ performance on the GLS was equivalent to the performance of 12th grade students on the NAEP Assessment. There were three qualitative sub-questions from which the following themes were identified: the students’ definition of geography is limited, students recall more out-of-school experiences with geography, and students find geography valuable. In addition, there were five emergent themes: there is a concern regarding a lack of geographical knowledge, rote memorization of geographical content is overemphasized, geographical concepts are related to other subjects, taking the high school level AP Human Geography course is powerful, and there is a need for real-world applications of geographical knowledge. As suggestions for practice, the researcher proposed repositioning geography in schools to avoid misunderstandings, highlighting its interconnectedness to other fields, connecting the material to real-world events and daily decision-making, making research projects meaningful, partnering with local geographers, and offering mandatory geography courses at all educational levels.

Relevance:

90.00%

Publisher:

Abstract:

Online Social Network (OSN) services provided by Internet companies bring people together to chat and to share and consume information. Meanwhile, huge amounts of data are generated by these services (which can be regarded as social media) every day, every hour, even every minute and every second. Currently, researchers are interested in analyzing OSN data, extracting interesting patterns from it, and applying those patterns to real-world applications. However, due to the large scale of OSN data, it is difficult to analyze effectively. This dissertation focuses on applying data mining and information retrieval techniques to mine two key components of social media data: users and user-generated contents. Specifically, it aims at addressing three problems related to social media users and contents: (1) how does one organize the users and the contents? (2) how does one summarize the textual contents so that users do not have to go over every post to capture the general idea? (3) how does one identify the influential users in social media to benefit other applications, e.g., marketing campaigns? The contribution of this dissertation is briefly summarized as follows. (1) It provides a comprehensive and versatile data mining framework to analyze the users and user-generated contents from social media. (2) It designs a hierarchical co-clustering algorithm to organize the users and contents. (3) It proposes multi-document summarization methods to extract core information from social network contents. (4) It introduces three important dimensions of social influence, and a dynamic influence model for identifying influential users.
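
The multi-document summarization task in problem (2) can be illustrated with a simple frequency-based extractive baseline: sentences drawn from several posts are scored by the words they share with the whole collection, and the top few are kept. This is a generic illustration of the task, not the dissertation's proposed summarization method:

```python
import re
from collections import Counter

def summarize(posts, n_sentences=2):
    # Split posts into sentences and count word frequencies across the collection.
    sentences = [s.strip() for p in posts for s in re.split(r"[.!?]", p) if s.strip()]
    words = Counter(w for s in sentences for w in re.findall(r"[a-z']+", s.lower()))

    def score(sentence):
        toks = re.findall(r"[a-z']+", sentence.lower())
        return sum(words[t] for t in toks) / (len(toks) or 1)   # average word salience

    ranked = sorted(sentences, key=score, reverse=True)
    return ranked[:n_sentences]

posts = [
    "The new phone has a great camera. Battery life is also improved.",
    "Camera quality on the new phone impressed everyone at the launch.",
    "I mostly care about battery life and the camera when buying a phone.",
]
print(summarize(posts))
```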

Relevance:

90.00%

Publisher:

Abstract:

The effectiveness of an optimization algorithm can be reduced to its ability to navigate an objective function’s topology. Hybrid optimization algorithms combine various optimization algorithms under a single meta-heuristic so that the hybrid algorithm is more robust, computationally efficient, or accurate than the individual algorithms from which it is built. This thesis proposes a novel meta-heuristic that uses search vectors to select the constituent algorithm that is appropriate for a given objective function. The hybrid is shown to perform competitively against several existing hybrid and non-hybrid optimization algorithms over a set of three hundred test cases. This thesis also proposes a general framework for evaluating the effectiveness of hybrid optimization algorithms. Finally, this thesis presents an improved Method of Characteristics Code with novel boundary conditions, which better characterizes pipelines than previous codes. This code is coupled with the hybrid optimization algorithm in order to optimize the operation of real-world piston pumps.
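
The hybrid idea can be sketched with a toy meta-heuristic that lets two simple constituent optimizers (a random probe and a coordinate step) propose moves from the current best point and keeps whichever helps most. The constituent methods and the selection rule are illustrative assumptions, not the thesis's search-vector mechanism:

```python
import random

def random_probe(f, x, scale=1.0):
    """Constituent method 1: a random perturbation, kept only if it improves f."""
    cand = [xi + random.uniform(-scale, scale) for xi in x]
    return cand if f(cand) < f(x) else x

def coordinate_step(f, x, step=0.1):
    """Constituent method 2: greedy moves along each coordinate axis."""
    best = list(x)
    for i in range(len(x)):
        for d in (-step, step):
            cand = list(best)
            cand[i] += d
            if f(cand) < f(best):
                best = cand
    return best

def hybrid_minimize(f, x0, iterations=200):
    x = list(x0)
    methods = [random_probe, coordinate_step]
    for _ in range(iterations):
        candidates = [m(f, x) for m in methods]
        x = min(candidates + [x], key=f)     # keep whichever method made the most progress
    return x

rosenbrock = lambda v: (1 - v[0]) ** 2 + 100 * (v[1] - v[0] ** 2) ** 2
print(hybrid_minimize(rosenbrock, [-1.5, 2.0]))
```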

Relevance:

90.00%

Publisher:

Abstract:

Kernel-level malware is one of the most dangerous threats to the security of users on the Internet, so there is an urgent need for its detection. The most popular detection approach is misuse-based detection. However, it cannot keep up with today's advanced malware, which increasingly applies polymorphism and obfuscation. In this thesis, we present our integrity-based detection for kernel-level malware, which does not rely on the specific features of malware. We have developed an integrity analysis system that can derive and monitor integrity properties for commodity operating system kernels. In our system, we focus on two classes of integrity properties: data invariants and the integrity of Kernel Queue (KQ) requests. We adopt static analysis for data invariant detection and overcome several technical challenges: field-sensitivity, array-sensitivity, and pointer analysis. We identify data invariants that are critical to system runtime integrity from Linux kernel 2.4.32 and the Windows Research Kernel (WRK) with very low false positive and false negative rates. We then develop an Invariant Monitor to guard these data invariants against real-world malware. In our experiments, we are able to use Invariant Monitor to detect ten real-world Linux rootkits, nine real-world Windows malware samples, and one synthetic Windows malware sample. We leverage static and dynamic analysis of the kernel and device drivers to learn the legitimate KQ requests. Based on the learned KQ requests, we build KQguard to protect KQs. At runtime, KQguard rejects all unknown KQ requests that cannot be validated. We apply KQguard to WRK and the Linux kernel, and extensive experimental evaluation shows that KQguard is efficient (up to 5.6% overhead) and effective (capable of achieving zero false positives against representative benign workloads after appropriate training, and very low false negatives against 125 real-world malware samples and nine synthetic attacks). In our system, Invariant Monitor and KQguard cooperate to protect data invariants and KQs in the target kernel. By monitoring these integrity properties, we can detect malware through its violation of these integrity properties during execution.
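
The data-invariant side of this approach can be sketched as follows: invariants (here, "value never changes" and "value stays within a learned set") are learned from clean snapshots of selected kernel objects and then checked at runtime, with any violation flagged as a potential rootkit. The snapshot source is a plain dictionary purely for illustration; a real monitor would read these fields from kernel memory, and the field names below are hypothetical:

```python
def learn_invariants(clean_snapshots):
    """Derive a simple invariant for each monitored field from clean snapshots."""
    invariants = {}
    for field in clean_snapshots[0]:
        values = {snap[field] for snap in clean_snapshots}
        if len(values) == 1:
            invariants[field] = ("constant", next(iter(values)))
        else:
            invariants[field] = ("member_of", values)
    return invariants

def check(invariants, snapshot):
    """Return the fields whose current values violate their learned invariants."""
    violations = []
    for field, (kind, expected) in invariants.items():
        value = snapshot[field]
        if kind == "constant" and value != expected:
            violations.append((field, value))
        elif kind == "member_of" and value not in expected:
            violations.append((field, value))
    return violations

clean = [
    {"sys_call_table[0]": 0xc0106a40, "idt_entry_0x80": 0xc0109abc},
    {"sys_call_table[0]": 0xc0106a40, "idt_entry_0x80": 0xc0109abc},
]
inv = learn_invariants(clean)
infected = {"sys_call_table[0]": 0xdeadbeef, "idt_entry_0x80": 0xc0109abc}
print(check(inv, infected))   # -> [('sys_call_table[0]', 3735928559)]
```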

Relevance:

90.00%

Publisher:

Abstract:

Managed lane strategies are innovative road operation schemes for addressing congestion problems. These strategies operate one or more lanes adjacent to a freeway that provide congestion-free trips to eligible users, such as transit vehicles or toll-payers. To ensure the successful implementation of managed lanes, the demand on these lanes needs to be accurately estimated. Among different approaches for predicting this demand, the four-step demand forecasting process is most common. Managed lane demand is usually estimated at the assignment step. Therefore, the key to reliably estimating the demand is the utilization of effective assignment modeling processes. Managed lanes are particularly effective when the road is functioning at near-capacity. Therefore, capturing variations in demand and in network attributes and performance is crucial for their modeling, monitoring, and operation. As a result, traditional modeling approaches, such as those used in the static traffic assignment of demand forecasting models, fail to correctly predict managed lane demand and the associated system performance. The present study demonstrates the power of the more advanced modeling approach of dynamic traffic assignment (DTA), as well as the shortcomings of conventional approaches, when used to model managed lanes in congested environments. In addition, the study develops processes to support the effective utilization of DTA to model managed lane operations. Static and dynamic traffic assignments consist of demand, network, and route choice model components that need to be calibrated. These components interact with each other, and an iterative method for calibrating them is needed. In this study, an effective standalone framework that combines static demand estimation and dynamic traffic assignment has been developed to replicate real-world traffic conditions. With advances in traffic surveillance technologies, collecting, archiving, and analyzing traffic data is becoming more accessible and affordable. The present study shows how data from multiple sources can be integrated, validated, and best used in different stages of modeling and calibration of managed lanes. Extensive and careful processing of demand, traffic, and toll data, as well as proper definition of performance measures, results in a calibrated and stable model, which closely replicates real-world congestion patterns and can reasonably respond to perturbations in network and demand properties.
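
The iterative demand-and-assignment calibration loop described above can be sketched at a very high level: demand is adjusted until assigned link volumes approach observed counts, with a placeholder function standing in for the dynamic traffic assignment run. The single demand value, the proportional adjustment rule, and the convergence test are illustrative assumptions only:

```python
def assign(demand):
    """Placeholder for a dynamic traffic assignment run: returns simulated link volumes."""
    return {"general_purpose": 0.7 * demand, "managed_lane": 0.3 * demand}

observed = {"general_purpose": 4200.0, "managed_lane": 1800.0}   # counts from detectors

demand = 5000.0                      # initial seed demand
for _ in range(20):
    simulated = assign(demand)
    rmse = (sum((simulated[k] - observed[k]) ** 2 for k in observed) / len(observed)) ** 0.5
    ratio = sum(observed.values()) / sum(simulated.values())
    if abs(ratio - 1.0) < 1e-3:      # simulated volumes match observed counts
        break
    demand *= ratio                  # proportional demand adjustment toward the counts

print(round(demand), round(rmse, 1))
```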

Relevance:

90.00%

Publisher:

Abstract:

Concurrent software executes multiple threads or processes to achieve high performance. However, concurrency results in a huge number of different system behaviors that are difficult to test and verify. The aim of this dissertation is to develop new methods and tools for modeling and analyzing concurrent software systems at the design and code levels. This dissertation consists of several related results. First, a formal model of Mondex, an electronic purse system, is built from user requirements using Petri nets and is formally verified using model checking. Second, Petri net models are automatically mined from the event traces generated by scientific workflows. Third, partial order models are automatically extracted from instrumented concurrent program executions, and potential atomicity violation bugs are automatically verified based on the partial order models using model checking. Our formal specification and verification of Mondex have contributed to the worldwide effort to develop a verified software repository. Our method for mining Petri net models automatically from provenance offers a new approach to building scientific workflows. Our dynamic prediction tool, named McPatom, can predict several known bugs in real-world systems, including one that evades several other existing tools. McPatom is efficient and scalable, as it takes advantage of the nature of atomicity violations and considers only a pair of threads and accesses to a single shared variable at a time. However, predictive tools need to consider the tradeoffs between precision and coverage. Based on McPatom, this dissertation presents two methods for improving the coverage and precision of atomicity violation predictions: (1) a post-prediction analysis method to increase coverage while ensuring precision; (2) a follow-up replaying method to further increase coverage. Both methods are implemented in a completely automatic tool.
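
The class of bug targeted here, an atomicity violation between a pair of threads on a single shared variable, can be illustrated with a small example: a read-modify-write that the programmer intends to be atomic but leaves unprotected, so interleavings can lose updates, with a lock restoring atomicity. This is a generic illustration of the bug pattern, not McPatom or its prediction algorithm:

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    global counter
    for _ in range(n):
        tmp = counter          # read
        tmp += 1               # modify
        counter = tmp          # write: another thread may have interleaved in between

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:             # the whole read-modify-write is now a single atomic block
            counter += 1

for worker in (unsafe_increment, safe_increment):
    counter = 0
    threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(worker.__name__, counter)    # the unsafe run often prints less than 200000
```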