955 results for Business Administration, Management|Computer Science


Relevance: 100.00%

Abstract:

Universities have had to adapt to the new communication models that emerged in the Internet era. Within these new paradigms, social networks have burst onto the scene and Twitter has established itself as one of the most important. The aim of this research is to show that there is a relationship between a university's online presence, defined by the amount of information available on the Internet, and its Twitter account. To that end, we analysed the relationship between online presence and the official profiles of the five universities of the Basque Country and Navarre. The results showed a significant correlation between the institutions' online presence and the number of followers of their respective accounts. Secondly, this research asked whether Twitter can serve to strengthen a university's online presence. A second hypothesis was therefore formulated to examine whether holding several Twitter accounts would increase a university's online presence. The findings for this second hypothesis showed a highly significant correlation between maintaining several Twitter profiles and the universities' online presence. This demonstrates the importance of online presence for Twitter accounts and the relevance of Twitter in strengthening the online presence of these institutions.
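The core statistical step in a study like this is a correlation between two per-institution measures. A minimal sketch in Python follows, with entirely hypothetical figures for five universities; the paper's actual data and correlation method are not reproduced here:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical figures: indexed pages (a proxy for online presence)
# vs. followers of each university's official Twitter account.
pages = [120_000, 85_000, 260_000, 40_000, 150_000]
followers = [18_000, 12_500, 35_000, 6_000, 21_000]

r = pearson_r(pages, followers)
```

With roughly proportional inputs like these, `r` lands close to 1, which is the kind of significant positive correlation the study reports.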

Relevance: 100.00%

Abstract:

The growing importance of knowledge as an input to the production process has increased the complexity of hiring in the skilled labour market. As a consequence, a process of reflection has arisen about how well university accreditation matches the needs of the labour market. Academics, managers and experts in the market for highly qualified workers have long taken part in this reflection. This study aims to identify the professional competencies most relevant to the employability of graduates in Economics and Business. The analysis is based on qualitative research that takes employers' opinions as its source of information. The data were gathered through in-depth interviews and a discussion group involving employers, managers of the University of Barcelona's internship service, experts and managers from placement agencies, representatives of employers' organisations, and university lecturers. Attention centres on the interviewees' perception of the knowledge, skills and attitudes required, the degree to which graduates develop them, and the changes needed to achieve a better match between the competencies graduates acquire and those the labour market demands. Building on the classification of professional competencies from the Tuning project, the study highlights the importance employers attach to generic competencies, although valuations differ by type of company. It also reveals deficits in some relevant aspects, such as practical training and the capacity for initiative, analysis and organisation.
Finally, the opinions gathered confirm the need to bring the university closer to the productive system, at least in the economic and business field.

Relevance: 100.00%

Abstract:

Abstract: Decision support systems have been widely used for years in companies to gain insights from internal data and thus make successful decisions. Lately, thanks to the increasing availability of open data, these systems are also integrating open data to enrich the decision-making process with external data. Conversely, within an open-data scenario, decision support systems can also be useful for deciding which data should be opened, considering not only technical or legal constraints but also other requirements, such as the "reuse potential" of the data. In this talk, we focus on both issues: (i) open data for decision making, and (ii) decision making for opening data. We will first briefly comment on some research problems regarding the use of open data for decision making. Then, we will outline a novel decision-making approach (based on how open data is actually used in open-source projects hosted on GitHub) for supporting open data publication. Bio of the speaker: Jose-Norberto Mazón holds a PhD from the University of Alicante (Spain). He is head of the "Cátedra Telefónica" on Big Data and coordinator of the Computing degree at the University of Alicante. He is also a member of the WaKe research group at the University of Alicante. His research focuses on open data management, data integration and business intelligence within "big data" scenarios, and their application to the tourism domain (smart tourism destinations). He has published his research in international journals such as Decision Support Systems, Information Sciences, Data & Knowledge Engineering and ACM Transactions on the Web. Finally, he is involved in the open data project at the University of Alicante, including its open data portal at http://datos.ua.es
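The talk's second theme, ranking candidate datasets by their "reuse potential" before opening them, can be sketched as a toy scoring function over usage signals. The signal names and weights below are assumptions for illustration only, not the approach actually presented in the talk:

```python
from dataclasses import dataclass

@dataclass
class DatasetUsage:
    # Hypothetical usage signals harvested from open-source projects
    # (e.g. repositories on GitHub that consume the dataset).
    name: str
    dependent_repos: int   # projects importing or linking the dataset
    stars: int             # aggregate stars of those projects
    recent_commits: int    # commits touching the dataset in the last year

def reuse_potential(d: DatasetUsage) -> float:
    """Toy 'reuse potential' score: a weighted sum of usage signals.

    The weights are illustrative, not taken from the talk.
    """
    return 0.6 * d.dependent_repos + 0.3 * (d.stars / 10) + 0.1 * d.recent_commits

candidates = [
    DatasetUsage("bus-timetables", 40, 300, 12),
    DatasetUsage("street-lighting", 3, 20, 1),
]
# Rank candidates so the datasets most likely to be reused are opened first.
ranked = sorted(candidates, key=reuse_potential, reverse=True)
```

The point of the sketch is only the decision structure: evidence of actual reuse in open-source projects feeds a score that prioritises publication effort.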

Relevance: 100.00%

Abstract:

The use of simulation games as a pedagogic method is well established, though their effective use is context-driven. This study adds to the growing body of empirical evidence for the effectiveness of simulation games but, more importantly, explains why, by describing the instructional design implemented, which reflects best practice. This multimethod study finds evidence that student learning was enhanced through the use of simulation games, reflected in two key themes: simulation games as a catalyst for learning and simulation games as a vehicle for learning. In so doing, the research provides one of the few empirically based studies that support simulation games in enhancing learning and, more importantly, contextualises that enhancement in terms of the instructional design of the curriculum. This research should prove valuable for those with an academic interest in the use of simulation games and for management educators who use, or are considering using, them. Further, the findings contribute to the academic debate concerning the effective implementation of simulation-game-based training in business and management education.

Relevance: 100.00%

Abstract:

This qualitative study explores the subjective experience of being led by investigating the impact of followers' Implicit Leadership Theories (ILTs) on their cognitive processes, affective responses and behavioural intentions towards leadership claimants. The study explores how such responses influence the quality of hierarchical workplace relationships, using a framework based on Leader-Member Exchange (LMX) theory. The research uses focus groups to elicit descriptions of the ILTs held by forty final-year undergraduate Business and Management students. The data were then analysed using an abductive process, permitting an interpretative understanding of the meanings participants attach to their past experiences and future expectations. This research addresses a perceived gap by making a theoretical contribution to knowledge and understanding in this field, focusing on how followers' emotional responses affect their behaviour, how this impacts organisational outcomes, and what the implications are for HRD practitioners. The findings support previous research into the content and structure of ILTs but extend it by examining the impact of affect on workplace behaviour. The findings demonstrate that where followers' ILT needs are met, positive outcomes ensue for participants, their superiors and their organisations. Conversely, where followers' ILT needs are not matched, various negative effects emerge, ranging from poor performance and impaired well-being to withdrawal behaviour and outright rebellion. The findings suggest dynamic reciprocal links among outcomes, behaviours and LMX, and demonstrate an alignment of cognitive, emotional and behavioural responses corresponding to either high-LMX or low-LMX relationships, with major impacts on job satisfaction, commitment and well-being.

Relevance: 100.00%

Abstract:

Although business simulations are widely used in management education, there is no consensus about how to optimise their application. Our research explores the use of business simulations as one dimension of a blended learning pedagogic approach to undergraduate business education. Accepting that few best-practice prescriptive models for the design and implementation of simulations in this context have been presented, and that there is little empirical evidence for the claims made by proponents of such models, we address this lacuna by considering business students' perspectives on the use of simulations. We then compare the available data with the positive outcomes espoused by the authors of one prescriptive model. We find the model to be essentially robust and offer evidence to support this position. In so doing we provide one of the few empirically based studies to support the claims made by proponents of simulations in business education. The research should prove valuable for those with an academic interest in the use of simulations, either as a blended learning dimension or as a stand-alone business education activity. Further, the findings contribute to the academic debate surrounding the use and efficacy of simulation-based training (SBT) within business and management education.

Relevance: 100.00%

Abstract:

In today's big data world, data is produced in massive volumes, at great velocity, and from a variety of sources such as mobile devices, sensors, the plethora of small devices connected to the Internet (the Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are increasingly used to derive value from this big data. A large portion of this data is stored and processed in the Cloud, due to the advantages the Cloud provides, such as scalability, elasticity, availability, low cost of ownership and overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required, so reducing the cost of data analytics in the Cloud remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully.
I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that uses workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing size (progressive samples) for exploratory querying, which gives data scientists user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. Existing approximate query processing systems, on the other hand, report early results but do not offer these benefits for complex ad-hoc queries. I propose NOW!, a progressive data-parallel computation framework that supports progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! delivers early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
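The objective SWORD targets, placing co-accessed data items on the same partition so fewer transactions span partitions, can be illustrated with a toy cost function. The workload and both placements below are hypothetical; SWORD's actual placement and replication machinery is far more sophisticated than this sketch:

```python
def distributed_txns(transactions, placement):
    """Count transactions whose items span more than one partition."""
    return sum(1 for txn in transactions
               if len({placement[item] for item in txn}) > 1)

# Hypothetical workload: each transaction touches a set of data items.
workload = [{"a", "b"}, {"a", "b"}, {"c", "d"}, {"b", "c"}]

# Two candidate placements of items onto partitions 0 and 1.
naive = {"a": 0, "b": 1, "c": 0, "d": 1}   # items spread without regard to access
aware = {"a": 0, "b": 0, "c": 1, "d": 1}   # frequently co-accessed items co-located

naive_cost = distributed_txns(workload, naive)  # every transaction crosses partitions
aware_cost = distributed_txns(workload, aware)  # only {"b", "c"} crosses partitions
```

The workload-aware placement cuts the distributed-transaction count from 4 to 1 on this toy workload, which is the kind of reduction that lowers coordination overhead at runtime.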
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs; examples include ego-network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, and link prediction. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the portions of the graph relevant to an analysis task and loading them into distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes, with several real-world data sets and applications, validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
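The basic unit NSCALE computes over, a multi-hop neighborhood such as an ego network, can be sketched with a breadth-first traversal over an adjacency list. This illustrates only the abstraction (the subgraph a neighborhood-centric task operates on), not NSCALE's distributed extraction or execution machinery:

```python
from collections import deque

def k_hop_subgraph(adj, seed, k):
    """Return the vertices within k hops of `seed` in adjacency dict `adj`.

    This is the subgraph a neighborhood-centric task (ego-network
    analysis, motif counting, ...) would process for this seed vertex.
    """
    seen = {seed: 0}          # vertex -> hop distance from seed
    queue = deque([seed])
    while queue:
        v = queue.popleft()
        if seen[v] == k:      # frontier reached: do not expand further
            continue
        for w in adj.get(v, ()):
            if w not in seen:
                seen[w] = seen[v] + 1
                queue.append(w)
    return set(seen)

# Hypothetical toy graph.
adj = {"u": ["v", "w"], "v": ["u", "x"], "w": ["u"], "x": ["v", "y"], "y": ["x"]}
ego = k_hop_subgraph(adj, "u", 1)       # 1-hop ego network of "u"
two_hop = k_hop_subgraph(adj, "u", 2)
```

A vertex-centric framework would force the user program to reassemble such a neighborhood through rounds of message passing; letting users name the subgraph directly is the design choice the abstract argues for.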

Relevance: 100.00%

Abstract:

Maintaining accessibility to and understanding of digital information over time is a complex challenge that often requires contributions and interventions from a variety of individuals and organizations. The processes of preservation planning and evaluation are fundamentally implicit and share similar complexity. Both demand comprehensive knowledge and understanding of every aspect of to-be-preserved content and the contexts within which preservation is undertaken. Consequently, means are required for the identification, documentation and association of those properties of data, representation and management mechanisms that in combination lend value, facilitate interaction and influence the preservation process. These properties may be almost limitless in terms of diversity, but are integral to the establishment of classes of risk exposure, and the planning and deployment of appropriate preservation strategies. We explore several research objectives within the course of this thesis. Our main objective is the conception of an ontology for risk management of digital collections. Incorporated within this are our aims to survey the contexts within which preservation has been undertaken successfully, the development of an appropriate methodology for risk management, the evaluation of existing preservation evaluation approaches and metrics, the structuring of best practice knowledge and lastly the demonstration of a range of tools that utilise our findings. We describe a mixed methodology that uses interview and survey, extensive content analysis, practical case study and iterative software and ontology development. We build on a robust foundation, the development of the Digital Repository Audit Method Based on Risk Assessment. 
We summarise the extent of the challenge facing the digital preservation community (and by extension users and creators of digital materials from many disciplines and operational contexts) and present the case for a comprehensive and extensible knowledge base of best practice. These challenges are manifested in the scale of data growth, increasing complexity, and the increasing onus on communities with no formal training to offer assurances of data management and sustainability. Collectively they imply a challenge that demands an intuitive and adaptable means of evaluating digital preservation efforts. The need for individuals and organisations to validate the legitimacy of their own efforts is particularly prioritised. We introduce our approach, based on risk management. Risk is an expression of both the likelihood of a negative outcome and the impact of such an occurrence. We describe how risk management may be considered synonymous with preservation activity: a persistent effort to negate the dangers posed to information availability, usability and sustainability. Risks can be characterised according to associated goals, activities, responsibilities and policies, in terms of both their manifestation and their mitigation. They can be deconstructed into their atomic units, and responsibility for their resolution delegated appropriately. We go on to describe how the manifestation of risks typically spans an entire organisational environment, and how taking risk as the focus of our analysis safeguards against omissions that may occur when assessment is pursued along functional, departmental or role-based lines. We discuss the importance of relating risk factors, through the risks themselves or through associated system elements. Doing so will yield the preservation best-practice knowledge base that is conspicuously lacking within the international digital preservation community.
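The working definition above, risk as the likelihood of a negative outcome combined with its impact, lends itself to a minimal risk-register sketch. The register entries, likelihood figures and impact scale below are hypothetical illustrations, not values from the thesis:

```python
def risk_exposure(likelihood: float, impact: float) -> float:
    """Risk expressed as the likelihood of a negative outcome times its impact."""
    return likelihood * impact

# Hypothetical entries in a preservation risk register:
# (risk, annual likelihood, impact on a 1-10 scale).
register = [
    ("format obsolescence", 0.30, 8),
    ("storage media failure", 0.10, 9),
    ("metadata loss", 0.05, 6),
]

# Rank risks so mitigation effort targets the largest exposures first.
ranked = sorted(register, key=lambda r: risk_exposure(r[1], r[2]), reverse=True)
```

An ontology such as the one the thesis develops would go further, relating each risk to the goals, activities, responsibilities and policies involved in its manifestation and mitigation.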
We present as research outcomes an encapsulation of preservation practice (and explicitly defined best practice) as a series of case studies, in turn distilled into atomic, related information elements. We conduct our analyses in the formal evaluation of memory institutions in the UK, US and continental Europe. Furthermore we showcase a series of applications that use the fruits of this research as their intellectual foundation. Finally we document our results in a range of technical reports and conference and journal articles. We present evidence of preservation approaches and infrastructures from a series of case studies conducted in a range of international preservation environments. We then aggregate this into a linked data structure entitled PORRO, an ontology relating preservation repository, object and risk characteristics, intended to support preservation decision making and evaluation. The methodology leading to this ontology is outlined, and lessons are exposed by revisiting legacy studies and exposing the resource and associated applications to evaluation by the digital preservation community.

Relevance: 100.00%

Abstract:

Part 4: Transition Towards Product-Service Systems

Relevance: 100.00%

Abstract:

Prior research shows that electronic word of mouth (eWOM) wields considerable influence over consumer behavior. However, as the volume and variety of eWOM grows, firms are faced with challenges in analyzing and responding to this information. In this dissertation, I argue that to meet the new challenges and opportunities posed by the expansion of eWOM and to more accurately measure its impacts on firms and consumers, we need to revisit our methodologies for extracting insights from eWOM. This dissertation consists of three essays that further our understanding of the value of social media analytics, especially with respect to eWOM. In the first essay, I use machine learning techniques to extract semantic structure from online reviews. These semantic dimensions describe the experiences of consumers in the service industry more accurately than traditional numerical variables. To demonstrate the value of these dimensions, I show that they can be used to substantially improve the accuracy of econometric models of firm survival. In the second essay, I explore the effects on eWOM of online deals, such as those offered by Groupon, the value of which to both consumers and merchants is controversial. Through a combination of Bayesian econometric models and controlled lab experiments, I examine the conditions under which online deals affect online reviews and provide strategies to mitigate the potential negative eWOM effects resulting from online deals. In the third essay, I focus on how eWOM can be incorporated into efforts to reduce foodborne illness, a major public health concern. I demonstrate how machine learning techniques can be used to monitor hygiene in restaurants through crowd-sourced online reviews. I am able to identify instances of moral hazard within the hygiene inspection scheme used in New York City by leveraging a dictionary specifically crafted for this purpose. 
To the extent that online reviews provide some visibility into the hygiene practices of restaurants, I show how losses from information asymmetry may be partially mitigated in this context. Taken together, this dissertation contributes by revisiting and refining the use of eWOM in the service sector through a combination of machine learning and econometric methodologies.
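The third essay's dictionary-based approach can be sketched as simple term matching over review text. The lexicon below is a small stand-in invented for illustration; the dissertation's dictionary was crafted specifically for the hygiene domain and the NYC inspection setting:

```python
import re

# Hypothetical hygiene lexicon; the real dictionary was purpose-built.
HYGIENE_TERMS = {"dirty", "roach", "roaches", "mice", "filthy", "sick"}

def hygiene_signal(review: str) -> int:
    """Count hygiene-related terms in a review: a crude crowd-sourced signal."""
    tokens = re.findall(r"[a-z]+", review.lower())
    return sum(1 for t in tokens if t in HYGIENE_TERMS)

reviews = [
    "Great food, lovely staff!",
    "Saw roaches near the counter and the floor was filthy.",
]
# Flag restaurants whose reviews carry any hygiene-related terms.
flagged = [r for r in reviews if hygiene_signal(r) > 0]
```

Aggregated over many reviews per restaurant, signals like this give inspectors an external view of hygiene practices, which is how the essay argues review text can partially offset the information asymmetry in inspections.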