435 results for analytics
Strategic planning process in a public university: the case of the Universidade Federal do Pará
Abstract:
The goal of this research is to determine whether the strategic planning carried out between 2001 and 2009 at the Federal University of Pará (Universidade Federal do Pará, UFPA) became consolidated as a management practice in its Academic Centers. To this end, we identified the degree of planning formalization in the Academic Centers, the tools conceived for planning, the conception and methodological process used in elaborating those tools, and their implementation. The research takes a qualitative, descriptive approach using the case study technique. Data were gathered from primary and secondary sources through bibliographic and documentary research and through fieldwork based on semi-structured interviews. Analysis and interpretation were carried out for each investigated Academic Center using analytical categories derived from the specific objectives. The study rests on theoretical foundations and takes the university as its empirical reference, analyzing its structure, organizational processes, and institutional strategic plan. From the collected documents and interviews, we examined how the strategic planning process developed over the period studied and the current situation of the investigated Academic Centers. The theoretical foundation was built on three axes: the Brazilian undergraduate and postgraduate education system; the university itself, in its singularity and complexity as an organization; and planning as a strategic management process. The main results show that the UFPA has an up-to-date regulatory framework, with an organizational structure, rules, instructions, manuals, and a deployed management model that provide the conditions for strategic planning to develop beyond the central administration, i.e., in its Academic Centers. The Centers likewise present this established framework and carry out the institution's basic planning processes. These processes are conceived on the basis of the institutional strategic planning, and managers rely mainly on the procedural orientation defined by the university administration, from which the conceptual foundation originates and is propagated. In light of the literature and the research carried out in this work, we conclude that the Academic Centers of the UFPA have developed the practice of strategic planning. This planning is organized and well founded, and it has guided plans and decisions, avoiding disordered management and, according to the managers, enabling advances and performance improvements. We conclude that the UFPA has built an important foundation for the professionalization of its management. On the other hand, we cannot conclude that the practice is consolidated, since the technical teams remain weakly structured and no management tool exists for implementing the elaborated plans.
Abstract:
The human factor is often recognised as a major aspect of cyber-security research. Risk and situational perception are identified as key factors in the decision-making process, often playing a lead role in the adoption of security mechanisms. However, risk awareness and perception have been poorly investigated in the field of eHealth wearables. Whilst end-users often have a limited understanding of the privacy and security of wearables, assessing the perceived risks and consequences will help shape the usability of future security mechanisms. This paper presents a survey of the risks and situational awareness in eHealth services. An analysis of the lack of security and privacy measures in connected health devices is presented, along with recommendations to circumvent critical situations.
Abstract:
This Bachelor's thesis was carried out as a literature review whose aim is to identify the use cases of data analytics and the impact of exploiting data on business. The thesis covers the use of data analytics and the challenges of exploiting data effectively. The scope is limited to corporate financial management, where analytics is used in management accounting and financial accounting. The exponential growth rate of data volumes creates new challenges and opportunities for the use of data analytics. Data in itself, however, holds little value for a company; the value arises through processing. Although data analytics is already widely studied and used, it offers possibilities far greater than its current applications. One of the key findings of this thesis is that data analytics can make management accounting more effective and ease the tasks of financial accounting. The amount of available data, however, is growing so fast that the available technology and the level of expertise cannot keep up with this development. In particular, the wider adoption of big data and its effective exploitation will increasingly influence the practices and applications of financial management in the future.
Abstract:
Between November 2015 and March 2016, I assigned my Graduate Assistant, David Durden, a project to compile usage statistics and trends for digitized collections, covering UMD Digital Collections from 2013 to 2015 and our contributions to the Internet Archive from 2008 to 2015. The original intent of the project was to provide usage metrics to assist the Digitization Initiatives Committee in prioritizing projects or content areas. The project also uncovered trends that should impact how we think about making digital collections discoverable and accessible. For example, if 50-60% of traffic into UMD Digital Collections comes from outside the University or College Park, MD, how will this impact the potential usage of content when access is restricted to campus due to licensing, copyright, or ownership restrictions? With a growing population using mobile browsers, how will a Flash-based viewer restrict users’ access to content? How might we develop content or its discoverability for a growing social media user base? In this talk, I will briefly discuss the usage trends for the represented collections, how we may use these in prioritizing future projects, and issues I will discuss with collection managers as we develop project plans and with the Manager of Digital Programs and Initiatives as we develop the digital collections repository.
Abstract:
This presentation was one of four in a Mid-Atlantic Regional Archives Conference session on April 15, 2016. Digitization of collections can help to improve internal workflows, make materials more accessible, and create new and engaging relationships with users. Laurie Gemmill Arp will discuss the LYRASIS Digitization Collaborative, created to assist institutions with their digitization needs, and how it has worked to help institutions increase connections with users. Robin Pike from the University of Maryland will discuss how they factor requests for access into selection for digitization and how they track the use of digitized materials. Laura Drake Davis of James Madison University will discuss the establishment of a formal digitization program, its impact on users, and the resulting increased use of their collections. Linda Tompkins-Baldwin will discuss Digital Maryland’s partnership with the Digital Public Library of America to provide access to archives held by institutions without a digitization program.
Abstract:
With the world of professional sports shifting towards better sport analytics, the demand for vision-based performance analysis has grown rapidly in recent years. In addition, the nature of many sports does not allow any kind of sensors or other wearable markers to be attached to players for monitoring their performance during competitions. This creates a potential application for systematic observations, such as player tracking information, to help coaches develop the visual skills and perceptual awareness needed to make decisions about team strategy or training plans. My PhD project is part of a bigger ongoing project between sport scientists and computer scientists, also involving industry partners and sports organisations. The overall idea is to investigate the contribution technology can make to the analysis of sports performance, using team sports such as rugby, football or hockey as examples. A particular focus is on vision-based tracking, so that information about the location and dynamics of the players can be gained without any additional sensors on the players. To start with, prior approaches to visual tracking are extensively reviewed and analysed. In this thesis, methods are proposed to deal with the difficulties of visual tracking, in particular target appearance changes caused by intrinsic factors (e.g. pose variation) and extrinsic factors such as occlusion. This analysis highlights the importance of the proposed visual tracking algorithms, which address these challenges and provide robust and accurate frameworks to estimate the target state in a complex tracking scenario such as a sports scene, thereby facilitating the tracking process. Next, a framework for continuously tracking multiple targets is proposed. Compared to single-target tracking, multi-target tracking, such as tracking the players on a sports field, poses an additional difficulty that needs to be addressed: data association. Here, the aim is to locate all targets of interest, infer their trajectories and decide which observation corresponds to which target trajectory. In this thesis, an efficient framework is proposed to handle this particular problem, especially in sports scenes, where the players of the same team tend to look similar and exhibit complex interactions and unpredictable movements, resulting in matching ambiguity between the players. The presented approach is also evaluated on different sports datasets and shows promising results. Finally, information from the proposed tracking system is utilised as the basic input for further higher-level performance analysis, such as tactics and team formations, which can help coaches to design a better training plan. Due to the continuous nature of many team sports (e.g. soccer, hockey), it is not straightforward to infer high-level team behaviours, such as players’ interactions. The proposed framework relies on two distinct levels of performance analysis: low-level performance analysis, such as identifying players’ positions on the field, and high-level analysis, where the aim is to estimate the density of player locations or to detect possible interaction groups. The related experiments show that the proposed approach can effectively explore this high-level information, which has many potential applications.
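The abstract does not include an implementation, but the data-association step it describes (deciding which detection belongs to which track) is commonly cast as an assignment problem. Below is a minimal illustrative Python sketch, assuming a Euclidean cost between predicted track positions and current detections and a hypothetical gating distance; neither choice is taken from the thesis.

```python
# Illustrative frame-to-frame data association for multi-target tracking.
# The cost metric and gating threshold are hypothetical, not the thesis's.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(predicted, detections, gate=50.0):
    """Match predicted track positions (T, 2) to detections (D, 2).

    gate is the maximum allowed match distance (e.g. in pixels).
    Returns a list of (track_index, detection_index) pairs.
    """
    # Pairwise Euclidean distances form the assignment cost matrix.
    cost = np.linalg.norm(predicted[:, None, :] - detections[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one assignment
    # Reject matches beyond the gate: likely occlusions or new players.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]

# Example: three tracks, three detections (one detection is far from any track).
tracks = np.array([[10.0, 10.0], [50.0, 40.0], [90.0, 90.0]])
dets = np.array([[12.0, 11.0], [48.0, 43.0], [300.0, 300.0]])
print(associate(tracks, dets))  # [(0, 0), (1, 1)] -- third match gated out
```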
Abstract:
Americans are accustomed to a wide range of data collection in their lives: census, polls, surveys, user registrations, and disclosure forms. When logging onto the Internet, users’ actions are tracked everywhere: clicking, typing, tapping, swiping, searching, and placing orders. All of this data is stored to create data-driven profiles of each user. Social network sites, furthermore, set the voluntary sharing of personal data as the default mode of engagement. But people’s time and energy devoted to creating this massive amount of data, on paper and online, are taken for granted. Few people would consider their time and energy spent on data production as labor. Even if some people do acknowledge their labor for data, they believe it is accessory to the activities at hand. In the face of pervasive data collection and the rising time spent on screens, why do people keep ignoring their labor for data? How has labor for data become invisible, something disregarded by many users? What does invisible labor for data imply for everyday cultural practices in the United States? Invisible Labor for Data addresses these questions. I argue that three intertwined forces contribute to framing data production as being void of labor: data production institutions throughout history, the Internet’s technological infrastructure (especially the implementation of algorithms), and the multiplication of virtual spaces. There is a common tendency in the framework of human interactions with computers to deprive data and bodies of their materiality. My Introduction and Chapter 1 offer theoretical interventions by reinstating embodied materiality and redefining labor for data as an ongoing process. The middle chapters present case studies explaining how labor for data is pushed to the margin of narratives about data production. I focus on a nationwide debate in the 1960s on whether the U.S. should build a databank, contemporary Big Data practices in the data broker and Internet industries, and the group of people who are hired to produce data for other people’s avatars in virtual games. I conclude with a discussion of how the new development of crowdsourcing projects may usher in a new chapter in exploiting invisible and discounted labor for data.
Abstract:
In today’s big data world, data is being produced in massive volumes, at great velocity and from a variety of different sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (the Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are being increasingly used to derive value out of this big data. A large portion of this data is being stored and processed in the Cloud due to the several advantages provided by the Cloud, such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has been traditionally used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them into distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
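As a minimal illustration of the sampling-based progressive analytics idea described above (early approximate answers that refine as the sample grows, with repeatable semantics), here is a hedged Python sketch. The fixed sample schedule, the single-shuffle nesting of samples, and the scale-up estimator are assumptions for illustration, not NOW!'s actual mechanics.

```python
# Sketch of sampling-based progressive analytics: an estimate of a SUM
# aggregate reported over progressively larger samples.
import random

def progressive_sum(records, fractions=(0.01, 0.05, 0.25, 1.0), seed=42):
    rng = random.Random(seed)      # fixed seed -> repeatable results
    shuffled = list(records)
    rng.shuffle(shuffled)          # one shuffle; each sample extends the last
    n = len(shuffled)
    for frac in fractions:
        k = max(1, int(n * frac))
        estimate = sum(shuffled[:k]) * (n / k)  # scale sample sum to population
        yield frac, estimate       # early result, tagged with its progress

# Example: watch the estimate converge to the exact sum at 100%.
data = list(range(1_000_000))
for frac, est in progressive_sum(data):
    print(f"{frac:>4.0%} sample -> estimated sum ~ {est:,.0f}")
```

Because each sample is a prefix of one seeded shuffle, samples are nested and reruns are deterministic, which is one simple way to obtain the user control and repeatable semantics the abstract emphasizes.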
Abstract:
As faculty needs evolve and become increasingly digital, libraries are feeling the pressure to provide relevant new services. At the same time, faculty members are struggling to create and maintain their professional reputations online. We at bepress are happy to announce the new SelectedWorks, the fully hosted, library-curated faculty profile platform that positions the library to better support faculty as well as the institution at large. Beverly Lysobey, Digital Commons and Resource Management Librarian at Sacred Heart University, says: “Both faculty and administration have been impressed with the services we provide through SelectedWorks; we’re able to show how much our faculty really publishes, and it’s great for professors to get that recognition. We’ve had several faculty members approach us for help making sure their record was complete when they were up for tenure, and we’ve even found articles that authors themselves no longer had access to.” With consistent, organized, institution-branded profiles, SelectedWorks increases campus-wide exposure and supports the research mission of the university. As the only profile platform integrated with the fully hosted Digital Commons suite of publishing and repository services, it also ensures that the institution retains management of its content. Powerful integration with the Digital Commons platform lets the home institution more fully capture the range of scholarship produced on campus, and hosted services facilitate resource consolidation and reduce strain on IT. The new SelectedWorks features a modern, streamlined design that provides compelling display options for the full range of faculty work. It beautifully showcases streaming media, images, data, teaching materials, books – any type of content that researchers now produce as part of their scholarship. Detailed analytics tools let authors and librarians measure global readership and track impact for a variety of campus stakeholders: authors can see the universities, agencies, and businesses that are reading their work, and can easily export reports to use in tenure and promotion dossiers. Janelle Wertzbeger, Assistant Dean and Director of Scholarly Communications at Gettysburg College’s Musselman Library, says, “The new author dashboard maps and enhanced readership are SO GOOD. Every professor up for promotion & tenure should use them!” And of course, SelectedWorks is fully backed by the continual efforts of the bepress development team to provide maximum discoverability to search engines, increasing impact for faculty and institutions alike. Reverend Edward R. Udovic, Vice President for Teaching and Learning Resources at DePaul University, says, “In the last several months downloads of my scholarship from my [SelectedWorks] site have far surpassed the total distribution of all my work in the previous twenty five years.”
Abstract:
We at bepress are excited to announce the beta launch of the Expert Gallery, a new product for institutions eager to highlight the rich expertise of their faculty. The Expert Gallery facilitates the valuable work of connecting an institution’s researchers with opportunities that might otherwise be missed. Groups such as Marketing and Communications and the Office of Research can use the product to better land funding opportunities, speaking engagements, and professional collaborations for top faculty members. The Expert Gallery is designed to let stakeholders within and outside the institution find researchers by interest, skill set, and research emphasis: simple searching and browsing, along with the flexibility to create and display custom galleries, helps facilitate targeted discovery of experts on campus. A built-in, rich toolset lets institutions organize, manage, and connect their researchers to the right opportunities and interested parties outside the institution. While most expert galleries contain just biographical information and a bibliography, integration of the bepress Expert Gallery with SelectedWorks profiles lets researchers prove their expertise with a full picture of their scholarly research, including published and unpublished works, datasets, teaching materials, and media appearances. Launching the Expert Gallery as a new product reflects an important expansion of bepress’s mission. For years we’ve helped libraries reclaim their central role by providing services across campus. We’ve especially focused on supporting the library in its important efforts to promote the institution through the scholarship it produces. With the Expert Gallery, the library can meet its campus’s needs to go beyond demonstrating the value of its scholarship. Now the library can offer a way to promote the institution through the rich skills of the people who make it unique. We plan to continue on this path of helping institutions maximize the impact of their people as well as their people’s scholarship. In early 2017 we will launch a suite of services that includes SelectedWorks, the Expert Gallery, and a set of faculty reporting and analytics tools.
Abstract:
Rigid adherence to pre-specified thresholds and static graphical representations can lead to incorrect decisions about merging clusters. As an alternative to existing automated or semi-automated methods, we developed a visual analytics approach for performing hierarchical clustering analysis of short time-series gene expression data. Dynamic sliders control parameters such as the similarity threshold at which clusters are merged and the level of relative intra-cluster distinctiveness, which can be used to identify "weak edges" within clusters. An expert user can drill down to further explore the dendrogram and detect nested clusters and outliers. This is done by using the sliders and by pointing and clicking on the representation to cut the branches of the tree at multiple heights. A prototype of this tool has been developed in collaboration with a small group of biologists for analysing their own datasets. Initial feedback on the tool has been positive.
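The tool itself is interactive, but the operation behind the similarity-threshold slider (re-cutting a precomputed dendrogram at a user-chosen height) can be sketched in a few lines with SciPy. The toy data, linkage choices, and threshold values below are placeholders, not the tool's defaults.

```python
# Core operation behind the similarity-threshold slider: cluster short
# time-series profiles once, then re-cut the dendrogram at each threshold.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
expression = rng.normal(size=(20, 8))  # 20 genes x 8 time points (assumed shape)

# Build the tree once: average linkage on correlation distance.
tree = linkage(expression, method="average", metric="correlation")

# Moving the slider amounts to re-cutting the same tree at a new height,
# so no re-clustering is needed as the user explores.
for threshold in (0.4, 0.6, 0.8):
    labels = fcluster(tree, t=threshold, criterion="distance")
    print(f"threshold {threshold}: {labels.max()} clusters")
```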
Abstract:
People go through their life making all kinds of decisions, and some of these decisions affect their demand for transportation, for example, their choices of where to live and where to work, how and when to travel and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply to make predictions because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation for simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
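To make the dynamic-programming idea concrete, here is a toy Python sketch of a recursive-logit-style value function of the kind such route choice models rely on: the destination value is fixed at zero, node values satisfy a logsum fixed point, and link choice probabilities follow directly. The network, utilities, and tolerance are invented for illustration, not taken from the thesis.

```python
# Toy recursive-logit value function: V(k) = log sum_a exp(v(k,a) + V(next)),
# with V fixed at 0 at the destination. Network and utilities are invented.
import math

# node -> list of (successor, deterministic link utility); "D" = destination.
network = {
    "A": [("B", -1.0), ("C", -1.5)],
    "B": [("D", -1.0)],
    "C": [("D", -0.5)],
    "D": [],
}

def value_functions(dest, iters=200, tol=1e-9):
    V = {k: 0.0 if k == dest else -10.0 for k in network}
    for _ in range(iters):
        delta = 0.0
        for k, arcs in network.items():
            if k == dest or not arcs:
                continue
            new = math.log(sum(math.exp(u + V[nxt]) for nxt, u in arcs))
            delta = max(delta, abs(new - V[k]))
            V[k] = new
        if delta < tol:  # logsum fixed point reached
            break
    return V

V = value_functions("D")
# Link choice probability at node k: P(a | k) = exp(v(k,a) + V(next) - V(k)).
for nxt, u in network["A"]:
    print(f"P(A->{nxt}) = {math.exp(u + V[nxt] - V['A']):.3f}")
```

On this toy network the two routes from A to D have equal total utility, so both links get probability 0.5, which is a quick sanity check on the recursion.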
Abstract:
Master's dissertation—Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2015.
Abstract:
The work presented herein focused on the automation of coordination-driven self-assembly, exploring methods that allow syntheses to be followed more closely while forming new ligands, as part of the fundamental study of the digitization of chemical synthesis and discovery. Whilst the control and understanding of the principles of pre-organization and self-sorting under non-equilibrium conditions remain a key goal, a clear gap has been identified in the absence of approaches that permit fast screening and real-time observation of the reaction process under different conditions. A firm emphasis was thus placed on the realization of an autonomous chemical robot, which can not only monitor and manipulate coordination chemistry in real time, but can also explore a large chemical parameter space defined by the ligand building blocks and the metal to coordinate. The self-assembly of imine ligands with copper and nickel cations was studied in a multi-step approach using a self-built flow system capable of automatically controlling the liquid handling and collecting data in real time using a benchtop MS and NMR spectrometer. This study led to the identification of a transient Cu(I) species in situ, which allows for the formation of dimeric and trimeric carbonato-bridged Cu(II) assemblies. Furthermore, new Ni(II) complexes and, more remarkably, a new binuclear Cu(I) complex, which usually requires long and laborious inert conditions, could be isolated. The study was then expanded to the autonomous optimization of the ligand synthesis by enabling feedback control of the chemical system via benchtop NMR. The synthesis of new polydentate ligands emerged as a result of the study, aiming to enhance the complexity of the chemical system and so accelerate the discovery of new complexes. This type of ligand consists of 1-pyridinyl-4-imino-1,2,3-triazole units, which can coordinate with different metal salts. The studies testing the CuAAC synthesis via microwave led to the discovery of four new Cu complexes, one of them a coordination polymer obtained from a solvent-dependent crystallization technique. With the goal of easier integration into an automated system, copper tubing was exploited as the chemical reactor for the synthesis of this ligand, as it efficiently enhances the rate of triazole formation and consequently promotes the formation of the full ligand in high yields within two hours. Lastly, the digitization of coordination-driven self-assembly was realized for the first time using an in-house autonomous chemical robot, herein named the ‘Finder’. The chemical parameter space to explore was defined by the selection of six variables: the ligand precursors necessary to form complex ligands (aldehydes, alkyne-amines and azides), the metal salt solutions, and other reaction parameters – duration, temperature and reagent volumes. The platform was assembled using round-bottom flasks, flow syringe pumps, copper tubing as an active reactor, and in-line analytics – a pH meter probe, a UV-vis flow cell and a benchtop MS. Control over the system was then achieved with an algorithm capable of autonomously focusing the experiments on the most reactive region of the chemical parameter space, avoiding areas of low interest.
This study led to interesting observations, such as metal-exchange phenomena, and also to the autonomous discovery of self-assembled structures in solution and in the solid state – such as 1-pyridinyl-4-imino-1,2,3-triazole-based Fe complexes and two helicates based on the same ligand coordination motif.
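The 'Finder' algorithm is not spelled out in the abstract; as a toy illustration of the closed-loop idea (score each run from the in-line analytics and bias the next batch toward the most reactive region while keeping some random probing), here is a hedged Python sketch in which the parameter space, the synthetic reactivity landscape, and the batch sizes are all hypothetical.

```python
# Toy closed-loop exploration of a six-variable parameter space: keep the
# best runs so far and sample the next batch near them, with one random
# probe per round. The "reactivity" landscape is entirely synthetic.
import random

TARGET = (0.7, 0.2, 0.5, 0.9, 0.3, 0.6)  # hidden optimum of the toy landscape

def reactivity(p):
    # Stand-in for a robotic run scored by in-line analytics (pH, UV-vis, MS).
    return -sum((x - t) ** 2 for x, t in zip(p, TARGET))

def mutate(p, step=0.1):
    # Perturb a promising point to sample its neighbourhood.
    return tuple(min(1.0, max(0.0, x + random.uniform(-step, step))) for x in p)

def explore(rounds=20, batch=8, keep=3, seed=1):
    random.seed(seed)
    rand_point = lambda: tuple(random.random() for _ in range(6))
    tried = []
    batch_params = [rand_point() for _ in range(batch)]  # round 0: broad coverage
    for _ in range(rounds):
        tried += [(p, reactivity(p)) for p in batch_params]
        tried.sort(key=lambda t: t[1], reverse=True)
        best = [p for p, _ in tried[:keep]]
        # Focus the next batch on the most reactive region found so far...
        batch_params = [mutate(random.choice(best)) for _ in range(batch - 1)]
        batch_params.append(rand_point())  # ...but keep one exploratory probe.
    return tried[0]

params, score = explore()
print(f"best score {score:.4f} at {tuple(round(x, 2) for x in params)}")
```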