19 results for Existential analytics


Relevance: 20.00%

Abstract:

The current state of the art and direction of research in computer vision aimed at automating the analysis of CCTV images is presented. This includes low-level identification of objects within the field of view of cameras, following those objects over time and between cameras, and the interpretation of those objects' appearance and movements with respect to models of behaviour (and hence the inference of their intentions). The potential ethical problems (and some potential opportunities) such developments may pose if and when deployed in the real world are presented, and suggestions are made as to the new regulations that will be needed if such systems are not to further enhance the power of the surveillers over the surveilled.
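
As a rough illustration of the detect-track-interpret pipeline outlined above, the following Python sketch strings the three stages together; the detection records, the greedy centroid tracker and the "loitering" behaviour model are hypothetical stand-ins, not the systems surveyed in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    frame: int
    bbox: tuple          # (x, y, w, h) in image coordinates
    label: str           # e.g. "person", "vehicle"

@dataclass
class Track:
    track_id: int
    detections: list = field(default_factory=list)

def associate(tracks, detections):
    """Greedy nearest-centroid association of detections to open tracks."""
    def centroid(b):
        x, y, w, h = b
        return (x + w / 2, y + h / 2)
    for det in detections:
        cx, cy = centroid(det.bbox)
        best, best_d = None, 50.0   # hypothetical gating threshold (pixels)
        for t in tracks:
            if not t.detections:
                continue
            px, py = centroid(t.detections[-1].bbox)
            d = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
            if d < best_d:
                best, best_d = t, d
        if best is None:
            # No open track is close enough: start a new one.
            best = Track(track_id=len(tracks))
            tracks.append(best)
        best.detections.append(det)
    return tracks

def loitering(track, min_frames=100):
    """Toy behaviour model: flag tracks that persist for a long time."""
    return len(track.detections) >= min_frames

tracks = associate([], [Detection(0, (10, 10, 4, 8), "person")])
print(tracks[0].track_id, len(tracks[0].detections))
```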

Relevance: 20.00%

Abstract:

The concept of being 'patient-centric' is a challenge to many existing healthcare service provision practices. This paper focuses on the issue of referrals, where multiple stakeholders, i.e. general practitioners (GPs) and patients, are encouraged to make a consensual decision based on patient needs. We present an ontology-enabled healthcare service provision model, which supports both patients and GPs in jointly making the referral decision. In this model, we define three types of profile, which represent the requirements of the different stakeholders. The model also comprises a set of healthcare service discovery processes: articulating a service need, matching the need with the healthcare service offerings, and deciding on a best-fit service for acceptance. As a result, the healthcare service provision can carry out coherent analysis using personalised information and iterative processes that deal with changes in requirements over time.
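
A minimal sketch of the profile-matching idea described above, assuming set-valued requirement terms; the profile fields, scoring rule and example services are illustrative inventions, not the paper's ontology.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    owner: str                   # "patient", "gp" or "service"
    requirements: frozenset      # normalised requirement terms

def match_score(need: Profile, offering: Profile) -> float:
    """Fraction of the articulated need covered by a service offering."""
    if not need.requirements:
        return 0.0
    return len(need.requirements & offering.requirements) / len(need.requirements)

def best_fit(patient: Profile, gp: Profile, services: list) -> Profile:
    """Joint decision: combine patient and GP requirements, pick best cover."""
    joint = Profile("joint", patient.requirements | gp.requirements)
    return max(services, key=lambda s: match_score(joint, s))

patient = Profile("patient", frozenset({"cardiology", "nearby", "short-wait"}))
gp = Profile("gp", frozenset({"cardiology", "accredited"}))
services = [
    Profile("service", frozenset({"cardiology", "accredited", "nearby"})),
    Profile("service", frozenset({"dermatology", "nearby"})),
]
print(best_fit(patient, gp, services))
```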

Relevance: 20.00%

Abstract:

This paper discusses how global financial institutions are using big data analytics within their compliance operations. Much previous research has focused on the strategic implications of big data, but little has considered how such tools are entwined with regulatory breaches and investigations in financial services. Our work covers two in-depth qualitative case studies, each addressing a distinct type of analytics. The first case focuses on analytics that manage everyday compliance breaches and so are expected by managers. The second focuses on analytics that facilitate investigation and litigation where serious, unexpected breaches may have occurred. In doing so, the study examines micro-level data practices to understand how these tools are influencing operational risks and practices. The paper draws on two bodies of literature, the social studies of information systems and of finance, to guide our analysis and practitioner recommendations. The cases illustrate how technologies are implicated in multijurisdictional challenges and regulatory conflicts at each end of the operational risk spectrum. We find that compliance analytics are both shaping and reporting regulatory matters, yet firms often have difficulty recruiting individuals with the relevant but diverse skill sets. The cases also underscore the increasing need for financial organizations to adopt robust information governance policies and processes to ease future remediation efforts.

Relevance: 10.00%

Abstract:

If education is to be about 'human flourishing' (De Ruyter, 2004) as well as preparation for adulthood and work, then religious and citizenship education would seem to have a key contribution to make towards this goal, both offering opportunities for the exploration and development of a robust sense of identity. However, despite the opposition of most religious educators, religious education has been treated by successive UK governments simply as a form of inculcation into a homogeneous notion of citizenship based on nominal church attendance. Moreover, the teaching of the relatively new subject of citizenship education, whilst recognising that senses of identity and allegiance are complex, has not regularly included faith perspectives. I argue that the concept of 'spiritual development', which centres on an existential sense of identity, offers a justification for combining lessons in both religious and citizenship education. I conclude on a cautionary note, arguing that pupils need to be given a critical awareness of the ways in which such identities can be provided for them by default, particularly since consumer culture increasingly makes use of 'spiritual' language and imagery.

Relevance: 10.00%

Abstract:

Data on the potential health benefits of dietary flavanols and procyanidins, especially in the context of cardiovascular health, are considerable and continue to accumulate. Significant progress has been made in flavanol analytics and the creation of phytonutrient-content food databases, and novel data have emerged from epidemiological investigations as well as dietary intervention studies. However, a comprehensive understanding of the pharmacological properties of flavanols and procyanidins, including their precise mechanisms of action in vivo, and a conclusive, consensus-based accreditation of a causal relationship between intake and health benefits in the context of primary and secondary cardiovascular disease prevention are still outstanding. Thus, the objective of this review is to identify and discuss key questions and gaps that need to be addressed in order to demonstrate conclusively whether or not dietary flavanols and procyanidins have a role in preventing, delaying the onset of, or treating cardiovascular diseases, and thus in improving human life expectancy and quality of life.

Relevance: 10.00%

Abstract:

Aircraft Maintenance, Repair and Overhaul (MRO) feedback commonly includes an engineer's complex text-based inspection report. Capturing and normalizing the content of these textual descriptions is vital to cost and quality benchmarking, and provides information to facilitate continuous improvement of MRO processes and analytics. As data analysis and mining tools require highly normalized data, raw textual data is inadequate. This paper offers a text-mining solution to efficiently analyse bulk textual feedback data. Despite replacement of the same parts and/or sub-parts, the actual service cost for the same repair is often distinctly different from that of similar previous jobs. Regular expression algorithms were combined with an aircraft MRO glossary dictionary to help provide additional information concerning the reasons for cost variation. Professional terms and conventions were included within the dictionary to avoid ambiguity and improve the quality of the results. Testing shows that most descriptive inspection reports can be appropriately interpreted, allowing extraction of highly normalized data. This additional normalized data strongly supports data analysis and data mining, whilst also increasing the accuracy of future quotation costing. The solution has been used effectively by a large aircraft MRO agency, with positive results.
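
A minimal sketch of the dictionary-backed regular-expression approach described above; the glossary entries, the part-number pattern and the sample report are invented for illustration rather than taken from the paper's dictionary.

```python
import re

# Hypothetical glossary: free-text variants mapped to controlled terms.
GLOSSARY = {
    r"\bcorr(?:osion|oded)\b": "CORROSION",
    r"\bcrack(?:s|ed|ing)?\b": "CRACK",
    r"\br&r\b|\bremove[d]? (?:and|&) replace[d]?\b": "REMOVE_REPLACE",
}

# Hypothetical part-number format, e.g. "ABC-1234".
PART_NO = re.compile(r"\b[A-Z]{2,4}-\d{3,6}\b")

def normalise(report: str) -> dict:
    """Extract normalised findings and part numbers from a free-text report."""
    text = report.lower()
    findings = sorted(
        {term for pattern, term in GLOSSARY.items() if re.search(pattern, text)}
    )
    parts = PART_NO.findall(report)
    return {"findings": findings, "parts": parts}

print(normalise("Corroded bracket at frame 12, R&R part ABC-1234."))
# {'findings': ['CORROSION', 'REMOVE_REPLACE'], 'parts': ['ABC-1234']}
```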

Relevance: 10.00%

Abstract:

Advances in hardware and software technology enable us to collect, store and distribute large quantities of data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence, but is also used in many other application areas such as research, marketing and financial analytics. For example, medical scientists can use patterns extracted from historic patient data to determine whether a new patient is likely to respond positively to a particular treatment; marketing analysts can use patterns extracted from customer data for future advertisement campaigns; finance experts have an interest in patterns that forecast the development of certain stock market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes imposes a number of computational challenges in terms of processing time, memory, bandwidth and power consumption. These challenges have led to the development of parallel and distributed data analysis approaches and the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.
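
The partition-mine-merge pattern that underlies most parallel data mining can be sketched in a few lines; here frequent-item counting stands in for a real mining algorithm, and the round-robin chunking scheme is a simplifying assumption.

```python
from collections import Counter
from multiprocessing import Pool

def mine_partition(records):
    """Local step: count item occurrences in one data partition."""
    counts = Counter()
    for record in records:
        counts.update(set(record))   # each item counted once per record
    return counts

def parallel_frequent_items(dataset, n_workers=4, min_support=2):
    # Split the dataset into roughly equal partitions, one per worker.
    chunks = [dataset[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        partials = pool.map(mine_partition, chunks)
    total = sum(partials, Counter())          # merge step
    return {item: c for item, c in total.items() if c >= min_support}

if __name__ == "__main__":
    transactions = [["a", "b"], ["a", "c"], ["b", "c"], ["a", "b", "c"]]
    print(parallel_frequent_items(transactions, n_workers=2))
```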

Relevance: 10.00%

Abstract:

The concepts of on-line transactional processing (OLTP) and on-line analytical processing (OLAP) are often confused with the technologies or models used to design transactional and analytics-based information systems. This has in some ways contributed to the gaps between the semantics of information captured during transactional processing and the information stored for analytical use. In this paper, we propose a unified semantics design model as a solution to help bridge the semantic gaps between data captured by OLTP systems and the information provided by OLAP systems. The central focus of this design approach is on enabling business intelligence using not just data, but data with context.
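
One way to picture "data with context" is a single record type that serves both the transactional write path and the analytical read path; the following sketch is a loose illustration with invented fields, not the paper's unified semantics design model itself.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class SaleEvent:
    # Transactional facts (what OLTP captures)
    order_id: str
    sku: str
    quantity: int
    unit_price: float
    # Contextual semantics (what OLAP usually has to reconstruct)
    channel: str              # e.g. "web", "store"
    promotion: Optional[str]  # campaign under which the sale occurred
    recorded_at: datetime

event = SaleEvent("o-1001", "sku-42", 3, 9.99, "web", "spring-sale",
                  datetime.now(timezone.utc))

# The analytical layer aggregates the same records, context included,
# so no semantics are lost between the two systems.
def revenue_by_channel(events):
    totals = {}
    for e in events:
        totals[e.channel] = totals.get(e.channel, 0.0) + e.quantity * e.unit_price
    return totals

print(revenue_by_channel([event]))
```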

Relevance: 10.00%

Abstract:

In the past decade, the analysis of data has faced the challenge of dealing with very large and complex datasets and the real-time generation of data. Technologies to store and access these complex and large datasets are in place. However, robust and scalable analysis technologies are needed to extract meaningful information from them. The research field of Information Visualization and Visual Data Analytics addresses this need. Information visualization and data mining are often used to complement each other. Their common goal is the extraction of meaningful information from complex and possibly large data. However, whereas data mining relies on the processing power of silicon hardware, visualization techniques also aim to harness the powerful image-processing capabilities of the human brain. This article highlights research on data visualization and visual analytics techniques. Furthermore, we highlight existing visual analytics techniques, systems, and applications, including a perspective on the field from the chemical process industry.
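
The complementary use of mining and visualization can be illustrated with a minimal pipeline: a clustering step extracts structure, and a scatter plot hands the result to the analyst's eye. This sketch assumes numpy, scikit-learn and matplotlib are available and is not tied to any system discussed in the article.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

# Three synthetic point clouds stand in for a complex dataset.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(loc, 0.5, size=(100, 2)) for loc in (0, 3, 6)])

# Mining step: extract structure with k-means clustering.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(data)

# Visualization step: hand the structure to the human visual system.
plt.scatter(data[:, 0], data[:, 1], c=labels, s=10)
plt.title("Clusters found by k-means, inspected visually")
plt.xlabel("feature 1")
plt.ylabel("feature 2")
plt.show()
```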

Relevance: 10.00%

Abstract:

Purpose – Commercial real estate is a highly specific asset: heterogeneous, indivisible and with less information transparency than most other commonly held investment assets. These attributes encourage the use of intermediaries during asset acquisition and disposal. However, there are few attempts to explain the use of different brokerage models (with differing costs) in different markets. This study aims to address this gap.

Design/methodology/approach – The study analyses 9,338 real estate transactions in London and New York City from 2001 to 2011. Data are provided by Real Capital Analytics and cover over $450 billion of investments in this period. Brokerage trends in the two cities are compared, and probit regressions are used to test whether the decision to transact with broker representation varies with investor or asset characteristics.

Findings – Results indicate greater use of brokerage in London, especially by purchasers. This persists when data are disaggregated by sector, time or investor type, pointing to the role of local market culture and institutions in shaping brokerage models and transaction costs. Within each city, the nature of the investors involved seems to be a more significant influence on broker use than the characteristics of the assets being traded.

Originality/value – Brokerage costs are the single largest non-tax charge to an investor when trading commercial real estate, yet there is little research in this area. This study examines the role of brokers and provides empirical evidence on the factors that influence the use and mode of brokerage in two major investment destinations.
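
For readers unfamiliar with the method, a probit of broker use on transaction characteristics might be specified roughly as below; the covariates and the synthetic data are illustrative placeholders, not the study's actual specification or Real Capital Analytics data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
log_price = rng.normal(17, 1, n)          # log transaction price
cross_border = rng.integers(0, 2, n)      # foreign investor indicator
london = rng.integers(0, 2, n)            # market indicator

# Synthetic latent propensity to use a broker, for illustration only.
latent = -8 + 0.4 * log_price + 0.6 * cross_border + 0.5 * london
used_broker = (latent + rng.normal(0, 1, n) > 0).astype(int)

# Probit: P(broker use) as a function of investor and asset traits.
X = sm.add_constant(np.column_stack([log_price, cross_border, london]))
model = sm.Probit(used_broker, X).fit(disp=False)
print(model.summary(xname=["const", "log_price", "cross_border", "london"]))
```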

Relevance: 10.00%

Abstract:

Automatic generation of classification rules has become an increasingly popular technique in commercial applications such as Big Data analytics, rule-based expert systems and decision-making systems. However, a principal problem that arises with most methods for the generation of classification rules is the overfitting of training data. When Big Data is dealt with, this may result in the generation of a large number of complex rules, which may not only increase computational cost but also lower the accuracy in predicting further unseen instances. This has led to the necessity of developing pruning methods for the simplification of rules. In addition, once generated, classification rules are used to make predictions. Where efficiency is concerned, it is desirable to find the first rule that fires as quickly as possible when searching through a rule set; thus a suitable structure is required to represent the rule set effectively. In this chapter, the authors introduce a unified framework for the construction of rule-based classification systems consisting of three operations on Big Data: rule generation, rule simplification and rule representation. The authors also review some existing methods and techniques used for each of the three operations and highlight their limitations. They introduce some novel methods and techniques they have developed recently, and discuss these in comparison with existing ones with respect to efficient processing of Big Data.
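
A minimal sketch of rule representation with first-fire prediction, the efficiency concern raised above: rules are held in order and a linear scan returns the class of the first rule that matches. The rules and default class are invented for illustration.

```python
# Ordered rule set: (conditions, predicted class).
# Conditions are attribute -> required value tests.
rules = [
    ({"outlook": "sunny", "humidity": "high"}, "no"),
    ({"outlook": "overcast"}, "yes"),
    ({"windy": False}, "yes"),
]
DEFAULT_CLASS = "no"

def fires(conditions, instance):
    """A rule fires when every condition matches the instance."""
    return all(instance.get(attr) == value for attr, value in conditions.items())

def predict(instance):
    """Linear scan in rule order; return the first firing rule's class."""
    for conditions, label in rules:
        if fires(conditions, instance):
            return label
    return DEFAULT_CLASS

print(predict({"outlook": "sunny", "humidity": "high", "windy": True}))  # "no"
```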

Relevance: 10.00%

Abstract:

As we enter an era of 'big data', asset information is becoming a deliverable of complex projects. Prior research suggests digital technologies enable rapid, flexible forms of project organizing. This research analyses practices of managing change in Airbus, CERN and Crossrail through desk-based review, interviews, visits and a cross-case workshop. These organizations deliver complex projects, rely on digital technologies to manage large datasets, and use configuration management, a systems engineering approach with mid-20th-century origins, to establish and maintain integrity. In them, configuration management has become more, rather than less, important. Asset information is structured, with change managed through digital systems using relatively hierarchical, asynchronous and sequential processes. The paper contributes by uncovering limits to flexibility in complex projects where integrity is important. Challenges of managing change are discussed, considering the evolving nature of configuration management, the potential use of analytics on complex projects, and implications for research and practice.
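
The hierarchical, sequential change control described above can be sketched as a small state machine; the states, transition table and configuration item below are illustrative assumptions, not Airbus, CERN or Crossrail practice.

```python
from dataclasses import dataclass, field
from enum import Enum

class State(Enum):
    RAISED = 1
    ASSESSED = 2
    APPROVED = 3
    IMPLEMENTED = 4
    REJECTED = 5

@dataclass
class ChangeRequest:
    item: str                # configuration item affected
    description: str
    state: State = State.RAISED
    history: list = field(default_factory=list)

    def advance(self, new_state: State, actor: str):
        # Enforce the sequential workflow: no skipping states.
        allowed = {State.RAISED: {State.ASSESSED},
                   State.ASSESSED: {State.APPROVED, State.REJECTED},
                   State.APPROVED: {State.IMPLEMENTED}}
        if new_state not in allowed.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.history.append((self.state, new_state, actor))
        self.state = new_state

cr = ChangeRequest("tunnel-segment-12", "revise ventilation duct routing")
cr.advance(State.ASSESSED, "review-board")
cr.advance(State.APPROVED, "change-board")
print(cr.state, len(cr.history))
```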

Relevance: 10.00%

Abstract:

Human brain imaging techniques, such as Magnetic Resonance Imaging (MRI) and Diffusion Tensor Imaging (DTI), have been established as scientific and diagnostic tools, and their adoption is growing in popularity. Statistical methods, machine learning and data mining algorithms have successfully been adopted to extract predictive and descriptive models from neuroimage data. However, the knowledge discovery process typically also requires the adoption of pre-processing, post-processing and visualisation techniques in complex data workflows. Currently, a main problem for the integrated pre-processing and mining of MRI data is the lack of comprehensive platforms able to avoid the manual invocation of pre-processing and mining tools, which makes the process error-prone and inefficient. In this work we present K-Surfer, a novel plug-in for the Konstanz Information Miner (KNIME) workbench, which automates the pre-processing of brain images and leverages the mining capabilities of KNIME in an integrated way. K-Surfer supports the importing, filtering, merging and pre-processing of neuroimage data from FreeSurfer, a tool for human brain MRI feature extraction and interpretation. K-Surfer automates the steps for importing FreeSurfer data, reducing time costs, eliminating human errors and enabling the design of complex analytics workflows for neuroimage data by leveraging the rich functionalities available in the KNIME workbench.
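
The kind of FreeSurfer import step that K-Surfer automates can be approximated as below. The parser assumes the "# ColHeaders" convention commonly found in FreeSurfer stats files (e.g. aseg.stats); this is an assumption about the file format, so verify it against your FreeSurfer version.

```python
def read_freesurfer_stats(path):
    """Read a whitespace-delimited FreeSurfer stats table into dict records."""
    headers, rows = None, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith("# ColHeaders"):
                headers = line.split()[2:]        # column names after the marker
            elif line and not line.startswith("#") and headers:
                rows.append(dict(zip(headers, line.split())))
    return rows

# Usage (path is hypothetical):
# for row in read_freesurfer_stats("subj01/stats/aseg.stats"):
#     print(row["StructName"], row["Volume_mm3"])
```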

Relevance: 10.00%

Abstract:

This study examines the impact of foreign real estate investment on US office market capitalization rates. The geographic unit of analysis is the metropolitan statistical area (MSA) and the time period is 2001-2013. Drawing upon a database of commercial real estate transactions provided by Real Capital Analytics, we model the determinants of market capitalization rates with a particular focus on the significance of the proportion of market transactions involving foreign investors. We employ several econometric techniques to explore the data, address potential estimation biases, and test the robustness of the results. The results suggest statistically significant effects of foreign investment across 38 US metro areas. It is estimated that, all else equal, a 100 basis point increase in the foreign share of total investment in a US metropolitan office market causes about an 8 basis point decrease in the market cap rate.
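
The headline estimate translates to a slope of roughly -8 cap-rate basis points per percentage point of foreign share. A sketch of that kind of specification, with metro and year fixed effects and a synthetic panel built only to illustrate the magnitudes (not the study's data or exact model), might look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 38 * 13                                   # 38 metros x 2001-2013
panel = pd.DataFrame({
    "metro": np.repeat([f"msa{i}" for i in range(38)], 13),
    "year": np.tile(range(2001, 2014), 38),
    "foreign_share": rng.uniform(0, 0.4, n),  # share of volume, 0-1
})
# Cap rate in percent; built so +1pp foreign share ~ -0.08pp cap rate.
panel["cap_rate"] = 7.0 - 8.0 * panel["foreign_share"] + rng.normal(0, 0.3, n)

# OLS with metro and year fixed effects via categorical dummies.
fit = smf.ols("cap_rate ~ foreign_share + C(metro) + C(year)", data=panel).fit()
print(fit.params["foreign_share"])   # expected near -8.0 (pp per unit share)
```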