Abstract:
Entertainment is a key cultural category. Yet the definition of entertainment can differ depending upon whom one asks. This article maps out understandings of entertainment in three key areas. Within industrial discourses, entertainment is defined by a commercial business model. Within evaluative discourses used by consumers and critics, it is understood through an aesthetic system that privileges emotional engagement, story, speed and vulgarity. Within academia, entertainment has not been a key organizing concept within the humanities, despite the fact that it is one of the central categories used by producers and consumers of culture. It has been important within psychology, where entertainment is understood in a solipsistic sense as being anything that an individual finds entertaining. Synthesizing these approaches, the authors propose a cross-sectoral definition of entertainment as ‘audience-centred commercial culture’.
Abstract:
With the introduction of the Personally Controlled Electronic Health Record (PCEHR), the Australian public is being asked to accept greater responsibility for their healthcare. Although the system is well designed, constructed and intentioned, policy and privacy concerns have resulted in an eHealth model that may constrain future health information sharing. An opportunity therefore exists, and must be explored further, to transform the beleaguered Australian PCEHR into a sustainable, on-demand technology consumption model for patient safety. Moreover, the current clerical focus of healthcare practitioners must be renegotiated to establish a shared knowledge-creation landscape of action for safer patient interventions. Realising this potential, however, requires a platform that facilitates efficient and trusted unification of all health information available in real time across the continuum of care. In this conceptual paper, the authors' goal is to deliver insights into the antecedents of usage that influence superior patient outcomes within an eHealth-as-a-Service framework. To achieve this, the paper distils key concepts and identifies common themes from a preliminary literature review of eHealth and cloud computing, specifically cloud service orchestration, to establish a conceptual framework and a research agenda. Initial findings support the authors' view that an eHealth-as-a-Service (eHaaS) construct would constitute a disruptive paradigm shift in the aggregation and transformation of health information for use as real-world knowledge in patient care scenarios. Moreover, the strategic value of extending the community Health Record Bank (HRB) model lies in the ability to draw automatically on a multitude of relevant data repositories and sources to create a single source of practice-based evidence and to engage market forces to create financial sustainability.
Abstract:
This paper translates the concept of sustainable production into three dimensions of sustainability, economic, environmental and ecological, and analyzes optimal production scales by solving the corresponding optimization problems. Economic optimization seeks input-output combinations that maximize profit. Environmental optimization searches for input-output combinations that minimize the polluting effects of the materials balance on the surrounding environment. Ecological optimization looks for input-output combinations that minimize the cumulative destruction of the entire ecosystem. Using an aggregate space, the framework illustrates that these optimal scales are often not identical because markets fail to account for all negative externalities. Profit-maximizing firms normally operate at scales larger than the optima implied by environmental and ecological sustainability; hence policy interventions are favoured. The framework offers a useful tool for efficiency studies and policy analysis. The paper provides an empirical investigation using a data set of rice farms in South Korea.
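For intuition, here is a minimal numerical sketch of why the privately and socially optimal scales diverge when an externality goes unpriced. The square-root technology, the prices and the damage coefficient are illustrative assumptions, not the paper's model.

```python
# Toy illustration: with an unpriced emission proportional to input use,
# the profit-maximizing scale exceeds the socially optimal scale.
# All functional forms and parameters are assumptions.
import numpy as np
from scipy.optimize import minimize_scalar

p, w, d = 10.0, 1.0, 0.8   # output price, input price, marginal damage (assumed)
f = lambda x: np.sqrt(x)   # production function (assumed)

profit = lambda x: p * f(x) - w * x          # private objective
social = lambda x: p * f(x) - w * x - d * x  # internalises the externality

x_private = minimize_scalar(lambda x: -profit(x), bounds=(1e-6, 100), method="bounded").x
x_social = minimize_scalar(lambda x: -social(x), bounds=(1e-6, 100), method="bounded").x

print(f"profit-maximizing input scale: {x_private:.2f}")  # 25.00
print(f"socially optimal input scale:  {x_social:.2f}")   # ~7.72
```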
Abstract:
This article integrates material/energy flow analysis into a production frontier framework to quantify resource efficiency (RE). The emergy content of natural resources, rather than their mass, is used to construct aggregate inputs. Within the production frontier approach, aggregate inputs are optimised relative to given output quantities to derive RE measures. This framework improves on existing RE indicators in the literature: using exergy/emergy content to construct aggregate material or energy flows overcomes the criticism that mass content cannot capture the differing quality of different resource types, and the derived RE measures are both 'qualitative' and 'quantitative', whereas existing RE indicators are only qualitative. An empirical examination of the RE of 116 economies illustrates the practical applicability of the new framework. The results showed that economies, on average, could reduce their consumption of resources by more than 30% without any reduction in per capita gross domestic product (GDP), after adjusting for differences in the purchasing power of national currencies. RE varied substantially across economies, and was positively correlated with labour force participation, population density, urbanisation, and GDP growth over the past five years. The results also showed that economies in higher income groups achieved higher RE, and that economies more dependent on imports and primary industries tend to have lower RE performance.
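A minimal sketch of the frontier-based RE idea, written as an input-oriented, constant-returns DEA program: each economy's aggregate input is radially contracted as far as the best-practice frontier allows, holding output fixed. The data, the single emergy-aggregated input and the single GDP output are illustrative assumptions, not the article's dataset or full method.

```python
# Input-oriented DEA sketch: RE_k = min theta s.t. a convex combination of
# observed economies uses at most theta * x_k input and at least y_k output.
import numpy as np
from scipy.optimize import linprog

emergy = np.array([50.0, 80.0, 120.0, 60.0])  # aggregate emergy input (assumed)
gdp = np.array([40.0, 50.0, 90.0, 30.0])      # output (assumed)
n = len(gdp)

def resource_efficiency(k):
    # decision variables: [theta, lambda_1..lambda_n]; minimise theta
    c = np.r_[1.0, np.zeros(n)]
    A_ub = [np.r_[-emergy[k], emergy]]   # sum(l_i x_i) - theta * x_k <= 0
    b_ub = [0.0]
    A_ub.append(np.r_[0.0, -gdp])        # -sum(l_i y_i) <= -y_k
    b_ub.append(-gdp[k])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun  # theta* in (0, 1]; 1 - theta* is the feasible input saving

for k in range(n):
    print(f"economy {k}: RE = {resource_efficiency(k):.2f}")
```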
Abstract:
Engagement has emerged as an important concept in public relations, as stakeholders challenge the discourse of organizational primacy and organizations prioritize the need for authentic stakeholder involvement. As a multidimensional concept, engagement offers a foundation for building organizational relationships and provides a means to facilitate community–organization interaction. This special issue on engagement and public relations presents a body of work that both explicates and expands the theoretical foundations of engagement and contributes to scholarly understanding of its contexts, processes, and outcomes.
Abstract:
This paper demonstrates a revised procedure for quantifying surface-enhanced Raman scattering (SERS) enhancement factors with improved precision. The method rests on subtracting the resonance Raman scattering (RRS) contribution from the surface-enhanced resonance Raman scattering (SERRS) signal to isolate the surface-enhancement (SERS) effect alone. We employed 1,8,15,22-tetraaminophthalocyanato-cobalt(II) (4α-CoIITAPc), a resonance Raman- and electrochemically redox-active chromophore, as the probe molecule for the RRS and SERRS experiments. The number of 4α-CoIITAPc molecules contributing to the RRS and SERRS phenomena on plasmon-inactive glassy carbon (GC) and plasmon-active GC/Au surfaces, respectively, was estimated precisely by cyclic voltammetry. Furthermore, the SERS substrate enhancement factor (SSEF) quantified by our approach is compared with values from the traditionally employed methods. We also demonstrate that the present approach to SSEF quantification can be applied to any SERS substrate by choosing an appropriate laser line and probe molecule.
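For reference, a schematic of the quantities involved; the notation is assumed rather than taken from the paper.

```latex
% Commonly used definition of the SERS substrate enhancement factor,
% with N estimated electrochemically (cyclic voltammetry) as above:
\[
\mathrm{SSEF} \;=\; \frac{I_{\mathrm{SERS}} / N_{\mathrm{SERS}}}{I_{\mathrm{RS}} / N_{\mathrm{RS}}}
\]
% The correction described above isolates the surface enhancement by
% removing the resonance contribution, schematically:
\[
I_{\mathrm{SERS}} \;\approx\; I_{\mathrm{SERRS}} - I_{\mathrm{RRS}}.
\]
```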
Abstract:
In this paper we propose a new multivariate GARCH model with a time-varying conditional correlation structure. The time-varying conditional correlations change smoothly between two extreme states of constant correlations according to a predetermined or exogenous transition variable. An LM test is derived to test the constancy of correlations, and LM and Wald tests to test the hypothesis of partially constant correlations. Analytical expressions for the test statistics and the required derivatives are provided to make computations feasible. An empirical example based on daily return series of five frequently traded stocks in the S&P 500 stock index completes the paper.
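The model described is consistent with the following schematic form (notation assumed):

```latex
\[
H_t = D_t R_t D_t, \qquad
R_t = \bigl(1 - G(s_t)\bigr) R_{(1)} + G(s_t)\, R_{(2)},
\]
\[
G(s_t) = \bigl(1 + e^{-\gamma (s_t - c)}\bigr)^{-1}, \qquad \gamma > 0,
\]
% where D_t is the diagonal matrix of conditional standard deviations,
% R_(1) and R_(2) are the two extreme constant-correlation matrices, and
% s_t is the predetermined or exogenous transition variable. The
% constancy test then corresponds to H0: R_(1) = R_(2).
```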
Abstract:
Abnormal event detection has attracted considerable attention in the computer vision research community in recent years, driven by the increased focus on automated surveillance systems to improve security in public places. Because training data are scarce and the definition of an abnormality depends on context, abnormal event detection is generally formulated as a data-driven approach in which activities are modeled in an unsupervised fashion during the training phase. In this work, we use a Gaussian mixture model (GMM) to cluster the activities during the training phase, and propose a Gaussian mixture model based Markov random field (GMM-MRF) to estimate the likelihood scores of new videos in the testing phase. Furthermore, we propose two new features, optical acceleration and the histogram of optical flow gradients, to detect abnormal objects and speed violations in the scene. We show that our proposed method outperforms other state-of-the-art abnormal event detection algorithms on the publicly available UCSD dataset.
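A minimal sketch of the GMM stage of such a pipeline, using scikit-learn. The synthetic features, the number of components and the percentile threshold are assumptions; the MRF smoothing and the two proposed features are not reproduced here.

```python
# Fit a mixture to normal-activity feature vectors, then flag test
# samples whose log-likelihood falls below a threshold.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train_feats = rng.normal(0.0, 1.0, size=(500, 8))   # stand-in for motion features

gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0)
gmm.fit(train_feats)

test_feats = np.vstack([rng.normal(0.0, 1.0, (50, 8)),   # normal-like
                        rng.normal(6.0, 1.0, (5, 8))])   # anomalous-like
log_lik = gmm.score_samples(test_feats)

threshold = np.percentile(gmm.score_samples(train_feats), 1)  # assumed rule
print("abnormal samples:", np.where(log_lik < threshold)[0])
```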
Abstract:
In the past few years, there has been a steady increase in the attention, importance and focus given to green initiatives for data centers. While various energy-aware measures have been developed for data centers, the complementary requirement of improving the efficiency of application assignment has yet to be met; many energy-aware measures applied to data centers, for instance, trade off energy consumption against Quality of Service (QoS). To address this problem, this paper presents a novel concept of profiling to facilitate offline optimization of a deterministic assignment of applications to virtual machines. A profile-based model is then established for obtaining near-optimal allocations of applications to virtual machines with respect to three major objectives: energy cost, CPU utilization efficiency and application completion time. From this model, a scalable profile-based matching algorithm is developed. Its assignment efficiency is compared with that of the Hungarian algorithm, which yields the optimal solution but does not scale well.
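A small sketch of the comparison mentioned at the end: the optimal Hungarian assignment versus a simple greedy matcher that scales better. The random cost matrix stands in for the profile-based score combining energy cost, CPU-utilization efficiency and completion time; the weighting is an assumption, not the paper's model.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
cost = rng.random((200, 200))   # cost[i, j]: assigning application i to VM j

# Optimal (Hungarian) baseline, O(n^3)
rows, cols = linear_sum_assignment(cost)
print("optimal total cost:", cost[rows, cols].sum())

# Greedy: each application takes the cheapest still-free VM
free = np.ones(cost.shape[1], dtype=bool)
greedy_total = 0.0
for i in range(cost.shape[0]):
    j = np.flatnonzero(free)[np.argmin(cost[i, free])]
    greedy_total += cost[i, j]
    free[j] = False
print("greedy total cost: ", greedy_total)
```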
Abstract:
Driver training is one of the interventions aimed at reducing the number of crashes involving novice drivers. Our failure to understand what really matters for learners, in terms of risky driving, is one of the many drawbacks restraining us from building better training programs. There is currently a need to develop and evaluate Advanced Driving Assistance Systems that can comprehensively assess driving competencies. The aim of this paper is to present a novel Intelligent Driver Training System (IDTS) that analyses crash risk for a given driving situation, providing avenues for improvement and personalisation of driver training programs. The analysis takes into account numerous variables acquired synchronously from the Driver, the Vehicle and the Environment (DVE), and the system then segments out the manoeuvres within a drive. The paper further presents the use of fuzzy set theory to develop safety inference rules for each manoeuvre executed during the drive. Finally, it presents a framework, and an associated prototype, for viewing and assessing complex driving manoeuvres and for providing a comprehensive analysis of the drive as feedback to novice drivers.
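A hand-rolled sketch of one fuzzy safety rule of the kind such a system could use. The variables, membership shapes and the rule itself are illustrative assumptions, not the paper's rule base.

```python
# Rule: IF speed is high AND headway is short THEN risk is high.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def risk(speed_kmh, headway_s):
    speed_high = tri(speed_kmh, 60, 100, 140)     # assumed membership
    headway_short = tri(headway_s, 0.0, 0.5, 1.5) # assumed membership
    # Mamdani-style AND = min; rule strength taken as the risk degree
    return min(speed_high, headway_short)

print(risk(95, 0.7))   # rule strongly activated (0.8)
print(risk(50, 2.0))   # rule not activated (0.0)
```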
Abstract:
Background: Multi-attribute utility instruments (MAUIs) are preference-based measures that comprise a health state classification system (HSCS) and a scoring algorithm that assigns a utility value to each health state in the HSCS. When developing a MAUI from a health-related quality of life (HRQOL) questionnaire, a HSCS must first be derived. This typically involves selecting a subset of domains and items, because HRQOL questionnaires usually have too many items to be amenable to the valuation task required to develop the scoring algorithm for a MAUI. Currently, exploratory factor analysis (EFA) followed by Rasch analysis is recommended for deriving a MAUI from a HRQOL measure. Aim: To determine whether confirmatory factor analysis (CFA) is more appropriate and efficient than EFA for deriving a HSCS from the European Organisation for Research and Treatment of Cancer's core HRQOL questionnaire, the Quality of Life Questionnaire (QLQ-C30), given its well-established domain structure. Methods: QLQ-C30 (Version 3) data were collected from 356 patients receiving palliative radiotherapy for recurrent/metastatic cancer (various primary sites). The dimensional structure of the QLQ-C30 was tested with EFA and CFA, the latter informed by the established QLQ-C30 structure and the views of both patients and clinicians on which items are most relevant. Dimensions determined by EFA or CFA were then subjected to Rasch analysis. Results: CFA results generally supported the proposed QLQ-C30 structure (comparative fit index = 0.99, Tucker–Lewis index = 0.99, root mean square error of approximation = 0.04). EFA revealed fewer factors, and some items cross-loaded on multiple factors. Further assessment of dimensionality with Rasch analysis allowed better alignment of the EFA dimensions with those detected by CFA. Conclusion: CFA was more appropriate and efficient than EFA in producing clinically interpretable results for the HSCS of a proposed new cancer-specific MAUI. Our findings suggest that CFA should generally be recommended when deriving a preference-based measure from a HRQOL measure that has an established domain structure.
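For reference, the fit indices reported in the Results are conventionally defined as follows (M denotes the tested model, B the baseline/null model, N the sample size):

```latex
\[
\mathrm{RMSEA} = \sqrt{\max\!\left(\frac{\chi^2_M - df_M}{df_M\,(N-1)},\, 0\right)}
\]
\[
\mathrm{CFI} = 1 - \frac{\max(\chi^2_M - df_M,\, 0)}{\max(\chi^2_B - df_B,\, 0)},
\qquad
\mathrm{TLI} = \frac{\chi^2_B/df_B \;-\; \chi^2_M/df_M}{\chi^2_B/df_B \;-\; 1}
\]
```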
Abstract:
The heterogeneity of health data is a critical issue in managing health information for quality decision-making processes. In this paper we examine the efficient aggregation of lifestyle information through a data warehousing architecture lens. We present a proof of concept for a clinical data warehouse architecture that enables evidence-based decision making by integrating and organising disparate data silos in support of healthcare service improvement paradigms.
Abstract:
Interpolation techniques for spatial data are applied frequently in various fields of the geosciences. Although most conventional interpolation methods assume that first- and second-order statistics suffice to characterize random fields, researchers now recognise that these methods cannot always provide reliable interpolation results, since geological and environmental phenomena tend to be very complex, presenting non-Gaussian distributions and/or non-linear inter-variable relationships. This paper proposes a new, highly flexible approach to the interpolation of spatial data. Suitable cross-variable higher-order spatial statistics are developed to measure the spatial relationship between the random variable at an unsampled location and those in its neighbourhood. Given the computed cross-variable higher-order spatial statistics, the conditional probability density function (CPDF) is approximated via polynomial expansions and then used to determine the interpolated value at the unsampled location as an expectation. In addition, the uncertainty associated with the interpolation is quantified by constructing prediction intervals for interpolated values. The proposed method is applied to a mineral deposit dataset, and the results demonstrate that it outperforms kriging methods in uncertainty quantification. The introduction of cross-variable higher-order spatial statistics noticeably improves the quality of the interpolation because it enriches the information that can be extracted from the observed data; this benefit is substantial when working with data that are sparse or have non-trivial dependence structures.
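Schematically, the estimator described here takes the interpolated value as an expectation under the approximated conditional density (notation assumed, not taken from the paper):

```latex
\[
\hat{z}(u_0) \;=\; \mathbb{E}\bigl[Z(u_0) \mid z(u_1), \ldots, z(u_n)\bigr]
             \;=\; \int z \, \hat{f}_{Z(u_0)\mid \mathrm{data}}(z)\, dz ,
\]
% where \hat{f} is the CPDF built from polynomial expansions whose
% coefficients are fitted from the cross-variable higher-order spatial
% statistics; prediction intervals come from quantiles of \hat{f}.
```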
Abstract:
Recently, attempts to improve decision making in species management have focussed on uncertainties associated with modelling temporal fluctuations in populations. Reducing model uncertainty is challenging: while larger samples improve the estimation of species trajectories and reduce statistical errors, they typically amplify variability in the observed trajectories. In particular, traditional modelling approaches aimed at estimating population trajectories usually do not account well for the nonlinearities and uncertainties associated with the multi-scale observations characteristic of large spatio-temporal surveys. We present a Bayesian semi-parametric hierarchical model for simultaneously quantifying uncertainties associated with model structure and parameters, and scale-specific variability over time. We estimate uncertainty across a four-tiered spatial hierarchy of coral cover from the Great Barrier Reef. Coral variability is well described; however, our results show that, in the absence of additional model specifications, conclusions regarding coral trajectories become highly uncertain when considering multiple reefs, suggesting that management should focus more on the scale of individual reefs. The approach presented facilitates the description and estimation of population trajectories and associated uncertainties when variability cannot be attributed to specific causes and origins. We argue that our model can unlock the value contained in large-scale datasets, provide guidance for understanding sources of uncertainty, and support better-informed decision making.
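A minimal two-level sketch, in PyMC, of the general idea of partitioning variance across a spatial hierarchy. The synthetic data, priors and reduced structure are assumptions and do not reproduce the paper's four-tier semi-parametric model.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(2)
n_reefs, n_obs = 8, 20
reef_idx = np.repeat(np.arange(n_reefs), n_obs)
# synthetic logit-scale coral-cover observations
y = rng.normal(rng.normal(0, 0.8, n_reefs)[reef_idx], 0.5)

with pm.Model() as model:
    mu = pm.Normal("mu", 0.0, 1.0)            # regional mean (logit cover)
    sd_reef = pm.HalfNormal("sd_reef", 1.0)   # between-reef variability
    sd_obs = pm.HalfNormal("sd_obs", 1.0)     # within-reef variability
    reef = pm.Normal("reef", mu, sd_reef, shape=n_reefs)
    pm.Normal("y", reef[reef_idx], sd_obs, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

# posterior mean of the between-reef standard deviation
print(idata.posterior["sd_reef"].mean().item())
```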