358 results for Artificial Information Models
Abstract:
While the Probability Ranking Principle for Information Retrieval provides the basis for formal models, it makes a very strong assumption about the dependence between documents, and it has been observed that in real situations this assumption does not always hold. In this paper we propose a reformulation of the Probability Ranking Principle based on quantum theory. Quantum probability theory naturally includes interference effects between events. We posit that this interference captures the dependencies between judgements of document relevance. The outcome is a more sophisticated principle, the Quantum Probability Ranking Principle, which provides a more sensitive ranking that caters for interference/dependence between documents' relevance.
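As a schematic illustration of the interference effect (our notation; this is the standard quantum-probability cross term, not a formula quoted from the paper): when an event can be realised through two amplitudes ψ_A and ψ_B, the resulting probability picks up a term that classical probability lacks:

```latex
P = |\psi_A + \psi_B|^2
  = \underbrace{|\psi_A|^2 + |\psi_B|^2}_{\text{classical terms}}
  + \underbrace{2\,|\psi_A|\,|\psi_B|\cos\theta_{AB}}_{\text{interference}}
```

Interference terms of this type are what allow the ranking of a candidate document to depend on the documents already ranked; when the interference term vanishes, the classical Probability Ranking Principle is recovered.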
Abstract:
There is a wide variety of drivers for business process modelling initiatives, ranging from business evolution and process optimisation through compliance checking and process certification to process enactment. That, in turn, results in models that differ in content because they serve different purposes. In particular, processes are modelled at different abstraction levels and assume different perspectives. Vertical alignment of process models aims at handling these deviations. While the advantages of such an alignment for inter-model analysis and change propagation are beyond question, a number of challenges still have to be addressed. In this paper, we discuss three main challenges for vertical alignment in detail. Against this background, the potential application of techniques from the field of process integration is critically assessed. Based thereon, we identify specific research questions that guide the design of a framework for model alignment.
Abstract:
Western economies are highly dependent on service innovation for their growth and employment. An important driver for economic growth is, therefore, the development of new, innovative services like electronic services, mobile end-user services, new financial or personalized services. Service innovation joins four trends that currently shape the western economies: the growing importance of services, the need for innovation, changes in consumer and business markets, and the advancements in information and communication technology (ICT).
Abstract:
Objective: To examine the effects of personal and community characteristics, specifically race and rurality, on lengths of state psychiatric hospital and community stays using maximum likelihood survival analysis, with a special emphasis on change over a ten-year period. Data Sources: We used the administrative data of the Virginia Department of Mental Health, Mental Retardation, and Substance Abuse Services (DMHMRSAS) from 1982 to 1991 and the Area Resources File (ARF). Given these two sources, we constructed a history file for each individual who entered the state psychiatric system over the ten-year period. Histories included demographic, treatment, and community characteristics. Study Design: We used a longitudinal, population-based design with maximum likelihood estimation of survival models. We presented a random effects model with unobserved heterogeneity that was independent of observed covariates. The key dependent variables were length of inpatient stay and subsequent length of community stay. Explanatory variables measured personal, diagnostic, and community characteristics, as well as controls for calendar time. Data Collection: This study used secondary, administrative, and health planning data. Principal Findings: African-American clients leave the community more quickly than whites. After controlling for other characteristics, however, race does not affect hospital length of stay. Rurality does not affect length of community stays once other personal and community characteristics are controlled for. However, people from rural areas have longer hospital stays even after controlling for personal and community characteristics. The effects of time are significantly smaller than expected. Diagnostic composition effects and a decrease in the rate of first inpatient admissions explain part of this reduced impact of time.
We also find strong evidence for the existence of unobserved heterogeneity in both types of stays and adjust for this in our final models. Conclusions: Our results show that information on client characteristics available from inpatient stay records is useful in predicting not only the length of inpatient stay but also the length of the subsequent community stay. This information can be used to target increased discharge planning for those at risk of more rapid readmission to inpatient care. Correlation across observed and unobserved factors affecting length of stay has significant effects on the measurement of relationships between individual factors and lengths of stay. Thus, it is important to control for both observed and unobserved factors in estimation.
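As a minimal illustration of the survival-analysis framing used above (a toy sketch with made-up numbers, not the DMHMRSAS data, and without the paper's random-effects structure): under an exponential model, the maximum likelihood estimate of a group's mean stay is simply total exposure time divided by the number of completed stays, which makes group comparisons such as rural vs urban straightforward.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy survival data: hospital length of stay (days) for two groups,
# mimicking a rural vs urban comparison (all numbers are illustrative).
lam_rural, lam_urban = 1 / 40.0, 1 / 25.0   # true exit rates (1 / mean stay)
stay_rural = rng.exponential(1 / lam_rural, size=300)
stay_urban = rng.exponential(1 / lam_urban, size=300)

# Exponential-model MLE with no censoring: mean stay = total time / events
# (equivalently, the exit rate is events / total exposure time).
def mean_stay(durations):
    return durations.sum() / len(durations)

mean_rural = mean_stay(stay_rural)
mean_urban = mean_stay(stay_urban)
```

In the full analysis, covariates enter through the hazard rate and random effects capture the unobserved heterogeneity the paper reports; this sketch only shows the basic duration-model quantity being estimated.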
Abstract:
This paper assesses whether incorporating investor sentiment as conditioning information in asset-pricing models helps capture the impacts of the size, value, liquidity and momentum effects on risk-adjusted returns of individual stocks. We use survey sentiment measures and a composite index as proxies for investor sentiment. In our conditional framework, the size effect becomes less important in the conditional CAPM and is no longer significant in all the other models examined. Furthermore, the conditional models often capture the value, liquidity and momentum effects.
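A minimal sketch of the kind of conditional specification involved (illustrative only; the variable names and numbers are ours, not the paper's): in a conditional CAPM the market beta is allowed to vary with lagged sentiment, and the time-varying part can be estimated by OLS on an interaction term.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 1000

# Simulated data: market excess returns and a standardized sentiment proxy Z.
mkt = rng.standard_normal(T) * 0.05
Z = rng.standard_normal(T)
eps = rng.standard_normal(T) * 0.01

# Conditional CAPM: beta varies with lagged sentiment,
#   r_t = alpha + (b0 + b1 * Z_{t-1}) * mkt_t + e_t
alpha_true, b0_true, b1_true = 0.001, 1.2, 0.3
Zlag = np.roll(Z, 1)                       # Z_{t-1}; entry t=0 wraps around
r = alpha_true + (b0_true + b1_true * Zlag) * mkt + eps

# Estimate by OLS, dropping t=0 where the lag is invalid.
X = np.column_stack([np.ones(T - 1), mkt[1:], (Zlag * mkt)[1:]])
coef, *_ = np.linalg.lstsq(X, r[1:], rcond=None)
alpha_hat, b0_hat, b1_hat = coef
```

A significant coefficient on the interaction term is what "conditioning on sentiment" detects; the paper's finding that the size effect loses significance corresponds to its characteristic-based term becoming insignificant once such conditioning is included.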
Abstract:
This paper investigates compressed sensing using hidden Markov models (HMMs) and hence extends recent single-frame, bounded-error sparse decoding problems into a class of sparse estimation problems containing both temporal evolution and stochastic aspects. The paper presents two optimal estimators for compressed HMMs. The impact of measurement compression on HMM filtering performance is experimentally examined in the context of an important image-based aircraft target tracking application. Surprisingly, tracking of dim, small-sized targets (as small as 5-10 pixels, with local detectability/SNR as low as −1.05 dB) was only mildly impacted by compressed sensing down to 15% of the original image size.
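The filtering side of this setup can be sketched as follows (a toy sketch under our own assumptions, not the paper's estimators): a discrete-state HMM whose per-state image templates are observed only through a random compression matrix, with a standard forward filter run directly on the compressed measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: S hidden states, each emitting a distinct n-pixel template.
S, n, m, T_steps = 3, 64, 16, 200            # m/n = 25% compression
mu = rng.choice([-1.0, 1.0], size=(S, n))    # per-state signal templates
A = rng.standard_normal((m, n)) / np.sqrt(m) # random compression matrix
P = np.full((S, S), 0.05) + np.eye(S) * (1 - 0.05 * S)  # sticky transitions
sigma = 0.1

# Simulate the chain and compressed noisy measurements y_t = A mu[s_t] + w_t.
states = np.empty(T_steps, dtype=int)
states[0] = 0
for t in range(1, T_steps):
    states[t] = rng.choice(S, p=P[states[t - 1]])
Y = mu[states] @ A.T + sigma * rng.standard_normal((T_steps, m))

# HMM forward filter operating directly on the compressed measurements.
alpha = np.full(S, 1.0 / S)                  # uniform prior over states
est = np.empty(T_steps, dtype=int)
for t in range(T_steps):
    resid = Y[t] - mu @ A.T                  # (S, m) residual per candidate state
    loglik = -0.5 * np.sum(resid**2, axis=1) / sigma**2
    lik = np.exp(loglik - loglik.max())      # stabilised Gaussian likelihoods
    alpha = (P.T @ alpha) * lik if t > 0 else alpha * lik
    alpha /= alpha.sum()
    est[t] = int(alpha.argmax())

accuracy = float(np.mean(est == states))
```

With well-separated templates, the filter tracks the state accurately even though each measurement retains only a quarter of the pixels, illustrating (in caricature) why moderate compression can have little effect on filtering performance.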
Abstract:
The world of construction is changing, and so too are the expectations of stakeholders regarding strategies for adapting existing resources (people, equipment and finances), processes and tools to the evolving needs of the industry. Building Information Modelling (BIM) is a data-rich, digital approach to representing the building information required for design and construction. BIM tools play a crucial role and are instrumental to current approaches, by industry stakeholders, aimed at harnessing the power of a single information repository for improved project delivery and maintenance. Yet building specifications - which document information on material quality and workmanship requirements - remain distinctly separate from the model information typically represented in BIM models. BIM adoption for building design, construction and maintenance is an industry-wide strategy aimed at addressing such concerns about information fragmentation. However, to effectively reduce the inefficiencies due to fragmentation, BIM models require the crucial building information contained in specifications. This paper profiles some specification tools which have been used in industry as a means of bridging the BIM-specifications divide. We analyse the distinction between current attempts at integrating BIM and specifications and our approach, which utilizes rich specification information embedded within objects in a product library as a method for improving the quality of information contained in BIM objects at various levels of model development.
Abstract:
Objective To describe women's reports of the model of care options General Practitioners (GPs) discussed with them at the first pregnancy consultation and women's self-reported role in decision-making about model of care. Methods Women who had recently given birth responded to survey items about the models of care GPs discussed, their role in final decision-making, and socio-demographic, obstetric history, and early pregnancy characteristics. Results The proportion of women with whom each model of care was discussed varied between 8.2% (for private midwifery care with home birth) and 64.4% (GP shared care). Only 7.7% of women reported that all seven models were discussed. Exclusive discussion of private obstetric care and of all public models was common, and women's health insurance status was the strongest predictor of the presence of discussions about each model. Most women (82.6%) reported active involvement in final decision-making about model of care. Conclusion Although most women report involvement in maternity model of care decisions, they remain largely uninformed about the breadth of available model of care options. Practical implications Strategies that facilitate women's access to information on the differentiating features and outcomes for all models of care should be prioritized to better ensure equitable and quality decisions.
Abstract:
The business model concept is gaining traction in different disciplines but is still criticized for being fuzzy and vague and lacking consensus on its definition and compositional elements. In this paper we set out to advance our understanding of the business model concept by addressing three areas of foundational research: business model definitions, business model elements, and business model archetypes. We define a business model as a representation of the value logic of an organization in terms of how it creates and captures customer value. This abstract and generic definition is made more specific and operational by the compositional elements, which need to address the customer, value proposition, organizational architecture (firm and network level) and economics dimensions. Business model archetypes complement the definition and elements by providing a more concrete and empirical understanding of the business model concept. The main contributions of this paper are (1) explicitly including the customer value concept in the business model definition and focussing on value creation, (2) presenting four core dimensions that business model elements need to cover, (3) arguing for flexibility by adapting and extending business model elements to cater for different purposes and contexts (e.g. technology, innovation, strategy), (4) stressing a more systematic approach to business model archetypes by using business model elements for their description, and (5) suggesting the use of business model archetype research for the empirical exploration and testing of business model elements and their relationships.
Abstract:
Computational models in physiology often integrate functional and structural information from a large range of spatio-temporal scales from the ionic to the whole organ level. Their sophistication raises both expectations and scepticism concerning how computational methods can improve our understanding of living organisms and also how they can reduce, replace and refine animal experiments. A fundamental requirement to fulfil these expectations and achieve the full potential of computational physiology is a clear understanding of what models represent and how they can be validated. The present study aims at informing strategies for validation by elucidating the complex interrelations between experiments, models and simulations in cardiac electrophysiology. We describe the processes, data and knowledge involved in the construction of whole ventricular multiscale models of cardiac electrophysiology. Our analysis reveals that models, simulations, and experiments are intertwined, in an assemblage that is a system itself, namely the model-simulation-experiment (MSE) system. Validation must therefore take into account the complex interplay between models, simulations and experiments. Key points for developing strategies for validation are: 1) understanding sources of bio-variability is crucial to the comparison between simulation and experimental results; 2) robustness of techniques and tools is a pre-requisite to conducting physiological investigations using the MSE system; 3) definition and adoption of standards facilitates interoperability of experiments, models and simulations; 4) physiological validation must be understood as an iterative process that defines the specific aspects of electrophysiology the MSE system targets, and is driven by advancements in experimental and computational methods and the combination of both.
Abstract:
This paper investigates the effect of topic dependent language models (TDLMs) on phonetic spoken term detection (STD) using dynamic match lattice spotting (DMLS). Phonetic STD consists of two steps: indexing and search. The accuracy of indexing audio segments into phone sequences using phone recognition methods directly affects the accuracy of the final STD system. If the topic of a document is known, recognizing the spoken words and indexing them to an intermediate representation is an easier task and, consequently, detecting a search word in it will be more accurate and robust. In this paper, we propose the use of TDLMs in the indexing stage to improve the accuracy of STD in situations where the topic of the audio document is known in advance. It is shown that using TDLMs instead of the traditional general language model (GLM) improves STD performance according to the figure of merit (FOM) criterion.
Abstract:
The aim of spoken term detection (STD) is to find all occurrences of a specified query term in a large audio database. This process is usually divided into two steps: indexing and search. In a previous study, it was shown that knowing the topic of an audio document helps to improve the accuracy of the indexing step, which results in better performance for the STD system. In this paper, we propose the use of topic information not only in the indexing step, but also in the search step. Results of our experiments show that topic information can also be used in the search step to improve STD accuracy.
Abstract:
Spatial data are now prevalent in a wide range of fields including environmental and health science. This has led to the development of a range of approaches for analysing patterns in these data. In this paper, we compare several Bayesian hierarchical models for analysing point-based data based on the discretization of the study region, resulting in grid-based spatial data. The approaches considered include two parametric models and a semiparametric model. We highlight the methodology and computation for each approach. Two simulation studies are undertaken to compare the performance of these models for various structures of simulated point-based data which resemble environmental data. A case study of a real dataset is also conducted to demonstrate a practical application of the modelling approaches. Goodness-of-fit statistics are computed to compare estimates of the intensity functions. The deviance information criterion is also considered as an alternative model evaluation criterion. The results suggest that the adaptive Gaussian Markov random field model performs well for highly sparse point-based data where there are large variations or clustering across the space, whereas the discretized log Gaussian Cox process produces a good fit for dense and clustered point-based data. One should generally consider the nature and structure of the point-based data in order to choose the appropriate method for modelling discretized spatial point-based data.
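The discretization step described above can be sketched as follows (a generic illustration on simulated points, not the paper's case-study data): point locations are binned into a regular grid, and each cell count then serves as the response in a grid-based model of the intensity surface.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated point-based data on the unit square (e.g. event locations).
points = rng.uniform(0.0, 1.0, size=(500, 2))

# Discretize the study region into a regular grid; each cell's count is
# the response for a grid-based model (e.g. Poisson y_ij with mean
# lambda_ij * cell area, where lambda is the intensity function).
n_cells = 10
counts, xedges, yedges = np.histogram2d(
    points[:, 0], points[:, 1],
    bins=n_cells, range=[[0.0, 1.0], [0.0, 1.0]],
)

# A crude per-cell intensity estimate: count divided by cell area.
cell_area = (1.0 / n_cells) ** 2
intensity = counts / cell_area
```

The hierarchical models compared in the paper differ in the prior placed on the latent log-intensity surface (e.g. a Gaussian Markov random field vs a discretized log Gaussian Cox process), but all take grid counts of this kind as their starting point.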
Abstract:
It is often said that Australia is a world leader in rates of copyright infringement for entertainment goods. In 2012, the hit television show Game of Thrones was the most downloaded television show over BitTorrent, and estimates suggest that Australians accounted for a plurality of nearly 10% of the 3-4 million downloads each week. The season finale of 2013 was downloaded over a million times within 24 hours of its release, and again Australians were the largest block of illicit downloaders over BitTorrent, despite our relatively small population. This trend has led the former US Ambassador to Australia to implore Australians to stop 'stealing' digital content, and rightsholders to push for increasing sanctions on copyright infringers. The Australian Government is looking to respond by requiring Internet Service Providers to issue warnings and potentially punish consumers who are alleged by industry groups to have infringed copyright. This is the logical next step in deterring infringement, given that the operators of infringing networks (like The Pirate Bay, for example) are out of regulatory reach. This steady ratcheting up of the strength of copyright, however, comes at a significant cost to user privacy and autonomy, and while the decentralisation of enforcement reduces costs, it also reduces the due process safeguards provided by the judicial process. This article presents qualitative evidence that substantiates a common intuition: one of the major reasons that Australians seek out illicit downloads of content like Game of Thrones in such numbers is that it is more difficult to access legitimately in Australia. The geographically segmented way in which copyright is exploited at an international level has given rise to a 'tyranny of digital distance', where Australians have less access to copyright goods than consumers in other countries.
Compared to consumers in the US and the EU, Australians pay more for digital goods, have less choice in distribution channels, are exposed to substantial delays in access, and are sometimes denied access completely. In this article we focus our analysis on premium film and television offerings, like Game of Thrones, and through semi-structured interviews, explore how choices in distribution affect the willingness of Australian consumers to seek out infringing copies of copyright material. Game of Thrones provides an excellent case study through which to frame this analysis: it is both one of the least legally accessible television offerings and one of the most downloaded through filesharing networks of recent times. Our analysis shows that at the same time as rightsholder groups, particularly in the film and television industries, are lobbying for stronger laws to counter illicit distribution, the business practices of their member organisations are counter-productively increasing incentives for consumers to infringe. The lack of accessibility and high prices of copyright goods in Australia lead to substantial economic waste. The unmet consumer demand means that Australian consumers are harmed by lower access to information and entertainment goods than consumers in other jurisdictions. The higher rates of infringement that fulfil some of this unmet demand increase enforcement costs for copyright owners and impose burdens either on our judicial system or on private entities - like ISPs - who may be tasked with enforcing the rights of third parties. Most worryingly, the lack of convenient and cheap legitimate digital distribution channels risks undermining public support for copyright law. Our research shows that consumers blame rightsholders for failing to meet market demand, and this encourages a social norm that infringing copyright, while illegal, is not morally wrongful.
The implications are as simple as they are profound: Australia should not take steps to increase the strength of copyright law at this time. The interests of the public and those of rightsholders align better when there is effective competition in distribution channels and consumers can legitimately get access to content. While foreign rightsholders are seeking enhanced protection for their interests, increased enforcement is likely to increase their ability to engage in lucrative geographical price discrimination, particularly for premium content. This is only likely to increase the degree to which Australian consumers feel that their interests are not being met and, consequently, to further undermine the legitimacy of copyright law. If consumers are to respect copyright law, increasing sanctions for infringement without enhancing access and competition in legitimate distribution channels could be dangerously counter-productive. We suggest that rightsholders' best strategy for addressing infringement in Australia at this time is to ensure that Australians can access copyright goods lawfully in a timely, affordable, convenient, and fair manner.