257 results for Ontological proof
Abstract:
This book provides a general framework for specifying, estimating, and testing time series econometric models. Special emphasis is given to estimation by maximum likelihood, but other methods are also discussed, including quasi-maximum likelihood estimation, generalized method of moments estimation, nonparametric estimation, and estimation by simulation. An important advantage of adopting the principle of maximum likelihood as the unifying framework for the book is that many of the estimators and test statistics proposed in econometrics can be derived within a likelihood framework, thereby providing a coherent vehicle for understanding their properties and interrelationships. In contrast to many existing econometric textbooks, which deal mainly with the theoretical properties of estimators and test statistics through a theorem-proof presentation, this book squarely addresses implementation to provide direct conduits between the theory and applied work.
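As a purely illustrative companion to the likelihood framework the book emphasises, the sketch below (not taken from the book; the model, simulated data, and starting values are assumptions) estimates a Gaussian AR(1) model by maximising the conditional log-likelihood with SciPy.

```python
# Minimal sketch: conditional maximum likelihood for an AR(1) model
# y_t = c + phi * y_{t-1} + e_t,  e_t ~ N(0, sigma^2).  Illustrative only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulate data from a known AR(1) process.
c_true, phi_true, sigma_true = 0.5, 0.7, 1.0
T = 500
y = np.zeros(T)
for t in range(1, T):
    y[t] = c_true + phi_true * y[t - 1] + sigma_true * rng.standard_normal()

def neg_loglik(params, y):
    c, phi, log_sigma = params
    sigma = np.exp(log_sigma)          # keep sigma positive
    resid = y[1:] - c - phi * y[:-1]   # one-step-ahead prediction errors
    n = resid.size
    return 0.5 * n * np.log(2 * np.pi * sigma**2) + 0.5 * np.sum(resid**2) / sigma**2

res = minimize(neg_loglik, x0=np.array([0.0, 0.0, 0.0]), args=(y,), method="BFGS")
c_hat, phi_hat, sigma_hat = res.x[0], res.x[1], np.exp(res.x[2])
print(f"c = {c_hat:.3f}, phi = {phi_hat:.3f}, sigma = {sigma_hat:.3f}")
```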
Abstract:
A Switch-Mode Assisted Linear Amplifier (SMALA) combines the high quality of a linear amplifier required for audio applications with the high efficiency of a switch-mode amplifier. The careful choice of current sense point and switch placement allows a simple non-isolated hysteresis current controller for the switch-mode section. This paper explains the extension of the hysteresis current controller for the control of a three level Neutral Point Clamped (NPC) converter, with simulations as proof of concept. The NPC topology allows the use of lower voltage switches and lower switching frequencies to implement high power audio amplifiers using the SMALA topology.
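As a hedged illustration of the hysteresis current control mentioned above, the toy simulation below drives a two-level switch so that an inductor current tracks a sinusoidal reference inside a fixed band; all component values, and the two-level (rather than three-level NPC) topology, are illustrative assumptions, not the paper's design.

```python
# Toy simulation of a hysteresis current controller (illustrative values only).
# The switch turns on when the inductor current falls below the lower band and
# off when it rises above the upper band, so the current tracks the reference.
import numpy as np

V_dc = 50.0      # supply voltage (V)
L = 1e-3         # inductance (H)
R = 1.0          # load resistance (ohm)
band = 0.2       # hysteresis half-band (A)
dt = 1e-6        # simulation step (s)
steps = 20000

i = 0.0
switch_on = False
for k in range(steps):
    t = k * dt
    i_ref = 5.0 * np.sin(2 * np.pi * 1000 * t)   # 1 kHz reference current
    if i < i_ref - band:
        switch_on = True
    elif i > i_ref + band:
        switch_on = False
    v = V_dc if switch_on else -V_dc             # two-level output voltage
    i += dt * (v - R * i) / L                    # di/dt = (v - R*i) / L

err = i - 5.0 * np.sin(2 * np.pi * 1000 * steps * dt)
print(f"final tracking error: {err:.3f} A")
```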
Abstract:
The mining equipment technology services sector is driven by a reactive and user-centered design approach, with a technological focus on incremental new product development. As Australia moves out of its sustained mining boom, companies need to rethink their strategic position and become agile in order to stay relevant in an uncertain market. This paper reports on the first five months of an embedded case study within an Australian, family-owned mining manufacturer. The first author is currently engaged in a longitudinal design led innovation project, acting as a catalyst to guide the company's journey to design integration. The results indicate that design led innovation can act as a channel for highlighting and exploring disconnections between the company and the marketplace, and can offer a customer-centric catalyst for internal change. Data for this study comprise 12 analysed semi-structured interviews, a focus group, and a reflective journal, collected over a five-month period. This paper explores limitations to design integration and highlights opportunities to explore and leverage entrepreneurial characteristics in order to stay agile, broaden innovation, and future-proof the company through the next commodity cycle in the mining industry.
Abstract:
Information privacy is a critical success/failure factor in information technology supported healthcare (eHealth). eHealth systems utilise electronic health records (EHRs) as the main source of information, so implementing appropriate privacy-preserving methods for EHRs is vital for the proliferation of eHealth. Whilst information privacy may be a fundamental requirement for eHealth consumers, healthcare professionals demand unrestricted access to patient information for improved healthcare delivery, creating an environment where stakeholder requirements are contradictory. There is therefore a need to achieve an appropriate balance of requirements in order to build successful eHealth systems. Towards achieving this balance, a new genre of eHealth systems called Accountable-eHealth (AeH) systems has been proposed. In this paper, an access control model for EHRs is presented that can be utilised by AeH systems to create information usage policies that fulfil both stakeholders' requirements. These policies are used to accomplish the aforementioned balance of requirements, creating a satisfactory eHealth environment for all stakeholders. The access control model is validated using a Web-based prototype as a proof of concept.
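The abstract does not specify the policy language of the AeH model, so the following is only a hypothetical sketch of how an information usage policy and its accountable (logged) evaluation might look; every class, field, and role name is an assumption.

```python
# Hypothetical sketch of an EHR information usage policy check.
# The policy model, field names and roles are illustrative assumptions,
# not the AeH model described in the paper.
from dataclasses import dataclass

@dataclass(frozen=True)
class UsagePolicy:
    patient_id: str
    allowed_roles: frozenset      # roles the patient permits, e.g. {"gp", "nurse"}
    allowed_purposes: frozenset   # purposes of use, e.g. {"treatment"}
    emergency_override: bool      # professional may override, but the access is logged

@dataclass(frozen=True)
class AccessRequest:
    patient_id: str
    requester_role: str
    purpose: str
    emergency: bool = False

def evaluate(policy: UsagePolicy, request: AccessRequest, audit_log: list) -> bool:
    """Grant access if the request satisfies the patient's policy; log every decision
    so that professionals remain accountable for non-standard access."""
    permitted = (request.requester_role in policy.allowed_roles
                 and request.purpose in policy.allowed_purposes)
    overridden = (not permitted) and request.emergency and policy.emergency_override
    granted = permitted or overridden
    audit_log.append((request, granted, "override" if overridden else "policy"))
    return granted

log = []
policy = UsagePolicy("p-001", frozenset({"gp"}), frozenset({"treatment"}), True)
print(evaluate(policy, AccessRequest("p-001", "gp", "treatment"), log))            # True
print(evaluate(policy, AccessRequest("p-001", "nurse", "research"), log))          # False
print(evaluate(policy, AccessRequest("p-001", "nurse", "treatment", True), log))   # True, logged as an override
```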
Abstract:
Expert searchers engage with information as information brokers, researchers, reference librarians, information architects, faculty who teach advanced search, and in a variety of other information-intensive professions. Their experiences are characterized by a profound understanding of information concepts and skills, and by an agile ability to apply this knowledge to interacting with, and having an impact on, the information environment. This study explored the learning experiences of searchers to understand the acquisition of search expertise. The research question was: What can be learned about becoming an expert searcher from the learning experiences of proficient novice searchers and highly experienced searchers? The key objectives were: (1) to explore the existence of threshold concepts in search expertise; (2) to improve our understanding of how search expertise is acquired and how novice searchers, intent on becoming experts, can learn to search in more expert-like ways. The participant sample drew from two population groups: (1) highly experienced searchers with a minimum of 20 years of relevant professional experience, including LIS faculty who teach advanced search, information brokers, and search engine developers (11 subjects); and (2) MLIS students who had completed coursework in information retrieval and online searching and demonstrated exceptional ability (9 subjects). Using these two groups allowed a nuanced understanding of the experience of learning to search in expert-like ways, with data from those who search at a very high level as well as those who may be actively developing expertise. The study used semi-structured interviews, search tasks with think-aloud narratives, and talk-after protocols. Searches were screen-captured with simultaneous audio-recording of the think-aloud narrative. Data were coded and analyzed both manually and using NVivo9. Grounded theory allowed categories and themes to emerge from the data. Categories represented conceptual knowledge and attributes of expert searchers. In accord with grounded theory method, once theoretical saturation was achieved, the data were viewed during the final stage of analysis through the lenses of existing theoretical frameworks. For this study, threshold concept theory (Meyer & Land, 2003) was used to explore which concepts might be threshold concepts. Threshold concepts have been used to explore transformative learning portals in subjects ranging from economics to mathematics. A threshold concept has five defining characteristics: transformative (causing a shift in perception), irreversible (unlikely to be forgotten), integrative (unifying separate concepts), troublesome (initially counter-intuitive), and may be bounded. Themes that emerged provided evidence of four concepts which had the characteristics of threshold concepts. The first three were: information environment (the total information environment is perceived and understood); information structures (content, index structures, and retrieval algorithms are understood); and information vocabularies (fluency in search behaviors related to language, including natural language, controlled vocabulary, and finesse using proximity, truncation, and other language-based tools). The fourth threshold concept was concept fusion, the integration of the other three threshold concepts, further defined by three properties: visioning (anticipating next moves), being light on one's 'search feet' (the dancing property), and a profound ontological shift (identity as searcher).
In addition to the threshold concepts, findings were reported that were not concept-based, including praxes and traits of expert searchers. A model of search expertise is proposed with the four threshold concepts at its core that also integrates the traits and praxes elicited from the study, attributes which are likewise long recognized in LIS research as present in professional searchers. The research provides a deeper understanding of the transformative learning experiences involved in the acquisition of search expertise. It adds to our understanding of search expertise in the context of today's information environment and has implications for teaching advanced search, for research more broadly within library and information science, and for methodologies used to explore threshold concepts.
Abstract:
This paper conceptualizes knowledge governance (KG) in project-based organizations (PBOs) and its methodological approaches for empirical investigation. Three key contributions towards a multi-faceted view of KG and an understanding of KG in PBOs are advanced. These contributions include a definition of KG in PBOs, a conceptual framework to investigate KG and a methodological framework for empirical inquiry into KG in PBO settings. Our definition highlights the contingent nature of KG processes in relation to their organizational context. The conceptual framework addresses macro- and micro-level elements of KG and their interaction. The methodological framework proposes five different research approaches, structured by differentiation and integration of various ontological and epistemological stances. Together these contributions provide a novel platform for understanding KG in PBOs and developing new insights into the design and execution of research on KG within PBOs.
Abstract:
Over the last decade, the majority of existing search techniques have been either keyword-based or category-based, resulting in unsatisfactory effectiveness. Meanwhile, studies have illustrated that more than 80% of users preferred personalized search results. As a result, many studies have devoted a great deal of effort (referred to as collaborative filtering) to investigating personalized notions for enhancing retrieval performance. One of the fundamental yet most challenging steps is to capture precise user information needs. Most Web users are inexperienced or lack the capability to express their needs properly, whereas existing retrieval systems are highly sensitive to vocabulary. Researchers have increasingly proposed the utilization of ontology-based techniques to improve current mining approaches. These techniques are not only able to refine search intentions within specific generic domains, but also to access new knowledge by tracking semantic relations. In recent years, some researchers have attempted to build ontological user profiles from discovered user background knowledge. The knowledge is drawn from both global and local analyses, which aim to produce tailored ontologies from a group of concepts. However, a key problem that has not been addressed is how to accurately match diverse local information to universal global knowledge. This research conducts a theoretical study on the use of personalized ontologies to enhance text mining performance. The objective is to understand user information needs through a "bag of concepts" rather than a "bag of words". The concepts are gathered from a general world knowledge base, the Library of Congress Subject Headings. To return desirable search results, a novel ontology-based mining approach is introduced to discover accurate search intentions and learn personalized ontologies as user profiles. The approach can not only pinpoint users' individual intentions within a rough hierarchical structure, but can also interpret their needs through a set of acknowledged concepts. Alongside the global and local analyses, a concept matching approach is developed to address the mismatch between local information and world knowledge. Relevance features produced by the Relevance Feature Discovery model are used as representatives of local information. These features have been shown to be the best alternative to user queries for avoiding ambiguity, and they consistently outperform the features extracted by other filtering models. The two proposed approaches are both evaluated in a scientific evaluation using the standard Reuters Corpus Volume 1 testing set. A comprehensive comparison is made with a number of state-of-the-art baseline models, including TF-IDF, Rocchio, Okapi BM25, the deploying Pattern Taxonomy Model, and an ontology-based model. The results indicate that top precision can be improved remarkably with the proposed ontology mining approach, and that the matching approach is successful, achieving significant improvements on most information filtering measures. This research contributes to the fields of ontological filtering, user profiling, and knowledge representation. The related outputs are critical when systems are expected to return proper mining results and provide personalized services. The findings have the potential to facilitate the design of advanced preference mining models that will have an impact on people's daily lives.
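Okapi BM25 is one of the baseline models named above; for orientation, a minimal self-contained BM25 scoring sketch (illustrative only, not the thesis implementation, using the common default parameters k1 = 1.2 and b = 0.75) is:

```python
# Minimal Okapi BM25 scoring sketch (one of the baseline models named above).
import math
from collections import Counter

def bm25_scores(query_terms, documents, k1=1.2, b=0.75):
    """Score each document (a list of tokens) against the query terms."""
    N = len(documents)
    avgdl = sum(len(d) for d in documents) / N
    df = Counter()                       # document frequency of each term
    for doc in documents:
        df.update(set(doc))
    scores = []
    for doc in documents:
        tf = Counter(doc)
        score = 0.0
        for term in query_terms:
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            norm = tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(doc) / avgdl))
            score += idf * norm
        scores.append(score)
    return scores

docs = [["ontology", "mining", "user", "profile"],
        ["keyword", "search", "engine"],
        ["personalized", "ontology", "user", "profile", "mining"]]
print(bm25_scores(["ontology", "profile"], docs))
```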
Abstract:
The mechanistic details of the pathogenesis of Chlamydia, an obligate intracellular pathogen of global importance, have eluded scientists due to the scarcity of traditional molecular genetic tools for investigating this organism. Here we report a chemical biology strategy that has uncovered the first essential protease for this organism. Identification and application of a unique CtHtrA inhibitor (JO146) to cultures of Chlamydia resulted in a complete loss of viable elementary body formation. JO146 treatment during the replicative phase of development resulted in a loss of Chlamydia cell morphology, diminishing inclusion size, and ultimate loss of inclusions from the host cells. This completely prevented the formation of viable Chlamydia elementary bodies. In addition to its effect on the human C. trachomatis strain, JO146 inhibited the viability of the mouse strain, Chlamydia muridarum, both in vitro and in vivo. Thus, we report a chemical biology approach to establish an essential role for Chlamydia CtHtrA. The function of CtHtrA appears to be essential for the maintenance of Chlamydia cell morphology during the replicative phase, and these findings provide proof of concept that proteases can be targeted for antimicrobial therapy against intracellular pathogens.
Abstract:
The notion of plaintext awareness (PA) has many applications in public key cryptography: it offers unique, stand-alone security guarantees for public key encryption schemes, has been used as a sufficient condition for proving indistinguishability against adaptive chosen-ciphertext attacks (IND-CCA), and can be used to construct privacy-preserving protocols such as deniable authentication. Unlike many other security notions, plaintext awareness is very fragile when it comes to differences between the random oracle and standard models; for example, many implications involving PA in the random oracle model are not valid in the standard model and vice versa. Similarly, strategies for proving PA of schemes in one model cannot be adapted to the other model. Existing research addresses PA in detail only in the public key setting. This paper gives the first formal exploration of plaintext awareness in the identity-based setting and, as initial work, proceeds in the random oracle model. The focus lies mainly on identity-based key encapsulation mechanisms (IB-KEMs), for which the paper presents the first definitions of plaintext awareness, highlights the role of PA in proof strategies of IND-CCA security, and explores relationships between PA and other security properties. On the practical side, our work offers the first, highly efficient, general approach for building IB-KEMs that are simultaneously plaintext-aware and IND-CCA-secure. Our construction is inspired by the Fujisaki-Okamoto (FO) transform, but demands weaker and more natural properties of its building blocks. This result comes from a new look at the notion of γ-uniformity that was inherent in the original FO transform. We show that for IB-KEMs (and PK-KEMs), this assumption can be replaced with a weaker computational notion, which is in fact implied by one-wayness. Finally, we give the first concrete IB-KEM scheme that is PA and IND-CCA-secure by applying our construction to a popular IB-KEM and optimizing it for better performance.
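To make the FO-style idea concrete, the sketch below shows a toy public-key (not identity-based) KEM in which encryption is derandomised with a hash of the message and decapsulation re-encrypts to check the ciphertext. The placeholder "PKE", the hash labels, and all names are assumptions; the construction is deliberately simplified and is not the paper's IB-KEM.

```python
# Toy sketch of an FO-style KEM transform: encryption is derandomised with a hash
# of the message, and the session key is derived from the message, so decapsulation
# can re-encrypt and check the ciphertext. The "PKE" below is a deliberately
# insecure placeholder used only to make the sketch self-contained.
import hashlib, os

def H(*parts):                      # hash used for coin derivation and key derivation
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

# --- placeholder public-key encryption (NOT secure, for illustration only) ---
def pke_keygen():
    sk = os.urandom(32)
    pk = H(b"pk", sk)
    return pk, sk

def pke_encrypt(pk, m, coins):      # deterministic given the coins
    pad = H(b"enc", pk, coins)
    return bytes(a ^ b for a, b in zip(m, pad)), coins

def pke_decrypt(sk, ct):
    c, coins = ct
    pk = H(b"pk", sk)
    pad = H(b"enc", pk, coins)
    return bytes(a ^ b for a, b in zip(c, pad))

# --- FO-style KEM built on top of the placeholder PKE ---
def encaps(pk):
    m = os.urandom(32)
    coins = H(b"coins", m, pk)      # derandomise: coins depend on the message
    ct = pke_encrypt(pk, m, coins)
    key = H(b"key", m)              # session key derived from the message
    return ct, key

def decaps(pk, sk, ct):
    m = pke_decrypt(sk, ct)
    coins = H(b"coins", m, pk)
    if pke_encrypt(pk, m, coins) != ct:   # re-encryption check rejects malformed ciphertexts
        return None
    return H(b"key", m)

pk, sk = pke_keygen()
ct, key = encaps(pk)
assert decaps(pk, sk, ct) == key
print("keys match")
```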
Abstract:
This study of English Coronial practice raises a number of questions about the role played by the Coroner within contemporary governance. Following observations at over 20 inquests into possible suicides and in-depth interviews with six Coroners, three preliminary issues emerged, all of which pointed to a broader and, in many ways, more significant issue. These preliminary issues concern: (1) the existence of considerable slippage between different Coroners over which deaths are likely to be classified as suicide; (2) the high standard of proof required, and the immense pressure Coroners face from family members at inquest to reach any verdict other than suicide, which significantly depresses likely suicide rates; and (3) Coroners feeling no professional obligation, either individually or collectively, to contribute to the production of consistent and useful social data regarding suicide, arguably rendering comparative suicide statistics relatively worthless. These concerns lead, ultimately, to a second, more important question about the role expected of Coroners within social governance and within an effective, contemporary democracy. That is, are Coroners the principal officers in the public administration of death, or are they, first and foremost, a crucial part of the grieving process, one that provides important therapeutic interventions into the mental and emotional health of the community?
Abstract:
This paper explores the use of subarrays as array elements. Benefits of such a concept include improved gain in any direction without significantly increasing the overall size of the array, as well as enhanced pattern control. The architecture for an array of subarrays will be discussed via a systems approach. Individual system designs are explored in further detail, and proof of principle is illustrated through a manufactured example.
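A rough way to see the benefit of subarrays as elements is pattern multiplication: the overall array factor is the subarray factor times the factor of the array of subarray centres. The sketch below (uniform excitation, broadside, with spacings chosen only for illustration; not the paper's designs) computes this product.

```python
# Illustrative sketch of pattern multiplication for an "array of subarrays":
# the overall array factor is the product of the subarray factor and the
# factor of the array of subarray centres (uniform excitation, broadside).
import numpy as np

wavelength = 1.0
k = 2 * np.pi / wavelength
theta = np.linspace(-np.pi / 2, np.pi / 2, 721)   # observation angles

def uniform_array_factor(n_elements, spacing):
    positions = (np.arange(n_elements) - (n_elements - 1) / 2) * spacing
    return np.abs(np.exp(1j * k * np.outer(np.sin(theta), positions)).sum(axis=1))

subarray = uniform_array_factor(4, 0.5 * wavelength)          # 4 elements per subarray
array_of_centres = uniform_array_factor(8, 2.0 * wavelength)  # 8 subarray centres
total = subarray * array_of_centres                           # pattern multiplication

print(f"broadside array-factor peak: {total.max():.1f} (= 4 x 8 elements)")
```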
Abstract:
Auto/biographical documentaries ask audiences to take a ‘leap of faith’, not being able to offer any real ‘proof’ of the people and events they claim to document, other than that of the film-maker’s saying this is what happened. With only memory and history seen through the distorting lens of time, ‘the authenticity of experience functions as a receding horizon of truth in which memory and testimony are articulated as modes of salvage’. Orchids: My Intersex Adventure follows a salvaging of the film-maker’s life events and experiences, being born with an intersex condition, and, via the filming and editing process, revolving around the core question: who am I? From this transformative creative documentary practice evolves a new way of embodying experience and ‘seeing’, playfully dubbed here as the ‘intersex gaze’.
Abstract:
In this paper, a demand-responsive decision support system is proposed by integrating the operations of coal shipment, coal stockpiles and coal railing within a whole system. A generic and flexible scheduling optimisation methodology is developed to identify, represent, model, solve and analyse the coal transport problem in a standard and convenient way. As a result, an integrated train-stockpile-ship timetable is created and optimised to improve the overall efficiency of the coal transport system. A comprehensive sensitivity analysis based on extensive computational experiments is conducted to validate the proposed methodology. The mathematical propositions and proofs are distilled into technical, insightful advice for industry practice. The proposed methodology supports better decision making on how to assign rail rolling stock and upgrade infrastructure in order to significantly improve capacity utilisation with the best resource-effectiveness ratio. The proposed decision support system, with its train-stockpile-ship scheduling optimisation techniques, shows promise for application in the railway and mining industries, especially as a quantitative decision-making tool for determining how best to use current rolling stock, or whether to purchase additional rolling stock, for mining transportation.
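The abstract does not reproduce the formulation, so the toy linear programme below is only an illustrative stand-in for an integrated train-stockpile-ship model: it chooses tonnes railed per mine and period against a rail capacity and a cumulative ship-loading demand at minimum cost. All data and the two-mine, two-period structure are assumptions.

```python
# Toy stand-in for an integrated train-stockpile-ship scheduling model
# (illustrative only; the paper's formulation is far richer). Variables are
# tonnes railed per (mine, period); the LP meets cumulative ship demand at the
# port stockpile within per-period rail capacity at minimum cost.
from scipy.optimize import linprog

rail_cost = [3.0, 2.0, 3.0, 2.0]      # cost per tonne: (mine0,t0), (mine1,t0), (mine0,t1), (mine1,t1)
rail_capacity = 120.0                 # tonnes per period over both mines
ship_demand_cum = [100.0, 220.0]      # cumulative tonnes the ships need loaded by end of each period

A_ub = [[1, 1, 0, 0],                 # capacity in period 0
        [0, 0, 1, 1],                 # capacity in period 1
        [-1, -1, 0, 0],               # cumulative railed >= cumulative demand (period 0)
        [-1, -1, -1, -1]]             # cumulative railed >= cumulative demand (period 1)
b_ub = [rail_capacity, rail_capacity, -ship_demand_cum[0], -ship_demand_cum[1]]

res = linprog(c=rail_cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4, method="highs")
print("railed tonnes per (mine, period):", res.x, "total cost:", res.fun)
```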
Abstract:
The geographic location of cloud data storage centres is an important issue for many organisations and individuals due to various regulations that require data and operations to reside in specific geographic locations. Thus, cloud users may want to be sure that their stored data have not been relocated into unknown geographic regions that may compromise the security of their stored data. Albeshri et al. (2012) combined proof of storage (POS) protocols with distance-bounding protocols to address this problem. However, their scheme involves unnecessary delay when utilising typical POS schemes due to computational overhead at the server side. The aim of this paper is to improve the basic GeoProof protocol by reducing the computation overhead at the server side. We show how this can maintain the same level of security while achieving more accurate geographic assurance.
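As a rough illustration of the GeoProof idea of tying a storage proof to a round-trip-time bound, the toy sketch below issues a random block challenge and measures the response latency. The block-MAC response, the RTT threshold, and the fact that the verifier holds copies of the blocks are simplifications made only so the sketch is self-contained; this is not this paper's protocol, nor that of Albeshri et al. (2012).

```python
# Toy sketch of a GeoProof-style check: a proof-of-storage style challenge-response
# whose round-trip time bounds how far away the responding server can be.
import hashlib, hmac, os, time

key = os.urandom(32)
blocks = [os.urandom(4096) for _ in range(256)]          # outsourced file blocks

def server_respond(indices, nonce):
    """Server proves possession of the challenged blocks with a single MAC."""
    mac = hmac.new(key, nonce, hashlib.sha256)
    for i in indices:
        mac.update(blocks[i])
    return mac.digest()

def verifier_check(rtt_budget_s=0.02):
    nonce = os.urandom(16)
    indices = [int.from_bytes(os.urandom(2), "big") % len(blocks) for _ in range(8)]
    start = time.perf_counter()
    response = server_respond(indices, nonce)            # in reality: sent over the network
    rtt = time.perf_counter() - start
    expected = hmac.new(key, nonce, hashlib.sha256)
    for i in indices:
        expected.update(blocks[i])
    stored_ok = hmac.compare_digest(response, expected.digest())
    close_enough = rtt <= rtt_budget_s                    # speed-of-light bound on distance
    return stored_ok, close_enough, rtt

print(verifier_check())
```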
Abstract:
Many cell types form clumps or aggregates when cultured in vitro through a variety of mechanisms including rapid cell proliferation, chemotaxis, or direct cell-to-cell contact. In this paper we develop an agent-based model to explore the formation of aggregates in cultures where cells are initially distributed uniformly, at random, on a two-dimensional substrate. Our model includes unbiased random cell motion, together with two mechanisms which can produce cell aggregates: (i) rapid cell proliferation, and (ii) a biased cell motility mechanism where cells can sense other cells within a finite range, and will tend to move towards areas with higher numbers of cells. We then introduce a pair-correlation function which allows us to quantify aspects of the spatial patterns produced by our agent-based model. In particular, these pair-correlation functions are able to detect differences between domains populated uniformly at random (i.e. at the exclusion complete spatial randomness (ECSR) state) and those where the proliferation and biased motion rules have been employed - even when such differences are not obvious to the naked eye. The pair-correlation function can also detect the emergence of a characteristic inter-aggregate distance which occurs when the biased motion mechanism is dominant, and is not observed when cell proliferation is the main mechanism of aggregate formation. This suggests that applying the pair-correlation function to experimental images of cell aggregates may provide information about the mechanism associated with observed aggregates. As a proof of concept, we perform such analysis for images of cancer cell aggregates, which are known to be associated with rapid proliferation. The results of our analysis are consistent with the predictions of the proliferation-based simulations, which supports the potential usefulness of pair correlation functions for providing insight into the mechanisms of aggregate formation.
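A minimal sketch of a pair-correlation function of the kind described above follows; the periodic square domain, bin width, and the synthetic random and clustered point patterns are illustrative assumptions, not the paper's model or data.

```python
# Minimal sketch of a pair-correlation function for 2D point patterns, of the kind
# used to distinguish random placement from aggregation. g(r) > 1 at short range
# indicates clustering; g(r) ~ 1 everywhere indicates spatial randomness.
import numpy as np

def pair_correlation(points, L, dr):
    """points: (N, 2) coordinates in an L-by-L periodic domain; returns bin centres and g(r)."""
    N = len(points)
    diffs = points[:, None, :] - points[None, :, :]
    diffs -= L * np.round(diffs / L)                     # periodic (minimum-image) correction
    dists = np.sqrt((diffs ** 2).sum(-1))
    dists = dists[np.triu_indices(N, k=1)]               # each pair counted once
    edges = np.arange(0.0, L / 2, dr)
    counts, _ = np.histogram(dists, bins=edges)
    r = 0.5 * (edges[:-1] + edges[1:])
    expected = (N * (N - 1) / 2) * (2 * np.pi * r * dr) / L**2   # pairs expected under randomness
    return r, counts / expected

rng = np.random.default_rng(1)
L, N = 100.0, 400
random_pts = rng.uniform(0, L, size=(N, 2))
clustered = (rng.uniform(0, L, size=(20, 2))[:, None, :]
             + rng.normal(0, 2.0, size=(20, N // 20, 2))).reshape(N, 2) % L

for name, pts in [("random", random_pts), ("clustered", clustered)]:
    r, g = pair_correlation(pts, L, dr=2.0)
    print(name, "g(r) at short range:", np.round(g[:3], 2))
```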