83 results for quantization artifacts


Relevance:

10.00%

Publisher:

Abstract:

This paper investigates the use of visual artifacts to represent a complex adaptive system (CAS). The integrated master schedule (IMS) is one such visual, widely used in complex projects for scheduling, budgeting, and project management. In this paper, we discuss how the IMS outperforms traditional timelines and acts as a ‘multi-level and poly-temporal boundary object’ that visually represents the CAS. We report the findings of a case study project on the way the IMS mapped interactions, interdependencies, constraints and fractal patterns in a complex project. Finally, we discuss how the IMS was utilised as a complex boundary object by eliciting commitment, developing shared mental models, and facilitating negotiation through the layers of multiple interpretations from stakeholders.


Recent studies have started to explore context-awareness as a driver in the design of adaptable business processes. The emerging challenge of identifying and considering contextual drivers in the environment of a business process is well understood; however, typical methods used in business process modeling do not yet consider this additional contextual information in their process designs. In this chapter, we describe our research towards innovative and advanced process modeling methods that include mechanisms to incorporate relevant contextual drivers, and their impacts on business processes, in process design models. We report on our ongoing work with an Australian insurance provider and describe the design science approach we employed to develop these innovative and useful artifacts as part of a context-aware method framework. We discuss the utility of these artifacts in an application in the claims handling process at the case organization.


Though the value of a process-centred view for the understanding and (re-)design of corporations has been widely accepted, our understanding of the research process in Information Systems (IS) remains superficial. A process-centred view on IS research considers the conduct of a research project as a sequence of activities involving resources, data and research artifacts. As such, it helps to reflect on more effective ways to conduct IS research, to consolidate and compare diverse practices, and to complement the focus on research methodologies with research project practices. This paper takes a first step towards a discipline of ‘Research Process Management’ by exploring the features of research processes and by presenting a preliminary approach to research process design that can facilitate the modelling of IS research. The case study method and the design science research method are used as examples to demonstrate the potential of such reference research process models.


In the era of Web 2.0, huge volumes of consumer reviews are posted to the Internet every day. Manual approaches to detecting and analyzing fake reviews (i.e., spam) are not practical due to the problem of information overload. However, the design and development of automated methods of detecting fake reviews is a challenging research problem. The main reason is that fake reviews are specifically composed to mislead readers, so they may appear the same as legitimate reviews (i.e., ham). As a result, discriminatory features that would enable individual reviews to be classified as spam or ham may not be available. Guided by the design science research methodology, the main contribution of this study is the design and instantiation of novel computational models for detecting fake reviews. In particular, a novel text mining model is developed and integrated into a semantic language model for the detection of untruthful reviews. The models are then evaluated based on a real-world dataset collected from amazon.com. The results of our experiments confirm that the proposed models outperform other well-known baseline models in detecting fake reviews. To the best of our knowledge, the work discussed in this article represents the first successful attempt to apply text mining methods and semantic language models to the detection of fake consumer reviews. A managerial implication of our research is that firms can apply our design artifacts to monitor online consumer reviews to develop effective marketing or product design strategies based on genuine consumer feedback posted to the Internet.
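The study's actual models combine text mining with a semantic language model; purely as an illustrative sketch of the underlying spam/ham classification idea (the training data, function names and smoothing choices below are invented for this example, not taken from the paper), a minimal unigram naive Bayes classifier looks like this:

```python
import math
from collections import Counter

def train(docs):
    """Build per-class word counts and priors from (text, label) pairs."""
    words = {"spam": Counter(), "ham": Counter()}
    priors = Counter()
    for text, label in docs:
        words[label].update(text.lower().split())
        priors[label] += 1
    vocab = set(words["spam"]) | set(words["ham"])
    return words, priors, vocab

def classify(text, model):
    """Return the label with the highest Laplace-smoothed log posterior."""
    words, priors, vocab = model
    total_docs = sum(priors.values())
    def score(label):
        s = math.log(priors[label] / total_docs)
        denom = sum(words[label].values()) + len(vocab)
        for w in text.lower().split():
            # add-one smoothing so unseen words do not zero out the posterior
            s += math.log((words[label][w] + 1) / denom)
        return s
    return max(("spam", "ham"), key=score)

# toy training reviews, invented for illustration only
reviews = [
    ("best product ever buy now amazing deal", "spam"),
    ("amazing amazing buy buy unbeatable price", "spam"),
    ("the battery lasts two days and the screen is sharp", "ham"),
    ("solid build but the battery could be better", "ham"),
]
model = train(reviews)
```

As the abstract notes, this kind of surface-feature classifier struggles precisely when fake reviews mimic legitimate vocabulary, which is what motivates the paper's semantic language model.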


Existing secure software development principles tend to focus on coding vulnerabilities, such as buffer or integer overflows, that apply to individual program statements, or issues associated with the run-time environment, such as component isolation. Here we instead consider software security from the perspective of potential information flow through a program’s object-oriented module structure. In particular, we define a set of quantifiable "security metrics" which allow programmers to quickly and easily assess the overall security of a given source code program or object-oriented design. Although measuring quality attributes of object-oriented programs for properties such as maintainability and performance has been well-covered in the literature, metrics which measure the quality of information security have received little attention. Moreover, existing security-relevant metrics assess a system either at a very high level, i.e., the whole system, or at a fine level of granularity, i.e., with respect to individual statements. These approaches make it hard and expensive to recognise a secure system from an early stage of development. Instead, our security metrics are based on well-established compositional properties of object-oriented programs (i.e., data encapsulation, cohesion, coupling, composition, extensibility, inheritance and design size), combined with data flow analysis principles that trace potential information flow between high- and low-security system variables. We first define a set of metrics to assess the security quality of a given object-oriented system based on its design artifacts, allowing defects to be detected at an early stage of development. We then extend these metrics to produce a second set applicable to object-oriented program source code.
The resulting metrics make it easy to compare the relative security of functionally equivalent system designs or source code programs so that, for instance, the security of two different revisions of the same system can be compared directly. This capability is further used to study the impact of specific refactoring rules on system security more generally, at both the design and code levels. By measuring the relative security of various programs refactored using different rules, we thus provide guidelines for the safe application of refactoring steps to security-critical programs. Finally, to make it easy and efficient to measure a system design or program’s security, we have also developed a stand-alone software tool which automatically analyses and measures the security of UML designs and Java program code. The tool’s capabilities are demonstrated by applying it to a number of security-critical system designs and Java programs. Notably, the validity of the metrics is demonstrated empirically through measurements that confirm our expectation that program security typically improves as bugs are fixed, but worsens as new functionality is added.
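The thesis defines its metrics formally over properties such as encapsulation and coupling; purely to illustrate the general shape of a design-level security metric (the metric name, class model and scoring here are invented for this sketch, not the thesis's definitions), one could score how many high-security attributes a class exposes through its public interface:

```python
from dataclasses import dataclass

@dataclass
class ClassDesign:
    name: str
    attributes: dict        # attribute name -> "high" or "low" security level
    public_accessors: set   # attribute names readable through public methods

def exposure_ratio(cls: ClassDesign) -> float:
    """Fraction of high-security attributes the class exposes publicly:
    0.0 means fully encapsulated, 1.0 means every secret leaks out."""
    high = {a for a, level in cls.attributes.items() if level == "high"}
    if not high:
        return 0.0
    return len(high & cls.public_accessors) / len(high)
```

A lower ratio after a refactoring would indicate the change did not weaken encapsulation of security-critical data, mirroring the revision-to-revision comparisons described above.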


Background: Evolutionary biologists are often misled by convergence of morphology, and this has been common in the study of bird evolution. However, the use of molecular data sets has its own problems, and phylogenies based on short DNA sequences have the potential to mislead us too. The relationships among clades and the timing of the evolution of modern birds (Neoaves) have not yet been well resolved. Evidence of convergence of morphology remains controversial. With six new bird mitochondrial genomes (hummingbird, swift, kagu, rail, flamingo and grebe) we test the proposed Metaves/Coronaves division within Neoaves and the parallel radiations in this primary avian clade.

Results: Our mitochondrial trees did not return the Metaves clade that had been proposed based on one nuclear intron sequence. We suggest that the high number of indels within the seventh intron of the β-fibrinogen gene at this phylogenetic level, which left a dataset with not a single site across the alignment shared by all taxa, resulted in artifacts during analysis. With respect to the overall avian tree, we find the flamingo and grebe are sister taxa and basal to the shorebirds (Charadriiformes). Using a novel site-stripping technique for noise reduction, we found this relationship to be stable. The hummingbird/swift clade is outside the large and very diverse group of raptors, shore and sea birds. Unexpectedly, the kagu is not closely related to the rail in our analysis, but because neither the kagu nor the rail has close affinity to any taxa within this dataset of 41 birds, their placement is not yet resolved.

Conclusion: Our phylogenetic hypothesis based on 41 avian mitochondrial genomes (13,229 bp) rejects monophyly of the seven Metaves species, and we therefore conclude that the members of Metaves do not share a common evolutionary history within the Neoaves.
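Site-stripping removes the noisiest alignment positions before tree building. As a simplified sketch of that idea only (the cutoff, data and function name are invented here, not the paper's implementation), one can drop columns whose non-gap character diversity exceeds a threshold:

```python
def strip_sites(alignment, max_states=2):
    """Drop alignment columns with more than max_states distinct non-gap
    characters; highly variable (fast-evolving) sites are treated as noise."""
    names = list(alignment)
    length = len(alignment[names[0]])
    keep = []
    for i in range(length):
        states = {alignment[n][i] for n in names} - {"-"}
        if len(states) <= max_states:
            keep.append(i)
    return {n: "".join(alignment[n][i] for i in keep) for n in names}

# toy three-taxon alignment: column 0 has three states and is stripped
aln = {
    "hummingbird": "ACGT-A",
    "swift":       "GCTTCA",
    "kagu":        "TCTACA",
}
stripped = strip_sites(aln)
```

The relationship being tested (here, flamingo/grebe in the paper) is considered stable if it survives progressively stricter stripping.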


In this study we have found that the NMR detectability of 39K in rat thigh muscle may be substantially higher (up to 100% of total tissue potassium) than previously reported values of around 40%. The signal was found to consist of two superimposed components, one broad and one narrow, of approximately equal area. Investigations involving improvements in spectral parameters such as signal-to-noise ratio and baseline roll, together with computer simulations of spectra, show that the quality of the spectra has a major effect on the amount of signal detected, which is largely due to the loss of detectability of the broad signal component. In particular, lower-field spectrometers using conventional probes and detection methods generally have poorer signal-to-noise ratios and worse baseline roll artifacts, which make detection of a broad component of the muscle signal difficult.
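The two-component picture can be illustrated numerically: two Lorentzian lines of equal area but very different widths superimpose into one signal in which the broad half has a far lower peak height, which is why baseline roll can mask it. The linewidths below are arbitrary illustrative values, not the study's measurements:

```python
import math

def lorentzian(f, centre, fwhm, area):
    """Lorentzian lineshape with a given centre, full width at half
    maximum (fwhm) and integrated area."""
    hwhm = fwhm / 2.0
    return (area / math.pi) * hwhm / ((f - centre) ** 2 + hwhm ** 2)

def muscle_signal(f):
    """Broad + narrow components of equal area (illustrative widths)."""
    return lorentzian(f, 0.0, 200.0, 0.5) + lorentzian(f, 0.0, 20.0, 0.5)

# equal areas, but the narrow line's peak towers over the broad one's,
# so the broad component is the first casualty of poor spectral quality
broad_peak = lorentzian(0.0, 0.0, 200.0, 0.5)
narrow_peak = lorentzian(0.0, 0.0, 20.0, 0.5)
```

With these widths the narrow peak is exactly ten times taller than the broad one despite carrying the same area.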


In recent years, de-regulation in the airline industry and the introduction of low-cost carriers have combined to produce significant changes in the airport landscape. From an airport operator’s perspective, one of the most notable has been the shift of capital revenue from traditional airline sources (through exclusive-use, long-term lease arrangements) to passengers (by way of fees collected from ticket sales). As a result of these developments, passengers have become recognized as major stakeholders with the power to influence airport profitability. This link between passenger satisfaction and profitability has generated industry-wide interest in the “passenger experience”. In this paper, we define the factors which influence passenger experience, namely (a) artifacts, (b) services and (c) the terminal building, and explore the challenges that exist in current approaches to terminal design. On the basis of these insights, we propose a conceptual model of passenger experience, and motivate its use as a framework for further research into improving terminal design from a passenger-oriented perspective.


Recently, ‘business model’ and ‘business model innovation’ have gained substantial attention in management literature and practice. However, many firms lack the capability to develop a novel business model to capture the value from new technologies. Existing literature on business model innovation highlights the central role of ‘customer value’. Further, it suggests that firms need to experiment with different business models and engage in ‘trial-and-error’ learning when participating in business model innovation. Trial-and-error processes and prototyping with tangible artifacts are a fundamental characteristic of design. This conceptual paper explores the role of design-led innovation in facilitating firms to conceive and prototype novel and meaningful business models. It provides a brief review of the conceptual discussion on business model innovation and highlights the opportunities for linking it with the research stream of design-led innovation. We propose design-led business model innovation as a future research area and highlight the role that design-led prototyping and new types of artifacts and prototypes play within it. We present six propositions in order to outline future research avenues.


Camp Kilda (CK) is regarded as a quality early childhood center, and has many features you would typically expect to see in settings across Australia. The children are busily engaged in hands-on activity, playing indoors and outdoors, in the sandpit, under the shade of a big mango tree. The learning environment is planned to offer a variety of activities, including dramatic play, climbing equipment, balls, painting, drawing, clay, books, blocks, writing materials, scissors, and manipulative materials. The children are free to access all the materials, and they play either individually or in small groups. The teachers encourage and stimulate the children’s learning through interactions and thoughtful planning. Learning and assessment at CK is embedded within the cultural and social contexts of the children and their community. Children’s learning is made visible through a rich variety of strategies, including recorded observations, work samples, photographs, and other artifacts. Parents are actively encouraged to build on these “stories” of their children. Planning is based around the teachers’ analysis of the information they gather daily as they interact with the children and their families.


The increasing demand for mobile video has attracted much attention from both industry and researchers. To satisfy users and facilitate the usage of mobile video, it is necessary to provide them with optimal quality. As a result, quality of experience (QoE) has become an important focus in measuring the overall quality perceived by end-users, from the aspects of both objective system performance and subjective experience. However, due to the complexity of user experience and the diversity of resources (such as videos, networks and mobile devices), it is still challenging to develop QoE models for mobile video that can represent how user-perceived value varies with changing conditions. Previous QoE modelling research has two main limitations: the aspects influencing QoE are insufficiently considered, and acceptability as the user value is seldom studied. Focusing on these QoE modelling issues, two aims are defined in this thesis: (i) investigating the key influencing factors of mobile video QoE; and (ii) establishing QoE prediction models based on the relationships between user acceptability and the influencing factors, in order to help provide optimal mobile video quality. To achieve the first goal, a comprehensive user study was conducted. It investigated the main impacts on user acceptance: video encoding parameters such as quantization parameter, spatial resolution, frame rate, and encoding bitrate; video content type; mobile device display resolution; and user profiles including gender, preference for video content, and prior viewing experience. Results from both quantitative and qualitative analysis revealed the significance of these factors, as well as how and why they influenced user acceptance of mobile video quality.
Based on the results of the user study, statistical techniques were used to generate a set of QoE models that predict the subjective acceptability of mobile video quality from a group of measurable influencing factors, including encoding parameters and bitrate, content type, and mobile device display resolution. By applying the proposed QoE models in a mobile video delivery system, optimal decisions can be made for determining proper video coding parameters and for delivering the most suitable quality to users. This would lead to a consistent user experience across different mobile video content and efficient resource allocation. The findings of this research enhance the understanding of user experience in the field of mobile video, which will benefit mobile video design and research. This thesis presents a way of modelling QoE by emphasising user acceptability of mobile video quality, which provides a strong connection between technical parameters and user-desired quality. Managing QoE based on acceptability promises the potential for adapting to resource limitations and achieving an optimal QoE in the provision of mobile video content.
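Acceptability models of the kind described are commonly logistic in form. As a purely illustrative sketch (the coefficients, variables and function name below are invented placeholders, not the thesis's fitted models), acceptance probability can be expressed as a logistic function of log bitrate and display height:

```python
import math

def predict_acceptance(bitrate_kbps, height_px, weights=None):
    """Toy logistic acceptability model: probability that a viewer rates
    the delivered quality acceptable. Coefficients are illustrative
    placeholders, not fitted values from any study."""
    w = weights or {"bias": -4.0, "log_bitrate": 1.2, "log_height": 0.6}
    z = (w["bias"]
         + w["log_bitrate"] * math.log(bitrate_kbps)
         + w["log_height"] * math.log(height_px))
    return 1.0 / (1.0 + math.exp(-z))
```

A delivery system could sweep candidate encoding bitrates and pick the cheapest one whose predicted acceptance clears a target, which is the "optimal decision" use case the abstract describes.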


Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain, but non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed with deterministic (non-changing) inputs such as files and passwords. For such data types a 1-bit or one-character change can be significant; as a result, the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security aspects required of hash functions.
Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training on both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, and this is an essential requirement for non-invertibility. The method is also designed to produce features more suited for quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
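The quantizer-training step discussed above can be sketched as learning equal-probability thresholds from training features and then binarizing each feature against them. This is a minimal illustration under my own simplifying assumptions (quantile thresholds, plain binary codes); real robust-hash quantizers are more elaborate and often use Gray codes:

```python
def learn_thresholds(training_values, bits=2):
    """Learn 2**bits - 1 thresholds as equal-probability quantiles of the
    training data, so every quantization bin is used about equally often."""
    levels = 2 ** bits
    xs = sorted(training_values)
    return [xs[len(xs) * k // levels] for k in range(1, levels)]

def encode(value, thresholds):
    """Quantize a real-valued feature to its bin index and emit it as bits."""
    bin_idx = sum(value >= t for t in thresholds)
    width = len(thresholds).bit_length()  # levels - 1 thresholds -> bits per feature
    return format(bin_idx, "0{}b".format(width))

def hash_features(features, thresholds):
    """Concatenate per-feature bit strings into the final binary hash."""
    return "".join(encode(v, thresholds) for v in features)
```

Because the thresholds are derived from training data, anyone who knows them learns something about the feature distribution, which is the information-leakage concern the dissertation raises.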


In this study we develop a theorization of an Internet dating site as a cultural artifact. The site, Gaydar, is targeted at gay men. We argue that contemporary received representations of their sexuality figure heavily in the site’s focus by providing a cultural logic for the apparent ad hoc development trajectories of its varied commercial and non-commercial services. More specifically, we suggest that the growing sets of services related to the website are heavily enmeshed within current social practices and meanings. These practices and meanings are, in turn, shaped by the interactions and preferences of a variety of diverse groups involved in what is routinely seen within the mainstream literature as a singularly specific sexuality and cultural project. Thus, we attend to two areas – the influence of the various social engagements associated with Gaydar together with the further extension of its trajectory ‘beyond the web’. Through the case of Gaydar, we contribute a study that recognizes the need for attention to sexuality in information systems research and one which illustrates sexuality as a pivotal aspect of culture. We also draw from anthropology to theorize ICTs as cultural artifacts and provide insights into the contemporary phenomena of ICT enabled social networking.


Recent studies have started to explore context-awareness as a driver in the design of adaptable business processes. The emerging challenge of identifying and considering contextual drivers in the environment of a business process is well understood; however, typical methods and models for business process design do not yet consider this context. In this paper, we describe our work on the design of a method framework and appropriate models to enable a context-aware process design approach. We report on our ongoing work with an Australian insurance provider and describe the design science approach we employed to develop innovative and useful artifacts as part of a context-aware method framework. We discuss the utility of these artifacts in an application in the claims handling process at the case organization.


The assessment of choroidal thickness from optical coherence tomography (OCT) images of the human choroid is an important clinical and research task, since it provides valuable information regarding the eye’s normal anatomy and physiology, and changes associated with various eye diseases and the development of refractive error. Due to the time-consuming and subjective nature of manual image analysis, there is a need for the development of reliable objective automated methods of image segmentation to derive choroidal thickness measures. However, the detection of the two boundaries which delineate the choroid is a complicated and challenging task, in particular the detection of the outer choroidal boundary, due to a number of issues including: (i) the vascular ocular tissue is non-uniform and rich in non-homogeneous features, and (ii) the boundary can have low contrast. In this paper, an automatic segmentation technique based on graph-search theory is presented to segment the inner choroidal boundary (ICB) and the outer choroidal boundary (OCB) to obtain the choroid thickness profile from OCT images. Before the segmentation, the B-scan is pre-processed to enhance the two boundaries of interest and to minimize the artifacts produced by surrounding features. The algorithm to detect the ICB is based on a simple edge filter and a directional weighted map penalty, while the algorithm to detect the OCB is based on OCT image enhancement and a dual brightness probability gradient. The method was tested on a large data set of images from a pediatric (1083 B-scans) and an adult (90 B-scans) population, which were previously manually segmented by an experienced observer. The results demonstrate that the proposed method provides robust detection of the boundaries of interest and is a useful tool to extract clinical data.
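The graph-search step can be illustrated in miniature: treat each pixel as a node, connect each column to the next, and take the minimum-cost left-to-right path as the detected boundary. This toy version is invented for illustration (the paper's actual method adds edge filtering, directional weighting and brightness-gradient costs); it runs plain Dijkstra over a small cost array:

```python
import heapq

def trace_boundary(cost, max_jump=1):
    """Find the minimum-cost left-to-right path through a 2D cost image
    (rows x cols); the path's row in each column is the boundary estimate.
    max_jump limits how far the boundary may move between columns."""
    rows, cols = len(cost), len(cost[0])
    # state: (accumulated cost, column, row, path of rows so far)
    heap = [(cost[r][0], 0, r, (r,)) for r in range(rows)]
    heapq.heapify(heap)
    best = {}
    while heap:
        c, col, row, path = heapq.heappop(heap)
        if col == cols - 1:
            return list(path)  # first path to reach the last column is optimal
        if best.get((col, row), float("inf")) <= c:
            continue
        best[(col, row)] = c
        for dr in range(-max_jump, max_jump + 1):
            nr = row + dr
            if 0 <= nr < rows:
                heapq.heappush(heap, (c + cost[nr][col + 1], col + 1, nr, path + (nr,)))
    return []
```

In practice the cost image would come from the pre-processing and gradient stages the paper describes, so that pixels on the true boundary are cheap and everything else is expensive.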