93 results for Meyer–König and Zeller Operators

in Queensland University of Technology - ePrints Archive


Relevance: 100.00%

Abstract:

Collaborative user-led content creation by online communities, or produsage (Bruns 2008), has generated a variety of useful and important resources and other valuable outcomes, from open source software through Wikipedia to a variety of smaller-scale, specialist projects. These are often seen as standing in inherent opposition to commercial interests, and attempts to develop collaborations between community content creators and commercial partners have had mixed success to date. However, such tension between community and commerce is not inevitable, and there is substantial potential for more fruitful exchange and collaboration. This article contributes to the development of this understanding by outlining the key underlying principles of such participatory community processes and exploring the potential tensions which could arise between these communities and their potential external partners. It also sketches out potential approaches to resolving them.

Relevance: 100.00%

Abstract:

Food microstructure describes how a food's elements are arranged and how they interact. Researchers in this field benefit from new methods for examining microstructure and analysing the resulting images. Experiments were undertaken to study micro-structural changes in food material during drying. Micro-structural images were obtained using scanning electron microscopy for cubical potato samples at different moisture contents during drying. Physical parameters such as cell wall perimeter and area were calculated using an image identification algorithm based on edge detection and morphological operators. The algorithm was developed in Matlab.
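
The study's algorithm was written in Matlab and is not reproduced in the abstract; the Python sketch below (using scikit-image) only illustrates the general approach described: edge detection plus morphological operators to isolate cell walls, followed by per-cell area and perimeter measurement. The image file name and filter parameters are assumptions for illustration.

```python
# Illustrative sketch only; not the study's Matlab implementation.
from skimage import io, filters, morphology, measure, segmentation

image = io.imread("potato_sem_sample.png", as_gray=True)   # hypothetical SEM image

# Edge detection followed by morphological clean-up to close the cell walls.
edges = filters.sobel(image)
walls = edges > filters.threshold_otsu(edges)
walls = morphology.binary_closing(walls, morphology.disk(3))

# Cells are the regions enclosed by the walls; drop regions touching the border.
cells = measure.label(~walls)
cells = segmentation.clear_border(cells)

for region in measure.regionprops(cells):
    if region.area > 100:                                   # ignore small noise regions
        print(f"cell {region.label}: area = {region.area} px^2, "
              f"perimeter = {region.perimeter:.1f} px")
```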

Relevance: 100.00%

Abstract:

Airport efficiency is important because it has a direct impact on customer safety and satisfaction and therefore on the financial performance and sustainability of airports, airlines, and affiliated service providers. This is especially so in a world characterized by an increasing volume of both domestic and international air travel, price and other forms of competition between rival airports, airport hubs and airlines, and rapid and sometimes unexpected changes in airline routes and carriers. It also reflects expansion in the number of airports handling regional, national, and international traffic and the growth of complementary airport facilities including industrial, commercial, and retail premises. This has fostered a steadily increasing volume of research aimed at modeling and providing best-practice measures and estimates of airport efficiency using mathematical and econometric frontiers. The purpose of this chapter is to review these various methods as they apply to airports throughout the world. Apart from discussing the strengths and weaknesses of the different approaches and their key findings, the chapter also examines the steps researchers face as they move through the modeling process in defining airport inputs and outputs and the purported efficiency drivers. Accordingly, the chapter provides guidance to those conducting empirical research on airport efficiency and serves as an aid to aviation regulators, airport operators, and others in interpreting airport efficiency research outcomes.
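
The chapter is a methodological review and presents no code; as a concrete illustration of one class of frontier technique it surveys, the Python sketch below solves an input-oriented, constant-returns-to-scale DEA linear program for each airport. The airport input and output figures are invented for illustration and do not come from the chapter.

```python
# Minimal input-oriented CCR DEA sketch; data are invented for illustration.
import numpy as np
from scipy.optimize import linprog

# Hypothetical airports: rows are inputs (runways, staff) and outputs
# (passengers, aircraft movements); columns are airports.
X = np.array([[2, 3, 1, 4], [300, 500, 200, 650]], dtype=float)
Y = np.array([[5, 9, 3, 10], [60, 110, 40, 120]], dtype=float)

def dea_efficiency(o):
    """Minimise theta such that a convex combination of peer airports uses at
    most theta times airport o's inputs while producing at least its outputs."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # decision variables: [theta, lambdas]
    A_in = np.c_[-X[:, [o]], X]                 # sum_j lam_j * x_ij <= theta * x_io
    A_out = np.c_[np.zeros((s, 1)), -Y]         # sum_j lam_j * y_rj >= y_ro
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun

for o in range(X.shape[1]):
    print(f"airport {o}: DEA efficiency = {dea_efficiency(o):.3f}")
```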

Relevance: 100.00%

Abstract:

Purpose – The purpose of this paper is to discuss residents’ views of the social and physical environments in a co-housing setting and in a senior housing setting in Finland. The study also aims to point out important connections between well-being and the built environment. Design/methodology/approach – The data include interviews and survey responses gathered in the two cases. Results are presented and analysed for each case, followed by discussion and conclusions. Findings – The findings show that the physical environment and common areas play an important role in activating residents. When well-designed common areas exist, a higher level of engagement can be achieved by involving residents in the planning and running of activities. Research limitations/implications – This paper discusses residents’ experiences in two Finnish housing settings and focuses on the housing market in Finland. Practical implications – The findings encourage investors and housing operators to design and invest in common areas that can activate residents and create social contacts. Investors also have to pay attention to the way these developments are managed. Originality/value – This study is the first to investigate the Finnish co-housing setting and to compare social and physical environments in a co-housing and a senior housing setting.

Relevance: 100.00%

Abstract:

There is much anecdotal evidence and academic argument that the location of a business influences its value; that is, some businesses appear to be worth more than others because of their location. This is particularly so in the tourism industry. Within the destination literature, many factors have been posited to explain why business valuation varies, ranging from access to markets and availability of labor to climate and surrounding services. Given that business value is such a fundamental principle underpinning the viability of the tourist industry, through its relationship with pricing, business acquisition, and investment, it is surprising that scant research has sought to quantify the relative premium associated with geographic locations. This study proposes a novel way to estimate geographic brand premium. Specifically, the approach translates valuation techniques from financial economics to quantify the incremental value derived from businesses operating in a particular geographic region, producing a geographic brand premium. The article applies the technique to a well-known tourist destination in Australia, and the results are consistent with a positive value of brand equity in the key industries and are of a plausible order of magnitude. The article carries strong implications for business and tourism operators in terms of valuation, pricing, and investment; more generally, the approach is potentially useful to local authorities and business associations when deciding how much resource and effort should be devoted to brand protection.

Relevance: 100.00%

Abstract:

This paper sets out to examine, from published literature and crash data analyses, whether alcohol in bicycle crashes is an issue about which we should be concerned. It discusses factors that have the potential to increase the number of bicycle crashes in which alcohol is involved (such as growth in the size and diversity of the cyclist population, and the balance and coordination demands of cycling) and factors which may reduce the importance of alcohol in bicycle crashes (such as time-of-day factors and child riders). It also examines data availability issues that contribute to difficulties in determining the true magnitude of the issue. Methods: This paper reviews previous research and reports analyses of data from Queensland, Australia, that examine the role of alcohol in police-reported road crashes. In Queensland it is an offence to ride a bicycle or drive a motor vehicle with a BAC exceeding 0.05% (or lower for novice and professional drivers). Results: In the five years 2003-2007, alcohol was reported as involved in 165 bicycle crashes (4%). The bicycle rider was coded as “under the influence” or “over the prescribed BAC limit” in 15 single-unit crashes (12%). In multi-vehicle bicycle crashes, alcohol involvement was reported for 16 cyclists (0.4%) and 110 operators of other vehicles (3%). Additional analyses of the characteristics of cyclist crashes involving alcohol and of the importance of missing data are discussed in the paper. Conclusion: The increase in participation in cycling and the vulnerability of cyclists to injuries support the need to examine the role of alcohol in bicycle crashes. Current data suggest that alcohol on the part of the vehicle driver is a larger concern than alcohol on the part of the cyclist, but improvements in data collection are needed before more precise conclusions can be drawn.
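
The Queensland analyses themselves are not reproduced in the abstract; the short Python sketch below only illustrates how alcohol involvement rates of this kind might be tabulated from police-reported crash records. The file name and column names are hypothetical.

```python
# Illustrative sketch only; file and column names are hypothetical.
import pandas as pd

crashes = pd.read_csv("qld_bicycle_crashes_2003_2007.csv")

# Overall share of bicycle crashes with any reported alcohol involvement
# (assumes a 0/1 indicator column).
print(f"alcohol involved in {crashes['alcohol_involved'].mean():.1%} of crashes")

# Cyclist versus other-vehicle-operator alcohol involvement, by crash type.
summary = (crashes
           .groupby("crash_type")[["cyclist_alcohol", "other_driver_alcohol"]]
           .mean())
print(summary)
```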

Relevance: 100.00%

Abstract:

This paper presents a modified approach to evaluating access control policy similarity and dissimilarity, based on the proposal by Lin et al. (2007). Lin et al.'s policy similarity approach is intended as a filter stage that identifies similar XACML policies which can then be analysed further using more computationally demanding techniques based on model checking or logical reasoning. This paper improves Lin et al.'s similarity computation and also proposes a mechanism to calculate a dissimilarity score by identifying related policies that are likely to produce different access decisions. Departing from the original algorithm, the modifications take into account policy obligations, the rule or policy combining algorithm, and the operators between attribute names and values. The algorithms are useful in activities involving parties from multiple security domains, such as secured collaboration or secured task distribution. They allow various comparison options for evaluating policies while retaining control over the restriction level via a number of thresholds and weight factors.
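
The modified algorithm is not given in the abstract; the Python sketch below is only a schematic of a weighted similarity score that factors in the combining algorithm, obligations and attribute operators. The policy representation, weights and Jaccard-style comparisons are assumptions for illustration rather than the authors' formulation, and the paper's dissimilarity score is a separate calculation, not simply one minus similarity.

```python
# Schematic only; not the authors' algorithm. Weights and thresholds would be
# tuned to control the restriction level, as described in the abstract.
from dataclasses import dataclass, field

@dataclass
class PolicySummary:
    combining_alg: str                      # e.g. "deny-overrides"
    obligations: set = field(default_factory=set)
    attributes: dict = field(default_factory=dict)   # name -> {(operator, value)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a or b) else 1.0

def similarity(p, q, w_alg=0.3, w_obl=0.2, w_attr=0.5):
    s_alg = 1.0 if p.combining_alg == q.combining_alg else 0.0
    s_obl = jaccard(p.obligations, q.obligations)
    names = set(p.attributes) | set(q.attributes)
    s_attr = (sum(jaccard(p.attributes.get(n, set()), q.attributes.get(n, set()))
                  for n in names) / len(names)) if names else 1.0
    return w_alg * s_alg + w_obl * s_obl + w_attr * s_attr

p = PolicySummary("deny-overrides", {"log"}, {"role": {("==", "doctor")}})
q = PolicySummary("permit-overrides", {"log"}, {"role": {("==", "nurse")}})
print(f"similarity score = {similarity(p, q):.2f}")
```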

Relevance: 100.00%

Abstract:

Characteristics of surveillance video generally include low resolution and poor quality due to environmental, storage and processing limitations. It is extremely difficult for computers and human operators to identify individuals from these videos. To overcome this problem, super-resolution can be used in conjunction with an automated face recognition system to enhance the spatial resolution of video frames containing the subject and to narrow down the number of manual verifications performed by the human operator by presenting a list of the most likely candidates from the database. As the super-resolution reconstruction process is ill-posed, visual artifacts are often generated as a result. These artifacts can be visually distracting to humans and/or affect machine recognition algorithms. While it is intuitive that higher resolution should lead to improved recognition accuracy, the effects of super-resolution and such artifacts on face recognition performance have not been systematically studied. This paper aims to address this gap while illustrating that super-resolution allows more accurate identification of individuals from low-resolution surveillance footage. The proposed optical flow-based super-resolution method is benchmarked against Baker et al.’s hallucination and Schultz et al.’s super-resolution techniques on images from the Terrascope and XM2VTS databases. Ground truth and interpolated images were also tested to provide a baseline for comparison. Results show that a suitable super-resolution system can improve the discriminability of surveillance video and enhance face recognition accuracy. The experiments also show that Schultz et al.’s method fails when dealing with surveillance footage due to its assumption of rigid objects in the scene. The hallucination and optical flow-based methods performed comparably, with the optical flow-based method producing fewer of the visually distracting artifacts that interfere with human recognition.
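
The authors' optical flow-based super-resolution method is not reproduced in the abstract; the Python/OpenCV sketch below only illustrates the registration-and-fusion idea behind such methods: align low-resolution frames to a reference with dense optical flow, fuse them, and upscale. A genuine super-resolution reconstruction would solve an inverse problem rather than simply averaging, and all parameter choices here are assumptions.

```python
# Crude illustration of flow-based registration and fusion; not the paper's method.
import cv2
import numpy as np

def fuse_with_optical_flow(frames, scale=4):
    """frames: list of same-sized grayscale uint8 low-resolution images."""
    ref = frames[0]
    h, w = ref.shape
    acc = ref.astype(np.float32).copy()
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    for frm in frames[1:]:
        # Dense optical flow from the reference frame to the current frame.
        flow = cv2.calcOpticalFlowFarneback(ref, frm, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Warp the current frame onto the reference grid and accumulate it.
        acc += cv2.remap(frm.astype(np.float32),
                         grid_x + flow[..., 0], grid_y + flow[..., 1],
                         cv2.INTER_LINEAR)
    fused = acc / len(frames)
    # Upscale the fused estimate; a real SR method would also deblur/deconvolve.
    return cv2.resize(fused, (w * scale, h * scale), interpolation=cv2.INTER_CUBIC)

# Usage: frames = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in frame_paths]
# hi_res = fuse_with_optical_flow(frames)
```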

Relevance: 100.00%

Abstract:

Although transit travel time variability is essential for understanding the deterioration of reliability and for optimising transit schedules and route choice, it has not attracted enough attention in the literature. This paper proposes public transport-oriented definitions of travel time variability and explores the distributions of public transport travel time using Transit Signal Priority data. First, definitions of public transport travel time variability are established by extending the common definitions of variability in the literature and by using route and service data of public transport vehicles. Second, the paper explores the distribution of public transport travel time. A new approach is proposed for analysing the distributions for all transit vehicles as well as for vehicles from a specific route. The lognormal distribution is shown to best describe public transport travel times from the same route and service. The methods described in this study could be of interest to both traffic managers and transit operators for planning and managing transit systems.
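
As a concrete illustration of the distribution-fitting step described above, the Python sketch below fits a lognormal distribution to travel times from a single route and service and checks the fit; the travel-time values are invented for illustration.

```python
# Illustrative lognormal fit to travel times for one route/service (invented data).
import numpy as np
from scipy import stats

travel_times = np.array([12.1, 13.4, 11.8, 15.0, 12.9, 14.2,
                         13.1, 16.5, 12.4, 13.8, 14.9, 12.7])   # minutes

# Fit a lognormal with the location fixed at zero, as is common for durations.
shape, loc, scale = stats.lognorm.fit(travel_times, floc=0)

# Goodness of fit via a Kolmogorov-Smirnov test against the fitted distribution.
ks_stat, p_value = stats.kstest(travel_times, "lognorm", args=(shape, loc, scale))
print(f"sigma = {shape:.3f}, median = {scale:.2f} min, KS p-value = {p_value:.3f}")
```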

Relevance: 100.00%

Abstract:

There are a number of pressing issues facing contemporary online environments that are causing disputes among participants and platform operators and increasing the likelihood of external regulation. A number of solutions have been proposed, including industry self-governance, top-down regulation and emergent self-governance such as EVE Online’s “Council of Stellar Management”. However, none of these solutions seems entirely satisfactory: they face challenges from developers who fear regulators will not understand their platforms and from players who feel they are not sufficiently empowered to influence the platform, while many authors have raised concerns over the implementation of top-down regulation and have argued that the industry may be well served by pre-empting such action. This paper considers case studies of EVE Online and the offshore gambling industry, and asks whether a version of self-governance may be suitable for the future of the industry.

Relevance: 100.00%

Abstract:

Current governance challenges facing the global games industry are heavily dominated by online games. Whilst much academic and industry attention has been afforded to Virtual Worlds, the more pressing contemporary challenges may arise in casual games, especially when found on social networks. As authorities are faced with an increasing volume of disputes between participants and platform operators, the likelihood of external regulation increases, and the effect that such regulation would have on the industry – both internationally and within specific regions – is unclear. Kelly (2010) argues that “when you strip away the graphics of these [social] games, what you are left with is simply a button [...] You push it and then the game returns a value of either Win or Lose”. He notes that while “every game developer wants their game to be played, preferably addictively, because it’s so awesome”, these mechanics lead not to “addiction of engagement through awesomeness” but “the addiction of compulsiveness”, surmising that “the reality is that they’ve actually sort-of kind-of half-intentionally built a virtual slot machine industry”. If such core elements of social game design are questioned, this gives cause to question the real-money options offered to circumvent them. With players able to purchase virtual currency and speed the completion of tasks, the money invested by the 20% purchasing in-game benefits (Zainwinger, 2012) may well be the result of compulsion. The decision by the Japanese Consumer Affairs Agency to investigate the ‘Kompu Gacha’ mechanic (in which players are rewarded for completing a set of items obtained through purchasing virtual goods such as mystery boxes), and the resultant verdict that such mechanics should be regulated through gambling legislation, demonstrate that politicians are beginning to look at the mechanics deployed in these environments. Purewal (2012) states that “there’s a reasonable argument that complete gacha would be regulated under gambling law under at least some (if not most) Western jurisdictions”. This paper explores the governance challenges within these games and platforms, their role in the global industry, and current practice amongst developers in Australia and the United States in addressing such challenges.

Relevance: 100.00%

Abstract:

Much of the work currently occurring in the field of Quantum Interaction (QI) relies upon Projective Measurement. This is perhaps not optimal: cognitive states are not nearly as well behaved as standard quantum mechanical systems; they exhibit violations of repeatability, and the operators that we use to describe measurements do not appear to be naturally orthogonal in cognitive systems. Here we attempt to map the formalism of Positive Operator Valued Measure (POVM) theory into the domain of semantic memory, showing how it might be used to construct Bell-type inequalities.
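
The paper's mapping into semantic memory is not reproduced here; the numerical Python sketch below only illustrates what a POVM is in two dimensions: a set of positive semidefinite effects that sum to the identity, giving outcome probabilities without requiring orthogonal projectors or repeatable outcomes. The sharpness parameter and example state are arbitrary choices.

```python
# Toy two-outcome POVM: unsharp effects, not orthogonal projectors.
import numpy as np

eta = 0.8                                      # assumed "sharpness" parameter
P0 = np.array([[1, 0], [0, 0]], dtype=complex)
P1 = np.array([[0, 0], [0, 1]], dtype=complex)
E0 = eta * P0 + (1 - eta) / 2 * np.eye(2)      # positive semidefinite effects
E1 = eta * P1 + (1 - eta) / 2 * np.eye(2)

assert np.allclose(E0 + E1, np.eye(2))         # completeness: effects sum to I
assert np.all(np.linalg.eigvalsh(E0) >= 0)     # positivity

psi = np.array([np.cos(0.3), np.sin(0.3)], dtype=complex)   # example state
probs = [float(np.real(psi.conj() @ E @ psi)) for E in (E0, E1)]
print("outcome probabilities:", probs)
```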

Relevance: 40.00%

Abstract:

Background: Cancer can be a distressing experience for cancer patients and carers, impacting on psychological, social, physical and spiritual functioning. However, health professionals often fail to detect distress in their patients due to time constraints and a lack of experience. Also, with the focus on the patient, carer needs are often overlooked. This study investigated the acceptability of brief distress screening with the Distress Thermometer (DT) and Problem List (PL) to operators of a community-based telephone helpline, as well as to cancer patients and carers calling the service. Methods: Operators (n = 18) monitored usage of the DT and PL with callers (cancer patients/carers, >18 years, and English-speaking) from September to December 2006 (n = 666). The DT is a single-item, 11-point scale for rating level of distress; the associated PL identifies the cause of distress. Results: The DT and PL were used with 90% of eligible callers, most providing valid responses. Benefits included having an objective, structured and consistent means for distress screening and triage to supportive care services. Reported challenges included the apparent inappropriateness of the tools for some calls due to the nature of the call or the level of caller distress, the DT numeric scale, and the level of operator training. Conclusions: We observed positive outcomes from using the DT and PL, although operators reported some challenges. Overcoming these challenges may improve distress screening, particularly by less experienced clinicians, and further development of the PL items and DT scale may assist with administration. The DT and PL allow clinicians to direct and prioritise interventions or referrals, although ongoing training and support are critical in distress screening.

Relevance: 40.00%

Abstract:

The provision of effective training of supervisors and operators is essential if sugar factories are to operate profitably and in an environmentally sustainable and safe manner. The benefits of having supervisor and operator staff with a high level of operational skills are reduced stoppages, increased recovery, improved sugar quality, reduced damage to equipment, and reduced OH&S and environmental impacts. Training of new operators and supervisors in factories has traditionally relied on on-the-job training of the new or inexperienced staff by experienced supervisors and operators, supplemented by courses conducted by contractors such as Sugar Research Institute (SRI). However, there is clearly a need for staff to be able to undertake training at any time, drawing on the content of online courses as required. An improved methodology for the training of factory supervisors and operators has been developed by QUT on behalf of a syndicate of mills. The new methodology provides ‘at factory’ learning via self-paced modules. Importantly, the training resources for each module are designed to support the training programs within sugar factories, thereby establishing a benchmark for training across the sugar industry. The modules include notes, training guides and session plans, guidelines for walkthrough tours of the stations, learning activities, resources such as videos, animations and job aids, and competency assessments. The materials are available on the web for registered users in Australian mills, and many activities are best undertaken online. Apart from a few interactive online resources, the materials for each module can also be downloaded. The acronym SOTrain (Supervisor and Operator Training) has been applied to the new training program.

Relevance: 30.00%

Abstract:

We generalize the classical notion of Vapnik–Chervonenkis (VC) dimension to ordinal VC-dimension, in the context of logical learning paradigms. Logical learning paradigms encompass the numerical learning paradigms commonly studied in Inductive Inference. A logical learning paradigm is defined as a set W of structures over some vocabulary, together with a set D of first-order formulas that represent data. The sets of models of ϕ in W, where ϕ varies over D, generate a natural topology on W. We show that if D is closed under boolean operators, then the notion of ordinal VC-dimension offers a perfect characterization for the problem of predicting the truth of the members of D in a member of W, with an ordinal bound on the number of mistakes. This shows that the notion of VC-dimension has a natural interpretation in Inductive Inference, when cast into a logical setting. We also study the relationships between predictive complexity, selective complexity (a variation on predictive complexity), and mind change complexity. The assumptions that D is closed under boolean operators and that W is compact often play a crucial role in establishing connections between these concepts. We then consider a computable setting with effective versions of the complexity measures, and show that the equivalence between ordinal VC-dimension and predictive complexity fails. More precisely, we prove that the effective ordinal VC-dimension of a paradigm can be defined when all other effective notions of complexity are undefined. On a better note, when W is compact, all effective notions of complexity are defined, though they are not related as in the noncomputable version of the framework.
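
As a reference point for the classical notion being generalized, the brute-force Python sketch below checks shattering for a finite concept class over a finite domain and returns its classical VC dimension; it does not implement the ordinal or effective notions studied in the paper, and the threshold-concept example is illustrative.

```python
# Brute-force classical VC dimension for a finite concept class (illustration only).
from itertools import combinations

def shatters(concepts, points):
    """True if the concepts realise every +/- labelling of the given points."""
    patterns = {tuple(p in c for p in points) for c in concepts}
    return len(patterns) == 2 ** len(points)

def vc_dimension(concepts, domain):
    d = 0
    for k in range(1, len(domain) + 1):
        if any(shatters(concepts, pts) for pts in combinations(domain, k)):
            d = k
    return d

# Example: threshold concepts {x <= t} over a small domain have VC dimension 1.
domain = [1, 2, 3, 4]
thresholds = [{x for x in domain if x <= t} for t in range(5)]
print(vc_dimension(thresholds, domain))        # -> 1
```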