245 results for Prove
Abstract:
In 2006, the administrators of the Australian virtual reference service, AskNow, entered the Instant Messaging (IM) arena. One of the first large-scale, collaborative IM services in the world, the AskNow IM trial provided a unique opportunity to prove IM virtual reference as a concept, as well as to test the technology itself. This paper will discuss the rationale and impetus for the trial, explore the successes and stumbling blocks encountered during the establishment and evolution of the trial and the service model, examine the lessons learnt throughout the trial, and conclude by discussing the way forward for IM services and virtual reference.
Abstract:
We address the problem of constructing randomized online algorithms for the Metrical Task Systems (MTS) problem on a metric δ against an oblivious adversary. Restricting our attention to the class of “work-based” algorithms, we provide a framework for designing algorithms that uses the technique of regularization. For the case when δ is a uniform metric, we exhibit two algorithms that arise from this framework, and we prove a bound on the competitive ratio of each. We show that the second of these algorithms is ln n + O(log log n) competitive, which is the current state of the art for the uniform MTS problem.
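For context, the competitive ratio referred to here is the standard notion for randomized online algorithms against an oblivious adversary; the notation below is generic rather than taken from the paper. A randomized algorithm A is c-competitive if there is a constant b such that, for every request sequence σ,

```latex
\mathbb{E}\left[\mathrm{cost}_{A}(\sigma)\right] \;\le\; c \cdot \mathrm{cost}_{\mathrm{OPT}}(\sigma) + b .
```

The stated result gives c = ln n + O(log log n) when the underlying metric is uniform on n points.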
Abstract:
Crowdsourcing harnesses the potential of large and open networks of people. It is a relatively new phenomenon and has attracted substantial interest in practice. Related research, however, lacks a theoretical foundation. We propose a system-theoretical perspective on crowdsourcing systems to address this gap and illustrate its applicability by using it to classify crowdsourcing systems. By deriving two principal dimensions from theory, we identify four fundamental types of crowdsourcing systems that help to distinguish important features of such systems. We analyse their respective characteristics and discuss implications and requirements for various aspects related to the design of such systems. Our results demonstrate that systems theory can inform the study of crowdsourcing systems. The identified system types and the implications for their design may prove useful for researchers to frame future studies and for practitioners to identify the right crowdsourcing systems for a particular purpose.
Abstract:
That bloggers and other independent online commentators criticise, correct, and otherwise challenge conventional journalism has been known for years, but has yet to be fully accepted by journalists; hostilities between the media establishment and the new generation of citizen journalists continue to flare up from time to time. The old gatekeeping monopoly of the mass media has been challenged by the new practice of gatewatching: individual bloggers and communities of commentators may not report the news first-hand, but they curate and evaluate the news and other information provided by official sources, and thus provide an important service. This now takes place ever more rapidly, almost in real time, using the latest social networks, which disseminate, share, comment on, question, and debunk news reports within minutes, and using additional platforms that enable fast and effective ad hoc collaboration between users. When hundreds of volunteers can prove within a few days that a German minister has been guilty of serious plagiarism, and when the world first learns of earthquakes and tsunamis via Twitter – how does journalism manage to keep up?
Abstract:
Many modern business environments employ software to automate the delivery of workflows, whereas workflow design and generation remains a laborious technical task for domain specialists. Several different approaches have been proposed for deriving workflow models. Some rely on process data mining, whereas others derive workflow models from operational structures, domain-specific knowledge or workflow model compositions from knowledge bases. Many approaches draw on principles from automatic planning, but they are conceptual in nature and lack mathematical justification. In this paper we present a mathematical framework for deducing tasks in workflow models from plans in mechanistic or strongly controlled work environments, with a focus on automatic plan generation. In addition, we define a composition operator and prove that it is associative, which permits crisp hierarchical task compositions for workflow models through a set of mathematical deduction rules. The result is a logical framework that can be used to prove tasks in workflow hierarchies from operational information about work processes and machine configurations in controlled or mechanistic work environments.
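The associativity result can be stated abstractly as follows; the symbols here are illustrative, and the actual operator and task algebra are defined in the paper. For a composition operator ∘ on tasks,

```latex
(t_1 \circ t_2) \circ t_3 \;=\; t_1 \circ (t_2 \circ t_3) \qquad \text{for all tasks } t_1, t_2, t_3 ,
```

so nested compositions can be written without parentheses and hierarchical workflow models compose unambiguously.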
Abstract:
This thesis presents the outcomes of a comprehensive research study undertaken to investigate the influence of rainfall and catchment characteristics on urban stormwater quality. The knowledge created is expected to contribute to a greater understanding of urban stormwater quality and thereby enhance the design of stormwater quality treatment systems. The research study was undertaken based on selected urban catchments in Gold Coast, Australia. The research methodology included field investigations, laboratory testing, computer modelling and data analysis. Both univariate and multivariate data analysis techniques were used to investigate the influence of rainfall and catchment characteristics on urban stormwater quality. The rainfall characteristics investigated included average rainfall intensity and rainfall duration whilst catchment characteristics included land use, impervious area percentage, urban form and pervious area location. The catchment scale data for the analysis was obtained from four residential catchments, including rainfall-runoff records, drainage network data, stormwater quality data and land use and land cover data. Pollutant build-up samples were collected from twelve road surfaces in residential, commercial and industrial land use areas. The relationships between rainfall characteristics, catchment characteristics and urban stormwater quality were investigated based on residential catchments and then extended to other land uses. Based on the influence rainfall characteristics exert on urban stormwater quality, rainfall events can be classified into three different types, namely, high average intensity-short duration (Type 1), high average intensity-long duration (Type 2) and low average intensity-long duration (Type 3). This provides an innovative approach compared to conventional modelling, which does not commonly relate stormwater quality to rainfall characteristics. Additionally, it was found that the threshold intensity for pollutant wash-off from urban catchments is much less than for rural catchments. High average intensity-short duration rainfall events are cumulatively responsible for the generation of a major fraction of the annual pollutants load compared to the other rainfall event types. Additionally, rainfall events less than 1 year ARI such as 6-month ARI should be considered for treatment design as they generate a significant fraction of the annual runoff volume and, by implication, a significant fraction of the pollutants load. This implies that stormwater treatment designs based on larger rainfall events would not be feasible in the context of cost-effectiveness, efficiency in treatment performance and possible savings in land area needed. This also suggests that the simulation of long-term continuous rainfall events for stormwater treatment design may not be needed and that event-based simulations would be adequate. The investigations into the relationship between catchment characteristics and urban stormwater quality found that, other than conventional catchment characteristics such as land use and impervious area percentage, other catchment characteristics such as urban form and pervious area location also play important roles in influencing urban stormwater quality. These outcomes point to the fact that the conventional modelling approach in the design of stormwater quality treatment systems, which is commonly based on land use and impervious area percentage, would be inadequate.
It was also noted that small uniformly urbanised areas within a larger mixed catchment produce relatively lower variations in stormwater quality and, as expected, lower runoff volume, with the opposite being the case for large mixed-use urbanised catchments. Therefore, a decentralised approach to water quality treatment would be more effective than an "end-of-pipe" approach. The investigation of pollutant build-up on different land uses showed that pollutant build-up characteristics vary even within the same land use. Therefore, the conventional approach in stormwater quality modelling, which is based solely on land use, may prove to be inappropriate. Industrial land use has relatively higher variability in maximum pollutant build-up, build-up rate and particle size distribution than the other two land uses. However, commercial and residential land uses had relatively higher variations in nutrient and organic carbon build-up. Additionally, it was found that particle size distribution had relatively higher variability for all three land uses compared to the other build-up parameters. The high variability in particle size distribution for all land uses illustrates the dissimilarities associated with the fine and coarse particle size fractions even within the same land use and hence the variations in stormwater quality in relation to pollutants adsorbing to different particle sizes.
Abstract:
The monogeneric family Fergusoninidae consists of gall-forming flies that, together with Fergusobia (Tylenchida: Neotylenchidae) nematodes, form the only known mutualistic association between insects and nematodes. In this study, the entire 16,000 bp mitochondrial genome of Fergusonina taylori Nelson and Yeates was sequenced. The circular genome contains one encoding region including 27 genes and one non-coding A+T-rich region. The arrangement of the protein-coding, ribosomal RNA (rRNA) and transfer RNA (tRNA) genes was the same as that found in the ancestral insect. Nucleotide composition is highly A+T biased. All of the protein initiation codons are ATN, except for nad1 which begins with TTT. All 22 tRNA anticodons of F. taylori match those observed in Drosophila yakuba, and all form the typical cloverleaf structure except for tRNA-Ser (AGN), which lacks a dihydrouridine (DHU) arm. Secondary structural features of the rRNA genes of Fergusonina are similar to those proposed for other insects, with minor modifications. The mitochondrial genome of Fergusonina presented here may prove valuable for resolving the sister group to the Fergusoninidae, and expands the available mtDNA data sources for acalyptrates overall.
Abstract:
The study shows an alternative solution to existing efforts at solving the problem of how to centrally manage and synchronise users’ Multiple Profiles (MP) across multiple discrete social networks. Most social network users hold more than one social network account and utilise them in different ways depending on the digital context (Iannella, 2009a). They may, for example, enjoy friendly chat on Facebook, professional discussion on LinkedIn, and health information exchange on PatientsLikeMe. Therefore many web users need to manage disparate profiles across many distributed online sources. Maintaining these profiles is cumbersome, time-consuming, inefficient, and may lead to lost opportunity. In this thesis the researcher proposes a framework for the management of a user’s multiple online social network profiles. A demonstrator, called the Multiple Profile Manager (MPM), is showcased to illustrate the effectiveness of the framework. The MPM achieves the required profile management and synchronisation using a free, open, decentralized social networking platform (OSW) that was proposed by the Vodafone Group in 2010. The proposed MPM enables a user to create and manage an integrated profile (IP) and share/synchronise this profile with all their social networks. The necessary protocols to support the prototype are also proposed by the researcher. The MPM protocol specification defines an Extensible Messaging and Presence Protocol (XMPP) extension for sharing vCard and social network account information between the MPM Server, MPM Client, and social network sites (SNSs). The writer of this thesis adopted a research approach and a number of use cases for the implementation of the project. The use cases were created to capture the functional requirements of the MPM and to describe the interactions between users and the MPM. In the research a development process was followed in establishing the prototype and related protocols. The use cases were subsequently used to illustrate the prototype via screenshots taken of the MPM client interfaces. The use cases also played a role in evaluating the outcomes of the research, such as the framework, prototype, and the related protocols. An innovative application of this project is in the area of public health informatics. The researcher utilised the prototype to examine how the framework might benefit patients and physicians. The framework can greatly enhance health information management for patients and, more importantly, offer a more comprehensive personal health overview of patients to physicians. This will give a more complete picture of the patient’s background than is currently available and will prove helpful in providing the right treatment. The MPM prototype and related protocols have a high application value as they can be integrated into the real OSW platform and so serve users in the modern digital world. They also provide online users with a real platform for centrally storing their complete profile data, efficiently managing their personal information, and, moreover, synchronising the overall complete profile with each of their discrete profiles stored in their different social network sites.
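As a rough illustration of the kind of payload such an XMPP extension would carry, the sketch below builds an IQ stanza containing a vCard in the standard 'vcard-temp' namespace (XEP-0054). The recipient address, stanza id and field values are invented for the example; the MPM-specific schema is the one defined in the thesis and is not reproduced here.

```python
import xml.etree.ElementTree as ET

# Minimal sketch: an XMPP IQ stanza carrying a vCard payload in the standard
# 'vcard-temp' namespace (XEP-0054). The recipient JID, stanza id and field
# values are hypothetical; the actual MPM extension schema is defined in the thesis.
iq = ET.Element("iq", attrib={"type": "set", "id": "mpm-1", "to": "mpm.example.org"})
vcard = ET.SubElement(iq, "vCard", attrib={"xmlns": "vcard-temp"})
ET.SubElement(vcard, "FN").text = "Alice Example"   # formatted name
ET.SubElement(vcard, "NICKNAME").text = "alice"     # nickname shared across profiles

print(ET.tostring(iq, encoding="unicode"))
```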
Abstract:
The ubiquitin (Ub)-proteasome pathway is the major nonlysosomal pathway of proteolysis in human cells and accounts for the degradation of most short-lived, misfolded or damaged proteins. This pathway is important in the regulation of a number of key biological regulatory mechanisms. Proteins are usually targeted for proteasome-mediated degradation by polyubiquitinylation, the covalent addition of multiple units of the 76-amino-acid protein Ub, which are ligated to ε-amino groups of lysine residues in the substrate. Polyubiquitinylated proteins are degraded by the 26S proteasome, a large, ATP-dependent multicatalytic protease complex, which also regenerates monomeric Ub. The targets of this pathway include key regulators of cell proliferation and cell death. An alternative form of the proteasome, termed the immunoproteasome, also has important functions in the generation of peptides for presentation by MHC class I molecules. In recent years there has been a great deal of interest in the possibility that proteasome inhibitors, through elevation of the levels of proteasome targets, might prove useful as a novel class of anti-cancer drugs. Here we review the progress made to date in this area and highlight the potential advantages and weaknesses of this approach.
Abstract:
In Strong v Woolworths Ltd (t/as Big W) (2012) 285 ALR 420 the appellant was injured when she fell at a shopping centre outside the respondent’s premises. The appellant was disabled, having had her right leg amputated above the knee, and therefore walked with crutches. One of the crutches came into contact with a hot potato chip which was on the floor, causing the crutch to slip and the appellant to fall. The appellant sued in negligence, alleging that the respondent was in breach of its duty of care by failing to institute and maintain a cleaning system to detect spillages and foreign objects within its sidewalk sales area. The issue before the High Court was whether it could be established, on the balance of probabilities, when the hot chip had fallen onto the ground, so as to prove causation in fact...
Abstract:
There are several popular soil moisture measurement methods today, such as time domain reflectometry, electromagnetic (EM) wave, electrical and acoustic methods. Significant research has been dedicated to developing measurement methods based on these concepts, especially to achieve non-invasiveness. The EM wave method provides an advantage because it is non-invasive to the soil and does not require probes to penetrate or be buried in the soil. However, some EM methods are too complex, expensive and not portable enough for Wireless Sensor Network applications; for example, satellite- or UAV (Unmanned Aerial Vehicle)-based sensors. This research proposes a method for detecting changes in soil moisture using soil-reflected electromagnetic (SREM) waves from Wireless Sensor Networks (WSNs). Studies have shown that different levels of soil moisture affect the soil’s dielectric properties, such as relative permittivity and conductivity, and in turn change its reflection coefficients. The SREM wave method uses a transmitter adjacent to a WSN node whose sole purpose is to transmit wireless signals that are reflected by the soil. The strength of the reflected signal, which is determined by the soil’s reflection coefficients, is used to differentiate levels of soil moisture. The novelty of this method comes from using WSN communication signals to perform soil moisture estimation without the need for external sensors or invasive equipment. This innovative method is non-invasive, low cost and simple to set up. Three locations in Brisbane, Australia were chosen as experimental sites. The soil type at these locations contains 10–20% clay according to the Australian Soil Resource Information System. Six approximate levels of soil moisture (8, 10, 13, 15, 18 and 20%) were measured at each location, with each measurement consisting of 200 data points. In total, 3600 measurements were completed in this research, which is sufficient to achieve the research objective of assessing and proving the concept of the SREM wave method. These results are compared with reference data from a similar soil type to prove the concept. A fourth-degree polynomial analysis is used to generate an equation that estimates soil moisture from the received signal strength recorded by the SREM wave method.
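A minimal sketch of the fourth-degree polynomial fit described above, assuming hypothetical RSSI readings paired with the six approximate moisture levels; the values are invented for illustration and are not the thesis data.

```python
import numpy as np

# Illustrative only: map received signal strength (RSSI, dBm) to soil moisture (%)
# with a fourth-degree polynomial, as the SREM wave method does. The RSSI values
# below are hypothetical placeholders, not measurements from the thesis.
rssi = np.array([-62.0, -60.5, -58.8, -57.1, -55.9, -54.2])  # hypothetical readings
moisture = np.array([8.0, 10.0, 13.0, 15.0, 18.0, 20.0])     # approximate moisture levels (%)

coeffs = np.polyfit(rssi, moisture, deg=4)   # fit the fourth-degree polynomial
estimate = np.polyval(coeffs, -56.0)         # estimate moisture for a new RSSI value
print(f"Estimated soil moisture: {estimate:.1f}%")
```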
Abstract:
In recent years, it has been found that many phenomena in engineering, physics, chemistry and other sciences can be described very successfully by models using mathematical tools from fractional calculus. Recently, a new space and time fractional Bloch-Torrey equation (ST-FBTE) has been proposed (see Magin et al. (2008)), and successfully applied to analyse diffusion images of human brain tissues to provide new insights for further investigations of tissue structures. In this paper, we consider the ST-FBTE on a finite domain. The time and space derivatives in the ST-FBTE are replaced by the Caputo and the sequential Riesz fractional derivatives, respectively. Firstly, we propose a new effective implicit numerical method (INM) for the ST-FBTE whereby we discretize the Riesz fractional derivative using a fractional centered difference. Secondly, we prove that the implicit numerical method for the ST-FBTE is unconditionally stable and convergent, and the order of convergence of the implicit numerical method is O(\tau^{2-\alpha} + h_x^2 + h_y^2 + h_z^2). Finally, some numerical results are presented to support our theoretical analysis.
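For reference, the fractional centered difference commonly used to discretize the Riesz derivative of order α (following Ortigueira, and Çelik and Duman) has the form below; the paper's exact scheme and notation may differ.

```latex
\frac{\partial^{\alpha} u(x)}{\partial |x|^{\alpha}}
\;\approx\;
-\frac{1}{h^{\alpha}} \sum_{k=-N}^{N} g_{k}\, u(x - k h),
\qquad
g_{k} = \frac{(-1)^{k}\, \Gamma(\alpha + 1)}
             {\Gamma\!\left(\frac{\alpha}{2} - k + 1\right)\, \Gamma\!\left(\frac{\alpha}{2} + k + 1\right)} .
```

This approximation is second-order accurate in h, which is consistent with the spatial terms in the stated convergence order.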
Abstract:
The cable equation is one of the most fundamental equations for modeling neuronal dynamics. Cable equations with a fractional-order temporal derivative have been introduced to model the electrotonic properties of spiny neuronal dendrites. In this paper, the fractional cable equation involving two integro-differential operators is considered. Galerkin finite element approximations of the fractional cable equation are proposed. The main contributions of this work are as follows:
• A semi-discrete finite difference approximation in time is proposed. We prove that the scheme is unconditionally stable and that the numerical solution converges to the exact solution with order O(Δt).
• A semi-discrete difference scheme that improves the order of convergence for solving the fractional cable equation is proposed, and the numerical solution converges to the exact solution with order O((Δt)2).
• Based on the above semi-discrete difference approximations, Galerkin finite element approximations in space for a full discretization are also investigated.
• Finally, some numerical results are given to demonstrate the theoretical analysis.
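For context, a commonly studied form of the fractional cable equation with two integro-differential (Riemann-Liouville) operators is shown below; the exact formulation and notation in the paper may differ.

```latex
\frac{\partial u(x,t)}{\partial t}
= {}_{0}D_{t}^{1-\gamma_{1}}\!\left( \kappa\, \frac{\partial^{2} u(x,t)}{\partial x^{2}} \right)
- \mu^{2}\, {}_{0}D_{t}^{1-\gamma_{2}}\, u(x,t) + f(x,t),
\qquad 0 < \gamma_{1}, \gamma_{2} < 1,
```

where ${}_{0}D_{t}^{1-\gamma}$ denotes the Riemann-Liouville fractional derivative in time, κ is a diffusion coefficient and μ² a positive constant.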
Abstract:
Nitrate reduction with nanoscale zero-valent iron (NZVI) has been reported as a potential technology to remove nitrate from nitrate-contaminated water. In this paper, nitrate reduction with NZVI prepared by hydrogen reduction of natural goethite (NZVI-N, where -N denotes natural goethite) and hydrothermal goethite (NZVI-H, where -H denotes hydrothermal goethite) was conducted. In addition, the effects of reaction time, nitrate concentration and iron-to-nitrate ratio on the nitrate removal rate over NZVI-H and NZVI-N were investigated. To prove their excellent nitrate reduction capacities, NZVI-N and NZVI-H were compared with ordinary zero-valent iron (OZVI) through static experiments. Based on these investigations, a mechanism for nitrate reduction with NZVI-N was proposed. The results showed that reaction time, nitrate concentration and iron-to-nitrate ratio played important roles in nitrate reduction by NZVI-N and NZVI-H. Compared with OZVI, nitrate reduction by NZVI-N and NZVI-H showed little dependence on pH. NZVI-N offers higher stability for nitrate reduction than NZVI-H because of the existence of Al-substitution. Furthermore, NZVI-N, prepared by hydrogen reduction of goethite, has higher activity for nitrate reduction, and the products contain hydrogen, nitrogen, NH4+ and a little nitrite, but no NOx, while NZVI-N is oxidized to Fe2+. As it is a relatively easy and cost-effective method for nitrate removal, nitrate reduction with NZVI-N has great potential application in the removal of nitrate from groundwater. © 2012 Elsevier B.V.
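For context, the overall stoichiometry most often cited for nitrate reduction to ammonium by zero-valent iron, consistent with the Fe2+ and NH4+ products reported here, is

```latex
4\,\mathrm{Fe^{0}} + \mathrm{NO_{3}^{-}} + 10\,\mathrm{H^{+}} \;\longrightarrow\; 4\,\mathrm{Fe^{2+}} + \mathrm{NH_{4}^{+}} + 3\,\mathrm{H_{2}O} .
```

Side reactions such as iron corrosion by water (producing hydrogen) and partial reduction to nitrite or nitrogen would account for the other products listed in the abstract.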
Abstract:
Client puzzles are cryptographic problems that are neither easy nor hard to solve. Most puzzles are based on either number-theoretic or hash inversion problems. Hash-based puzzles are very efficient but so far have been shown secure only in the random oracle model; number-theoretic puzzles, while secure in the standard model, tend to be inefficient. In this paper, we solve the problem of constructing cryptographic puzzles that are secure in the standard model and are very efficient. We present an efficient number-theoretic puzzle that satisfies the puzzle security definition of Chen et al. (ASIACRYPT 2009). To prove the security of our puzzle, we introduce a new variant of the interval discrete logarithm assumption which may be of independent interest, and show this new problem to be hard under reasonable assumptions. Our experimental results show that, for a 512-bit modulus, the solution verification time of our proposed puzzle can be up to 50x and 89x faster than the Karame-Capkun puzzle and Rivest et al.'s time-lock puzzle, respectively. In particular, the solution verification time of our puzzle is only 1.4x slower than that of Chen et al.'s efficient hash-based puzzle.
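For context, the interval discrete logarithm problem underlying the new assumption is typically stated as follows; the paper's precise variant may differ.

```latex
\text{Given a cyclic group } G = \langle g \rangle \text{ and } h = g^{x} \text{ with } x \in [0, 2^{t}],\ \text{find } x .
```

Generic algorithms such as Pollard's kangaroo solve this in roughly $O(2^{t/2})$ group operations, which is what makes the interval length a tunable difficulty parameter for puzzles.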