973 results for ecological process
Abstract:
This is the presentation of a refereed paper accepted for the conference proceedings. The presentation was given on Tuesday, 1 December 2015.
Abstract:
Distraction in the workplace is increasingly common in the information age. Several tasks and sources of information compete for a worker's limited cognitive capacities in human-computer interaction (HCI). In some situations, even very brief interruptions can have detrimental effects on memory. Nevertheless, in other situations, where people are continuously interrupted, virtually no interruption costs emerge. This dissertation attempts to reveal the mental conditions and causalities differentiating the two outcomes. The explanation, building on the theory of long-term working memory (LTWM; Ericsson and Kintsch, 1995), focuses on the active, skillful aspects of human cognition that enable the storage of task information beyond the temporary and unstable storage provided by short-term working memory (STWM). Its key postulate is the retrieval structure: an abstract, hierarchical knowledge representation built into long-term memory that can be utilized to encode, update, and retrieve products of cognitive processes carried out during skilled task performance. If certain criteria of practice and task processing are met, LTWM allows for the storage of large representations for long time periods, yet these representations can be accessed with the accuracy, reliability, and speed typical of STWM. The main thesis of the dissertation is that the ability to endure interruptions depends on the efficiency with which LTWM can be recruited for maintaining information. An observational study and a field experiment provide ecological evidence for this thesis. Mobile users were found to be able to carry out heavy interleaving and sequencing of tasks while interacting, and they exhibited several intricate time-sharing strategies to orchestrate interruptions in a way sensitive to both external and internal demands. Interruptions are inevitable, because they arise as natural consequences of the top-down and bottom-up control of multitasking. 
In this process the function of LTWM is to keep some representations ready for reactivation and others in a more passive state to prevent interference. The psychological reality of the main thesis received confirmatory evidence in a series of laboratory experiments. They indicate that after encoding into LTWM, task representations are safeguarded from interruptions, regardless of their intensity, complexity, or pacing. However, when LTWM cannot be deployed, the problems posed by interference in long-term memory and the limited capacity of STWM surface. A major contribution of the dissertation is the analysis of when users must resort to poorer maintenance strategies, such as temporal cues and STWM-based rehearsal. First, one experiment showed that task orientations can be associated with radically different patterns of retrieval cue encodings. Thus the way the interface is processed determines which features will be available as retrieval cues and which must be maintained by other means. Another study demonstrated that if the speed of encoding into LTWM, a skill-dependent parameter, is slower than the processing speed allowed for by the task, interruption costs emerge. Contrary to the predictions of competing theories, these costs turned out to involve intrusions in addition to omissions. Finally, it was learned that in rapid, visually oriented interaction, perceptual-procedural expectations guide task resumption, and neither STWM nor LTWM is utilized because access is too slow. These findings imply a change in thinking about the design of interfaces. Several novel principles of design are presented, based on the idea of supporting the deployment of LTWM in the main task.
Abstract:
This paper addresses the following predictive business process monitoring problem: given the execution trace of an ongoing case, and given a set of traces of historical (completed) cases, predict the most likely outcome of the ongoing case. In this context, a trace refers to a sequence of events with corresponding payloads, where a payload consists of a set of attribute-value pairs. Meanwhile, an outcome refers to a label associated with completed cases, for example, a label indicating that a given case completed “on time” (with respect to a given desired duration) or “late”, or a label indicating whether a given case led to a customer complaint. The paper tackles this problem via a two-phase approach. In the first phase, prefixes of historical cases are encoded as complex symbolic sequences and clustered. In the second phase, a classifier is built for each of the clusters. To predict the outcome of an ongoing case at runtime given its (incomplete) trace, we select the cluster(s) closest to the trace in question and apply the respective classifier(s), taking into account the Euclidean distance of the trace from the centers of the clusters. We consider two families of clustering algorithms – hierarchical clustering and k-medoids – and use random forests for classification. The approach was evaluated on four real-life datasets.
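The two-phase approach described in this abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual code: the toy fixed-length encoding of trace prefixes, the random data, and all names are assumptions, and the paper's complex-symbolic-sequence encoding is replaced here by plain numeric vectors.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy data: each row stands in for an encoded prefix of a historical
# trace; labels are the case outcomes (0 = "on time", 1 = "late").
X = rng.normal(size=(200, 8))
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Phase 1: cluster the encoded prefixes (hierarchical clustering; the
# paper also considers k-medoids).
n_clusters = 3
clusters = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X)
centers = np.array([X[clusters == c].mean(axis=0) for c in range(n_clusters)])

# Phase 2: train one random-forest classifier per cluster.
classifiers = {}
for c in range(n_clusters):
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X[clusters == c], y[clusters == c])
    classifiers[c] = clf

def predict_outcome(prefix_vec):
    """At runtime: pick the cluster whose center is closest in
    Euclidean distance, then apply that cluster's classifier."""
    c = int(np.argmin(np.linalg.norm(centers - prefix_vec, axis=1)))
    return classifiers[c].predict(prefix_vec.reshape(1, -1))[0]

prediction = predict_outcome(X[0])
```

The sketch dispatches to a single nearest cluster; the abstract's "closest cluster(s)" suggests the real method may weight several clusters by distance.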
Abstract:
This thesis studies how conceptual process models - that is, graphical documentations of an organisation's business processes - can enable and constrain the actions of their users. The results from a case study and an experiment indicate that model design decisions and people's characteristics influence how these opportunities for action are perceived and acted upon in practice.
Abstract:
This research contributes a formal framework to evaluate whether existing CMFs can model and reason about various types of normative requirements. The framework can be used to determine the level of coverage of concepts provided by CMFs, establish mappings between CMF languages and the semantics for the normative concepts, and evaluate the suitability of a CMF for issuing a certification of compliance. The developed framework is independent of any specific formalism and has been formally defined and validated through example mappings of CMFs.
Abstract:
In Smith v Lucht [2014] QDC 302 McGill DCJ considered whether in Queensland the concept of abuse of process was sufficiently broad as to encompass circumstances in which the resources of the court and the parties to be expended to determine the claim were out of all proportion to the interest at stake. Stay of proceedings - abuse of process - whether disproportionality between interest at stake and costs of litigating may amount to abuse of process - plaintiff with good cause of action entitled to pursue it.
Abstract:
The complexity, variability and vastness of the northern Australian rangelands make it difficult to assess the risks associated with climate change. In this paper we present a methodology to help industry and primary producers assess risks associated with climate change and to evaluate the effectiveness of adaptation options in managing those risks. Our assessment involved three steps. Initially, the impacts and adaptation responses were documented in matrices by ‘experts’ (rangeland and climate scientists). Then, a modified risk management framework was used to develop risk management matrices that identified important impacts, areas of greatest vulnerability (a combination of potential impact and adaptive capacity) and priority areas for action at the industry level. The process was easy to implement and useful for arranging and analysing large amounts of information (both complex and interacting). Lastly, regional extension officers (after minimal ‘climate literacy’ training) could build on existing knowledge provided here and implement the risk management process in workshops with rangeland land managers. Their participation is likely to identify relevant and robust adaptive responses that are most likely to be included in regional and property management decisions. The process developed here for the grazing industry could be modified and used in other industries and sectors. By 2030, some areas of northern Australia will experience more droughts and lower summer rainfall. This poses a serious threat to the rangelands. Although the impacts and adaptive responses will vary between ecological and geographic systems, climate change is expected to have noticeable detrimental effects: reduced pasture growth and surface water availability; increased competition from woody vegetation; decreased production per head (beef and wool) and gross margin; and adverse impacts on biodiversity. 
Further research and development is needed to identify the most vulnerable regions, and to inform policy in time to facilitate transitional change and enable land managers to implement those changes.
Abstract:
This two-year study examined the impacts of feral pig diggings on five ecological indicators: seedling survival, surface litter, subsurface plant biomass, earthworm biomass and soil moisture content. Twelve recovery exclosures were established in two habitats (characterised as wet and dry on the basis of soil moisture) by fencing off areas of previous pig diggings. A total of 0.59 ha was excluded from further pig diggings and compared with 1.18 ha of unfenced control areas. Overall, seedling numbers increased 7% within the protected exclosures and decreased 37% within the unprotected controls over the two-year study period. A significant temporal interaction was found in the dry habitat, with seedling survival increasing with increasing time of protection from diggings. Feral pig diggings had no significant effect on surface litter biomass, subsurface plant biomass, earthworm biomass or soil moisture content.
Abstract:
The value of CLIMEX models to inform biocontrol programs was assessed, including predicting the potential distribution of biocontrol agents and their subsequent population dynamics, using bioclimatic models for the weed Parkinsonia aculeata, two Lantana camara biocontrol agents, and five Mimosa pigra biocontrol agents. The results showed the contribution of data types to CLIMEX models and the capacity of these models to inform and improve the selection, release and post-release evaluation of biocontrol agents. Foremost among these factors was the quality of spatial and temporal information, as well as the extent to which overseas range data sample the species’ climatic envelope. Post hoc evaluation and refinement of these models requires improved long-term monitoring of introduced agents and their dynamics at well-selected study sites. The authors described the findings of these case studies, highlighted their implications, and considered how to incorporate models effectively into biocontrol programs.
Abstract:
While the method of using specialist herbivores to manage invasive plants (classical biological control) is regarded as relatively safe and cost-effective in comparison to other methods of management, the rarity of strict monophagy among insect herbivores illustrates that, like any management option, biological control is not risk-free. The challenge for classical biological control is therefore to predict risks and benefits a priori. In this study we develop a simulation model that may aid in this process. We use this model to predict the risks and benefits of introducing the chrysomelid beetle Charidotis auroguttata to manage the invasive liana Macfadyena unguis-cati in Australia. Preliminary host-specificity testing of this herbivore indicated that there was limited feeding on a non-target plant, although the non-target was only able to sustain some transitions of the herbivore's life cycle. The model includes herbivore, target and non-target life history and incorporates spillover dynamics of herbivore populations from the target to the non-target under a variety of scenarios. Data from studies of this herbivore in the native range and under quarantine were used to parameterize the model and predict the relative risks and benefits of this herbivore when the target and non-target plants co-occur. Key model outputs include population dynamics on the target (apparent benefit) and non-target (apparent risk), and the fitness consequences to the target (actual benefit) and non-target plant (actual risk) of herbivore damage. The model predicted that risk to the non-target became unacceptable (i.e. significant negative effects on fitness) when the ratio of target to non-target in a given patch ranged from 1:1 to 3:2. By comparing the current known distribution of the non-target and the predicted distribution of the target, we were able to identify regions in Australia where the agent may pose an unacceptable risk. By considering risk and benefit simultaneously, we highlight how such a simulation modelling approach can assist scientists and regulators in making more objective a priori decisions on the value of releasing specialist herbivores as biological control agents.
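The general idea of a spillover simulation like the one this abstract describes can be sketched in a few lines. This toy model is purely illustrative: the linear spillover rule, the growth cap, and every parameter value are invented assumptions, not the study's parameterized life-history model.

```python
# Toy discrete-time sketch of herbivore spillover from a target plant
# to a non-target plant. All parameters and the simple linear spillover
# rule are invented for illustration only.

def simulate(steps=50, ratio_target_to_nontarget=3.0,
             growth=0.4, spillover=0.1, damage=0.02):
    """Return cumulative damage to the non-target after `steps` steps."""
    herbivores_on_target = 10.0
    herbivores_on_nontarget = 0.0
    nontarget_damage = 0.0
    for _ in range(steps):
        # Herbivores build up on the target plant (capped for simplicity).
        herbivores_on_target = min(herbivores_on_target * (1 + growth), 1000.0)
        # A fraction spills over to the non-target; spillover is assumed
        # larger when targets are scarce relative to non-targets.
        moved = herbivores_on_target * spillover / ratio_target_to_nontarget
        herbivores_on_target -= moved
        herbivores_on_nontarget += moved
        # Accumulated feeding damage on the non-target (the "actual risk").
        nontarget_damage += herbivores_on_nontarget * damage
    return nontarget_damage

# Under these assumptions, risk rises as the target:non-target ratio
# falls, qualitatively echoing the finding that risk became unacceptable
# at ratios between 3:2 and 1:1.
low_ratio_damage = simulate(ratio_target_to_nontarget=1.0)
high_ratio_damage = simulate(ratio_target_to_nontarget=3.0)
```

The real model additionally tracks full life-history transitions and fitness consequences on both plants; this sketch only shows how a patch-level ratio can be turned into a comparative risk measure.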
Abstract:
We review key issues, available approaches and analyses to encourage and assist practitioners to develop sound plans to evaluate the effectiveness of weed biological control agents at various phases throughout a program. Assessing the effectiveness of prospective agents before release assists the selection process, while post-release evaluation aims to determine the extent to which agents are alleviating the ecological, social and economic impacts of the weeds. Information gathered on weed impacts prior to the initiation of a biological control program is necessary to provide baseline data and devise performance targets against which the program can subsequently be evaluated. Detailed data on weed populations, associated plant communities and, in some instances, ecosystem processes collected at representative sites in the introduced range several years before the release of agents can be compared with similar data collected later to assess agent effectiveness. Laboratory, glasshouse and field studies are typically used to assess agent effectiveness. While some approaches used for field studies may be influenced by confounding factors, manipulative experiments where agents are excluded (or included) using chemicals or cages are more robust but time-consuming and expensive to implement. Demographic modelling and benefit–cost analyses are increasingly being used to complement other studies. There is an obvious need for more investment in long-term post-release evaluation of agent effectiveness to rigorously document outcomes of biological control programs.