205 results for Task Complexity
Abstract:
The present paper motivates the study of mind change complexity for learning minimal models of length-bounded logic programs. It establishes ordinal mind change complexity bounds for learnability of these classes both from positive facts and from positive and negative facts. Building on Angluin’s notion of finite thickness and Wright’s work on finite elasticity, Shinohara defined the property of bounded finite thickness to give a sufficient condition for learnability of indexed families of computable languages from positive data. This paper shows that an effective version of Shinohara’s notion of bounded finite thickness gives sufficient conditions for learnability with an ordinal mind change bound, both in the context of learnability from positive data and for learnability from complete (both positive and negative) data. Let Omega be a notation for the first limit ordinal. Then, it is shown that if a language defining framework yields a uniformly decidable family of languages and has effective bounded finite thickness, then for each natural number m > 0, the class of languages defined by formal systems of length <= m:
• is identifiable in the limit from positive data with a mind change bound of Omega^m;
• is identifiable in the limit from both positive and negative data with an ordinal mind change bound of Omega × m.
The above sufficient conditions are employed to give an ordinal mind change bound for learnability of minimal models of various classes of length-bounded Prolog programs, including Shapiro’s linear programs, Arimura and Shinohara’s depth-bounded linearly covering programs, and Krishna Rao’s depth-bounded linearly moded programs. It is also noted that the bound for learning from positive data is tight for the example classes considered.
Abstract:
Vigilance declines when people are exposed to highly predictable and uneventful tasks. Monotonous tasks provide little cognitive and motor stimulation and contribute to human errors. This paper aims to model and detect vigilance decline in real time from participants’ reaction times during a monotonous task. A lab-based experiment adapting the Sustained Attention to Response Task (SART) is conducted to quantify the effect of monotony on overall performance. Relevant parameters are then used to build a model that detects hypovigilance throughout the experiment. The accuracy of different mathematical models is compared for detecting, minute by minute and in real time, the lapses in vigilance during the task. We show that monotonous tasks can lead to an average decline in performance of 45%. Furthermore, vigilance modelling enables vigilance decline to be detected from reaction times with an accuracy of 72% and a 29% false alarm rate. Bayesian models are identified as better at detecting lapses in vigilance than Neural Networks and Generalised Linear Mixed Models. This modelling could be used as a framework to detect vigilance decline in any human performing monotonous tasks.
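As a rough illustration of how the reported minute-by-minute accuracy and false alarm rate would be computed, the following sketch applies a naive reaction-time threshold to synthetic data. The data, the 25% threshold, and all names are assumptions; the paper's Bayesian, Neural Network, and Generalised Linear Mixed Models are not reproduced here.

```python
# Illustrative sketch only: a naive threshold detector for per-minute
# hypovigilance, with accuracy and false alarm rate computed in the usual way.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: mean reaction time (ms) per minute of a SART-like task
# that slows over time, and ground-truth labels (True = lapse in vigilance).
mean_rt_per_minute = rng.normal(350, 40, size=30) + np.linspace(0, 120, 30)
true_lapse = (np.linspace(0, 120, 30) + rng.normal(0, 30, 30)) > 60

# Naive rule: flag a minute when its mean RT exceeds the early-task baseline
# by 25% (an arbitrary, assumed threshold).
baseline = mean_rt_per_minute[:5].mean()
predicted_lapse = mean_rt_per_minute > 1.25 * baseline

accuracy = (predicted_lapse == true_lapse).mean()
false_alarms = predicted_lapse & ~true_lapse          # flagged but truly alert
false_alarm_rate = false_alarms.sum() / max((~true_lapse).sum(), 1)
print(f"accuracy={accuracy:.2f}, false alarm rate={false_alarm_rate:.2f}")
```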
Abstract:
Digital collections are growing exponentially in size as the information age takes a firm grip on all aspects of society. As a result, Information Retrieval (IR) has become an increasingly important area of research. It promises to provide new and more effective ways for users to find information relevant to their search intentions. Document clustering is one of the many tools in the IR toolbox and is far from being perfected. It groups documents that share common features. This grouping allows a user to quickly identify relevant information. If these groups are misleading then valuable information can accidentally be ignored. Therefore, the study and analysis of the quality of document clustering is important. With more and more digital information available, the performance of these algorithms is also of interest. An algorithm with a time complexity of O(n²) can quickly become impractical when clustering a corpus containing millions of documents. Therefore, the investigation of algorithms and data structures to perform clustering in an efficient manner is vital to its success as an IR tool. Document classification is another tool frequently used in the IR field. It predicts categories of new documents based on an existing database of (document, category) pairs. Support Vector Machines (SVM) have been found to be effective when classifying text documents. As the algorithms for classification are both efficient and of high quality, the largest gains can be made from improvements to representation. Document representations are vital for both clustering and classification. Representations exploit the content and structure of documents. Dimensionality reduction can improve the effectiveness of existing representations in terms of quality and run-time performance. Research into these areas is another way to improve the efficiency and quality of clustering and classification results. Evaluating document clustering is a difficult task. Intrinsic measures of quality such as distortion only indicate how well an algorithm minimised a similarity function in a particular vector space. Intrinsic comparisons are inherently limited by the given representation and are not comparable between different representations. Extrinsic measures of quality compare a clustering solution to a “ground truth” solution. This allows comparison between different approaches. As the “ground truth” is created by humans it can suffer from the fact that not every human interprets a topic in the same manner. Whether a document belongs to a particular topic or not can be subjective.
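As an illustrative sketch only (none of this code is from the thesis), the snippet below shows one standard pipeline that touches the ideas above: a TF-IDF document representation, k-means clustering, and an extrinsic quality measure (the adjusted Rand index) computed against assumed human-assigned "ground truth" labels. The toy documents and labels are hypothetical.

```python
# Minimal sketch: representation, clustering, and extrinsic evaluation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

docs = [
    "the cat sat on the mat",
    "dogs and cats make good pets",
    "stock markets fell sharply today",
    "investors worry about market volatility",
]
ground_truth = [0, 0, 1, 1]  # assumed human-assigned topic labels

X = TfidfVectorizer().fit_transform(docs)                       # representation
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Extrinsic evaluation: compare the clustering to the "ground truth" solution.
print("adjusted Rand index:", adjusted_rand_score(ground_truth, labels))
```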
Abstract:
Within the current climate of unpredictability and constant change, young people at school are faced with a multitude of choices and contradictory influences. In this article, I argue that (re)presentations of young people in youth research need to reflect the complexity and multiplicity of their lives and changing priorities, and I attempt to (re)present a small group of young people in this particular milieu. I illustrate some of the competing influences in their lives, and I outline some specific strategies that are useful for (re)presenting these contextual worlds. The strategies I advocate disrupt the homogeneous representations of ‘youth’ as a developmental phase and instead reflect the diverse spheres of influence which shape their subjectivities and practices.
Abstract:
Objective Research is beginning to provide an indication of the co-occurring substance abuse and mental health needs of the driving under the influence (DUI) population. This study aimed to examine the extent of such psychiatric problems in a large sample of DUI offenders entering treatment in Texas. Methods This is a study of 36,373 past-year DUI clients and 308,714 non-past-year DUI clients admitted to Texas treatment programs between 2005 and 2008. Data were obtained from the State's administrative dataset. Results Analysis indicated that non-past-year DUI clients were more likely to present with more severe illicit substance use problems, while past-year DUI clients were more likely to have a primary problem with alcohol. Nevertheless, a cannabis use problem was also found to be significantly associated with DUI recidivism in the last year. With regard to mental health status, a major finding was that depression was the most common psychiatric condition reported by DUI clients, including those with more than one DUI offence in the past year. This cohort also reported elevated levels of Bipolar Disorder compared to the general population, and such a diagnosis was also associated with an increased likelihood of not completing treatment. Additionally, female clients were more likely than males to be diagnosed with mental health problems, to be placed on medications at admission, and to have problems with methamphetamine, cocaine, and opiates. Conclusions DUI offenders are at an increased risk of experiencing comorbid psychiatric disorders, and thus corresponding treatment programs need to cater for a range of mental health concerns that are likely to affect recidivism rates.
Abstract:
The driving task requires sustained attention during prolonged periods, and can be performed in highly predictable or repetitive environments. Such conditions could create hypovigilance and impair performance towards critical events. Identifying such impairment in monotonous conditions has been a major subject of research, but no research to date has attempted to predict it in real time. This pilot study aims to show that performance decrements due to monotonous tasks can be predicted through mathematical modelling taking into account sensation seeking levels. A short vigilance task sensitive to brief lapses of vigilance, the Sustained Attention to Response Task, is used to assess participants’ performance. The framework for prediction developed on this task could be extended to a monotonous driving task. A Hidden Markov Model (HMM) is proposed to predict participants’ lapses in alertness. The driver’s vigilance evolution is modelled as a hidden state and is correlated with a surrogate measure: the participant’s reaction times. This experiment shows that the monotony of the task can lead to a marked decline in performance in less than five minutes. This impairment can be predicted four minutes in advance with 86% accuracy using HMMs. The experiment showed that mathematical models such as HMMs can efficiently predict hypovigilance through surrogate measures. The presented model could result in the development of an in-vehicle device that detects driver hypovigilance in advance and warns the driver accordingly, thus offering the potential to enhance road safety and prevent road crashes.
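A minimal sketch of the general approach, assuming the third-party hmmlearn package and synthetic reaction-time data (the study's actual model, features, and parameters are not reproduced): a two-state Gaussian HMM treats vigilance as the hidden state and reaction time as the observed surrogate measure.

```python
# Illustrative only: infer a hidden "alert vs. hypovigilant" state sequence
# from reaction times with a two-state Gaussian HMM.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)

# Synthetic reaction times (seconds): an alert phase followed by a
# hypovigilant phase with slower, more variable responses.
alert = rng.normal(0.35, 0.05, size=100)
drowsy = rng.normal(0.55, 0.12, size=100)
reaction_times = np.concatenate([alert, drowsy]).reshape(-1, 1)

model = GaussianHMM(n_components=2, covariance_type="diag",
                    n_iter=100, random_state=0)
model.fit(reaction_times)
hidden_states = model.predict(reaction_times)   # most likely state per sample
print(hidden_states[:10], hidden_states[-10:])
```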
Abstract:
In a digital world, users’ Personally Identifiable Information (PII) is normally managed with a system called an Identity Management System (IMS). There are many types of IMSs. There are situations when two or more IMSs need to communicate with each other (such as when a service provider needs to obtain some identity information about a user from a trusted identity provider). There could be interoperability issues when communicating parties use different types of IMS. To facilitate interoperability between different IMSs, an Identity Meta System (IMetS) is normally used. An IMetS can, at least theoretically, join various types of IMSs to make them interoperable and give users the illusion that they are interacting with just one IMS. However, due to the complexity of an IMS, attempting to join various types of IMSs is a technically challenging task, let alone assessing how well an IMetS manages to integrate these IMSs. The first contribution of this thesis is the development of a generic IMS model called the Layered Identity Infrastructure Model (LIIM). Using this model, we develop a set of properties that an ideal IMetS should provide. This idealized form is then used as a benchmark to evaluate existing IMetSs. Different types of IMS provide varying levels of privacy protection support. Unfortunately, as observed by Jøsang et al. (2007), there is insufficient privacy protection in many of the existing IMSs. In this thesis, we study and extend a type of privacy enhancing technology known as an Anonymous Credential System (ACS). In particular, we extend the ACS that is built on the cryptographic primitives proposed by Camenisch, Lysyanskaya, and Shoup. We call this system the Camenisch, Lysyanskaya, Shoup - Anonymous Credential System (CLS-ACS). The goal of CLS-ACS is to let users be as anonymous as possible. Unfortunately, CLS-ACS has problems, including (1) the concentration of power in a single entity, known as the Anonymity Revocation Manager (ARM), which, if malicious, can trivially reveal a user’s PII (resulting in an illegal revocation of the user’s anonymity), and (2) poor performance due to the resource-intensive cryptographic operations required. The second and third contributions of this thesis are the proposal of two protocols that reduce the trust dependencies on the ARM during users’ anonymity revocation. Both protocols distribute trust from the ARM to a set of n referees (n > 1), resulting in a significant reduction of the probability of an anonymity revocation being performed illegally. The first protocol, called the User Centric Anonymity Revocation Protocol (UCARP), allows a user’s anonymity to be revoked in a user-centric manner (that is, the user is aware that his/her anonymity is about to be revoked). The second protocol, called the Anonymity Revocation Protocol with Re-encryption (ARPR), allows a user’s anonymity to be revoked by a service provider in an accountable manner (that is, there is a clear mechanism to determine which entity can eventually learn, and possibly misuse, the identity of the user). The fourth contribution of this thesis is the proposal of a protocol called the Private Information Escrow bound to Multiple Conditions Protocol (PIEMCP). This protocol is designed to address the performance issue of CLS-ACS by applying the CLS-ACS in a federated single sign-on (FSSO) environment.
Our analysis shows that PIEMCP can both reduce the number of expensive modular exponentiation operations required and lower the risk of illegal revocation of users’ anonymity. Finally, the protocols proposed in this thesis are complex and need to be formally evaluated to ensure that their required security properties are satisfied. In this thesis, we use Coloured Petri nets (CPNs) and their corresponding state space analysis techniques. All of the protocols proposed in this thesis have been formally modeled and verified using these formal techniques. Therefore, the fifth contribution of this thesis is a demonstration of the applicability of CPNs and their corresponding analysis techniques in modeling and verifying privacy enhancing protocols. To our knowledge, this is the first time that CPNs have been comprehensively applied to model and verify privacy enhancing protocols. From our experience, we also propose several CPN modeling approaches, including the modeling of complex cryptographic primitives (such as zero-knowledge proof protocols), attack parameterization, and others. The proposed approaches can be applied to other security protocols, not just privacy enhancing protocols.
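The following toy sketch is not UCARP, ARPR, or PIEMCP; it only illustrates, with assumed names and data, the underlying idea of distributing a revocation capability across n referees so that no single entity can recover the identifying value on its own (here via a simple n-out-of-n XOR split rather than the cryptographic constructions used in the thesis).

```python
# Toy illustration of n-out-of-n secret splitting: all shares are needed to
# reconstruct the identifying value; any strict subset reveals nothing useful.
import secrets

def split(secret: bytes, n: int) -> list[bytes]:
    """Split `secret` into n shares whose XOR equals the secret."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def combine(shares: list[bytes]) -> bytes:
    out = shares[0]
    for s in shares[1:]:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

identity_token = b"user-PII-reference"       # hypothetical identifying value
referee_shares = split(identity_token, n=3)  # one share per referee
assert combine(referee_shares) == identity_token
# A strict subset of shares is statistically independent of the secret,
# so no single referee (or incomplete coalition) can revoke anonymity alone.
```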
Abstract:
Engineering graduates of today are required to adapt to a rapidly changing work environment. In particular, they are expected to demonstrate enhanced capabilities in both mono-disciplinary and multi-disciplinary teamwork environments. Engineering education needs, as a result, to focus further on developing group work capabilities amongst engineering graduates. Over the last two years, the authors trialed various group work strategies across two engineering disciplines. In particular, the effect of group formation on students' performance, task management, and social loafing was analyzed. A recently developed online teamwork management tool, Teamworker, was used to collect students' experience of the group work. Analysis showed that students who were allowed to allocate themselves freely to any group were less likely to report loafing by other team members than students who were pre-allocated to a group. It also showed that performance was more affected by the presence or absence of a leader in pre-allocated groups than in freely allocated groups.
Abstract:
After a brief personal orientation, this presentation offers an opening section on 'clash, cluster, complexity, cities' – making the case that innovation (both creative and economic) proceeds not only from incremental improvements within an expert-pipeline process, but also from the clash of different systems, generations, and cultures. The argument is that cultural complexity arises from such clashes, and that clustering is the solution to problems of complexity. The classic, 10,000-year-old, institutional form taken by such clusters is … cities. Hence, a creative city is one where clashing and competitive complexity is clustered… and, latterly, networked.
Abstract:
We have recently demonstrated the geographic isolation of rice tungro bacilliform virus (RTBV) populations in the tungro-endemic provinces of Isabela and North Cotabato, Philippines. In this study, we examined the genetic structure of the virus populations at the tungro-outbreak sites of Lanao del Norte, a province adjacent to North Cotabato. We also analyzed the virus populations at the tungro-endemic sites of Subang, Indonesia, and Dien Khanh, Vietnam. Total DNA extracts from 274 isolates were digested with the EcoRV restriction enzyme and hybridized with a full-length probe of RTBV. In the total population, 22 EcoRV-restricted genome profiles (genotypes) were identified. Although overlapping genotypes could be observed, the outbreak sites of Lanao del Norte had a genotype combination distinct from that of Subang or Dien Khanh but similar to that identified earlier from North Cotabato, the adjacent endemic province. Sequence analysis of the intergenic region and part of ORF1 of the RTBV genome from randomly selected genotypes confirms the geographic clustering of RTBV genotypes and, combined with the restriction analysis, the results suggest a fragmented spatial distribution of RTBV local populations in the three countries. Because RTBV depends on rice tungro spherical virus (RTSV) for transmission, the population dynamics of both tungro viruses were then examined at the endemic and outbreak sites within the Philippines. The RTBV genotypes and the RTSV coat protein genotypes were used as indicators for virus diversity. A shift in the population structure of both viruses was observed at the outbreak sites, with reduced RTBV but increased RTSV gene diversity.
Abstract:
While Business Process Management (BPM) is an established discipline, the increased adoption of BPM technology in recent years has introduced new challenges. One challenge concerns dealing with the ever-growing complexity of business process models. Mechanisms for dealing with this complexity can be classified into two categories: 1) those that are solely concerned with the visual representation of the model and 2) those that change its inner structure. While significant attention is paid to the latter category in the BPM literature, this paper focuses on the former category. It presents a collection of patterns that generalize and conceptualize various existing mechanisms to change the visual representation of a process model. Next, it provides a detailed analysis of the degree of support for these patterns in a number of state-of-the-art languages and tools. This paper concludes with the results of a usability evaluation of the patterns conducted with BPM practitioners.
Abstract:
This paper analyzes the effects of different practice task constraints on heart rate (HR) variability during 4v4 small-sided football games. Participants were sixteen football players divided into two age groups (U13, mean age 12.4 ± 0.5 yrs; U15, mean age 14.6 ± 0.5 yrs). The task consisted of a 4v4 sub-phase without goalkeepers on a 25 × 15 m field, lasting 15 minutes per condition with an active recovery period of 6 minutes between conditions. We recorded players’ heart rates using heart rate monitors (Polar Team System, Polar Electro, Kempele, Finland) as scoring mode was manipulated (line goal: scoring by dribbling past an extended line; double goal: scoring in either of two lateral goals; and central goal: scoring only in one goal). Subsequently, %HR reserve was calculated with the Karvonen formula. We performed a time-series analysis of HR for each individual in each condition. Mean data for intra-participant variability showed that the autocorrelation function was associated with more short-range dependence processes in the “line goal” condition compared to the other conditions, demonstrating that the “line goal” constraint induced more randomness in the HR response. With respect to inter-individual variability, line goal constraints demonstrated lower %CV and %RMSD (U13: 9% and 19%; U15: 10% and 19%) compared with double goal (U13: 12% and 21%; U15: 12% and 21%) and central goal (U13: 14% and 24%; U15: 13% and 24%) task constraints, respectively. The results suggested that line goal constraints imposed more randomness on each individual's cardiovascular stimulation and lower inter-individual variability than double goal and central goal constraints.
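For reference, the Karvonen formula used here expresses exercise intensity as a percentage of heart rate reserve (resting and maximal heart rates are measured per player; the abstract does not report their values):

```latex
\%HRR = \frac{HR_{\text{exercise}} - HR_{\text{rest}}}{HR_{\text{max}} - HR_{\text{rest}}} \times 100
```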
Abstract:
Gaze and movement behaviors of association football goalkeepers were compared under two video simulation conditions (i.e., verbal and joystick movement responses) and three in situ conditions (i.e., verbal, simplified body movement, and interceptive response). The results showed that the goalkeepers spent more time fixating on information from the penalty kick taker’s movements than ball location for all perceptual judgment conditions involving limited movement (i.e., verbal responses, joystick movement, and simplified body movement). In contrast, an equivalent amount of time was spent fixating on the penalty taker’s relative motions and the ball location for the in situ interception condition, which required the goalkeepers to attempt to make penalty saves. The data suggest that gaze and movement behaviors function differently, depending on the experimental task constraints selected for empirical investigations. These findings highlight the need for research on perceptual-motor behaviors to be conducted in representative experimental conditions to allow appropriate generalization of conclusions to performance environments.
Abstract:
The structure and dynamics of a modern business environment are very hard to model using traditional methods. Such complexity raises challenges to effective business analysis and improvement. The importance of applying business process simulation to analyze and improve business activities has been widely recognized. However, one remaining challenge is the development of approaches to human resource behavior simulation. To address this problem, we describe a novel simulation approach where intelligent agents are used to simulate human resources by performing work allocated to them by a workflow management system. The behavior of the intelligent agents is driven by a state transition mechanism called a Hierarchical Task Network (HTN). We demonstrate and validate our simulator via a medical treatment process case study. Analysis of the simulation results shows that the behavior driven by the HTN is consistent with the design of the workflow model. We believe these preliminary results support the development of more sophisticated agent-based human resource simulation systems.
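As a minimal sketch of the general HTN idea only (the task names and network structure below are assumptions, not the paper's actual model), compound tasks decompose recursively into subtasks until only primitive actions remain, giving the sequence of work items an agent would perform:

```python
# Hypothetical HTN for a medical-treatment example: compound task -> ordered
# subtasks; any task not in the network is treated as a primitive action.
htn = {
    "treat_patient": ["register", "diagnose", "apply_treatment"],
    "diagnose": ["take_history", "run_tests"],
    "apply_treatment": ["prescribe", "administer"],
}

def decompose(task, network):
    """Recursively expand a task into the primitive actions at the HTN leaves."""
    if task not in network:          # primitive action: execute as-is
        return [task]
    plan = []
    for subtask in network[task]:
        plan.extend(decompose(subtask, network))
    return plan

print(decompose("treat_patient", htn))
# ['register', 'take_history', 'run_tests', 'prescribe', 'administer']
```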