994 results for Task Failure
Abstract:
This thesis investigates whether scenario planning, as a method for addressing uncertainty, supports organizational strategy. The main issues are why, what, and how scenario planning fits into organizational strategy, and how the process could be supported to make it more effective. The study follows the constructive approach. It starts with an examination of competitive advantage, the way an organization develops strategy, and how it addresses uncertainty in its operational environment. Based on the conducted literature review, scenario methods would seem to provide a versatile platform for addressing future uncertainties. The construction is formed by examining the scenario methods and presenting suitable support methods, resulting in a theoretical proposition for a supported scenario process. The theoretical framework is tested in laboratory conditions, and the results from the test sessions are used as a basis for scenario stories. The process of forming the scenarios and the results are illustrated and presented for scrutiny.
Abstract:
Myotonic dystrophy type 1 (DM1) is a multisystemic disease caused by an expansion of CTG repeats in the region of DMPK, the gene encoding DM protein kinase. The severity of muscle disability in DM1 correlates with the size of the CTG expansion. As respiratory failure is one of the main causes of death in DM1, we investigated the correlation between respiratory impairment and the size of the (CTG)n repeat in DM1 animal models. Using pressure plethysmography, respiratory function was assessed in control and transgenic mice carrying either 600 (DM600) or >1300 CTG repeats (DMSXL). The statistical analysis of respiratory parameters revealed that both DM1 transgenic mouse sub-lines show respiratory impairment compared to control mice. In addition, there is no significant difference in breathing function between the DM600 and DMSXL mice. In conclusion, these results indicate that respiratory impairment is present in both transgenic mouse sub-lines, but that the severity of respiratory failure is not related to the size of the (CTG)n expansion.
Abstract:
Qualitative differences in strategy selection during foraging in a partially baited maze were assessed in young and old rats. The baited and non-baited arms were at fixed positions in space and marked by specific olfactory cues. The senescent rats made more re-entries during the first four-trial block but were quicker than the young rats to select the reinforced arms during the first visits. Dissociating the olfactory cues from the spatial reference frame by rotating the maze revealed that only a few old subjects relied on the olfactory cues to select the baited arms; the remainder relied mainly on the visuo-spatial cues.
Abstract:
The ultimate goal of any research in the mechanism/kinematics/design area may be called predictive design, i.e. the optimisation of mechanism proportions at the design stage without requiring extensive life and wear testing. This is an ambitious goal and can be realised through the development and refinement of numerical (computational) technology to facilitate the design analysis and optimisation of complex mechanisms, mechanical components and systems. As part of a systematic design methodology, this thesis concentrates on kinematic synthesis (kinematic design and analysis) methods in the mechanism synthesis process. The main task of kinematic design is to find all possible solutions, in the form of structural parameters, that accomplish the desired requirements of motion. The main formulations of kinematic design can be broadly divided into exact synthesis and approximate synthesis formulations. The exact synthesis formulation is based on solving n linear or nonlinear equations in n variables, and the solutions are obtained by adopting closed-form classical or modern algebraic solution methods, or by using numerical solution methods based on polynomial continuation or homotopy. The approximate synthesis formulation is based on minimising the approximation error by direct optimisation. The main drawbacks of the exact synthesis formulation are: (ia) limitations on the number of design specifications and (iia) failure in handling design constraints, especially inequality constraints. The main drawbacks of approximate synthesis formulations are: (ib) it is difficult to choose a proper initial linkage and (iib) it is hard to find more than one solution. Recent formulations for solving the approximate synthesis problem adopt polynomial continuation, providing several solutions, but they cannot handle inequality constraints.
Based on practical design needs, a mixed exact-approximate position synthesis with two exact and an unlimited number of approximate positions has also been developed. The solution space is presented as a ground pivot map, but the pole between the exact positions cannot be selected as a ground pivot. In this thesis the exact synthesis problem of planar mechanisms is solved by generating all possible solutions for the optimisation process, including solutions in positive-dimensional solution sets, within inequality constraints on the structural parameters. Through the literature review it is first shown that the algebraic and numerical solution methods used in the research area of computational kinematics are capable of solving non-parametric algebraic systems of n equations in n variables but cannot handle the singularities associated with positive-dimensional solution sets. In this thesis the problem of positive-dimensional solution sets is solved by adopting the main principles from the mathematical research area of algebraic geometry for solving parametric (in the mathematical sense that all parameter values are considered, including the degenerate cases, for which the system is solvable) algebraic systems of n equations in at least n+1 variables. Adopting the developed solution method to solve the dyadic equations in direct polynomial form for two- to three-precision-points, it has been algebraically proved and numerically demonstrated that the map of the ground pivots is ambiguous and that the singularities associated with positive-dimensional solution sets can be resolved. The positive-dimensional solution sets associated with the poles might contain physically meaningful solutions in the form of optimal defect-free mechanisms. Traditionally, the mechanism optimisation of hydraulically driven boom mechanisms is done at an early stage of the design process. This results in optimal component design rather than optimal system-level design.
Modern mechanism optimisation at the system level demands the integration of kinematic design methods with mechanical system simulation techniques. In this thesis a new kinematic design method for hydraulically driven boom mechanisms is developed and integrated with mechanical system simulation techniques. The developed kinematic design method is based on combinations of the two-precision-point formulation and on optimisation (with mathematical programming techniques or with optimisation methods based on probability and statistics) of substructures, using criteria calculated from the system-level response of multi-degree-of-freedom mechanisms. For example, by adopting the mixed exact-approximate position synthesis in direct optimisation (using mathematical programming techniques) with two exact positions and an unlimited number of approximate positions, the drawbacks (ia)-(iib) have been eliminated. The design principles of the developed method are based on the design-tree approach for mechanical systems, and the design method is, in principle, capable of capturing the interrelationship between kinematic and dynamic synthesis simultaneously when the developed kinematic design method is integrated with the mechanical system simulation techniques.
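The exact synthesis formulation above reduces to solving n (possibly nonlinear) equations in n unknowns. As a minimal illustration of the numerical route (a plain Newton iteration on a toy two-equation system; the thesis itself uses polynomial continuation and algebraic-geometry methods, which this sketch does not reproduce):

```python
def solve_newton(f, jac, x0, tol=1e-10, max_iter=50):
    # Newton iteration for a system f(x) = 0 of two equations in two unknowns.
    x = list(x0)
    for _ in range(max_iter):
        fx = f(x)
        if max(abs(v) for v in fx) < tol:
            break
        J = jac(x)
        # Solve the 2x2 linear system J * dx = -f(x) by Cramer's rule.
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        dx0 = (-fx[0] * J[1][1] + fx[1] * J[0][1]) / det
        dx1 = (-J[0][0] * fx[1] + J[1][0] * fx[0]) / det
        x = [x[0] + dx0, x[1] + dx1]
    return x

# Toy stand-in for a synthesis system (illustrative only, not a dyadic equation):
# x^2 + y^2 = 4 and x*y = 1.
f = lambda x: [x[0] ** 2 + x[1] ** 2 - 4.0, x[0] * x[1] - 1.0]
jac = lambda x: [[2 * x[0], 2 * x[1]], [x[1], x[0]]]
root = solve_newton(f, jac, [2.0, 0.5])
```

Note that Newton's method finds only one root per starting guess, which is exactly drawback (iib) above; continuation/homotopy methods are used precisely to recover all solutions.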
Abstract:
BACKGROUND: Recurrent hepatitis C virus infection after liver transplantation is associated with reduced graft and patient survival. Re-transplantation for graft failure due to recurrent hepatitis C is controversial and not performed in all centers. CASE PRESENTATION: We describe a 54-year-old patient with hepatitis C virus genotype 1b infection and a null response to pegylated interferon-α and ribavirin who developed decompensated graft cirrhosis 6 years after a first liver transplantation. Treatment with sofosbuvir and ribavirin led to rapid clearance of serum HCV RNA and was well tolerated despite advanced liver dysfunction and moderate renal dysfunction. Therapeutic drug monitoring did not reveal any clinically significant drug-drug interactions. Despite the virological response, the patient remained severely decompensated, and re-transplantation was performed after 46 days of undetectable serum HCV RNA. The patient is doing well 12 months after his second liver transplantation and remains free of hepatitis C virus. CONCLUSIONS: The use of directly acting antivirals may allow for successful liver re-transplantation in recipients who remain decompensated despite a virological response and is likely to improve the outcome of liver re-transplantation for end-stage recurrent hepatitis C.
Abstract:
Software projects have proved troublesome to implement, and as the size of software keeps increasing it becomes more and more important to follow up on projects. The proportion of successful software projects is still quite low in spite of research and the development of project control methodologies. The success and failure factors of projects are known, as are the project risks, but projects nevertheless still have problems keeping to the schedule and the budget and achieving the defined functionality and adequate quality. The purpose of this thesis was to find out what deviations currently occur in projects, what causes them, and what is measured in projects. Project deviation was also defined from the viewpoints of the literature and field experts. The analysis was made using a qualitative research approach. It was found that software projects still have deviations in schedule, budget, quality, requirements, documentation, effort, and resources. In addition, changes in requirements were identified. It was also found that, for example, schedule deviations can be mitigated by reducing the size of tasks and adding measurements.
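The kind of follow-up measurement the thesis calls for can be sketched as a relative-deviation metric; the figures below are hypothetical and purely illustrative, not taken from the study:

```python
def deviation_pct(planned, actual):
    # Relative deviation of an actual project figure from its planned value, in percent.
    # Positive values indicate overrun; negative values indicate coming in under plan.
    return 100.0 * (actual - planned) / planned

# Hypothetical follow-up figures for one project (illustrative numbers only).
schedule_dev = deviation_pct(planned=120, actual=150)      # days: +25.0 % overrun
budget_dev = deviation_pct(planned=80000, actual=92000)    # cost: +15.0 % overrun
```

Tracking such deviations per task, rather than only at project level, is one way to make the measurement granularity match the "reduce task size" recommendation above.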
Abstract:
A variation of task analysis was used to build an empirical model of how therapists may facilitate the client assimilation process, as described in the Assimilation of Problematic Experiences Scale. A rational model was specified and considered in light of an analysis of therapist in-session performances (N = 117) drawn from six inpatient therapies for depression. The therapist interventions were measured with the Comprehensive Psychotherapeutic Interventions Rating Scale. Consistent with the rational model, confronting interventions were particularly useful in helping clients elaborate insight. However, rather than there being a small number of progress-related interventions at lower levels of assimilation, therapists' use of interventions was broader than hypothesized and drew from a wide range of therapeutic approaches. Concerning the higher levels of assimilation, there were insufficient data to allow an analysis of the therapists' progress-related interventions.
Abstract:
Activity decreases, or deactivations, of midline and parietal cortical brain regions are routinely observed in human functional neuroimaging studies that compare periods of task-based cognitive performance with passive states, such as rest. It is now widely held that such task-induced deactivations index a highly organized "default-mode network" (DMN): a large-scale brain system whose discovery has had broad implications in the study of human brain function and behavior. In this work, we show that common task-induced deactivations from rest also occur outside of the DMN as a function of increased task demand. Fifty healthy adult subjects performed two distinct functional magnetic resonance imaging tasks that were designed to reliably map deactivations from a resting baseline. As primary findings, increases in task demand consistently modulated the regional anatomy of DMN deactivation. At high levels of task demand, robust deactivation was observed in non-DMN regions, most notably, the posterior insular cortex. Deactivation of this region was directly implicated in a performance-based analysis of experienced task difficulty. Together, these findings suggest that task-induced deactivations from rest are not limited to the DMN and extend to brain regions typically associated with integrative sensory and interoceptive processes.
Abstract:
In this paper we describe a taxonomy of task demands which distinguishes between Task Complexity, Task Condition and Task Difficulty. We then describe three theoretical claims and predictions of the Cognition Hypothesis (Robinson 2001, 2003b, 2005a) concerning the effects of task complexity on: (a) language production; (b) interaction and uptake of information available in the input to tasks; and (c) individual differences-task interactions. Finally we summarize the findings of the empirical studies in this special issue which all address one or more of these predictions and point to some directions for continuing, future research into the effects of task complexity on learning and performance.
Abstract:
After incidentally learning about a hidden regularity, participants can either continue to solve the task as instructed or, alternatively, apply a shortcut. Past research suggests that the amount of conflict implied by adopting a shortcut seems to bias the decision for vs. against continuing instruction-coherent task processing. We explored whether this decision might transfer from one incidental learning task to the next. Theories that conceptualize strategy change in incidental learning as a learning-plus-decision phenomenon suggest that high demands to adhere to instruction-coherent task processing in Task 1 will impede shortcut usage in Task 2, whereas low control demands will foster it. We sequentially applied two established incidental learning tasks differing in stimuli, responses and hidden regularity (the alphabet verification task followed by the serial reaction task, SRT). While some participants experienced a complete redundancy in the task material of the alphabet verification task (low demands to adhere to instructions), for others the redundancy was only partial. Thus, shortcut application would have led to errors (high demands to follow instructions). The low control demand condition showed the strongest usage of the fixed and repeating sequence of responses in the SRT. The transfer results are in line with the learning-plus-decision view of strategy change in incidental learning, rather than with resource theories of self-control.