868 results for 230106 Real and Complex Functions
Abstract:
This thesis studied the effect of (i) the number of grating components and (ii) parameter randomisation on root-mean-square (r.m.s.) contrast sensitivity and spatial integration. The effectiveness of spatial integration without external spatial noise depended on the number of equally spaced orientation components in the sum of gratings. The critical area marking the saturation of spatial integration decreased as the number of components increased from 1 to 5-6, but increased again at 8-16 components. The critical area behaved similarly as a function of the number of grating components when stimuli consisted of 3, 6 or 16 components with different orientations and/or phases embedded in spatial noise. Spatial integration thus seemed to depend on the global Fourier structure of the stimulus. Spatial integration was similar for sums of two vertical cosine or sine gratings with various Michelson contrasts in noise. The critical area for a grating sum was found to be the sum of the logarithmic critical areas of the component gratings, weighted by their relative Michelson contrasts. The human visual system was modelled as a simple image processor in which the visual stimulus is first low-pass filtered by the optical modulation transfer function of the human eye and then high-pass filtered, up to the spatial cut-off frequency determined by the lowest neural sampling density, by the neural modulation transfer function of the visual pathways. Internal noise is then added before signal interpretation occurs in the brain. Detection is mediated by a local, spatially windowed matched filter. The model was extended to complex stimuli and described the data successfully. The shape of the spatial integration function was similar for non-randomised and randomised simple and complex gratings. However, orientation and/or phase randomisation reduced r.m.s. contrast sensitivity by a factor of 2.
The effect of parameter randomisation on spatial integration was modelled under the assumption that human observers change their strategy from cross-correlation (i.e., a matched filter) to auto-correlation detection when uncertainty is introduced to the task. The model described the data accurately.
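The contrast between the two observer strategies can be illustrated with a minimal sketch (hypothetical Python, not the thesis's actual model): a cross-correlation (matched-filter) observer correlates the noisy stimulus with a known template, whereas an auto-correlation observer, facing parameter uncertainty, can only measure the stimulus's own energy. The grating, noise level and sizes below are illustrative assumptions.

```python
import math
import random

def cross_correlation_response(stimulus, template):
    """Matched-filter (cross-correlation) detector: the observer
    correlates the stimulus with a known template."""
    return sum(s * t for s, t in zip(stimulus, template))

def auto_correlation_response(stimulus):
    """Auto-correlation detector: without knowing the template, the
    observer can only measure the stimulus's own energy."""
    return sum(s * s for s in stimulus)

# Hypothetical demo: a cosine 'grating' template in additive noise.
random.seed(0)
template = [math.cos(2 * math.pi * 4 * x / 64) for x in range(64)]
noise = [random.gauss(0, 0.5) for _ in range(64)]
signal_present = [t + n for t, n in zip(template, noise)]
signal_absent = noise

# The matched filter separates signal-present from signal-absent
# intervals more reliably than the energy measure.
print(cross_correlation_response(signal_present, template) >
      cross_correlation_response(signal_absent, template))
```

Both detectors respond more strongly when the signal is present; the matched filter simply does so with a much better signal-to-noise ratio, which is consistent with the factor-of-2 sensitivity loss described above.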
Abstract:
The Roma population has become a highly debated policy issue in the European Union (EU). The EU acknowledges that this ethnic minority faces extreme poverty and complex social and economic problems: 52% of the Roma population live in extreme poverty and 75% in poverty (Soros Foundation, 2007, p. 8), with a life expectancy at birth about ten years lower than that of the majority population. As a result, Romania has received a great deal of policy attention and EU funding, being eligible for 19.7 billion Euros from the EU for 2007-2013. Yet progress is slow, and it is debated whether Romania's government and companies were capable of using these funds (EurActiv.ro, 2012). Analysing three case studies, this research looks at policy implementation in relation to the role of Roma networks in different geographical regions of Romania. It offers insights into how to get things done in complex settings and explains responses to the Roma problem as a 'wicked' policy issue. This longitudinal research was conducted between 2008 and 2011, comprising 86 semi-structured interviews, 15 observations and documentary sources, using a purposive sample focused on the institutions responsible for implementing social policies for Roma: Public Health Departments, School Inspectorates, City Halls, Prefectures, and NGOs. Respondents included governmental workers, academics, Roma school mediators, Roma health mediators, Roma experts, Roma Councillors, NGO workers, and Roma service users. By triangulating data collected with various methods from various categories of respondents, a comprehensive and precise representation of Roma network practices was created. The provisions of the 2001 'Governmental Strategy to Improve the Situation of the Roma Population' facilitated the formation of a Roma network by introducing special jobs in local and central administration. Across counties, resources, people, their skills, and practices varied.
In contrast to the communist period, a new Roma elite emerged: social entrepreneurs set the pace of change by creating either closed cliques or open alliances and by using more or less transparent practices. This research deploys the concept of social/institutional entrepreneurs to analyse how key actors influence clique and alliance formation and functioning. Significantly, by contrasting three case studies, it shows that both closed cliques and open alliances help to achieve public policy network objectives, but that closed cliques can also lead to failure to improve the health and education of Roma people in a certain region.
Abstract:
The thesis contributes to the evolving process of moving the study of Complexity from the arena of metaphor to something real and operational. Acknowledging this phenomenon ultimately changes the underlying assumptions made about working environments and leadership: organisations are dynamic, and so should their leaders be. Dynamic leaders are behaviourally complex. Behavioural Complexity is a product of behavioural repertoire (the range of behaviours available) and behavioural differentiation (applying the appropriate behaviour to the demands of the situation). Behavioural Complexity was operationalised using the Competing Values Framework (CVF). The CVF is a measure that captures the extent to which leaders demonstrate four behaviours, corresponding to four quadrants: Control, Compete, Collaborate and Create, which are argued to be critical to all types of organisational leadership. The results provide evidence that Behavioural Complexity is an enabler of leadership effectiveness; that Organisational Complexity (captured using a new measure developed in the thesis) moderates the relationship between Behavioural Complexity and leadership effectiveness; and that leadership training supports Behavioural Complexity in contributing to leadership effectiveness. Most definitions of leadership come down to changing people's behaviour. Such definitions have encouraged leadership research to focus on how to elicit change in others, when some of that attention should perhaps have been on eliciting change in the leaders themselves. It is hoped that this research will provoke interest in the factors that cause behavioural change in leaders and in turn enable leadership effectiveness, and in doing so contribute to a better understanding of leadership in organisations.
Abstract:
Dementia with Lewy bodies ('Lewy body dementia' or 'diffuse Lewy body disease') (DLB) is the second most common form of dementia to affect elderly people, after Alzheimer's disease. A combination of the clinical symptoms of Alzheimer's disease and Parkinson's disease is present in DLB and the disorder is classified as a 'parkinsonian syndrome', a group of diseases which also includes Parkinson's disease, progressive supranuclear palsy, corticobasal degeneration and multiple system atrophy. Characteristics of DLB are fluctuating cognitive ability with pronounced variations in attention and alertness, recurrent visual hallucinations and spontaneous motor features, including akinesia, rigidity and tremor. In addition, DLB patients may exhibit visual signs and symptoms, including defects in eye movement, pupillary function and complex visual functions. Visual symptoms may aid the differential diagnoses of parkinsonian syndromes. Hence, the presence of visual hallucinations supports a diagnosis of Parkinson's disease or DLB rather than progressive supranuclear palsy. DLB and Parkinson's disease may exhibit similar impairments on a variety of saccadic and visual perception tasks (visual discrimination, space-motion and object-form recognition). Nevertheless, deficits in orientation, trail-making and reading the names of colours are often significantly greater in DLB than in Parkinson's disease. As primary eye-care practitioners, optometrists should be able to work with patients with DLB and their carers to manage their visual welfare.
Abstract:
This thesis begins with a review of the literature on team-based working in organisations, highlighting the variations in research findings, and the need for greater precision in our measurement of teams. It continues with an illustration of the nature and prevalence of real and pseudo team-based working, by presenting results from a large sample of secondary data from the UK National Health Service. Results demonstrate that ‘real teams’ have an important and significant impact on the reduction of many work-related safety outcomes. Based on both theoretical and methodological limitations of existing approaches, the thesis moves on to provide a clarification and extension of the ‘real team’ construct, demarcating this from other (pseudo-like) team typologies on a sliding scale, rather than a simple dichotomy. A conceptual model for defining real teams is presented, providing a theoretical basis for the development of a scale on which teams can be measured for varying extents of ‘realness’. A new twelve-item scale is developed and tested with three samples of data comprising 53 undergraduate teams, 52 postgraduate teams, and 63 public sector teams from a large UK organisation. Evidence for the content, construct and criterion-related validity of the real team scale is examined over seven separate validation studies. Theoretical, methodological and practical implications of the real team scale are then discussed.
Abstract:
The Semantic Web relies on carefully structured, well-defined data to allow machines to communicate and understand one another. In many domains (e.g. geospatial), the data being described contain some uncertainty, often due to incomplete knowledge; meaningful processing of these data requires the uncertainties to be carefully analysed and integrated into the process chain. Currently, within the Semantic Web there is no standard mechanism for the interoperable description and exchange of uncertain information, which renders the automated processing of such information implausible, particularly where error must be considered and captured as it propagates through a processing sequence. In particular, we adopt a Bayesian perspective and focus on the case where the inputs and outputs are naturally treated as random variables. This paper discusses a solution to the problem in the form of the Uncertainty Markup Language (UncertML). UncertML is a conceptual model, realised as an XML schema, that allows uncertainty to be quantified in a variety of ways, i.e. as realisations, statistics and probability distributions. UncertML is based on a soft-typed XML schema design that provides a generic framework from which any statistic or distribution may be created. Making extensive use of Geography Markup Language (GML) dictionaries, UncertML provides a collection of definitions for common uncertainty types. Containing both written descriptions and mathematical functions encoded as MathML, the definitions within these dictionaries provide a robust mechanism for defining any statistic or distribution and can be easily extended. Uniform Resource Identifiers (URIs) are used to introduce semantics to the soft-typed elements by linking to these dictionary definitions. The INTAMAP (INTeroperability and Automated MAPping) project provides a use case for UncertML.
This paper demonstrates how observation errors can be quantified using UncertML and wrapped within an Observations & Measurements (O&M) Observation. The interpolation service uses the information within these observations to influence the prediction outcome. The output uncertainties may be encoded in a variety of UncertML types, e.g. a series of marginal Gaussian distributions, a set of statistics, such as the first three marginal moments, or a set of realisations from a Monte Carlo treatment. Quantifying and propagating uncertainty in this way allows such interpolation results to be consumed by other services. This could form part of a risk management chain or a decision support system, and ultimately paves the way for complex data processing chains in the Semantic Web.
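As a sketch of one summary mentioned above, the first three marginal moments can be computed from a set of Monte Carlo realisations before being encoded. This is illustrative Python only; UncertML itself encodes such statistics in XML, and the function and sample below are hypothetical.

```python
import math
import random

def marginal_moments(realisations):
    """Mean, variance and skewness of a Monte Carlo sample -- the
    kind of compact statistical summary UncertML can encode in
    place of the full set of realisations."""
    n = len(realisations)
    mean = sum(realisations) / n
    var = sum((x - mean) ** 2 for x in realisations) / n
    sd = math.sqrt(var)
    skew = (sum(((x - mean) / sd) ** 3 for x in realisations) / n
            if sd > 0 else 0.0)
    return mean, var, skew

# Hypothetical interpolation output: draws from a Gaussian.
random.seed(1)
sample = [random.gauss(10.0, 2.0) for _ in range(10000)]
mean, var, skew = marginal_moments(sample)
print(mean, var, skew)  # near 10, 4 and 0 for this Gaussian sample
```

Either representation (the realisations themselves or the moment summary) can then be passed down a processing chain, which is the interoperability point the paragraph makes.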
Abstract:
Metaphors have been increasingly associated with cognitive functions: metaphors structure how we think and express ourselves. Metaphors are grounded in our basic physical experience, which is one reason why certain abstract concepts are expressed in more concrete terms, such as visible entities, journeys and other types of movement, spaces, etc. This communicative relevance also applies to specialised, institutionalised settings and genres, such as those produced in or related to higher education institutions, among which is spoken academic discourse. A significant research gap has been identified regarding metaphors in spoken academic discourse, even though, with increasing numbers of students in higher education and growing international research cooperation (e.g. in the form of invited lectures), spoken academic discourse is nearly omnipresent. In this context, research talks are a key research genre. A mixed-methods study was conducted, investigating metaphors in a corpus of eight fully transcribed German and English L1 speaker conference talks and invited lectures, totalling 440 minutes. A wide range of metaphor categories and functions was identified in the corpus. Abstract research concepts, such as results or theories, are expressed in terms of concrete visual entities that can be seen or shown, but also in terms of journeys or other forms of movement. The functions of these metaphors are simplification, rhetorical emphasis, theory construction, and pedagogic illustration. For both the speaker and the audience or discussants, anthropomorphism makes abstract and complex ideas concretely imaginable and, at the same time, more interesting, because the contents of the talk appear livelier and hence closer to their own experience, which secures the audience's attention. These metaphor categories are present in both the English and the German subcorpus of this study, with similar functions.
Abstract:
Assessment criteria are increasingly incorporated into teaching, making it important to clarify the pedagogic status of the qualities to which they refer. We reviewed theory and evidence about the extent to which four core criteria for student writing (critical thinking, use of language, structuring, and argument) refer to the outcomes of three types of learning: generic skills learning, a deep approach to learning, and complex learning. The analysis showed that all four of the core criteria describe, to some extent, properties of text resulting from using skills, but none qualify fully as descriptions of the outcomes of applying generic skills. Most also describe certain aspects of the outcomes of taking a deep approach to learning. Critical thinking and argument correspond most closely to the outcomes of complex learning. At lower levels of performance, use of language and structuring describe the outcomes of applying transferable skills; at higher levels of performance, they describe the outcomes of taking a deep approach to learning. We propose that the type of learning required to meet the core criteria is most usefully and accurately conceptualized as the learning of complex skills, and that this provides a conceptual framework for maximizing the benefits of using assessment criteria as part of teaching. © 2006 Taylor & Francis.
Abstract:
The paper has been presented at the 12th International Conference on Applications of Computer Algebra, Varna, Bulgaria, June, 2006
Abstract:
It is proved that if the increasing sequence {k_n}_{n=0}^∞ of nonnegative integers has density greater than 1/2 and D is an arbitrary simply connected subregion of C\R, then the system of Hermite associated functions {G_{k_n}(z)}_{n=0}^∞ is complete in the space H(D) of complex functions holomorphic in D.
Abstract:
∗ The work is partially supported by NSFR Grant No MM 409/94.
Abstract:
2000 Mathematics Subject Classification: 33C60, 33C20, 44A15
Abstract:
Mathematics Subject Classification: 30B10, 30B30; 33C10, 33C20
Abstract:
The popularity of online social media platforms provides an unprecedented opportunity to study real-world complex networks of interactions. However, releasing these data to researchers and the public comes at the cost of potentially exposing private and sensitive user information. It has been shown that naively anonymizing a network by removing the identity of the nodes is not sufficient to preserve users' privacy. In order to deal with malicious attacks, k-anonymity solutions have been proposed to partially obfuscate topological information that can be used to infer nodes' identity. In this paper, we study the problem of ensuring k-anonymity in time-varying graphs, i.e., graphs whose structure changes over time, and in multi-layer graphs, i.e., graphs with multiple types of links. More specifically, we examine the case in which the attacker has access to the degrees of the nodes. The goal is to generate a new graph where, given the degree of a node in each (temporal) layer of the graph, that node remains indistinguishable from k-1 other nodes in the graph. To achieve this, we find the optimal partitioning of the graph nodes such that the cost of anonymizing the degree information within each group is minimum. We show that this reduces to a special case of a Generalized Assignment Problem, and we propose a simple yet effective algorithm to solve it. Finally, we introduce an iterated linear programming approach to enforce the realizability of the anonymized degree sequences. The efficacy of the method is assessed through an extensive set of experiments on synthetic and real-world graphs.
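A minimal sketch of the underlying grouping idea, under simplifying assumptions: for a single layer, sort the degree sequence and partition it into contiguous groups of at least k, raising each degree to its group maximum so every degree value appears at least k times. This is the standard dynamic programme for degree anonymization, not the paper's Generalized Assignment formulation or its linear-programming realizability step; all names are hypothetical.

```python
def group_cost(degrees, i, j):
    """Cost of raising degrees[i:j] (sorted descending) to the
    group's maximum, degrees[i]."""
    top = degrees[i]
    return sum(top - d for d in degrees[i:j])

def anonymize_degrees(degrees, k):
    """Partition a degree sequence into contiguous groups of size
    >= k, minimising total degree-raising cost. Groups larger than
    2k-1 never help, so the inner loop is bounded accordingly.
    Note: realizability of the result as a graph is NOT enforced."""
    degrees = sorted(degrees, reverse=True)
    n = len(degrees)
    cost = [float("inf")] * (n + 1)  # cost[j]: best for first j degrees
    cut = [0] * (n + 1)              # cut[j]: start of the last group
    cost[0] = 0.0
    for j in range(k, n + 1):
        for i in range(max(0, j - 2 * k + 1), j - k + 1):
            c = cost[i] + group_cost(degrees, i, j)
            if c < cost[j]:
                cost[j], cut[j] = c, i
    # Recover the anonymized sequence from the optimal cuts.
    result = degrees[:]
    j = n
    while j > 0:
        i = cut[j]
        for t in range(i, j):
            result[t] = degrees[i]
        j = i
    return result

print(anonymize_degrees([5, 5, 4, 3, 2, 2, 1], 2))
# → [5, 5, 4, 4, 2, 2, 2]: each degree now occurs at least twice
```

In the multi-layer setting of the paper, the grouping must hold simultaneously across layers, which is what makes the problem a (special case of a) Generalized Assignment Problem rather than this one-dimensional programme.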
Abstract:
The purpose of this paper is to draw on research that discusses the relationship between interest and metacognitive functions and its effect on engaging students in the writing process. Results indicate students who are interested in their writing activities engage in metacognitive strategies, remain focused, and complete their tasks.