986 results for Number sense
Abstract:
Master's dissertation presented to Universidade Fernando Pessoa in partial fulfillment of the requirements for the degree of Master in Psychology.
Abstract:
Dissertation presented to Universidade Fernando Pessoa in partial fulfillment of the requirements for the degree of Master in Architecture and Urbanism.
Abstract:
Background: Single nucleotide polymorphisms (SNPs) have been used extensively in genetics and epidemiology studies. Traditionally, SNPs that did not pass the Hardy-Weinberg equilibrium (HWE) test were excluded from these analyses. Many investigators have addressed possible causes of departure from HWE, including genotyping errors, population admixture and segmental duplication. Recent large-scale surveys have revealed abundant structural variations in the human genome, including copy number variations (CNVs), which suggests that a significant number of SNPs must lie within these regions and may therefore deviate from HWE. Results: We performed a Bayesian analysis of the potential effect of copy number variation, segmental duplication and genotyping errors on the behavior of SNPs. Our results suggest that copy number variation is a major cause of HWE violation for SNPs with a small minor allele frequency, when the sample size is large and the genotyping error rate is 0–1%. Conclusions: Our study provides the posterior probability that a SNP falls in a CNV or a segmental duplication, given the observed allele frequency of the SNP, the sample size and the significance level of the HWE test.
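To make the testing step concrete, here is a minimal sketch of the classical chi-square HWE test whose rejections this abstract studies. It is not the authors' Bayesian model, and the genotype counts in the example are illustrative only.

```python
# Minimal sketch of the classical chi-square test for Hardy-Weinberg
# equilibrium; NOT the authors' Bayesian model, just the frequentist
# test whose rejections they analyze.
from scipy.stats import chi2

def hwe_chi_square(n_aa: int, n_ab: int, n_bb: int) -> float:
    """Return the HWE p-value for observed genotype counts."""
    n = n_aa + n_ab + n_bb              # sample size
    p = (2 * n_aa + n_ab) / (2 * n)     # frequency of allele A
    q = 1.0 - p                         # frequency of allele B
    expected = [n * p * p, 2 * n * p * q, n * q * q]
    observed = [n_aa, n_ab, n_bb]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected) if e > 0)
    return chi2.sf(stat, df=1)          # 1 degree of freedom

# Illustrative example: a SNP with a small minor allele frequency in a
# large sample; a tiny p-value flags a departure from HWE.
print(hwe_chi_square(900, 80, 20))
```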
Abstract:
The best-effort nature of the Internet poses a significant obstacle to the deployment of many applications that require guaranteed bandwidth. In this paper, we present a novel approach that enables two edge/border routers, which we call Internet Traffic Managers (ITMs), to use an adaptive number of TCP connections to set up a tunnel of desirable bandwidth between them. The number of TCP connections that comprise this tunnel is elastic in the sense that it increases or decreases in tandem with competing cross traffic to maintain a target bandwidth. An origin ITM would then schedule incoming packets from an application requiring guaranteed bandwidth over that elastic tunnel. Unlike many proposed solutions that aim to deliver soft QoS guarantees, our elastic-tunnel approach does not require any support from core routers (as with IntServ and DiffServ); it is scalable in the sense that core routers do not have to maintain per-flow state (as with IntServ); and it is readily deployable within a single ISP or across multiple ISPs. To evaluate our approach, we develop a flow-level control-theoretic model to study the transient behavior of established elastic TCP-based tunnels. The model captures the effect of cross-traffic connections on our bandwidth allocation policies. Through extensive simulations, we confirm the effectiveness of our approach in providing soft bandwidth guarantees. We also outline our kernel-level ITM prototype implementation.
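As a rough illustration of the elastic-tunnel idea, the sketch below adapts the number of TCP connections toward a target bandwidth with a simple proportional rule. The function name and control law are assumptions for illustration, not the paper's control-theoretic model.

```python
# Hedged, simplified sketch: adapt the number of TCP connections so the
# tunnel's aggregate throughput tracks a target. If each connection gets
# roughly a fair share of the bottleneck, aggregate throughput grows
# roughly linearly in the connection count, so scaling by the bandwidth
# deficit is a natural first cut.
import math

def adapt_connection_count(n_conns: int,
                           measured_bw: float,
                           target_bw: float) -> int:
    """One control step: scale the connection count by target/measured."""
    if measured_bw <= 0:
        return n_conns + 1                       # probe upward when starved
    scaled = n_conns * (target_bw / measured_bw)
    return max(1, math.ceil(scaled))

# Example: 4 connections deliver 2 Mb/s against cross traffic; we want 5 Mb/s.
print(adapt_connection_count(4, 2.0, 5.0))       # -> 10 connections
```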
Abstract:
This paper proposes the use of in-network caches (which we call Angels) to reduce the Minimum Distribution Time (MDT) of a file from a seeder – a node that possesses the file – to a set of leechers – nodes that are interested in downloading the file. An Angel is not a leecher in the sense that it is not interested in receiving the entire file; rather, it is interested in minimizing the MDT to all leechers, and as such uses its storage and up/down-link capacity to cache and forward parts of the file to other peers. We extend the analytical results by Kumar and Ross [1] to account for the presence of angels by deriving a new lower bound for the MDT. We show that this newly derived lower bound is tight by proposing a distribution strategy under the assumptions of a fluid model. We present a GroupTree heuristic that addresses the impracticalities of the fluid model. We evaluate our designs through simulations that show that our GroupTree heuristic outperforms other heuristics, that it scales well with an increasing number of leechers, and that it closely approaches the optimal theoretical bounds.
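For context, the sketch below computes the classical Kumar-Ross lower bound on MDT that the paper extends. The angel-aware bound itself is derived in the paper and is not reproduced here.

```python
# Baseline Kumar-Ross lower bound on Minimum Distribution Time for P2P
# file distribution: the seeder must push at least one copy, the slowest
# leecher must receive the whole file, and the aggregate upload capacity
# must carry N copies in total.
def mdt_lower_bound(file_size: float,
                    seeder_upload: float,
                    leecher_uploads: list[float],
                    leecher_downloads: list[float]) -> float:
    """All rates in consistent units (e.g. bits/s); file_size in bits."""
    n = len(leecher_downloads)
    seeder_limit = file_size / seeder_upload
    slowest_leecher = file_size / min(leecher_downloads)
    aggregate = n * file_size / (seeder_upload + sum(leecher_uploads))
    return max(seeder_limit, slowest_leecher, aggregate)

# Example: 1 Gb file, 100 Mb/s seeder, 10 leechers at 20 Mb/s up / 50 Mb/s down.
print(mdt_lower_bound(1e9, 100e6, [20e6] * 10, [50e6] * 10))   # ~33.3 s
```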
Abstract:
We propose a multi-object multi-camera framework for tracking large numbers of tightly spaced objects that rapidly move in three dimensions. We formulate the problem of finding correspondences across multiple views as a multidimensional assignment problem and use a greedy randomized adaptive search procedure to solve this NP-hard problem efficiently. To account for occlusions, we relax the one-to-one constraint that one measurement corresponds to one object and iteratively solve the relaxed assignment problem. After correspondences are established, object trajectories are estimated by stereoscopic reconstruction using an epipolar-neighborhood search. We embedded our method into a tracker-to-tracker multi-view fusion system that not only obtains the three-dimensional trajectories of closely moving objects but also accurately resolves track ambiguities that could not be settled from single views due to occlusion. We conducted experiments to validate our greedy assignment procedure and our technique to recover from occlusions. We successfully track hundreds of flying bats and provide an analysis of their group behavior based on 150 reconstructed 3D trajectories.
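The following toy sketch shows the greedy randomized construction step at the heart of a GRASP solver, reduced to a two-view correspondence problem. The cost matrix, candidate-list size, and strict one-to-one matching are illustrative simplifications of the paper's multidimensional formulation.

```python
# Toy GRASP construction step for a two-view correspondence problem:
# at each step, pick uniformly among the rcl_size cheapest remaining
# pairs (the restricted candidate list) rather than the single cheapest,
# so repeated runs explore different feasible matchings.
import random

def greedy_randomized_assignment(cost, rcl_size=3, seed=0):
    """Match detections in view 1 to detections in view 2."""
    rng = random.Random(seed)
    pairs = [(cost[i][j], i, j)
             for i in range(len(cost)) for j in range(len(cost[0]))]
    pairs.sort()                                  # cheapest pairs first
    used_i, used_j, matching = set(), set(), []
    while True:
        candidates = [p for p in pairs
                      if p[1] not in used_i and p[2] not in used_j][:rcl_size]
        if not candidates:
            break
        c, i, j = rng.choice(candidates)          # randomized greedy pick
        matching.append((i, j, c))
        used_i.add(i)
        used_j.add(j)
    return matching

cost = [[1.0, 5.0, 9.0], [4.0, 2.0, 8.0], [7.0, 6.0, 3.0]]
print(greedy_randomized_assignment(cost))
```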
Abstract:
This thesis proposes the use of in-network caches (which we call Angels) to reduce the Minimum Distribution Time (MDT) of a file from a seeder – a node that possesses the file – to a set of leechers – nodes that are interested in downloading the file. An Angel is not a leecher in the sense that it is not interested in receiving the entire file; rather, it is interested in minimizing the MDT to all leechers, and as such uses its storage and up/down-link capacity to cache and forward parts of the file to other peers. We extend the analytical results by Kumar and Ross (2006) to account for the presence of angels by deriving a new lower bound for the MDT. We show that this newly derived lower bound is tight by proposing a distribution strategy under the assumptions of a fluid model. We present a GroupTree heuristic that addresses the impracticalities of the fluid model. We evaluate our designs through simulations that show that our GroupTree heuristic outperforms other heuristics, that it scales well with an increasing number of leechers, and that it closely approaches the optimal theoretical bounds.
Abstract:
In this paper we present Statistical Rate Monotonic Scheduling (SRMS), a generalization of the classical RMS results of Liu and Layland that allows scheduling periodic tasks with highly variable execution times and statistical QoS requirements. Similar to RMS, SRMS has two components: a feasibility test and a scheduling algorithm. The feasibility test for SRMS ensures that, using SRMS's scheduling algorithm, it is possible for a given periodic task set to share a given resource (e.g. a processor, communication medium, or switching device) in such a way that sharing does not violate any of the periodic tasks' QoS constraints. The SRMS scheduling algorithm incorporates a number of unique features. First, it allows for fixed-priority scheduling that keeps the tasks' value (or importance) independent of their periods. Second, it allows for job admission control, rejecting jobs that are not guaranteed to finish by their deadlines as soon as they are released, thus enabling the system to take necessary compensating actions; admission control also preserves resources, since no time is spent on jobs that will miss their deadlines anyway. Third, SRMS integrates reservation-based and best-effort resource scheduling seamlessly: reservation-based scheduling ensures the delivery of the minimal requested QoS, while best-effort scheduling ensures that unused, reserved bandwidth is not wasted but used to improve QoS further. Fourth, SRMS allows a system to deal gracefully with overload conditions by ensuring a fair deterioration in QoS across all tasks, as opposed to penalizing tasks with longer periods, for example. Finally, SRMS has the added advantage that its schedulability test is simple and its scheduling algorithm has constant overhead, in the sense that the complexity of the scheduler does not depend on the number of tasks in the system. We have evaluated SRMS against a number of alternative scheduling algorithms suggested in the literature (e.g. RMS and slack stealing), as well as refinements thereof, which we describe in this paper. Consistently throughout our experiments, SRMS provided the best performance. In addition, to evaluate the optimality of SRMS, we have compared it to an inefficient yet optimal scheduler for task sets with harmonic periods.
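For reference, the sketch below implements the classical Liu-Layland feasibility test for RMS that SRMS generalizes. The SRMS feasibility test itself, which handles variable execution times and statistical QoS, is defined in the paper, not here.

```python
# Classical Liu-Layland utilization-bound test for Rate Monotonic
# Scheduling: a sufficient (not necessary) condition for schedulability.
def rms_feasible(tasks: list[tuple[float, float]]) -> bool:
    """tasks: list of (worst_case_execution_time, period) pairs.

    Feasible if total utilization <= n * (2^(1/n) - 1).
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)
    return utilization <= bound

# Example: three tasks with utilizations 0.2 + 0.25 + 0.3 = 0.75,
# against a bound of about 0.7798 for n = 3.
print(rms_feasible([(1, 5), (2.5, 10), (6, 20)]))   # -> True
```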
Abstract:
How do our brains transform the "blooming buzzing confusion" of daily experience into a coherent sense of self that can learn and selectively attend to important information? How do local signals at multiple processing stages, none of which has a global view of brain dynamics or behavioral outcomes, trigger learning at multiple synaptic sites when appropriate, and prevent learning when inappropriate, to achieve useful behavioral goals in a continually changing world? How does the brain allow synaptic plasticity at a remarkably rapid rate, as anyone who has gone to an exciting movie is readily aware, yet also protect useful memories from catastrophic forgetting? A neural model provides a unified answer by explaining and quantitatively simulating data about single cell biophysics and neurophysiology, laminar neuroanatomy, aggregate cell recordings (current-source densities, local field potentials), large-scale oscillations (beta, gamma), and spike-timing dependent plasticity, and functionally linking them all to cognitive information processing requirements.
Abstract:
Neural network models of working memory, called Sustained Temporal Order REcurrent (STORE) models, are described. They encode the invariant temporal order of sequential events in short term memory (STM) in a way that mimics cognitive data about working memory, including primacy, recency, and bowed order and error gradients. As new items are presented, the pattern of previously stored items is invariant in the sense that relative activations remain constant through time. This invariant temporal order code enables all possible groupings of sequential events to be stably learned and remembered in real time, even as new events perturb the system. Such a competence is needed to design self-organizing temporal recognition and planning systems in which any subsequence of events may need to be categorized in order to control and predict future behavior or external events. STORE models show how arbitrary event sequences may be invariantly stored, including repeated events. A preprocessor interacts with the working memory to represent event repeats in spatially separate locations. It is shown why at least two processing levels are needed to invariantly store events presented with variable durations and interstimulus intervals. It is also shown how network parameters control the type and shape of the primacy, recency, or bowed temporal order gradients that will be stored.
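The invariance property can be illustrated with a toy update rule: scaling all previously stored activations by one common factor leaves their ratios, and hence the stored temporal order, unchanged. This caricature is not the STORE dynamics; the parameter mu and the normalization are illustrative assumptions.

```python
# Toy illustration of the invariance property: when a new item is stored,
# all old activations are scaled by a single common factor, so their
# ratios (the stored temporal order) never change. NOT the STORE model's
# actual equations.
def store_item(activations: list[float], mu: float = 0.8) -> list[float]:
    """Scale old items by mu, append the new item at unit strength.

    mu < 1 yields a recency gradient (newer items stronger); mu > 1
    yields a primacy gradient; the full model's parameters can also
    produce bowed gradients.
    """
    updated = [a * mu for a in activations] + [1.0]
    total = sum(updated)
    return [a / total for a in updated]          # normalize total activity

seq = []
for _ in "ABC":                                   # present three items
    seq = store_item(seq)
print(seq)   # ratios among earlier items are preserved at every step
```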
Abstract:
Evaluation of temperature distribution in cold rooms is an important consideration in the design of food storage solutions. Two common approaches used in both industry and academia to address this question are the deployment of wireless sensors and modelling with Computational Fluid Dynamics (CFD). However, for a real-world evaluation of temperature distribution in a cold room, both approaches have limitations: large-scale deployment of wireless sensors (to obtain a high-resolution temperature distribution) is economically unfeasible, while CFD modelling alone is usually not accurate enough to yield reliable results. In this paper, we propose a model-based framework that combines wireless sensing with CFD modelling to achieve a satisfactory trade-off between the number of wireless sensors and the accuracy of the temperature profile in cold rooms. A case study is presented to demonstrate the usability of the framework.
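One hedged illustration of how such a hybrid framework might couple the two techniques is to bias-correct the CFD-predicted field against sparse sensor readings. The paper does not specify its framework at this level of detail, so the function and data below are assumptions for illustration.

```python
# Hedged sketch: shift a CFD-predicted temperature field by the mean
# discrepancy between sparse sensor readings and the model's predictions
# at the sensor locations. Illustrative only; not the paper's framework.
import numpy as np

def bias_correct(cfd_field: np.ndarray,
                 sensor_idx: list[tuple[int, int]],
                 sensor_temps: list[float]) -> np.ndarray:
    """Shift the CFD field by the mean sensor-vs-model discrepancy."""
    predicted = np.array([cfd_field[i, j] for i, j in sensor_idx])
    bias = float(np.mean(np.array(sensor_temps) - predicted))
    return cfd_field + bias

# Example: a 3x3 modelled cold-room slice and two sensor readings.
cfd = np.array([[4.0, 4.2, 4.1], [3.9, 4.0, 4.3], [4.1, 4.2, 4.0]])
print(bias_correct(cfd, [(0, 0), (2, 2)], [3.5, 3.6]))
```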
Abstract:
Stress can be understood in terms of the meaning of stressful experiences for individuals. The meaning of stressful experiences involves threats to self-adequacy, where self-adequacy is considered a basic human need. Appropriate research methods are required to explore this aspect of stress. The present study is a qualitative exploration of the stress experienced by a group of 27 students at the National Institute of Higher Education, Limerick (since renamed the University of Limerick). The study was carried out by the resident student counsellor at the college. A model of student stress was explored, based on student developmental needs. The data consist of a series of interviews recorded with each of the 27 students over a 3-month period. These interviews were transcribed and the resulting transcripts are the subject of detailed analysis. The analysis of the data is an account of the student counsellor's sense-making of the students' reported experiences. The aim of the analysis was to reduce the large amounts of data to their most salient aspects in an ordered fashion, so as to examine the application of a developmental model of stress with this group of students. There were two key elements to the analysis. First, the raw data were edited to identify the key statements contained in the interviews. Second, the statements were categorised as a means of summarising the data. The results of the qualitative data analysis were then applied to the developmental model. The analysis of data revealed a number of patterns of stress amongst the sample of students. Patterns of academic over-identification, parental conflict and social inadequacy were particularly noteworthy. These patterns consisted of an integration of academic, family and social stresses within a developmental framework. Gender differences with regard to the need for separateness and belonging are highlighted. Appropriate student stress intervention strategies are discussed. Based on the present results, the relationship between stress and development has been highlighted and is recommended as a firm basis for future studies of stress in general and student stress in particular.
Abstract:
This research is focused on Community Workers located in Southern Ireland, and their understandings and practices of resistance. It is an attempt to explore the ways in which community workers' understandings and practices of resistance are formed and, in turn, inform their sense of identity and their responses to the wider context of community development work in Ireland today. This study is specifically located but also has wider application and relevance because of the extended international reach of neo-liberal and managerial rationalities, and their implications for politics, policy and practice. The study considers resistance in a number of inter-related ways: as a collective oppositional position (with negative and positive dimensions); a personal and/or professional value (associated with the 'expansion of contention'); a strategy for negotiating unequal power relations (in a range of levels and spaces of power); an identity (in relation to the sustaining of 'reflexive subjectivities'); a set of practices (which take into account the interplay between economic, political and cultural influences); and an educational process through which practitioners assess and enact personal and professional agency. Critical theorisations of community development and of the Irish state over time trace the ways in which neo-liberalism and managerialism have inflected community development practice and the positions of community workers and communities in that process. The study draws on James C. Scott, Gramsci, and Barnes and Prior, among others, whose work enabled the interrogation of resistance in relation to everyday practices through engaging with 'hidden transcripts' and spaces. The method chosen was focus group discussions with three groups of community workers located in different counties in Southern Ireland. This method facilitated a deep discourse analysis of practitioners' encounters with resistance in the field of community work. Key findings relate to the various interpretations of the role of resistance, practices of resistance (including current restrictions), the value of resistance work and the conditions that may be conducive to practising resistance.
Abstract:
This qualitative research expands understanding of how information about a range of Novel Food Technologies (NFTs) is used and assimilated, and the implications of this for the evolution of attitudes and acceptance. This work enhances theoretical and applied understanding of citizens' evaluative processes around these technologies. The approach applied involved observations of interactive exchanges between citizens and information providers (i.e. food scientists), during which they discussed a specific technology. This flexible, yet structured, approach revealed how individuals construct meaning around information about specific NFTs. A rich dataset of 42 'deliberate discourse' and 42 post-discourse transcripts was collected. Data analysis encompassed three stages: an initial descriptive account of the complete dataset based on the top-down bottom-up (TDBU) model of attitude formation, followed by inductive and deductive thematic analysis across the selected technology groups. The hybrid thematic analysis identified a Conceptual Model, which represents a holistic perspective on the influences and associated features directing 'sense-making' and ultimate evaluations around the technology clusters. How individuals make sense of these technologies is shaped by: their beliefs, values and personal characteristics; their perceptions of power and control over the application of the technology; and the assumed relevance of the technology and its applications within different contexts. These influences form the frame for the creation of sense-making around the technologies. Internal negotiations between these influences are evident, and evaluations are based on the relative importance of each influence to the individual, which tends to contribute to attitude ambivalence and instability. The findings indicate that the processes of forming and changing attitudes towards these technologies are: complex; dependent on characteristics of the individual, technology, application and product; and impacted by the nature and forms of information provided. Challenges are faced in engaging with the public about these technologies, as levels of knowledge, understanding and interest vary.