944 results for data gathering algorithm


Relevance:

80.00%

Publisher:

Abstract:

This research examines three aspects of teacher identity formation in mathematics teacher education: its cognitive and affective aspects, the image of an ideal teacher directing the developmental process, and identity formation as an on-going process. The formation of emerging teacher identity was approached in a social psychological framework, in which individual development takes place in social interaction with the context through various experiences. The formation of teacher identity is seen as a dynamic, on-going developmental process in which an individual intentionally aspires to the ideal image of being a teacher by developing his or her own competence as a teacher. The starting point was that the formation of teacher identity can be examined by conceptualising the observations that the individual and others make about teacher identity in different situations. The research uses a qualitative case-study approach to the formation of emerging teacher identity, the individual developmental process, and the socially constructed image of an ideal mathematics teacher. Two student cases, John and Mary, and the collective case of teacher educators, representing socially shared views of becoming and being a mathematics teacher, are presented. The development of each student was examined on the basis of three semi-structured interviews supplemented with written products. The data-gathering took place during the 2005–2006 academic year. The collective case about the ideal image provided during the programme was composed of separate case displays of each teacher educator, based mainly on semi-structured interviews in the spring term of 2006. The intentions and aims set for students were of special interest in the interviews with teacher educators. The interview data were analysed following a modified idea of analytic induction. The formation of teacher identity is elaborated through three themes emerging from the theoretical considerations and the cases.
First, the profile of one's present state as a teacher may be scrutinised through the separate affective and cognitive aspects associated with the teaching profession. Differences between individuals arise through different emphases on these aspects. Similarly, the socially constructed image of an ideal teacher may be profiled through a combination of aspects associated with the teaching profession. Second, the ideal image directing the individual developmental process is the level at which individual and social processes meet. Third, the formation of teacher identity is about becoming a teacher in the eyes both of the individual self and of others in the context. It is a challenge for academic mathematics teacher education to support the various cognitive and affective aspects associated with being a teacher in such a way that being a professional, and further development, have a coherent starting point that the individual can internalise.


We consider the asymmetric distributed source coding problem, where the recipient interactively communicates with N correlated informants to gather their data. We are mainly interested in minimizing the worst-case number of informant bits required for successful data-gathering at the recipient, but we are also concerned with minimizing the number of rounds as well as the number of recipient bits. We provide two algorithms: one that optimally minimizes the number of informant bits, and another that trades off the number of informant bits to efficiently reduce the number of rounds and the number of recipient bits.
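As a toy illustration of the interactive setting (not the paper's algorithms; the candidate set and the bisection protocol below are hypothetical), this sketch contrasts a one-shot transmission with a round-by-round exchange in which the informant sends one bit per recipient query:

```python
import math

def one_shot_bits(n):
    """Non-interactive baseline: the informant sends ceil(log2 n) bits in one round."""
    return math.ceil(math.log2(n))

def interactive_gather(value, candidates):
    """Recipient bisects the candidate set; the informant answers one bit per round.
    Returns (decoded value, rounds used, informant bits sent)."""
    rounds = informant_bits = 0
    lo, hi = 0, len(candidates)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # Recipient asks: "is your value in the upper half of the current set?"
        answer = 1 if candidates.index(value) >= mid else 0
        informant_bits += 1
        rounds += 1
        lo, hi = (mid, hi) if answer else (lo, mid)
    return candidates[lo], rounds, informant_bits
```

In this toy both strategies cost the informant the same number of bits; the point of the interactive setting is that correlated side information at the recipient can shrink the candidate set between rounds, trading extra rounds and recipient bits for fewer informant bits.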


We consider a single-hop data-gathering sensor network, consisting of a set of sensor nodes that transmit data periodically to a base-station, and we are interested in maximizing the lifetime of this network. With our definition of network lifetime, and under the assumption that radio transmission energy forms the most significant portion of the total energy consumed at a sensor node, we attempt to enhance the network lifetime by reducing the transmission energy budget of the sensor nodes by exploiting three system-level opportunities. We pose the problem of maximizing lifetime as a max-min optimization problem subject to the constraints of successful data collection and a limited energy supply at each node. This turns out to be an extremely difficult optimization problem to solve. To reduce its complexity, we allow the sensor nodes and the base-station to communicate interactively with each other and employ instantaneous decoding at the base-station. The chief contribution of the paper is to show that the computational complexity of our problem is determined by the complex interplay of various system-level opportunities and challenges.
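Under a lifetime definition of this kind (rounds until the first node exhausts its energy), the max-min objective reduces to a minimum over nodes; a minimal sketch, with illustrative energy and per-round cost figures:

```python
def network_lifetime(energies, per_round_costs):
    """Rounds until the first node dies: min over nodes of energy / per-round cost."""
    return min(e // c for e, c in zip(energies, per_round_costs))
```

Lowering any one node's transmission cost, for example by letting it send fewer bits, raises the minimum and hence the lifetime, which is why the problem centres on reducing per-node transmission budgets.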


We are concerned with maximizing the lifetime of a data-gathering wireless sensor network consisting of a set of nodes directly communicating with a base-station. We model this scenario as m-message interactive communication between multiple correlated informants (sensor nodes) and a recipient (base-station). Within this framework, we show that m-message interactive communication can indeed enhance network lifetime. Both worst-case and average-case performances are considered.


A popular dynamic imaging technique, k-t BLAST (ktB), is studied here for BAR imaging. ktB utilizes correlations in k-space and time to reconstruct the image time series from only a fraction of the data. The algorithm works by unwrapping the aliased Fourier conjugate space of k-t (y-f-space). The unwrapping process utilizes an estimate of the true y-f-space, obtained by acquiring densely sampled low k-space data. The drawbacks of this method include the separate training scan, blurred training estimates, and aliased phase maps. The proposed changes are the incorporation of phase information from the training map and the use of a generalized-series-extrapolated training map. The proposed technique is compared with ktB on real fMRI data. The proposed changes allow ktB to operate at an acceleration factor of 6. Performance is evaluated by comparing activation maps obtained using the reconstructed images. An improvement of up to 10 dB is observed in the PSNR of the activation maps. In addition, a 10% reduction in RMSE is obtained over the entire time series of fMRI images. The peak improvement of the proposed method over ktB is 35%, averaged over five data sets. (C) 2010 Elsevier Inc. All rights reserved.
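The PSNR figure quoted above is the standard peak-signal-to-noise-ratio metric; a minimal sketch of its computation (the signal values are illustrative, not fMRI data):

```python
import math

def psnr(ref, test):
    """Peak signal-to-noise ratio in dB between a reference and a test series."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    peak = max(abs(v) for v in ref)
    return 10 * math.log10(peak ** 2 / mse)
```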


This report derives from the EU-funded research project “Key Factors Influencing Economic Relationships and Communication in European Food Chains” (FOODCOMM). The research consortium consisted of the following organisations: University of Bonn (UNI BONN), Department of Agricultural and Food Marketing Research (overall project co-ordination); Institute of Agricultural Development in Central and Eastern Europe (IAMO), Department for Agricultural Markets, Marketing and World Agricultural Trade, Halle (Saale), Germany; University of Helsinki, Ruralia Institute Seinäjoki Unit, Finland; Scottish Agricultural College (SAC), Food Marketing Research Team - Land Economy Research Group, Edinburgh and Aberdeen; Ashtown Food Research Centre (AFRC), Teagasc, Food Marketing Unit, Dublin; Institute of Agricultural & Food Economics (IAFE), Department of Market Analysis and Food Processing, Warsaw; and Government of Aragon, Center for Agro-Food Research and Technology (CITA), Zaragoza, Spain. The aim of the FOODCOMM project was to examine the role (prevalence, necessity and significance) of economic relationships in selected European food chains and to identify the economic, social and cultural factors which influence co-ordination within these chains. The research project considered meat and cereal commodities in six European countries (Finland, Germany, Ireland, Poland, Spain, UK/Scotland) and was commissioned against a background of changing European food markets. The research project as a whole consisted of seven work packages. This report presents the results of qualitative research conducted for work package 5 (WP5) in the pig meat and rye bread chains in Finland. The Ruralia Institute would like to give special thanks to all the individuals and companies that kindly gave up their time to take part in the study; their input has been invaluable to the project. The contribution of research assistant Sanna-Helena Rantala was significant in the data gathering.
The FOODCOMM project was coordinated by the University of Bonn, Department of Agricultural and Food Market Research. Special thanks to Professor Monika Hartmann for acting as the project leader of FOODCOMM.


In this paper, we present a wavelet-based approach to solving the non-linear perturbation equation encountered in optical tomography. A particularly suitable data-gathering geometry is used to collect a data set consisting of differential changes in intensity owing to the presence of the inhomogeneous regions. With this scheme, the unknown image, the data, and the weight matrix are all represented by wavelet expansions, thus yielding a representation of the original non-linear perturbation equation in the wavelet domain. The advantage of using the non-linear perturbation equation is that there is no need to recompute the derivatives during the reconstruction process: once the derivatives are computed, they are transformed into the wavelet domain. The purpose of moving to the wavelet domain is its inherent localization and de-noising properties. The use of the approximation coefficients alone, without the detail coefficients, is well suited to diffuse optical tomographic reconstruction, as the diffusion equation removes most of the high-frequency information and the reconstruction appears low-pass filtered. We demonstrate through numerical simulations that solving for the approximation coefficients alone reconstructs an image with the same information content as a reconstruction from the non-waveletized procedure. In addition, we demonstrate better noise tolerance and a much reduced computation time for reconstructions with this approach.
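The low-pass character of approximation-only reconstruction can be seen with a one-level Haar transform (a sketch of the general idea only; the paper's wavelet basis and problem dimensionality are not specified here):

```python
def haar_step(x):
    """One level of the (unnormalized) Haar transform: pairwise averages
    (approximation coefficients) and pairwise half-differences (detail)."""
    approx = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse of haar_step."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out
```

Reconstructing with the detail coefficients zeroed returns a piecewise-constant, low-pass version of the signal, which is why solving only for the approximation coefficients loses little information for diffusion-dominated data.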


We discuss the key issues in the deployment of sparse sensor networks. The network monitors several environmental parameters and is deployed in a semi-arid region for the benefit of small and marginal farmers. We begin by discussing the problems of an existing unreliable 1 sq km sparse network deployed in a village. The proposed solutions are implemented in a new cluster, a reliable 5 sq km network. Our contributions are twofold. First, we describe a novel methodology for deploying a sparse, reliable data-gathering sensor network and evaluate the "safe distance" or "reliable distance" between nodes using propagation models. Second, we address the problem of transporting data from rural aggregation servers to urban data centres. This paper tracks our steps in deploying a sensor network in a village in India, trying to provide better diagnosis for better crop management. Keywords - Rural, Agriculture, CTRS, Sparse.
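One common way to derive a "safe distance" from a propagation model is the log-distance path-loss formula; the sketch below uses illustrative radio parameters, not the deployment's measured values:

```python
import math

def safe_distance(p_tx_dbm, sensitivity_dbm, pl_d0_db, n, d0=1.0, margin_db=10.0):
    """Largest node spacing (same units as d0) for which the received power
    P_rx(d) = P_tx - PL(d0) - 10*n*log10(d/d0) stays margin_db above sensitivity."""
    budget_db = p_tx_dbm - sensitivity_dbm - pl_d0_db - margin_db
    return d0 * 10 ** (budget_db / (10 * n))
```

With a 0 dBm transmitter, a -90 dBm receiver sensitivity, 40 dB reference loss at 1 m, path-loss exponent 2, and a 10 dB fade margin, this gives a 100 m spacing; real deployments would fit the exponent and margin to field measurements.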


Software development has been carried out in a somewhat systematic way for half a century. Despite this, serious failures in software development projects still occur. The pertinent mission of software project management is to achieve more and more successful projects. The application of agile software methods and, more recently, the integration of Lean practices contribute to this trend of continuous improvement in the software industry. One area warranting proper empirical evidence is the operational efficiency of projects. In the field of software development, Kanban as a process management method has gained momentum recently, mostly due to its linkage to Lean thinking. However, only a few empirical studies investigate the impacts of Kanban on projects in that particular area. The aim of this doctoral thesis is to improve the understanding of how Kanban impacts software projects. The research is carried out in the area of Lean thinking, which contains a variety of concepts, including Kanban. This article-type thesis conducts a set of case studies expanded with the research strategy of a quasi-controlled experiment. The data-gathering techniques of interviews, questionnaires, and different types of observation are used to study the case projects, and thereby to understand the impacts of Kanban on software development projects. The research papers of the thesis are refereed international journal and conference publications. The results highlight new findings regarding the application of Kanban in the software context. The key findings of the thesis suggest that Kanban is applicable to software development. Despite the several benefits reported in this thesis, the empirical evidence implies that Kanban is not all-encompassing but requires additional practices to keep development projects performing appropriately. Implications for research are given as well.
In addition to these findings, the thesis contributes to the area of plan-driven software development by suggesting implications both for research and for practitioners. In conclusion, Kanban can benefit software development projects, but additional practices would increase its potential for the projects.


The paper presents an adaptive Fourier filtering technique and a relaying scheme based on a combination of a digital band-pass filter and a three-sample algorithm, for applications in high-speed numerical distance protection. To enhance the performance of the above-mentioned technique, a high-speed fault detector is used. MATLAB-based simulation studies show that the adaptive Fourier filtering technique provides fast tripping for near faults and security for farther faults. The digital relaying scheme based on the combination of a digital band-pass filter and the three-sample data-window algorithm also provides accurate and high-speed detection of faults. The paper also proposes a high-performance 16-bit fixed-point DSP (Texas Instruments TMS320LF2407A) processor-based hardware scheme suitable for implementing the above techniques. To evaluate the performance of the proposed relaying scheme under steady-state and transient conditions, PC-based menu-driven relay test procedures are developed using National Instruments LabVIEW software. The test signals are generated in real time using LabVIEW-compatible analog output modules. The results obtained from the simulation studies as well as the hardware implementation are also presented.
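A full-cycle Fourier filter of the kind adapted here estimates the fundamental phasor from one cycle of samples; a minimal sketch (the sample count and test signal are illustrative, and this is the plain full-cycle filter, not the paper's adaptive variant):

```python
import cmath
import math

def fourier_phasor(samples):
    """Full-cycle DFT phasor estimate from one cycle of N equally spaced samples.
    Returns the fundamental component as a complex number with peak magnitude."""
    n = len(samples)
    acc = sum(s * cmath.exp(-2j * math.pi * k / n) for k, s in enumerate(samples))
    return 2 * acc / n
```

The voltage and current phasors estimated this way feed the apparent-impedance calculation on which distance relaying decisions are based.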


Today's feature-rich multimedia products require embedded system solutions with complex Systems-on-Chip (SoC) to meet market expectations of high performance at low cost and low energy consumption. The memory architecture of the embedded system strongly influences these parameters, so the embedded system designer performs a complete memory architecture exploration. This is a multi-objective optimization problem and can be tackled as a two-level optimization problem: the outer level explores various memory architectures, while the inner level explores the placement of data sections (the data layout problem) to minimize memory stalls. Further, the designer is interested in multiple optimal design points to address various market segments, yet tight time-to-market constraints enforce a short design cycle time. In this paper we address the multi-level, multi-objective memory architecture exploration problem through a combination of a multi-objective genetic algorithm (for memory architecture exploration) and an efficient heuristic data placement algorithm. At the outer level, the memory architecture exploration is done by picking memory modules directly from an ASIC memory library. This allows the memory architecture exploration to be performed in an integrated framework, where memory allocation, memory exploration, and data layout work in a tightly coupled way to yield optimal design points with respect to area, power, and performance. We experimented with our approach on 3 embedded applications; it explores several thousand memory architectures for each application, yielding a few hundred optimal design points in a few hours of computation time on a standard desktop.
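The "few hundred optimal design points" are the non-dominated set of the explored architectures; a minimal Pareto filter over hypothetical (area, power, cycles) tuples, where lower is better in each objective:

```python
def pareto_front(points):
    """Keep the non-dominated (area, power, cycles) points; lower is better."""
    def dominates(p, q):
        # p dominates q if it is no worse everywhere and strictly better somewhere.
        return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```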


Many meteorological phenomena occur at different locations simultaneously and vary temporally and spatially. It is essential to track these multiple phenomena for accurate weather prediction. Efficient analysis requires high-resolution simulations, which can be conducted by introducing finer-resolution nested simulations (nests) at the locations of these phenomena. Simultaneous tracking of these multiple weather phenomena requires simultaneous execution of the nests on different subsets of the maximum number of processors available to the main weather simulation. Dynamic variation in the number of these nests requires efficient processor reallocation strategies. In this paper, we develop strategies for efficient partitioning and repartitioning of the nests among the processors. As a case study, we consider the application of tracking multiple organized cloud clusters in tropical weather systems. We first present a parallel data analysis algorithm to detect such clouds. We then develop a tree-based hierarchical diffusion method that reallocates processors for the nests so that the redistribution cost is low, achieved through a novel tree reorganization approach. We show that our approach exhibits up to 25% lower redistribution cost and 53% fewer hop-bytes than a processor reallocation strategy that does not consider the existing processor allocation.
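The paper's tree-based diffusion method is not reproduced here, but the underlying idea of giving each nest a processor share proportional to its load can be sketched as follows (the load figures are illustrative):

```python
def allocate(procs, loads):
    """Split procs processors among nests proportionally to their loads;
    leftovers go to the nests with the largest fractional remainders."""
    total = sum(loads)
    raw = [procs * l / total for l in loads]
    alloc = [max(1, int(r)) for r in raw]       # every nest gets at least one
    while sum(alloc) > procs:
        alloc[alloc.index(max(alloc))] -= 1     # shave the largest allocation
    while sum(alloc) < procs:
        i = max(range(len(raw)), key=lambda j: raw[j] - alloc[j])
        alloc[i] += 1                           # grant the largest remainder
    return alloc
```

Repartitioning from the current allocation toward such targets while moving as few processors as possible is what keeps the redistribution cost and hop-bytes low when nests appear and disappear.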


Introduction [pdf, 0.27 MB]
Methods [pdf, 0.15 MB]
Results and discussion [pdf, 2.1 MB]
Conclusions [pdf, 0.12 MB]
Appendix A: Data gathering review, results and balancing [pdf, 0.3 MB]
Appendix B: Data tables [pdf, 0.35 MB]
Appendix C: BASS Workshop on the "Development of a conceptual model of the subarctic Pacific Basin ecosystems" [pdf, 0.16 MB]
Appendix D: BASS/MODEL Workshop on "Higher trophic level modeling" [pdf, 0.24 MB]
Appendix E: BASS/MODEL Workshop to review ecosystem models for the subarctic Pacific gyres [pdf, 4.39 MB]
Appendix F: BASS/MODEL Workshop on "Perturbation analysis" of subarctic Pacific gyre ecosystem models using ECOPATH/ECOSIM [pdf, 0.37 MB]
Appendix G: Proposal for a BASS Workshop on "Linkages between open and coastal systems" [pdf, 0.15 MB]
References [pdf, 0.14 MB]
(97-page document)


The first chapter of this thesis deals with automating data gathering for single cell microfluidic tests. The programs developed saved significant amounts of time with no loss in accuracy. The technology from this chapter was applied to experiments in both Chapters 4 and 5.

The second chapter describes the use of statistical learning to predict whether an anti-angiogenic drug (Bevacizumab) would successfully treat a glioblastoma multiforme tumor. Protein levels were first measured from 92 blood samples using the DNA-encoded antibody library platform, which allowed 35 different proteins to be measured per sample, with sensitivity comparable to ELISA. Two statistical learning models were developed to predict whether the treatment would succeed. The first, logistic regression, predicted with 85% accuracy and an AUC of 0.901 using a five-protein panel. These five proteins were statistically significant predictors and gave insight into the mechanism behind anti-angiogenic success or failure. The second model, an ensemble of logistic regression, kNN, and random forest, predicted with a slightly higher accuracy of 87%.
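At prediction time, a five-protein logistic-regression panel of the kind described reduces to a weighted sum pushed through a sigmoid; a sketch with hypothetical weights (the actual panel proteins and fitted coefficients are not given here):

```python
import math

def logistic_predict(weights, bias, protein_levels):
    """Probability of treatment success from a linear score over protein levels."""
    z = bias + sum(w * x for w, x in zip(weights, protein_levels))
    return 1.0 / (1.0 + math.exp(-z))
```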

The third chapter details the development of a photocleavable conjugate that enabled multiplexed cell-surface detection in microfluidic devices. The method successfully detected streptavidin on coated beads with a 92% positive predictive rate. Furthermore, chambers with 0, 1, 2, and 3+ beads were statistically distinguishable. The method was then used to detect CD3 on Jurkat T cells, yielding a positive predictive rate of 49% and a false positive rate of 0%.

The fourth chapter discusses measuring T cell polyfunctionality to predict whether a patient will respond to adoptive T cell transfer therapy. In 15 patients, we measured 10 proteins from individual T cells (~300 cells per patient). The polyfunctional strength index was calculated and then correlated with the patient's progression-free survival (PFS) time. 52 other parameters measured in the single-cell test were also correlated with PFS. No statistically significant correlate has been found, however, and more data are necessary to reach a conclusion.

Finally, the fifth chapter discusses the interactions between T cells and how they affect protein secretion. It was observed that T cells in direct contact selectively enhance their protein secretion, in some cases by over 5-fold. This occurred for Granzyme B, Perforin, CCL4, TNFa, and IFNg; IL-10 was shown to decrease slightly upon contact. This phenomenon held true for T cells from all patients tested (n=8). Using single-cell data, the theoretical protein secretion frequency was calculated for two cells and then compared with the observed rate of secretion for two cells not in contact and two cells in contact. In over 90% of cases, the theoretical protein secretion rate matched that of two cells not in contact.
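A theoretical two-cell secretion frequency of this kind follows from the single-cell frequency under an independence assumption; a minimal sketch (the frequency value is illustrative):

```python
def expected_pair_frequency(p_single):
    """Chance that a chamber holding two independent cells shows secretion,
    given the single-cell secretion frequency p_single."""
    return 1.0 - (1.0 - p_single) ** 2
```

That contact-free pairs match this independence prediction while contacting pairs exceed it is what points to a genuine contact-mediated enhancement rather than a counting artifact.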


This study, an academic investigation with a multidisciplinary approach, has the general objective of helping to identify possible ways to minimize the discrepancies observed between the book and economic values of brands. In this context, in which Brazil is undergoing the transition to the International Financial Reporting Standards (IFRS), it is essential to investigate the relationship between these standards and the disclosure of the economic significance of these assets, in order to analyse the regulatory restrictions on the accounting of brands and to identify the causes and impacts of these discrepancies. The research, essentially exploratory and qualitative in character, addresses the theoretical concepts of Intangible Assets, Brands, Brand Equity, Brand Value, and Accounting Standards, and uses bibliographic review and face-to-face interviews as its data-gathering methods. The study compared the book and economic values of the 31 Brazilian brands listed in the most-valuable-brand rankings published in 2011 by the three main consultancies specialising in brand valuation, and interviewed 11 people, including academics and professionals from the Accounting, Economics, Administration, and Finance fields, with the primary purpose of understanding the essence of the problem of the accounting disclosure of brands. As a result, it was found that the fundamental issue in solving this problem lies in the reliability of measuring the monetary value of the brand, and that, since there is still no scientific consensus on the applicability of the methods available in the literature to the different valuation purposes of this asset, new and continuing studies are needed to identify a consistent, uniform, and objective methodological standard that best serves the interests of Accounting as a management tool.