880 results for Iterative decoding
Abstract:
Single-photon emission computed tomography (SPECT) is a non-invasive imaging technique, which provides information reporting the functional states of tissues. SPECT imaging has been used as a diagnostic tool in several human disorders and can be used in animal models of diseases for physiopathological, genomic and drug discovery studies. However, most of the experimental models used in research involve rodents, which are at least one order of magnitude smaller in linear dimensions than man. Consequently, images of targets obtained with conventional gamma-cameras and collimators have poor spatial resolution and statistical quality. We review the methodological approaches developed in recent years in order to obtain images of small targets with good spatial resolution and sensitivity. Multipinhole, coded-mask and slit-based collimators are presented as alternative approaches to improve image quality. In combination with appropriate decoding algorithms, these collimators permit a significant reduction of the time needed to register the projections used to make 3-D representations of the volumetric distribution of the target's radiotracers. Simultaneously, they can be used to minimize the artifacts and blurring that arise when single-pinhole collimators are used. Representative images are presented that illustrate the use of these collimators. We also comment on the use of coded masks to attain tomographic resolution with a single projection, as discussed by some investigators since their introduction to obtain near-field images. We conclude this review by showing that the use of appropriate hardware and software tools adapted to conventional gamma-cameras can be of great help in obtaining relevant functional information in experiments using small animals.
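The decoding step mentioned in this abstract can be illustrated with a small sketch that is not taken from the reviewed work: for MURA-type coded masks, the recorded projection is commonly decoded by periodic cross-correlation with a balanced version of the mask pattern. The following Python/NumPy snippet is a minimal, idealized illustration of that idea; the random mask, grid size and function name are assumptions made only for this example.

```python
import numpy as np

def decode_coded_aperture(projection, mask):
    """Decode a coded-mask projection by periodic cross-correlation.

    projection : 2-D array recorded on the detector (ideal shadowgram).
    mask       : 2-D binary aperture pattern (1 = open, 0 = opaque).
    The decoding pattern is the balanced mask (open -> +1, opaque -> -1),
    so that its correlation with the mask approximates a delta function.
    """
    decoding = np.where(mask > 0, 1.0, -1.0)
    # Periodic (circular) correlation computed via the FFT.
    corr = np.fft.ifft2(np.fft.fft2(projection) * np.conj(np.fft.fft2(decoding)))
    return np.real(corr)

# Toy example (illustrative values): random 31 x 31 mask, point source at (8, 8).
rng = np.random.default_rng(0)
mask = (rng.random((31, 31)) > 0.5).astype(float)
source = np.zeros_like(mask)
source[8, 8] = 1.0
# Ideal shadowgram = circular convolution of the source with the mask.
projection = np.real(np.fft.ifft2(np.fft.fft2(source) * np.fft.fft2(mask)))
image = decode_coded_aperture(projection, mask)
print(np.unravel_index(image.argmax(), image.shape))  # expected near (8, 8)
```

In a real gamma-camera geometry the projection also contains magnification, noise and near-field artifacts, which is one reason the more elaborate decoding algorithms discussed in the review are preferred over plain correlation.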
Abstract:
This study examines a waste power plant's optimal processing chain; it is important to consider, from several points of view, why one option is better than another, to ensure that the right decision is made. Incineration of waste has developed into a sound option for waste disposal. There are several legislative matters and technical options to consider when starting up a waste power plant. Among the techniques, pretreatment, the burner, and flue gas cleaning are the biggest ones to consider. The treatment of incineration residues is important, since they can be very harmful for the environment. The actual energy production from waste is not highly efficient, and several harmful compounds are emitted. Recycling of waste before incineration is not very typical, and there are not many recycling options for materials that cannot easily be recycled into the same product. Life cycle assessment is a good option for studying the environmental effect of the system. It has four phases that are part of the iterative study process. In this study the case environment is a waste power plant. The modeling of the plant is done with GaBi 6 software and the scope is from gate to grave. There are three different scenarios, of which the first and second are compared to each other to reach conclusions. The zero scenario is part of the study to demonstrate the situation without the power plant. The power plant in this study recycles some materials in scenario one, and in scenario two recycles even more materials and utilizes the bottom ash in more ways than one. The model has substitutive processes for the materials when they are not recycled in the plant. The global warming potential results show that scenario one is the best option. The variable costs that have been considered tell the same result. The conclusion is that the waste power plant should not recycle more or utilize bottom ash in a number of ways. The area is not ready for that kind of utilization and production from recycled materials.
Abstract:
Building a computational model for complex biological systems is an iterative process. It starts from an abstraction of the process and then incorporates more details regarding the specific biochemical reactions, which results in a change of the model fit. Meanwhile, the model's numerical properties, such as its numerical fit and validation, should be preserved. However, refitting the model after each refinement iteration is computationally expensive. There is an alternative approach, known as quantitative model refinement, which ensures that the model fit is preserved without the need to refit the model after each refinement iteration. The aim of this thesis is to develop and implement a tool called ModelRef which performs quantitative model refinement automatically. It is implemented both as a stand-alone Java application and as a component of the Anduril framework. ModelRef performs data refinement of a model and generates the results in two well-known formats (SBML and CPS). The tool successfully reduces the time and resources needed, as well as the errors generated, compared to traditional reiteration of the whole model to perform the fitting procedure.
Abstract:
The cellular structure of healthy food products, with added dietary fiber and low in calories, is an important factor that contributes to the assessment of quality, and it can be quantified by image analysis of visual texture. This study compares image analysis techniques (binarization using Otsu's method and the default ImageJ algorithm, a variation of the iterative intermeans method) for quantifying differences in the crumb structure of breads made with different percentages of whole-wheat flour and fat replacer, and discusses the behavior of the parameters number of cells, mean cell area, cell density, and circularity using response surface methodology. Comparative analysis of the results achieved with the Otsu and default ImageJ algorithms showed a significant difference between the studied parameters. The Otsu method represented the crumb structure of the analyzed breads more reliably than the default ImageJ algorithm, and is thus the most suitable in terms of structural representation of the crumb texture.
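For readers unfamiliar with the two binarization methods named above, the sketch below gives minimal NumPy implementations of Otsu's histogram-based threshold and of the iterative intermeans (isodata) procedure on which ImageJ's default thresholding is based. The toy image and tolerance value are assumptions made for this example; this is not the authors' analysis pipeline.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: pick the threshold that maximizes the
    between-class variance of the grey-level histogram."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0
        mu1 = (levels[t:] * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def intermeans_threshold(img, tol=0.5):
    """Iterative intermeans (isodata): repeatedly set the threshold to
    the midpoint of the mean grey levels of the two classes it defines."""
    t = img.mean()
    while True:
        low, high = img[img <= t], img[img > t]
        if low.size == 0 or high.size == 0:
            return t
        new_t = 0.5 * (low.mean() + high.mean())
        if abs(new_t - t) < tol:
            return new_t
        t = new_t

# Toy 8-bit "crumb" image (in practice this would be a loaded grayscale photo).
crumb = np.clip(np.random.default_rng(1).normal(130, 40, (256, 256)), 0, 255)
print(otsu_threshold(crumb), intermeans_threshold(crumb))
# Binarized pore masks, as used for counting cells and measuring cell areas:
pores_otsu = crumb < otsu_threshold(crumb)
pores_ij = crumb < intermeans_threshold(crumb)
```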
Abstract:
Human beings have always strived to preserve their memories and spread their ideas. In the beginning this was always done through human interpretations, such as telling stories and creating sculptures. Later, technological progress made it possible to create a recording of a phenomenon; first as an analogue recording onto a physical object, and later digitally, as a sequence of bits to be interpreted by a computer. By the end of the 20th century technological advances had made it feasible to distribute media content over a computer network instead of on physical objects, thus enabling the concept of digital media distribution. Many digital media distribution systems already exist, and their continued, and in many cases increasing, usage is an indicator of the high interest in their future enhancement and enrichment. By looking at these digital media distribution systems, we have identified three main areas of possible improvement: network structure and coordination, transport of content over the network, and the encoding used for the content. In this thesis, our aim is to show that improvements in performance, efficiency and availability can be made in conjunction with improvements in software quality and reliability through the use of formal methods: mathematical approaches to reasoning about software so that we can prove its correctness, together with its desirable properties. We envision a complete media distribution system based on a distributed architecture, such as peer-to-peer networking, in which different parts of the system have been formally modelled and verified. Starting with the network itself, we show how it can be formally constructed and modularised in the Event-B formalism, such that we can separate the modelling of one node from the modelling of the network itself. We also show how the piece selection algorithm in the BitTorrent peer-to-peer transfer protocol can be adapted for on-demand media streaming, and how this can be modelled in Event-B. Furthermore, we show how modelling one peer in Event-B can give results similar to simulating an entire network of peers. Going further, we introduce a formal specification language for content transfer algorithms, and show that having such a language can make these algorithms easier to understand. We also show how generating Event-B code from this language can result in less complexity compared to creating the models from written specifications. We also consider the decoding part of a media distribution system by showing how video decoding can be done in parallel. This is based on formally defined dependencies between frames and blocks in a video sequence; we have shown that this step, too, can be performed in a way that is mathematically proven correct. The modelling and proving in this thesis is largely tool-based. This demonstrates the advance of formal methods as well as their increased reliability, and thus advocates for their more widespread usage in the future.
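As a purely illustrative aside (the thesis works with formal Event-B models, not with this code), piece selection adapted for on-demand streaming is often described as a hybrid of in-order fetching near the playback position and rarest-first selection elsewhere. The following Python sketch shows one such policy; all names and parameters (the window size, the availability map) are assumptions made for this example, not the thesis' algorithm.

```python
import random

def select_piece(have, availability, playback_pos, window=16):
    """Hybrid piece selection for on-demand streaming (illustrative only).

    have         : set of piece indices already downloaded.
    availability : dict mapping piece index -> number of peers offering it.
    playback_pos : index of the piece currently being played.
    window       : number of pieces ahead of playback fetched in order.
    """
    total = len(availability)
    # 1) Urgent region: fetch strictly in order so playback never stalls.
    for idx in range(playback_pos, min(playback_pos + window, total)):
        if idx not in have and availability.get(idx, 0) > 0:
            return idx
    # 2) Outside the window: classic rarest-first to keep the swarm healthy.
    missing = [i for i in range(total) if i not in have and availability.get(i, 0) > 0]
    if not missing:
        return None
    rarest = min(availability[i] for i in missing)
    return random.choice([i for i in missing if availability[i] == rarest])

# Example: 32-piece file, playback currently at piece 4.
avail = {i: random.randint(1, 5) for i in range(32)}
print(select_piece(have={0, 1, 2, 3, 4}, availability=avail, playback_pos=4))
```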
Abstract:
We have investigated Russian children's reading acquisition during an intermediate period in their development: after literacy onset, but before they have acquired well-developed decoding skills. The results of our study suggest that Russian first graders rely primarily on phonemes and syllables as reading grain-size units. Phonemic awareness seems to have reached the metalinguistic level more rapidly than syllabic awareness after the onset of reading instruction, a reversal that is typical of the initial stages of formal reading instruction, which creates an external demand for phonemic awareness. Another reason might be the inherent instability of syllabic boundaries in Russian. We have shown that body-coda is a more natural representation of subsyllabic structure in Russian than onset-rime. We also found that Russian children displayed variability in syllable onset and offset decisions, which can be attributed to the lack of congruence between syllabic and morphemic word division in Russian. We suggest that the fuzziness of syllable boundary decisions is a sign of the transitional nature of this stage in reading development, and that it indicates progress towards an awareness of morphologically determined closed syllables. Our study also showed that orthographic complexity exerts an influence on reading in Russian from the very start of reading acquisition. In addition, we found that Russian first graders experience fluency difficulties in reading orthographically simple words and nonwords of two and more syllables. The transition from monosyllabic to bisyllabic lexical items constitutes a certain threshold, for which the syllabic structure seemed to make no difference. When we compared the outcomes of the Russian children with those produced by speakers of other languages, we discovered that in tasks which could be performed with the help of alphabetic recoding, Russian children's accuracy was comparable to that of children learning to read in relatively shallow orthographies. In tasks where this approach works only partially, Russian children demonstrated accuracy results similar to those in deeper orthographies. This pattern of moderate results in accuracy and excellent performance in terms of reaction times indicates that children apply phonological recoding as their dominant strategy across various reading tasks and are only beginning to develop suitable multiple strategies for dealing with orthographically complex material. The development of these strategies is not completed during Grade 1, and the shift towards diversification of strategies apparently continues in Grade 2.
Abstract:
This dissertation describes an approach for developing a real-time simulation of working mobile vehicles based on multibody modeling. The use of multibody modeling allows a comprehensive description of the constrained motion of the mechanical systems involved and permits real-time solving of the equations of motion. By carefully selecting the multibody formulation method to be used, it is possible to increase the accuracy of the multibody model while at the same time solving the equations of motion in real time. In this study, a multibody procedure based on semi-recursive and augmented Lagrangian methods for real-time dynamic simulation applications is studied in detail. In the semi-recursive approach, a velocity transformation matrix is introduced to map the dependent coordinates into relative (joint) coordinates, which reduces the number of generalized coordinates. The augmented Lagrangian method is based on the use of global coordinates and, in that method, constraints are accounted for using an iterative process. A multibody system can be modelled using either rigid or flexible bodies. When using flexible bodies, the system can be described using a floating frame of reference formulation. In this method, the deformation modes needed can be obtained from the finite element model. As the finite element model typically involves a large number of degrees of freedom, a reduced set of deformation modes can be obtained by employing model order reduction methods such as Guyan reduction, the Craig-Bampton method, and Krylov subspaces, as shown in this study. The constrained motion of the working mobile vehicles is actuated by forces from the hydraulic actuators. In this study, the hydraulic system is modeled using lumped fluid theory, in which the hydraulic circuit is divided into volumes. In this approach, the pressure wave propagation in the hoses and pipes is neglected. The contact modeling is divided into two stages: contact detection and contact response. Contact detection determines when and where the contact occurs, and contact response provides the force acting at the collision point. The friction between tire and ground is modelled using the LuGre friction model, which describes the frictional force between two surfaces. Typically, the equations of motion are solved in full-matrix format, where the sparsity of the matrices is not considered. Increasing the number of bodies and constraint equations leads to the system matrices becoming large and sparse in structure. To increase the computational efficiency, a technique for the solution of sparse matrices is proposed in this dissertation and its implementation is demonstrated. To assess the computing efficiency, the augmented Lagrangian and semi-recursive methods are implemented employing the sparse matrix technique. The results of a numerical example show that the proposed approach is applicable and produces appropriate results within the real-time period.
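To give a concrete but hedged picture of the sparse-matrix idea described above (this is not the dissertation's implementation), the snippet below assembles the standard saddle-point form of the constrained equations of motion with scipy.sparse and solves it with a sparse direct solver. The function name and the toy two-mass constraint are assumptions made for this example.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_accelerations(M, Phi_q, Q, c):
    """Solve the constrained equations of motion
        [ M      Phi_q^T ] [ qdd ]   [ Q ]
        [ Phi_q  0       ] [ lam ] = [ c ]
    using sparse storage instead of assembling dense matrices.
    M     : (n, n) sparse mass matrix
    Phi_q : (m, n) sparse constraint Jacobian
    Q     : (n,) generalized forces; c : (m,) rhs of the acceleration equation
    """
    n = M.shape[0]
    lead = sp.bmat([[M, Phi_q.T],
                    [Phi_q, None]], format="csc")  # saddle-point leading matrix
    rhs = np.concatenate([Q, c])
    sol = spla.spsolve(lead, rhs)
    return sol[:n], sol[n:]  # accelerations, Lagrange multipliers

# Toy example: two unit masses linked by the constraint q1 - q2 = 0.
M = sp.identity(2, format="csc")
Phi_q = sp.csc_matrix(np.array([[1.0, -1.0]]))
qdd, lam = solve_accelerations(M, Phi_q, Q=np.array([1.0, 0.0]), c=np.array([0.0]))
print(qdd, lam)  # both masses accelerate together at 0.5
```

For large vehicle models, storing only the nonzero entries and using a sparse factorization is what keeps the per-step solution time inside the real-time budget, which is the motivation given in the abstract.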
Abstract:
The aim of this master's thesis is to develop a two-dimensional drift-diffusion model, which describes charge transport in organic solar cells. The main benefit of a two-dimensional model compared to a one-dimensional one is the inclusion of the nanoscale morphology of the active layer of a bulk heterojunction solar cell. The developed model was used to study recombination dynamics at the donor-acceptor interface. In some cases, it was possible to determine effective parameters, which reproduce the results of the two-dimensional model in the one-dimensional case. A summary of the theory of charge transport in semiconductors was presented and discussed in the context of organic materials. Additionally, the normalization and discretization procedures required to find a numerical solution to the charge transport problem were outlined. The charge transport problem was solved by implementing an iterative scheme called successive over-relaxation. The obtained solution is given as position-dependent electric potential, free charge carrier concentrations and current densities in the active layer. An interfacial layer, separating the pure phases, was introduced in order to describe charge dynamics occurring at the interface between the donor and acceptor. For simplicity, an effective generation of free charge carriers in the interfacial layer was implemented. The pure phases simply act as transport layers for the photogenerated charges. Langevin recombination was assumed in the two-dimensional model and an analysis of the apparent recombination rate in the one-dimensional case is presented. The recombination rate in a two-dimensional model is seen to effectively look like reduced Langevin recombination at open circuit. Replicating the J-U curves obtained in the two-dimensional model is, however, not possible by introducing a constant reduction factor in the Langevin recombination rate. The impact of an acceptor domain in the pure donor phase was investigated. Two cases were considered, one where the acceptor domain is isolated and another where it is connected to the bulk of the acceptor. A comparison to the case where no isolated domains exist was done in order to quantify the observed reduction in the photocurrent. The results show that all charges generated at the isolated domain are lost to recombination, but the domain does not have a major impact on charge transport. Trap-assisted recombination at interfacial trap states was investigated, as well as the surface dipole caused by the trapped charges. A theoretical expression for the ideality factor n_id as a function of generation was derived and shown to agree with simulation data. When the theoretical expression was fitted to simulation data, no interface dipole was observed.
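For readers unfamiliar with successive over-relaxation, the sketch below applies it to a 2-D Poisson equation for the electric potential on a uniform grid; it is a minimal illustration of the iterative scheme named in the abstract, not the thesis' normalized drift-diffusion solver. The grid size, relaxation factor and tolerance are assumptions made for this example.

```python
import numpy as np

def sor_poisson(rho, h, omega=1.8, tol=1e-6, max_iter=20000):
    """Successive over-relaxation for the 2-D Poisson equation
        laplacian(phi) = -rho
    on a uniform grid with spacing h and zero Dirichlet boundaries.
    Each sweep overwrites phi in place, blending the Gauss-Seidel
    update with the old value via the relaxation factor omega."""
    phi = np.zeros_like(rho)
    ny, nx = rho.shape
    for _ in range(max_iter):
        max_change = 0.0
        for j in range(1, ny - 1):
            for i in range(1, nx - 1):
                gs = 0.25 * (phi[j, i - 1] + phi[j, i + 1] +
                             phi[j - 1, i] + phi[j + 1, i] +
                             h * h * rho[j, i])
                change = omega * (gs - phi[j, i])
                phi[j, i] += change
                max_change = max(max_change, abs(change))
        if max_change < tol:
            break
    return phi

# Illustrative run: a point charge in the middle of a 33 x 33 grid.
rho = np.zeros((33, 33))
rho[16, 16] = 1.0
phi = sor_poisson(rho, h=1.0)
print(phi[16, 16])
```

The relaxation factor omega between 1 and 2 controls how aggressively each Gauss-Seidel update is extrapolated; omega = 1 recovers plain Gauss-Seidel iteration.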
Abstract:
This research looked at conditions which result in the development of integrated letter code information in the acquisition of reading vocabulary. Thirty grade-three children of normal reading ability acquired new reading words in a Meaning Assigned task and a Letter Comparison task, and worked to increase skill for known reading words in a Copy task. The children were then assessed on their ability to identify the letters in these words. During the test each stimulus word for each child was exposed for 100 msec., after which each child reported as many of its letters as he or she could. Familiar words, new words, and a single letter identification task served as within-subject controls. Following this, subjects were assessed for word meaning recall of the Meaning Assigned words and word reading times for words in all conditions. The results supported an episodic model of word recognition in which the overlap between the processing operations employed in encoding a word and those required when decoding it affected decoding performance. In particular, the Meaning Assigned and Copy tasks appeared to facilitate letter code accessibility and integration in new and familiar words respectively. Performance in the Letter Comparison task, on the other hand, suggested that subjects can process the elements of a new word without integrating them into its lexical structure. It was concluded that these results favour an episodic model of word recognition.
Abstract:
The present study explored processing strategies used by individuals when they begin to read a script. Stimuli were artificial words created from symbols and based on an alphabetic system. The words were presented to Grade Nine and Ten students, with variations included in the difficulty of orthography and word familiarity, and scores were recorded on the mean number of trials for defined learning variables. Qualitative findings revealed that subjects learned parts of the visual and auditory features of words prior to hooking up the visual stimulus to the word's name. Performance measures which appeared to affect the rate of learning were as follows: auditory short-term memory, auditory delayed short-term memory, visual delayed short-term memory, and word attack or decoding skills. Qualitative data emerging in verbal reports by the subjects revealed that the strategies they perceived themselves to use were graphic, phonetic decoding, and word reading.
Abstract:
This qualitative inquiry used case study methodology to explore the change processes of 3 primary-grade teachers throughout their participation in a 7-month professional learning initiative focused on reading assessment and instruction. Participants took part in semimonthly inquiry-based professional learning community sessions, as well as concurrent individualized classroom-based literacy coaching. Each participant's experiences were first analyzed as a single case study, followed by cross-case analyses. While their patterns of professional growth differed, findings documented how all participants altered their understandings of the roles and relevancy of individual components of reading instruction (e.g., comprehension, decoding) and instructional approaches to scaffold students' growth (e.g., levelled text, strategy instruction), and experienced some form of conceptual change. Factors identified as affecting their change processes included: motivation, professional knowledge, professional beliefs (self-efficacy and theoretical orientation), resources (e.g., time, support), differentiated professional learning with associated goal-setting, and uncontrollable influences, with the effect of each factor compounded by interaction with the others. Comparison of participants' experiences to the Cognitive-Affective Model of Conceptual Change (CAMCC) and the Interconnected Model of Teacher Professional Growth (IMTPG) demonstrated the applicability of using both conceptual models, with the IMTPG providing macrolevel insights over time and the CAMCC microlevel insights at each change interval. Recommendations include the provision of differentiated teacher professional learning opportunities, as well as research documenting the effects of teacher mentorship programs and the professional growth of teacher educators.
Abstract:
The purpose of this project is to provide social service practitioners with tools and perspectives to engage young people in a process of developing and connecting with their own personal narratives, and storytelling with others. This project extensively reviews the literature to explore Why Story, What Is Story, Future Directions of Story, and Challenges of Story. Anchoring this exploration is Freire’s (1970/2000) intentional uncovering and decoding. Taking a phenomenological approach, I draw additionally on Brookfield’s (1995) critical reflection; Delgado (1989) and McLaren (1998) for subversive narrative; and Robin (2008) and Sadik (2008) for digital storytelling. The recommendations provided within this project include a practical model built upon Baxter Magolda and King’s (2004) process towards self-authorship for engaging an exercise of storytelling that is accessible to practitioners and young people alike. A personal narrative that aims to help connect lived experience with the theoretical content underscores this project. I call for social service practitioners to engage their own personal narratives in an inclusive and purposeful storytelling method that enhances their ability to help the young people they serve develop and share their stories.
Abstract:
A file entitled Charbonneau_Nathalie_2008_AnimationAnnexeT accompanies the thesis. It contains an animated sequence demonstrating the type of path that can be taken within the digital environments developed. It is a .wmv file that has been compressed.
Abstract:
In 2004, the Québec government undertook a major reorganization of its health system by creating the Centres de santé et de services sociaux (CSSS). In addition to their mandate to deliver care and services, the CSSS were given a new mandate of "population-based responsibility". Managers are thus mandated to improve the health and well-being of a geographically defined population, in addition to meeting the needs of care and service users. This dual responsibility requires managers to articulate more formally, within a local governance structure, two service delivery sectors that have long evolved with little interaction: public health and the care system. Incorporating population-based responsibility thus leads to developing greater synergy between these two sectors within an organization that produces care and services. It calls for significant changes in the areas of activity undertaken and transformations in certain management roles. The general objective of this research project is to better understand how the work of CSSS managers is transformed in a situation of mandated change, in order to incorporate population-based responsibility into their actions and management practices. The research design is based on two case studies. We studied two CSSS in the Montréal region. These cases were chosen according to the variability of their socio-economic and health contexts as well as the number and variety of establishments under CSSS governance. One of the cases had an acute-care hospital under its governance and the other did not. Data collection draws on three main sources: 1) document analysis, 2) semi-structured interviews (N=46), and 3) non-participant observations over a period of nearly two years (2005-2007). We adopted an iterative approach based on inductive reasoning. To analyze the transformation of the CSSS, we draw on institutional theory from organization theory. This perspective is interesting because it links the analysis of the organizational field (the various pressures from the actors operating in the Québec health system) with the role of actors in the change process. It proposes to analyze both the environmental pressures that explain the constraints and opportunities of the actors in the organizational field, and the pressures exerted by the CSSS and the local action strategies they develop. We discuss the evolution of the CSSS by presenting three temporal phases characterized by dynamics of interaction between the pressures exerted by the CSSS and those exerted by other actors in the organizational field: phase 1 concerns the appropriation of policies dictated by the State, phase 2 refers to adaptation to the orientations proposed by various actors in the organizational field, and phase 3 corresponds to the development of certain locally initiated projects. We show, through the process of incorporating population-based responsibility, that managers modify certain management practices. Some of these roles are more closely linked to the notion of institutional entrepreneur, notably the roles of leader, negotiator, and entrepreneur.
Through the transformation of these roles, important changes in the actions undertaken by the CSSS take place, notably the organization of primary care services, the development of prevention and health promotion interventions, and a more active role within their community. In conclusion, we discuss the lessons learned from incorporating population-based responsibility at the level of an organization that produces care and services. We discuss the issues related to developing greater synergy between public health and the care system within a local governance structure. We also present a synthesis model of the process of implementing a mandated change in a strongly institutionalized organizational field, examining in depth the roles of institutional entrepreneurs in this process. This situation has received little analysis in the literature to date.
Abstract:
This thesis deals with social representations. The product of conceptual bricolage, our notion of representations draws partly on the work of Serge Moscovici and of more contemporary authors who follow in his footsteps, the so-called French school of social representations, as well as on Anglo-Saxon authors who work with this concept. The writings of other researchers, including Stuart Hall, Richard Dyer, and Jean-Michel Berthelot, who adopt perspectives more specifically linked to Cultural Studies and sociology, also helped refine our way of conceiving social representations and understanding how they operate. More precisely, following Jodelet (1989), we view representations as "forms of knowledge that are socially developed and shared, have a practical aim, and contribute to the construction of a reality common to a social group" (p. 36). These representations also have other particularities. They are, in our view, constitutive of, as well as formed by, linguistic processes that make operations possible. This concept allows us to study representations from the point of view of their effectivity, that is, their capacity to influence meanings, to bring about a change in the way a situation is interpreted and, in so doing, to affect practices and make a difference in the world. This questioning about representations unfolds on a terrain that seemed to us particularly rich for studying how they operate, namely politics, which currently takes place in a context of spectacularization. Presented as a blurring of genres between entertainment and politics, this phenomenon is also linked to the advent of celebrity politics, to personalization and evaluation, to the importance taken on by style in politics, and to dramatization, fragmentation, and normalization. More specifically, we study representations in a context of spectacularization through three corpora documenting events as distinct as the municipal mergers of 2001, the rise in popularity of Mario Dumont and the ADQ in 2002 and 2003, and the series Bunker, le cirque, broadcast on Radio-Canada television in the fall of 2002. These corpora bring together texts of varied sources and formats, from legislative texts to editorials, television dramas, and online forums. We carried out an iterative, cross-cutting discourse analysis of these corpora in order to better understand how representations operate in a context of spectacularization. Our analyses demonstrated the variety of processes and operations, such as incontestabilization, projection, localization, amplification, reduction, and evaluation, that make it possible to modify the meaning and stakes of the events discussed. The analyses also illustrated that these processes and the operations they make possible mark out the boundaries of the object and provide a classification system.