767 results for real world learning
Abstract:
Shadows are an important element for understanding a scene. They make it possible to resolve otherwise ambiguous situations, in particular concerning motion or the relative positions of objects in the scene. There are mainly two types of shadows: hard shadows, with very sharp boundaries, which usually result from point or directional lights; and soft shadows, which are blurrier and contribute to the atmosphere and visual quality of the scene. Soft shadows result from large light sources, such as environment maps, and are difficult to sample efficiently in real time. When interactivity takes priority over quality, approximation methods can be used to improve the rendering of a scene at a lower computational cost. We interactively compute the soft shadows produced by environment light sources, for scenes composed of moving objects and a dynamic height field. Our method extends spherical harmonic exponentiation, previously limited to spherical blockers, to handle height fields. We also add a representation for diffuse and glossy BRDFs. We can thus combine visibilities and BRDFs in the same space, in order to compute soft shadows and reflections efficiently for complex scenes. A hybrid algorithm, which combines screen-space and object-space visibility, decouples the complexity of the shadows from the complexity of the scene.
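For orientation, the core identity behind spherical harmonic exponentiation, stated generically rather than as this paper's exact formulation: per-blocker visibility functions multiply, so their logarithms add, and a single SH exponentiation recovers the aggregate visibility vector.

```latex
% Per-blocker visibilities multiply over directions omega, so their logs
% add; blockers can therefore be accumulated by cheap SH vector sums, with
% one exponentiation at the end (e.g. by scaling and squaring).
\[
  V(\omega) = \prod_j V_j(\omega)
  \quad\Longrightarrow\quad
  \mathbf{f}_V \approx \exp_{\mathrm{SH}}\!\Bigl(\sum_j \mathbf{f}_{\log V_j}\Bigr),
\]
% where \mathbf{f}_{\log V_j} is the SH coefficient vector of \log V_j.
```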
Abstract:
Implementing evidence-based programs in practice settings can present numerous difficulties for managers and practitioners in residential treatment for adolescents. Indeed, the constraints that practice settings face sometimes lead to programs being modified in order to ease their implementation. It therefore becomes important to document and identify the effect of the elements associated with implementation fidelity when intervention programs are evaluated. In addition to assessing the effect of the degree of exposure to the cognitive-behavioral program implemented in the residential units of the Centre jeunesse de Montréal – Institut universitaire (CJM-IU) on the severity of adolescent girls' behavior problems, this thesis proposes a new line of research. Since empirical research does not yet identify the conditions under which intervention programs adopted in practice settings may be modified, this study proposes to develop a logic of program exposure inspired by the principles of effective intervention developed by Andrews and colleagues (1990). This approach would make it possible to tailor the level of intervention to the characteristics of the clientele while preserving the effectiveness of the cognitive-behavioral program. The sample for this study consists of 74 adolescent girls placed at the CJM-IU for a period of six months. The results first indicate that the activities of the cognitive-behavioral program were applied rather irregularly and well below the initially planned frequency, which clearly reflects the difficulties of implementing programs in practice settings. The results also suggest a reduction in the severity of behavior problems six months after admission to the CJM-IU for the girls who presented more severe behavior problems at admission and who completed a larger number of self-observations during their placement.
Abstract:
Background. Malaria causes the death of roughly 25,000 children under five every year in Burkina Faso. To improve rapid access to effective treatment, the Burkinabe authorities introduced community case management of malaria by community health workers (CHWs) in 2010. While its efficacy has been demonstrated in controlled studies, very few studies have evaluated this strategy as implemented under natural conditions and at national scale. Objective. The central objective of this thesis is to evaluate, under real-world implementation conditions, the effects of the Burkinabe community malaria case management program on treatment-seeking for febrile children. The specific objectives are: (1) to probe CHWs' perceptions of the program and explore contextual factors likely to affect their performance; (2) to estimate the use of CHWs by febrile children and identify its determinants; (3) to measure, among febrile children, the change in treatment-seeking practices induced by the introduction of a concurrent intervention, namely free care in health centers. Methods. The study was conducted in two similar health districts, Kaya and Zorgho. The evaluation design combines qualitative and quantitative components. Interviews were conducted with all CHWs in the study area (N=27). Surveys were repeated annually between 2011 and 2013 among 3,002 randomly selected households. Treatment-seeking practices were studied for all children under five who had experienced a recent episode of illness (N2011=707; N2012=787; N2013=831). Results. The results show that the use of CHWs is very modest in comparison with previous studies carried out in controlled settings. Obstacles related to the implementation of the community malaria case management program were identified, as well as a lack of feasibility in urban settings. Finally, the effectiveness of the community program was negatively affected by the introduction of free care in health centers. Conclusion. Community case management of malaria in Burkina Faso faces major feasibility and implementation obstacles that compromise its potential effectiveness in reducing child mortality. The lack of coordination between the program and concurrent local interventions can generate harmful and unexpected effects.
Abstract:
Artificial neural networks (ANNs) are relatively new computational tools that have found extensive use in solving many complex real-world problems. This paper describes how an ANN can be used to identify the spectral lines of elements. The spectral lines of cadmium (Cd), calcium (Ca), iron (Fe), lithium (Li), mercury (Hg), potassium (K) and strontium (Sr) in the visible range are chosen for the investigation. One of the unique features of this technique is that it uses the whole spectrum in the visible range instead of individual spectral lines. The spectrum of a sample taken with a spectrometer contains both genuine peaks and spurious peaks, and identifying these peaks to determine the elements present in the sample is a tedious task. The ANN's capability of retrieving the original data from a noisy spectrum is also explored in this paper, and the importance of sufficient training data for obtaining accurate results is emphasized. Two networks are examined: one trained on all spectral lines and the other on the persistent lines only. The network trained on all spectral lines is found to be superior in analyzing the spectrum, even in a noisy environment.
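As a rough sketch of the general approach (not the paper's architecture or data): train a small network on whole synthetic visible-range spectra and classify a noisy sample. The line positions, peak widths, network size and noise model below are illustrative assumptions.

```python
# Minimal sketch: element classification from a full visible-range spectrum.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 700, 300)        # nm, whole visible range
# Illustrative line lists (placeholders, not the paper's tabulated lines).
lines = {"Cd": [467.8, 480.0, 508.6], "Hg": [435.8, 546.1, 577.0]}

def spectrum(element, noise=0.05):
    """Sum of Gaussian peaks at the element's lines plus background noise."""
    s = np.zeros_like(wavelengths)
    for lam in lines[element]:
        s += np.exp(-0.5 * ((wavelengths - lam) / 1.5) ** 2)
    return s + noise * rng.random(wavelengths.size)

elements = list(lines)
X = np.array([spectrum(e) for e in elements for _ in range(200)])
y = np.array([e for e in elements for _ in range(200)])

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X, y)
print(clf.predict([spectrum("Hg", noise=0.2)]))  # classify a noisier sample
```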
Abstract:
To ensure the quality of machined products at minimum machining cost and maximum machining effectiveness, it is very important to select optimum parameters when metal cutting machine tools are employed. Traditionally, the experience of the operator plays a major role in the selection of optimum metal cutting conditions; however, attaining optimum values every time is difficult even for a skilled operator. The non-linear nature of the machining process has compelled engineers to search for more effective optimization methods. The design objective preceding most engineering design activities is simply to minimize the cost of production or to maximize production efficiency. The main aim of the research work reported here is to build robust optimization algorithms by exploiting ideas that nature has to offer and using them to solve real-world optimization problems in manufacturing processes.

In this thesis, after an exhaustive literature review, several optimization techniques used in various manufacturing processes are identified. The selection of optimal cutting parameters, such as depth of cut, feed and speed, is a very important issue for every machining process. Experiments were designed using the Taguchi technique, and dry turning of SS420 was performed on a Kirloskar Turn Master 35 lathe. S/N and ANOVA analyses were performed to find the optimum level and the percentage contribution of each parameter; the S/N analysis yields the optimum machining parameters from the experiments.

Optimization algorithms begin with one or more design solutions supplied by the user and then iteratively examine new design solutions in the relevant search spaces in order to reach the true optimum. A mathematical model for surface roughness was developed using response surface analysis, and the model was validated against published results from the literature. The optimization methodologies of simulated annealing (SA), particle swarm optimization (PSO), a conventional genetic algorithm (CGA) and an improved genetic algorithm (IGA) are applied to optimize the machining parameters for dry turning of SS420. All of these algorithms were tested for efficiency, robustness and accuracy, and for how often they outperform conventional optimization methods on difficult real-world problems. The SA, PSO, CGA and IGA codes were developed in MATLAB, and for each evolutionary method the optimum cutting conditions for better surface finish are provided. The computational results using SA clearly demonstrate that the proposed solution procedure is capable of solving such complicated problems effectively and efficiently. PSO is a relatively recent heuristic search method whose mechanics are inspired by the swarming, collaborative behavior of biological populations; the results show that PSO provides better results and is also more computationally efficient. Between the two genetic algorithms, the proposed IGA, which incorporates a stochastic crossover technique and an artificial initial population scheme to provide a faster search mechanism, gives better results than the conventional GA.

Finally, these algorithms were compared on the specific example of dry turning of SS420, arriving at the optimum feed, cutting speed, depth of cut and tool nose radius with minimum surface roughness as the criterion. In summary, the research work fills conspicuous gaps between research prototypes and industry requirements by simulating the evolutionary procedures through which nature optimizes its own systems.
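A minimal PSO sketch in the spirit of the above (the thesis codes are in MATLAB and are not reproduced here; this is Python for illustration): it minimizes an illustrative quadratic surface-roughness model over speed, feed and depth of cut. The model coefficients and parameter bounds are made-up placeholders, not the fitted response surface from the thesis.

```python
# Particle swarm optimization of a placeholder roughness model Ra(v, f, d).
import numpy as np

rng = np.random.default_rng(1)

def ra(x):
    v, f, d = x.T                      # cutting speed, feed, depth of cut
    return 2.0 - 0.004 * v + 30.0 * f + 0.8 * d + 55.0 * f * f  # placeholder

lo = np.array([100.0, 0.05, 0.5])      # bounds: v (m/min), f (mm/rev), d (mm)
hi = np.array([300.0, 0.30, 2.0])

n, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
x = rng.uniform(lo, hi, (n, 3))        # particle positions
vel = np.zeros_like(x)                 # particle velocities
pbest, pval = x.copy(), ra(x)          # personal bests
g = pbest[pval.argmin()]               # global best

for _ in range(iters):
    r1, r2 = rng.random((2, n, 1))
    vel = w * vel + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = np.clip(x + vel, lo, hi)       # keep particles inside the bounds
    val = ra(x)
    better = val < pval
    pbest[better], pval[better] = x[better], val[better]
    g = pbest[pval.argmin()]

print("optimum (v, f, d):", g, "Ra:", ra(g[None])[0])
```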
Abstract:
In this thesis, applications of recurrence quantification analysis (RQA) to metal cutting on a lathe, with the specific objective of detecting tool wear and chatter, are presented. The study is based on the discovery that the process dynamics in a lathe are low-dimensional chaotic, which implies that the machine dynamics are controllable using principles of chaos theory. This understanding stands to revolutionize the feature extraction methodologies used in condition monitoring systems, since conventional linear methods and models are incapable of capturing the critical and strange behaviors associated with the metal cutting process. As sensor-based approaches provide an automated and cost-effective way to monitor and control, an efficient feature extraction methodology based on nonlinear time series analysis is in great demand. The task is more complex when the information has to be deduced solely from sensor signals, since traditional methods do not address how to treat the noise and non-stationarity present in real-world processes. To overcome these two issues as far as possible, this thesis adopts recurrence quantification analysis, a feature extraction technique that is robust against noise and non-stationarity in the signals. The work consists of two sets of lathe experiments, set-1 and set-2. Set-1 studies the influence of tool wear on the RQA variables, whereas set-2 identifies the RQA variables sensitive to machine tool chatter, followed by validation in actual cutting. To obtain the bounds of the spectrum of the significant RQA variable values, set-1 uses a fresh tool and a worn tool for cutting. The first part of the set-2 experiments uses a stepped shaft in order to create chatter at a known location; the second part uses a conical section with a uniform taper along the axis, so that chatter onsets at some distance from the smaller end as the depth of cut is gradually increased while the spindle speed and feed rate are held constant. The study concludes by revealing the unambiguous dependence of certain RQA variables, namely percent determinism, percent recurrence and entropy, on tool wear and chatter. The performance of the results establishes this methodology as viable for the detection of tool wear and chatter in metal cutting on a lathe. The key reason is that the dynamics of the system under study are nonlinear, and recurrence quantification analysis can characterize them adequately. This work establishes that the principles and practice of machining can be considerably benefited and advanced by nonlinear dynamics and chaos theory.
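For orientation, a minimal recurrence quantification sketch (illustrative, not the thesis implementation): embed the signal, build the recurrence matrix, and compute percent recurrence and percent determinism. The threshold and embedding parameters are placeholder choices.

```python
# Recurrence quantification of a 1-D signal via time-delay embedding.
import numpy as np

def embed(x, dim=3, tau=2):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def rqa(x, dim=3, tau=2, eps=0.2, lmin=2):
    v = embed(x, dim, tau)
    d = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=-1)
    r = d < eps                        # recurrence matrix
    np.fill_diagonal(r, False)         # exclude the line of identity
    rec = r.mean()                     # recurrence rate
    # Determinism: fraction of recurrent points on diagonal lines >= lmin.
    n, det_pts, all_pts = len(r), 0, r.sum()
    for k in range(-(n - 1), n):
        run = 0
        for p in np.append(np.diagonal(r, offset=k), False):  # sentinel
            if p:
                run += 1
            else:
                if run >= lmin:
                    det_pts += run
                run = 0
    return 100 * rec, 100 * det_pts / max(all_pts, 1)

t = np.linspace(0, 20, 500)
print(rqa(np.sin(t)))                                   # regular: high DET
print(rqa(np.random.default_rng(0).normal(size=500)))   # noise: low DET
```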
Abstract:
Timely detection of sudden changes in dynamics that adversely affect the performance of systems and the quality of products has great scientific relevance. This work focuses on the effective detection of dynamical changes in real-time signals from mechanical as well as biological systems using permutation entropy (PE), a fast and robust technique. The results are used to detect chatter onset in machine turning and to identify vocal disorders from speech signals. Permutation entropy is a nonlinear complexity measure that can efficiently distinguish the regular and complex nature of any signal and extract information about a change in the dynamics of the process through a sudden change in its value. Here we propose the use of PE to detect dynamical changes in two nonlinear processes: turning, a mechanical system, and speech, a biological system. The effectiveness of PE in detecting the change in dynamics of the turning process is studied on time series generated from samples of audio and current signals. Experiments are carried out on a lathe, with both a sudden increase and a continuous increase in the depth of cut on mild steel workpieces while keeping the speed and feed rate constant, and the results are applied to detect chatter onset in machining. These results are verified using the frequency spectra of the signals and a nonlinear measure, the normalized coarse-grained information rate (NCIR). PE analysis is also carried out to investigate the variation in surface texture caused by chatter on the machined workpiece: a statistical parameter from the optical grey-level intensity histogram of the laser speckle pattern, recorded using a charge-coupled device (CCD) camera, is used to generate the time series required for PE analysis, and a standard optical roughness parameter is used to confirm the results. The application of PE to identifying vocal disorders is studied on speech signals recorded with a microphone. Here the analysis is carried out on speech signals of subjects with different pathological conditions and of normal subjects, and the results are used to identify vocal disorders; the standard linear technique of the FFT is used to substantiate the results. The results of the PE analysis in all three cases clearly indicate that this complexity measure is sensitive to changes in the regularity of a signal and hence can suitably be used for the detection of dynamical changes in real-world systems. This work establishes the application of the simple, inexpensive and fast PE algorithm for the benefit of advanced manufacturing processes as well as clinical diagnosis of vocal disorders.
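Permutation entropy is simple enough to state in a few lines. The sketch below uses illustrative embedding parameters, not the settings from this work: count ordinal patterns of length m in the signal and take the normalized Shannon entropy of their distribution.

```python
# Normalized permutation entropy of a 1-D signal.
import math
from collections import Counter

import numpy as np

def permutation_entropy(x, m=3, tau=1):
    patterns = Counter(
        tuple(np.argsort(x[i : i + m * tau : tau]))   # ordinal pattern
        for i in range(len(x) - (m - 1) * tau)
    )
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(m))            # normalized to [0, 1]

t = np.linspace(0, 20, 2000)
print(permutation_entropy(np.sin(t)))                                    # low: regular
print(permutation_entropy(np.random.default_rng(0).normal(size=2000)))   # near 1: complex
```

A sudden rise in this value along a sliding window over the signal is the kind of change-point indicator the abstract describes.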
Abstract:
Identification and control of nonlinear dynamical systems are challenging problems for control engineers. The topic is equally relevant in communication, weather prediction, biomedical systems and even social systems, where nonlinearity is an integral part of the system behavior. Most real-world systems are nonlinear in nature, and nonlinear system identification and modeling have wide applications. The basic approach to analyzing nonlinear systems is to build a model from known behavior manifest in the form of system output. The modeling problem boils down to computing a suitably parameterized model representing the process; the parameters of the model are adjusted to optimize a performance function based on the error between the given process output and the identified process/model output. While linear system identification is well established, with many classical approaches, most of those methods cannot be directly applied to nonlinear system identification. The problem becomes more complex if the system is completely unknown and only the output time series is available; the blind recognition problem is the direct consequence of such a situation, and this thesis concentrates on such problems. The capability of artificial neural networks to approximate many nonlinear input-output maps makes them predominantly suitable for building a function for the identification of nonlinear systems where only the time series is available. The literature is rich with a variety of algorithms to train the neural network model, yet a comprehensive study of the computation of the model parameters using the different algorithms, together with a comparison among them to choose the best technique, is still a demanding requirement from practical system designers and is not available in a concise form in the literature. This thesis is thus an attempt to develop and evaluate some of the well-known algorithms and propose some new techniques in the context of blind recognition of nonlinear systems, and to establish the relative merits and demerits of the different approaches; comprehensiveness is achieved by utilizing well-known evaluation techniques from statistics. The study concludes by providing the results of implementing the currently available techniques, modified versions of them, and newly introduced techniques for nonlinear blind system modeling, followed by a comparison of their performance. It is expected that such a comprehensive study and comparison process will be of great relevance in many fields, including chemical, electrical, biological, financial and weather data analysis. Further, the results reported will be of immense help to practical system designers and analysts in selecting the most appropriate method, based on the goodness of the model, for their particular context.
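As a hedged sketch of output-only ("blind") nonlinear modeling with a neural network: fit a small network as a one-step predictor x[t] = F(x[t-1], ..., x[t-d]) from the time series alone. The logistic-map data, lag dimension and network size are illustrative choices, not the systems or architectures studied in the thesis.

```python
# Blind nonlinear system modeling from an output-only time series.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Output-only time series from an "unknown" nonlinear system.
x = np.empty(2000)
x[0] = 0.3
for t in range(1999):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])           # logistic map, chaotic regime

d = 4                                            # lag (embedding) dimension
X = np.column_stack([x[i : len(x) - d + i] for i in range(d)])
y = x[d:]                                        # one-step-ahead targets

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X[:1500], y[:1500])                    # train on the first part
print("test MSE:", np.mean((model.predict(X[1500:]) - y[1500:]) ** 2))
```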
Abstract:
For routing problems in interconnection networks it is important to find the shortest containers between any two vertices, since the w-wide diameter gives the maximum communication delay when there are up to w−1 faulty nodes in a network modeled by a graph. The concept of 'wide diameter' was introduced by Hsu [41] to unify the concepts of diameter and connectivity. The concept of 'domination' has attracted interest due to its wide applications in many real-world situations [38]. A connected dominating set serves as a virtual backbone of a network: it is a set of vertices that helps in routing. In this thesis, we make an earnest attempt to study some of these notions in graph products. These include diameter variability, diameter vulnerability, component factors and domination criticality.
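For reference, the standard definition of the w-wide diameter as commonly stated in the literature (not quoted from this thesis):

```latex
% A w-container between u and v is a set of w internally vertex-disjoint
% (u,v)-paths; its length l(C) is the length of its longest path. Then
\[
  d_w(G) = \max_{u \neq v} \; \min \bigl\{ \, \ell(C) :
    C \text{ is a } w\text{-container between } u \text{ and } v \,\bigr\},
\]
% so d_1(G) is the ordinary diameter, and d_w(G) is defined for all w up to
% the connectivity \kappa(G) -- which is how the notion unifies the two.
```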
Abstract:
Image processing has been a challenging and multidisciplinary research area for decades, with continuing improvements in its various branches, especially medical imaging. The healthcare industry has benefited greatly from advances in image processing techniques for the efficient management of large volumes of clinical data. The popularity and growth of the image processing field attract researchers from many disciplines, including computer science and medical science, due to its applicability to the real world; in the meantime, computer science is becoming an important driving force for the further development of the medical sciences. The objective of this study is to make use of the basic concepts of medical image processing and to develop methods and tools for clinicians' assistance. This work is motivated by clinical applications of digital mammograms and placental sonograms, and uses real medical images to propose a method intended to assist radiologists in the diagnostic process. The study covers two domains of pattern recognition: classification and content-based retrieval. Mammogram images of breast cancer patients and placental images are used for this study. Cancer is a disaster for the human race, and the accurate characterization of images using simplified, user-friendly computer-aided diagnosis techniques helps radiologists detect cancers at an early stage. Breast cancer, which accounts for the major share of cancer deaths in women, can be fully cured if detected at an early stage. Studies relating to placental characteristics and abnormalities are important in foetal monitoring, and the diagnostic variability in sonographic examination of the placenta can be overcome by detailed placental texture analysis focusing on placental grading. The work aims at early breast cancer detection and placental maturity analysis. This dissertation is a stepping stone in combining the various application domains of healthcare and technology.
Abstract:
Cyber-physical systems (CPS) connect the physical world with the cyber world. The number of events happening in the real world is enormous, and most of them go unnoticed, so their information is lost. CPS make it possible to embed tiny smart devices that capture this data and send it to the Internet for further processing. The entire set-up raises many challenges and opens new research problems. This talk is a journey through the landscape of research problems in this emerging area.
Abstract:
Several centrality measures have been introduced and studied for real-world networks. They account for the different vertex characteristics that permit vertices to be ranked in order of importance in the network. Betweenness centrality is a measure of the influence of a vertex over the flow of information between every pair of vertices, under the assumption that information primarily flows over the shortest paths between them. In this paper we present the betweenness centrality of some important classes of graphs.
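For reference, the standard (Freeman) definition of betweenness centrality:

```latex
% sigma_{st} denotes the number of shortest (s,t)-paths, and sigma_{st}(v)
% the number of those that pass through v. The betweenness centrality of v is
\[
  C_B(v) = \sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}}.
\]
```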
Abstract:
We deal with the numerical solution of heat conduction problems featuring steep gradients. In order to solve the associated partial differential equation, a finite volume technique is used and unstructured grids are employed. A discrete maximum principle for triangulations of Delaunay type is developed. To capture thin boundary layers incorporating steep gradients, an anisotropic mesh adaptation technique is implemented. Computational tests are performed for an academic problem where the exact solution is known, as well as for a real-world problem: a computer simulation of the thermoregulation of premature infants.
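For orientation, the continuous maximum principle that a discrete maximum principle mimics, stated generically (the paper's precise discrete conditions on the Delaunay-type mesh are not reproduced here):

```latex
% With no interior heat sources, the temperature attains its extrema on the
% parabolic boundary (initial data and spatial boundary); a discrete maximum
% principle asks the finite volume solution to preserve these same bounds.
\[
  \min_{\partial_p Q} u \;\le\; u(x,t) \;\le\; \max_{\partial_p Q} u,
  \qquad
  \partial_p Q = \bigl(\overline{\Omega}\times\{0\}\bigr)
                 \cup \bigl(\partial\Omega\times[0,T]\bigr).
\]
```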
Abstract:
As the number of resources on the web exceeds by far the number of documents one can track, it becomes increasingly difficult to remain up to date on ones own areas of interest. The problem becomes more severe with the increasing fraction of multimedia data, from which it is difficult to extract some conceptual description of their contents. One way to overcome this problem are social bookmark tools, which are rapidly emerging on the web. In such systems, users are setting up lightweight conceptual structures called folksonomies, and overcome thus the knowledge acquisition bottleneck. As more and more people participate in the effort, the use of a common vocabulary becomes more and more stable. We present an approach for discovering topic-specific trends within folksonomies. It is based on a differential adaptation of the PageRank algorithm to the triadic hypergraph structure of a folksonomy. The approach allows for any kind of data, as it does not rely on the internal structure of the documents. In particular, this allows to consider different data types in the same analysis step. We run experiments on a large-scale real-world snapshot of a social bookmarking system.
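A minimal sketch of the differential idea (in the spirit of the approach, not the authors' implementation): run a preference-biased PageRank on the undirected user-tag-resource graph of a folksonomy and subtract the unbiased ranking, leaving topic-specific weights. The toy tag assignments and parameters below are invented.

```python
# Differential, preference-biased PageRank on a toy folksonomy graph.
import numpy as np

# Tag assignments: (user, tag, resource) triples.
tas = [("u1", "web", "r1"), ("u1", "semantic", "r1"),
       ("u2", "web", "r2"), ("u2", "python", "r3"),
       ("u3", "semantic", "r1"), ("u3", "web", "r3")]

nodes = sorted({x for t in tas for x in t})
idx = {n: i for i, n in enumerate(nodes)}
A = np.zeros((len(nodes), len(nodes)))
for u, t, r in tas:                    # each triple links all three nodes
    for a, b in [(u, t), (t, r), (u, r)]:
        A[idx[a], idx[b]] += 1
        A[idx[b], idx[a]] += 1

M = A / A.sum(axis=0)                  # column-stochastic transition matrix

def pagerank(pref, d=0.7, iters=100):
    w = np.full(len(nodes), 1 / len(nodes))
    for _ in range(iters):
        w = d * M @ w + (1 - d) * pref # random walk mixed with preference
    return w

base = pagerank(np.full(len(nodes), 1 / len(nodes)))     # unbiased ranking
pref = np.zeros(len(nodes))
pref[idx["semantic"]] = 1.0                               # topic preference
diff = pagerank(pref) - base           # differential, topic-specific weights
for n in sorted(nodes, key=lambda n: -diff[idx[n]])[:4]:
    print(n, round(diff[idx[n]], 4))
```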
Abstract:
This paper presents some results of comparative empirical studies on different conceptions of reality-based mathematics teaching as they are commonly advocated in England and Germany. These studies use several case studies, which among other things take into account structural differences between the education systems of England and Germany and their underlying educational philosophies, to examine the effects of these conceptions on learners' attitudes toward mathematics lessons, their image of mathematics, their understanding of mathematical concepts and methods, and their ability to apply mathematical methods to solving real-world problems. The surveys presented here are part of a long-running collaborative project between the universities of Exeter and Kassel.