847 results for internet-based application components
Abstract:
The formation of knowledge-based societies, advances in communication technology, and improved worldwide exchange of information allow better use of the knowledge produced when decisions are made in the health system. In developed countries a number of studies have examined the barriers that prevent evidence-based decision making (EBDM), whereas similar studies in the developing world are genuinely rare. Iran is the country that has seen the fastest growth in scientific publications in recent years, but the question arises: what are the barriers that prevent the use of this knowledge, as well as of the global evidence? This study comprises three consecutive articles. The aim of the first article was to find a model for assessing the state of knowledge utilization under these circumstances in Iran, through a broad, systematic review of the literature followed by a qualitative study based on grounded theory. In the second and third articles, the barriers to evidence-based decision making in Iran are then studied by interviewing managers, health-sector decision makers, and the researchers who work to produce scientific evidence for EBDM in Iran. After reviewing the existing models and carrying out the qualitative study, the first article appeared under the title "Designing a knowledge utilization model". This first article serves as the framework for the two others, which assess the "pull" and "push" barriers to EBDM in the country. In Iran, as a developing country, problems arise at every stage of the process of producing, sharing, and using evidence in health-system decision making. The barriers to evidence-based decision making are diverse and operate at different levels; multidimensional solutions are needed to strengthen the impact of scientific evidence on decision making. These solutions should bring about changes in the culture and environment of decision making so that evidence-based decision making becomes valued. The criteria for selecting managers, their inappropriate appointment, their rapid turnover, and the pay differences between the public and private sectors can weaken EBDM in two ways: by affecting decision makers' motivation on the one hand, and by destroying program continuity on the other. Likewise, although researchers are not selected and replaced in the same way as managers, there are no criteria that encourage either group to support evidence-based decision making in the health sector and the changes that follow from it. The selection and promotion of policy makers should be based on their performance in EBDM, and academics' efforts in this direction should count toward their personal promotion and the ranking of their institutions. The attitudes and capacities of decision makers and researchers should be fostered by giving them sufficient power and empowerment at the various stages of the decision-making cycle. This study also revealed that managers do not have sufficient access to either national or international evidence.
Reducing the gap between researchers and decision makers is a crucial step, and it must be achieved by fostering two-way communication. This issue is all the more important given that knowledge utilization can only be strengthened through close collaboration between policy makers and the research sector. To this end, long-term programs must be designed; creating networks of researchers and decision makers for selecting research topics and ranking priorities, and building mutual trust between researchers and policy makers, appear to be effective.
Abstract:
All of this work was carried out using free software.
Abstract:
With the growing complexity of systems-on-chip, new challenges keep emerging in the design of these systems with respect to formal verification and high-level synthesis. Several efforts around SystemC, regarded as the standard for system-level design, are under way to address these new challenges. However, because of SystemC's complex concurrency model, addressing them remains a difficult task. We therefore believe it is essential to start from a better foundation by using a more effective concurrency model. Consequently, in this thesis we study a design methodology that offers a better abstraction for modeling parallel components, based on the concept of a transaction. We show how, thanks to the simple reasoning the transaction concept affords, it becomes easier to apply formal verification, incremental refinement, and high-level synthesis. To evaluate the effectiveness of this methodology, we set ourselves the goal of optimizing the simulation speed of a transactional model by exploiting a multicore machine. We present the parallel modeling and simulation environment we developed, and we study different scheduling strategies in terms of parallelism and synchronization overhead. An experiment on a model of the Wi-Fi 802.11a transmitter achieved a speedup of about 1.8 using two threads. With 8 threads, although the workload of the individual transactions was not large, we reached a speedup of about 4.6, which is a very promising result.
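As an illustration of the scheduling idea, here is a minimal sketch, in Python rather than the C++/SystemC setting of the actual environment, of wave-based transaction scheduling: all transactions whose dependencies are satisfied are dispatched together to a thread pool. The transaction names and the dependency graph are invented (loosely echoing an 802.11a pipeline), and since CPython threads do not run CPU-bound work in parallel, this sketches the strategy, not the measured speedup.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical transactions: each is an atomic unit of work whose inputs come
# only from the transactions it depends on, so two transactions with no
# dependency path between them can be evaluated in parallel.
transactions = {
    "scramble":   {"deps": [],           "work": lambda: "bits"},
    "encode":     {"deps": ["scramble"], "work": lambda: "symbols"},
    "interleave": {"deps": ["encode"],   "work": lambda: "reordered"},
    "pilot_gen":  {"deps": [],           "work": lambda: "pilots"},  # independent of the data path
    "modulate":   {"deps": ["interleave", "pilot_gen"], "work": lambda: "ofdm"},
}

def run_schedule(transactions, n_threads=2):
    done = set()
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        while len(done) < len(transactions):
            # All transactions whose dependencies are satisfied form one wave.
            ready = [name for name, t in transactions.items()
                     if name not in done and all(d in done for d in t["deps"])]
            # Transactions in the same wave are independent: run them in parallel.
            for name, _ in zip(ready, pool.map(lambda n: transactions[n]["work"](), ready)):
                done.add(name)

run_schedule(transactions)
```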
Abstract:
This work describes a methodology for converting a specialized dictionary into a learner's dictionary. The dictionary to which we apply our conversion method is the DiCoInfo, Dictionnaire fondamental de l'informatique et de l'Internet. We focus on changes affecting the presentation of data categories. By a specialized dictionary for learners we mean, in our case, a dictionary covering the field of computer science and the Internet that meets our users' needs in communicative and cognitive situations. Our dictionary is aimed at learners of the language of computing. We start by presenting a detailed description of four dictionaries for learners and explain how the observations made on these resources helped us develop our methodology. To develop the methodology, we first defined, based on the work of Bergenholtz and Tarp (Bergenholtz 2003; Tarp 2008; Fuertes Olivera and Tarp 2011), the type of users who may use our dictionary. Translators are our primary intended users; other users working in fields related to translation are also targeted: proofreaders, technical writers, and interpreters. We also determined the use situations of our dictionary: it aims to assist learners in solving text reception and text production problems (communicative situations) and in studying the terminology of computing (cognitive situations). We could thus establish its lexicographical functions: communicative and cognitive. Then we extracted 50 articles from the DiCoInfo and applied a number of changes to different aspects: the layout, the presentation of data, the navigation, and the use of multimedia. The changes were made according to two fundamental parameters: 1) simplification of the presentation; 2) the lexicographic functions (which include the intended users and the use situations). In this way we exploited the widgets offered by the technology to update the interface and layout, developed strategies to organize a large number of lexical links in a simpler way, associated these links with examples showing their use in specific contexts, and used multimedia such as audio pronunciations and illustrations.
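The function-driven simplification can be pictured as filtering and reordering an entry's data categories according to the lexicographic function being served. The sketch below is only an illustration of that idea: the field names and function profiles are invented and do not reflect DiCoInfo's actual schema.

```python
# Hypothetical sketch of function-driven data-category selection. Field names
# are invented for illustration; they are not DiCoInfo's real categories.
FULL_ENTRY = {
    "headword": "download",
    "actantial_structure": "X downloads Y from Z",
    "definition": "transfer (a file) from a remote computer to a local one",
    "lexical_links": ["upload (antonym)", "downloadable (derivative)"],
    "examples": ["She downloaded the installer from the vendor's site."],
    "pronunciation_audio": "download.mp3",
}

# Which categories each lexicographic function exposes, and in what order.
FUNCTION_PROFILES = {
    "text_reception":  ["headword", "definition", "examples"],
    "text_production": ["headword", "actantial_structure", "examples", "lexical_links"],
    "cognitive":       ["headword", "definition", "lexical_links", "pronunciation_audio"],
}

def render_entry(entry, function):
    """Return the simplified learner's view of an entry for one use situation."""
    return {cat: entry[cat] for cat in FUNCTION_PROFILES[function] if cat in entry}

print(render_entry(FULL_ENTRY, "text_production"))
```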
Abstract:
The Global Positioning System (GPS), with its high integrity, continuous availability, and reliability, revolutionized navigation based on radio ranging. With four or more GPS satellites in view, a GPS receiver can find its location anywhere on the globe with an accuracy of a few meters. High accuracy, within centimeters or even millimeters, is achievable by correcting the GPS signal with an external augmentation system. The use of satellites for critical applications like navigation has become a reality through the development of these augmentation systems (such as WAAS, SDCM, and EGNOS), whose primary objective is to provide the essential integrity information needed for navigation service in their respective regions. Apart from these, many countries have initiated space-based regional augmentation systems such as GAGAN and IRNSS of India, MSAS and QZSS of Japan, and COMPASS of China. In the future, these regional systems will operate simultaneously and emerge as a Global Navigation Satellite System (GNSS) supporting a broad range of activities in the global navigation sector.
Among the error sources in GPS precise positioning, the propagation delay due to atmospheric refraction is a limiting factor on the achievable accuracy. Although WADGPS, which aims at accurate positioning over a large area, broadcasts corrections for the different errors involved in GPS ranging, including ionospheric and tropospheric errors, the large temporal and spatial variability of atmospheric parameters, especially in the lower atmosphere (troposphere), makes the broadcast tropospheric corrections insufficiently accurate. This necessitates estimating the tropospheric error from realistic values of tropospheric refractivity. Presently available methodologies for estimating tropospheric delay are mostly based on atmospheric data and GPS measurements from mid-latitude regions, where atmospheric conditions differ significantly from those over the tropics; no such attempts had been made over the tropics. In a practical approach, when measured atmospheric parameters are not available, only analytical models developed from mid-latitude data can be used for this purpose. The major drawback of these existing models is that they neglect the seasonal variation of atmospheric parameters at stations near the equator, and in the tropics they underestimate the delay on quite a few occasions. In this context, the present study is a first and major step towards developing tropospheric delay models for the Indian region, a prime requisite for the future space-based navigation program (GAGAN and IRNSS). Apart from models based on measured surface parameters, a region-specific model that requires no measured atmospheric parameter as input, but depends only on latitude and day of the year, was developed for the tropical region with emphasis on the Indian sector.
The large variability of atmospheric water vapor on short spatial and/or temporal scales makes its measurement rather involved and expensive. A local network of GPS receivers is an effective tool for remote sensing of water vapor over land, and this recently developed technique proves effective for measuring precipitable water (PW). The potential of GPS for estimating atmospheric water vapor in all weather conditions and with high temporal resolution is explored; this will be useful for retrieving columnar water vapor from ground-based GPS data.
A good network of GPS receivers could be a major source of water vapor information for numerical weather prediction models and could fill the data gap in microwave remote sensing of water vapor over land.
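To make the two kinds of model concrete, the sketch below pairs the standard Saastamoinen zenith hydrostatic delay, computed from surface pressure, latitude, and height, with a generic latitude/day-of-year seasonal-harmonic model of the form the abstract describes. The harmonic coefficients are placeholders for illustration, not the fitted values from this study.

```python
import math

def saastamoinen_zhd(pressure_hpa, lat_deg, height_m):
    """Standard Saastamoinen zenith hydrostatic delay (metres) from surface pressure."""
    phi = math.radians(lat_deg)
    return 0.0022768 * pressure_hpa / (1 - 0.00266 * math.cos(2 * phi) - 0.28e-6 * height_m)

def seasonal_delay(day_of_year, mean_m=2.4, amp_m=0.05, phase_doy=28):
    """Sketch of a measurement-free model: a mean delay plus an annual harmonic.
    mean_m, amp_m, and phase_doy are placeholder coefficients; a real model
    would fit them per latitude band from radiosonde or GPS data."""
    return mean_m + amp_m * math.cos(2 * math.pi * (day_of_year - phase_doy) / 365.25)

print(saastamoinen_zhd(1013.25, 8.5, 30.0))  # e.g. a near-equatorial coastal station
print(seasonal_delay(172))                   # delay estimate near the June solstice
```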
Abstract:
The present thesis focuses on hole-doped lanthanum manganites and their thin-film forms. Hole-doped lanthanum manganites with higher substitutions of sodium are seldom reported in the literature. Such highly sodium-substituted lanthanum manganites are synthesized, and a detailed investigation of their structural and magnetic properties is carried out, with the magnetic nature of these materials near room temperature investigated explicitly. Their magnetocaloric application potential is also investigated. After a thorough investigation of the bulk samples, thin films of the bulk counterparts are studied. A magnetoelectric composite with ferroelectric and ferromagnetic components is developed using pulsed laser deposition, and the variation in its magnetic and electric properties is investigated; it is established that such a composite could be realized as a potential field-effect device. The central theme of this thesis is thus manganites, with the twin objectives of a material study leading to the demonstration of a device. Sincere efforts are made to synthesize phase-pure compounds, and their structural evaluation, compositional verification, and evaluation of ferroelectric and ferromagnetic properties are taken up. The focus of this investigation is therefore the magnetoelectric and magnetocaloric application potential of sodium-substituted lanthanum manganites. Bulk samples with Na substitution ranging from 50 percent to 90 percent were synthesized using a modified citrate gel method and were found to be orthorhombic in structure, belonging to the Pbnm space group. The variation of lattice parameters and unit-cell volume with sodium concentration is also dealt with. Magnetic measurements revealed that magnetization decreased with increasing sodium concentration.
Abstract:
Distributed systems are among the most vital components of the economy. The most prominent example is probably the Internet, a constituent element of our knowledge society. In recent years, the number of novel network types has steadily increased; amongst others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous interconnection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches for the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed with respect to its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods that borrow principles from natural evolution: they use a population of solution candidates which they refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the desired global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process whose solution candidates are distributed programs. The objective functions rate how closely these programs approximate the goal behavior in multiple randomized network simulations. The evolutionary process step by step selects the most promising solution candidates and modifies and combines them with mutation and crossover operators. In this way, a description of the global behavior of a distributed system is translated automatically into programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways of representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations designed by us, called Rule-based Genetic Programming (RBGP, eRBGP). We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, distributed mutual exclusion at a critical section, and distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs the evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features that make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches were developed especially to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and was in most cases superior to the other representations.
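The evolutionary loop just described follows the standard pattern below, shown as a minimal sketch: a bit-string genome and a toy fitness function stand in for the distributed-program representations and the randomized network simulations that actually score the candidates.

```python
import random

# Minimal sketch of the evolutionary loop. The "program" here is just a bit
# string, and the fitness function is a stand-in for scoring a candidate
# distributed program in randomized network simulations.
GENOME_LEN, POP_SIZE, GENERATIONS = 32, 50, 100

def fitness(genome):  # placeholder objective: count of 1-bits
    return sum(genome)

def mutate(genome, rate=0.02):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]  # truncation selection of the most promising
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(fitness(max(population, key=fitness)))
```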
Abstract:
In this thesis, likelihood depths, introduced by Mizera and Müller (2004), are used to develop (outlier-)robust estimators and tests for the unknown parameter of a continuous density. The methods developed are then applied to three different distributions. For a one-dimensional parameter, the likelihood depth of a parameter in a data set is computed as the minimum of the fraction of observations for which the derivative of the log-likelihood with respect to the parameter is non-negative and the fraction for which this derivative is non-positive. The parameter for which both fractions are equal therefore has maximal depth. This parameter is initially chosen as the estimator, since likelihood depth is meant to measure how well a parameter fits the data. Asymptotically, the parameter of maximal depth is the one for which the probability that the derivative of the log-likelihood with respect to the parameter is non-negative for a single observation equals one half. If this does not hold for the underlying parameter, the estimator based on likelihood depth is biased. This thesis shows how this bias can be corrected so that the corrected estimators are consistent. To develop tests for the parameter, the simplicial likelihood depth introduced by Müller (2005), which is a U-statistic, is used. It turns out that for the same distributions for which likelihood depth yields biased estimators, the simplicial likelihood depth is an unbiased U-statistic. In particular, its asymptotic distribution is therefore known, and tests for various hypotheses can be formulated. For some hypotheses, however, the shift in the depth leads to poor power of the corresponding test; corrected tests are therefore introduced, together with conditions under which they are consistent. The thesis consists of two parts. The first part presents the general theory of the estimators and tests and proves their consistency. In the second part, the theory is applied to three distributions: the Weibull distribution and the Gaussian and Gumbel copulas. This demonstrates how the methods of the first part can be used to derive (robust) consistent estimators and tests for the unknown parameter of a distribution. Overall, robust estimators and tests based on likelihood depths can be found for all three distributions. On uncontaminated data, existing standard methods are partly superior, but the advantage of the new methods shows in contaminated data and data with outliers.
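As a concrete illustration of the one-dimensional definition, take the exponential distribution with rate lambda (not one of the three families treated in the thesis): the score is d/dlambda log f(x; lambda) = 1/lambda - x, so the depth of lambda is the smaller of the empirical fractions with x <= 1/lambda and x >= 1/lambda, and the deepest parameter is 1/median(x).

```python
import numpy as np

# Likelihood depth for the exponential rate parameter: the score per
# observation is 1/lam - x, so depth(lam) is the smaller of the fractions of
# observations with non-negative and non-positive score.
def likelihood_depth(lam, x):
    score = 1.0 / lam - x
    return min(np.mean(score >= 0), np.mean(score <= 0))

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=200)  # true rate lam = 0.5
grid = np.linspace(0.05, 2.0, 400)
depths = [likelihood_depth(l, x) for l in grid]
print(grid[int(np.argmax(depths))], 1 / np.median(x))  # maximizer vs closed form
```

Note that the maximizer approaches 1/median(x) = lambda/ln 2 rather than the true rate lambda, which is exactly the kind of asymptotic bias the corrections described above are designed to remove.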
Abstract:
Despite its young history, Computer Science Education has seen a number of "revolutions". As a veteran in the field, the author reflects on the many changes he has seen in computing and its teaching. The intent of this personal retrospective is to point out that most revolutions came unforeseen and that many of the new learning initiatives, despite high financial input, ultimately failed. The author then considers the current revolution (MOOCs, inverted lectures, peer instruction, game design) and, based on the lessons learned earlier, argues why video recording is so successful. Given that this is the decade in which we lost print (papers, printed books, book shops, libraries), the author then conjectures that the impact of the Internet will make this revolution different from previous ones in that most of the changes are irreversible. As a consequence, he warns against storming ahead blindly and suggests conserving, while it is still possible, valuable components of what might soon be called the antebellum age of education.
Abstract:
This thesis describes a methodology, a representation, and an implemented program for troubleshooting digital circuit boards at roughly the level of expertise one might expect of a human novice. Existing methods for model-based troubleshooting have not scaled up to deal with complex circuits, in part because traditional circuit models do not explicitly represent aspects of the device that troubleshooters consider important. For complex devices, the model of the target device should be constructed with the goal of troubleshooting explicitly in mind. Given that methodology, the principal contributions of the thesis are ways of representing complex circuits that help make troubleshooting feasible. Temporally coarse behavior descriptions are a particularly powerful simplification. Instantiating this idea for the circuit domain produces a vocabulary for describing digital signals. The vocabulary has a level of temporal detail sufficient to make useful predictions about the response of the circuit, while remaining coarse enough to keep those predictions computationally tractable. Other contributions are principles for using these representations. Although not embodied in a program, these principles are sufficiently concrete that models can be constructed manually from existing circuit descriptions such as schematics, part specifications, and state diagrams. One such principle is that if there are components with particularly likely failure modes, or failure modes in which their behavior is drastically simplified, this knowledge should be incorporated into the model. Further contributions include the solution of technical problems resulting from the use of explicit temporal representations and of design descriptions with tangled hierarchies.
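To give a feel for why temporal coarseness helps, here is a toy sketch, with an invented two-component chain and just two abstract signal values; the vocabulary described in the thesis is considerably richer. Even this crude abstraction localizes a fault without simulating individual clock edges.

```python
# Hypothetical sketch of temporally coarse, model-based prediction: signals
# are described only by the abstract values "constant" and "changing".
CHANGING, CONSTANT = "changing", "constant"

def clock_buffer(inp):  # a healthy buffer propagates activity unchanged
    return inp

def counter(clk):       # a healthy counter's output toggles iff its clock runs
    return CHANGING if clk == CHANGING else CONSTANT

# Model of a two-component chain: oscillator -> buffer -> counter.
def predict(osc_out):
    buf_out = clock_buffer(osc_out)
    return {"buf_out": buf_out, "count_out": counter(buf_out)}

observed = {"buf_out": CHANGING, "count_out": CONSTANT}
predicted = predict(CHANGING)
# Any signal where prediction and observation disagree implicates the
# components between the last agreeing signal and the discrepancy.
conflicts = [s for s in predicted if predicted[s] != observed[s]]
print(conflicts)  # ['count_out'] -> suspect the counter, not the buffer
```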
Abstract:
We present a statistical image-based shape + structure model for Bayesian visual hull reconstruction and 3D structure inference. The 3D shape of a class of objects is represented by sets of contours from silhouette views simultaneously observed from multiple calibrated cameras. Bayesian reconstructions of new shapes are then estimated using a prior density constructed with a mixture model and probabilistic principal components analysis. We show how the use of a class-specific prior in a visual hull reconstruction can reduce the effect of segmentation errors from the silhouette extraction process. The proposed method is applied to a data set of pedestrian images, and improvements in the approximate 3D models under various noise conditions are shown. We further augment the shape model to incorporate structural features of interest; unknown structural parameters for a novel set of contours are then inferred via the Bayesian reconstruction process. Model matching and parameter inference are done entirely in the image domain and require no explicit 3D construction. Our shape model enables accurate estimation of structure despite segmentation errors or missing views in the input silhouettes, and works even with only a single input view. Using a data set of thousands of pedestrian images generated from a synthetic model, we can accurately infer the 3D locations of 19 joints on the body based on observed silhouette contours from real images.
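The core of the prior-based reconstruction can be sketched on synthetic contour vectors: projecting a corrupted shape onto a low-dimensional subspace learned from the class suppresses segmentation noise. Plain PCA projection is used below for brevity where the paper uses a mixture model with probabilistic principal components analysis; the data are invented ellipse-like "contours".

```python
import numpy as np

# Learn a low-dimensional shape subspace from clean training contours, then
# denoise a corrupted contour by projecting it onto that subspace.
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
train = np.array([np.concatenate([(1.0 + 0.1 * rng.standard_normal()) * np.cos(t),
                                  (1.5 + 0.1 * rng.standard_normal()) * np.sin(t)])
                  for _ in range(200)])  # synthetic ellipse-like "contours"

mu = train.mean(axis=0)
U, S, Vt = np.linalg.svd(train - mu, full_matrices=False)
W = Vt[:5]                               # top 5 shape modes span the class

noisy = train[0] + 0.3 * rng.standard_normal(train.shape[1])  # segmentation errors
denoised = mu + W.T @ (W @ (noisy - mu))  # project onto the shape prior
print(np.linalg.norm(noisy - train[0]), np.linalg.norm(denoised - train[0]))
```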
Abstract:
We present an example-based learning approach for locating vertical frontal views of human faces in complex scenes. The technique models the distribution of human face patterns by means of a few view-based "face" and "non-face" prototype clusters. At each image location, the local pattern is matched against the distribution-based model, and a trained classifier determines, based on the local difference measurements, whether or not a human face exists at the current image location. We provide an analysis that helps identify the critical components of our system.
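The prototype-cluster scheme can be sketched as follows, with synthetic stand-ins for image windows: k-means clusters model the "face" and "non-face" distributions, and the classifier sees only the vector of distances to the cluster centers. Logistic regression stands in for the trained classifier, and the distance measure is plain Euclidean; the original system's difference measurements and classifier are more elaborate.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins for normalized image windows (e.g. 19x19 = 361 pixels).
rng = np.random.default_rng(0)
faces = rng.normal(0.6, 0.1, size=(300, 361))     # stand-in "face" windows
nonfaces = rng.normal(0.4, 0.2, size=(300, 361))  # stand-in "non-face" windows

face_proto = KMeans(n_clusters=6, n_init=10, random_state=0).fit(faces)
nonface_proto = KMeans(n_clusters=6, n_init=10, random_state=0).fit(nonfaces)

def distance_features(windows):
    # Distance of each window to every prototype centroid (6 face + 6 non-face).
    return np.hstack([face_proto.transform(windows), nonface_proto.transform(windows)])

X = distance_features(np.vstack([faces, nonfaces]))
y = np.array([1] * 300 + [0] * 300)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.score(X, y))
```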
Abstract:
We present a component-based approach for recognizing objects under large pose changes. From a set of training images of a given object we extract a large number of components, which are clustered based on the similarity of their image features and their locations within the object image. The cluster centers form an initial set of component templates from which we select a subset for the final recognizer. In experiments we evaluate different sizes and types of components and three standard techniques for component selection. The component classifiers are finally compared to global classifiers on a database of four objects.
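The clustering step can be sketched on synthetic component descriptors: appearance features and normalized image position are concatenated, with an invented weighting factor, so that clusters respect both similarity criteria, and the cluster centers become the candidate templates.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-ins for extracted components: a patch descriptor plus the
# component's normalized (x, y) position inside the object window.
rng = np.random.default_rng(2)
n_components = 500
appearance = rng.normal(size=(n_components, 32))  # stand-in patch descriptors
position = rng.uniform(size=(n_components, 2))    # location within the object image

alpha = 2.0                                        # weight of location vs appearance
joint = np.hstack([appearance, alpha * position])

kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(joint)
templates = kmeans.cluster_centers_[:, :32]          # appearance part = template
locations = kmeans.cluster_centers_[:, 32:] / alpha  # expected component location
print(templates.shape, locations.shape)
```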
Abstract:
In this paper we present a component-based person detection system that is capable of detecting frontal, rear and near-side views of people, as well as partially occluded persons, in cluttered scenes. The framework described here for people is easily applied to other objects as well. The motivation for developing a component-based approach is twofold: first, to enhance the performance of person detection systems on frontal and rear views of people, and second, to develop a framework that directly addresses the problem of detecting people who are partially occluded or whose body parts blend in with the background. The data classification is handled by several support vector machine classifiers arranged in two layers, an architecture known as an Adaptive Combination of Classifiers (ACC). The system performs very well and is capable of detecting people even when not all components of a person are found. Its performance is significantly better than that of a full-body person detector designed along similar lines, which suggests that the improvement is due to the component-based approach and the ACC data classification structure.
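The two-layer idea admits a minimal sketch with synthetic data: one SVM per body component produces a real-valued score, and a second-layer SVM combines the score vector into the final person/no-person decision. Component names, feature sizes, the data, and the linear kernels are all simplifying assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC

# Two-layer Adaptive Combination of Classifiers on synthetic component data.
rng = np.random.default_rng(3)
COMPONENTS = ["head", "legs", "left_arm", "right_arm"]

def make_data(n, person):
    shift = 0.5 if person else 0.0
    return {c: rng.normal(shift, 1.0, size=(n, 16)) for c in COMPONENTS}

pos, neg = make_data(200, True), make_data(200, False)
y = np.array([1] * 200 + [0] * 200)

# Layer 1: one component classifier each, trained on its own feature space.
layer1 = {c: SVC(kernel="linear").fit(np.vstack([pos[c], neg[c]]), y) for c in COMPONENTS}

def component_scores(sample):
    # Signed distance from each separating hyperplane; a missing component
    # could be encoded as a fixed low score instead of being fatal.
    return np.array([layer1[c].decision_function(sample[c]) for c in COMPONENTS]).T

# Layer 2: combine the four score streams into the final decision.
train_scores = component_scores({c: np.vstack([pos[c], neg[c]]) for c in COMPONENTS})
layer2 = SVC(kernel="linear").fit(train_scores, y)
print(layer2.score(train_scores, y))
```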
Abstract:
Recently, researchers have introduced the notion of super-peers to improve the signaling efficiency as well as the lookup performance of peer-to-peer (P2P) systems. In a separate development, recent work on applications of mobile ad hoc networks (MANETs) has produced several proposals for utilizing mobile fleets such as city buses to deploy a mobile backbone infrastructure for communication and Internet access in a metropolitan environment. This paper further explores the possibility of deploying P2P applications, such as content sharing and distributed computing, over this mobile backbone infrastructure. Specifically, we study how city buses may be deployed as a mobile system of super-peers. We discuss the main motivations behind our proposal, and outline in detail the design of a super-peer-based structured P2P system using a fleet of city buses.
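One way to picture the lookup path in such a system is the hypothetical sketch below: buses act as super-peers arranged on a consistent-hash ring, and an ordinary peer publishes and resolves content keys through whichever bus it is currently attached to. The class and method names are invented; the paper's actual protocol details are not reproduced here.

```python
import hashlib

# Hypothetical bus-as-super-peer overlay using consistent hashing: each key is
# owned by the bus whose hash is the nearest successor of the key's hash.
def h(s):
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

class BusSuperPeer:
    def __init__(self, name, ring):
        self.name, self.ring, self.index = name, ring, {}
        ring.append(self)

    def owner(self, key):
        # Successor on the ring: smallest clockwise distance from hash(key).
        return min(self.ring, key=lambda b: (h(b.name) - h(key)) % 2**160)

    def publish(self, key, peer):
        # An attached peer registers its content via this bus.
        self.owner(key).index.setdefault(key, set()).add(peer)

    def lookup(self, key):
        # Any bus can resolve a key by routing to the owning super-peer.
        return self.owner(key).index.get(key, set())

ring = []
buses = [BusSuperPeer(f"bus-{i}", ring) for i in range(5)]
buses[0].publish("song.mp3", "peer-A")  # peer-A is attached to bus-0
print(buses[3].lookup("song.mp3"))      # peer-B queries via bus-3 -> {'peer-A'}
```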