946 results for Rocket engines.


Relevance:

10.00%

Publisher:

Abstract:

The general objective of this doctoral research is to study the determinants of the pedagogical integration of information and communication technologies (ICT) by professors at the Université de Ouagadougou (UO). This led us to examine, in turn, professors' technological competencies, the resistance factors constraining their pedagogical integration of ICT, and their acceptance and specific uses of ICT. The work is built around theoretical concepts of educational ICT use, techno-pedagogical competencies, resistance factors, ICT acceptance and pedagogical ICT integration, framed by models of ICT integration by teachers and models of acceptance and use of a new technology. The data analysis strategy combined descriptive and analytical approaches, notably psychometrics and/or the econometrics of limited dependent variable models. Using a quantitative design, 82 professors recruited through informed consent completed questionnaires consisting largely of Likert-scale items. The study of professors' technological competencies yielded, first, a portrait of their ICT uses: the most widespread uses at this university are office software, e-mail clients and web browsers. It also yielded a portrait of their technological competencies: professors use several types of software and recognize the importance of ICT for their teaching and research tasks, even though their perceived mastery of some telematic applications remains very low.

For competencies aimed at exploiting ICT in communication and collaboration situations, and at searching for and processing information with ICT, professors' mastery levels were very high. By contrast, mastery was very low for competencies aimed at creating learning situations with ICT and at developing and disseminating learning resources with ICT, despite the great importance professors attached to these advanced competencies, which are essential for effective and efficient pedagogical integration of ICT. The study of resistance factors produced a typology of these factors, ranging from material and infrastructural constraints to constraints linked to computer skills and to professors' motivation and personal commitment, factors that can lead to outright rejection of the technology. They include the compatibility of ICT with professors' teaching and research tasks, the perceived usefulness of ICT for these activities, the ease of use of ICT, and professors' motivation or personal commitment to ICT use. The costs of ICT access and the lack of institutional support and technical assistance also proved to hinder the development of these uses among professors. Estimates of the determinants of ICT acceptance and educational use showed that it is above all professors' "behavioural intention" to adopt ICT and their "Internet experience" that positively affect educational ICT use.

"Facilitating conditions", which capture not only the quality of the technological infrastructure but also the existence of institutional support for ICT use, affected these uses negatively. Recommendations arising from this work include training professors in the specific competencies identified, improving the quality of the existing technological infrastructure, creating a software library, and implementing adequate institutional incentives such as regular technical assistance for professors, reduced statutory teaching loads for innovating professors, and recognition of the efforts these innovators have already made in the educational use of ICT at their institution.
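The econometric step described above, estimating how factors such as behavioural intention and Internet experience affect the probability of educational ICT use, can be sketched with a minimal limited-dependent-variable (logit) model. The variables and toy scores below are invented for illustration; the thesis's actual specification and 82-professor sample are richer.

```python
import numpy as np

def fit_logit(X, y, lr=0.1, steps=2000):
    """Minimal logistic regression by gradient ascent: models the
    probability that a respondent reports educational ICT use (y=1)
    from explanatory scores (columns of X). Illustrative only."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])  # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))          # predicted probabilities
        w += lr * Xb.T @ (y - p) / len(y)          # gradient of log-likelihood
    return w

def predict_proba(w, X):
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

# Toy data: columns = [behavioural intention, Internet experience] on 1-5 scales
X = np.array([[5, 4], [4, 5], [5, 5], [2, 1], [1, 2], [2, 2]], dtype=float)
y = np.array([1, 1, 1, 0, 0, 0], dtype=float)      # 1 = reports educational ICT use
w = fit_logit(X, y)
```

A positive fitted coefficient on a factor means higher scores on that factor raise the predicted probability of ICT use, mirroring the sign interpretation of the estimates reported above.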

Relevance:

10.00%

Publisher:

Abstract:

In recent years, the field of consumption has evolved considerably. Marketers have begun using the Internet to influence consumers, employing original and imaginative tactics that make possible a level of interpersonal communication that was previously unattainable. Their technology-mediated interactions with consumers take several different forms, each accompanied by its own set of legal problems. First, it is not uncommon for marketers to use tools that let them track consumers' actions in the virtual world as well as in the physical one. Personal information collected in this way is often used for online behavioural advertising, a use that does not always respect the limits of the right to privacy. It has also become quite common for marketers to use social media to converse with consumers. These forums have also served for the commission of anti-competitive acts and for the dissemination of false and misleading advertising, two practices prohibited by both competition law and consumer protection law. Finally, marketers employ various tactics to reach consumers more effectively by making themselves more visible in Internet search engines, some of which are considered dishonest and may raise problems in competition law and trademark law. This thesis offers a detailed description of the tools used for Internet marketing and of the ways in which they are used.

It also illustrates the legal problems that can arise from their use, defines the legislative framework governing marketers' use of these tools, and finally shows that the laws at play in such circumstances can in fact prove economically beneficial for marketers.

Relevance:

10.00%

Publisher:

Abstract:

Search engines are part of our daily lives. Currently, more than a third of the world's population uses the Internet, and search engines let them quickly find the information or products they want. Information retrieval (IR) is the foundation of modern search engines. Traditional IR approaches assume that index terms are independent, yet terms that appear in the same context are often dependent. Failing to account for these dependencies is one cause of noise (non-relevant results) in the result list. Some studies have proposed integrating certain types of dependency, such as proximity, co-occurrence, adjacency and grammatical dependency. In most cases, the dependency models are built separately and then combined with the traditional word-based model with a constant weight; consequently, they cannot properly capture variable dependencies and dependency strength. For example, the dependency between the adjacent words "Black Friday" is stronger than that between the words "road constructions". In this thesis, we study different approaches to capturing term relations and their dependency strength, and propose the following methods. First, we re-examine the combination approach using different indexing units for Chinese monolingual IR and English-Chinese cross-language IR. Besides words, we study the possibility of using bigrams and unigrams as translation units for Chinese. Several translation models are built to translate English words into Chinese unigrams, bigrams and words using a parallel corpus. An English query is then translated in several ways, and a ranking score is produced for each translation.

The final ranking score combines all these translation types. Second, we consider term dependency using Dempster-Shafer evidence theory. An occurrence of a text fragment (of several words) in a document is treated as representing the set of all its constituent terms, and a probability is assigned to that set of terms rather than to each individual term. At query evaluation time, this probability is redistributed to the query terms when they differ. This approach lets us integrate dependency relations between terms. Third, we propose a discriminative model to integrate the different types of dependency according to their strength and their usefulness for IR. In particular, we consider adjacency and co-occurrence dependencies at different distances, that is, bigrams and term pairs within windows of 2, 4, 8 and 16 words. The weight of a bigram or of a dependent term pair is determined from a set of features using SVM regression. All the proposed methods are evaluated on several English and/or Chinese collections, and the experimental results show that they yield substantial improvements over the state of the art.
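The window-based co-occurrence features described above (term pairs within 2, 4, 8 and 16 words) can be sketched as a simple extraction routine. This is an illustrative sketch, not the thesis's implementation; tokenization is naive and the SVM-regression weighting step is omitted.

```python
def windowed_pairs(tokens, window):
    """Count unordered term pairs whose occurrences lie within
    `window` positions of each other; window=2 yields adjacent
    pairs (bigrams), larger windows yield looser co-occurrences."""
    pairs = {}
    for i, t in enumerate(tokens):
        for j in range(i + 1, min(i + window, len(tokens))):
            key = tuple(sorted((t, tokens[j])))   # unordered pair
            pairs[key] = pairs.get(key, 0) + 1
    return pairs

toks = "black friday deals on black friday".split()
bigrams = windowed_pairs(toks, 2)   # adjacency only
wide = windowed_pairs(toks, 4)      # pairs within a 4-word window
```

Feature vectors for the discriminative model could then be built per pair from counts at each window size, with the pair's retrieval weight regressed from those features.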

Relevance:

10.00%

Publisher:

Abstract:

Sharing information with those who need it has always been an idealistic goal of networked environments. With the proliferation of computer networks, information is so widely distributed among systems that well-organized schemes for retrieval, and also for discovery, are imperative. This thesis investigates the problems associated with such schemes and suggests a software architecture aimed at achieving meaningful discovery. The use of information elements as a modelling base for efficient information discovery in distributed systems is demonstrated with the aid of a novel conceptual entity called the infotron. The investigations focus on distributed systems and their associated problems. The study was directed towards identifying a suitable software architecture and incorporating it in an environment where information growth is phenomenal, so that a proper mechanism for carrying out information discovery becomes feasible. An empirical study undertaken with the aid of an election database of geographically distributed constituencies provided the required insights; this is manifested in the Election Counting and Reporting Software (ECRS) system. ECRS is an essentially distributed software system designed to prepare reports for district administrators about the election counting process and to generate other miscellaneous statutory reports. Most distributed systems of the nature of ECRS normally possess a "fragile architecture" that makes them amenable to collapse on the occurrence of minor faults. This is resolved with the help of the proposed penta-tier architecture, which employs five different technologies at the different tiers of the architecture. The results of the experiments conducted, and their analysis, show that such an architecture helps to keep the different components of the software intact and impermeable to internal or external faults.

The architecture thus evolved needed a mechanism to support information processing and discovery, which necessitated the introduction of the novel concept of infotrons. Further, when a computing machine has to perform any meaningful extraction of information, it is guided by what is termed an infotron dictionary. Another empirical study was carried out to find which of the two prominent markup languages, HTML and XML, is better suited for the incorporation of infotrons. A comparative study of 200 documents in HTML and XML was undertaken; the result was in favor of XML. The concepts of the infotron and the infotron dictionary were applied to implement an Information Discovery System (IDS). IDS is essentially a system that starts with the infotron(s) supplied as clue(s) and brews the information required to satisfy the discoverer's need from the documents available at its disposal (its information space). The various components of the system and their interactions follow the penta-tier architectural model and can therefore be considered fault-tolerant. IDS is generic in nature, and its characteristics and specifications were drawn up accordingly. Many subsystems interact with the multiple infotron dictionaries maintained in the system. In order to demonstrate the working of IDS, and to discover information without modifying a typical Library Information System (LIS), an Information Discovery in Library Information System (IDLIS) application was developed. IDLIS is essentially a wrapper for the LIS, which maintains all the databases of the library. The purpose was to demonstrate that the functionality of a legacy system can be enhanced by augmenting it with IDS, leading to an information discovery service. IDLIS demonstrates IDS in action.

IDLIS proves that any legacy system can be effectively augmented with IDS to provide the additional functionality of an information discovery service. Possible applications of IDS and the scope for further research in the field are also covered.
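The clue-driven discovery flow described above can be sketched in miniature. The infotron is a thesis-specific concept; here the "infotron dictionary" is modelled, purely hypothetically, as a plain inverted index from clue terms to document identifiers, which is far simpler than the structure the thesis proposes.

```python
def build_dictionary(docs):
    """Build a toy stand-in for an infotron dictionary: a mapping
    from each term to the set of documents containing it."""
    index = {}
    for doc_id, text in docs.items():
        for term in set(text.lower().split()):
            index.setdefault(term, set()).add(doc_id)
    return index

def discover(index, clues):
    """Return the documents matching all supplied clues, mimicking
    discovery that starts from infotron(s) given as clue(s)."""
    sets = [index.get(c.lower(), set()) for c in clues]
    return set.intersection(*sets) if sets else set()

# Hypothetical information space of two documents
docs = {"d1": "election counting report", "d2": "library catalogue report"}
idx = build_dictionary(docs)
```

A wrapper in the spirit of IDLIS would expose `discover` over an unmodified legacy catalogue, leaving the underlying LIS databases untouched.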

Relevance:

10.00%

Publisher:

Abstract:

This thesis is entitled "Internet Utilization and Academic Activities of Faculty Members in the Universities of Kerala: An Analytical Study". Today, scientific research is throwing up new discoveries, inventions and vistas by the hour; we are witnessing a veritable knowledge explosion. It is important for university faculty members to keep abreast of it in order to give their students up-to-date information about new developments in the subjects they study, and the Internet is an invaluable tool for doing so. Most of the universities have sufficient Internet facilities, but accessibility for all faculty members is not adequate. University libraries also provide a standard supplementary service in the Internet area. This study indicates differential levels of awareness and utilization of Internet services by faculty members in the areas of teaching, research and publication; the overall impression, however, is that awareness and utilization are inadequate. This points to the urgent need to devise programs and schemes to promote Internet utilization among faculty members. The suggestions indicate the key areas that deserve attention from policy makers and administrators. Thanks to the Internet, every new development in every field of study is just a click away for faculty members, research scholars and students.

Relevance:

10.00%

Publisher:

Abstract:

High-energy materials are essential ingredients in both rocket and explosive formulations, but they can be vulnerable to maltreatment. During the Gulf War, several catastrophic accidents involving forces' own payload munitions were reported. The role of energetic binders is to wrap explosive formulations and so convert them into insensitive munitions. With the aid of energetic binders, explosive charges are protected from tragic accidents due to fire, bullet impact, adjacent detonation and mishandled transportation, without giving up the intended total energy output. The use of energetic binders in rocket propellants and explosive charges has increased since the Second World War. Inert binders in combination with energetic materials performed well as binders, but they diluted the final formulation, and the total energy output was obviously reduced. Research in the field of energetic polymers is currently an emerging area, since these polymers play a crucial role in insensitive munitions. The present work emphasises the synthesis and characterization of energetic polymers based on oxetanes, oxiranes and polyphosphazenes. The thesis is structured into six chapters. The first part of chapter 1 gives a brief history of energetic polymers; the second part presents a brief literature survey of energetic polymers based on oxetanes and oxiranes; the third and fourth parts deal with energetic plasticizers and energetic polyphosphazenes; the fifth part describes the various characterization techniques adopted for the current study; and the sixth part states the objectives of the present work.

Relevance:

10.00%

Publisher:

Abstract:

Comets have been spectacular objects in the night sky since the dawn of mankind. Owing to their giant apparitions and enigmatic behaviour, often followed by coincidental calamities, they were deemed notorious and called "bad omens". With the systematic study of these objects, the modern scientific community came to understand that they are part of our solar system. Comets are believed to be remnant bodies left over from the formation of the solar system and to carry the material of the solar nebula; hence they are considered the most pristine objects, able to provide information about the conditions in the solar nebula. They are small bodies of our solar system, typically about a kilometre to a few tens of kilometres in size, orbiting the Sun in highly elliptical orbits. The solid body of a comet is the nucleus, a conglomerated mixture of water ice, dust and other gases. When the cometary nucleus advances towards the Sun in its orbit, the ices sublimate and produce a gaseous envelope around the nucleus called the coma. The gravity of a cometary nucleus is very small and hence cannot influence the motion of gases in the coma: though the nucleus is only a few kilometres in size, it can produce a transient, extensive, expanding atmosphere several orders of magnitude larger. By ejecting gas and dust into space, comets become the most active members of the solar system. Solar radiation and the solar wind influence the motion of dust and ions, producing the dust and ion tails, respectively. Comets have been observed in different spectral regions with rocket-, ground- and space-borne optical instruments. The observed emission intensities are used to quantify the chemical abundances of different species in comets, and a study of the various physical and chemical processes that govern these emissions is essential before estimating chemical abundances in the coma.

Cameron band emission of the CO molecule has been used to derive the CO2 abundance in comets, on the assumption that photodissociation of CO2 mainly produces these emissions. Similarly, the atomic oxygen visible emissions have been used to probe H2O in the cometary coma: the observed ratio of the green line ([OI] 5577 Å) to the red doublet ([OI] 6300 and 6364 Å) has been used to confirm H2O as the parent species of these emissions. In this thesis, a model is developed to understand the photochemistry of these emissions and is applied to several comets. The model-calculated emission intensities are compared with observations made by space-borne instruments such as the International Ultraviolet Explorer (IUE) and the Hubble Space Telescope (HST), as well as by various ground-based telescopes.
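The green-to-red-doublet diagnostic mentioned above reduces to a simple intensity ratio. The sketch below uses made-up line intensities; the often-quoted benchmark that a G/R ratio of roughly 0.1 points to H2O as the dominant parent is indicative, not a hard threshold, and real analyses must first correct for the physical and chemical coma processes the thesis models.

```python
def green_to_red_ratio(i5577, i6300, i6364):
    """[OI] green line (5577 A) intensity divided by the summed
    red-doublet (6300 + 6364 A) intensity: the G/R ratio."""
    return i5577 / (i6300 + i6364)

# Arbitrary illustrative intensities (same relative units for all lines)
r = green_to_red_ratio(0.9, 6.0, 2.0)
h2o_like = r < 0.15   # loose, assumed cut-off for an H2O-dominated coma
```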

Relevance:

10.00%

Publisher:

Abstract:

Tourism, a smokeless industry, is now a multi-billion, multi-sectoral and multi-dimensional activity in the world. Twenty-first-century tourism has even reached space: a Russian rocket carried the space vehicle of Dennis Tito, an American businessman and the world's first space tourist, to the space station, and the time is not far off when tourists will be carried to the Moon and other planets in specially launched vehicles. Tourism is considered an agent of social change, bridging gaps among nations, regions and people and helping them to open up. It is a promoter of development, material and spiritual, at both the macro and the micro level. The General Assembly of the United Nations, in designating 1967 as "International Tourism Year", recognized the importance of international travel as a means of fostering understanding among people, giving them knowledge of the rich heritage of past civilizations and a better appreciation of the values of different cultures, thus contributing to the strengthening of world peace; it adopted the theme "Tourism: Passport to Peace". Our veteran national leader and the first Prime Minister of India, Jawaharlal Nehru, said "welcome a tourist and send back a friend", which indicates the need to extend friendly hospitality to inbound tourists. Modern transportation has removed the obstacle of distance, enabling people to appreciate each other and to engage in the exchange of ideas and commerce. Tourism can help overcome real prejudices and foster bonds; it can be a real force for world peace. Considering the vast and varied potential of tourism in the state and its impact on the state's economic, social and cultural environment, a detailed study is found to be relevant and imperative.

Relevance:

10.00%

Publisher:

Abstract:

Salient-pole brushless alternators coupled to IC engines are extensively used as stand-by power supply units for meeting industrial power demands. The design of such generators demands a high power-to-weight ratio, high efficiency and low cost per kVA output. Moreover, performance characteristics of such machines, like voltage regulation and short circuit ratio (SCR), are critical when the machines are put into parallel operation, and alternators for critical applications like defence and aerospace demand very low harmonic content in the output voltage. While designing such alternators, accurate prediction of machine characteristics, including total harmonic distortion (THD), is essential to minimize development cost and time. Total harmonic distortion in the output voltage of alternators should be as low as possible, especially when powering very sophisticated and critical applications. The output voltage waveform of a practical AC generator is a replica of the space distribution of the flux density in the air gap, and several factors, such as the shape of the rotor pole face, core saturation, slotting and the style of coil disposition, make the realization of a sinusoidal air gap flux wave impossible. These flux harmonics introduce undesirable effects on alternator performance, such as high neutral current due to triplen harmonics, voltage distortion, noise, vibration, excessive heating and extra losses resulting in poor efficiency, which in turn necessitate de-rating of the machine, especially when it is connected to non-linear loads. As an important control unit of the brushless alternator, the excitation system and its dynamic performance have a direct impact on the alternator's stability and reliability.

The thesis explores the design and implementation of an excitation system utilizing the third harmonic flux in the air gap of brushless alternators, using an additional auxiliary winding, wound for 1/3 pole pitch, embedded in the stator slots and electrically isolated from the main winding. In the third harmonic excitation system, the combined effect of two auxiliary windings, one with 2/3 pitch and another third harmonic winding with 1/3 pitch, is used to ensure good voltage regulation without an electronic automatic voltage regulator (AVR) and also, cost-effectively, to reduce the total harmonic content in the output voltage. Designing the third harmonic winding by analytic methods demands accurate calculation of the third harmonic flux density in the air gap of the machine. However, precise estimation of the amplitude of the third harmonic flux in the air gap by conventional design procedures is difficult due to the complex geometry of the machine and the non-linear characteristics of the magnetic materials. As such, prediction of the field parameters by conventional design methods is unreliable, and hence virtual prototyping of the machine is done to enable accurate design of the third harmonic excitation system. In the design and development cycle of electrical machines, it is recognized that the use of analytical and experimental methods followed by expensive and inflexible prototyping is time consuming and no longer cost effective. Due to advancements in computational capabilities over recent years, finite element method (FEM) based virtual prototyping has become an attractive alternative to well-established semi-analytical and empirical design methods, as well as to the still popular trial-and-error approach followed by costly and time-consuming prototyping. Hence, by virtually prototyping the alternator using FEM, the important performance characteristics of the machine are predicted.

The design of the third harmonic excitation system is carried out with the help of results obtained from the virtual prototype of the machine. The third harmonic excitation (THE) system is implemented in a 45 kVA experimental machine, and experiments are conducted to validate the simulation results. Simulation and experimental results show that by utilizing the third harmonic flux in the air gap of the machine for excitation purposes under loaded conditions, the triplen harmonic content in the output phase voltage is significantly reduced. The prototype machine with the third harmonic excitation system, designed and developed on the basis of FEM analysis, proved to be economical due to its simplicity, and has the added advantage of reduced harmonics in the output phase voltage.
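The THD figure central to the abstract above is defined as the RMS of the harmonic components relative to the fundamental, THD = sqrt(sum of Vn^2 for n >= 2) / V1. A minimal computation, with invented amplitudes rather than the thesis's measured data:

```python
import math

def thd(amplitudes):
    """Total harmonic distortion from a list of amplitudes
    [V1, V2, V3, ...] for the fundamental and its harmonics,
    all expressed consistently (e.g. all RMS)."""
    v1, harmonics = amplitudes[0], amplitudes[1:]
    return math.sqrt(sum(v * v for v in harmonics)) / v1

# Illustrative phase voltage: 230 V fundamental, 11.5 V third harmonic
# (a triplen component), 4.6 V fifth harmonic.
d = thd([230.0, 0.0, 11.5, 0.0, 4.6])
```

Suppressing the triplen term, as the third harmonic excitation winding does under load, directly shrinks the largest contribution under the square root.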

Relevance:

10.00%

Publisher:

Abstract:

This thesis presents modelling methods for the real-time-capable simulation of important pollutant components in the exhaust stream of combustion engines. A holistic development workflow is described, with each of its steps detailed, from experiment design through the construction of a suitable model structure to model validation. These methods are applied to reproduce the dynamic emission profiles of the relevant pollutants of a gasoline engine. Together with a complete engine simulation, the derived emission models serve to optimize operating strategies in hybrid vehicles. The first part of the thesis presents a systematic procedure for planning and building complex, dynamic, real-time-capable model structures. It begins with a physically motivated structuring that partitions a process model into individual, manageable elements. These submodels are then extended step by step, each starting from the simplest possible nominal model core, and ultimately allow a robust reproduction of even complex dynamic behaviour with sufficient accuracy. Since some submodels are realized as neural networks, a dedicated method for so-called discrete evident interpolation (DEI) was developed which, applied during training, can ensure plausible (evident) behaviour of experimental models even with a minimal number of measurements. To calibrate the individual submodels, statistical experiment designs were created, generated both with classical design-of-experiments (DoE) methods and by means of an iterative experiment design (iDoE).

In the second part of the thesis, after the most important influencing parameters have been determined, the model structures for reproducing the dynamic emission profiles of selected exhaust components are presented: unburned hydrocarbons (HC), nitrogen monoxide (NO) and carbon monoxide (CO). The simulation models presented reproduce, in real time, the pollutant concentrations of a combustion engine during cold start and the subsequent warm-up phase. Beyond the obligatory reproduction of stationary behaviour, the dynamic behaviour of the engine in transient operating phases is also represented with sufficient accuracy. Consistent application of the methodology presented in the first part of the thesis permits high simulation quality and robustness here as well, despite the large number of process variables. The pollutant emission models, embedded in the complete dynamic model of a combustion engine, are used to derive an optimal operating strategy for a hybrid vehicle. Model-based methods are particularly well suited to such optimization tasks, and the use of dynamic, cold-start-capable models, with the realism they bring, makes a high output quality achievable.

Relevance:

10.00%

Publisher:

Abstract:

The surge in the urban population evident in most developing countries is a worldwide phenomenon, often the result of drought, conflicts, poverty and the lack of education opportunities. In parallel with the growth of the cities comes a growing need for food, which leads to the burgeoning expansion of urban and peri-urban agriculture (UPA). In this context, urban agriculture (UA) contributes significantly to supplying local markets with both vegetable and animal produce. As an income-generating activity, UA also contributes to the livelihoods of poor urban dwellers. In order to evaluate the nutrient status of urban soils in relation to garden management, this study assessed nutrient fluxes (inputs and outputs) in gardens on urban Gerif soils on the banks of the River Nile in Khartoum, the capital city of Sudan. To achieve this objective, a preliminary baseline survey was carried out to describe the structure of the existing garden systems. In cooperation with the author of another PhD thesis (Ms. Ishtiag Abdalla), alternative uses of cow dung in brick making kilns in urban Khartoum were assessed, covering the socio-economic criteria of the brick kiln owners or agents, the economic and plant-nutritional value of animal dung, and the gaseous emissions related to brick making activities. A total of 40 household heads were interviewed using a semi-structured questionnaire to collect information on the demographic, socio-economic and migratory characteristics of the household members, the gardening systems used and the problems encountered in urban gardening. Based on the results of this survey, gardens were divided into three groups: mixed vegetable-fodder gardens, mixed vegetable-subsistence livestock gardens and pure vegetable gardens. The results revealed that UA is the exclusive domain of men, 80% of them non-native to Khartoum. The harvested produce in all gardens was market oriented and represented the main source of income for 83% of the gardeners.
Fast growing leafy vegetables such as Jew’s mallow (Corchorous olitorius L.), purslane (Portulaca oleracea L.) and rocket (Eruca sativa Mill.) were the dominant cultivated species. Most of the gardens (95%) were continuously cultivated throughout the year without any fallow period, unless they were flooded. Gardeners were not generally aware of the importance of crop diversity, which may help them overcome the strongly fluctuating market prices for their produce and thereby strengthen the contributions of UA to the overall productivity of the city. To measure nutrient fluxes, four gardens were selected and their nutrients inputs and outputs flows were monitored. In each garden, all plots were monitored for quantification of nutrient inputs and outputs. To determine soil chemical fertility parameters in each of the studied gardens, soil samples were taken from three selected plots at the beginning of the study in October 2007 (gardens L1, L2 and H1) and in April 2008 (garden H2) and at the end of the study period in March 2010. Additional soil sampling occurred in May 2009 to assess changes in the soil nutrient status after the River Nile flood of 2008 had receded. Samples of rain and irrigation water (river and well-water) were analyzed for nitrogen (N), phosphorus (P), potassium (K) and carbon (C) content to determine their nutrient inputs. Catchment traps were installed to quantify the sediment yield from the River Nile flood. To quantify the nutrient inputs of sediments, samples were analyzed for N, P, K and organic carbon (Corg) content, cation exchange capacity (CEC) and the particle size distribution. The total nutrient inputs were calculated by multiplying the sediment nutrient content by total sediment deposits on individual gardens. Nutrient output in the form of harvested yield was quantified at harvest of each crop. Plant samples from each field were dried, and analyzed for their N, P, K and Corg content. 
Cumulative leaching losses of mineral N and P were estimated in a single plot in garden L1 from December 1st 2008 to July 1st 2009 using 12 ion exchange resin cartridges. Nutrients were extracted and analyzed for nitrate (NO3--N), ammonium (NH4+-N) and phosphate (PO43--P). Changes in soil nutrient balance were assessed as inputs minus outputs. The results showed that across gardens, soil N and P concentrations increased from 2007 to 2009, while particle size distribution remained unchanged. Sediment loads and their respective contents of N, P and Corg decreased significantly (P < 0.05) from the gardens of the downstream lowlands (L1 and L2) to the gardens of the upstream highlands (H1 and H2). No significant difference was found in K deposits. None of the gardens received organic fertilizers, and the only mineral fertilizer applied was urea (46-0-0); urea applications equaled 29, 30, 54, and 67% of total N inputs to gardens L1, L2, H1, and H2, respectively. Sediment deposits of the River Nile floods contributed on average 67, 94, 6 and 42% of the total N, P, K and C inputs in lowland gardens and 33, 86, 4 and 37% of total N, P, K and C inputs in highland gardens. Irrigation water and rainfall contributed substantially to K inputs, representing 96, 92, 94 and 96% of total K influxes in gardens L1, L2, H1 and H2, respectively. Following the same order, total annual DM yields in the gardens were 26, 18, 16 and 1.8 t ha-1. Annual leaching losses were estimated to be 0.02 kg NH4+-N ha-1 (SE = 0.004), 0.03 kg NO3--N ha-1 (SE = 0.002) and 0.005 kg PO43--P ha-1 (SE = 0.0007). Differences between nutrient inputs and outputs indicated negative nutrient balances for P and K and positive balances of N and C for all gardens. The negative balances in P and K call for the adoption of new agricultural techniques such as regular manure additions or mulching, which may enhance the soil organic matter status.
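The balance computation described above (per-nutrient inputs minus outputs) can be sketched as follows. The figures below are illustrative placeholders, not the study's measurements; the function only demonstrates the bookkeeping.

```python
# Sketch of a partial nutrient balance: for each nutrient,
# balance = sum of inputs (fertilizer, sediment, irrigation)
# minus sum of outputs (harvested biomass, leaching).
# All values in kg ha-1 yr-1 and purely illustrative.
def nutrient_balance(inputs_kg_ha, outputs_kg_ha):
    """Return per-nutrient balance: total inputs minus total outputs."""
    nutrients = set(inputs_kg_ha) | set(outputs_kg_ha)
    return {n: sum(inputs_kg_ha.get(n, {}).values())
               - sum(outputs_kg_ha.get(n, {}).values())
            for n in nutrients}

inputs = {
    "N": {"urea": 60.0, "sediment": 120.0, "irrigation": 5.0},
    "P": {"sediment": 15.0, "irrigation": 1.0},
    "K": {"sediment": 4.0, "irrigation": 90.0},
}
outputs = {
    "N": {"harvest": 150.0, "leaching": 0.05},
    "P": {"harvest": 25.0},
    "K": {"harvest": 110.0},
}

balance = nutrient_balance(inputs, outputs)
# Negative values flag nutrient mining (here P and K), positive values
# accumulation (here N), mirroring the pattern reported above.
```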
A quantification of fluxes not measured in our study, such as N2-fixation, dry deposition and gaseous emissions of C and N, would be necessary to comprehensively assess the sustainability of these intensive gardening systems. The second part of the survey dealt with the brick making kilns. A total of 50 brick kiln owners or their agents were interviewed from July to August 2009 using a semi-structured questionnaire. The data collected included general information such as age, family size, education, land ownership, number of kilns managed and/or owned, number of months that kilns were in operation, quantity of inputs (cow dung and fuel wood) used, and prices of inputs and products across the production season. Information on the share value of the land on which the kilns were built, the annual income of urban farmers and the annual returns from dung for the animal raisers was also collected. Using descriptive statistics, budget calculations and the Gini coefficient, the results indicated that renting the land to brick making kilns yields a 5-fold higher return than renting it for agriculture. The Gini coefficient showed that the kiln owners had a more equal income distribution than the farmers. To estimate emissions of greenhouse gases (GHGs) and losses of N, P, K, Corg and DM from cow dung when used in brick making, samples of cow dung (loose and compacted) were collected from different kilns and analyzed for their N, P, K and Corg content. The modified procedure of the Intergovernmental Panel on Climate Change (IPCC, 1994) was used to estimate the gaseous emissions of cow dung and fuel wood. The amount of deforested wood was estimated according to the default values for wood density given by Dixon et al. (1991) and the expansion ratio for branches and small trees given by Brown et al. (1989). The data showed that the monetary value of added N and P from cow dung was lower than for mineral fertilizers.
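The Gini coefficient used above to compare income distributions can be computed directly from a list of incomes. A minimal sketch using the standard sorted-rank formula, with made-up incomes (the survey's actual figures are not reproduced here):

```python
def gini(incomes):
    """Gini coefficient via the sorted-rank formula:
    G = 2 * sum_i(i * x_i) / (n * sum(x)) - (n + 1) / n,
    with x sorted ascending and ranks i starting at 1.
    Returns 0 for perfect equality, approaching 1 for
    maximal inequality."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

# Illustrative (not survey) figures: a more equal income
# distribution among kiln owners yields a lower Gini value
# than the more dispersed farmer incomes.
kiln_owner_incomes = [900, 1000, 1100, 1000, 950]
farmer_incomes = [200, 300, 2500, 400, 350]
g_kiln = gini(kiln_owner_incomes)
g_farm = gini(farmer_incomes)
```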
Annual consumption of compacted dung (381 t DM) as biomass fuel by far exceeded the consumption of fuel wood (36 t DM). Gaseous emissions from cow dung and fuel wood were dominated by CO2, CO and CH4. Considering that Gerif land in urban Khartoum supports a multifunctional land use system, efficient use of natural resources (forest, dung, land and water) will enhance the sustainability of UA and brick making activities. Adoption of new kilns with higher energy efficiency will reduce the amount of biomass fuels (cow dung and wood) used, the amount of GHGs emitted, and the threat to the few remaining forests.

Relevância:

10.00%

Resumo:

The ongoing growth of the World Wide Web, catalyzed by the increasing possibility of ubiquitous access via a variety of devices, continues to strengthen its role as our prevalent information and communication medium. However, although tools like search engines facilitate retrieval, the task of finally making sense of Web content is still often left to human interpretation. The vision of supporting both humans and machines in such knowledge-based activities led to the development of different systems which make it possible to structure Web resources with metadata annotations. Interestingly, two major approaches which gained a considerable amount of attention address the problem from nearly opposite directions: On the one hand, the idea of the Semantic Web suggests formalizing the knowledge within a particular domain by means of the "top-down" approach of defining ontologies. On the other hand, Social Annotation Systems, as part of the so-called Web 2.0 movement, implement a "bottom-up" style of categorization using arbitrary keywords. Experience as well as research into the characteristics of both systems has shown that their strengths and weaknesses seem to be inverse: while Social Annotation suffers from problems like, e. g., ambiguity or lack of precision, ontologies were especially designed to eliminate those. Ontologies, in turn, suffer from a knowledge acquisition bottleneck, which is successfully overcome by the large user populations of Social Annotation Systems. Instead of regarding them as competing paradigms, the obvious potential synergies from a combination of both motivated approaches to "bridge the gap" between them. These were fostered by the evidence of emergent semantics, i. e., the self-organized evolution of implicit conceptual structures, within Social Annotation data.
While several techniques to exploit the emergent patterns have been proposed, a systematic analysis - especially regarding paradigms from the field of ontology learning - is still largely missing. This also includes a deeper understanding of the circumstances which affect the evolution processes. This work aims to address this gap by providing an in-depth study of methods and influencing factors to capture emergent semantics from Social Annotation Systems. We focus on the acquisition of lexical semantics from the underlying networks of keywords, users and resources. Structured along different ontology learning tasks, we use a methodology of semantic grounding to characterize and evaluate the semantic relations captured by different methods. In all cases, our studies are based on datasets from several Social Annotation Systems. Specifically, we first analyze semantic relatedness among keywords and identify measures which detect different notions of relatedness. These constitute the input of concept learning algorithms, which then focus on the discovery of synonymous and ambiguous keywords; here, we assess the usefulness of various clustering techniques. As a prerequisite to inducing hierarchical relationships, our next step is to study measures which quantify the level of generality of a particular keyword. We find that comparatively simple measures can approximate the generality information encoded in reference taxonomies. These insights are used to inform the final task, namely the creation of concept hierarchies, for which generality-based algorithms exhibit advantages compared to clustering approaches. To complement the identification of suitable methods to capture semantic structures, we analyze as a next step several factors which influence their emergence. Empirical evidence is provided that the amount of available data plays a crucial role in determining keyword meanings.
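One common family of relatedness measures over folksonomy data, as discussed above, compares keywords by the contexts they appear in. A minimal sketch follows, using hypothetical (user, tag, resource) assignments and cosine similarity over tag-resource co-occurrence vectors; this is one standard measure of this kind, not necessarily the exact measures evaluated in this work.

```python
import math
from collections import defaultdict

# Hypothetical tag assignments: (user, tag, resource) triples, the
# tripartite structure underlying a Social Annotation System.
assignments = [
    ("u1", "web", "r1"), ("u1", "internet", "r1"),
    ("u2", "web", "r2"), ("u2", "internet", "r2"),
    ("u3", "web", "r3"), ("u1", "internet", "r3"),
    ("u3", "cooking", "r4"),
]

def tag_vectors(triples):
    """Build tag -> {resource: count} co-occurrence vectors."""
    vecs = defaultdict(lambda: defaultdict(int))
    for _user, tag, resource in triples:
        vecs[tag][resource] += 1
    return vecs

def cosine(v1, v2):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(v1[k] * v2.get(k, 0) for k in v1)
    norm = (math.sqrt(sum(x * x for x in v1.values()))
            * math.sqrt(sum(x * x for x in v2.values())))
    return dot / norm if norm else 0.0

vecs = tag_vectors(assignments)
sim_syn = cosine(vecs["web"], vecs["internet"])  # tags used in the same contexts
sim_far = cosine(vecs["web"], vecs["cooking"])   # no shared resources
```

Tags annotating the same resources end up with similar vectors, so near-synonyms like "web" and "internet" score high while unrelated tags score near zero; such scores can then feed the clustering steps described above.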
From a different perspective, we examine pragmatic aspects by considering different annotation patterns among users. Based on a broad distinction between "categorizers" and "describers", we find that the latter produce more accurate results. This suggests a causal link between pragmatic and semantic aspects of keyword annotation. As a special kind of usage pattern, we then look at system abuse and spam. While observing a mixed picture, we suggest that a case-by-case decision should be taken instead of disregarding spammers as a matter of principle. Finally, we discuss a set of applications which operationalize the results of our studies for enhancing both Social Annotation and semantic systems. These comprise, on the one hand, tools which foster the emergence of semantics and, on the other hand, applications which exploit the socially induced relations to improve, e. g., searching, browsing, or user profiling facilities. In summary, the contributions of this work highlight viable methods and crucial aspects for designing enhanced knowledge-based services of a Social Semantic Web.

Relevância:

10.00%

Resumo:

The Lean Aircraft Initiative began in the summer of 1992 as a “quick look” into the feasibility of applying manufacturing principles that had been pioneered in the automobile industry, most notably the Toyota Production System, to the U.S. defense aircraft industry. Once it was established that “lean principles” (the term coined to describe the new paradigm in automobile manufacturing) were indeed applicable to aircraft manufacturing as well, the Initiative was broadened to include other segments of the defense aerospace industry. These consisted of electronics/avionics, engines, electro-mechanical systems, missiles, and space systems manufacturers. In early 1993, a formal framework was established in which 21 defense firms and the Air Force formed a consortium to support and participate in the Initiative at M.I.T.

Relevância:

10.00%

Resumo:

The goal of the Intelligent Citizen Attention Services (SAC) is to respond to citizens' information needs regarding the services and activities of the municipality and, by extension, all services of citizen interest. Since iSAC went into operation, the queries made in the system and citizens' degree of satisfaction with the service have been analyzed periodically. Although the evaluations are generally satisfactory, it has been observed that the system currently has a gap: there is a wide range of questions that, for the moment, iSAC is unable to resolve, and possibly neither can 010, the call center of the citizen attention service. Some of the searches go far beyond the municipal scope, and it is the experience of the citizens themselves that can offer a better answer. For this reason the need arose to create wikiSAC, a tool whose main objectives are: to allow page content to be created, modified and deleted interactively, easily and quickly through a web browser; to control offensive and malicious content; to keep a history of changes; to encourage citizen participation and make this a place where citizens ask questions, make suggestions and give opinions on topics related to their municipality; and to make citizens feel more integrated with the workings of the administration by collaborating in the tasks of citizen information and attention.

Relevância:

10.00%

Resumo:

In physics and engineering, friction is defined as the resistance opposing the rotation or sliding of one body over another, or as the force that appears at the contact surface of two bodies when one is made to slide over the other. Friction has long been a major physical problem because it generates imperfections, especially microscopic ones, between the surfaces in contact. This is where tribology comes in: tribology is the science that studies the friction, wear and lubrication of surfaces in contact. Sliding between solid surfaces is generally characterized by a high coefficient of friction and considerable wear due to the specific properties of the surfaces. Lubrication consists of introducing an intermediate layer of a foreign material between the moving surfaces. In this project we sought to apply control techniques and methods that can improve and refine measurement and control systems at the industrial and/or private level. In our case this meant the study of lubricating oils used in the internal combustion engines of various vehicles such as motorcycles, automobiles, trucks and ships. We introduced an automation improvement to the particle collector by means of a pneumatic circuit, introduced a technique for capturing particles on membrane filters, and studied the samples and their ferrograms and membranes in order to detect engine anomalies. The objective of the work is the study of the wear generated in internal combustion engines, mostly Diesel engines of trucks, automobiles and ships. This comprises the capture of particles in ferrographies, their observation and analysis under a microscope, their classification and comparison, and the detection of engine anomalies.
The work also goes deeper into analysis techniques, engine lubrication and maintenance, and the new design and validation of an automated particle collector.