991 results for Computer input-output equipment.
Abstract:
We report on an elementary course in ordinary differential equations (ODEs) for students in the engineering sciences. The course is also intended to become a self-study package for ODEs and is based on several interactive computer lessons using REDUCE and MATHEMATICA. The aim of the course is not to do Computer Algebra (CA) by example or to use it for doing classroom examples. The aim is to teach and to learn mathematics by using CA systems.
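A lesson of this kind can be approximated with any modern CA system. The following minimal sketch uses Python with SymPy (a stand-in for the REDUCE and MATHEMATICA lessons, which are not reproduced here) to show a CA system solving and then verifying a first-order linear ODE.

    # Minimal CA-style ODE lesson sketch (SymPy stands in for REDUCE/MATHEMATICA).
    import sympy as sp

    t = sp.symbols('t')
    y = sp.Function('y')

    # A first-order linear ODE: y'(t) + 2*y(t) = exp(-t)
    ode = sp.Eq(y(t).diff(t) + 2*y(t), sp.exp(-t))

    # Let the CA system find the general solution ...
    general = sp.dsolve(ode, y(t))
    print(general)                       # y(t) = C1*exp(-2*t) + exp(-t)

    # ... and verify it by substituting back into the equation.
    print(sp.checkodesol(ode, general))  # (True, 0)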
Abstract:
The 21st century has brought new challenges for forest management at a time when globalization in world trade is increasing and global climate change is becoming increasingly apparent. In addition to providing various goods and services such as food, feed, timber and biofuels, forest ecosystems are a large store of terrestrial carbon and account for a major part of the carbon exchange between the atmosphere and the land surface. Depending on the stage of the ecosystems and/or management regimes, forests can be either sinks or sources of carbon. At the global scale, rapid economic development and a growing world population have raised much concern over the use of natural resources, especially forest resources. The challenging question is how the global demands for forest commodities can be satisfied in an increasingly globalised economy, and where they could potentially be produced. For this purpose, wood demand estimates need to be integrated into a framework that can adequately handle the competition for land between major land-use options such as residential or agricultural land. This thesis is organised around the requirements of integrating the simulation of forest changes driven by wood extraction into an existing framework for global land-use modelling called LandSHIFT. Accordingly, the following key points for research have been identified: (1) a review of existing global-scale economic forest sector models, (2) simulation of global wood production under selected scenarios, (3) simulation of global vegetation carbon yields, and (4) the implementation of a land-use allocation procedure to simulate the impact of wood extraction on forest land cover. Modelling the spatial dynamics of forests on the global scale requires two important inputs: (1) simulated long-term wood demand data to determine future roundwood harvests in each country, and (2) the changes in the spatial distribution of woody biomass stocks to determine how much of the resource is available to satisfy the simulated wood demands. First, three global timber market models are reviewed and compared in order to select a suitable economic model to generate wood demand scenario data for the forest sector in LandSHIFT. The comparison indicates that the ‘Global Forest Products Model’ (GFPM) is most suitable for obtaining projections of future roundwood harvests for further study with the LandSHIFT forest sector. Accordingly, the GFPM is adapted and applied to simulate wood demands for the global forestry sector, conditional on selected scenarios from the Millennium Ecosystem Assessment and the Global Environment Outlook, until 2050. Second, the Lund-Potsdam-Jena (LPJ) dynamic global vegetation model is utilized to simulate the change in potential vegetation carbon stocks for the forested locations in LandSHIFT. The LPJ data are used in combination with spatially explicit forest inventory data on aboveground biomass to allocate the demands for raw forest products and identify locations of deforestation. Using the previous results as input, a methodology to simulate the spatial dynamics of forests driven by wood extraction is developed within the LandSHIFT framework. The land-use allocation procedure specified in the module translates the country-level demands for forest products into woody biomass requirements for forest areas and allocates these on a five arc-minute grid.
In its first version, the model assumes present-day conditions throughout the entire study period and does not explicitly address forest age structure. Although the module is at a very preliminary stage of development, it already captures the effects of important drivers of land-use change such as cropland and urban expansion. As a first plausibility test, the module performance is tested under three forest management scenarios; the module responds to changing inputs in an expected and consistent manner. The entire methodology is applied in an exemplary scenario analysis for India. Several future research priorities need to be addressed, particularly the incorporation of plantation establishment, the issue of age-structure dynamics, and the implementation of a new technology-change factor in the GFPM that allows raw wood products (especially fuelwood) to be substituted by other non-wood products.
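The allocation step described above, which translates country-level roundwood demands into per-cell harvests constrained by available woody biomass, can be illustrated with a minimal sketch. The greedy biomass ranking below is an assumption made for illustration; it is not the actual LandSHIFT procedure.

    # Hypothetical sketch of a demand-to-grid allocation step, loosely modelled
    # on the description above; NOT the actual LandSHIFT algorithm.
    import numpy as np

    def allocate_harvest(demand_tonnes, biomass_grid, suitability):
        """Distribute a country's wood demand over grid cells.

        demand_tonnes : total roundwood demand for the country (t).
        biomass_grid  : available woody biomass per cell (t), 2-D array.
        suitability   : boolean mask of cells open to harvesting.
        Returns an array of harvested tonnes per cell.
        """
        harvest = np.zeros_like(biomass_grid)
        # Rank harvestable cells by available biomass, richest first.
        cells = np.argwhere(suitability)
        order = np.argsort(-biomass_grid[tuple(cells.T)])
        remaining = demand_tonnes
        for idx in cells[order]:
            if remaining <= 0:
                break
            take = min(biomass_grid[tuple(idx)], remaining)
            harvest[tuple(idx)] = take
            remaining -= take
        return harvest

    demand = 120.0                                   # t of roundwood
    biomass = np.array([[50.0, 10.0], [80.0, 5.0]])  # t per cell
    mask = np.array([[True, True], [True, False]])
    print(allocate_harvest(demand, biomass, mask))   # [[40. 0.] [80. 0.]]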
Abstract:
This thesis was written during my time as a research assistant in the Computer Architecture group (Fachgebiet Technische Informatik) at the University of Kassel. It presents the design and implementation of a cluster-based distributed scene graph. Rather than developing a new scene graph from scratch, the existing scene graph OpenSceneGraph was used as the basis for the distributed scene graph, and cluster support was integrated into it. In extending OpenSceneGraph, particular care was taken to leave the existing scene graph as unchanged as possible, and the use and integration of external cluster-based software packages was avoided wherever possible. To distribute OpenSceneGraph, a dedicated socket-based communication layer was developed and integrated into OpenSceneGraph. This communication layer was used to provide OpenSceneGraph with sort-first and sort-last based visualization. Extending OpenSceneGraph with cluster support made it possible to drive arbitrary projection systems, such as a CAVE. To drive a CAVE, various input devices as well as tracking were integrated into OpenSceneGraph via VRPN. Because the devices are connected through VRPN, these input devices can also be used in the other cluster operating modes, such as a tiled display. The distribution of data across the cluster was kept separate from the core of OpenSceneGraph, so any OpenSceneGraph-based application can run on a cluster at any time and without elaborate modifications. The application developer is therefore not constrained and does not have to distinguish between cluster-based and standalone applications.
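The socket-based communication layer described above distributes per-frame state from a master node to the render nodes. The following minimal Python sketch illustrates the idea with a hypothetical length-prefixed message format; the actual extension is part of OpenSceneGraph and written in C++.

    # Minimal sketch of a master node sending per-frame view state to render
    # clients over TCP. Hypothetical protocol; not the actual OpenSceneGraph
    # cluster layer described above.
    import json
    import struct

    def send_frame(conn, view_matrix, frame_no):
        """Length-prefixed JSON message with the camera state for one frame."""
        payload = json.dumps({"frame": frame_no, "view": view_matrix}).encode()
        conn.sendall(struct.pack("!I", len(payload)) + payload)

    def recv_exact(conn, n):
        """Read exactly n bytes from the socket."""
        data = b""
        while len(data) < n:
            chunk = conn.recv(n - len(data))
            if not chunk:
                raise ConnectionError("peer closed")
            data += chunk
        return data

    def recv_frame(conn):
        """Blocking receive of one length-prefixed frame message."""
        (length,) = struct.unpack("!I", recv_exact(conn, 4))
        return json.loads(recv_exact(conn, length))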
Abstract:
The ongoing growth of the World Wide Web, catalyzed by the increasing possibility of ubiquitous access via a variety of devices, continues to strengthen its role as our prevalent information and communication medium. However, although tools like search engines facilitate retrieval, the task of finally making sense of Web content is still often left to human interpretation. The vision of supporting both humans and machines in such knowledge-based activities led to the development of different systems which allow Web resources to be structured by metadata annotations. Interestingly, two major approaches which gained a considerable amount of attention address the problem from nearly opposite directions: on the one hand, the idea of the Semantic Web suggests formalizing the knowledge within a particular domain by means of the "top-down" approach of defining ontologies; on the other hand, Social Annotation Systems, as part of the so-called Web 2.0 movement, implement a "bottom-up" style of categorization using arbitrary keywords. Experience as well as research into the characteristics of both kinds of systems has shown that their strengths and weaknesses are largely inverse: while Social Annotation suffers from problems such as ambiguity or lack of precision, ontologies were especially designed to eliminate those; conversely, ontologies suffer from a knowledge acquisition bottleneck, which is successfully overcome by the large user populations of Social Annotation Systems. Instead of being regarded as competing paradigms, the obvious potential synergies of a combination of both motivated approaches to "bridge the gap" between them. These were fostered by the evidence of emergent semantics, i.e., the self-organized evolution of implicit conceptual structures, within Social Annotation data. While several techniques to exploit the emergent patterns have been proposed, a systematic analysis, especially regarding paradigms from the field of ontology learning, is still largely missing. This also includes a deeper understanding of the circumstances which affect the evolution processes. This work aims to address this gap by providing an in-depth study of methods and influencing factors for capturing emergent semantics from Social Annotation Systems. We focus hereby on the acquisition of lexical semantics from the underlying networks of keywords, users and resources. Structured along different ontology learning tasks, we use a methodology of semantic grounding to characterize and evaluate the semantic relations captured by different methods. In all cases, our studies are based on datasets from several Social Annotation Systems. Specifically, we first analyze semantic relatedness among keywords, and identify measures which detect different notions of relatedness. These constitute the input of concept learning algorithms, which then focus on the discovery of synonymous and ambiguous keywords; here, we assess the usefulness of various clustering techniques. As a prerequisite to inducing hierarchical relationships, our next step is to study measures which quantify the level of generality of a particular keyword. We find that comparatively simple measures can approximate the generality information encoded in reference taxonomies. These insights inform the final task, namely the creation of concept hierarchies, for which generality-based algorithms exhibit advantages over clustering approaches.
To complement the identification of suitable methods for capturing semantic structures, we next analyze several factors which influence their emergence. Empirical evidence is provided that the amount of available data plays a crucial role in determining keyword meanings. From a different perspective, we examine pragmatic aspects by considering different annotation patterns among users. Based on a broad distinction between "categorizers" and "describers", we find that the latter produce more accurate results. This suggests a causal link between pragmatic and semantic aspects of keyword annotation. As a special kind of usage pattern, we then look at system abuse and spam. While we observe a mixed picture, we suggest that a case-by-case decision should be made instead of disregarding spammers as a matter of principle. Finally, we discuss a set of applications which operationalize the results of our studies to enhance both Social Annotation and semantic systems. These comprise, on the one hand, tools which foster the emergence of semantics and, on the other hand, applications which exploit the socially induced relations to improve, e.g., searching, browsing, or user profiling facilities. In summary, the contributions of this work highlight viable methods and crucial aspects for designing enhanced knowledge-based services for a Social Semantic Web.
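The kinds of measures studied here, such as resource-overlap relatedness between keywords and simple usage-based generality scores, can be sketched in a few lines. The tiny folksonomy below is fabricated, and Jaccard overlap and resource counts are generic example measures, not the specific ones evaluated in this work.

    # Toy sketch: co-occurrence relatedness and a simple generality measure
    # on a tiny fabricated folksonomy (keyword -> set of annotated resources).
    from itertools import combinations

    tagging = {
        "python":      {"r1", "r2", "r3", "r4"},
        "programming": {"r1", "r2", "r3", "r4", "r5", "r6"},
        "snake":       {"r7"},
        "java":        {"r5", "r6"},
    }

    def jaccard(a, b):
        """Resource-overlap relatedness between two keywords."""
        ra, rb = tagging[a], tagging[b]
        return len(ra & rb) / len(ra | rb)

    # One simple notion of generality: keywords used on more resources are
    # assumed to be more general ("programming" > "python").
    generality = {t: len(rs) for t, rs in tagging.items()}

    for a, b in combinations(tagging, 2):
        print(f"{a:12s} {b:12s} relatedness={jaccard(a, b):.2f}")
    print("generality ranking:",
          sorted(generality, key=generality.get, reverse=True))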
Abstract:
An improved understanding of soil organic carbon (Corg) dynamics in interaction with the mechanisms of soil structure formation is important in terms of sustainable agriculture and the reduction of the environmental costs of agricultural ecosystems. However, information on the physical and chemical processes influencing the formation and stabilization of water-stable aggregates in association with Corg sequestration is scarce. Long-term soil experiments are important for evaluating open questions about management-induced effects on soil Corg dynamics in interaction with soil structure formation. The objectives of the present thesis were: (i) to determine the long-term impacts of different tillage treatments on the interaction between macro-aggregation (>250 µm) and light fraction (LF) distribution, and on C sequestration, in plots differing in soil texture and climatic conditions; (ii) to determine the impact of different tillage treatments on temporal changes in the size distribution of water-stable aggregates and on macro-aggregate turnover; and (iii) to evaluate macro-aggregate rebuilding in soils with varying initial Corg contents, organic matter (OM) amendments and clay contents in a short-term incubation experiment. Soil samples were taken at 0-5 cm, 5-25 cm and 25-40 cm depth from up to four commercially used fields located in arable loess regions of eastern and southern Germany after 18-25 years of different tillage treatments with almost identical experimental setups per site. At each site, one large field with spatially homogeneous soil properties was divided into three plots, and one of the following three tillage treatments was carried out in each plot: (i) conventional tillage (CT) with annual mouldboard ploughing to 25-30 cm depth, (ii) mulch tillage (MT) with a cultivator or disc harrow to 10-15 cm depth, and (iii) no tillage (NT) with direct drilling. The crop rotation at each site consisted of sugar beet (Beta vulgaris L.) - winter wheat (Triticum aestivum L.) - winter wheat. Crop residues were left on the field and crop management was carried out following the regional standards of agricultural practice. To investigate the above-mentioned research objectives, three experiments were conducted: experiment (i) was performed with soils sampled from four sites in April 2010 (wheat stand); experiment (ii) was conducted with soils sampled from three sites in April 2010, September 2011 (after harvest or sugar beet stand), November 2011 (after tillage) and April 2012 (bare soil or wheat stand); and an incubation study (experiment (iii)) was performed with soil sampled from one site in April 2010. Based on the aforementioned research objectives and experiments, the main findings were: (i) Consistent results were found across the four long-term tillage fields, which vary in texture and climatic conditions. Correlation analysis of the yields of macro-aggregates against the yields of free LF (≤ 1.8 g cm−3) and occluded LF, respectively, suggested that the effective litter translocation into greater soil depths and the higher litter input under CT and MT compensated in the long term for the greater physical impact of the tillage equipment compared with NT. The Corg stocks (kg Corg m−2) in 522 kg soil, based on the equivalent soil mass approach (CT: 0–40 cm, MT: 0–38 cm, NT: 0–36 cm), increased in the order CT (5.2) = NT (5.2) < MT (5.7).
The significantly (p ≤ 0.05) highest Corg stocks under MT were probably a result of high crop yields in combination with reduced physical tillage impact and effective litter incorporation, resulting in a Corg sequestration rate of 31 g C m−2 yr−1. (ii) Significantly higher yields of macro-aggregates (g kg−1 soil) under NT (732-777) and MT (680-726) than under CT (542-631) were generally restricted to the 0-5 cm sampling depth for all sampling dates. Temporal changes in aggregate size distribution were only small, and no tillage-induced net effect was detectable. Thus, we assume that the physical impact of the tillage equipment was only small, or that the impact was compensated by greater soil mixing and effective litter translocation into greater soil depths under CT, which probably resulted in high re-aggregation. (iii) The short-term incubation study showed that macro-aggregate yields (g kg−1 soil) were higher after 28 days in soils receiving OM (121.4-363.0) than in the control soils (22.0-52.0), accompanied by higher contents of microbial biomass carbon and ergosterol. The highest soil respiration rates after OM amendment occurred within the first three days of incubation, indicating that macro-aggregate formation is a fast process. Most of the rebuilt macro-aggregates were formed within the first seven days of incubation (42-75%). Nevertheless, formation was ongoing throughout the entire 28 days of incubation, as indicated by higher soil respiration rates at the end of the incubation period in OM-amended soils than in the control soils. At the same time, decreasing carbon contents within macro-aggregates over time indicated that newly occluded OM within the rebuilt macro-aggregates served as a Corg source for the microbial biomass. The different clay contents played only a minor role in macro-aggregate formation under the particular conditions of the incubation study. Overall, no net changes in macro-aggregation were identified in the short term. Furthermore, no indications of an effective long-term Corg sequestration under NT in comparison with CT were found. The interaction of soil disturbance, litter distribution and fast re-aggregation suggested that a distinct steady state in terms of soil aggregation was established for each tillage treatment. However, continuous application of MT, with its combination of reduced physical tillage impact and effective litter incorporation, may offer some potential for improving soil structure and may therefore prevent incorporated LF from rapid decomposition, resulting in higher C sequestration in the long term.
Abstract:
The subject of this thesis is the analysis of various formalisms for computing binary word relations. The basis of all considerations presented here is the model of restarting automata, introduced by Jancar et al. in 1995. On the one hand, the concept of input/output and proper relations, already known for restarting automata, is investigated further and extended to systems of two restarting automata working in parallel and communicating with each other (PC systems). On the other hand, a variant of restarting automata is introduced that is oriented towards classical automaton models for computing relations. With the help of these mechanisms it can be shown that some classes defined by input/output and proper relations of restarting automata coincide with the traditional relation classes of rational relations and pushdown relations. Furthermore, it turns out that the concept of parallel communicating automata is extremely powerful, since already the class of proper relations of monotone PC systems encompasses all computable relations. The main part of the thesis deals with so-called restarting transducers, which are restarting automata extended by an output function. This model in particular, with its various extensions and restrictions, proves well suited to establishing a comprehensive hierarchy of relation classes. Chief among these are the various types of monotone restarting transducers, with whose help many interesting new and known relation classes within the length-bounded pushdown relations are characterized. Finally, in contrast to the preceding models, the concept of transducing by observing ("Transducing by Observing"), which is not based on restarting automata, is introduced for computing relations. This mechanism, not unlike the restarting transducers, is used in the broadest sense to take a different view of the relations defined by restarting transducers, and to obtain an upper bound on the computational power of restarting transducers.
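A restarting automaton repeatedly scans its tape, applies one length-reducing local rewrite, and restarts at the left end; a restarting transducer additionally produces output. The following toy Python sketch (a fabricated rewriting system for intuition only, not one of the formal models studied here) illustrates the rewrite-and-restart cycle.

    # Toy illustration of the rewrite-and-restart cycle of a restarting
    # automaton: in each cycle one length-reducing local rewrite is applied,
    # then the head returns to the left end. Fabricated rules, for intuition
    # only; not one of the formal models analyzed in the thesis.
    REWRITES = {"ab": "", "ba": ""}   # delete one matched pair per cycle

    def accepts(word):
        """Accept words that reduce to the empty word (equal numbers of a's
        and b's, reducible by cancelling adjacent 'ab'/'ba' pairs)."""
        while word:
            for i in range(len(word) - 1):
                pair = word[i:i + 2]
                if pair in REWRITES:
                    word = word[:i] + REWRITES[pair] + word[i + 2:]
                    break                      # restart: back to left end
            else:
                return False                   # no rewrite possible: reject
        return True                            # tape empty: accept

    for w in ["abba", "aabb", "aab", "baab"]:
        print(w, accepts(w))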
Abstract:
The increasing interconnection of information and communication systems leads to a further increase in complexity and thus also to a further increase in security vulnerabilities. Classical protection mechanisms such as firewall systems and anti-malware solutions have long ceased to offer protection against intrusions into IT infrastructures. Intrusion Detection Systems (IDS) have established themselves as a highly effective instrument for protection against cyber attacks. Such systems collect and analyze information from network components and hosts in order to automatically detect unusual behavior and security violations. While signature-based approaches can only detect already known attack patterns, anomaly-based IDS are also able to detect new, previously unknown attacks (zero-day attacks) at an early stage. The core problem of intrusion detection systems, however, lies in the optimal processing of the enormous volume of network data and the development of an adaptive detection model that works in real time. To address these challenges, this dissertation provides a framework consisting of two main parts. The first part, called OptiFilter, uses a dynamic queuing concept to process the large volumes of incoming network data, continuously assembles network connections, and exports structured input data for the IDS. The second part is an adaptive classifier comprising a classifier model based on an Enhanced Growing Hierarchical Self-Organizing Map (EGHSOM), a model of normal network behavior (NNB), and an update model. Within OptiFilter, tcpdump and SNMP traps are used to continuously aggregate network packets and host events. These aggregated network packets and host events are further analyzed and transformed into connection vectors. To improve the detection rate of the adaptive classifier, the artificial neural network GHSOM is intensively studied and substantially extended. In this dissertation, different approaches are proposed and discussed: a classification-confidence margin threshold is defined to uncover unknown malicious connections, the stability of the growing topology is increased by novel approaches for initializing the weight vectors and by strengthening the winner neurons, and a self-adaptive procedure is introduced to keep the model continuously up to date. Furthermore, the main task of the NNB model is the further examination of the unknown connections detected by the EGHSOM and the verification of whether they are normal. However, due to the concept-drift phenomenon, network traffic data change constantly, which leads in real time to the generation of non-stationary network data. This phenomenon is controlled by the update model. The EGHSOM model can effectively detect new anomalies, and the NNB model optimally adapts to the changes in the network data. In the experimental evaluations, the framework showed promising results. In the first experiment, the framework was evaluated in offline mode: OptiFilter was evaluated with offline, synthetic, and realistic data, and the adaptive classifier was evaluated with 10-fold cross validation to estimate its accuracy.
In the second experiment, the framework was installed on a 1 to 10 GB network link and evaluated online in real time. OptiFilter successfully transformed the enormous amount of network data into structured connection vectors, and the adaptive classifier classified them precisely. A comparative study between the developed framework and other well-known IDS approaches shows that the proposed IDS framework outperforms all other approaches. This can be attributed to the following key points: the processing of the collected network data, the achievement of the best performance (e.g., overall accuracy), the detection of unknown connections, and the development of a real-time intrusion detection model.
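The core test performed by a SOM-based classifier, mapping a connection vector to its best-matching unit and flagging it as unknown when the quantization error exceeds a confidence threshold, can be sketched compactly. The code below uses a plain winner-take-all map with fabricated data and threshold; it is not the EGHSOM developed in this dissertation.

    # Minimal SOM-style anomaly check: a connection vector whose distance to
    # its best-matching unit exceeds a threshold is flagged as unknown.
    # Fabricated data and threshold; NOT the EGHSOM of this work.
    import numpy as np

    rng = np.random.default_rng(0)

    # Fabricated "normal" connection vectors (e.g., scaled duration, bytes, packets).
    normal = rng.normal(loc=0.0, scale=1.0, size=(500, 3))

    # Train a tiny 16-unit map with a winner-take-all update
    # (the neighborhood function of a full SOM is omitted for brevity).
    weights = rng.normal(size=(16, 3))
    for epoch in range(20):
        lr = 0.5 * (1 - epoch / 20)
        for x in normal:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            weights[bmu] += lr * (x - weights[bmu])

    def quantization_error(x):
        """Distance of a connection vector to its best-matching unit."""
        return np.min(np.linalg.norm(weights - x, axis=1))

    # Fabricated confidence threshold: 99th percentile of normal errors.
    threshold = np.quantile([quantization_error(x) for x in normal], 0.99)

    for name, probe in [("normal", rng.normal(size=3)),
                        ("attack", np.array([8.0, -7.0, 9.0]))]:
        err = quantization_error(probe)
        print(name, "unknown" if err > threshold else "known", round(err, 2))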
Abstract:
Despite its young history, Computer Science Education has seen a number of "revolutions". Being a veteran in the field, the author reflects on the many changes he has seen in computing and its teaching. The intent of this personal collection is to point out that most revolutions came unforeseen and that many of the new learning initiatives, despite high financial input, ultimately failed. The author then considers the current revolution (MOOCs, inverted lectures, peer instruction, game design) and, based on the lessons learned earlier, argues why video recording is so successful. Given that this is the decade we lost print (papers, printed books, book shops, libraries), the author conjectures that the impact of the Internet will make this revolution different from previous ones in that most of the changes are irreversible. As a consequence, he warns against storming ahead blindly and suggests conserving, while it is still possible, valuable components of what might soon be called the antebellum age of education.
Abstract:
Across the higher-education landscape, eLearning scenarios accompany processes of organizational renewal and thus represent a promising instrument for supporting and improving classical face-to-face teaching. On this basis, from 2010 to 2011 the Kasseler Sportspiel-Modell was extended by the integrative teaching of one-contact return games (Heyer, Albert, Scheid & Blömeke-Rumpf, 2011) and embedded in modularized eLearning content consisting of 4 modules (17 learning courses, 171 course pages, 73 graphics, 73 videos, 38 learning-control questions). In an evaluation study, this content was examined in blended learning seminars, which combine the didactic advantages of online and face-to-face phases in a single seminar form (Treumann, Ganguin & Arens, 2012), in comparison with classical face-to-face teaching in sports degree programmes. The study comprises three phases: 1) a pilot study at the IfSS in Kassel (winter semester 2011/12; N=17, teacher-training students), 2) main study I at the IfSS in Kassel (summer semester 2012; N=67, teacher-training students), and 3) main study II at the IfS in Frankfurt a. M. (winter semester 2012/13; N=112, BA students). Using analysis-of-variance procedures, the study captures the following aspects of teaching-learning research on three quality levels: 1) input quality: evaluation of the seminar form (BS); 2) process quality: motivation (SELLMO-ST), learning strategies (LIST) and computer-related attitudes (FIDEC); 3) outcome quality: learning performance (final test and transfer task). The comparative analysis of the two main studies contrasts one face-to-face seminar with two different variants of blended learning seminars (BL-1, BL-2). During the online phases, the sports students in BL-1 work through the modules in learning groups, while the participants in BL-2 additionally keep personal learning diaries during these phases. The diaries are intended to stimulate a comparatively more intensive engagement with the contents of the learning courses and with one's own learning process at the cognitive and metacognitive level (Hübner, Nückles & Renkl, 2007) and consequently to lead to better results on the three quality levels. In the direct, site-specific comparison of all three seminar forms, the results of the two main studies show predominantly no statistically significant differences. The expected positive effect of introducing the learning diary likewise fails to appear. In the cross-site comparison of the blended learning seminars, it is noteworthy that the participants in Frankfurt take a somewhat more critical stance towards their seminar form, which may correspond to the different degree programmes involved (teacher training versus BA). In summary, for the investigated area of teaching return games, blended learning seminars represent a qualitatively equivalent alternative to classical face-to-face teaching in sports degree programmes.
Abstract:
Methods are developed for predicting the vibration response characteristics of systems which change configuration during operation. A cartesian robot, an example of such a position-dependent system, served as a test case for these methods and was studied in detail. The chosen system model was formulated using the technique of Component Mode Synthesis (CMS). The model assumes that the system is slowly varying, and connects the carriages to each other and to the robot structure at the slowly varying connection points. The modal data required for each component were obtained experimentally in order to get a realistic model. The analysis results in the prediction of vibrations that are produced by the inertia forces as well as the gravity and friction forces which arise when the robot carriages move with some prescribed motion. Computer simulations and experimental determinations are conducted in order to calculate the vibrations at the robot end-effector. Comparisons are shown to validate the model in two ways: for a fixed configuration, the mode shapes and natural frequencies are examined; then, for a changing configuration, the residual vibration at the end of the motion is evaluated. A preliminary study was done on a geometrically nonlinear system which also has position-dependency. The system consisted of a flexible four-bar linkage with elastic input and output shafts. The behavior of the rocker-beam is analyzed for different boundary conditions to show how some limiting cases are obtained. A dimensional analysis leads to an evaluation of the consequences of dynamic similarity on the resulting vibration.
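For the fixed-configuration validation mentioned above, mode shapes and natural frequencies of a lumped-parameter model follow from the generalized eigenvalue problem K*phi = omega^2*M*phi. The sketch below solves a fabricated two-degree-of-freedom example; the values are illustrative and unrelated to the robot model.

    # Natural frequencies and mode shapes of a fabricated 2-DOF system from
    # the generalized eigenvalue problem  K @ phi = w^2 * M @ phi.
    # Illustrative numbers only; not the cartesian-robot model above.
    import numpy as np
    from scipy.linalg import eigh

    M = np.diag([2.0, 1.0])                    # mass matrix (kg)
    K = np.array([[400.0, -150.0],
                  [-150.0, 150.0]])            # stiffness matrix (N/m)

    w2, phi = eigh(K, M)                       # ascending eigenvalues w^2
    freqs_hz = np.sqrt(w2) / (2 * np.pi)

    for i, f in enumerate(freqs_hz):
        print(f"mode {i + 1}: {f:.2f} Hz, shape {phi[:, i].round(3)}")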
Abstract:
This thesis presents the ideas underlying a computer program that takes as input a schematic of a mechanical or hydraulic power transmission system, plus specifications and a utility function, and returns catalog numbers from predefined catalogs for the optimal selection of components implementing the design. Unlike programs for designing single components or systems, the program provides the designer with a high-level "language" in which to compose new designs. It then performs some of the detailed design process. The process of "compilation" is based on a formalization of quantitative inferences about hierarchically organized sets of artifacts and operating conditions. This allows design compilation without the exhaustive enumeration of alternatives.
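The "compilation" step, searching predefined catalogs for the component combination that satisfies the specifications and maximizes the utility function, can be illustrated with a brute-force sketch. The catalogs, specification, and utility below are fabricated, and the actual program avoids exactly this exhaustive enumeration by reasoning over hierarchies.

    # Toy "design compilation": pick one component per catalog so that the
    # combination meets the specs and maximizes a utility function.
    # Fabricated catalogs/specs; the real program prunes hierarchically
    # instead of enumerating all alternatives like this sketch does.
    from itertools import product

    motors = [  # (catalog number, max torque N*m, cost $)
        ("M-10", 5.0, 120), ("M-20", 9.0, 200), ("M-30", 15.0, 390),
    ]
    gearboxes = [  # (catalog number, ratio, efficiency, cost $)
        ("G-3", 3.0, 0.95, 80), ("G-7", 7.0, 0.90, 110),
    ]

    required_output_torque = 40.0  # N*m, the design specification

    def utility(cost):
        return -cost  # cheapest feasible design wins

    best = None
    for (m, torque, mc), (g, ratio, eff, gc) in product(motors, gearboxes):
        if torque * ratio * eff >= required_output_torque:
            cand = (utility(mc + gc), m, g)
            if best is None or cand > best:
                best = cand
    print(best)  # -> (-310, 'M-20', 'G-7') under these fabricated numbers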
Abstract:
This report examines why women pursue careers in computer science and related fields far less frequently than men do. In 1990, only 13% of PhDs in computer science went to women, and only 7.8% of computer science professors were female. Causes include the different ways in which boys and girls are raised, the stereotypes of female engineers, subtle biases that females face, problems resulting from working in predominantly male environments, and sexual biases in language. A theme of the report is that women's underrepresentation is not primarily due to direct discrimination but to subconscious behavior that perpetuates the status quo.
Abstract:
A revolution in earthmoving, a $100 billion industry, can be achieved with three components: the GPS location system, sensors and computers in bulldozers, and SITE CONTROLLER, a central computer system that maintains design data and directs operations. The first two components are widely available; I built SITE CONTROLLER to complete the triangle and describe it here. SITE CONTROLLER assists civil engineers in the design, estimation, and construction of earthworks, including hazardous waste site remediation. The core of SITE CONTROLLER is a site modelling system that represents existing and prospective terrain shapes, roads, hydrology, etc. Around this core are analysis, simulation, and vehicle control tools. Integrating these modules into one program enables civil engineers and contractors to use a single interface and database throughout the life of a project.
Abstract:
A method is presented for the visual analysis of objects by computer. It is particularly well suited for opaque objects with smoothly curved surfaces. The method extracts information about the object's surface properties, including measures of its specularity, texture, and regularity. It also aids in determining the object's shape. The application of this method to a simple recognition task, the recognition of fruit, is discussed. The results on a more complex smoothly curved object, a human face, are also considered.
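Simple stand-ins for two of the surface measures mentioned, a specularity cue based on highlight contrast and a texture cue based on local intensity variation, can be computed on a grayscale intensity image as sketched below; these generic measures are assumptions for illustration, not the ones defined in the report.

    # Generic stand-ins for two surface measures on a grayscale image:
    # a specularity cue (contrast of the brightest spot against the mean)
    # and a texture cue (mean local gradient magnitude). Illustrative only;
    # not the actual measures defined in the report.
    import numpy as np

    def specularity_score(img):
        """High when a small highlight is much brighter than the surface."""
        return (img.max() - img.mean()) / (img.std() + 1e-9)

    def texture_score(img):
        """Mean gradient magnitude: smooth surfaces score low."""
        gy, gx = np.gradient(img.astype(float))
        return float(np.hypot(gx, gy).mean())

    rng = np.random.default_rng(1)
    matte = rng.normal(0.5, 0.02, (64, 64)).clip(0, 1)   # smooth, no highlight
    shiny = matte.copy()
    shiny[30:34, 30:34] = 1.0                            # small specular spot

    for name, img in [("matte", matte), ("shiny", shiny)]:
        print(name, round(specularity_score(img), 2),
              round(texture_score(img), 4))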