851 results for Computer input-output equipment
Abstract:
This report gives a detailed discussion of the system, algorithms, and techniques that we applied to solve the Web Service Challenges (WSC) of 2006 and 2007. These international contests focus on semantic web service composition. In each challenge, a repository of web services is given, and the input and output parameters of the services in the repository are annotated with semantic concepts. A query to a semantic composition engine contains a set of available input concepts and a set of wanted output concepts. In order to employ an offered service for a requested role, the concepts of the input parameters of the offered operations must be more general than requested (contravariance). In contrast, the concepts of the output parameters of the offered service must be more specific than requested (covariance). The engine should respond to a query with a valid composition as fast as possible. We discuss three different methods for web service composition: an uninformed search in the form of an IDDFS algorithm, a greedy informed search based on heuristic functions, and a multi-objective genetic algorithm.
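To make the matching rule concrete, here is a minimal Python sketch of the contravariance/covariance test described above (not the authors' implementation; the taxonomy representation and all names are hypothetical):

class Taxonomy:
    """Subsumption hierarchy of semantic concepts (child -> direct parent)."""
    def __init__(self, parent):
        self.parent = parent  # dict mapping a concept to its super-concept, or None

    def is_more_general(self, a, b):
        """True if concept a is equal to or an ancestor of concept b."""
        while b is not None:
            if a == b:
                return True
            b = self.parent.get(b)
        return False

def service_matches(tax, offered_in, offered_out, available_in, wanted_out):
    # Contravariance: each offered input concept must be more general than
    # (i.e. subsume) some concept the requester can actually provide.
    inputs_ok = all(any(tax.is_more_general(o, a) for a in available_in)
                    for o in offered_in)
    # Covariance: each wanted output concept must be covered by a more
    # specific (subsumed) offered output concept.
    outputs_ok = all(any(tax.is_more_general(w, o) for o in offered_out)
                     for w in wanted_out)
    return inputs_ok and outputs_ok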
Abstract:
eLearning scenarios accompany organisational renewal processes across the entire higher-education landscape and thus represent a promising instrument for supporting and improving classical face-to-face teaching. On this basis, from 2010 to 2011 the Kasseler Sportspiel-Modell was extended to include the integrative teaching of single-contact return games (Heyer, Albert, Scheid & Blömeke-Rumpf, 2011) and embedded in modularised eLearning content consisting of 4 modules in total (17 learning courses, 171 course pages, 73 graphics, 73 videos, 38 learning-assessment questions). In an evaluation study, this content was examined in blended learning seminars, which combine the didactic advantages of online and face-to-face phases in a single seminar format (Treumann, Ganguin & Arens, 2012), in comparison with classical face-to-face teaching in sports degree programmes. The study comprises three phases: 1) a pilot study at the IfSS in Kassel (winter semester 2011/12; N=17, teacher training), 2) main study I at the IfSS in Kassel (summer semester 2012; N=67, teacher training), and 3) main study II at the IfS in Frankfurt a. M. (winter semester 2012/13; N=112, BA). Using analysis-of-variance procedures, the study captures the following aspects of teaching-learning research on three quality levels: 1) input quality: evaluation of the seminar format (BS); 2) process quality: motivation (SELLMO-ST), learning strategies (LIST) and computer-related attitudes (FIDEC); 3) outcome quality: learning achievement (final test and transfer task). The comparison of the two main studies sets one face-to-face seminar against two different variants of blended learning seminar (BL-1, BL-2). During the online phases, the sports students in BL-1 work through the modules in learning groups. The participants in BL-2 additionally keep personal learning diaries during these phases, which is intended to stimulate comparatively more intensive engagement with the content of the learning courses and with their own learning process at the cognitive and metacognitive level (Hübner, Nückles & Renkl, 2007), and consequently to lead to better results on the three quality levels. In the direct, site-specific comparison of all three seminar formats, the results of the two main studies show predominantly no statistically significant differences. The expected positive effect of introducing the learning diary likewise fails to appear. In the cross-site comparison of the blended learning seminars, it is notable that the Frankfurt participants take a somewhat more critical stance towards their seminar format, which may correspond to the different degree programmes involved (teacher training versus BA). In summary, for the area of teaching return games investigated here, blended learning seminars represent a qualitatively equivalent alternative to classical face-to-face teaching in sports degree programmes.
Abstract:
Methods are developed for predicting vibration response characteristics of systems which change configuration during operation. A cartesian robot, an example of such a position-dependent system, served as a test case for these methods and was studied in detail. The chosen system model was formulated using the technique of Component Mode Synthesis (CMS). The model assumes that the system is slowly varying, and connects the carriages to each other and to the robot structure at the slowly varying connection points. The modal data required for each component are obtained experimentally in order to get a realistic model. The analysis results in prediction of the vibrations produced by the inertia forces, as well as the gravity and friction forces, which arise when the robot carriages move with some prescribed motion. Computer simulations and experimental measurements are conducted in order to determine the vibrations at the robot end-effector. Comparisons are shown to validate the model in two ways: for fixed configuration, the mode shapes and natural frequencies are examined; then, for changing configuration, the residual vibration at the end of the motion is evaluated. A preliminary study was done on a geometrically nonlinear system which also has position dependency. The system consisted of a flexible four-bar linkage with elastic input and output shafts. The behavior of the rocker-beam is analyzed for different boundary conditions to show how some limiting cases are obtained. A dimensional analysis leads to an evaluation of the consequences of dynamic similarity on the resulting vibration.
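For reference, the equations of motion for such a slowly varying, position-dependent model typically take the following generic form (a sketch in assumed notation, not the report's exact formulation), with the assembled CMS mass and stiffness matrices depending on the slowly varying configuration s(t), here the carriage positions:

\begin{equation}
  M\bigl(s(t)\bigr)\,\ddot{q} + C\,\dot{q} + K\bigl(s(t)\bigr)\,q
    = f_{\mathrm{inertia}}(t) + f_{\mathrm{gravity}} + f_{\mathrm{friction}}(t)
\end{equation}

where q collects the component modal coordinates and the right-hand side gathers the inertia, gravity and friction forces generated by the prescribed carriage motion.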
Abstract:
Image analysis and graphics synthesis can be achieved with learning techniques that use image examples directly, without physically-based 3D models. In our technique: -- the mapping from novel images to a vector of "pose" and "expression" parameters can be learned from a small set of example images using a function approximation technique that we call an analysis network; -- the inverse mapping from input "pose" and "expression" parameters to output images can be synthesized from a small set of example images and used to produce new images using a similar synthesis network. The techniques described here have several applications in computer graphics, special effects, interactive multimedia and very low bandwidth teleconferencing.
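A minimal sketch of the example-based idea, assuming a Gaussian radial basis function (RBF) approximator; the paper's actual analysis and synthesis networks may use a different approximation technique:

import numpy as np

def fit_rbf(X, Y, sigma=1.0):
    # Fit Y ~ f(X) with Gaussian RBFs centred on the training examples.
    # X: (n, d_in) flattened example images or parameter vectors;
    # Y: (n, d_out) corresponding parameter vectors or images.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    G = np.exp(-d2 / (2.0 * sigma ** 2))               # n x n kernel matrix
    W = np.linalg.solve(G + 1e-8 * np.eye(len(X)), Y)  # small ridge for stability
    return (X, W, sigma)

def predict_rbf(model, Xnew):
    centers, W, sigma = model
    d2 = ((Xnew[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2)) @ W

# Analysis network:  images -> (pose, expression) parameters.
# Synthesis network: (pose, expression) parameters -> images.
# analysis  = fit_rbf(example_images, example_params)
# synthesis = fit_rbf(example_params, example_images)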
Abstract:
We consider the often-studied problem of sorting on a parallel computer. Given an input array distributed evenly over p processors, the task is to compute the sorted output array, also distributed over the p processors. Many existing algorithms take the approach of approximately load-balancing the output, leaving each processor with Θ(n/p) elements. However, in many cases, approximate load-balancing leads to inefficiencies in both the sorting itself and in further uses of the data after sorting. We provide a deterministic parallel sorting algorithm that uses parallel selection to produce any output distribution exactly, in particular one that is perfectly load-balanced. Furthermore, when using a comparison sort, this algorithm is 1-optimal in both computation and communication. We provide an empirical study that illustrates the efficiency of exact data splitting, and shows an improvement over two sample sort algorithms.
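The core idea of exact splitting can be illustrated with a short serial sketch (this simulates the selection step sequentially; the paper's algorithm performs it with parallel selection, and all names here are illustrative):

import heapq

def exact_splitters(blocks, p):
    # blocks: p locally sorted lists. Returns the p-1 elements whose global
    # ranks are exactly i*n/p, so that (assuming p divides n) redistributing
    # around them leaves every processor with precisely n/p elements.
    n = sum(len(b) for b in blocks)
    targets = {i * (n // p) for i in range(1, p)}
    splitters = []
    for rank, x in enumerate(heapq.merge(*blocks), start=1):
        if rank in targets:
            splitters.append(x)
    return splitters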
Abstract:
The study set out to test the appropriateness of using a highly controlled, progressive input (the materials presented to the learner) into which elements are introduced that go slightly beyond the learner's current level of performance, following Krashen; to analyse the effect of this input on free, communicative written expression, i.e. the output (what the learner demonstrably has learned), which can be verified and analysed and which in turn gives a measure of intake (what the learner has assimilated); and to analyse the theoretical, linguistic and psycholinguistic implications of the study. The sample comprises 47 distance-education students from different parts of Spain, all over 25 years of age, and 44 third-year BUP students who had followed six years of regular school instruction in the language. The sample contains a total of 1022 written productions, analysed by means of a computerised record card. The central relationship examined is between the input of each unit of Cher Ami and the learner's output after 4-8 hours of work based on a set of model readings; in response to these readings, students must write a text of 5-10 sentences. A record card was designed on which the tutor-teachers at the participating centres transcribe the students' free written productions for each of the 18 units. Each tutor studies the errors, carries out a semantic and pragmatic analysis, and flags the relevant productions. The computer program compares the input used with the input given and produces lists of the input used, the input not used, and the extra-input (elements used by the learner that do not appear in the materials provided). The research team standardises criteria, cleans the resulting lists and prepares the data entry for the analysis of results. Instruments: test, computer program. Analyses: percentages, tables, contrastive study. The results fall into four main blocks. 1. Results by teaching unit, which allow a contrastive analysis against the contents and strategies of each unit of the method used, and lead to the modification of some specific aspects of those materials. 2. The global analysis, containing the analyses of the most frequent errors and their causes; the semantic analysis reveals the learners' fields of interest, with productions strongly centred on themselves and their immediate environment. 3. The contrastive study with the Bachillerato group yields the surprising finding that, in one year and with a programmed methodology, the distance-education (CAD) students obtain far more satisfactory results than Bachillerato students studying the language face-to-face for the sixth year. 4. In the sequential individual study, although the results are less conclusive, greater progress is observed in the students who keep to the input, whereas those who use extra-input elements not only make more errors, but in some cases become blocked in their own learning. These last two blocks of results support the researchers' theory that, in non-natural contexts and within a curricular framework, it is preferable for learners to acquire a cumulative, controlled micro-language of the foreign language rather than an interlanguage that runs the risk of fossilising and blocking the learner's progress.
These communicative objectives, whose progress must be measurable, provide learners with an intrinsic motivation that makes them participants in their own learning, and this will contribute to greater effectiveness in the language teaching-learning process. The contents and methods of regular face-to-face instruction deserve revision in this light.
Abstract:
In image processing, segmentation algorithms constitute one of the main focuses of research. In this paper, new image segmentation algorithms based on a hard version of the information bottleneck method are presented. The objective of this method is to extract a compact representation of a variable, considered the input, with minimal loss of mutual information with respect to another variable, considered the output. First, we introduce a split-and-merge algorithm based on the definition of an information channel between a set of regions (input) of the image and the intensity histogram bins (output). From this channel, the maximization of the mutual information gain is used to optimize the image partitioning. Then, the merging process of the regions obtained in the previous phase is carried out by minimizing the loss of mutual information. From the inversion of the above channel, we also present a new histogram clustering algorithm based on the minimization of the mutual information loss, where now the input variable represents the histogram bins and the output is given by the set of regions obtained from the above split-and-merge algorithm. Finally, we introduce two new clustering algorithms which show how the information bottleneck method can be applied to the registration channel obtained when two multimodal images are correctly aligned. Different experiments on 2-D and 3-D images show the behavior of the proposed algorithms.
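The quantity driving both phases is the mutual information of the channel between regions R and histogram bins B; in standard notation:

\begin{equation}
  I(R;B) \;=\; \sum_{r \in R} \sum_{b \in B} p(r,b)\,\log\frac{p(r,b)}{p(r)\,p(b)}
\end{equation}

Splitting greedily maximizes the gain in I(R;B), while merging (and, on the inverted channel, histogram clustering) greedily minimizes its loss.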
Abstract:
We've developed a new ambient occlusion technique based on an information-theoretic framework. Essentially, our method computes a weighted visibility from each object polygon to all viewpoints; we then use these visibility values to obtain the information associated with each polygon. So, just as a viewpoint has information about the model's polygons, the polygons gather information on the viewpoints. We therefore have two measures associated with an information channel defined by the set of viewpoints as input and the object's polygons as output, or vice versa. From this polygonal information, we obtain an occlusion map that serves as a classic ambient occlusion technique. Our approach also offers additional applications, including an importance-based viewpoint-selection guide, and a means of enhancing object features and producing nonphotorealistic object visualizations.
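One standard way to attach such an information value to each polygon z of the viewpoint-polygon channel is its contribution to the channel's mutual information (a hedged sketch in assumed notation; the article's exact measure may differ):

\begin{equation}
  I(V; z) \;=\; \sum_{v \in V} p(v \mid z)\,\log\frac{p(v \mid z)}{p(v)}
\end{equation}

where p(v|z) is derived from the weighted visibility of polygon z from viewpoint v, so polygons whose visibility distribution over viewpoints deviates strongly from the average carry more information, and the per-polygon values can then be baked into the occlusion map.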
Abstract:
Context-aware multimodal interactive systems aim to adapt to the needs and behavioural patterns of users and offer a way forward for enhancing the efficacy and quality of experience (QoE) in human-computer interaction. The various modalities that contribute to such systems each provide a specific uni-modal response that is integratively presented as a multi-modal interface capable of interpreting multi-modal user input and responding to it appropriately through dynamically adapted multi-modal interactive flow management. This paper presents an initial background study in the context of the first phase of a PhD research programme in the area of optimisation of data fusion techniques to serve multimodal interactive systems, their applications and requirements.
Abstract:
Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al. 2008) is a state-of-the-art ocean model that is currently used for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours when running on 40 processors, and produces roughly 20 GB of output as 50000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command utilities for data pre-processing and post-processing prior to job resubmission. Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, to prevent it from accumulating on the remote system and to allow the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts. G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat, and is therefore easy for system administrators to install and maintain. G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al. 2008) and POLCOMS (Holt et al. 2008) ocean models are installed, and there are plans to install the Hadley Centre's HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al. 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al. 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al. 2008), which aims to simulate the world's coastal oceans. A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) the scientist prepares input files on his or her local machine; (2) using information provided by the Grid's Ganglia monitoring system, the scientist selects an appropriate compute resource; (3) the scientist runs the relevant workflow script on his or her local machine, unmodified except that calls to run the model (e.g. with "mpirun") are simply replaced with calls to "GRexRun", as sketched below; (4) the G-Rex middleware automatically handles the uploading of input files to the remote resource, and the downloading of output files back to the user, including their deletion from the remote system, during the run; (5) the scientist monitors the output files, using familiar analysis and visualization tools on his or her own local machine. G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front-end to larger-scale Grid resources such as the UK National Grid Service.
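A minimal Python sketch of the substitution in step (3); GRexRun and mpirun appear in the text above, but the wrapper function, its parameters, and any invocation details beyond mpirun's standard -np flag are illustrative assumptions:

import subprocess

USE_GREX = True  # switch between local and remote (G-Rex) execution

def run_model(model_exe, nprocs):
    if USE_GREX:
        # GRexRun uploads input files, launches the job on the remote
        # resource, and streams output files back during the run.
        cmd = ["GRexRun", model_exe]
    else:
        cmd = ["mpirun", "-np", str(nprocs), model_exe]
    subprocess.run(cmd, check=True)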
Abstract:
To improve the welfare of the rural poor and keep them in the countryside, the government of Botswana has been spending 40% of the value of agricultural GDP on agricultural support services. But can investment make smallholder agriculture prosperous in such adverse conditions? This paper derives an answer by applying a two-output six-input stochastic translog distance function, with inefficiency effects and biased technical change, to panel data for the 18 districts and the commercial agricultural sector, from 1979 to 1996. The model demonstrates that herds are the most important input, followed by draft power, land and seeds. Multilateral indices for technical change, technical efficiency and total factor productivity (TFP) show that the technology level of the commercial agricultural sector is more than six times that of traditional agriculture, and that the gap has been increasing, due to technological regression in traditional agriculture and modest progress in commercial agriculture. Since the levels of efficiency are similar, the same pattern is repeated in the TFP indices. This result highlights the policy dilemma of the trade-off between efficiency and equity objectives.
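A common way to estimate such a model is to normalise the translog output distance function by one output and append a composed error, sketched here in assumed notation (v is statistical noise and u the inefficiency term; the paper's exact specification, including its biased-technical-change terms, may differ):

\begin{equation}
  -\ln y_{2,it} \;=\; \alpha_0
    + \alpha_1 \ln\!\frac{y_{1,it}}{y_{2,it}}
    + \sum_{k=1}^{6} \beta_k \ln x_{k,it}
    + \tfrac{1}{2}\,(\text{squared and interaction terms})
    + v_{it} - u_{it}
\end{equation}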
Abstract:
Technology involving genetic modification of crops has the potential to make a contribution to rural poverty reduction in many developing countries. Thus far, insecticide-producing 'Bt' varieties of cotton have been the main GM crops under cultivation in developing nations. Several studies have evaluated the farm-level performance of Bt varieties in comparison to conventional ones by estimating production technology, and have mostly found Bt technology to be very successful in raising output and/or reducing insecticide input. However, the production risk properties of this technology have not been studied, although they are likely to be important to risk-averse smallholders. This study investigates the output risk aspects of Bt technology using a three-year farm-level dataset on smallholder cotton production in Makhathini flats, Kwa-Zulu Natal, South Africa. Stochastic dominance and stochastic production function estimation methods are used to examine the risk properties of the two technologies. Results indicate that Bt technology increases output risk by being most effective when crop growth conditions are good, but being less effective when conditions are less favourable. However, in spite of its risk-increasing effect, the mean output performance of Bt cotton is good enough to make it preferable to conventional technology even for risk-averse smallholders.
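A common specification for separating mean-output effects from output-risk effects in stochastic production function estimation is the Just-Pope form, sketched here (the study's exact specification may differ):

\begin{equation}
  y \;=\; f(\mathbf{x};\boldsymbol{\beta}) + g(\mathbf{x};\boldsymbol{\gamma})\,\varepsilon,
  \qquad \mathrm{E}[\varepsilon]=0,\ \mathrm{Var}[\varepsilon]=1
\end{equation}

Here f gives mean output and g the output standard deviation, so an input (such as the Bt trait) is risk-increasing when it raises g, the pattern reported above: higher mean output but greater spread across growing conditions.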
Abstract:
Graphical tracking is a technique for crop scheduling where the actual plant state is plotted against an ideal target curve which encapsulates all crop and environmental characteristics. Management decisions are made on the basis of the position of the actual crop against the ideal position. Due to the simplicity of the approach it is possible for graphical tracks to be developed on site without the requirement for controlled experimentation. Growth models and graphical tracks are discussed, and an implementation of the Richards curve for graphical tracking is described. In many cases, the more intuitively desirable growth models perform sub-optimally due to problems with the specification of starting conditions, environmental factors outside the scope of the original model, and the introduction of new cultivars. Accurate specification of a biological model requires detailed and usually costly study, and as such is not adaptable to a changing cultivar range and changing cultivation techniques. Fitting of a new graphical track for a new cultivar can be conducted on site and improved over subsequent seasons. Graphical tracking emphasises the current position relative to the objective, and as such does not require the time-consuming or system-specific input of an environmental history, although it does require detailed crop measurement. The approach is flexible and could be applied to a variety of specification metrics, with digital imaging providing a route for added value. For decision making regarding crop manipulation from the observed current state, there is a role for simple predictive modelling to indicate the short-term consequences of crop manipulation.
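A minimal sketch of a Richards curve as a graphical-track target (parameter names are generic; the paper's parameterisation may differ):

import math

def richards(t, A, k, t0, nu):
    # Generalised logistic growth: upper asymptote A, growth rate k,
    # time shift t0, and shape parameter nu > 0 controlling asymmetry.
    return A / (1.0 + nu * math.exp(-k * (t - t0))) ** (1.0 / nu)

# Hypothetical target track: plant height (cm) for each week after potting,
# against which the measured crop state would be plotted.
track = [richards(week, A=30.0, k=0.4, t0=8.0, nu=0.7) for week in range(20)]

Because the four parameters can be refitted from on-site measurements each season, the track can be updated for a new cultivar without controlled experimentation, which is the flexibility the approach trades against mechanistic growth models.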
Abstract:
Purpose – The purpose of this paper is to investigate the concepts of intelligent buildings (IBs) and the opportunities offered by the application of computer-aided facilities management (CAFM) systems. Design/methodology/approach – Definitions of IBs are investigated using a questionnaire survey, particularly definitions that embrace open standards for effective operational change. The survey further investigated the extension of CAFM to IB concepts and the opportunities that such integrated systems will provide to facilities management (FM) professionals. Findings – The results showed variation in the understanding of the concept of IBs and the application of CAFM. The survey showed that 46 per cent of respondents use a CAFM system, with a majority agreeing on the potential of CAFM in the delivery of effective facilities. Research limitations/implications – The questionnaire survey results are limited to the views of the respondents within the context of FM in the UK. Practical implications – Following any of the many definitions of an IB does not necessarily lead to technologies or equipment that conform to an open standard. Such open standards, and the documentation of systems produced by vendors, are the key to integrating CAFM with other building management systems (BMS) and to further harnessing CAFM for IBs. Originality/value – The paper gives experience-based suggestions for both the demand and supply sides of service procurement to gain the feasible benefits and avoid the obstacles that currently hinder them, and provides insight into current and future tools for the mobile aspects of FM. The findings are relevant for service providers and operators as well.
Abstract:
Once unit-cell dimensions have been determined from a powder diffraction data set and therefore the crystal system is known (e.g. orthorhombic), the method presented by Markvardsen, David, Johnson & Shankland [Acta Cryst. (2001), A57, 47-54] can be used to generate a table ranking the extinction symbols of the given crystal system according to probability. Markvardsen et al. tested a computer program (ExtSym) implementing the method against Pawley refinement outputs generated using the TF12LS program [David, Ibberson & Matthewman (1992). Report RAL-92-032. Rutherford Appleton Laboratory, Chilton, Didcot, Oxon, UK]. Here, it is shown that ExtSym can be used successfully with many well known powder diffraction analysis packages, namely DASH [David, Shankland, van de Streek, Pidcock, Motherwell & Cole (2006). J. Appl. Cryst. 39, 910-915], FullProf [Rodriguez-Carvajal (1993). Physica B, 192, 55-69], GSAS [Larson & Von Dreele (1994). Report LAUR 86-748. Los Alamos National Laboratory, New Mexico, USA], PRODD [Wright (2004). Z. Kristallogr. 219, 1-11] and TOPAS [Coelho (2003). Bruker AXS GmbH, Karlsruhe, Germany]. In addition, a precise description of the optimal input for ExtSym is given to enable other software packages to interface with ExtSym and to allow the improvement/modification of existing interfacing scripts. ExtSym takes as input the powder data in the form of integrated intensities and error estimates for these intensities. The output returned by ExtSym is demonstrated to be strongly dependent on the accuracy of these error estimates and the reason for this is explained. ExtSym is tested against a wide range of data sets, confirming the algorithm to be very successful at ranking the published extinction symbol as the most likely.