980 results for Active Front End
Abstract:
"Thesis presented to the Faculty of Graduate Studies with a view to obtaining the degree of Master of Laws (LL.M.)"
Abstract:
The literature contains an abundance of information on design approaches that involve users. Although researchers cite numerous advantages of these approaches, little is known about what product designers in industry think of them. This project aims to identify product designers' perceptions of participatory design tools, the opportunities and limitations they raise on the subject, and finally to suggest tools that would ease the introduction of participatory design into an existing design process. After an overview of the field of participatory design and its tools, six cases are studied through semi-structured interviews with product designers. The data are analysed using cognitive maps. Regarding participatory design tools, the designers interviewed perceive direct access to users' needs and the possibility of minimizing errors early in the process, thereby avoiding the costly modifications those errors would have entailed. The obstacles they perceive relate mainly to resistance to change, to the fear of letting users create or decide, and to the team's lack of time and resources. Finally, on the basis of the information collected, we suggest four participatory design tools that appear most promising: contextual inquiry, probes, prototype testing and the "lead user" approach. As follow-up work, it would be interesting to develop a more exhaustive research protocol to broaden the scope of the results, or to apply participatory design within a company in order to explore people's satisfaction with the products designed, the side effects on the teams involved, the evolution of prototypes, and the conduct of the workshops.
Abstract:
The dynamics of the plasma plume formed by the laser blow-off of a multicomponent LiF-C thin film has been studied using a fast imaging technique, under ambient pressures ranging from high vacuum to 3 Torr of argon. In vacuum, the plume has an ellipsoidal shape. With increasing ambient pressure, a sharp plume boundary develops, showing a focusing-like lateral confinement at the plume front that persists for long times. At higher ambient pressures (> 10⁻¹ Torr), structures develop in the plasma plume due to hydrodynamic instabilities/turbulence.
Abstract:
Motivation for speaker recognition work is presented in the first part of the thesis, along with an exhaustive survey of past work in this field. A low-cost system not involving complex computation was chosen for implementation. Towards this end, a PC-based system was designed and developed. A front-end analog-to-digital converter (12 bit) was built and interfaced to a PC, and software was developed to control the ADC and to perform various analytical functions, including feature vector evaluation. It is shown that a fixed set of phrases incorporating evenly balanced phonemes is well suited to the speaker recognition task at hand, and a set of phrases was chosen for recognition. Two new methods are adopted for feature evaluation. Some new measurements, involving a symmetry-check method for pitch period detection and ACE, are used as features. Arguments are provided to show the need for a new model of speech production. Starting from heuristics, a knowledge-based (KB) speech production model is presented. In this model, a KB provides impulses to a voice-producing mechanism, and constant correction is applied via a feedback path; it is this correction that differs from speaker to speaker. Methods of defining measurable parameters for use as features are described. Algorithms for speaker recognition are developed and implemented, and two methods are presented. The first is based on the postulated model: the entropy of the utterance of a phoneme is evaluated, and the transitions of voiced regions are used as speaker-dependent features. The second method uses features found in other works, but evaluated differently; a knock-out scheme provides the weighting values for the selection of features. Results of the implementation show an average recognition rate of 80%. It is also shown that performance deteriorates when there are long gaps between sessions, and that this deterioration is speaker dependent.
Cross-recognition percentages are also presented; in the worst case this rises to 30%, while the best case is 0%. Suggestions for further work are given in the concluding chapter.
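The entropy feature of the first method can be sketched as follows. The thesis abstract does not specify how the utterance is quantised, so the amplitude-binning scheme below is an assumption made purely for illustration:

```python
import math
from collections import Counter

def utterance_entropy(samples, n_bins=16):
    """Shannon entropy (in bits) of an utterance's amplitude distribution.
    The uniform binning used here is an illustrative assumption; the
    thesis does not state its quantisation scheme."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / n_bins or 1.0  # avoid zero width for flat signals
    bins = Counter(min(int((s - lo) / width), n_bins - 1) for s in samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())
```

A flat (silent) segment yields zero entropy, while a segment whose amplitudes spread evenly over the bins approaches the maximum of log2(n_bins) bits, so the measure discriminates between phoneme realisations with different amplitude statistics.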
Abstract:
Speech processing and the consequent recognition are important areas of digital signal processing, since speech allows people to communicate more naturally and efficiently. In this work, a speech recognition system is developed for recognizing digits in Malayalam. To recognize speech, features must be extracted from it, and hence the feature extraction method plays an important role in speech recognition. Here, front-end processing for extracting the features is performed using two wavelet-based methods, namely Discrete Wavelet Transforms (DWT) and Wavelet Packet Decomposition (WPD). A Naive Bayes classifier is used for classification. With the Naive Bayes classifier, DWT produced a recognition accuracy of 83.5% and WPD an accuracy of 80.7%. This paper is intended to devise a new feature extraction method that improves recognition accuracy. A new method called Discrete Wavelet Packet Decomposition (DWPD) is therefore introduced, which combines the hybrid features of both DWT and WPD. The performance of this new approach is evaluated; it produced an improved recognition accuracy of 86.2% with the Naive Bayes classifier.
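As an illustration of this kind of wavelet front end, here is a minimal multi-level Haar-DWT energy-feature sketch. The abstract does not name the wavelet, decomposition depth or exact feature set used for the Malayalam digits, so those choices below are assumptions:

```python
import math

def haar_dwt(signal):
    """One level of the Haar DWT: returns (approximation, detail)
    coefficients. Input length must be even."""
    a = [(signal[i] + signal[i + 1]) / math.sqrt(2)
         for i in range(0, len(signal), 2)]
    d = [(signal[i] - signal[i + 1]) / math.sqrt(2)
         for i in range(0, len(signal), 2)]
    return a, d

def dwt_energy_features(signal, levels=3):
    """Feature vector of per-subband energies from a multi-level Haar DWT,
    a common wavelet front end for speech; the paper's actual wavelet and
    features may differ."""
    feats = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        feats.append(sum(x * x for x in detail))  # detail-band energy
    feats.append(sum(x * x for x in approx))      # final approximation energy
    return feats
```

Because the Haar transform is orthonormal, the subband energies sum to the energy of the input frame, so the feature vector is a lossless redistribution of frame energy across frequency bands.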
Abstract:
An important step in residue number system (RNS) based signal processing is the conversion of the signal into the residue domain. Many implementations of this conversion have been proposed for various goals; one of them is direct conversion from an analogue input. A novel approach to analogue-to-residue conversion is proposed in this research, using the popular Sigma-Delta analogue-to-digital converter (SD-ADC). In this approach, the front end is the same as in a traditional SD-ADC, using a Sigma-Delta (SD) modulator with appropriate dynamic range, but the filtering is done by a filter implemented using RNS arithmetic. Hence, the natural output of the filter is an RNS representation of the input signal. The resolution, conversion speed, hardware complexity and implementation cost of the proposed SD-based analogue-to-residue converter are compared with those of existing analogue-to-residue converters based on Nyquist-rate ADCs.
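The residue mapping at the heart of any such converter is straightforward to sketch. The moduli set {3, 5, 7} below is an example only (the abstract does not give the actual moduli), and the Chinese Remainder Theorem reconstruction is shown just to verify the representation:

```python
from math import prod

def to_rns(x, moduli):
    """Residue representation of integer x for pairwise co-prime moduli."""
    return [x % m for m in moduli]

def from_rns(residues, moduli):
    """Reconstruct x (mod the product of the moduli) via the Chinese
    Remainder Theorem; valid because the moduli are pairwise co-prime."""
    M = prod(moduli)
    total = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)  # pow(Mi, -1, m): inverse of Mi mod m
    return total % M
```

The point of the RNS filter in the proposed converter is that additions and multiplications act independently on each small residue channel, which is what makes carry-free, parallel filter arithmetic possible.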
Abstract:
This project aims to explain how marketing techniques and the strategic community relationship are integrated, given that organizations make use of community concepts. The main marketing strategies are analysed: the marketing mix, geomarketing, services marketing, relationship marketing and social marketing. Marketing techniques such as direct marketing, product differentiation, market segmentation, market research, market intelligence, distribution channel optimization and electronic commerce are explained. In addition, community strategies such as community coalitions, grassroots organizations, community leadership and empowerment are presented. The methodology implemented for this project is theoretical-conceptual and brings together the contributions of several scientific documents from diverse areas of knowledge. The sources of information, concepts and theories were selected at the researcher's discretion according to their descriptive potential for the proposed integration. This research concludes that marketing techniques and strategies enable communication between organizations and communities. This makes participation between the two parties possible and is a key factor in the emergence of the strategic community relationship. Further research on the strategic community relationship, applied to organizations and communities, is recommended.
Abstract:
The growing momentum of Spatial Data Infrastructures (SDIs) generates demand for building geoportals and, consequently, for tools that not only ease their construction, configuration and deployment, but also offer the option of contracting professional technical support. OpenGeo Suite is a professional, integrated free-software package that covers everything from the storage of geographic data to its publication using OGC standards and the implementation of web GIS solutions with open-source JavaScript libraries. OpenGeo Suite allows multi-platform deployment (Linux, Windows and OS X), with four tightly integrated free-software components based on the use of OGC standards. The server-side components are aimed at the storage, configuration and publication of data by GIS technicians: PostgreSQL plus the PostGIS spatial extension handles the storage of geographic information, supporting spatial analysis functions; pgAdmin serves as the database management system, easing data import and updating; GeoServer handles the publication of geographic information from different data sources (PostGIS, SHP, Oracle Spatial, GeoTIFF, etc.), supporting most OGC standards for publishing geographic information (WMS, WFS, WCS) and formats (GML, KML, GeoJSON, SLD), and also offers tile caching through GeoWebCache. OpenGeo Suite provides two applications, GeoExplorer and GeoEditor, that allow a technician to build a geoportal with geometry-editing capabilities, plus an administration console (Dashboard) that eases the configuration of the components. On the client side, the components are JavaScript development libraries aimed at developers of web GIS applications:
OpenLayers, with support for raster and vector layers, styles, projections, tiling, editing tools, etc.; and finally GeoExt, for building the front end of geoportals, based on ExtJS and tightly coupled to OpenLayers.
Abstract:
Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al. 2008) is a state-of-the-art ocean model that is currently used for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours running on 40 processors, and produces roughly 20 GB of output as 50,000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command utilities for data pre-processing and post-processing prior to job resubmission. Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, to prevent it from accumulating on the remote system and to allow the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts.
G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight, and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat, and is therefore easy for system administrators to install and maintain. G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al. 2008) and POLCOMS (Holt et al. 2008) ocean models are installed, and there are plans to install the Hadley Centre's HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al. 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al. 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al. 2008), which aims to simulate the world's coastal oceans. A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) The scientist prepares input files on his or her local machine. (2) Using information provided by the Grid's Ganglia monitoring system, the scientist selects an appropriate compute resource. (3) The scientist runs the relevant workflow script on his or her local machine; this is unmodified except that calls to run the model (e.g. with "mpirun") are simply replaced with calls to "GRexRun". (4) The G-Rex middleware automatically handles the uploading of input files to the remote resource, and the downloading of output files back to the user, including their deletion from the remote system, during the run.
(5) The scientist monitors the output files, using familiar analysis and visualization tools on his or her own local machine. G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front-end to larger-scale Grid resources such as the UK National Grid service.
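Step (3) of the workflow above amounts to a one-line substitution over the existing shell script. A minimal sketch follows; the assumption that model launches start with "mpirun", and the GRexRun argument syntax shown, are illustrative only, not the actual G-Rex client interface:

```python
def grexify(script_line):
    """Replace a direct model launch (here assumed to begin with 'mpirun')
    with a call to the G-Rex client, leaving every other workflow line
    untouched. The GRexRun argument handling is an illustrative guess."""
    parts = script_line.split()
    if parts and parts[0] == "mpirun":
        return " ".join(["GRexRun"] + parts[1:])
    return script_line
```

This captures why G-Rex is low-impact: the pre- and post-processing lines of the workflow script are left exactly as they were, and only the launch command changes.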
Abstract:
The complexity of construction projects and the fragmentation of the construction industry undertaking those projects have effectively resulted in linear, uncoordinated and highly variable project processes in the UK construction sector. Research undertaken at the University of Salford resulted in the development of an improved project process, the Process Protocol, which considers the whole lifecycle of a construction project whilst integrating its participants under a common framework. The Process Protocol identifies the various phases of a construction project, with particular emphasis on what is described in the manufacturing industry as the ‘fuzzy front end’. The participants in the process are described in terms of the activities that need to be undertaken in order to achieve a successful project and process execution. In addition, the decision-making mechanisms, from a client perspective, are illustrated, and the foundations for a learning organization/industry are facilitated within a consistent Process Protocol.
Abstract:
Letter identification is a critical front end of the reading process. In general, conceptualizations of the identification process have emphasized arbitrary sets of distinctive features. However, a richer view of letter processing incorporates principles from the field of type design, including an emphasis on uniformities across letters within a font. The importance of uniformities is supported by a small body of research indicating that consistency of font increases letter identification efficiency. We review design concepts and the relevant literature, with the goal of stimulating further thinking about letter processing during reading.
Abstract:
The construction industry is widely recognised as being inherently exposed to risk and uncertainty. This necessitates effective project risk management to achieve the project objectives of time, cost and quality. A popular tool employed in projects to aid in the management of risk is a risk register. This tool documents the project risks and is often employed by the Project Manager (PM) to manage the associated risks on a project. This research aims to ascertain how widely risk registers are used by Project Managers as part of their risk management practices. Achieving this aim entailed interviewing ten PMs to discuss their use of the risk register as a risk management tool. The results from these interviews indicated the prevalent use of this document and recognised its effectiveness in the management of project risks. The findings identified the front end and feasibility phases of a project as crucial stages for using risk registers, noting the register as a vital ingredient in the risk response planning of the decision-making process. Moreover, the composition of the risk register was also examined, providing insight into how PMs produce and develop this tool. In conclusion, this research signifies the extensive use of the risk register by PMs. A majority of PMs were of the view that risk registers constitute an essential component of their project risk management practices. This suggests a need for further research on the extent to which risk registers actually help PMs to control the risks in a construction project, particularly residual risks, and how this can be improved to minimize deviations from expected outcomes.
Abstract:
The Main Injector Neutrino Oscillation Search (MINOS) experiment uses an accelerator-produced neutrino beam to perform precision measurements of the neutrino oscillation parameters in the "atmospheric neutrino" sector associated with muon neutrino disappearance. This long-baseline experiment measures neutrino interactions in Fermilab's NuMI neutrino beam with a near detector at Fermilab and again 735 km downstream with a far detector in the Soudan Underground Laboratory in northern Minnesota. The two detectors are magnetized steel-scintillator tracking calorimeters. They are designed to be as similar as possible in order to ensure that differences in detector response have minimal impact on the comparisons of event rates, energy spectra and topologies that are essential to MINOS measurements of oscillation parameters. The design, construction, calibration and performance of the far and near detectors are described in this paper. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
The H.R. MacMillan Space Centre is a multi-faceted organization whose mission is to educate, inspire and evoke a sense of wonder about the universe, our planet and space exploration. As a popular Vancouver science centre, it faces the same range of challenges and issues as other major attractions: how does the Space Centre maintain healthy public attendance in an increasingly competitive market, where visitors are presented with an ever richer range of choices for their leisure and entertainment spending? This front-end study investigated visitor attitudes, thoughts and preconceptions on the topic of space and astronomy. It also examined visitors' motivations for coming to a space science centre. Useful insights were obtained which will be applied to improve future programme content and exhibit development.
Abstract:
Most science centres in Canada employ science-educated floor staff to motivate visitors to have fun while enhancing the educational reach of the exhibits. Although bright and sensitive to visitors' needs, floor staff are rarely consulted in the planning, implementation, and modification phases of an exhibit. Instead, many development teams rely on costly third-party evaluations or skip the front-end and formative evaluations altogether, leading to costly errors that could have been avoided. This study seeks to reveal a correlation between floor staff's perception of visitors' interactions with an exhibit and visitors' actual experiences. If a correlation exists, a recommendation could be made to encourage planning teams to include floor staff in the formative and summative evaluations of an exhibit. This is especially relevant to science centres with limited budgets and for whom a divide exists between floor staff and management.
In this study, a formative evaluation of one exhibit was conducted, measuring both floor staff's perceptions of the visitor experience and visitors' own perceptions of the exhibit. Floor staff were then trained on visitor evaluation methods. A week later, floor staff and visitors were surveyed a second time on a different exhibit to determine whether an increase in accuracy existed.
The training session increased the specificity of the motivation and comprehension responses and the enthusiasm of the staff, but not their ability to predict observed behaviours with respect to ergonomics, learning indicators, holding power, and success rates. The results revealed that although floor staff underestimated visitors' success rates at the exhibits, staff accurately predicted visitors' behaviours with respect to holding power, ergonomics, learning indicators, motivation and comprehension, both before and after the staff training.