975 results for Practical problems
Abstract:
The time is ripe for a comprehensive mission to explore and document Earth's species. This calls for a campaign to educate and inspire the next generation of professional and citizen species explorers, investments in cyber-infrastructure and collections to meet the unique needs of the producers and consumers of taxonomic information, and the formation and coordination of a multi-institutional, international, transdisciplinary community of researchers, scholars and engineers with the shared objective of creating a comprehensive inventory of species and detailed map of the biosphere. We conclude that an ambitious goal to describe 10 million species in less than 50 years is attainable based on the strength of 250 years of progress, worldwide collections, existing experts, technological innovation and collaborative teamwork. Existing digitization projects are overcoming obstacles of the past, facilitating collaboration and mobilizing literature, data, images and specimens through cyber technologies. Charting the biosphere is enormously complex, yet necessary expertise can be found through partnerships with engineers, information scientists, sociologists, ecologists, climate scientists, conservation biologists, industrial project managers and taxon specialists, from agrostologists to zoophytologists. Benefits to society of the proposed mission would be profound, immediate and enduring, from detection of early responses of flora and fauna to climate change to opening access to evolutionary designs for solutions to countless practical problems. The impacts on the biodiversity, environmental and evolutionary sciences would be transformative, from ecosystem models calibrated in detail to comprehensive understanding of the origin and evolution of life over its 3.8 billion year history. The resultant cyber-enabled taxonomy, or cybertaxonomy, would open access to biodiversity data to developing nations, assure access to reliable data about species, and change how scientists and citizens alike access, use and think about biological diversity information.
Abstract:
The optimization of water distribution networks is a challenging problem because of the size and complexity of these systems. The field has been investigated by many authors since the second half of the twentieth century. Recently, to overcome the discrete nature of the variables and the nonlinearity of the equations, research has focused on the development of heuristic algorithms. These algorithms do not require continuity or linearity of the problem functions because they are coupled to an external hydraulic simulator that solves the mass-continuity and energy-conservation equations of the network. In this work, NSGA-II (Non-dominated Sorting Genetic Algorithm II), a heuristic multi-objective genetic algorithm based on the analogy of evolution in nature, has been used. Starting from an initial random set of solutions, called the population, it evolves them towards a front of solutions that minimize all the objectives separately and simultaneously. This is very useful in practical problems where multiple and conflicting goals are common. Usually, one of the main drawbacks of these algorithms is their computational cost: because the search is stochastic, many solutions must be analysed before good ones are found. The results of this thesis on the classical optimal design problem show that it is possible to improve the outcome by modifying the mathematical definition of the objective functions and the survival criterion, by inserting good solutions created by a cellular automaton, and by using rules created by a classifier algorithm (C4.5). This part was tested using the version of NSGA-II supplied by the Centre for Water Systems (University of Exeter, UK) in the MATLAB® environment. Even though steering the search in this way can constrain the algorithm, with the risk of missing the optimal set of solutions, it can greatly improve the results. Subsequently, thanks to the support of CINECA, a version of NSGA-II was implemented in the C language and parallelized: the results on global parallelization show the speed-up obtained, while the results on island parallelization show that communication among islands can improve the optimization. Finally, some tests on the optimization of pump scheduling were carried out. In this case, good results were found for a small network, while the solutions of a large problem suffered from the lack of constraints on the number of pump switches. Possible future research concerns the insertion of further constraints and the guidance of the evolution. In conclusion, the optimization of water distribution systems is still far from a definitive solution, but progress in this field can be very useful in reducing the cost of solutions to practical problems, where the high number of variables makes their management very difficult from a human point of view.
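To illustrate the kind of multi-objective selection NSGA-II performs, the sketch below extracts the first Pareto front from a small population. It is a minimal, assumed example in Python (the thesis itself uses the Exeter MATLAB implementation and a C version); the two-objective cost vectors, standing in for, say, pipe cost and pressure deficit, are invented.

```python
# Minimal sketch of Pareto dominance and first-front extraction, the core idea
# behind NSGA-II's non-dominated sorting (both objectives are to be minimized).
from typing import List, Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if a is no worse than b in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(population: List[Sequence[float]]) -> List[Sequence[float]]:
    """Return the non-dominated (Pareto-optimal) members of the population."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

# Hypothetical objective vectors: (pipe cost, pressure deficit) for candidate designs.
population = [(120.0, 4.0), (150.0, 1.0), (130.0, 4.5), (110.0, 6.0), (160.0, 0.9)]
print(first_front(population))  # -> [(120.0, 4.0), (150.0, 1.0), (110.0, 6.0), (160.0, 0.9)]
```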
Abstract:
The use of linear programming in various areas has increased with the significant improvement of specialized solvers. Linear programs are used as such to model practical problems, or as subroutines in algorithms such as formal proofs or branch-and-cut frameworks. In many situations a certified answer is needed, for example the guarantee that the linear program is feasible or infeasible, or a provably safe bound on its objective value. Most of the available solvers work with floating-point arithmetic and are thus subject to its shortcomings, such as rounding errors or underflow, and can therefore deliver incorrect answers. While adequate for some applications, this is unacceptable for critical applications like flight control or nuclear plant management because of the potential catastrophic consequences. We propose a method that gives a certified answer whether a linear program is feasible or infeasible, or returns 'unknown'. The advantage of our method is that it is reasonably fast and rarely answers 'unknown'. It works by computing a safe solution that is in some sense the best possible in the relative interior of the feasible set. To certify the relative interior, we employ exact arithmetic, whose use is nevertheless limited in general to critical places, allowing us to remain computationally efficient. Moreover, when certain conditions are fulfilled, our method is able to deliver a provable bound on the objective value of the linear program. We test our algorithm on typical benchmark sets and obtain higher rates of success compared to previous approaches to this problem, while keeping the running times acceptably small. The computed objective value bounds are in most cases very close to the known exact objective values. We demonstrate the usability of the method by additionally employing a variant of it in a different scenario, namely to improve the results of a Satisfiability Modulo Theories solver. Our method is used as a black box in the nodes of a branch-and-bound tree to implement conflict learning based on the certificate of infeasibility for linear programs consisting of subsets of linear constraints. The generated conflict clauses are in general small and give good prospects for reducing the search space. Compared to other methods we obtain significant improvements in the running time, especially on the large instances.
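A minimal sketch of the general idea of certifying a floating-point candidate with exact arithmetic only at the critical check is shown below in Python. The constraint data, the rational snapping of the candidate, and the helper name certified_feasible are illustrative assumptions, not the method of the thesis.

```python
# Sketch: certify feasibility of Ax <= b by checking a floating-point candidate
# with exact rational arithmetic (only the final check uses exact numbers).
from fractions import Fraction

def certified_feasible(A, b, x_candidate):
    """Return True only if x_candidate provably satisfies every constraint A[i]·x <= b[i]."""
    x = [Fraction(v).limit_denominator(10**12) for v in x_candidate]  # snap to nearby rationals
    for row, rhs in zip(A, b):
        lhs = sum(Fraction(a) * xi for a, xi in zip(row, x))
        if lhs > Fraction(rhs):          # exact comparison, no rounding error
            return False                 # this candidate does not certify feasibility
    return True

# Hypothetical 2-variable system: x1 + x2 <= 1, -x1 <= 0, -x2 <= 0.
A = [[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]
b = [1.0, 0.0, 0.0]
print(certified_feasible(A, b, [0.3, 0.3]))   # True: point certified feasible
print(certified_feasible(A, b, [0.7, 0.6]))   # False: cannot certify with this point
```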
Abstract:
Information is nowadays a key resource: machine learning and data mining techniques have been developed to extract high-level information from great amounts of data. As most data comes in the form of unstructured text in natural languages, research on text mining is currently very active and deals with practical problems. Among these, text categorization deals with the automatic organization of large quantities of documents into previously defined taxonomies of topic categories, possibly arranged in large hierarchies. In commonly proposed machine learning approaches, classifiers are automatically trained from pre-labeled documents: they can perform very accurate classification, but often require a substantial training set and notable computational effort. Methods for cross-domain text categorization have been proposed, allowing a set of labeled documents from one domain to be leveraged to classify those of another. Most methods use advanced statistical techniques, usually involving the tuning of parameters. A first contribution presented here is a method based on nearest centroid classification, where profiles of categories are generated from the known domain and then iteratively adapted to the unknown one. Despite being conceptually simple and having easily tuned parameters, this method achieves state-of-the-art accuracy on most benchmark datasets with fast running times. A second, deeper contribution involves the design of a domain-independent model to distinguish the degree and type of relatedness between arbitrary documents and topics, inferred from the different types of semantic relationships between their representative words, identified by specific search algorithms. The application of this model is tested on both flat and hierarchical text categorization, where it potentially allows the efficient addition of new categories during classification. Results show that classification accuracy still requires improvements, but models generated from one domain are shown to be effectively reusable in a different one.
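The following Python sketch illustrates, under stated assumptions, the flavour of the first contribution: nearest-centroid profiles built on a labelled source domain and iteratively adapted towards an unlabelled target domain. The vectors, the blending factor and the function names are invented for illustration and do not reproduce the actual method.

```python
# Sketch of cross-domain nearest-centroid classification with iterative adaptation:
# centroids built on a labeled source domain are moved toward the target-domain
# documents they attract (cosine similarity on term-frequency vectors).
import numpy as np

def normalize(v):
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def adapt_centroids(source_docs, source_labels, target_docs, n_iter=5, blend=0.5):
    labels = sorted(set(source_labels))
    centroids = {c: normalize(np.mean([d for d, l in zip(source_docs, source_labels) if l == c], axis=0))
                 for c in labels}
    for _ in range(n_iter):
        # Assign each target document to the closest centroid.
        assign = {c: [] for c in labels}
        for d in target_docs:
            best = max(labels, key=lambda c: float(np.dot(normalize(d), centroids[c])))
            assign[best].append(d)
        # Move each centroid toward the mean of the target documents it attracted.
        for c in labels:
            if assign[c]:
                target_mean = normalize(np.mean(assign[c], axis=0))
                centroids[c] = normalize((1 - blend) * centroids[c] + blend * target_mean)
    return centroids

# Hypothetical 3-dimensional term-frequency vectors for two categories.
src = [np.array([3.0, 0.0, 1.0]), np.array([0.0, 4.0, 1.0])]
centroids = adapt_centroids(src, ["sports", "politics"], [np.array([2.0, 0.5, 2.0])])
print(centroids)
```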
Abstract:
The rapid development of the computer industry through the continuous miniaturization of transistors is bringing silicon technology ever closer to the limit beyond which tunneling processes in the transistors no longer permit further shrinking and a further increase of their density in processors. The future of computer technology lies in the processing of quantum information. For the development of quantum computers, the detection and targeted manipulation of individual spins in solids is of utmost importance. Standard methods of spin detection, such as ESR, however, only allow the detection of spin ensembles. The idea that should make the readout of single spins possible is to carry out the manipulation separately from the detection.
The NV− centre is a special lattice defect in diamond that can be used as an atomic, optically readable magnetic-field sensor. By measuring its fluorescence it should be possible to detect, via spin-spin coupling, the manipulation of other, optically undetectable "dark spins" in the immediate vicinity of the NV centre. The proposed model of the quantum computer is based on N@C60 enclosed in SWCNTs. The peapods, as the units of nitrogen-containing fullerenes packed into carbon nanotubes are called, are intended to form the basis of the computing units of a truly scalable quantum computer. The computations carried out in them with the nitrogen electron spin are to be read out optically via near-surface NV centres (in diamond plates) above which the peapods are to be positioned.
The primary goal of the present work was to optically detect the coupling of near-surface single NV centres to the optically undetectable spins of radical molecules on the diamond surface by means of ODMR coupling experiments, and thereby to take decisive steps towards the realization of a quantum register.
An ODMR setup that was still at the development stage was rebuilt, and its existing functionality was verified on commercial NV-centre-rich nanodiamonds. In the next step, the efficiency and mode of the measurement were adapted to the detection and manipulation of single NV centres implanted near the surface (< 7 nm depth) of diamond plates. A very large part of the work, which can only partly be described here, consisted of adapting the existing control software to the requirements of the practical measurement. Subsequently, the correct functioning of all implemented pulse sequences and other software improvements was verified by measurements on near-surface implanted single NV centres. The setup was also extended by the components required for measuring the double resonance, such as a controllable electromagnet and an RF signal source. Taking the thermal stability of N@C60 into account, an optical cryostat was also designed, built, integrated into the setup and characterized for future experiments.
The spin-spin coupling experiments were carried out with the oxygen-stable galvinoxyl radical as a model system for coupling. Via the coupling to an NV centre, the RF spectrum of the coupled radical spin was observed. A Rabi nutation of the coupled spin could also be recorded.
Further aspects of the peapod measurement and of the surface implantation were also considered. It was investigated whether the NV detection is disturbed by the SWCNTs, peapods or fullerenes. It turned out that the components of the planned quantum computer, with the exception of the C60 clusters, are not detectable in an ODMR measurement configuration and will not disturb the NV measurement. It was also considered which types of commercial diamond plates are suitable for surface implantation, a density of implanted NV centres suitable for the coupling measurements was estimated, and an implantation with the estimated density was examined.
Abstract:
An introductory course in probability and statistics for third-year and fourth-year electrical engineering students is described. The course is centered around several computer-based projects that are designed to achieve two objectives. First, the projects illustrate the course topics and provide hands-on experience for the students. The second and equally important objective of the projects is to convey the relevance and usefulness of probability and statistics to practical problems that undergraduate students can appreciate. The benefit of this course is to motivate electrical engineering students to excel in the study of probability concepts, instead of viewing the subject as one more course requirement toward graduation. The authors co-teach the course, and MATLAB is used for most of the computer-based projects.
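The course projects themselves are written in MATLAB; as an assumed illustration of the kind of project described, the Python sketch below compares a Monte Carlo estimate of the bit error rate of BPSK over an AWGN channel with the theoretical Q-function value.

```python
# Sketch of a typical computer-based probability project: Monte Carlo estimate of the
# BPSK bit error rate over AWGN, compared with the theoretical value Q(sqrt(2*Eb/N0)).
import math, random

def q_function(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def simulate_ber(eb_n0_db, n_bits=100_000):
    eb_n0 = 10 ** (eb_n0_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * eb_n0))       # noise std for unit-energy symbols
    errors = 0
    for _ in range(n_bits):
        bit = random.randint(0, 1)
        symbol = 1.0 if bit else -1.0
        received = symbol + random.gauss(0.0, sigma)
        if (received > 0) != bool(bit):          # hard decision differs from transmitted bit
            errors += 1
    return errors / n_bits

for snr_db in (0, 2, 4, 6):
    print(snr_db, simulate_ber(snr_db), q_function(math.sqrt(2 * 10 ** (snr_db / 10.0))))
```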
Abstract:
The theory on the intensities of 4f-4f transitions introduced by B.R. Judd and G.S. Ofelt in 1962 has become a centerpiece of rare-earth optical spectroscopy over the past five decades. Many fundamental studies have since explored the physical origins of the Judd–Ofelt theory and have proposed numerous extensions to the original model. A great number of studies have applied the Judd–Ofelt theory to a wide range of rare-earth doped materials, many of them with important applications in solid-state lasers, optical amplifiers, phosphors for displays and solid-state lighting, upconversion and quantum-cutting materials, and fluorescent markers. This paper takes the view of the experimentalist who is interested in appreciating the basic concepts, implications, assumptions, and limitations of the Judd–Ofelt theory in order to properly apply it to practical problems. We first present the formalism for calculating the wavefunctions of 4f electronic states in a concise form and then show their application to the calculation and fitting of 4f-4f transition intensities. The potential, limitations, and pitfalls of the theory are discussed, and a detailed case study of LaCl3:Er3+ is presented.
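In practice, the central step of a Judd–Ofelt analysis is a linear least-squares fit of the three intensity parameters Ω2, Ω4, Ω6 to measured line strengths via S = Σλ Ωλ |⟨‖U(λ)‖⟩|². The sketch below shows that fitting step in Python with invented matrix elements and line strengths; it is an assumed illustration, not the LaCl3:Er3+ case study of the paper.

```python
# Sketch: least-squares fit of the Judd-Ofelt intensity parameters Omega_2, Omega_4, Omega_6
# from measured line strengths, S_calc = sum_lambda Omega_lambda * |<||U^(lambda)||>|^2.
# The squared reduced matrix elements and "measured" strengths below are invented numbers.
import numpy as np

# Each row: [U2^2, U4^2, U6^2] for one absorption band.
U = np.array([
    [0.0195, 0.1173, 1.4316],
    [0.0000, 0.5354, 0.4619],
    [0.7125, 0.4123, 0.0925],
    [0.2187, 0.5178, 0.1106],
])
S_meas = np.array([2.1e-20, 1.3e-20, 1.8e-20, 0.9e-20])  # measured line strengths (hypothetical)

# Linear least squares for the three Omega parameters.
omega, residuals, rank, _ = np.linalg.lstsq(U, S_meas, rcond=None)
S_calc = U @ omega
print("Omega_2,4,6 =", omega)
print("rms deviation =", np.sqrt(np.mean((S_calc - S_meas) ** 2)))
```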
Abstract:
The present paper introduces the topical area of the Polish-Swiss research project FLORIST (Flood risk on the northern foothills of the Tatra Mountains), informs on its objectives, and reports on initial results. The Tatra Mountains are the area of the highest precipitation in Poland and contribute largely to flood generation. The project is focused on four competence clusters: observation-based climatology, model-based climate change projections and impact assessment, dendrogeomorphology, and the impact of large wood debris on fluvial processes. The knowledge generated in the FLORIST project is likely to have an impact on the understanding and interpretation of flood risk on the northern foothills of the Tatra Mountains in the past, present, and future. It can help solve important practical problems related to flood risk reduction strategies and flood preparedness.
Abstract:
An Advanced Planning System (APS) offers support at all planning levels along the supply chain while observing limited resources. We consider an APS for process industries (e.g. chemical and pharmaceutical industries) consisting of the modules network design (for long-term decisions), supply network planning (for medium-term decisions), and detailed production scheduling (for short-term decisions). For each module, we outline the decision problem, discuss the specifics of process industries, and review state-of-the-art solution approaches. For the module detailed production scheduling, a new solution approach is proposed for the case of batch production, which can solve much larger practical problems than the methods known thus far. The new approach decomposes detailed production scheduling for batch production into batching and batch scheduling. The batching problem converts the primary requirements for products into individual batches, where the workload is to be minimized. We formulate the batching problem as a nonlinear mixed-integer program and transform it into a linear mixed-binary program of moderate size, which can be solved by standard software. The batch scheduling problem allocates the batches to scarce resources such as processing units, workers, and intermediate storage facilities, where some regular objective function like the makespan is to be minimized. The batch scheduling problem is modelled as a resource-constrained project scheduling problem, which can be solved by an efficient truncated branch-and-bound algorithm developed recently. The performance of the new solution procedures for batching and batch scheduling is demonstrated by solving several instances of a case study from process industries.
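As a hedged illustration of the batching idea (covering primary requirements with an integer number of batches whose sizes are bounded, while minimizing a simple workload proxy), the toy model below uses the PuLP modelling library in Python. The data, the objective, and the use of PuLP are assumptions for illustration and do not reproduce the mixed-binary formulation of the paper.

```python
# Toy batching sketch: cover the primary requirement of each product with an integer
# number of batches whose sizes must lie between the minimum and maximum batch size,
# minimizing the total number of batches as a simple proxy for the work load.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpInteger, value

demand = {"A": 130.0, "B": 75.0}                 # primary requirements (hypothetical)
size_min = {"A": 20.0, "B": 15.0}                # minimum batch sizes
size_max = {"A": 50.0, "B": 40.0}                # maximum batch sizes

prob = LpProblem("batching", LpMinimize)
n = {p: LpVariable(f"n_{p}", lowBound=1, cat=LpInteger) for p in demand}   # number of batches
q = {p: LpVariable(f"q_{p}", lowBound=0) for p in demand}                  # total quantity produced

for p in demand:
    prob += q[p] >= demand[p]                    # cover the requirement
    prob += q[p] >= size_min[p] * n[p]           # every batch at least the minimum size
    prob += q[p] <= size_max[p] * n[p]           # every batch at most the maximum size

prob += lpSum(n.values())                        # objective: minimize number of batches
prob.solve()
print({p: (int(value(n[p])), value(q[p])) for p in demand})
```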
Abstract:
Context-sensitive analysis provides information which is potentially more accurate than that provided by context-free analysis. Such information can then be applied to validate/debug the program and/or to specialize the program, obtaining important improvements. Unfortunately, context-sensitive analysis of modular programs poses important theoretical and practical problems. One solution, used in several proposals, is to resort to context-free analysis. Other proposals do address context-sensitive analysis, but are only applicable when the description domain used satisfies rather restrictive properties. In this paper, we argue that a general framework for context-sensitive analysis of modular programs, i.e., one that allows using all the domains which have proved useful in practice in the non-modular setting, is indeed feasible and very useful. Driven by our experience in the design and implementation of analysis and specialization techniques in the context of CiaoPP, the Ciao system preprocessor, in this paper we discuss a number of design goals for context-sensitive analysis of modular programs as well as the problems which arise in trying to meet these goals. We also provide a high-level description of a framework for analysis of modular programs which substantially meets these objectives. This framework is generic in that it can be instantiated in different ways in order to adapt to different contexts. Finally, the behavior of the different instantiations w.r.t. the design goals that motivate our work is also discussed.
Abstract:
The W3C Best Practices for Multilingual Linked Open Data community group was born one year ago during the last MLW workshop in Rome. Today it continues to lead the effort of a large community towards acquiring a shared view of the issues caused by multilingualism on the Web of Data and of their possible solutions. Despite our initial optimism, we found the task of identifying best practices for ML-LOD a difficult one, requiring a deep understanding of the Web of Data in its multilingual dimension and in its practical problems. In this talk we will review the progress of the group so far, mainly in the identification and analysis of topics, use cases, and design patterns, as well as the future challenges.
Abstract:
The correct forecasting of assets in the field of transportation logistics is of vital importance for a suitable planning and optimization of the necessary means and resources. Up to now, port planning studies have relied mainly on empirical models, used to plan new terminals and to develop master plans when no initial data are available; on analytical models, closer to queuing theory and waiting times, with complex mathematical formulations that require significant simplifications to yield a practical, manageable model; or on simulation models, which demand a significant investment in software and complex developments to produce acceptable results.
Data Mining (DM) is a modern interdisciplinary field comprising techniques that operate automatically (requiring almost no human intervention) and are highly efficient when dealing with practical problems characterized by huge databases containing significant amounts of information. The practical application of these disciplines extends to many commercial and research fields, addressing forecasting, classification or diagnosis problems. Among the different Data Mining techniques, artificial neural networks (ANN) and probabilistic, or Bayesian, networks (BN) allow all the relevant information for a given problem to be modelled jointly. This PhD work analyses their application to two practical cases in the port field, specifically to container terminals. It details how ANN have been developed as a tool to produce traffic and resource forecasts for several ports from operating variables, yielding continuous values. For the Bayesian networks (BN), a similar development has been carried out, yielding discrete values (an interval). The main finding is the possibility of using both ANN and BN to estimate the future needs of a port's or terminal's physical parameters, as well as the relationships between them within a specific terminal, which allows a correct assignment of the necessary means and thus an increase in the terminal's productive efficiency. As a final step, a short-term complementarity study of both models was carried out in order to verify the obtained results. It can thus be stated that these prediction methods can be a very useful aid to port planning.
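As an assumed illustration of the ANN part of the study, the sketch below trains a small neural-network regressor (scikit-learn standing in for the actual tool) that maps a few hypothetical terminal operating variables to a continuous throughput forecast; the features and figures are invented.

```python
# Sketch: a small neural-network regressor mapping terminal operating variables to a
# continuous traffic forecast. Library choice, features and numbers are assumptions
# for illustration, not the models developed in the thesis.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical records: [berth length (m), number of cranes, storage area (ha)] -> throughput (TEU/year)
X = np.array([[900, 6, 25], [1200, 8, 32], [1500, 10, 40], [1800, 12, 50], [2100, 14, 60]], dtype=float)
y = np.array([300_000, 450_000, 620_000, 800_000, 990_000], dtype=float)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X, y)
print(model.predict(np.array([[1650.0, 11.0, 45.0]])))  # forecast for a planned terminal configuration
```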
Abstract:
The motivation of this work is the development of an automatic optimization strategy for the large-scale shape optimization problems that arise in the aeronautics industry to improve aerodynamic performance. It covers several aspects, from the use of Non-Uniform Rational B-Splines (NURBS), the calculation of gradients with the continuous adjoint formulation, the development of a volumetric b-spline parameterization, and mesh adaptation and intersection handling, to the adaptation of Computational Fluid Dynamics (CFD) algorithms to highly parallel architectures in order to speed up the optimization process. With the development of the adjoint formulation, gradient-based methods have become a promising approach for improving the aerodynamic performance of aircraft designs. The adjoint methodology allows the gradients of a cost function, such as drag or lift, to be evaluated with respect to all design variables at roughly the cost of one CFD simulation. However, some practical problems have delayed its full adoption by industry, which can be summarized as: integrability, computational performance, and robustness of the adjoint solution. This work tackles some of these issues and analyses them in well-known test cases. In summary, the contributions comprise:
• The employment of NURBS as design variables in an automatic optimization loop for the improvement of the aerodynamic performance of aircraft wings in the transonic regime.
• The development of point-inversion algorithms to calculate the NURBS parametric coordinates from the space coordinates, in order to link the computational grid vertices to the NURBS.
• The use and validation of the adjoint formulation to calculate the gradients from the surface sensitivities in an automatic optimization loop, and the evaluation of its reliability against finite differences.
• Algorithms that take advantage of the underlying CAD geometry description, in the form of NURBS patches, to handle intersections, such as the wing-fuselage junction, and mesh adaptation.
• An open-source NURBS library, DOMINO NURBS, developed in this work and offered to the community as free, open-source code, since few usable NURBS libraries are available.
• The implementation of a transonic CFD solver from scratch on a graphics card, to assess how conventional unstructured-grid CFD solvers can be adapted to highly parallel architectures.
• Finally, the use of Green's functions as an efficient parallelization scheme for numerical solvers.
The presented work has been supported by the activities carried out at the Fluid Dynamics branch of the National Institute for Aerospace Technology (INTA) through the nationally funded research projects DOMINO, SIMUMAT, and CORESIMULAERO, in line with the activities carried out by the Methods and Tools and Flight Physics departments at Airbus and the Group for Aeronautical Research and Technology in Europe (GARTEUR) action group AG/52.
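As an assumed illustration of the point-inversion task listed among the contributions (finding the parametric coordinate of a spatial point on a NURBS entity), the Python sketch below evaluates a rational curve with the Cox-de Boor recursion and performs a naive sampling-based inversion. A production algorithm, such as the one in the DOMINO NURBS library, would refine this with Newton iterations; the knots, weights and control points are made up.

```python
# Sketch of NURBS curve evaluation and a naive point-inversion step (finding the parameter u
# whose curve point is closest to a given spatial point).
import numpy as np

def basis(i, p, u, U):
    """Cox-de Boor recursion for the B-spline basis function N_{i,p}(u)."""
    if p == 0:
        return 1.0 if (U[i] <= u < U[i + 1]) or (u == U[-1] and U[i] <= u <= U[i + 1]) else 0.0
    left = 0.0 if U[i + p] == U[i] else (u - U[i]) / (U[i + p] - U[i]) * basis(i, p - 1, u, U)
    right = 0.0 if U[i + p + 1] == U[i + 1] else (U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1]) * basis(i + 1, p - 1, u, U)
    return left + right

def nurbs_point(u, p, U, P, w):
    """Evaluate a rational (NURBS) curve point at parameter u."""
    num = np.zeros(len(P[0]))
    den = 0.0
    for i in range(len(P)):
        b = basis(i, p, u, U) * w[i]
        num += b * np.array(P[i])
        den += b
    return num / den

def invert_point(x, p, U, P, w, samples=2000):
    """Naive point inversion: return the sampled parameter whose curve point is closest to x."""
    us = np.linspace(U[p], U[-p - 1], samples)
    dists = [np.linalg.norm(nurbs_point(u, p, U, P, w) - np.array(x)) for u in us]
    return us[int(np.argmin(dists))]

# Quadratic NURBS arc (made-up data): degree 2, three control points, one non-unit weight.
p = 2
U = [0, 0, 0, 1, 1, 1]
P = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
w = [1.0, 0.7, 1.0]
u_star = invert_point((1.0, 0.4), p, U, P, w)
print(u_star, nurbs_point(u_star, p, U, P, w))
```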
Abstract:
The deployment of systems for human-to-machine communication by voice requires overcoming a variety of obstacles that affect the speech-processing technologies. Problems encountered in the field might include variation in speaking style, acoustic noise, ambiguity of language, or confusion on the part of the speaker. The diversity of these practical problems encountered in the "real world" leads to the perceived gap between laboratory and "real-world" performance. To answer the question "What applications can speech technology support today?" the concept of the "degree of difficulty" of an application is introduced. The degree of difficulty depends not only on the demands placed on the speech recognition and speech synthesis technologies but also on the expectations of the user of the system. Experience has shown that deployment of effective speech communication systems requires an iterative process. This paper discusses general deployment principles, which are illustrated by several examples of human-machine communication systems.
Abstract:
Objective: The aim of this study was to gain a better understanding of the needs of male and female oncology patients within a community cancer setting to inform the provision of psychosocial services. Data obtained from 835 single-page measures of oncology patient distress were collected and analyzed to examine the relationship between gender and reported level of distress, the source of this distress, and requests for follow-up from psychosocial service providers. Method: Patients in medical and radiation oncology were given a distress screener tool that included a distress thermometer, a problem checklist, and a list of psychosocial service providers with whom the patient could request to speak. Results: Women reported higher levels of distress than men (p=.003). Women were also more likely than men to endorse practical problems as the cause of their distress (p=.003). A marginally significant relationship between gender and requesting the cancer resource navigator was also found (p=.059). Conclusion: Gender is a salient factor in reported distress among cancer patients. Although no single variable can entirely explain an individual's response to cancer, male and female patients do appear to have distinctive, gender-specific needs. Psychosocial interventions that account for differences related to gender role may be particularly beneficial. These results also illustrate the utility of consistent screening practices to better understand and meet the psychosocial needs of oncology patients.