874 results for Chemo- And Multi-enzymatic Processes


Relevance:

100.00%

Publisher:

Abstract:

The fertility of coastal and estuarine waters is of great concern because of its influence on the productivity of these waters. Seasonal variations in the distribution of organic carbon, total nitrogen and total phosphorus in the sediments of Kuttanad Waters, a part of the tropical Cochin Estuary on the south-west coast of India, are examined to identify the contribution of sediments to the fertility of the aquatic systems. The adjoining region has considerable agricultural activity. The freshwater zones had higher quantities of silt and clay, whereas the estuarine zone was sandier. Organic carbon, total phosphorus and total nitrogen were higher in the freshwater zones and lower in the estuarine zones. Total phosphorus and organic carbon showed the lowest values during monsoon periods. No significant trends were observed in the seasonal distribution of total nitrogen. The C/N, C/P and N/P ratios, together with the phosphorus and nitrogen content, indicate significant modification in the character of the organic matter. Substantial amounts of the organic matter can contribute to reducing conditions and modify diagenetic processes.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a lattice-based visual metaphor for knowledge discovery in electronic mail. It allows a user to navigate email using a visual lattice metaphor rather than a tree structure. By using such a conceptual multi-hierarchy, the content and shape of the lattice can be varied to accommodate any number of queries against the email collection. The system provides more flexibility in retrieving stored emails and can be generalised to any electronic documents. The paper presents the underlying mathematical structures, and a number of examples of the lattice and multi-hierarchy working with a prototypical email collection.
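
A toy sketch of the formal-concept machinery that such a lattice view rests on: emails are treated as objects, index terms as attributes, and every (extent, intent) pair is a node of the lattice. The email collection and terms below are invented, and this naive enumeration is for illustration only, not the paper's implementation.

from itertools import combinations

# Hypothetical toy email collection: each email carries a set of index terms.
emails = {
    "mail1": {"from:alice", "project-x", "budget"},
    "mail2": {"from:alice", "project-x"},
    "mail3": {"from:bob", "budget"},
}

def extent(attrs):
    """All emails that carry every attribute in attrs."""
    return {m for m, a in emails.items() if attrs <= a}

def intent(mails):
    """All attributes shared by every email in mails."""
    if not mails:
        return set().union(*emails.values())
    return set.intersection(*(emails[m] for m in mails))

def all_concepts():
    """Naively enumerate the formal concepts (extent, intent) of the toy context."""
    attributes = sorted(set().union(*emails.values()))
    found = set()
    for r in range(len(attributes) + 1):
        for combo in combinations(attributes, r):
            e = frozenset(extent(set(combo)))
            found.add((e, frozenset(intent(e))))
    return found

for e, i in sorted(all_concepts(), key=lambda c: (len(c[0]), sorted(c[0]))):
    print(sorted(e), "<->", sorted(i))

Each printed pair is one lattice node; ordering the nodes by extent inclusion yields the conceptual multi-hierarchy that the paper navigates visually.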

Relevance:

100.00%

Publisher:

Abstract:

Laser-induced plasma spectroscopy (LIPS) is a spectrochemical elemental analysis technique for determining the atomic composition of an arbitrary sample. The analysis requires no special sample preparation and can be carried out under atmospheric conditions on samples in any state of aggregation. Femtosecond laser pulses offer the advantages of precise ablation with little thermal damage and high reproducibility. This makes fs-LIPS a promising tool for the microanalysis of technical samples, in particular for studying their fatigue behaviour. Of particular interest is how the initiated microcracks propagate within the material-specific structure. The present work therefore aimed to develop a fast and easy-to-use 3D raster-mapping technique for studying crack propagation in TiAl, a new class of alloys. To this end, fs-LIPS (30 fs, 785 nm) was combined with a modified microscope setup (objective: 50x/NA 0.5) that allows precise, automated sample positioning. Spectrochemical sensitivity and spatial resolution were investigated in energy-dependent single- and multi-pulse experiments. In TiAl, 10 laser pulses per position with a pulse energy of 100 nJ each gave the best compromise between a high S/N ratio of 10:1 and small crater structures with inner diameters of 1.4 µm. The lateral resolution decisive for this method, defined as the minimum crater spacing at constant LIPS signal, is 2 µm with the above parameters and is the highest resolution reported so far for far-field fs-LIPS-based micro-/mapping analysis. fs-LIPS scans of test structures and of microcracks in TiAl demonstrate a spectrochemical sensitivity of 3 %. Depth scans with the same parameters achieve an axial resolution of 1 µm. To increase the spectrochemical sensitivity of fs-LIPS and to gain a better understanding of the physical processes during laser ablation, pump-probe experiments were carried out to investigate to what extent fs double pulses influence laser-induced material removal and plasma emission. For this purpose, pulse separations from 100 fs to 2 ns were realised in a Mach-Zehnder interferometer, the total energy and the intensity ratio of the two pulses were varied, and the influence of the material parameters was examined. Both the LIPS signal and the crater structures depend on the delay time. These dependencies were divided into four regimes and assigned to the physical processes during laser ablation: thermalisation of the electron system for pulse separations below 1 ps, melting processes between 1 and 10 ps, the onset of ablation after several tens of picoseconds, and the expansion of the plasma plume after more than 100 ps. The LIPS signal is efficiently enhanced in this process and reaches its maximum at 800 ps. The crater diameters change little as a function of the pulse separation compared with the depth. The total ablation rate varies by at most 50 %, whereas the LIPS signal increases several-fold: typically threefold for Ti and TiAl, and tenfold for Al. The measured transients show high reproducibility but hardly any dependence on energy or material.
Based on these results, a targeted optimisation of the DP-LIPS parameters was carried out on Al: at a pulse separation of 800 ps and a total energy of 65 nJ (four times above the ablation threshold), a 40-fold signal enhancement with reduced noise was achieved. The crater diameters increased by 44 % to (650±150) nm and the crater depth doubled to (100±15) nm. It was thus possible to increase the spectrochemical sensitivity of fs-LIPS while maintaining the high spatial resolution.

Relevance:

100.00%

Publisher:

Abstract:

The research in this dissertation covers the development and application of short- and long-term climate predictions. The short-term predictions address monthly and seasonal climate, i.e. forecast horizons from the next month, over a season, up to about a year. The long-term predictions pertain to the analysis of inter-annual and decadal climate variations over the whole 21st century. These two climate prediction methods are validated and applied in the study area, the Khlong Yai (KY) river basin on the eastern seaboard of Thailand, a major industrial zone of the country that has been suffering from severe drought and water shortage in recent years. Since water resources are essential for further industrial development in this region, a thorough analysis of the potential climate change and its subsequent impact on the water supply in the area is at the heart of this thesis. The short-term forecast of the next-season climate, such as temperatures and rainfall, offers a potential general guideline for water management and reservoir operation. To that end, statistical models based on autoregressive techniques, i.e. AR, ARIMA and ARIMAex (the latter including additional external regressors), as well as multiple linear regression (MLR) models, are developed and applied in the study region. Teleconnections between ocean states and the local climate are investigated, used as extra external predictors in the ARIMAex and MLR models, and shown to enhance the accuracy of the short-term predictions significantly. However, as the ocean-state / local-climate teleconnective relationships provide only a one- to four-month lead time, the ocean-state indices can support only a one-season-ahead forecast. Hence, GCM climate predictors are also suggested as an additional predictor set for a more reliable and somewhat longer short-term forecast. To prepare early-warning information on possible future climate change with potentially adverse hydrological impacts in the study region, the long-term climate prediction methodology is applied. The latter is based on the downscaling of climate predictions from several single- and multi-domain GCMs, using the two well-known downscaling methods SDSM and LARS-WG and a newly developed MLR downscaling technique that allows the incorporation of a multitude of monthly or daily climate predictors from one or several (multi-domain) parent GCMs. The numerous downscaling experiments indicate that the MLR method is more accurate than SDSM and LARS-WG in predicting the recent 20th-century (1971-2000) long-term monthly climate in the region. The MLR model is consequently employed to downscale 21st-century GCM climate predictions under SRES scenarios A1B, A2 and B1. However, since the hydrological watershed model requires daily climate input data, a new stochastic daily climate generator is developed to rescale monthly observed or predicted climate series to daily series, while adhering to the statistical and geospatial distributional attributes of observed (past) daily climate series in the calibration phase. Employing this daily climate generator, 30 realizations of future daily climate series from the downscaled monthly GCM climate predictor sets are produced and used as input to the SWAT distributed watershed model, to simulate future streamflow and other hydrological water-budget components in the study region in a multi-realization manner.
In addition to a general examination of the future changes of the hydrological regime in the KY basin, potential future changes of the water budgets of the three main reservoirs in the basin are analysed, as these are a major source of water supply in the study region. The results of the long-term 21st-century downscaled climate predictions provide evidence that, compared with the 20th-century reference period, the future climate in the study area will be more extreme, particularly for SRES A1B. Thus, temperatures will be higher and exhibit larger fluctuations. Although the future intensity of the rainfall remains nearly constant, its spatial distribution across the region partially changes. There is further evidence that sequential rainfall occurrence will decrease, so that short periods of high intensity will be followed by longer dry spells. This change in the sequential rainfall pattern will also lead to seasonal reductions of the streamflow and seasonal decreases of the water storage in the reservoirs. In any case, these predicted future climate changes and their hydrological impacts should encourage water planners and policy makers to develop adaptation strategies to properly handle the future water supply in this area, following the guidelines suggested in this study.
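
As a rough sketch of the short-term approach described above (an ARIMA-type model extended with external ocean-state regressors), the fragment below fits a SARIMAX model from the statsmodels library on synthetic monthly data; the series, the model orders and the two-month lag are assumptions chosen only to illustrate the workflow, not the settings used in the thesis.

import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Toy monthly series: local rainfall (endogenous) and an ocean-state index such as
# an SST anomaly (exogenous). Both are synthetic stand-ins for the thesis data.
rng = np.random.default_rng(0)
idx = pd.date_range("1991-01", periods=240, freq="MS")
sst_index = pd.Series(np.sin(2 * np.pi * np.arange(240) / 12)
                      + 0.3 * rng.standard_normal(240), index=idx)
rainfall = pd.Series(120 + 40 * sst_index.shift(2).fillna(0)
                     + 10 * rng.standard_normal(240), index=idx)

# ARIMA-type model with an external (teleconnection) regressor; orders are arbitrary here.
model = SARIMAX(rainfall, exog=sst_index, order=(1, 0, 1), seasonal_order=(1, 0, 0, 12))
fitted = model.fit(disp=False)

# One-season-ahead (3-month) forecast. Future exog values must be supplied, which is
# why the ocean-state lead time limits the usable forecast horizon.
future_sst = np.array([[0.5], [0.4], [0.2]])   # assumed index values for the next 3 months
print(fitted.forecast(steps=3, exog=future_sst))

The same pattern, with GCM predictors in place of (or alongside) the ocean-state index, corresponds to the longer short-term forecasts suggested in the abstract.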

Relevance:

100.00%

Publisher:

Abstract:

Autonomous vehicles are increasingly being used in mission-critical applications, and robust methods are needed for controlling these inherently unreliable and complex systems. This thesis advocates the use of model-based programming, which allows mission designers to program autonomous missions at the level of a coach or wing commander. To support such a system, this thesis presents the Spock generative planner. To generate plans, Spock must be able to piece together vehicle commands and team tactics whose complex behavior is represented by concurrent processes. This is in contrast to traditional planners, whose operators represent simple atomic or durative actions. Spock represents operators using the RMPL language, which describes behaviors using parallel and sequential compositions of state and activity episodes. RMPL is useful for controlling mobile autonomous missions because it allows mission designers to quickly encode expressive activity models using object-oriented design methods and an intuitive set of activity combinators. Spock is also significant in that it uniformly represents operators and plan-space processes in terms of Temporal Plan Networks, which support temporal flexibility for robust plan execution. Finally, Spock is implemented as a forward-progression optimal planner that walks monotonically forward through plan processes, closing any open conditions and resolving any conflicts. This thesis describes the Spock algorithm in detail, along with example problems and test results.
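
For readers unfamiliar with the temporal-flexibility machinery mentioned above, the sketch below checks the consistency of a simple temporal network, the substrate on which Temporal Plan Networks build, by looking for negative cycles in its distance graph; the events and bounds are invented, and this is not Spock's code.

# Simple temporal network (STN) consistency check via negative-cycle detection.
# Each constraint states: lb <= time(b) - time(a) <= ub. The distance-graph encoding
# adds edge a->b with weight ub and edge b->a with weight -lb; the STN is consistent
# iff that graph has no negative cycle (checked here with Bellman-Ford).
def stn_consistent(events, constraints):
    edges = []
    for a, b, lb, ub in constraints:
        edges.append((a, b, ub))
        edges.append((b, a, -lb))
    dist = {e: 0.0 for e in events}          # virtual source at distance 0 to every event
    for _ in range(len(events) - 1):
        for a, b, w in edges:
            if dist[a] + w < dist[b]:
                dist[b] = dist[a] + w
    # One more relaxation pass: any further improvement implies a negative cycle.
    return all(dist[a] + w >= dist[b] for a, b, w in edges)

# Toy mission fragment with an inconsistent set of bounds: the two activities need
# at least 5 + 2 = 7 time units, but only 6 are allowed overall.
events = ["takeoff", "survey", "end"]
constraints = [("takeoff", "survey", 5, 10),
               ("survey", "end", 2, 4),
               ("takeoff", "end", 0, 6)]
print(stn_consistent(events, constraints))   # False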

Relevance:

100.00%

Publisher:

Abstract:

The Support Vector Machine (SVM) is a new and very promising classification technique developed by Vapnik and his group at AT&T Bell Labs. This new learning algorithm can be seen as an alternative training technique for Polynomial, Radial Basis Function and Multi-Layer Perceptron classifiers. An interesting property of this approach is that it is an approximate implementation of the Structural Risk Minimization (SRM) induction principle. The derivation of Support Vector Machines, their relationship with SRM, and their geometrical interpretation are discussed in this paper. Training an SVM is equivalent to solving a quadratic programming problem with linear and box constraints in a number of variables equal to the number of data points. When the number of data points exceeds a few thousand, the problem is very challenging because the quadratic form is completely dense, so the memory needed to store the problem grows with the square of the number of data points. Therefore, training problems arising in some real applications with large data sets cannot be loaded into memory and cannot be solved using standard non-linear constrained optimization algorithms. We present a decomposition algorithm that can be used to train SVMs over large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of optimality conditions, which are used both to generate improved iterative values and to establish the stopping criteria for the algorithm. We present previous approaches, as well as results and important details of our implementation of the algorithm, which uses a second-order variant of the Reduced Gradient Method as the solver of the sub-problems. As an application of SVMs, we present preliminary results obtained by applying SVMs to the problem of detecting frontal human faces in real images.
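
To make the decomposition idea concrete, the following toy sketch trains a linear-kernel SVM with SMO-style updates, i.e. a working set of only two variables per iteration; it is an illustrative special case of decomposition, not the paper's algorithm, which solves larger sub-problems with a second-order reduced-gradient method.

import numpy as np

def smo_train(X, y, C=1.0, tol=1e-3, max_passes=5, seed=0):
    """Toy SMO: repeatedly pick a KKT-violating pair (i, j), solve the two-variable
    QP sub-problem analytically, and update the threshold b. Linear kernel only."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    K = X @ X.T                                # full kernel matrix (fine for a toy n)
    alpha, b, passes = np.zeros(n), 0.0, 0
    while passes < max_passes:
        changed = 0
        for i in range(n):
            Ei = (alpha * y) @ K[:, i] + b - y[i]
            if (y[i] * Ei < -tol and alpha[i] < C) or (y[i] * Ei > tol and alpha[i] > 0):
                j = int(rng.choice([k for k in range(n) if k != i]))
                Ej = (alpha * y) @ K[:, j] + b - y[j]
                ai, aj = alpha[i], alpha[j]
                if y[i] != y[j]:
                    L, H = max(0.0, aj - ai), min(C, C + aj - ai)
                else:
                    L, H = max(0.0, ai + aj - C), min(C, ai + aj)
                eta = 2 * K[i, j] - K[i, i] - K[j, j]
                if L == H or eta >= 0:
                    continue
                alpha[j] = np.clip(aj - y[j] * (Ei - Ej) / eta, L, H)
                if abs(alpha[j] - aj) < 1e-5:
                    continue
                alpha[i] = ai + y[i] * y[j] * (aj - alpha[j])
                # Recompute b so the KKT conditions hold at the two updated points.
                b1 = b - Ei - y[i] * (alpha[i] - ai) * K[i, i] - y[j] * (alpha[j] - aj) * K[i, j]
                b2 = b - Ej - y[i] * (alpha[i] - ai) * K[i, j] - y[j] * (alpha[j] - aj) * K[j, j]
                b = b1 if 0 < alpha[i] < C else (b2 if 0 < alpha[j] < C else (b1 + b2) / 2)
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    w = (alpha * y) @ X                        # primal weights (linear kernel only)
    return w, b

# Tiny usage example on linearly separable toy data (labels must be +1/-1).
X = np.array([[2.0, 2.0], [1.5, 2.5], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = smo_train(X, y)
print(np.sign(X @ w + b))                      # should reproduce y

The memory point made in the abstract is visible here: the dense kernel matrix K has one entry per pair of data points, which is exactly what the decomposition avoids materialising for large training sets.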

Relevance:

100.00%

Publisher:

Abstract:

The present success in the manufacture of multi-layer interconnects in ultra-large-scale integration is largely due to the acceptable planarization capabilities of the chemical-mechanical polishing (CMP) process. In the past decade, copper has emerged as the preferred interconnect material. The greatest challenge in Cu CMP at present is the control of wafer surface non-uniformity at various scales. As the size of a wafer has increased to 300 mm, the wafer-level non-uniformity has assumed critical importance. Moreover, the pattern geometry in each die has become quite complex due to a wide range of feature sizes and multi-level structures. Therefore, it is important to develop a non-uniformity model that integrates wafer-, die- and feature-level variations into a unified, multi-scale dielectric erosion and Cu dishing model. In this paper, a systematic way of characterizing and modeling dishing in the single-step Cu CMP process is presented. The possible causes of dishing at each scale are identified in terms of several geometric and process parameters. The feature-scale pressure calculation based on the step-height at each polishing stage is introduced. The dishing model is based on pad elastic deformation and the evolving pattern geometry, and is integrated with the wafer- and die-level variations. Experimental and analytical means of determining the model parameters are outlined and the model is validated by polishing experiments on patterned wafers. Finally, practical approaches for minimizing Cu dishing are suggested.
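
As a back-of-the-envelope illustration of the feature-scale pressure and step-height reasoning above, the sketch below evolves a pattern step height using a Preston-type removal law and a linear-spring approximation of the pad; all parameter values and the spring model itself are assumptions, not the calibrated dishing model of the paper.

import numpy as np

def step_height_evolution(h0, p_nominal, v, k_preston, k_pad, f_up, dt, n_steps):
    """Toy feature-scale planarization model.
    h0: initial step height [m]; p_nominal: applied pressure [Pa]; v: relative pad
    velocity [m/s]; k_preston: Preston constant [1/Pa]; k_pad: effective pad spring
    stiffness [Pa/m]; f_up: area fraction of raised (up) features."""
    h = h0
    heights = np.empty(n_steps)
    for k in range(n_steps):
        # Pressure difference carried by the pad across the step, capped so the
        # down-area pressure never becomes negative; total force is conserved.
        dp = min(k_pad * h, p_nominal / max(f_up, 1e-9))
        p_up = p_nominal + (1.0 - f_up) * dp
        p_down = p_nominal - f_up * dp
        # Preston equation: removal rate = k_preston * pressure * velocity.
        h = max(h - k_preston * v * (p_up - p_down) * dt, 0.0)
        heights[k] = h
    return heights

# Illustrative numbers only: 500 nm initial step, 14 kPa down-force, 1 m/s velocity.
h = step_height_evolution(h0=500e-9, p_nominal=14e3, v=1.0,
                          k_preston=6e-13, k_pad=1e10, f_up=0.5, dt=1.0, n_steps=60)
print(f"step height after 60 s: {h[-1]*1e9:.1f} nm")

In this simplified picture the step height decays roughly exponentially; the paper's model additionally couples such feature-scale behaviour to die- and wafer-level variations and to the dishing of soft Cu against the surrounding dielectric.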

Relevance:

100.00%

Publisher:

Abstract:

Hypermedia systems based on the Web for open distance education are becoming increasingly popular as tools for user-driven access to learning information. Adaptive hypermedia is a new research direction within the area of user-adaptive systems that aims to increase functionality by making it personalized [Eklu 96]. This paper sketches a general agent architecture to provide navigational adaptability and user-friendly processes that guide and accompany the student during his/her learning on the PLAN-G hypermedia system (New Generation Telematics Platform to Support Open and Distance Learning), with the aid of computer networks and specifically WWW technology [Marz 98-1] [Marz 98-2]. The current PLAN-G prototype is successfully used with some informatics courses (the current version has no agents yet). The proposed multi-agent system contains two different types of adaptive autonomous software agents: Personal Digital Agents (Interface), which interact directly with the student when necessary, and Information Agents (Intermediaries), which filter and discover information to learn and adapt the navigation space to a specific student.
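
A minimal sketch of how the two agent roles described above could fit together; the class names, student profile and filtering rule are invented for illustration and do not correspond to the PLAN-G implementation.

from dataclasses import dataclass, field

# Hypothetical toy version of the two agent roles: an Information Agent that filters
# course resources against a student profile, and a Personal Digital Agent that
# presents the result to the student.
@dataclass
class StudentProfile:
    student_id: str
    completed_topics: set = field(default_factory=set)

@dataclass
class InformationAgent:
    resources: list  # each resource: {"title": ..., "requires": set of topic names}

    def recommend(self, profile: StudentProfile) -> list:
        """Keep only resources whose prerequisites the student has already completed."""
        return [r for r in self.resources if r["requires"] <= profile.completed_topics]

@dataclass
class PersonalDigitalAgent:
    profile: StudentProfile

    def present(self, info_agent: InformationAgent) -> None:
        for r in info_agent.recommend(self.profile):
            print(f"[{self.profile.student_id}] suggested next: {r['title']}")

# Usage with invented course data.
catalog = [{"title": "Intro to Networks", "requires": set()},
           {"title": "WWW Technology", "requires": {"Intro to Networks"}}]
pda = PersonalDigitalAgent(StudentProfile("s42", {"Intro to Networks"}))
pda.present(InformationAgent(catalog))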

Relevance:

100.00%

Publisher:

Abstract:

There is increasing pressure on university staff to provide ever more information and resources to students. This study investigated student opinions on (audio) podcasts and (video) vodcasts and how well they met requirements and aided learning processes. Two experiments at Aston University looked at student opinion on, and usage of, podcasts and vodcasts for a selection of their psychology lectures.

Relevance:

100.00%

Publisher:

Abstract:

In today's hyperconnected, dynamic world laden with uncertainty, conventional analytical methods and models are showing their limitations. Organizations therefore require useful tools that employ information technology and computational simulation models as mechanisms for decision-making and problem-solving. One of the most recent, powerful and promising of these is agent-based modeling and simulation (MSBA, for its Spanish acronym). Many organizations, including consulting firms, use this technique to understand phenomena, evaluate strategies and solve problems of various kinds. Nevertheless, there is (to our knowledge) no state-of-the-art survey of MSBA and its application to organizational research. It should also be noted that, because of its novelty, the topic has not been sufficiently disseminated and developed in Latin America. Consequently, this project aims to produce a state-of-the-art survey of MSBA and its impact on organizational research.
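
For readers unfamiliar with the technique, the fragment below is a deliberately tiny agent-based simulation in which agents adopt an innovation under peer influence; every rule and parameter is invented solely to show what MSBA-style models look like, and it carries no claim about the organizational phenomena discussed above.

import random

# Tiny illustrative agent-based model: each agent adopts an innovation with a
# probability that grows with the share of adopters among its randomly chosen peers.
random.seed(1)
N, STEPS, BASE_P, PEER_WEIGHT = 100, 20, 0.02, 0.3
adopted = [False] * N
peers = [[j for j in random.sample(range(N), 5) if j != i] for i in range(N)]

for t in range(STEPS):
    for i in range(N):
        if not adopted[i] and peers[i]:
            peer_share = sum(adopted[j] for j in peers[i]) / len(peers[i])
            if random.random() < BASE_P + PEER_WEIGHT * peer_share:
                adopted[i] = True
    print(f"step {t:2d}: {sum(adopted):3d} adopters")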

Relevance:

100.00%

Publisher:

Abstract:

This thesis is divided into two parts: the first part presents and studies telegraph processes, Poisson processes with a telegraph compensator, and telegraph processes with jumps. The study in this first part includes the computation of the distribution of each process, their means and variances, and their moment-generating functions, among other properties. Using these properties, the second part studies option-pricing models based on jump-telegraph processes. This part describes how to compute the risk-neutral measures, establishes the no-arbitrage condition for this type of model and, finally, computes the prices of European call and put options.
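
To give a concrete picture of the underlying process, here is a small simulation sketch of a telegraph process with jumps (alternating velocities, exponential holding times, a jump at each regime switch), followed by a naive Monte Carlo payoff average; all parameter values are arbitrary illustrative choices and no risk-neutral calibration is attempted.

import numpy as np

def jump_telegraph_path(T, c=(1.0, -1.0), lam=(2.0, 2.0), h=(-0.1, 0.1),
                        x0=0.0, state0=0, rng=None):
    """One path on [0, T]: the process drifts with velocity c[s] while in regime s,
    waits an Exp(lam[s]) holding time, then adds the jump h[s] and switches regime."""
    rng = rng or np.random.default_rng()
    t, x, s = 0.0, x0, state0
    times, values = [t], [x]
    while True:
        tau = rng.exponential(1.0 / lam[s])
        if t + tau >= T:                     # no further switch before the horizon
            times.append(T)
            values.append(x + c[s] * (T - t))
            return np.array(times), np.array(values)
        t += tau
        x += c[s] * tau + h[s]               # drift over the holding time, then jump
        s = 1 - s                            # switch to the other regime
        times.append(t)
        values.append(x)

# Crude Monte Carlo average of a European call payoff E[max(X_T - K, 0)] under the
# simulated (not risk-neutral) dynamics, just to show how such paths would be used.
rng = np.random.default_rng(0)
K, n_paths = 0.0, 5000
terminal = np.array([jump_telegraph_path(1.0, rng=rng)[1][-1] for _ in range(n_paths)])
print("mean payoff:", np.maximum(terminal - K, 0.0).mean())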

Relevance:

100.00%

Publisher:

Abstract:

This work focuses on analysing the activities carried out around the print-process services offered by the organization DATAPOINT de Colombia SAS, in order to identify the critical points in the management of printing waste and the decisions taken by those involved throughout the process (suppliers, customers and the company), with the aim of reviewing measures and strategies to strengthen the integrated management of printing waste, based on a review and comparison of the best practices proposed by actors in the sector. Recommendations are also made regarding improvement actions that could be implemented to mitigate the environmental impact generated by this waste. To meet these objectives, a study of the organization, its customers and its suppliers was first carried out in order to understand, in an integrated way, the value chain around toner cartridges and their reverse logistics, as well as the regulatory environment at both the national and international levels. Subsequently, the points for improvement were identified by comparing what the supplier proposes with what those involved in the process actually do, a task carried out in the field with customers in order to understand the current situation, their needs and the basis on which they make decisions about the handling of printing waste. Finally, a series of improvement actions and recommendations are listed that can be incorporated into DATAPOINT's critical processes.

Relevance:

100.00%

Publisher:

Abstract:

As a degree project, this work consists of a theoretical review of the concepts of leadership, power and influence, together with the possible relationships that may exist among them. To this end, each concept is defined individually and, on that basis, the interdependence of these concepts and their importance within the development of contemporary transformational leadership are identified. To achieve this, a review was carried out of part of the academic literature found in books, academic journals, databases and documents related to the topics and concepts addressed. From this, the evolution of the concept of leadership and the approaches presented from the 1920s to the present were traced, together with the full-range leadership model and the transactional and transformational types of leadership, in order then to define the role and importance of the concepts of power, the types of power, influence and influence tactics, and thus to identify the possible relationships among these concepts and their importance in today's organizational environment.

Relevance:

100.00%

Publisher:

Abstract:

Previous research has shown that often there is clear inertia in individual decision making---that is, a tendency for decision makers to choose a status quo option. I conduct a laboratory experiment to investigate two potential determinants of inertia in uncertain environments: (i) regret aversion and (ii) ambiguity-driven indecisiveness. I use a between-subjects design with varying conditions to identify the effects of these two mechanisms on choice behavior. In each condition, participants choose between two simple real gambles, one of which is the status quo option. I find that inertia is quite large and that both mechanisms are equally important.