913 results for Keyed One-Way Functions
Abstract:
Regional climate models are becoming increasingly popular for providing high-resolution climate change information for impact assessments that inform adaptation options. Many countries and provinces requiring these assessments are as small as 200,000 km² in size, significantly smaller than the ideal domain needed for successful applications of one-way nested regional climate models. Therefore, assessments on sub-regional scales (e.g., river basins) are generally carried out using climate change simulations performed for relatively larger regions. Here we show that the seasonal mean hydrological cycle and the day-to-day precipitation variations of a sub-region within the model domain are sensitive to the domain size, even though the large-scale circulation features over the region are largely insensitive. On seasonal timescales, the relatively smaller domains intensify the hydrological cycle by increasing the net transport of moisture into the study region, thereby enhancing the precipitation and local recycling of moisture. On daily timescales, the simulations run over smaller domains produce a higher number of moderate precipitation days in the sub-region relative to the corresponding larger domain simulations. An assessment of daily variations of water vapor and vertical velocity within the sub-region indicates that the smaller domains may favor more frequent moderate uplifting and subsequent precipitation in the region. The results remained largely insensitive to the horizontal resolution of the model, indicating the robustness of the domain size influence on the regional model solutions. These domain-size-dependent precipitation characteristics have the potential to add one more level of uncertainty to the downscaled projections.
Abstract:
This thesis is divided into 9 chapters and deals with the modification of TiO2 for various applications, including photocatalysis, thermal reactions, photovoltaics and non-linear optics. Chapter 1 gives a brief introduction to the topic of study. The applications of modified titania systems in various fields are discussed concisely, and the scope and objectives of the present work are also outlined in this chapter. Chapter 2 explains the strategy adopted for the synthesis of the metal, non-metal co-doped TiO2 systems. A hydrothermal technique was employed for the preparation of the co-doped TiO2 systems, where Ti[OCH(CH3)2]4, urea and metal nitrates were used as the sources of TiO2, N and the metals respectively. In all the co-doped systems, urea and Ti[OCH(CH3)2]4 were taken in a 1:1 molar ratio and the concentration of the metals was varied. Five different co-doped catalytic systems were prepared, and for each catalyst three versions were made by varying the metal concentration. A brief explanation of the physico-chemical techniques used for the characterization of the materials is also presented in this chapter. These include X-ray Diffraction (XRD), Raman Spectroscopy, FTIR analysis, Thermo Gravimetric Analysis, Energy Dispersive X-ray Analysis (EDX), Scanning Electron Microscopy (SEM), UV-Visible Diffuse Reflectance Spectroscopy (UV-Vis DRS), Transmission Electron Microscopy (TEM), BET Surface Area Measurements and X-ray Photoelectron Spectroscopy (XPS). Chapter 3 contains the results and discussion of the characterization techniques used to analyze the prepared systems. Characterization is an indispensable part of materials research, and determining the physico-chemical properties of the prepared materials with suitable techniques is crucial for identifying their fields of application. It is clear from the XRD patterns that the photocatalytically active anatase phase dominates in the calcined samples, with peaks at 2θ values of around 25.4°, 38°, 48.1°, 55.2° and 62.7° corresponding to the (101), (004), (200), (211) and (204) crystal planes (JCPDS 21-1272), respectively. In the case of the Pr-N-Ti sample, however, a new peak was observed at 2θ = 30.8° corresponding to the (121) plane of the brookite polymorph. There are no visible peaks corresponding to the dopants, which may be due to their low concentration or may indicate good dispersion of the impurities in the TiO2. The crystallite size of the samples was calculated from the Scherrer equation using the full width at half maximum (FWHM) of the (101) peak of the anatase phase. The crystallite sizes of all the co-doped TiO2 samples were found to be lower than that of bare TiO2, which indicates that doping metal ions of larger ionic radius into the TiO2 lattice causes some lattice distortion that suppresses the growth of the TiO2 nanoparticles. The structural identity of the prepared systems obtained from the XRD patterns was further confirmed by Raman spectroscopy; anatase has six Raman active modes. The band gaps of the co-doped systems were calculated using the Kubelka-Munk equation and were found to be lower than that of pure TiO2. The stability of the prepared systems was assessed by thermogravimetric analysis. FT-IR was performed to identify the functional groups as well as to study the surface changes that occurred during modification. EDX was used to determine the impurities present in the systems; the EDX spectra of all the co-doped samples show signals directly related to the dopants.
The spectra of all the co-doped systems contain O and Ti as the main components with low concentrations of the doped elements. The morphologies of the prepared systems were obtained from SEM and TEM analyses, and the average particle sizes were derived from histogram data. The electronic structures of the samples were determined from XPS measurements. Chapter 4 describes the photocatalytic degradation of the herbicides atrazine and metolachlor using the metal, non-metal co-doped titania systems. The degree of degradation was analyzed by HPLC. Parameters such as the effect of different catalysts, reaction time and catalyst amount, as well as reusability, were discussed. Chapter 5 deals with the photo-oxidation of some anthracene derivatives by the co-doped catalytic systems. These anthracene derivatives come under the category of polycyclic aromatic hydrocarbons (PAHs). Due to the presence of stable benzene rings, most PAHs strongly resist biological degradation and the common methods employed for their removal. According to the Environmental Protection Agency, most PAHs are highly toxic in nature. TiO2 photochemistry has been extensively investigated as a method for the catalytic conversion of such organic compounds, highlighting its potential in green chemistry. There are essentially two ways to remove pollutants from the ecosystem: complete mineralization, or conversion of the toxic compounds into compounds less toxic than the starting material. This chapter concentrates on the second approach. The catalysts used were Gd(1 wt%)-N-Ti, Pd(1 wt%)-N-Ti and Ag(1 wt%)-N-Ti. All the PAHs were successfully converted to anthraquinone, a compound with diverse applications in industrial as well as medical fields. Substitution of the 10-position of the PAH by a phenyl ring reduces the feasibility of the photoreaction and produces 9-hydroxy-9-phenylanthrone (9H9PA) as an intermediate species. The products were separated and purified by column chromatography using 70:30 hexane/DCM mixtures as the mobile phase, and the resultant products were characterized thoroughly by 1H NMR, IR spectroscopy and GC-MS analysis. Chapter 6 elucidates the heterogeneous Suzuki coupling reaction over a Cu/Pd bimetallic system supported on TiO2. A sol-gel method followed by impregnation was adopted for the synthesis of Cu/Pd-TiO2. The prepared system was characterized by XRD, TG-DTG, SEM, EDX, BET surface area measurements and XPS. The product was separated and purified by column chromatography using hexane as the mobile phase. A maximum isolated yield of biphenyl of around 72% was obtained in DMF using Cu(2 wt%)-Pd(4 wt%)-Ti as the catalyst; the most effective solvent, base and catalyst for this reaction were found to be DMF, K2CO3 and Cu(2 wt%)-Pd(4 wt%)-Ti, respectively. Chapter 7 describes the photovoltaic (PV) applications of TiO2-based thin films. Because of the energy crisis, the whole world is looking for new sustainable energy sources, and harnessing solar energy is one of the most promising ways to tackle this issue. The presently dominant PV technologies are based on inorganic materials, but high material and manufacturing costs and low power conversion efficiencies limit their popularization. A lot of research has been conducted towards the development of low-cost PV technologies, of which organic photovoltaic (OPV) devices are among the most promising.
Here, two TiO2 thin films of different thickness were prepared by spin coating. The prepared films were characterized by XRD, AFM and conductivity measurements, and the film thickness was measured with a stylus profiler. This chapter mainly concentrates on the fabrication of an inverted heterojunction solar cell using the conducting polymer MEH-PPV as the photoactive layer, with TiO2 used as the electron transport layer. Thin films of MEH-PPV were also prepared by spin coating. Two fullerene derivatives, PCBM and ICBA, were introduced into the device in order to improve the power conversion efficiency. Effective charge transfer between the conducting polymer and ICBA was inferred from fluorescence quenching studies. The fabricated inverted heterojunction exhibited a maximum power conversion efficiency of 0.22% with ICBA as the acceptor molecule. Chapter 8 describes the third-order nonlinear optical properties of bare and noble-metal-modified TiO2 thin films. The thin films were fabricated by spray pyrolysis. Sol-gel derived Ti[OCH(CH3)2]4 in CH3CH2OH/CH3COOH was used as the precursor for TiO2, and the precursors used for Au, Ag and Pd were aqueous solutions of HAuCl4, AgNO3 and Pd(NO3)2, respectively. The prepared films were characterized by XRD, SEM and EDX. The nonlinear optical properties of the prepared materials were investigated by the Z-scan technique using an Nd:YAG laser (532 nm, 7 ns, 10 Hz), and the nonlinear coefficients were obtained by fitting the experimental Z-scan plots with theoretical plots. Nonlinear absorption is defined as a nonlinear change (increase or decrease) in absorption with increasing intensity. It can be divided into two main types: saturable absorption (SA) and reverse saturable absorption (RSA). Depending on the pump intensity and on the absorption cross-section at the excitation wavelength, most molecules show nonlinear absorption. With increasing intensity, if the excited states show saturation owing to their long lifetimes, the transmission shows SA characteristics; here absorption decreases with increasing intensity. If, however, the excited state absorbs more strongly than the ground state, the transmission shows RSA characteristics. In our work most of the materials showed SA behavior and some exhibited RSA behavior. Both properties depend purely on the nature of the materials and the alignment of energy states within them, and both SA and RSA have important applications in electronic devices. The important results obtained from the various studies are summarized in Chapter 9.
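The Scherrer crystallite-size estimate mentioned in Chapter 3 can be made concrete with a short calculation. The sketch below is a minimal illustration, assuming Cu Kα radiation (λ ≈ 0.15406 nm) and a shape factor K = 0.9; neither value nor the example peak width is stated in the abstract.

```python
import math

def scherrer_crystallite_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
    """Estimate crystallite size (nm) from the Scherrer equation D = K*lambda / (beta*cos(theta)).

    two_theta_deg : peak position 2-theta in degrees (e.g. ~25.4 for the anatase (101) peak)
    fwhm_deg      : full width at half maximum of that peak in degrees
    wavelength_nm : X-ray wavelength (Cu K-alpha assumed here)
    k             : dimensionless shape factor (0.9 assumed)
    """
    theta = math.radians(two_theta_deg / 2.0)   # Bragg angle in radians
    beta = math.radians(fwhm_deg)               # FWHM converted to radians
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical numbers: anatase (101) at 2-theta = 25.4 deg with FWHM = 0.6 deg
print(f"D = {scherrer_crystallite_size(25.4, 0.6):.1f} nm")
```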
Abstract:
The study of variable stars is an important topic in modern astrophysics. With the advent of powerful telescopes and high-resolution CCDs, variable star data are accumulating on the order of petabytes. This huge amount of data requires automated methods as well as human experts. This thesis is devoted to the analysis of variable star astronomical time series data and hence belongs to the interdisciplinary field of astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various mechanisms. In some cases the variation is due to internal thermonuclear processes; such stars are generally known as intrinsic variables. In other cases it is due to external processes, such as eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospheric stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena; most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is characteristic of each type of variable star, and one way to identify and classify a variable star is for an expert to visually inspect its phased light curve. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages such as observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps determine their short-term and long-term behaviour, construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters such as period, amplitude and phase, as well as some other derived parameters. Of these, the period is the most important, since a wrong period can lead to sparse phased light curves and misleading information. Time series analysis is a method of applying mathematical and statistical tests to data in order to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of large gaps. This is due to the daily variation of daylight and to weather conditions for ground-based observations, while observations from space may suffer from the impact of cosmic ray particles.
Many large-scale astronomical surveys such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS provide variable star time series data, even though their primary intention is not variable star observation. The Center for Astrostatistics at Pennsylvania State University was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model such as a Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, such as the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and the Significant Spectrum (SigSpec) method by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them can fully recover the true periods. Wrong period detection can have several causes, such as power leakage to other frequencies, which arises from the finite total interval, the finite sampling interval and the finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear because of long gaps, and power flowing into harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem, particularly when huge databases are subjected to automation. As Matthew Templeton (AAVSO) states, “Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial”. Derekas et al. (2007) and Deb et al. (2010) state, “The processing of huge amounts of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification”. It will be beneficial for the variable star community if basic parameters such as period, amplitude and phase can be obtained more accurately when huge time series databases are subjected to automation. In the present thesis, the theory behind four popular period search methods is studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the “General Catalogue of Variable Stars” or other databases such as the “Variable Star Index”, the characteristics of the variability have to be quantified in terms of variable star parameters.
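The two basic operations described above, folding a light curve on a trial period and scanning a frequency grid with a periodogram, can be sketched in a few lines. The example below uses scipy's generic Lomb-Scargle routine and synthetic, unevenly sampled data; it does not reproduce the survey databases or the modified cubic spline method of the thesis.

```python
import numpy as np
from scipy.signal import lombscargle

def fold(times, period, t0=0.0):
    """Fold observation times on a trial period; returns phases in [0, 1)."""
    return ((times - t0) / period) % 1.0

# Synthetic, unevenly sampled light curve (magnitude varying with a 0.6-day period)
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 30.0, 300))                      # days, uneven sampling
true_period = 0.6
mag = 12.0 + 0.3 * np.sin(2.0 * np.pi * t / true_period) + rng.normal(0.0, 0.02, t.size)

# Scan a grid of trial periods and pick the strongest periodogram peak
periods = np.linspace(0.2, 2.0, 20000)
omega = 2.0 * np.pi / periods                                 # angular frequencies
power = lombscargle(t, mag - mag.mean(), omega)
best_period = periods[np.argmax(power)]

phase = fold(t, best_period)                                  # x-axis of the phased light curve
print(f"recovered period ~ {best_period:.4f} d (true {true_period} d)")
```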
Abstract:
Restarting automata are a restricted model of computation introduced by Jancar et al. to model so-called analysis by reduction. A computation of a restarting automaton consists of a sequence of cycles such that in each cycle the automaton performs exactly one rewrite step, which replaces a small part of the tape content by another, even shorter word. Thus, each language accepted by a restarting automaton belongs to the complexity class $\mathsf{CSL} \cap \mathsf{NP}$. Here we consider a natural generalization of this model, called the shrinking restarting automaton, where we no longer insist on the requirement that each rewrite step decreases the length of the tape content. Instead, we require that there exists a weight function such that each rewrite step decreases the weight of the tape content with respect to that function. The language accepted by such an automaton still belongs to the complexity class $\mathsf{CSL} \cap \mathsf{NP}$. While it is still unknown whether the two most general types of one-way restarting automata, the RWW-automaton and the RRWW-automaton, differ in their expressive power, we will see that the classes of languages accepted by the shrinking RWW-automaton and the shrinking RRWW-automaton coincide. As a consequence of our proof, it turns out that there exists a reduction by morphisms from the language class $\mathcal{L}(\mathsf{RRWW})$ to the class $\mathcal{L}(\mathsf{RWW})$. Further, we will see that the shrinking restarting automaton is a rather robust model of computation. Finally, we will relate shrinking RRWW-automata to finite-change automata. This will lead to some new insights into the relationships between the classes of languages characterized by (shrinking) restarting automata and some well-known time and space complexity classes.
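The shrinking condition can be illustrated by a small check: given a per-symbol weight function, every rewrite u -> v must strictly decrease the total weight, even if the replacement is longer than the replaced factor. The sketch below is illustrative only; the alphabet and weights are invented, not taken from the paper.

```python
def word_weight(word, weights):
    """Total weight of a tape word under a per-symbol weight function."""
    return sum(weights[symbol] for symbol in word)

def is_shrinking_rewrite(u, v, weights):
    """A rewrite step u -> v is admissible for a shrinking restarting automaton
    iff it strictly decreases the weight of the rewritten factor."""
    return word_weight(v, weights) < word_weight(u, weights)

# Invented example: the replacement may be longer than the original, as long as it is lighter.
weights = {"a": 3, "b": 1}
print(is_shrinking_rewrite("aa", "bbb", weights))   # True: weight 6 -> 3, although the word grows
print(is_shrinking_rewrite("b", "a", weights))      # False: weight 1 -> 3
```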
Abstract:
As the number of resources on the web by far exceeds the number of documents one can track, it becomes increasingly difficult to remain up to date on one's own areas of interest. The problem becomes more severe with the increasing fraction of multimedia data, from which it is difficult to extract a conceptual description of the contents. One way to overcome this problem is social bookmarking tools, which are rapidly emerging on the web. In such systems, users set up lightweight conceptual structures called folksonomies, thus overcoming the knowledge acquisition bottleneck. As more and more people participate in the effort, the use of a common vocabulary becomes more and more stable. We present an approach for discovering topic-specific trends within folksonomies. It is based on a differential adaptation of the PageRank algorithm to the triadic hypergraph structure of a folksonomy. The approach works for any kind of data, as it does not rely on the internal structure of the documents; in particular, this allows different data types to be considered in the same analysis step. We run experiments on a large-scale real-world snapshot of a social bookmarking system.
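The ranking idea builds on PageRank with a topic-specific preference vector. The sketch below shows only plain personalized PageRank by power iteration on an ordinary graph, as a generic illustration; it is not the authors' differential adaptation to the triadic user-tag-resource hypergraph, and the toy graph is invented.

```python
import numpy as np

def pagerank(adj, damping=0.85, preference=None, iters=100):
    """Power iteration for (personalized) PageRank.

    adj        : (n, n) adjacency matrix, adj[i, j] = weight of edge j -> i
    preference : optional teleport distribution used for topic-specific ranking
    """
    n = adj.shape[0]
    col_sums = adj.sum(axis=0)
    col_sums[col_sums == 0] = 1.0                      # avoid division by zero for sink nodes
    transition = adj / col_sums                        # column-stochastic transition matrix
    p = np.full(n, 1.0 / n) if preference is None else preference / preference.sum()
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = damping * transition @ rank + (1.0 - damping) * p
    return rank

# Tiny toy graph; the preference vector biases the ranking toward node 0's "topic".
adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 1, 0]], dtype=float)
print(pagerank(adj, preference=np.array([1.0, 0.0, 0.0])))
```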
Abstract:
This dissertation introduces and studies systems of parallel communicating restarting automata (PCRA systems). It combines two well-known concepts from formal languages and automata theory: the model of restarting automata and so-called PC systems (systems of parallel communicating components). A PCRA system consists of finitely many restarting automata that, on the one hand, perform local computations in parallel and independently of one another and, on the other hand, may communicate with each other. Communication follows a fixed protocol realized by means of special communication states. An essential feature of the communication structure in systems of cooperating components is whether communication is centralized or non-centralized. In a non-centralized communication structure every component may communicate with every other component, whereas in a centralized communication structure all communication takes place exclusively with a designated master component. One of the main results of this work shows that centralized and non-centralized systems have the same computational power (which is in general not the case for PC systems). Moreover, using multicast or broadcast communication in addition to point-to-point communication does not increase the computational power either. Furthermore, the expressive power of PCRA systems is investigated and compared with that of PC systems of finite automata and with that of multi-head automata. PC systems of finite automata are known to have the same expressive power as one-way multi-head automata and form a lower bound for the expressive power of PCRA systems with one-way components. In fact, PCRA systems are stronger than PC systems of finite automata even when their components, taken individually, have the same expressive power, i.e., characterize the regular languages. For PCRA systems with two-way components, the language classes of two-way multi-head automata in the deterministic and the nondeterministic case are shown to be lower bounds; these in turn correspond to the well-known complexity classes L (deterministic logarithmic space) and NL (nondeterministic logarithmic space). The class of context-sensitive languages is shown to be an upper bound. In addition, extensions of restarting automata (the non-forgetting property and the shrinking property) are considered; these increase the computational power of single components but do not increase the power of the systems. The language classes characterized by PCRA systems are closed under various language operations, and some of them are even abstract families of languages (so-called AFLs). Finally, specific decision problems for PCRA systems are examined. It is shown that emptiness, universality, inclusion, equivalence and finiteness are not even semi-decidable for systems with only two restarting automata of the weakest type. For the membership problem it is shown to be decidable in quadratic time in the deterministic case and in exponential time in the nondeterministic case.
Abstract:
This work presents a method for using neural networks that iteratively combines classification and prediction steps with the goal of achieving better prediction results than a single sequential execution of these steps. The method is discussed using the example of forecasting wind power generation as a function of the weather situation; in this setting an improvement is achieved for individual outliers. Various aspects are discussed in three chapters. Chapter 1 introduces the data used and their electronic processing. The data consist, on the one hand, of wind power extrapolations for the Federal Republic of Germany for the years 2011 and 2012, which the transmission system operators must publish as a transparency requirement of the Renewable Energy Act, and, on the other hand, of weather forecasts that the Deutscher Wetterdienst provides free of charge as part of its basic service. Chapter 2 explains two methods known from the literature, the online and the batch algorithm, for training a self-organizing map. The properties of these methods motivate the choice of the batch algorithm for the method explained in Chapter 3. In modeled operational use, the method presented in Chapter 3 proceeds in the same way as a classification followed by a class-specific prediction. During training, however, the procedure is iterative: after the class-specific predictors have been trained, it is determined to which class of the classification an input datum should belong in order to achieve the highest prediction quality with the available class-specific prediction models. The resulting partition of the inputs can then be used to train a new classification stage whose classes enable an improved class-specific prediction.
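A minimal sketch of that iterative training loop follows, under assumed interfaces: fit one predictor per class, reassign each input to the class whose predictor forecasts it best, then retrain the classifier on that relabelling. The model choices (KMeans standing in for the self-organizing map, linear regression for the class-specific predictors) and the synthetic data are placeholders, not the configuration used in the thesis.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

def iterative_classify_predict(X, y, n_classes=3, n_rounds=5, seed=0):
    """Alternate between class-specific regression and error-driven relabelling."""
    labels = KMeans(n_clusters=n_classes, random_state=seed, n_init=10).fit_predict(X)
    for _ in range(n_rounds):
        # 1) Train one predictor per currently non-empty class.
        models = {c: LinearRegression().fit(X[labels == c], y[labels == c])
                  for c in range(n_classes) if np.any(labels == c)}
        # 2) Reassign each sample to the class whose predictor has the smallest error.
        errors = np.column_stack([np.abs(models[c].predict(X) - y) if c in models
                                  else np.full(len(y), np.inf)
                                  for c in range(n_classes)])
        labels = errors.argmin(axis=1)
    return labels, models

# Hypothetical usage: X = weather forecast features, y = observed wind power
X = np.random.default_rng(1).normal(size=(200, 4))
y = X[:, 0] * np.where(X[:, 1] > 0, 2.0, -1.0)        # regime-dependent toy target
labels, models = iterative_classify_predict(X, y)
```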
Abstract:
Agriculture in semi-arid and arid regions is constantly gaining importance for securing the nutrition of humankind because of rapid population growth. At the same time, these regions in particular are increasingly endangered by soil degradation, limited resources and extreme climatic conditions. One way to retain soil fertility under these conditions in the long run is to increase the soil organic matter. Thus, a two-year field experiment was conducted to test the efficiency of activated charcoal and quebracho tannin extract as stabilizers of soil organic matter on a nutrient-poor sandy soil in Northern Oman. Both activated charcoal and quebracho tannin extract were either fed to goats and after defecation applied to the soil, or directly applied to the soil in combination with dried goat manure. Regardless of the application method, both additives reduced the decomposition of soil-applied organic matter and thus stabilized and increased soil organic carbon. The nutrient release from goat manure was also altered by the application of activated charcoal and quebracho tannin extract; however, nutrient release was not always slowed down. While activated charcoal fed to goats was more effective in stabilising soil organic matter and in reducing nutrient release than mixing it into the soil, for quebracho tannin extract the opposite was the case. Moreover, the efficiency of the additives was influenced by the cultivated crop (sweet corn and radish), leading to unexplained interactions. The reduced nutrient release caused by the stabilization of the organic matter might be the reason for the reduced sweet corn yields observed after the application of manure amended with activated charcoal and quebracho tannin extract. Radish, on the other hand, was inhibited only by the presence of quebracho tannin extract and not by activated charcoal, which might be caused by a possible allelopathic effect of tannins on crops. To understand the mechanisms behind the changes in the manure, in the soil, in mineralisation and in plant development, and to resolve the detrimental effects, further research as recommended in this dissertation is necessary. Particularly in developing countries poor in resources and capital, feeding charcoal or tannins to animals and using their faeces as manure may be promising for increasing soil fertility, sequestering carbon and reducing nutrient losses, provided that the yield reductions can be resolved.
Abstract:
Scheduling tasks to efficiently use the available processor resources is crucial to minimizing the runtime of applications on shared-memory parallel processors. One factor that contributes to poor processor utilization is the idle time caused by long latency operations, such as remote memory references or processor synchronization operations. One way of tolerating this latency is to use a processor with multiple hardware contexts that can rapidly switch to executing another thread of computation whenever a long latency operation occurs, thus increasing processor utilization by overlapping computation with communication. Although multiple contexts are effective for tolerating latency, this effectiveness can be limited by memory and network bandwidth, by cache interference effects among the multiple contexts, and by critical tasks sharing processor resources with less critical tasks. This thesis presents techniques that increase the effectiveness of multiple contexts by intelligently scheduling threads to make more efficient use of processor pipeline, bandwidth, and cache resources. This thesis proposes thread prioritization as a fundamental mechanism for directing the thread schedule on a multiple-context processor. A priority is assigned to each thread either statically or dynamically and is used by the thread scheduler to decide which threads to load in the contexts, and which context to switch to on a context switch. We develop a multiple-context model that integrates both cache and network effects and show how thread prioritization can both maintain high processor utilization and limit increases in critical path runtime caused by multithreading. The model also shows that in order to be effective in bandwidth-limited applications, thread prioritization must be extended to prioritize memory requests. We show how simple hardware can prioritize the running of threads in the multiple contexts and the issuing of requests to both the local memory and the network. Simulation experiments show how thread prioritization is used in a variety of applications. Thread prioritization can improve the performance of synchronization primitives by minimizing the number of processor cycles wasted in spinning and devoting more cycles to critical threads. It can be used in combination with other techniques to improve cache performance and minimize cache interference between the different working sets in the cache. For applications that are critical-path limited, thread prioritization can improve performance by allowing processor resources to be devoted preferentially to critical threads. These experimental results show that thread prioritization is a mechanism that can be used to implement a wide range of scheduling policies.
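The basic scheduling decision described above, switching to the most critical ready thread loaded in a hardware context when the running thread stalls on a long-latency operation, can be sketched in software. The data structures and names below are invented for illustration and do not represent the thesis's hardware design; in real hardware the stalled thread would rejoin the ready set only after its memory reference completes.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Thread:
    priority: int                 # lower value = more critical (heapq is a min-heap)
    tid: int = field(compare=False)

class ContextScheduler:
    """Pick which loaded thread to run next when the current one stalls."""
    def __init__(self):
        self.ready = []           # priority queue of threads loaded in hardware contexts

    def load(self, thread):
        heapq.heappush(self.ready, thread)

    def on_long_latency_stall(self, running):
        """Running thread blocks (e.g. on a remote memory reference); switch contexts."""
        if not self.ready:
            return running        # nothing else loaded: keep the stalled thread
        nxt = heapq.heappop(self.ready)
        heapq.heappush(self.ready, running)   # simplification: re-enqueue the stalled thread immediately
        return nxt

sched = ContextScheduler()
sched.load(Thread(priority=2, tid=1))
sched.load(Thread(priority=0, tid=2))         # critical thread
print(sched.on_long_latency_stall(Thread(priority=5, tid=0)).tid)  # -> 2
```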
Abstract:
The image comparison operation, assessing how well one image matches another, forms a critical component of many image analysis systems and models of human visual processing. Two norms used commonly for this purpose are L1 and L2, which are specific instances of the Minkowski metric. However, there is often not a principled reason for selecting one norm over the other. One way to address this problem is by examining whether one metric better captures the perceptual notion of image similarity than the other. With this goal, we examined perceptual preferences for images retrieved on the basis of the L1 versus the L2 norm. These images were either small fragments without recognizable content, or larger patterns with recognizable content created via vector quantization. In both conditions the subjects showed a consistent preference for images matched using the L1 metric. These results suggest that, in the domain of natural images of the kind we have used, the L1 metric may better capture human notions of image similarity.
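A small sketch of the two Minkowski instances follows; it shows how L1 and L2 can rank the same candidate matches differently, which is why the choice of norm matters. The pixel values are invented toy data, not the stimuli used in the study.

```python
import numpy as np

def minkowski_distance(a, b, p):
    """Minkowski distance between two equally sized image arrays (p=1 -> L1, p=2 -> L2)."""
    diff = np.abs(a.astype(float) - b.astype(float)).ravel()
    return (diff ** p).sum() ** (1.0 / p)

# Toy 2x2 grayscale "images": one candidate has a single large deviation,
# the other several small ones, so the two norms order them differently.
query = np.array([[10, 10], [10, 10]])
cand_a = np.array([[10, 10], [10, 22]])   # L1 = 12, L2 = 12
cand_b = np.array([[14, 14], [14, 14]])   # L1 = 16, L2 = 8
for name, cand in [("a", cand_a), ("b", cand_b)]:
    print(name, minkowski_distance(query, cand, 1), minkowski_distance(query, cand, 2))
```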
Abstract:
Several eco-toxicological studies have shown that insectivorous mammals, due to their feeding habits, easily accumulate high amounts of pollutants relative to other mammal species. To assess the bio-accumulation levels of toxic metals and their influence on essential metals, we quantified the concentration of 19 elements (Ca, K, Fe, B, P, S, Na, Al, Zn, Ba, Rb, Sr, Cu, Mn, Hg, Cd, Mo, Cr and Pb) in bones of 105 greater white-toothed shrews (Crocidura russula) from a polluted (Ebro Delta) and a control (Medas Islands) area. Since the chemical contents of a bio-indicator are essentially compositional data, the conventional statistical analyses currently used in eco-toxicology can give misleading results. Therefore, to improve the interpretation of the data obtained, we used statistical techniques for compositional data analysis to define groups of metals and to evaluate the relationships between them from an inter-population viewpoint. Hypothesis testing on adequate balance-coordinates allows us to confirm intuition-based hypotheses and some previous results. The main statistical goal was to test the equality of the means of the balance-coordinates for the two defined populations. After checking normality, one-way ANOVA or Mann-Whitney tests were carried out for the inter-group balances.
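A minimal sketch of one balance-coordinate and an inter-group Mann-Whitney comparison follows. The element grouping and the compositions are invented for illustration, and the formula used is the standard ilr balance, not a summary of the specific balances chosen in the paper.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def balance(comp, num_idx, den_idx):
    """Standard ilr balance between two groups of parts of a composition:
    b = sqrt(r*s/(r+s)) * ln(gmean(numerator parts) / gmean(denominator parts))."""
    comp = np.asarray(comp, dtype=float)
    r, s = len(num_idx), len(den_idx)
    g_num = np.exp(np.log(comp[:, num_idx]).mean(axis=1))   # row-wise geometric means
    g_den = np.exp(np.log(comp[:, den_idx]).mean(axis=1))
    return np.sqrt(r * s / (r + s)) * np.log(g_num / g_den)

# Invented example: 4-part compositions (e.g. a toxic vs an essential element group)
rng = np.random.default_rng(0)
polluted = rng.dirichlet([4, 3, 2, 2], size=30)              # rows sum to 1
control = rng.dirichlet([2, 3, 3, 3], size=30)
b_pol = balance(polluted, num_idx=[0, 1], den_idx=[2, 3])
b_ctl = balance(control, num_idx=[0, 1], den_idx=[2, 3])
stat, pval = mannwhitneyu(b_pol, b_ctl)                      # non-parametric inter-group test
print(f"U = {stat:.1f}, p = {pval:.4f}")
```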
Abstract:
Objective of the study: to compare the measurements taken on the halogenated-agent delivery device with a mathematical model in which the ideal expired-fraction values of the halogenated agent were determined for each event. Study design: technology design. Setting: operating rooms of a university hospital, Bogotá, Colombia. Interventions and measurements: after designing and assembling a semi-open, face-mask-type anesthetic circuit with two unidirectional valves (one inspiratory and one expiratory), measurements were taken of the inspired sevoflurane concentration and the inspired oxygen fraction, with expiratory volumes from 70 to 170 mL at 10 mL intervals and with the sevoflurane dial at 2, 2.2, 2.4 and 2.6 vol%; the recorded results were compared with the values calculated mathematically for each event. Results: the measurements taken at the mask agree with the calculations of the mathematical model. The expired sevoflurane concentration is slightly higher in each event than that found in the mathematical model; it is not possible to guarantee the absence of rebreathing on the basis of the measurements taken.
Abstract:
Advantage is a garment company legally incorporated in 2004 that stands out for the design and quality of its products. Thanks to the collaboration of the company's manager, who provided very valuable information, it was possible to develop the company's export plan, in which, after evaluating a series of variables, the market segment and the type of customer to which the company will export were identified. In this process, not only variables related to exporting were taken into account but also variables that could assess the internal state of the company. From the results of this evaluation, some recommendations were suggested that are necessary before starting the export plan, which will define the target market to begin with and the steps to follow in order to achieve a successful internationalization. We developed this work taking into account every factor that could affect this process in one way or another. The recommendation we consider most important is that the company concentrate primarily on making its internal changes, so that when it exports it can show not only a competitive advantage in the design and quality of its products, but can also guarantee production capacity and delivery times suited to the target market.
Abstract:
This work is part of the Thanatos Empresarial research project. The topic was chosen in order to find a common factor or pattern among companies in the Colombian mining sector, considering those that have been liquidated, are undergoing mandatory liquidation proceedings, or are still operating. The work is divided into three main parts. In the first we describe the problem the country faces due to the premature mortality of companies, a problem that, instead of decreasing, keeps growing, even though for more than four decades laws have been created and modernized to encourage the creation of sustainable companies and, likewise, to help those that find themselves forced to enter liquidation proceedings. The second part of the research draws on data collected by institutions, mainly state agencies, over the last 15 years, which show in detail the numbers of companies that have entered liquidation, concordat, reorganization and similar proceedings, and how, despite the laws, very few manage to survive this process. The results obtained by analyzing these data precisely corroborate this premise: the existing laws are not sufficient to curb company mortality in the national mining sector. Companies do not make decisions on their own; at their helm are different people who, like captains of a ship, steer their course, and depending on the decisions taken by management and senior leadership the company will travel the road of success or of failure. Unfortunately, many Colombian entrepreneurs decide to run their companies without seeking help from bodies created to assist them and to prevent corporate crises, such as the chambers of commerce, and consequently, owing to inexperience or to an excess of risky decision-making, they end up sinking the company and putting in difficulty everyone who interacted with it in one way or another. This research becomes relevant when people who want to start a company seek information in advance about the problems they may face when entering the market; in this work they will find a valuable crisis-management tool that serves as a guide to avoid repeating the pattern that causes the premature death of many companies in the mining sector and, more generally, in the Colombian economy.
Abstract:
Objectives: to develop procedures and techniques integrated into instructional systems to help modify and prevent the learning problems associated with cognitive impulsivity; to compare the effectiveness of self-instruction training and problem-solving training in modifying impulsivity and improving the child's academic performance; and to evaluate the effects of some contributions of cognitive strategy instruction (CSI) systems with respect to the generalization and maintenance of results, under the hypothesis that the intervention programs will not produce changes in the child's social behavior. Sample: two fifth-grade and two sixth-grade classrooms (EGB) of the C. P. Miquel Porcel school in Palma, during the 1990-91 academic year. From an initial sample of 81 students, 21 were selected (10 from fifth grade and 11 from sixth) because they were classified as impulsive, with moderate-to-low academic performance accompanied by learning difficulties not directly attributable to neuropsychological or socio-familial factors. The subjects were randomly assigned to three groups, controlling for grade and sex: 7 to the self-instruction program (AI), 7 to the problem-solving program (SP), and 7 to the control group, with no treatment. The work is structured in three main parts. The first deals with theoretical aspects of reflectivity-impulsivity (R-I) from the perspective of cognitive styles. The second part establishes the relationships between R-I and education, examining the new possibilities for conceptualization and modification within cognitive strategy instruction (CSI) models. The third part applies an experimental design to test the effectiveness of this relationship between the cognitive-behavioral perspective and CSI. Instruments: 1. The Cairns and Cammock Matching Familiar Figures Test (MFF-20, 1978), to measure reflectivity-impulsivity. 2. The Raven Progressive Matrices Test (special series), to measure intelligence understood as analogical reasoning. 3. An objective academic achievement test (POR), based on the school's curriculum, to measure reading comprehension, written expression, solving of elementary problems, calculation, etc. 4. A classroom behavior assessment questionnaire (CECA), addressed to teachers, to measure behavior, basic skills and academic performance. 5. An initial academic achievement test (PIN), administered only to the experimental groups. The global scores of the first four instruments were used to select the experimental sample; the first and the last three were used for the pre- and post-treatment assessments; and the first, together with a shortened version of the third and the fifth, was used for the follow-up assessment. The last three instruments were developed by the author. A token-economy system was used to reward attention and good behavior during the application of the treatments. The initial sample was analyzed with the following procedures: an AxB factorial design with two levels per factor, analyzed with a two-way analysis of variance; a one-way analysis of variance for the four groups of the traditional system, with post hoc contrasts carried out using the Student-Newman-Keuls test; and a comparison of means with a t-test for the PI. The correlational design analyzed the relationships of the MFF-20 and Raven scores with the POR and CECA scores, and the reliability of the CECA was assessed with the test-retest method.
The experimental sample was analyzed with the following procedures: an AxB within-subjects factorial design with repeated measures on factor B. Factor A, the group factor, has three levels: AI, SP and CN. Factor B, the test factor, has two or three levels: pre-treatment assessment, post-treatment assessment and, for some measures, follow-up assessment. A multivariate analysis of variance (MANOVA) was performed on the various measures; when the group x test interactions were significant, simple-type contrast analyses were carried out. For the measures where the interactions were not significant but the between-subjects variable (group factor) and/or the within-subjects variability (test factor) were, ANOVAs were applied for each factor and the contrasts were analyzed with the Student-Newman-Keuls test. Results: the three hypotheses concerning the initial sample were confirmed. As for the experimental design, although H4 and H5 were fulfilled to some extent, the three exploratory hypotheses considered of greatest interest were only partially fulfilled. The programs were effective in modifying R-I and in maintaining the results. However, the results regarding the improvement of academic performance were acceptable only for the global measures of the POR and the PIN and were not maintained at follow-up. Moreover, H2 and H3 were statistically confirmed only for errors, although the trend for the PI is the same. In any case, for latencies the problem-solving program proved slightly superior to the self-instruction program both at post-treatment and at follow-up. Although several points in the conception of R-I remain unclear, the efforts made in both the conceptual and the methodological aspects have served to confirm its repercussions within the educational field. The concern to incorporate these advances within the teaching-to-think orientation is palpable. The problem lies in considering not only reforms of curricular content, which are necessary and important, but also the possibilities of modifying instructional systems so that greater emphasis is placed on fostering cognitive processes.
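As an illustration of the simplest of the contrasts described above, a one-way comparison of the three groups (AI, SP, CN) on a single outcome can be sketched as follows. The scores below are invented placeholders, only meant to show the form of the test, not the study's data or its full repeated-measures MANOVA.

```python
import numpy as np
from scipy.stats import f_oneway

# Invented MFF-20 error scores for the three groups (7 subjects each),
# used only to illustrate a one-way ANOVA contrast between AI, SP and the control group.
rng = np.random.default_rng(42)
ai = rng.normal(8, 2, 7)       # self-instruction training
sp = rng.normal(7, 2, 7)       # problem-solving training
cn = rng.normal(12, 2, 7)      # untreated control
f_stat, p_value = f_oneway(ai, sp, cn)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```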