1000 results for LR-WPAN


Relevance:

10.00%

Abstract:

This study was conducted to evaluate the effects of liming and phosphate fertilization on the growth of soybean and of the weed capim-marmelada (alexandergrass), and the consequences of these practices for the interference relations between the weed and the crop. The experiment was carried out in a greenhouse over a period of 49 days, using 4-litre pots filled with substrate taken from a dystrophic Latossolo Roxo (LR). A completely randomized design with four replications was adopted. Treatments were arranged in a 2x3x4 factorial scheme: two liming levels (with and without lime), three vegetation conditions in the pots (soybean grown alone, the weed grown alone, and the two species growing together) and four rates of phosphorus applied to the substrate (0, 50, 100 and 200 ppm). Liming increased the height, number of trifoliate leaves, chlorophyll a and b contents, dry biomass and leaf area of the soybean plants, and decreased the height, number of tillers and dry matter accumulation of the capim-marmelada plants. The interference imposed by capim-marmelada reduced the height, number of trifoliate leaves, chlorophyll a content, dry matter accumulation and leaf area of the soybean plants. Conversely, the competition imposed by soybean decreased the height, number of tillers, chlorophyll a and b contents, dry matter accumulation and leaf area of the capim-marmelada plants. Phosphate fertilization, in turn, increased the height, number of tillers, dry matter accumulation and leaf area of the capim-marmelada plants.

Relevance:

10.00%

Abstract:

With the objective of evaluating the leaching potential of the herbicide imazaquin in soil columns after the application of different liming levels to samples of a Latossolo Vermelho Distrófico (LV, sandy) and a Latossolo Roxo Distroférrico (LR, clayey), two greenhouse trials were conducted from March to December 1999. The trials consisted of applying imazaquin (150 g ha-1) to the top of the columns, whose soils, after receiving the different liming levels, presented different pH values. A rainfall of 30 mm (LV) or 90 mm (LR) was then simulated at the top of the columns. Three days later, bioassays with cucumber (LV) and with lentil and sorghum (LR) were installed, with the seeds distributed in a furrow along the columns, to assess imazaquin leaching. The results showed that increasing the liming level significantly increased the leaching potential of imazaquin in the columns of both soils. Sorghum also proved to be more sensitive than lentil as a bioindicator of imazaquin activity.

Relevance:

10.00%

Abstract:

Toxic cyanobacteria are common in Portuguese freshwaters and the most common toxins are microcystins. The occurrence of microcystin-LR (MCYST-LR) has been reported since 1990, and a significant number of reservoirs used for drinking water reach high levels of this toxin. Aquatic animals that live in eutrophic freshwater ecosystems may be killed by microcystins, but in many cases the toxicity is sublethal, so the animals survive long enough to accumulate the toxins and transfer them along the food chain. Among these, edible mollusks, fish and crayfish are especially important because they are harvested and sold for human consumption. Mussels that live in estuarine waters and rivers where toxic blooms occur may accumulate toxins without significant acute toxic effects. In this study, data are presented on the dynamics of the accumulation and depuration of MCYST-LR in mussels. The toxin is readily accumulated and persists in the shellfish for several days after contact. In crayfish the toxin accumulates mainly in the gut and is cleared very slowly. In carp, although the levels of toxin found in naturally caught specimens were not very high, some toxin was found in the muscle and not only in the viscera. This raises the problem of toxin accumulation by fish and possible transfer through the food chain. The data gathered from these experiments and from naturally caught specimens are analyzed in terms of the risk for human consumption. The occurrence of microcystins in tap water and the incidence of toxic cyanobacteria at freshwater beaches in Portugal are reported. The Portuguese National Monitoring Program of cyanobacteria is mentioned and its implications are discussed.

Relevance:

10.00%

Abstract:

The overall aim of the thesis is to build an understanding of the concept of learning as a political and adult-education concept, but above all as an individual concept, as it emerges in individual life stories. The research questions are: How is learning described in (political) economic strategies? How is learning described in adult-education theory? How does learning emerge in the individual's life story? The research approach is hermeneutic-narrative and the method is narrative. Through three individual life stories, I want to see whether and how individual learning is mirrored in the larger whole made up of (political) economic strategies concerning learning. My aim is also to use the life stories to create a larger, shared story about lifelong learning for the individual. The results of the study show the individual's learning as a process, as change and development. The learning described in the economic strategies is indeed mirrored in the individual life stories, but the direction of this mirroring is not given. Does it run from the large to the small, or is it the other way around? The study came to show not only learning as lifelong, but a learning that takes place through and in the living of one's life. The human endeavour is to develop and change and to live one's life in the best possible way. It is life that teaches us.

Relevance:

10.00%

Abstract:

This report introduces the ENPI project “EMIR - Exploitation of Municipal and Industrial Residues”, which was executed in co-operation between Lappeenranta University of Technology (LUT), Saint Petersburg State University of Economics (SPbSUE), Saint Petersburg State Technical University of Plant Polymers (SPbSTUPP) and industrial partners from both the Leningrad Region (LR), Russia, and Finland. The main targets of the research were to identify possible deinking sludge management scenarios in co-operation with partner companies, to compare the sustainability of the alternatives, and to provide recommendations to companies in the Leningrad Region on how best to manage deinking sludge. During the literature review, 24 deinking sludge utilization possibilities were identified, the majority falling under material recovery. Furthermore, 11 potential utilizers of deinking sludge were found within the search area determined by the transportation cost. Each potential utilizer was contacted directly in order to establish co-operation for deinking sludge utilization. Finally, four companies, namely “Finnsementti”, a cement plant in Finland (S1); “Saint-Gobain Weber”, a lightweight aggregate plant in Finland (S2); “LSR-Cement”, a cement plant in LR (S3); and “Rockwool”, a stone wool plant in LR (S4), were seen as the most promising partners and were included in the economic and environmental assessments. Economic assessment using cost-benefit analysis (CBA) indicated that substitution of heavy fuel oil with dry deinking sludge in S2 was the most feasible option, with a benefit/cost ratio (BCR) of 3.6 when all the sludge was utilized. At the same time, the use of 15% of the total sludge amount (the amount that could potentially be treated in the scenario) resulted in a BCR of only 0.16. The use of dry deinking sludge in the production of cement (S3) is a slightly more feasible option, with a BCR of 1.1. The use of sludge in stone wool production is feasible only when all the deinking sludge is used and burned in an existing incineration plant. The least economically feasible utilization possibility is the use of sludge in cement production in Finland (S1), due to the high gate fee charged. Environmental assessment was performed applying internationally recognized life cycle assessment (LCA) methodologies: ISO 14040 and ISO 14044. The results of a consequential LCA showed that only S1 and S2 lead to a reduction of all environmental impacts within the chosen impact categories compared to the baseline scenario in which deinking sludge is landfilled. For S1, the largest reduction, 13%, was achieved for global warming potential (GWP), whereas for S2, decreases in abiotic depletion potential (ADP) by 1.7%, eutrophication potential (EP) by 1.8%, and GWP by 2.1% were documented. In S3, the most notable increases were in ADP and acidification potential (AP), by 2.6% and 1.5% respectively, while GWP was reduced by 12%, the largest reduction among all the impact categories. In S4, ADP and AP increased by 2.3% and 2.1% respectively, whereas ozone depletion potential (ODP) was reduced by 25%. During the LCA it was noticed that substitution of fuels (S1 and S2) causes a greater reduction of environmental impact than substitution of raw materials (S3 and S4).
Despite a number of economically and environmentally acceptable deinking sludge utilization methods being assessed in the research, the evaluation of bottlenecks and communication with company representatives revealed that the availability of the raw materials currently consumed, and the risks of technological problems resulting from sludge utilization, limited the willingness of the industrial partners to start using deinking sludge. The research results are of high value for decision-makers at existing paper mills, since they provide insights into alternatives to the deinking sludge utilization practices already applied. Thus, the research results support maximum economic and environmental value recovery from waste paper utilization.
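For orientation, the benefit/cost ratios quoted above follow the usual CBA definition: the ratio of total (discounted) benefits to total costs of a scenario. A minimal Python sketch, with purely hypothetical figures rather than the study's data:

def benefit_cost_ratio(benefits, costs):
    # BCR = total benefits / total costs; values assumed already discounted.
    return sum(benefits) / sum(costs)

# Hypothetical two-year cash flows (EUR), for illustration only:
print(round(benefit_cost_ratio([90_000, 90_000], [30_000, 20_000]), 1))  # 3.6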

Relevance:

10.00%

Abstract:

This thesis introduces an extension of Chomsky's context-free grammars equipped with operators for referring to left and right contexts of strings. The new model is called grammars with contexts. The semantics of these grammars are given in two equivalent ways — by language equations and by logical deduction, where a grammar is understood as a logic for the recursive definition of syntax. The motivation for grammars with contexts comes from an extensive example that completely defines the syntax and static semantics of a simple typed programming language. Grammars with contexts maintain the most important practical properties of context-free grammars, including a variant of the Chomsky normal form. For grammars with one-sided contexts (that is, either left or right), there is a cubic-time tabular parsing algorithm, applicable to an arbitrary grammar. The time complexity of this algorithm can be improved to quadratic, provided that the grammar is unambiguous, that is, it allows only one parse for every string it defines. A tabular parsing algorithm for grammars with two-sided contexts has fourth-power time complexity. For these grammars there is a recognition algorithm that uses a linear amount of space. For certain subclasses of grammars with contexts there are low-degree polynomial parsing algorithms. One of them is an extension of the classical recursive descent for context-free grammars; the version for grammars with contexts still works in linear time, like its prototype. Another algorithm, with time complexity varying from linear to cubic depending on the particular grammar, adapts deterministic LR parsing to the new model. If all context operators in a grammar define regular languages, then such a grammar can be transformed to an equivalent grammar without any context operators. This allows one to represent the syntax of languages more succinctly by utilizing context specifications. Linear grammars with contexts turned out to be non-trivial already over a one-letter alphabet. This fact leads to some undecidability results for this family of grammars.
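To give a flavour of the context operators (using a made-up toy construction, not the formalism of the thesis), a recursive-descent style recognizer can be written so that a rule both consumes input at the current position and checks a predicate on the left context, that is, on the prefix already read. A minimal Python sketch for the toy language of strings with an even number of a's followed by one b, where the rule for 'b' additionally requires its left context to be in (aa)*:

import re

def left_context_in(pattern, s, pos):
    # True when the prefix s[:pos] (the left context) belongs to the regular language 'pattern'.
    return re.fullmatch(pattern, s[:pos]) is not None

def parse_b(s, pos):
    # Toy rule: B -> 'b', with the added requirement that the left context is in (aa)*.
    if left_context_in(r"(aa)*", s, pos) and pos < len(s) and s[pos] == "b":
        return pos + 1      # success: position after the matched symbol
    return None             # failure

def parse_s(s, pos=0):
    # Toy rule: S -> 'a' S | B.
    if pos < len(s) and s[pos] == "a":
        result = parse_s(s, pos + 1)
        if result is not None:
            return result
    return parse_b(s, pos)

def recognizes(s):
    return parse_s(s, 0) == len(s)

print(recognizes("aab"))   # True: 'b' is preceded by an even number of a's
print(recognizes("ab"))    # False: odd number of a's in the left context

The thesis's linear-time recursive descent for grammars with contexts is of course more refined than this brute-force prefix check; the sketch only illustrates the idea of a rule consulting its left context.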

Relevance:

10.00%

Abstract:

In 2003, prostate cancer (PCa) is estimated to be the most commonly diagnosed cancer and third leading cause of cancer death in Canada. During PCa population screening, approximately 25% of patients with a normal digital rectal examination (DRE) and intermediate serum prostate specific antigen (PSA) level have PCa. Since all patients typically undergo biopsy, it is expected that approximately 75% of these procedures are unnecessary. The purpose of this study was to compare the degree of efficacy of clinical tests and algorithms in stage II screening for PCa while preventing unnecessary biopsies from occurring. The sample consisted of 201 consecutive men who were suspected of PCa based on the results of a DRE and serum PSA. These men were referred for venipuncture and transrectal ultrasound (TRUS). Clinical tests included TRUS, age-specific reference range PSA (Age-PSA), prostate specific antigen density (PSAD), and free-to-total prostate specific antigen ratio (%fPSA). Clinical results were evaluated individually and within algorithms. Cutoffs of 0.12 and 0.15 ng/ml/cc were employed for PSAD. Cutoffs that would provide a minimum sensitivity of 0.90 and 0.95, respectively, were utilized for %fPSA. Statistical analysis included ROC curve analysis, calculated sensitivity (Sens), specificity (Spec), and positive likelihood ratio (LR), with corresponding confidence intervals (CI). The %fPSA, at a 23% cutoff ({Sens=0.92; CI, 0.06}, {Spec=0.41; CI, 0.09}, {LR=1.56; CI, 0.11}), proved to be the most efficacious independent clinical test. The combination of PSAD (cutoff 0.15 ng/ml/cc) and %fPSA (cutoff 23%) ({Sens=0.93; CI, 0.06}, {Spec=0.38; CI, 0.08}, {LR=1.50; CI, 0.10}) was the most efficacious clinical algorithm. This study advocates the use of %fPSA at a cutoff of 23% when screening patients with an intermediate serum PSA and benign DRE.
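For reference, the positive likelihood ratio reported above is the standard LR+ = sensitivity / (1 - specificity); plugging in the values quoted for %fPSA at the 23% cutoff reproduces the figure of 1.56 (plain Python, values copied from the abstract):

# LR+ = Sens / (1 - Spec)
sens = 0.92   # sensitivity of %fPSA at the 23% cutoff
spec = 0.41   # specificity of %fPSA at the 23% cutoff
print(round(sens / (1.0 - spec), 2))   # 1.56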

Relevance:

10.00%

Abstract:

This paper explores the cognitive functions of the Reality Status Evaluation (RSE) system in our experiences of narrative mediated messages (NMM) (fictional, narrative, audio-visual, one-way input and moving picture messages), such as fictional TV programs and films. We regard reality in mediated experiences as a special mental and emotional construction and a multi-dimensional concept. We argue that viewers' sense of reality in NMM is influenced by many factors, with "real - on" as the default value. Some of these factors function as primary mental processes, including the content realism factors of those messages such as Factuality (F), Social Realism (SR), Life Relevance (LR), and Perceptual Realism - involvement (PR), which have direct impacts on reality evaluations. Other factors, such as Narrative Meaning (NM), Emotional Responses, and the personality trait Absorption (AB), influence reality evaluations directly or through the mediation of these main dimensions. I designed a questionnaire to study this theoretical construction. I developed items to form scales and sub-scales measuring viewers' subjective experiences of reality evaluations and these factors. Pertinent statistical techniques, such as internal consistency and factorial analysis, were employed to make revisions and improve the quality of the questionnaire. In the formal experiment, after viewing two short films, which were selected as high or low narrative structure messages from previous experiments, participants were required to answer the questionnaire, the Absorption questionnaire, and SAM (Self-Assessment Manikin, measuring immediate emotional responses). Results were analyzed using EQS structural equation modeling (SEM) and discussed in terms of latent relations among these subjective factors in mediated experience. The present results supported most of my theoretical hypotheses. In NMM, three main factors, or dimensions, could be extracted in viewers' subjective reality evaluations: Social Realism (combined with Factuality), Life Relevance and Perceptual Realism. I designed two ways to assess viewers' understanding of narrative meanings in mediated messages, questionnaire (NM-Q) and rating (NM-R) measurements, and their significant influence on reality evaluations was supported in the final EQS models. Particularly in high story structure messages, the effect of Narrative Meaning (NM) can rarely be explained by these dimensions of reality evaluations alone. Also, Empathy seems to play a more important role in the RSE of low story structure messages. In addition, I focused on two other factors pertinent to RSE in NMM, the personality trait Absorption and Emotional Responses (including two dimensions: Valence and Intensity). The final model results partly supported my theoretical hypotheses about the relationships among Absorption (AB), Social Realism (SR) and Life Relevance (LR), and about the immediate impact of Emotional Responses on Perceptual Realism (PR).
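The internal-consistency analysis mentioned above is conventionally reported as Cronbach's alpha; whether this particular study used alpha is not stated, so the following is only a generic Python sketch of how such a scale reliability is computed from item-level responses (hypothetical data layout):

import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, n_items) array of scores for the items of one scale.
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the scale total
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)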

Relevance:

10.00%

Abstract:

In the context of multivariate linear regression (MLR) models, it is well known that commonly employed asymptotic test criteria are seriously biased towards overrejection. In this paper, we propose a general method for constructing exact tests of possibly nonlinear hypotheses on the coefficients of MLR systems. For the case of uniform linear hypotheses, we present exact distributional invariance results concerning several standard test criteria. These include Wilks' likelihood ratio (LR) criterion as well as trace and maximum root criteria. The normality assumption is not necessary for most of the results to hold. Implications for inference are two-fold. First, invariance to nuisance parameters entails that the technique of Monte Carlo tests can be applied on all these statistics to obtain exact tests of uniform linear hypotheses. Second, the invariance property of the latter statistic is exploited to derive general nuisance-parameter-free bounds on the distribution of the LR statistic for arbitrary hypotheses. Even though it may be difficult to compute these bounds analytically, they can easily be simulated, hence yielding exact bounds Monte Carlo tests. Illustrative simulation experiments show that the bounds are sufficiently tight to provide conclusive results with a high probability. Our findings illustrate the value of the bounds as a tool to be used in conjunction with more traditional simulation-based test methods (e.g., the parametric bootstrap) which may be applied when the bounds are not conclusive.
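The Monte Carlo test technique invoked here (in the sense of Dwass, 1957, and Barnard, 1963) replaces the unknown or intractable null distribution of a pivotal statistic by simulation and uses a rank-based p-value. A generic Python sketch of that step, not the paper's specific implementation:

import numpy as np

def monte_carlo_pvalue(stat_obs, simulate_stat, n_rep=99, rng=None):
    # stat_obs: test statistic computed on the observed data.
    # simulate_stat: callable drawing one statistic under the null hypothesis.
    rng = np.random.default_rng() if rng is None else rng
    sims = np.array([simulate_stat(rng) for _ in range(n_rep)])
    # Rank-based p-value: (1 + number of simulated values >= observed) / (n_rep + 1)
    return (1.0 + np.sum(sims >= stat_obs)) / (n_rep + 1.0)

When the statistic is nuisance-parameter-free under the null, this p-value yields an exact test, for example rejecting at level 0.05 with n_rep = 99 whenever the p-value is at most 0.05.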

Relevance:

10.00%

Abstract:

This paper proposes finite-sample procedures for testing the SURE specification in multi-equation regression models, i.e. whether the disturbances in different equations are contemporaneously uncorrelated or not. We apply the technique of Monte Carlo (MC) tests [Dwass (1957), Barnard (1963)] to obtain exact tests based on standard LR and LM zero correlation tests. We also suggest a MC quasi-LR (QLR) test based on feasible generalized least squares (FGLS). We show that the latter statistics are pivotal under the null, which provides the justification for applying MC tests. Furthermore, we extend the exact independence test proposed by Harvey and Phillips (1982) to the multi-equation framework. Specifically, we introduce several induced tests based on a set of simultaneous Harvey/Phillips-type tests and suggest a simulation-based solution to the associated combination problem. The properties of the proposed tests are studied in a Monte Carlo experiment which shows that standard asymptotic tests exhibit important size distortions, while MC tests achieve complete size control and display good power. Moreover, MC-QLR tests performed best in terms of power, a result of interest from the point of view of simulation-based tests. The power of the MC induced tests improves appreciably in comparison to standard Bonferroni tests and, in certain cases, outperforms the likelihood-based MC tests. The tests are applied to data used by Fischer (1993) to analyze the macroeconomic determinants of growth.
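As an illustration of the kind of zero-correlation statistic whose null distribution is then simulated, the standard Breusch-Pagan LM statistic for contemporaneously uncorrelated disturbances can be computed from the OLS residuals of the individual equations; a minimal, generic Python sketch (not the paper's code):

import numpy as np

def lm_zero_correlation(residuals):
    # residuals: (T, p) array of OLS residuals, one column per equation.
    T, p = residuals.shape
    corr = np.corrcoef(residuals, rowvar=False)   # p x p residual correlation matrix
    below_diag = corr[np.tril_indices(p, k=-1)]   # pairwise correlations r_ij, i > j
    return T * np.sum(below_diag ** 2)            # asymptotically chi2 with p(p-1)/2 df

In the Monte Carlo version, this statistic is recomputed on data simulated under the null and the rank-based p-value is used instead of the asymptotic chi-squared critical value.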

Relevance:

10.00%

Abstract:

The BRAD group is composed of: Sylvie Belleville, Gina Bravo, Louise Demers, Philippe Landreville, Louisette Mercier, Nicole Paquet, Hélène Payette, Constant Rainville, Bernadette Ska and René Verreault.

Relevance:

10.00%

Abstract:

With the worldwide increase in the frequency of cyanobacterial (CB) blooms, some of which produce cyanotoxins (CT), there is a pressing need for a method capable of rapidly detecting and quantifying as many CT as possible. Such a method would allow daily monitoring of the toxicity of water bodies contaminated by CB, so that appropriate alerts could be issued quickly to protect public health. A new technology using laser diode thermal desorption (LDTD) coupled with atmospheric pressure chemical ionization (APCI) and tandem mass spectrometry (MS/MS) has already proven itself, with analysis times on the order of a few seconds. Analytes are desorbed by LDTD, ionized in the gas phase by APCI and detected by MS/MS. There is therefore no chromatographic separation, and sample preparation before analysis is minimal, depending on the complexity of the matrix containing the analytes. Of the four CT tested (microcystin-LR, cylindrospermopsin, saxitoxin and anatoxin-a (ANA-a)), only ANA-a produced the significant desorption required to develop an analytical method with the LDTD-APCI interface. The high polarity or high molecular weight of the other CT probably prevents their desorption. Optimization of the instrumental parameters, while accounting for the isobaric interference of the amino acid phenylalanine (PHE) in the MS/MS detection of ANA-a, yielded a detection limit for ANA-a on the order of 1 µg/L. This limit was evaluated in a matrix similar to a real one, showing that LDTD could be used to monitor ANA-a in natural waters according to the applicable environmental guidelines (1 to 12 µg/L). The isobaric interference of PHE could be avoided because of its very low desorption with the LDTD-APCI interface: a PHE concentration as high as 500 µg/L caused no interference with the ANA-a signal.

Relevance:

10.00%

Abstract:

In recent years, research on wireless mesh networks (WMNs) has attracted considerable interest from the telecommunications research community. This is due to the many advantages that WMN technology offers, such as easy and inexpensive deployment, reliable connectivity and flexible interoperability with other existing networks (Wi-Fi, WiMax, cellular and sensor networks, etc.). However, several problems remain to be solved, such as scalability, security, quality of service (QoS) and resource management. These problems persist for WMNs, all the more so as the number of users keeps growing. Existing protocols therefore need to be improved, or new ones designed. The objective of our research is to address some of the current limitations of WMNs and to improve the QoS of real-time multimedia applications (for example, voice). The research work of this thesis is divided into three main parts: traffic admission control, traffic differentiation, and adaptive channel reassignment in the presence of handoff traffic. In the first part, we propose a distributed admission control mechanism based on the concept of cliques (a clique is a subset of logical links that interfere with one another) in a multi-hop, multi-radio, multi-channel network, called RCAC. In particular, we propose an analytical model that computes the appropriate traffic admission ratio and guarantees that the packet loss probability in the network does not exceed a predefined threshold. The RCAC mechanism ensures the required QoS for incoming flows without degrading the QoS of existing flows. It also ensures QoS in terms of end-to-end delay for the various flows. The second part deals with service differentiation in the IEEE 802.11s protocol in order to provide better QoS, particularly for applications with timing constraints (for example, voice and videoconferencing). In this respect, we propose a mechanism that adjusts time slots according to the service class, ED-MDA (Enhanced Differentiated-Mesh Deterministic Access), combined with an efficient admission control algorithm, EAC (Efficient Admission Control), to allow high and efficient resource utilization. The EAC mechanism takes handoff traffic into account and gives it higher priority than new traffic, in order to minimize interruptions of ongoing communications. In the third part, we focus on minimizing the overhead and re-routing delay of mobile users and/or multimedia applications by reassigning channels in multi-radio WMNs (MR-WMNs). First, we propose an optimization model that maximizes throughput, improves fairness among users and minimizes the overhead due to call handoff. This model was solved with the CPLEX software for a limited number of nodes. Second, we develop centralized heuristics/meta-heuristics to solve this model for real-size networks. Finally, we propose an algorithm for reassigning channels to interfaces in real time and in a conservative way.
The goal of this algorithm is to minimize the overhead and re-routing delay, especially for the dynamic traffic generated by handoff calls. This mechanism is then further improved by taking load balancing among cliques into account.
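The clique-based admission idea can be illustrated very roughly (this is not the RCAC analytical model itself, only the flavour of the constraint) by checking that, for every interference clique of logical links on the new flow's path, the added airtime keeps total clique utilization below a target ratio tied to the packet-loss threshold. A Python sketch with hypothetical names and numbers:

def admit_flow(flow_links, flow_airtime, cliques, utilization, max_ratio=0.95):
    # flow_links: links the new flow would traverse.
    # flow_airtime: fraction of channel time the flow adds on each of its links.
    # cliques: list of sets of mutually interfering links.
    # utilization: dict mapping link -> current airtime fraction.
    # max_ratio: hypothetical admission threshold derived from the loss target.
    for clique in cliques:
        load = sum(utilization.get(link, 0.0) for link in clique)
        added = flow_airtime * len(clique & set(flow_links))
        if load + added > max_ratio:
            return False      # admitting the flow would overload this clique
    return True

# Hypothetical topology: links 1, 2, 3 interfere with each other; links 3 and 4 interfere.
cliques = [{1, 2, 3}, {3, 4}]
utilization = {1: 0.3, 2: 0.3, 3: 0.2, 4: 0.1}
print(admit_flow([2, 3], 0.1, cliques, utilization))  # False: clique {1, 2, 3} would reach 1.0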

Relevance:

10.00%

Abstract:

In this thesis, a detailed attempt has been made to understand the general hydrography of the upper 300 m of the water column in the eastern Arabian Sea and the western Bay of Bengal, the two contrasting basins of the northern Indian Ocean, using recently collected data sets of the Marine Research-Living Resources (MR-LR) assessment programme, funded by the Department of Ocean Development, from various cruises pertaining to different seasons. First, the general hydrography of the west and east coasts of India is covered in the context of mixed-layer processes, and the materials and methods are described. To compare the hydrography of the Arabian Sea (AS) and the Bay of Bengal (BOB), a single mixed layer depth (MLD) definition valid for both basins is essential; 275 CTD profiles were used for this purpose, and the various MLD criteria were compared against the actual MLD. The monthly evolution of the MLD, the barrier layer thickness and the role of atmospheric forcing in the dynamics of the mixed layer in the AS and BOB were studied. The general hydrography along the west coast of India is then described, and the upwelling/downwelling and winter cooling processes are addressed in the context of chemical and biological parameters. Finally, the general hydrography of the Bay of Bengal is covered. The most striking features in the hydrography are the signature of an anticyclonic subtropical gyre during the spring intermonsoon and a cold-core eddy during the winter monsoon. The Typical Tropical Structure (TTS) of the euphotic layer was also investigated.
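As an illustration of one widely used MLD criterion (a near-surface temperature threshold; the abstract does not state which of the compared criteria was finally adopted), the mixed layer depth can be read off a CTD profile as the first depth at which temperature falls a fixed amount below its value at a shallow reference depth. A Python sketch with commonly used, but here assumed, parameter values:

import numpy as np

def mld_temperature_criterion(depth, temperature, ref_depth=10.0, delta_t=0.5):
    # depth, temperature: 1-D arrays for one CTD profile, depth increasing downward (m, deg C).
    # ref_depth, delta_t: assumed choices (10 m reference depth, 0.5 deg C threshold).
    depth = np.asarray(depth, dtype=float)
    temperature = np.asarray(temperature, dtype=float)
    t_ref = np.interp(ref_depth, depth, temperature)
    below = depth >= ref_depth
    colder = temperature[below] <= t_ref - delta_t
    if not colder.any():
        return float(depth.max())                   # mixed down to the bottom of the profile
    return float(depth[below][colder.argmax()])     # first level meeting the criterion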

Relevance:

10.00%

Abstract:

The super resolution problem is an inverse problem and refers to the process of producing a high resolution (HR) image from one or more low resolution (LR) observations. It includes upsampling the image, thereby increasing the maximum spatial frequency, and removing degradations that arise during image capture, namely aliasing and blurring. The work presented in this thesis is based on learning-based single image super-resolution. In learning-based super-resolution algorithms, a training set or database of available HR images is used to construct the HR version of an image captured with an LR camera. In the training set, images are stored as patches or as coefficients of feature representations such as the wavelet transform, DCT, etc. Single frame image super-resolution can be used in applications where a database of HR images is available. The advantage of this method is that by skilfully creating a database of suitable training images, one can improve the quality of the super-resolved image. A new super resolution method based on the wavelet transform is developed; it performs better than conventional wavelet transform based methods and standard interpolation methods. Super-resolution techniques based on a skewed anisotropic transform called the directionlet transform are developed to convert a low resolution image of small size into a high resolution image of large size. The super-resolution algorithm not only increases the size but also reduces the degradations that occur during image capture. This method outperforms the standard interpolation methods and the wavelet methods, both visually and in terms of SNR values. Artifacts such as aliasing and ringing effects are also eliminated in this method. The super-resolution methods are implemented using both critically sampled and oversampled directionlets. The conventional directionlet transform is computationally complex, hence the lifting scheme is used for the implementation of directionlets. The new single image super-resolution method based on the lifting scheme reduces computational complexity and thereby reduces computation time. The quality of the super-resolved image depends on the type of wavelet basis used; a study is conducted to find the effect of different wavelets on the single image super-resolution method. Finally, this new method, implemented on grey-scale images, is extended to colour images and noisy images.
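The learning-based idea described above, storing HR training content as patches and retrieving it from LR patches, can be caricatured in a few lines of Python (nearest-neighbour lookup on raw pixel patches; the thesis itself works with wavelet and directionlet coefficients and a lifting implementation, which this sketch does not attempt):

import numpy as np

def build_patch_database(lr_images, hr_images, p=3, scale=2):
    # Collect co-located (LR patch, HR patch) pairs from registered training image pairs.
    lr_patches, hr_patches = [], []
    for lr, hr in zip(lr_images, hr_images):
        for i in range(lr.shape[0] - p):
            for j in range(lr.shape[1] - p):
                lr_patches.append(lr[i:i + p, j:j + p].ravel())
                hr_patches.append(hr[i * scale:(i + p) * scale, j * scale:(j + p) * scale])
    return np.array(lr_patches), hr_patches

def super_resolve_patch(lr_patch, lr_database, hr_database):
    # Return the stored HR patch whose LR counterpart is closest to the query LR patch.
    distances = np.linalg.norm(lr_database - lr_patch.ravel(), axis=1)
    return hr_database[int(distances.argmin())]

A full reconstruction would tile the LR input into overlapping patches, look each one up, and blend the returned HR patches; as the abstract notes, the resulting quality depends on how well the training database is chosen.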