977 results for Problem Resolution
Abstract:
In this paper we investigate the problem of cache resolution in a mobile peer-to-peer ad hoc network. In our vision, cache resolution should satisfy the following requirements: (i) it should result in low message overhead, and (ii) the information should be retrieved with minimum delay. We show that these goals can be achieved by splitting the one-hop neighbours into two sets based on the transmission range. The proposed approach reduces the number of messages flooded into the network to find the requested data. The scheme is fully distributed and comes at very low cost in terms of cache overhead. The experimental results are promising with respect to the metrics studied.
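The neighbour-splitting step described above can be sketched as follows — a minimal illustration only, assuming a simple Euclidean distance model and a hypothetical inner-range threshold, neither of which is specified in the abstract:

```python
import math

def split_neighbours(node, neighbours, inner_range):
    """Partition one-hop neighbours into two sets by distance:
    those within inner_range, and those beyond it (but still
    within transmission range)."""
    near, far = [], []
    for name, pos in neighbours.items():
        dist = math.dist(node, pos)
        (near if dist <= inner_range else far).append(name)
    return near, far

# Hypothetical topology: node at the origin, three neighbours.
near, far = split_neighbours(
    (0, 0), {"a": (1, 1), "b": (5, 0), "c": (0, 2)}, inner_range=3.0
)
```

A request could then be forwarded to the near set first and flooded to the far set only on a miss, which is one plausible reading of how the split reduces message overhead.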
Abstract:
The super-resolution problem is an inverse problem: the process of producing a high-resolution (HR) image from one or more low-resolution (LR) observations. It involves upsampling the image, thereby increasing the maximum spatial frequency, and removing degradations that arise during image capture, namely aliasing and blurring. The work presented in this thesis is based on learning-based single-image super-resolution. In learning-based super-resolution algorithms, a training set or database of available HR images is used to construct the HR image from an image captured with an LR camera. In the training set, images are stored as patches or as coefficients of feature representations such as the wavelet transform, the DCT, etc. Single-frame image super-resolution can be used in applications where a database of HR images is available. The advantage of this method is that, by skilfully creating a database of suitable training images, one can improve the quality of the super-resolved image. A new super-resolution method based on the wavelet transform is developed; it outperforms conventional wavelet-transform-based methods and standard interpolation methods. Super-resolution techniques based on a skewed anisotropic transform, the directionlet transform, are developed to convert a small low-resolution image into a large high-resolution image. The super-resolution algorithm not only increases the size but also reduces the degradations that occur during image capture. This method outperforms the standard interpolation methods and the wavelet methods, both visually and in terms of SNR values. Artifacts such as aliasing and ringing effects are also eliminated. The super-resolution methods are implemented using both critically sampled and oversampled directionlets. The conventional directionlet transform is computationally complex; hence, a lifting scheme is used for the implementation of directionlets.
The new single-image super-resolution method based on the lifting scheme reduces computational complexity and thereby computation time. The quality of the super-resolved image depends on the type of wavelet basis used; a study is conducted to find the effect of different wavelets on the single-image super-resolution method. Finally, the new method, implemented on grey-scale images, is extended to colour images and noisy images.
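The lifting implementation mentioned above can be illustrated with the simplest case, one level of the Haar wavelet — a sketch only; the thesis applies such lifting steps within directionlets, i.e. along skewed directions, which is not shown here:

```python
def haar_lifting_forward(signal):
    """One level of the Haar wavelet transform via lifting:
    split into even/odd samples, predict odds from evens
    (detail), then update evens with details (approximation)."""
    even, odd = signal[0::2], signal[1::2]
    detail = [o - e for e, o in zip(even, odd)]         # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]  # update step
    return approx, detail

def haar_lifting_inverse(approx, detail):
    """Invert the lifting steps in reverse order."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out

a, d = haar_lifting_forward([2.0, 4.0, 6.0, 8.0])
# Lifting steps are trivially invertible, so reconstruction is exact.
assert haar_lifting_inverse(a, d) == [2.0, 4.0, 6.0, 8.0]
```

The computational saving comes from replacing convolution-based filtering with these in-place predict/update steps, which is the point of using lifting for directionlets.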
Abstract:
We deal with the numerical solution of heat conduction problems featuring steep gradients. In order to solve the associated partial differential equation, a finite volume technique is used and unstructured grids are employed. A discrete maximum principle for triangulations of Delaunay type is developed. To capture thin boundary layers incorporating steep gradients, an anisotropic mesh adaptation technique is implemented. Computational tests are performed for an academic problem where the exact solution is known, as well as for a real-world problem: a computer simulation of the thermoregulation of premature infants.
Abstract:
The accurate prediction of the biochemical function of a protein is becoming increasingly important, given the unprecedented growth of both structural and sequence databanks. Consequently, computational methods are required to analyse such data in an automated manner to ensure genomes are annotated accurately. Protein structure prediction methods, for example, are capable of generating approximate structural models on a genome-wide scale. However, the detection of functionally important regions in such crude models, as well as structural genomics targets, remains an extremely important problem. The method described in the current study, MetSite, represents a fully automatic approach for the detection of metal-binding residue clusters applicable to protein models of moderate quality. The method involves using sequence profile information in combination with approximate structural data. Several neural network classifiers are shown to be able to distinguish metal sites from non-sites with a mean accuracy of 94.5%. The method was demonstrated to identify metal-binding sites correctly in LiveBench targets where no obvious metal-binding sequence motifs were detectable using InterPro. Accurate detection of metal sites was shown to be feasible for low-resolution predicted structures generated using mGenTHREADER where no side-chain information was available. High-scoring predictions were observed for a recently solved hypothetical protein from Haemophilus influenzae, indicating a putative metal-binding site.
Abstract:
Global climate and weather models tend to produce rainfall that is too light and too regular over the tropical ocean. This is likely because of convective parametrizations, but the problem is not well understood. Here, distributions of precipitation rates are analyzed for high-resolution UK Met Office Unified Model simulations of a 10 day case study over a large tropical domain (∼20°S–20°N and 42°E–180°E). Simulations with 12 km grid length and parametrized convection have too many occurrences of light rain and too few of heavier rain when interpolated onto a 1° grid and compared with Tropical Rainfall Measuring Mission (TRMM) data. In fact, this version of the model appears to have a preferred scale of rainfall around 0.4 mm h−1 (10 mm day−1), unlike observations of tropical rainfall. On the other hand, 4 km grid length simulations with explicit convection produce distributions much more similar to TRMM observations. The apparent preferred scale at lighter rain rates seems to be a feature of the convective parametrization rather than the coarse resolution, as demonstrated by results from 12 km simulations with explicit convection and 40 km simulations with parametrized convection. In fact, coarser resolution models with explicit convection tend to have even more heavy rain than observed. Implications for models using convective parametrizations, including interactions of heating and moistening profiles with larger scales, are discussed. One important implication is that the explicit convection 4 km model has temperature and moisture tendencies that favour transitions in the convective regime. Also, the 12 km parametrized convection model produces a more stable temperature profile at its extreme high-precipitation range, which may reduce the chance of very heavy rainfall. 
Further study is needed to determine whether unrealistic precipitation distributions are due to some fundamental limitation of convective parametrizations or whether parametrizations can be improved, in order to better simulate these distributions.
Abstract:
We show that the four-dimensional variational data assimilation method (4DVar) can be interpreted as a form of Tikhonov regularization, a very familiar method for solving ill-posed inverse problems. It is known from image restoration problems that L1-norm penalty regularization recovers sharp edges in the image more accurately than Tikhonov, or L2-norm, penalty regularization. We apply this idea from stationary inverse problems to 4DVar, a dynamical inverse problem, and give examples for an L1-norm penalty approach and a mixed total variation (TV) L1–L2-norm penalty approach. For problems with model error where sharp fronts are present and the background and observation error covariances are known, the mixed TV L1–L2-norm penalty performs better than either the L1-norm method or the strong constraint 4DVar (L2-norm) method. A strength of the mixed TV L1–L2-norm regularization is that in the case where a simplified form of the background error covariance matrix is used it produces a much more accurate analysis than 4DVar. The method thus has the potential in numerical weather prediction to overcome operational problems with poorly tuned background error covariance matrices.
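The stabilising effect of the L2 (Tikhonov) penalty can be seen in the simplest scalar case — a toy sketch, not the 4DVar formulation itself: for min over x of (ax − b)² + λx², the minimiser is x = ab/(a² + λ), which stays bounded as a → 0 even though the unregularised solution b/a blows up.

```python
def tikhonov_scalar(a, b, lam):
    """Minimiser of (a*x - b)**2 + lam * x**2, obtained by setting
    the derivative 2*a*(a*x - b) + 2*lam*x to zero."""
    return a * b / (a * a + lam)

# Ill-posed case: a is tiny, so the naive solution b/a is huge and
# wildly sensitive to noise in b; the regularised one is tame.
naive = 1.0 / 1e-6                          # b/a with a = 1e-6, b = 1.0
regularised = tikhonov_scalar(1e-6, 1.0, 0.01)
```

The L1 and TV penalties discussed in the abstract replace the x² term with |x| or with penalties on differences of neighbouring components, trading this closed form for edge preservation.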
Abstract:
The results of coupled high resolution global models (CGCMs) over South America are discussed. HiGEM1.2 and HadGEM1.2 simulations, with horizontal resolutions of ~90 and 135 km, respectively, are compared. Precipitation estimates from CMAP (Climate Prediction Center—Merged Analysis of Precipitation), CPC (Climate Prediction Center) and GPCP (Global Precipitation Climatology Project) are used for validation. HiGEM1.2 and HadGEM1.2 simulated seasonal mean precipitation spatial patterns similar to CMAP. The positioning and migration of the Intertropical Convergence Zone and of the Pacific and Atlantic subtropical highs are correctly simulated by the models. In HiGEM1.2 and HadGEM1.2, the intensity and location of the South Atlantic Convergence Zone are in agreement with the observed dataset. The simulated annual cycles are in phase with rainfall estimates for most of the six regions considered. An important result is that HiGEM1.2 and HadGEM1.2 eliminate a common problem of coarse resolution CGCMs, namely the simulation of a semiannual cycle of precipitation due to the semiannual solar forcing. Comparatively, the use of high resolution in HiGEM1.2 reduces the dry biases in the central part of Brazil during austral winter and spring and, during most of the year, over an oceanic box in eastern Uruguay.
Abstract:
Data assimilation (DA) systems are evolving to meet the demands of convection-permitting models in the field of weather forecasting. On 19 April 2013 a special interest group meeting of the Royal Meteorological Society brought together UK researchers looking at different aspects of the data assimilation problem at high resolution, from theory to applications, and researchers creating our future high resolution observational networks. The meeting was chaired by Dr Sarah Dance of the University of Reading and Dr Cristina Charlton-Perez from the MetOffice@Reading. The purpose of the meeting was to help define the current state of high resolution data assimilation in the UK. The workshop assembled three main types of scientists: observational network specialists, operational numerical weather prediction researchers and those developing the fundamental mathematical theory behind data assimilation and the underlying models. These three working areas are intrinsically linked; therefore, a holistic view must be taken when discussing the potential to make advances in high resolution data assimilation.
Abstract:
Broad-scale phylogenetic analyses of the angiosperms and of the Asteridae have failed to confidently resolve relationships among the major lineages of the campanulid Asteridae (i.e., the euasterid II of APG II, 2003). To address this problem we assembled presently available sequences for a core set of 50 taxa, representing the diversity of the four largest lineages (Apiales, Aquifoliales, Asterales, Dipsacales) as well as the smaller "unplaced" groups (e.g., Bruniaceae, Paracryphiaceae, Columelliaceae). We constructed four data matrices for phylogenetic analysis: a chloroplast coding matrix (atpB, matK, ndhF, rbcL), a chloroplast non-coding matrix (rps16 intron, trnT-F region, trnV-atpE IGS), a combined chloroplast dataset (all seven chloroplast regions), and a combined genome matrix (seven chloroplast regions plus 18S and 26S rDNA). Bayesian analyses of these datasets using mixed substitution models produced often well-resolved and supported trees. Consistent with more weakly supported results from previous studies, our analyses support the monophyly of the four major clades and the relationships among them. Most importantly, Asterales are inferred to be sister to a clade containing Apiales and Dipsacales. Paracryphiaceae is consistently placed sister to the Dipsacales. However, the exact relationships of Bruniaceae, Columelliaceae, and an Escallonia clade depended upon the dataset. Areas of poor resolution in combined analyses may be partly explained by conflict between the coding and non-coding data partitions. We discuss the implications of these results for our understanding of campanulid phylogeny and evolution, paying special attention to how our findings bear on character evolution and biogeography in Dipsacales.
Abstract:
In this paper we show how to extend clausal temporal resolution to the ground eventuality fragment of monodic first-order temporal logic, which has recently been introduced by Hodkinson, Wolter and Zakharyaschev. While a finite Hilbert-like axiomatization of complete monodic first order temporal logic was developed by Wolter and Zakharyaschev, we propose a temporal resolution-based proof system which reduces the satisfiability problem for ground eventuality monodic first-order temporal formulae to the satisfiability problem for formulae of classical first-order logic.
Abstract:
We investigate and solve in the context of general relativity the apparent paradox which appears when bodies floating in a background fluid are set in relativistic motion. Suppose some macroscopic body, say, a submarine designed to lie just in equilibrium when it rests (totally) immersed in a certain background fluid. The puzzle arises when different observers are asked to describe what is expected to happen when the submarine is given some high velocity parallel to the direction of the fluid surface. On the one hand, according to observers at rest with the fluid, the submarine would contract and, thus, sink as a consequence of the density increase. On the other hand, mariners at rest with the submarine, using an analogous reasoning for the fluid elements, would reach the opposite conclusion. The general relativistic extension of Archimedes' law for moving bodies shows that the submarine sinks. As an extra bonus, this problem suggests a new gedankenexperiment for the generalized second law of thermodynamics.
Abstract:
One of the main goals of pest control is to maintain the density of the pest population at an equilibrium level below that of economic damage. To reach this goal, the optimal pest control problem was divided into two parts. In the first part, two optimal control functions were considered; these functions move the pest–natural enemy ecosystem to an equilibrium state below the economic injury level. In the second part, a single optimal control function stabilizes the ecosystem at this level, minimizing a functional that penalizes quadratic deviations from this level. The first problem was solved by applying Pontryagin's Maximum Principle; dynamic programming was used for the resolution of the second optimal pest control problem.
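The second stage — stabilising at a level while minimising quadratic deviations via dynamic programming — can be illustrated with the classic scalar linear-quadratic regulator, whose backward Riccati recursion is exactly the dynamic-programming solution. This is a generic sketch, not the paper's ecosystem model, and all numbers (A, B, Q, R, the initial deviation) are hypothetical:

```python
def lqr_gains(A, B, Q, R, horizon):
    """Finite-horizon scalar LQR by dynamic programming: backward
    Riccati recursion for the cost sum(Q*x**2 + R*u**2) with
    dynamics x' = A*x + B*u; returns gains in forward time order."""
    P, gains = Q, []
    for _ in range(horizon):
        K = (A * B * P) / (R + B * B * P)   # optimal gain at this stage
        P = Q + A * A * P - K * (A * B * P)  # cost-to-go update
        gains.append(K)
    return list(reversed(gains))

# Hypothetical unstable deviation dynamics (A > 1 means deviations grow
# without control); u = -K*x drives the deviation back toward zero.
A, B = 1.1, 1.0
gains = lqr_gains(A, B, Q=1.0, R=1.0, horizon=20)
x = 5.0  # initial deviation from the target equilibrium level
for K in gains:
    x = A * x - B * K * x
```

Here x stands for the deviation of the pest density from the target level, so driving x to zero is the stabilisation goal described in the abstract.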
Abstract:
The transmission network planning problem is a non-linear mixed-integer programming problem (NLIMP). Most of the algorithms used to solve this problem use a linear programming (LP) subroutine to solve the LP subproblems arising from the planning algorithm. Sometimes the resolution of these LPs represents a major computational effort. A particularity of these LPs is that, at the optimal solution, only some inequality constraints are binding. This work transforms the LP into an equivalent problem with only one equality constraint (the power flow equation) and many inequality constraints, and uses a dual simplex algorithm and a relaxation strategy to solve the LPs. The optimisation process starts with only the equality constraint and, at each step, the most violated constraint is added. The logic used is similar to a proposal for electric systems operation planning. The results show a higher performance of the algorithm when compared to primal simplex methods.
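The relaxation strategy described — solve with few constraints, then repeatedly add the most violated one — can be sketched on a toy one-variable LP. This illustrates the loop only, not the dual simplex method or the power-flow model of the paper; the solver and the numbers are hypothetical:

```python
def solve_relaxed(c, active):
    """Minimise c*x with x >= 0, subject only to the active
    constraints a*x >= b (assumes c > 0 and all a > 0, so the
    optimum sits on the tightest active bound)."""
    return max([b / a for a, b in active], default=0.0)

def relaxation_loop(c, constraints):
    """Solve the relaxed problem, then add the most violated
    constraint, until the solution is feasible for all of them."""
    active = []
    while True:
        x = solve_relaxed(c, active)
        # Violation of a*x >= b is b - a*x (positive if infeasible).
        worst = max(constraints, key=lambda ab: ab[1] - ab[0] * x)
        if worst[1] - worst[0] * x <= 1e-9:
            return x, len(active)   # feasible; done
        active.append(worst)

# Constraints x >= 2, 2x >= 3, x >= 5: only the binding one (x >= 5)
# ever needs to be added explicitly.
x, n_added = relaxation_loop(1.0, [(1.0, 2.0), (2.0, 3.0), (1.0, 5.0)])
```

The payoff mirrors the abstract's point: most inequality constraints are never binding, so the solver only ever works with a small active set.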
Abstract:
The precipitation radar (PR) on board the TRMM satellite was a milestone in large-scale rainfall observation capability. Stemming from TRMM, the new GPM (Global Precipitation Measurement) mission is to overcome some TRMM shortcomings, such as the high level of the PR MDZ. However, for major problems such as the PR horizontal resolution, significant improvements are not foreseeable. This paper investigates the impact of the TRMM PR resolution on the structure of tropical rainfall. The issue is approached by both gradient analysis and texture verification. Results indicate that the impact may be significant, affecting important applications such as NWP. © 2005 IEEE.
Abstract:
Aim: To report a possible case of fluoxetine-induced tremor treated as Parkinson's disease in an elderly female patient noncompliant with her pharmacotherapy, with uncontrolled hypertension, and using fluoxetine to treat depression. Presentation of Case: The patient complained of sleepiness in the morning, agitation, anxiety, insomnia and mental confusion. Her greatest concern was bilateral hand tremors which, in her view, became worse after biperiden was prescribed; therefore, she stopped taking it. The initial medication was: omeprazole, losartan, biperiden, fluoxetine, atenolol + chlorthalidone, acetylsalicylic acid, atorvastatin and diazepam. Pharmacotherapeutic follow-up was performed in order to check the necessity, safety and effectiveness of the treatment. Discussion: During the analysis of the pharmacotherapy, the patient showed uncontrolled blood pressure and had difficulty complying with the treatment. Thus, in view of the complaints expressed by the patient, our first hypothesis was a possible serotonin syndrome related to fluoxetine use. We proposed a change in the fluoxetine regimen and discontinuation of biperiden. As the tremors persisted, we suggested replacing fluoxetine with sertraline, since a possible fluoxetine-induced tremor could explain the complaint. This approach solved the drug-related problem identified. Conclusion: The tremors reported by the patient were identified as an iatrogenic event related to fluoxetine, which was resolved by management of the serotonin-reuptake inhibitor therapy.