966 results for One-pass scheme
Abstract:
This thesis deals with distance transforms, which are a fundamental issue in image processing and computer vision. In this thesis, two new distance transforms for gray-level images are presented. As a new application for distance transforms, they are applied to gray-level image compression. The new distance transforms are both extensions of the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer values (DTOCS) and a real-valued distance transform (EDTOCS) on gray-level images. Both distance transforms, the DTOCS and the EDTOCS, require only two passes over the gray-level image and are extremely simple to implement. Only two image buffers are needed: the original gray-level image and the binary image which defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass distance algorithm has to be applied to the image more than once, typically 3-10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance function algorithms (GRAYMAT, etc.) find the minimum path joining two points by the smallest sum of gray levels, or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way: it gives a weighted version of the chessboard distance map, in which the weights are not constant but are the gray-value differences of the original image. The difference between the DTOCS map and other distance transforms for gray-level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray-level differences in a different way: it propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Commonly, distance transforms are used for feature extraction in pattern recognition and learning; their use in image compression is very rare. This thesis introduces a new application area for distance transforms. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points that are considered fundamental for the reconstruction of the image, are selected from the gray-level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group compares the DTOCS distance to the binary-image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally. It is shown that the time complexity of the algorithms is independent of the number of control points, i.e. the compression ratio. A new morphological image decompression scheme, the 8 kernels' method, is also presented. Several decompressed images are presented. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
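For illustration, the two-pass propagation described above can be sketched as follows. This is a reconstruction, not the thesis code: the local step cost (absolute gray-level difference plus one per chessboard step) and the use of a separate output array are assumptions; the original algorithm stores its result in one of the two image buffers it already uses.

```python
import numpy as np

def dtocs_like(gray, region, max_rounds=10):
    """Two-pass chamfer-style propagation of a DTOCS-like distance map (sketch).

    gray   : 2-D array of gray levels (the original image)
    region : boolean mask, True where distances are computed,
             False marks the source set (distance 0)
    """
    h, w = gray.shape
    dist = np.where(region, np.inf, 0.0)

    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]   # neighbors already visited in a forward raster scan
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]       # neighbors already visited in a backward raster scan

    for _ in range(max_rounds):                   # complicated images may need several rounds
        changed = False
        for offsets, rows, cols in (
            (fwd, range(h), range(w)),
            (bwd, range(h - 1, -1, -1), range(w - 1, -1, -1)),
        ):
            for y in rows:
                for x in cols:
                    if not region[y, x]:
                        continue
                    for dy, dx in offsets:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            # assumed local cost: gray-level difference + 1 per chessboard step
                            cand = dist[ny, nx] + abs(float(gray[y, x]) - float(gray[ny, nx])) + 1.0
                            if cand < dist[y, x]:
                                dist[y, x] = cand
                                changed = True
        if not changed:
            break
    return dist
```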
Abstract:
Naomi Shinomiya Hell was the first researcher to investigate the physiological adaptations to a meal-feeding scheme (MFS) in Brazil. Over a period of 20 years, from 1979 to 1999, Naomi's group determined the physiological and metabolic adaptations induced by this feeding scheme in rats. The group showed the persistence of such adaptations even when the MFS is associated with moderate exercise training and with the performance of a session of intense physical effort. The metabolic changes induced by the meal-feeding training were distinguished from those caused by the effective fasting period. Naomi made an important contribution to the understanding of the MFS, but a lot still has to be done. One crucial question still remains to be satisfactorily answered: what is the ideal control for the MFS?
Abstract:
According to several studies, deviant behaviors show a certain association over time and across generations. Regardless of the angle of analysis, the finding that delinquency is linked from one generation to the next appears to be confirmed by several empirical studies. That said, this study tests the model suggesting an intergenerational link between the delinquent behaviors of adolescents and those of their parents. Using longitudinal data collected from 1,037 boys from disadvantaged neighborhoods of a large Canadian city, we examine the violent behaviors and theft behaviors of these adolescents between the ages of 11 and 17, while examining the effect of the mother's and father's criminal history. Various family-level mediating variables, such as inadequate parental supervision, erratic parental punishment and attachment to the family, are then added to the models to assess their explanatory contribution to this statistical association. By fitting two parametric multilevel models, one for each type of delinquency, the results show, on the one hand, that a link is observed between the boys' violent behaviors and the presence of a criminal record in the mother and, on the other hand, that the father's criminality is not associated with the boys' delinquent behaviors. Also, although parental supervision partly explains this link, the family factors included in the analysis do not fully explain the relationship between the mother's criminality and her sons' delinquent behaviors. Finally, although the statistical power of the data partially limits the general conclusions, we discuss the theoretical implications of these results.
Abstract:
With the exponential development of the Internet and its corollary, the expansion of online commerce, the protection of the cyberconsumer has become a pressing issue in the 21st century. Indeed, in this virtual world where new methods and technologies are employed, and even more so abusive clauses in unilateral contracts, a feeling of mistrust inevitably sets in between the cyberconsumer and the online merchant. To restore this trust and foster Internet commerce, national and international laws and community norms have been adopted in order to frame the contracting process rigorously. However, because foreign elements are frequently present in online consumer contracts, the fundamental question that naturally comes to mind for anyone studying the subject today is whether the classical rules of private international law have been overtaken by the very rapid development of this type of commerce or whether, on the contrary, they are suited to it. One might further ask whether the legal framework offered to the cyberconsumer can provide the same level of protection as the one enjoyed in traditional commerce. This study attempts to provide some answers by analysing, first, the internal substantive consumer protection law of the European, French, Canadian and Quebec legal systems in order to scrutinize the areas of conflict likely to exist over the life cycle of such a contract. In the second part, it shows that the classical methods for resolving conflicts of jurisdiction and conflicts of laws in private international law, although they require adaptation, are indeed applicable to the Internet context, with the protection of the cyberconsumer as the overriding objective. The assessment of the analysis and of the criteria of these conflict rules leads to an examination of the new measures that are required.
Abstract:
Adolescent idiopathic scoliosis (AIS) is a three-dimensional deformity of the spine. Its treatment includes observation, bracing to limit progression, or surgery to correct the skeletal deformity and halt its progression. Surgical treatment remains controversial, both in its indications and in the choice of procedure. Despite the existence of classifications to guide the treatment of AIS, intra- and inter-observer variability in the operative strategy has been described in the literature. This variability is further accentuated by the evolution of surgical techniques and of the available instrumentation. Advances in technology and their integration into the medical field have led to the use of artificial intelligence algorithms to assist the classification and three-dimensional assessment of scoliosis. Some algorithms have been shown to be effective in reducing variability in scoliosis classification and in guiding treatment. The general objective of this thesis is to develop an application that uses artificial intelligence tools to integrate a new patient's data with the evidence available in the literature in order to guide the surgical treatment of AIS. To this end, a literature review of existing applications for the assessment of AIS was undertaken to gather the elements needed to build an application that would be effective and accepted in the clinical setting. This review made us realize that the presence of a "black box" in the applications developed so far is a limitation to clinical integration, where evidence-based justification is essential. In a first study, we developed a decision tree for the classification of idiopathic scoliosis based on the Lenke classification, which is the most commonly used today but has been criticized for its complexity and its inter- and intra-observer variability. This decision tree was shown to increase classification accuracy in proportion to the time spent classifying, independently of the user's level of knowledge of AIS. In a second study, a surgical strategy algorithm based on rules extracted from the literature was developed to guide surgeons in selecting the approach and the fusion levels for AIS. When applied to a large database of 1,556 AIS cases, this algorithm was able to propose an operative strategy similar to that of an expert surgeon in nearly 70% of cases. This study confirmed that valid operative strategies can be derived with a decision tree using rules extracted from the literature. In a third study, the classification of 1,776 AIS patients with a Kohonen map, a type of neural network, showed that there are typical scolioses (single-curve or double thoracic curves) for which the variability of the surgical treatment deviates little from the recommendations of the Lenke classification, whereas scolioses with multiple curves, or those bordering two typical curve groups, showed the most variation in operative strategy. Finally, a software platform integrating each of the above studies was developed.
This software interface allows the entry of radiological data for a scoliotic patient, classifies the AIS with the classification decision tree, and suggests a surgical approach based on the operative strategy decision tree. An analysis of the postoperative correction obtained shows a trend, although not statistically significant, toward better balance in patients operated on according to the strategy recommended by the software platform than in those who received a different treatment. The studies presented in this thesis show that artificial intelligence algorithms for the classification of AIS and the development of operative strategies can be integrated into a software platform and could assist surgeons in their preoperative planning.
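As an illustration of the Kohonen-map clustering mentioned in the third study, here is a minimal self-organizing map trained on curve-descriptor vectors. The map size, learning schedule and choice of descriptors are assumptions for the sketch, not the parameters used in the thesis.

```python
import numpy as np

def train_som(data, rows=6, cols=6, n_iter=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Kohonen self-organizing map (one sample per update step).

    data : (n_samples, n_features) array of curve descriptors
           (e.g. regional Cobb angles -- the feature choice is assumed).
    Returns the trained weight grid of shape (rows, cols, n_features);
    each patient is then assigned to its best-matching unit (cluster).
    """
    rng = np.random.default_rng(seed)
    n, d = data.shape
    weights = rng.normal(size=(rows, cols, d))
    grid_y, grid_x = np.mgrid[0:rows, 0:cols]

    for t in range(n_iter):
        x = data[rng.integers(n)]
        # best-matching unit: grid node whose weight vector is closest to x
        d2 = ((weights - x) ** 2).sum(axis=2)
        by, bx = np.unravel_index(np.argmin(d2), d2.shape)
        # linearly decaying learning rate and neighborhood radius
        frac = t / n_iter
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 0.5
        # Gaussian neighborhood pulls nearby nodes toward the sample
        g = np.exp(-((grid_y - by) ** 2 + (grid_x - bx) ** 2) / (2.0 * sigma ** 2))
        weights += lr * g[:, :, None] * (x - weights)
    return weights
```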
Abstract:
In recent years, protection of information in digital form has become more important. Image and video encryption has applications in various fields including Internet communications, multimedia systems, medical imaging, telemedicine and military communications. During storage as well as in transmission, multimedia information is exposed to unauthorized entities unless adequate security measures are built around the information system. There are many kinds of security threats during the transmission of vital classified information through insecure communication channels. Various encryption schemes are available today to deal with information security issues. Data encryption is widely used to protect sensitive data against the security threat in the form of an "attack on confidentiality". Secure transmission of information through insecure communication channels also requires encryption at the sending side and decryption at the receiving side. Encryption of large text messages and images takes time before they can be transmitted, causing considerable delay in successive transmission of information in real time. In order to minimize the latency, efficient encryption algorithms are needed. An encryption procedure with adequate security and high throughput is sought in multimedia encryption applications. Traditional symmetric key block ciphers like the Data Encryption Standard (DES), the Advanced Encryption Standard (AES) and the Escrowed Encryption Standard (EES) are not efficient when the data size is large. With the availability of fast computing tools and communication networks at relatively lower costs today, these encryption standards appear to be not as fast as one would like. High-throughput encryption and decryption are becoming increasingly important in the area of high-speed networking, and fast encryption algorithms are needed for high-speed secure communication of multimedia data. It has been shown that public key algorithms are not a substitute for symmetric-key algorithms: public key algorithms are slow, whereas symmetric key algorithms generally run much faster, and public key systems are vulnerable to chosen-plaintext attack. In this research work, a fast symmetric key encryption scheme, entitled "Matrix Array Symmetric Key (MASK) encryption" and based on matrix and array manipulations, has been conceived and developed. Fast conversion has been achieved with the matrix table look-up substitution, array-based transposition and circular shift operations performed in the algorithm. MASK encryption is a new concept in symmetric key cryptography. It employs a matrix and array manipulation technique using secret information and data values. It is a block cipher operating on plaintext message (or image) blocks of 128 bits, using a secret key of size 128 bits and producing ciphertext message (or cipher image) blocks of the same size. This cipher has two advantages over traditional ciphers. First, the encryption and decryption procedures are much simpler and, consequently, much faster. Second, the key avalanche effect produced in the ciphertext output is better than that of AES.
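To make the named building blocks concrete, the toy round below combines a key-driven table look-up substitution, a 4x4 array transposition and a circular shift on a 128-bit block. It is purely illustrative and is not the MASK cipher itself; the substitution-table construction, the rotation amount and the single-round structure are assumptions.

```python
import os

BLOCK = 16  # a 128-bit block handled as 16 bytes

def make_sbox(key):
    """Key-dependent byte substitution table (illustrative construction only)."""
    sbox = list(range(256))
    j = 0
    for i in range(256):                      # key-driven shuffle of the identity table
        j = (j + sbox[i] + key[i % len(key)]) % 256
        sbox[i], sbox[j] = sbox[j], sbox[i]
    return sbox

def toy_round(block, sbox, rot):
    """One round: table look-up substitution, 4x4 transposition, circular shift."""
    sub = [sbox[b] for b in block]                        # byte-wise substitution
    matrix = [sub[r * 4:(r + 1) * 4] for r in range(4)]   # view the 16 bytes as a 4x4 matrix
    transposed = [matrix[c][r] for r in range(4) for c in range(4)]
    return transposed[rot:] + transposed[:rot]            # circular shift of the byte array

key = os.urandom(16)                                      # 128-bit secret key
plaintext = list(b"0123456789abcdef")                     # one 128-bit block
ciphertext = toy_round(plaintext, make_sbox(key), rot=key[0] % BLOCK)
```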
Abstract:
Thomas-Fermi theory is developed to evaluate nuclear matrix elements averaged on the energy shell, on the basis of independent-particle Hamiltonians. One- and two-body matrix elements are compared with the quantal results, and it is demonstrated that the semiclassical matrix elements, as a function of energy, pass well through the average of the scattered quantum values. For the one-body matrix elements it is shown how the Thomas-Fermi approach can be projected onto good parity and also onto good angular momentum. For the two-body case, the pairing matrix elements are considered explicitly.
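For orientation, the quantity being approximated can be written generically as an energy-shell average, which the Thomas-Fermi approach replaces by its smooth phase-space counterpart; the notation below is generic and is an assumption about the form used, not taken from the paper.

```latex
% Energy-shell average of a one-body operator \hat{O} for an
% independent-particle Hamiltonian \hat{H} (generic notation):
\[
  \overline{O}(E)
  = \frac{\operatorname{Tr}\!\left[\hat{O}\,\delta(E-\hat{H})\right]}
         {\operatorname{Tr}\!\left[\delta(E-\hat{H})\right]}
  \;\approx\;
  \frac{\int \frac{d^{3}r\,d^{3}p}{(2\pi\hbar)^{3}}\,
        O_{\mathrm{cl}}(\mathbf{r},\mathbf{p})\,
        \delta\!\bigl(E - H_{\mathrm{cl}}(\mathbf{r},\mathbf{p})\bigr)}
       {\int \frac{d^{3}r\,d^{3}p}{(2\pi\hbar)^{3}}\,
        \delta\!\bigl(E - H_{\mathrm{cl}}(\mathbf{r},\mathbf{p})\bigr)},
\]
% where the right-hand side is the Thomas-Fermi (classical phase-space)
% evaluation of the same average.
```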
Abstract:
Three different drying methods, a forced-convection double-pass solar drier (DPSD), a typical cabinet-type natural-convection solar drier (CD) and traditional open-sun drying (OSD), were used for drying bamboo shoots in central Vietnam. During drying, operational parameters such as drying temperature, relative humidity, air velocity, insolation and water evaporation were recorded hourly. The mean drying temperature and relative humidity in the drying chamber were 55.2°C and 23.7% for the DPSD, 47.5°C and 37.6% for the CD, and 36.2°C and 47.8% for OSD. The mean global radiation during all experimental runs was 670 W m^−2. The results also show that the fastest drying occurred in the DPSD, where the falling-rate period was reached after 7 hours, compared with 16 hours for OSD. The overall drying efficiency was 23.11%, 15.83% and 9.73% for the DPSD, CD and OSD, respectively. Although the construction cost of the DPSD was significantly higher than that of the CD, the drying cost per kilogram of bamboo shoots was 42.8% lower for the DPSD than for the CD. The double-pass solar drier was found to be technically and economically suitable for drying bamboo shoots under the specific conditions of central Vietnam, and in all cases its use led to a considerable reduction in drying time compared with traditional open-sun drying.
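The overall drying efficiency quoted above is conventionally defined as the ratio of the energy used to evaporate moisture to the solar energy received by the collector over the drying period; the formula below is this standard definition and is stated as an assumption about how the figures were computed.

```latex
% Overall drying efficiency (standard definition, assumed here):
%   m_w - mass of water evaporated,  L_v - latent heat of vaporization,
%   I   - mean global radiation,     A_c - collector aperture area,
%   t   - drying time
\[
  \eta_{d} = \frac{m_{w}\, L_{v}}{I\, A_{c}\, t}
\]
```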
Abstract:
Presentation given at the Al-Azhar Engineering First Conference, AEC'89, December 9-12, 1989, Cairo, Egypt. The paper presented at AEC'89 suggests an infinite storage scheme divided into one volume which is online and an arbitrary number of off-line volumes, arranged into a linear chain, which hold records that have not been accessed recently. The online volume holds the records in sorted order (e.g. as a B-tree) and contains the shortest prefixes of the keys of records already pushed offline. As new records enter, older ones are retired to the volume that will go offline next. Statistical arguments are given for the rate at which an off-line volume needs to be fetched to reload a record that had been retired earlier. The rate depends on the distribution of access probabilities as a function of time. Applications include medical records, production records and other data which need to be kept for a long time for legal reasons.
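A minimal sketch of the bookkeeping this scheme implies: the online volume keeps full records in sorted order plus the shortest distinguishing prefix of each retired key, pointing at the offline volume that holds the full record. The class, the retirement policy (oldest insertion first) and the prefix rule are illustrative assumptions, not details from the paper.

```python
import bisect

class InfiniteStore:
    """Sketch of the one-online / many-offline volume scheme (illustrative only)."""

    def __init__(self, capacity):
        self.capacity = capacity      # max records kept online
        self.keys = []                # sorted keys of online records (stand-in for a B-tree)
        self.records = {}             # key -> record, online
        self.order = []               # insertion order, used here as the retirement policy
        self.retired = {}             # shortest key prefix -> number of the offline volume
        self.next_volume = 1

    def insert(self, key, record):
        if key not in self.records:
            bisect.insort(self.keys, key)
            self.order.append(key)
        self.records[key] = record
        while len(self.keys) > self.capacity:
            self._retire(self.order.pop(0))   # oldest record goes to the volume going offline next

    def _retire(self, key):
        self.keys.remove(key)
        self.records.pop(key)
        self.retired[self._shortest_prefix(key)] = self.next_volume
        self.next_volume += 1

    def _shortest_prefix(self, key):
        # Shortest prefix of the retired key not shared with any remaining online key.
        for n in range(1, len(key) + 1):
            if not any(k.startswith(key[:n]) for k in self.keys):
                return key[:n]
        return key

    def lookup(self, key):
        if key in self.records:
            return ("online", self.records[key])
        for prefix, volume in self.retired.items():
            if key.startswith(prefix):
                return ("reload from offline volume", volume)
        return ("not found", None)
```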
Abstract:
Financial protection is one of the objectives of health systems: it protects poor households from falling into poverty as a result of health-care-related expenses. Expanding prepayment schemes to the poor is difficult in developing countries because labor is largely informal. Providing health care free at the point of service does not adequately target spending on the poorest, but occupation- or community-based schemes also have inherent limitations in achieving universal coverage. Colombia adopted a government-subsidized health insurance scheme (SHI) strategy. The political debate about increasing SHI enrollment needs evidence on the effectiveness of this scheme in providing financial protection. This study runs a four-part model to estimate the effect of SHI on the out-of-pocket expenses of the currently uninsured poor if they were enrolled in the SHI. The results show a 43% reduction in expenses at the Bogotá level and a 50% reduction at the national level, which confirms the effectiveness of SHI as a financial protection tool.
Abstract:
The energy decomposition scheme proposed in a recent paper has been realized by performing numerical integrations. The sample calculations carried out for some simple molecules show excellent agreement with the chemical picture of molecules, indicating that such an energy decomposition analysis can be useful from the point of view of connecting quantum mechanics with genuine chemical concepts.
Abstract:
A conceptually new approach is introduced for the decomposition of the molecular energy calculated at the density functional level of theory into a sum of one- and two-atomic energy components, and is realized in the "fuzzy atoms" framework. (Fuzzy atoms mean that the three-dimensional physical space is divided into atomic regions having no sharp boundaries but exhibiting a continuous transition from one to another.) The new scheme uses the new concept of "bond order density" to calculate the diatomic exchange energy components, and gives values unexpectedly close to those calculated with the exact (Hartree-Fock) exchange for the same Kohn-Sham orbitals.
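A hedged sketch of the general form such a decomposition takes: fuzzy-atom weight functions partition space smoothly, and the total energy is resolved into one- and two-atomic terms. The notation is generic and schematic, not the paper's exact working equations.

```latex
% Fuzzy-atom weight functions partition space without sharp boundaries:
%   w_A(r) >= 0  and  \sum_A w_A(r) = 1  at every point r.
\[
  E = \sum_{A} E_{A} + \sum_{A<B} E_{AB},
\]
% with the components obtained, schematically, by inserting the weights
% into the energy integrals, e.g.
\[
  E_{A} \sim \int w_{A}(\mathbf{r})\,\varepsilon_{1}(\mathbf{r})\,d^{3}r ,
  \qquad
  E_{AB} \sim \iint w_{A}(\mathbf{r}_{1})\,w_{B}(\mathbf{r}_{2})\,
              \varepsilon_{2}(\mathbf{r}_{1},\mathbf{r}_{2})\,
              d^{3}r_{1}\,d^{3}r_{2},
\]
% where \varepsilon_1 and \varepsilon_2 denote one- and two-point energy
% densities whose precise form depends on the scheme.
```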
Abstract:
A one-dimensional water column model using the Mellor and Yamada level 2.5 parameterization of vertical turbulent fluxes is presented. The model equations are discretized with a mixed finite element scheme. Details of the finite element discrete equations are given and adaptive mesh refinement strategies are presented. The refinement criterion is an "a posteriori" error estimator based on stratification, shear and distance to the surface. The model's performance is assessed by studying the stress-driven penetration of a turbulent layer into a stratified fluid. This example illustrates the ability of the presented model to follow some internal structures of the flow and paves the way for truly generalized vertical coordinates. (c) 2005 Elsevier Ltd. All rights reserved.
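As an illustration of an a posteriori refinement criterion of the kind described (combining stratification, shear and distance to the surface), the sketch below scores each layer interface of a one-dimensional column; the weighting, normalization and the 10 m surface scale are assumptions, not the paper's estimator.

```python
import numpy as np

def refinement_indicator(z, rho, u, v, w_strat=1.0, w_shear=1.0, w_surf=1.0):
    """Per-interface refinement score for a 1-D water column mesh (illustrative).

    z        : node depths (negative downward, 0 at the surface), shape (n,)
    rho, u, v: density and horizontal velocity components at the nodes
    Returns one score per layer interface; the layers with the largest
    scores are the candidates for refinement.
    """
    dz = np.diff(z)
    n2 = -9.81 / 1025.0 * np.diff(rho) / dz          # buoyancy frequency^2 (stratification)
    shear2 = (np.diff(u) / dz) ** 2 + (np.diff(v) / dz) ** 2
    z_mid = 0.5 * (z[:-1] + z[1:])
    surf = np.exp(z_mid / 10.0)                      # extra weight near the surface (10 m scale assumed)

    def norm(x):
        # normalize each indicator so that no single term dominates the sum
        m = np.max(np.abs(x))
        return np.abs(x) / m if m > 0 else x * 0.0

    return w_strat * norm(n2) + w_shear * norm(shear2) + w_surf * surf
```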
Abstract:
The white paper ‘Pharmacy in England’ advocates establishing a new pharmacy regulator, building leadership and integrating undergraduate education.[1] Students must morph into competent pharmacists with the skills, expertise and confidence to lead the profession to 2020 and beyond.[2] One way individuals are encouraged to ‘professionalise’ is through participation in personal/professional development schemes. The British Pharmaceutical Students’ Association (BPSA) and the College of Pharmacy Practice have operated a professional development certificate (PDC) scheme since 2001. The scheme rewards students with a joint certificate for evidence of participation in five accredited activities in one academic year. Although the scheme is relevant to the development of students, less than 2% of BPSA members take part annually. We wanted to understand the reasons for this low uptake. Our primary objectives were to examine the portrayal of the scheme and to investigate what it signifies to individuals. We describe our attempts to apply social marketing techniques[3] to the PDC, and we use ‘logical levels of change’[4] to highlight a paradox with personal identity.
Abstract:
In this paper, we apply one-list capture-recapture models to estimate the number of scrapie-affected holdings in Great Britain. We applied this technique to the Compulsory Scrapie Flocks Scheme dataset, in which cases from all the surveillance sources monitoring the presence of scrapie in Great Britain (the abattoir survey, the fallen stock survey and the statutory reporting of clinical cases) are gathered. Consequently, the estimates of prevalence obtained from this scheme should be comprehensive and cover all the different presentations of the disease captured individually by the surveillance sources. Two estimators were applied under the one-list approach: the Zelterman estimator and Chao's lower bound estimator. Our results could only inform with confidence the size of the scrapie-affected holding population with clinical disease; this was around 350 holdings in Great Britain for the period under study, April 2005 to April 2006. Our models allowed stratification by surveillance source and the input of covariate information (holding size and country of origin). None of the covariates appeared to inform the model significantly. Crown Copyright (C) 2008 Published by Elsevier B.V. All rights reserved.
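For concreteness, the two one-list estimators named above have simple closed forms in terms of the frequency counts f1 (holdings detected exactly once) and f2 (holdings detected exactly twice). The sketch below implements the usual textbook formulas, assumed to match those used in the study; the counts shown are hypothetical.

```python
import math

def chao_lower_bound(n_observed, f1, f2):
    """Chao's lower-bound estimator of the total number of affected holdings."""
    return n_observed + f1 ** 2 / (2.0 * f2)

def zelterman(n_observed, f1, f2):
    """Zelterman's estimator: a robust Poisson rate from f1 and f2,
    then a correction for the unobserved zero-count class."""
    lam = 2.0 * f2 / f1
    return n_observed / (1.0 - math.exp(-lam))

# Hypothetical frequency counts, for illustration only (not the study's data):
n_observed, f1, f2 = 260, 180, 50
print(round(chao_lower_bound(n_observed, f1, f2)), round(zelterman(n_observed, f1, f2)))
```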