988 results for Code set


Relevance:

100.00%

Abstract:

The ICF is a universal tool developed by the WHO for classifying functioning and disability through a global view of what conditions an individual's performance in carrying out activities and participating in occupations. The philosophy of the ICF and its components interrelate with the essence of occupational therapy and align with the profession's models. UCCI (integrated continuing care units) are a present-day reality in Portugal, and the occupational therapist is one of the mandatory professionals on the multidisciplinary team of these units. Given the international relevance of the ICF, its connection with occupational therapy and the need to make the ICF operational in daily clinical practice, since it is a complex and extensive tool, the aim of this study is to contribute to the construction of an ICF code set for occupational therapists working in UCCI, specifically in UC, UMDR and ULDM units. The investigation used the Delphi technique over two rounds. The first round had 37 participating occupational therapists experienced in the area, as they work in UCCI; the second round had 20 participants. A total of 96 categories reached consensus in the final Delphi round, and this list constitutes a proposed code set for UCCI. Across unit typologies, 69 categories reached consensus for UC, 91 for UMDR and 41 for ULDM. It was concluded that the creation of code sets may be an asset in the multidisciplinary-team context of UCCI, as a way of making the ICF operational in daily clinical practice.

Relevance:

100.00%

Abstract:

Understanding a child's functioning is a persistent challenge in health and education settings. In an attempt to overcome this challenge, in 2007 the World Health Organization developed the International Classification of Functioning, Disability and Health for Children and Youth (ICF-CY) as the first universal classification system for documenting children's health and functioning. Although the ICF-CY is not an assessment or intervention instrument, it can nevertheless serve as a framework for developing tools adapted to the needs of its users. Considering that, in the school context, handwriting is among the activities most required for a child's full participation, it seems pertinent to define a set of codes intended to characterize a child's functioning profile with respect to handwriting. The aim of this study was therefore to develop a preliminary set of codes based on the ICF-CY that may come to constitute a code set for handwriting. Given the complexity of the topic, and since the goal was to reach consensus among experts on which ICF-CY categories should be considered, the Delphi technique was used. The choice of methodology followed the procedures adopted by the ICF Core Set project. Of eighteen professionals contacted, responses were obtained from seven occupational therapists with experience in paediatrics, who participated in all rounds. In total, three rounds of questionnaires were carried out to reach consensus, with a previously defined agreement level of 70%. The study yielded a preliminary set of 54 ICF-CY categories (16 second-level categories, 14 third-level categories and one fourth-level category), of which 31 are body functions categories, one is a body structures category, 12 are activities and participation categories and 10 are environmental factors categories. This study is a first step towards the development of an ICF-CY-based code set for handwriting, and further research on the development and validation of this code set is clearly needed.
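
As an illustration of the consensus rule described above, the sketch below tallies which ICF-CY categories reach a predefined 70% agreement level across an expert panel. The category codes, vote data and function names are hypothetical, made up for the example rather than taken from the study.

```python
# Minimal sketch of a Delphi consensus tally: a category is retained
# when at least 70% of the panel rates it as relevant (illustrative data).
CONSENSUS_LEVEL = 0.70  # agreement level defined a priori in the study

# ratings[category] -> one vote per expert in the round (hypothetical)
ratings = {
    "b760": [True, True, False, True, True, True, True],
    "d170": [True, True, True, True, True, False, True],
    "e130": [True, False, False, True, False, True, False],
}

def consensus(ratings, level=CONSENSUS_LEVEL):
    """Return the categories on which the panel reached consensus."""
    return [code for code, votes in ratings.items()
            if sum(votes) / len(votes) >= level]

print(consensus(ratings))  # categories kept for the preliminary code set
```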

Relevance:

100.00%

Abstract:

In this paper, we perform a thorough analysis of a spectral phase-encoded time spreading optical code division multiple access (SPECTS-OCDMA) system based on Walsh-Hadamard (W-H) codes, aiming not only at finding optimal code-set selections but also at assessing the system's loss of security due to crosstalk. We prove that an inadequate choice of codes can make the crosstalk between active users large enough for the data of the user of interest to be detected by another user. The proposed algorithm for code optimization targets code sets that produce the minimum bit error rate (BER) among all codes for a specific number of simultaneous users. This methodology allows us to find optimal code sets for any OCDMA system, regardless of the code family used and the number of active users. This procedure is crucial for circumventing the unexpected lack of security due to crosstalk. We also show that a SPECTS-OCDMA system based on W-H 32 (64) fundamentally limits the number of simultaneous users to 4 (8) with no security violation due to crosstalk. More importantly, we prove that only a small fraction of the available code sets is actually immune to crosstalk with acceptable BER (< 10⁻⁹): approximately 0.5% for W-H 32 with four simultaneous users, and about 1×10⁻⁴ % for W-H 64 with eight simultaneous users.
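
The search component of this methodology can be sketched as follows. The paper ranks code sets by the simulated BER of the SPECTS-OCDMA link; the illustration below instead scores each candidate set by the worst-case circular cross-correlation between its codewords, a crude crosstalk proxy, so the scoring rule and all names are assumptions for the example.

```python
# Exhaustive search over 4-codeword subsets of Walsh-Hadamard 32,
# ranked by a crosstalk proxy (worst |circular cross-correlation|).
import itertools
import numpy as np

def walsh_hadamard(n):
    """Sylvester construction of an n x n +/-1 Hadamard matrix (n a power of 2)."""
    h = np.array([[1]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

n, k = 32, 4
spectra = np.fft.fft(walsh_hadamard(n), axis=1)

# worst-case |circular cross-correlation| for every codeword pair
pair_xt = np.zeros((n, n))
for i, j in itertools.combinations(range(n), 2):
    xcorr = np.fft.ifft(spectra[i] * np.conj(spectra[j]))
    pair_xt[i, j] = pair_xt[j, i] = np.abs(xcorr).max()

def set_score(idx):
    """Score a candidate code set by its worst pairwise crosstalk."""
    return max(pair_xt[i, j] for i, j in itertools.combinations(idx, 2))

best = min(itertools.combinations(range(1, n), k), key=set_score)  # skip all-ones row
print("least-crosstalk code set (by proxy):", best, "score:", round(set_score(best), 3))
```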

Relevance:

60.00%

Abstract:

A novel approach for both lossless and lossy compression of monochrome images using Boolean minimization is proposed. The image is split into bit planes, each bit plane is divided into windows or blocks of variable size, and each block is transformed into a Boolean switching function in cubical form, treating the pixel values as the output of the function. Compression is performed by minimizing these switching functions using ESPRESSO, a cube-based two-level function minimizer. The minimized cubes are encoded using a code set that satisfies the prefix property. Our lossless compression technique uses linear prediction as a preprocessing step and achieves a compression ratio comparable to that of the JPEG lossless technique. Our lossy compression technique reduces the number of bit planes as a preprocessing step, which incurs minimal loss of image information; the bit planes that remain are compressed with our Boolean-minimization-based lossless technique. Qualitatively, the original and lossy images are visually indistinguishable, and the mean square error is kept low. For mean square error values close to those of JPEG lossy compression, our method gives a better compression ratio. Compression is relatively slow, while decompression time is comparable to that of JPEG.
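
A minimal sketch of the bit-plane preprocessing common to both paths, assuming 8-bit grayscale input; the block partitioning and the ESPRESSO minimization itself (an external tool) are not shown.

```python
import numpy as np

def to_bit_planes(image):
    """Split an 8-bit grayscale image into 8 binary planes (MSB first)."""
    return [(image >> b) & 1 for b in range(7, -1, -1)]

def from_bit_planes(planes):
    """Reassemble an image, optionally from fewer (MSB-first) planes."""
    out = np.zeros_like(planes[0], dtype=np.uint16)
    for b, plane in zip(range(7, 7 - len(planes), -1), planes):
        out |= plane.astype(np.uint16) << b
    return out.astype(np.uint8)

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
planes = to_bit_planes(img)
assert np.array_equal(from_bit_planes(planes), img)  # lossless round trip
lossy = from_bit_planes(planes[:5])                  # drop the 3 LSB planes
print("max error after dropping 3 planes:",
      int(np.abs(img.astype(int) - lossy.astype(int)).max()))
```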

Relevance:

40.00%

Abstract:

The code STATFLUX, implementing a new and simple statistical procedure for the calculation of transfer coefficients in radionuclide transport to animals and plants, is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. Flow parameters were estimated by employing two different least-squares procedures: the derivative and Gauss-Marquardt methods, with the available experimental data of radionuclide concentrations as the input functions of time. The solution of the inverse problem, which relates a given set of flow parameters to the time evolution of the concentration functions, is achieved via a Monte Carlo simulation procedure.

Program summary
Title of program: STATFLUX
Catalogue identifier: ADYS_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYS_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Computer for which the program is designed and others on which it has been tested: micro-computer with Intel Pentium III, 3.0 GHz
Installation: Laboratory of Linear Accelerator, Department of Experimental Physics, University of São Paulo, Brazil
Operating system: Windows 2000 and Windows XP
Programming language used: Fortran-77 as implemented in Microsoft Fortran 4.0. NOTE: Microsoft Fortran includes non-standard features which are used in this program. Standard Fortran compilers such as g77, f77, ifort and NAG95 are not able to compile the code, and therefore it has not been possible for the CPC Program Library to test the program.
Memory required to execute with typical data: 8 Mbytes of RAM and 100 MB of hard disk
No. of bits in a word: 16
No. of lines in distributed program, including test data, etc.: 6912
No. of bytes in distributed program, including test data, etc.: 229 541
Distribution format: tar.gz

Nature of the physical problem: the investigation of transport mechanisms for radioactive substances through environmental pathways is very important for the radiological protection of populations. One such pathway, associated with the food chain, is the grass-animal-man sequence. The distribution of trace elements in humans and laboratory animals has been intensively studied over the past 60 years [R.C. Pendlenton, C.W. Mays, R.D. Lloyd, A.L. Brooks, Differential accumulation of iodine-131 from local fallout in people and milk, Health Phys. 9 (1963) 1253-1262]. In addition, investigations on the incidence of cancer in humans, and a possible causal relationship to radioactive fallout, have been undertaken [E.S. Weiss, M.L. Rallison, W.T. London, W.T. Carlyle Thompson, Thyroid nodularity in southwestern Utah school children exposed to fallout radiation, Amer. J. Public Health 61 (1971) 241-249; M.L. Rallison, B.M. Dobyns, F.R. Keating, J.E. Rall, F.H. Tyler, Thyroid diseases in children, Amer. J. Med. 56 (1974) 457-463; J.L. Lyon, M.R. Klauber, J.W. Gardner, K.S. Udall, Childhood leukemia associated with fallout from nuclear testing, N. Engl. J. Med. 300 (1979) 397-402]. Among the pathways by which radionuclides enter the human (or animal) body, ingestion is the most important because it is closely related to life-long alimentary (or dietary) habits. Radionuclides that are able to enter living cells by metabolic or other processes give rise to localized doses which can be very high. The evaluation of these internally localized doses is of paramount importance for the assessment of radiobiological risks and radiological protection. The time behavior of trace concentration in organs is the principal input for the prediction of internal doses after acute or chronic exposure. The general multiple-compartment model (GMCM) is a powerful and widely accepted method for biokinetic studies, allowing the calculation of the concentration of trace elements in organs as a function of time when the flow parameters of the model are known. However, few biokinetic data exist in the literature, and the determination of flow and transfer parameters by statistical fitting for each system is an open problem.

Restrictions on the complexity of the problem: this version of the code works with the constant-volume approximation, which is valid for many situations where the biological half-life of a trace element is shorter than the volume rise time. Another restriction relates to the central-flux model: the code assumes that there is one central compartment (e.g., blood) that connects the flow to all other compartments; flow between the other compartments is not included.

Typical running time: depends on the calculation chosen. Using the derivative method the time is very short (a few minutes) for any number of compartments. When the Gauss-Marquardt iterative method is used, the calculation time can be approximately 5-6 hours when about 15 compartments are considered. (C) 2006 Elsevier B.V. All rights reserved.
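
To make the central-flux model concrete, here is a minimal sketch of the forward problem (STATFLUX itself is written in Fortran-77 and additionally fits the rate constants to measured concentrations): one central compartment exchanges a trace element with two peripheral compartments under a linear ODE system dC/dt = KC, with made-up rate constants.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_out = np.array([0.50, 0.20])  # central -> organ i rate (1/day, made up)
k_in = np.array([0.10, 0.05])   # organ i -> central rate (1/day, made up)

def rhs(t, c):
    """dC/dt = K C for one central compartment plus two organs."""
    central, organs = c[0], c[1:]
    d_organs = k_out * central - k_in * organs
    d_central = (k_in * organs).sum() - k_out.sum() * central
    return np.concatenate(([d_central], d_organs))

sol = solve_ivp(rhs, (0.0, 30.0), y0=[1.0, 0.0, 0.0],
                t_eval=np.linspace(0.0, 30.0, 7))
for t, c in zip(sol.t, sol.y.T):
    print(f"t={t:5.1f} d  blood={c[0]:.3f}  organ1={c[1]:.3f}  organ2={c[2]:.3f}")
```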

Relevance:

30.00%

Abstract:

Faces are complex patterns that often differ in only subtle ways. Face recognition algorithms have difficulty coping with differences in lighting, cameras, pose, expression, etc. We propose a novel approach to face recognition based on a new feature extraction method called fractal image-set encoding. This feature extraction method is a specialized fractal image coding technique that makes fractal codes more suitable for object and face recognition. A fractal code of a gray-scale image can be divided into two parts: geometrical parameters and luminance parameters. We show that fractal codes for an image are not unique and that the set of fractal parameters can be changed without significant change in the quality of the reconstructed image. Fractal image-set coding keeps the geometrical parameters the same for all images in the database; differences between images are captured in the non-geometrical, or luminance, parameters, which are faster to compute. Results on a subset of the XM2VTS database are presented.
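
A hedged sketch of the luminance half of such a code: once the geometrical mapping is held fixed across the image set, fitting the per-block luminance scale and offset reduces to ordinary least squares. The block pairing below is a fixed toy choice, not the paper's actual geometry search.

```python
import numpy as np

def luminance_params(domain_block, range_block):
    """Least-squares scale s and offset o with range ~ s * domain + o."""
    d = domain_block.reshape(-1)
    a = np.column_stack([d, np.ones_like(d)])
    (s, o), *_ = np.linalg.lstsq(a, range_block.reshape(-1), rcond=None)
    return s, o

rng = np.random.default_rng(0)
image = rng.integers(0, 256, (16, 16)).astype(float)
# fixed "geometry": every 4x4 range block is paired with the 8x8 block at
# the image origin, decimated by 2 (a purely illustrative pairing)
domain = image[:8, :8].reshape(4, 2, 4, 2).mean(axis=(1, 3))
for i in range(0, 16, 8):
    for j in range(0, 16, 8):
        s, o = luminance_params(domain, image[i:i+4, j:j+4])
        print(f"range block ({i:2d},{j:2d}): s={s:+.2f}, o={o:+.2f}")
```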

Relevance:

30.00%

Abstract:

The Australian construction industry, reflecting a global trend, is moving towards the implementation of a voluntary code of practice (hereafter VCP) for occupational health and safety. The evidence suggests that highly visible clients and project management firms, in addition to their subcontractors, look set to embrace such a code. However, smaller firms not operating in high-profile contracting regimes may prove reticent to adopt a VCP. This paper incorporates qualitative data from a high-profile research project commissioned by Engineers Australia and supported by the Australian Contractors’ Association, Property Council of Australia, Royal Australian Institute of Architects, Association of Consulting Engineers Australia, Australian Procurement and Construction Council, Master Builders Australia and the Australian CRC for Construction Innovation. The paper aims to understand the factors that facilitate or prevent the uptake of the VCP by smaller firms, together with pathways to the adoption of a VCP by industry.

Relevance:

30.00%

Abstract:

The management of models over time in many domains requires different constraints to apply to some parts of the model as it evolves. Using EMF and its meta-language Ecore, the development of model management code and tools usually relies on the metamodel having some constraints, such as attribute and reference cardinalities and changeability, set in the least constrained way that any model user will require. Stronger versions of these constraints can then be enforced in code, or by attaching additional constraint expressions, and their evaluation engines, to the generated model code. We propose a mechanism that allows for variations to the constraining meta-attributes of metamodels, to allow enforcement of different constraints at different lifecycle stages of a model. We then discuss the implementation choices within EMF to support the validation of a state-specific metamodel on model graphs when changing states, as well as the enforcement of state-specific constraints when executing model change operations.
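
The mechanism can be paraphrased outside EMF as follows; since Ecore is a Java framework, this is a language-neutral Python sketch with hypothetical names, not the Ecore API: the same attribute carries different constraining meta-attributes per lifecycle state, and both change operations and state transitions are validated against the active state's constraints.

```python
# state -> attribute -> (lower bound, upper bound, changeable); hypothetical
STATE_CONSTRAINTS = {
    "draft":     {"tasks": (0, -1, True)},   # -1 = unbounded, freely editable
    "published": {"tasks": (1, 10, False)},  # tighter cardinality, frozen
}

class Model:
    def __init__(self):
        self.state = "draft"
        self.tasks = []

    def set_tasks(self, tasks):
        """Enforce the active state's constraints on a change operation."""
        lower, upper, changeable = STATE_CONSTRAINTS[self.state]["tasks"]
        if not changeable:
            raise ValueError(f"'tasks' is not changeable in state {self.state!r}")
        if len(tasks) < lower or (upper != -1 and len(tasks) > upper):
            raise ValueError(f"cardinality {len(tasks)} outside [{lower}, {upper}]")
        self.tasks = tasks

    def transition(self, new_state):
        """Validate the whole model against the target state's metamodel."""
        lower, upper, _ = STATE_CONSTRAINTS[new_state]["tasks"]
        if len(self.tasks) < lower or (upper != -1 and len(self.tasks) > upper):
            raise ValueError(f"model invalid for state {new_state!r}")
        self.state = new_state

m = Model()
m.set_tasks(["design"])    # fine: the draft state is barely constrained
m.transition("published")  # re-validates against the stricter constraints
try:
    m.set_tasks([])        # rejected: not changeable once published
except ValueError as e:
    print(e)
```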

Relevance:

30.00%

Abstract:

The Code of Banking Practice is one of the oldest examples of consumer protection provided through self-regulation in the Australian financial services sector. However, since the Banking Code was first released in 1993, the volume of consumer protection legislation applying to banks has increased exponentially, and parts of the Banking Code that once provided new consumer rights have now been largely superseded by legislation. In light of the increasingly complex set of laws and regulations that govern the relationship between banks and their consumer and small business customers, it could be argued that the Banking Code has a limited future role. However, an analysis of the Banking Code shows that it adds to the consumer protection standards provided by legislation and can continue to facilitate improvements in the standards of subscribing banks and of other institutions in the financial services sector. Self-regulation and industry codes should continue to be part of the regulatory mix. Any regulatory changes that flow from the recent Financial System Inquiry should facilitate and support this self-regulatory role, but the government should also consider further changes to encourage improvements in industry codes and to ensure that the implicit regulatory benefits that arise, in part, from the existence of industry codes are made explicit and made available only to code subscribers.

Relevance:

30.00%

Abstract:

Examines the symbolic significance of major events and their security provision in the historical and contemporary context of the European Code of Police Ethics. Stresses the potential of major events to set new practical policing and security standards and technologies, and in doing so to necessitate the maintenance of professional ethical standards for policing in Europe.

Relevance:

30.00%

Abstract:

The problem of determining whether a Tanner graph for a linear block code has a stopping set of a given size is shown to be NP-complete.
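
For context, although deciding whether a stopping set of a given size exists is NP-complete, verifying one candidate set is easy (which is what places the problem in NP): a stopping set is a set S of variable nodes such that no check node has exactly one neighbour in S. A minimal checker with a toy parity-check matrix:

```python
import numpy as np

def is_stopping_set(h, s):
    """True if variable-node set s is a stopping set of parity-check matrix h."""
    counts = h[:, sorted(s)].sum(axis=1)  # neighbours of s, per check node
    return not np.any(counts == 1)        # no check node sees s exactly once

h = np.array([[1, 1, 0, 1, 0],   # toy parity-check (Tanner graph) matrix
              [0, 1, 1, 0, 1],
              [1, 0, 1, 1, 0]])
print(is_stopping_set(h, {0, 3}))  # True: incident checks each see 2 members
print(is_stopping_set(h, {0, 1}))  # False: check row 1 sees exactly one member
```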

Relevance:

30.00%

Abstract:

We address the problem of designing codes for specific applications using deterministic annealing. Designing a block code over any finite-dimensional space may be thought of as forming the corresponding number of clusters over that space. We have shown that the total distortion incurred in encoding a training set is related to the probability of correct reception over a symmetric channel. While conventional deterministic annealing makes use of the squared Euclidean distance measure, we have developed an algorithm that can be used for clustering with the Hamming distance as the distance measure, which is required in the error-correction scenario.
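
A hedged sketch of that idea: soft assignments follow a Gibbs distribution over cluster distortions, and with the Hamming distortion the codeword update becomes a weighted bitwise majority vote (thresholding a weighted mean). The training data, annealing schedule and names are toy choices, not the paper's algorithm verbatim.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.integers(0, 2, (40, 8))   # training set of binary words
K = 4                             # number of codewords (clusters) to design
Y = X[rng.choice(len(X), K, replace=False)].astype(float)  # initial codewords

for T in [4.0, 2.0, 1.0, 0.5, 0.25]:          # annealing (temperature) schedule
    for _ in range(20):
        d = (X[:, None, :] != Y.round()[None, :, :]).sum(axis=2)  # Hamming
        p = np.exp(-d / T)
        p /= p.sum(axis=1, keepdims=True)      # soft (Gibbs) assignments
        # Hamming centroid = weighted majority vote per bit
        Y = (p.T @ X) / p.sum(axis=0)[:, None]

print("designed codewords:\n", Y.round().astype(int))
```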

Relevance:

30.00%

Abstract:

Code Division Multiple Access (CDMA) techniques have so far been applied to LAN problems by many investigators. An analytical study of well-known algorithms for generating orthogonal codes used in FO-CDMA systems, such as prime, quasi-prime, optical orthogonal and matrix codes, is presented. Algorithms for OOCs such as the greedy, modified greedy and accelerated greedy algorithms are implemented, and many speed-up enhancements for these algorithms are suggested. A novel Synthetic Algorithm based on Difference Sets (SADS) is also proposed. Investigations are made to vectorise/parallelise SADS in order to implement the source code on parallel machines. A new matrix for code families of OOCs with different seed codewords but the same (n, w, lambda) set is formulated.
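
The plain greedy construction can be sketched as follows for (n, w, 1) optical orthogonal codes: a w-set of chip positions is accepted when its cyclic differences are internally distinct and disjoint from those already used, which bounds both auto- and cross-correlation by 1. This restates only the basic greedy idea, not the accelerated variants or SADS.

```python
from itertools import combinations

def cyclic_differences(word, n):
    """Ordered nonzero cyclic differences of a set of chip positions."""
    return [(a - b) % n for a in word for b in word if a != b]

def greedy_ooc(n, w):
    """Greedily collect codewords whose difference sets never collide."""
    used, code = set(), []
    for word in combinations(range(n), w):
        diffs = cyclic_differences(word, n)
        if len(set(diffs)) == len(diffs) and used.isdisjoint(diffs):
            code.append(word)
            used.update(diffs)
    return code

print(greedy_ooc(19, 3))  # a small (19, 3, 1) OOC family
```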

Relevance:

30.00%

Abstract:

The capacity region of a two-user Gaussian Multiple Access Channel (GMAC) with complex finite input alphabets and a continuous output alphabet is studied. When both users are equipped with the same code alphabet, it is shown that rotating one user's alphabet by an appropriate angle not only makes the new pair of alphabets uniquely decodable but also enlarges the capacity region. For this set-up, we identify the primary problem as finding the angle(s) of rotation between the alphabets such that the capacity region is maximally enlarged. It is shown that the angle of rotation that provides maximum enlargement of the capacity region also minimizes the union bound on the probability of error of the sum alphabet, and vice versa. The optimum angle(s) of rotation vary with the SNR. Through simulations, optimal angle(s) of rotation that give maximum enlargement of the capacity region of the GMAC with some well-known alphabets, such as M-QAM and M-PSK for some M, are presented for several values of SNR. It is shown that for a large number of points in the alphabets, the capacity gains due to rotation progressively diminish. As the number of points N tends to infinity, our results match those in the literature, wherein the capacity region for Gaussian code alphabets does not change with rotation at any SNR.
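
A numerical sketch of the angle search: rotate the second user's QPSK alphabet, form the sum alphabet, and track its minimum distance, used here as a crude stand-in for the union bound the paper actually minimizes. A minimum distance of zero at 0 degrees reflects the lack of unique decodability before rotation.

```python
import numpy as np

qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))  # unit-energy QPSK

def min_distance(theta):
    """Minimum distance of the sum alphabet with user 2 rotated by theta."""
    s = (qpsk[:, None] + (qpsk * np.exp(1j * theta))[None, :]).ravel()
    d = np.abs(s[:, None] - s[None, :])
    np.fill_diagonal(d, np.inf)   # ignore self-distances only
    return d.min()                # 0 => sum alphabet not uniquely decodable

angles = np.linspace(0.0, np.pi / 2, 181)
best = max(angles, key=min_distance)
print(f"best rotation ~ {np.degrees(best):.1f} deg: "
      f"d_min {min_distance(best):.3f} vs {min_distance(0.0):.3f} at 0 deg")
```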

Relevance:

30.00%

Abstract:

Summary form only given. A scheme for code compression that has a fast decompression algorithm, implementable with simple hardware, is proposed. The effectiveness of the scheme is evaluated on the TMS320C62x architecture, including the overheads of a line address table (LAT), and compression rates ranging from 70% to 80% are obtained. Two schemes for decompression are proposed. The basic idea underlying the scheme is a simple clustering algorithm that partially maps a block of instructions into a set of clusters. The clustering algorithm is a greedy algorithm based on the frequency of occurrence of various instructions.
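
In the same spirit (the actual scheme clusters whole instruction blocks, and its details differ), a frequency-driven dictionary sketch: the most frequent instruction words receive short indices, and the rest are emitted verbatim behind an escape flag, with sizes counted in bits. All names and numbers are illustrative.

```python
from collections import Counter

def compressed_bits(instructions, dict_size=4, index_bits=2, word_bits=32):
    """Greedy dictionary of the most frequent words; returns total bit cost."""
    table = {w for w, _ in Counter(instructions).most_common(dict_size)}
    # per instruction: 1 flag bit + a short index, or 1 flag bit + raw word
    return sum(1 + (index_bits if w in table else word_bits)
               for w in instructions)

program = ["NOP", "ADD", "LDW", "ADD", "NOP", "MPY", "ADD", "LDW",
           "STW", "ADD", "NOP", "B", "ADD", "LDW", "NOP", "ADD"]
bits = compressed_bits(program)
print(f"{len(program) * 32} bits -> {bits} bits "
      f"({bits / (len(program) * 32):.0%}, excluding table and LAT overheads)")
```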