998 results for Binary cyclic code


Relevance: 20.00%

Abstract:

The structures of the open-chain amide carboxylic acid rac-cis-2-[(2-methoxyphenyl)carbamoyl]cyclohexane-1-carboxylic acid, C15H19NO4, (I), and the cyclic imides rac-cis-2-(4-methoxyphenyl)-3a,4,5,6,7,7a-hexahydroisoindole-1,3-dione, C15H17NO3, (II), chiral cis-2-(3-carboxyphenyl)-3a,4,5,6,7,7a-hexahydroisoindole-1,3-dione, C15H15NO4, (III), and rac-cis-2-(4-carboxyphenyl)-3a,4,5,6,7,7a-hexahydroisoindole-1,3-dione monohydrate, C15H15NO4·H2O, (IV), are reported. In the amide acid (I), the phenylcarbamoyl group is essentially planar [maximum deviation from the least-squares plane = 0.060 (1) Å for the amide O atom], and the molecules form discrete centrosymmetric dimers through intermolecular cyclic carboxy-carboxy O-H...O hydrogen-bonding interactions [graph-set notation R2^2(8)]. The cyclic imides (II)-(IV) are conformationally similar, with comparable phenyl ring rotations about the imide N-C(aromatic) bond [dihedral angles between the benzene and isoindole rings = 51.55 (7)° in (II), 59.22 (12)° in (III) and 51.99 (14)° in (IV)]. Unlike (II), in which only weak intermolecular C-H...O(imide) hydrogen bonding is present, the crystal packing of imides (III) and (IV) shows strong intermolecular carboxylic acid O-H...O hydrogen-bonding associations. In (III), these involve imide O-atom acceptors, giving one-dimensional zigzag chains [graph set C(9)], while in the monohydrate (IV), the hydrogen bond involves the partially disordered water molecule, which also bridges molecules through both imide and carboxyl O-atom acceptors in a cyclic R4^4(12) association, giving a two-dimensional sheet structure. The structures reported here expand the structural database for compounds of this series, formed from the facile reaction of cis-cyclohexane-1,2-dicarboxylic anhydride with substituted anilines, in which there is a much higher incidence of cyclic imides than of amide carboxylic acids.

Relevance: 20.00%

Abstract:

The structures of the compounds from the reaction of cis-cyclohexane-1,2-dicarboxylic anhydride with 4-chloroaniline [rac-N-(4-chlorophenyl)-2-carboxycyclohexane-1-carboxamide] (1), 4-bromoaniline [2-(4-bromophenyl)perhydroisoindole-1,3-dione] (2) and 3-hydroxy-4-carboxyaniline (5-aminosalicylic acid) [2-(3-hydroxy-4-carboxyphenyl)perhydroisoindole-1,3-dione] (3) have been determined at 200 K. Crystals of the open-chain amide carboxylic acid 1 are orthorhombic, space group Pbcn, with unit-cell dimensions a = 20.1753 (10), b = 8.6267 (4), c = 15.9940 (9) Å, and Z = 8. Compounds 2 and 3 are cyclic imides; 2 is monoclinic, space group P21, with cell dimensions a = 11.5321 (3), b = 6.7095 (2), c = 17.2040 (5) Å and β = 102.527 (3)°, while 3 is orthorhombic with cell dimensions a = 6.4642 (3), b = 12.8196 (5), c = 16.4197 (7) Å. Molecules of 1 form hydrogen-bonded cyclic dimers, which are extended into a two-dimensional layered structure through amide-group associations; 3 forms one-dimensional zigzag chains through carboxylic acid...imide O-atom hydrogen bonds, while compound 2 is essentially unassociated. In both cyclic imides 2 and 3, disorder is found involving partial enantiomeric replacement of the cis-cyclohexane-1,2-substituted ring systems.

Relevance: 20.00%

Abstract:

This paper presents a combined structure for using real-, complex-, and binary-valued vectors for semantic representation, and discusses its theory, implementation, and application. For the theory underlying quantum interaction, it is important to develop a core set of mathematical operators that describe systems of information, just as core mathematical operators in quantum mechanics are used to describe the behavior of physical systems. The system described in this paper enables us to compare more traditional quantum mechanical models (which use complex state vectors) alongside more generalized quantum models that use real and binary vectors. The implementation of such a system presents fundamental computational challenges: for large and sometimes sparse datasets, the demands on time and space differ for real, complex, and binary vectors. To accommodate these demands, the Semantic Vectors package has been carefully adapted and can now switch between the different number types comparatively seamlessly. This paper describes the key abstract operations in our semantic vector models and their implementations for real, complex, and binary vectors. We also discuss some of the key questions that arise in the field of quantum interaction and informatics, explaining how the wide availability of modelling options for different number fields will help to investigate some of these questions.
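
A minimal Java sketch of the kind of abstract operations the paper refers to, shown here for binary-valued vectors in the style of Kanerva's binary spatter codes: binding (elementwise XOR) and superposition (coordinate-wise majority vote). The class and method names are illustrative assumptions, not the Semantic Vectors package API.

```java
import java.util.Random;

// Illustrative sketch (not the Semantic Vectors API): binding and
// superposition for binary-valued semantic vectors.
public class BinaryVectorSketch {
    static final int DIM = 4096;  // high dimensionality is essential

    // Binding: elementwise XOR; self-inverse, so bind(bind(a, b), b) == a.
    static boolean[] bind(boolean[] a, boolean[] b) {
        boolean[] out = new boolean[DIM];
        for (int i = 0; i < DIM; i++) out[i] = a[i] ^ b[i];
        return out;
    }

    // Superposition: per-coordinate majority vote over the inputs,
    // with random tie-breaking when the vote is even.
    static boolean[] superpose(Random rng, boolean[]... vs) {
        boolean[] out = new boolean[DIM];
        for (int i = 0; i < DIM; i++) {
            int ones = 0;
            for (boolean[] v : vs) if (v[i]) ones++;
            int zeros = vs.length - ones;
            out[i] = (ones != zeros) ? ones > zeros : rng.nextBoolean();
        }
        return out;
    }

    // Similarity: normalised Hamming agreement in [0, 1].
    static double similarity(boolean[] a, boolean[] b) {
        int agree = 0;
        for (int i = 0; i < DIM; i++) if (a[i] == b[i]) agree++;
        return (double) agree / DIM;
    }

    static boolean[] randomVector(Random rng) {
        boolean[] v = new boolean[DIM];
        for (int i = 0; i < DIM; i++) v[i] = rng.nextBoolean();
        return v;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        boolean[] role = randomVector(rng), filler = randomVector(rng);
        boolean[] bound = bind(role, filler);
        // Unbinding with the role recovers the filler exactly.
        System.out.println(similarity(bind(bound, role), filler));  // 1.0
    }
}
```

The analogous operations for real vectors (for example, circular convolution for binding) and for complex vectors differ in storage and cost, which is the practical point the abstract makes about switching between number types.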

Relevance: 20.00%

Abstract:

This paper presents and discusses the use of different codes for the numerical simulation of a radial-inflow turbine. A radial-inflow turbine test case was selected from the published literature [1], and the commercial codes Fluent and CFX were used to perform steady-state numerical simulations. An in-house compressible-flow simulation code, Eilmer3 [2], was also adapted to make it suitable for turbomachinery simulations, and preliminary results are presented and discussed. The code itself and its adaptation, comprising the addition of terms for the rotating frame of reference, programmable boundary conditions for periodic boundaries, and a mixing-plane interface between the rotating and non-rotating blocks, are also discussed. Several cases with different orders of geometric complexity were considered, and the results were compared across the different codes. The agreement between these results and the published data is also discussed.
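
For readers unfamiliar with the rotating-frame adaptation mentioned above, the sketch below shows the standard Coriolis and centripetal momentum source terms for a frame rotating about the z-axis. It is a textbook illustration under that stated assumption, not the Eilmer3 implementation.

```java
// Textbook illustration (not the Eilmer3 source code): per-unit-volume
// momentum source for a frame rotating about the z-axis at rate omega,
//   S = -rho * (2 * Omega x u + Omega x (Omega x r)).
public class RotatingFrameSource {

    // r: position relative to the rotation axis; u: velocity in the
    // rotating frame. Returns {Sx, Sy, Sz} in N/m^3.
    static double[] momentumSource(double rho, double omega,
                                   double[] r, double[] u) {
        // Coriolis:    2 Omega x u        = (-2*omega*u_y, 2*omega*u_x, 0)
        // Centripetal: Omega x (Omega x r) = (-omega^2*r_x, -omega^2*r_y, 0)
        double sx = -rho * (-2.0 * omega * u[1] - omega * omega * r[0]);
        double sy = -rho * ( 2.0 * omega * u[0] - omega * omega * r[1]);
        return new double[] { sx, sy, 0.0 };
    }

    public static void main(String[] args) {
        // Fluid at rest in the rotating frame, 0.1 m from the axis,
        // spinning at 1000 rad/s: only the centripetal part contributes.
        double[] s = momentumSource(1.2, 1000.0,
                new double[] { 0.1, 0.0, 0.0 }, new double[] { 0.0, 0.0, 0.0 });
        System.out.printf("S = (%.0f, %.0f, %.0f) N/m^3%n", s[0], s[1], s[2]);
        // Prints S = (120000, 0, 0) N/m^3.
    }
}
```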

Relevance: 20.00%

Abstract:

Refactoring is a common approach to producing better quality software. Its impact on many software quality properties, including reusability, maintainability and performance, has been studied and measured extensively. However, its impact on the information security of programs has received relatively little attention. In this work, we assess the impact of a number of the most common code-level refactoring rules on data security, using metrics that capture security from the viewpoint of potential information flow. The metrics are calculated for a given Java program using a static analysis tool we have developed to automatically analyse compiled Java bytecode. We ran our Java code analyser on various programs, each refactored according to one of the rules. The new metric values for the refactored programs confirmed that the code changes had a measurable effect on information security.
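
As a hypothetical illustration of how a code-level refactoring can move such a metric, the Java fragment below applies the classic "Encapsulate Field" rule. The class names are invented for this sketch, and the metric interpretation is assumed rather than taken from the paper's tool.

```java
// Hypothetical example (names invented): the "Encapsulate Field"
// refactoring narrows the paths along which data can flow out of a class,
// which a static information-flow metric over the bytecode can register.

// Before: the field is readable and writable from any other class,
// so every client is a potential information-flow path.
class AccountBefore {
    public long balance;
}

// After: access is funnelled through accessors, which become the only
// flow paths the bytecode analyser needs to count.
class AccountAfter {
    private long balance;

    public long getBalance() { return balance; }

    public void deposit(long amount) {
        if (amount > 0) balance += amount;  // guarded write path
    }
}
```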

Relevance: 20.00%

Abstract:

This article suggests that the issue of proportionality in anti-doping sanctions has been inconsistently dealt with by the Court of Arbitration for Sport (CAS). Given CAS’s pre-eminent role in interpreting and applying the World Anti-Doping Code under the anti-doping policies of its signatories, an inconsistent approach to the application of the proportionality principle will cause difficulties for domestic anti-doping tribunals seeking guidance as to the appropriateness of their doping sanctions.

Relevance: 20.00%

Abstract:

An optical system which performs the multiplication of binary numbers is described and proof-of-principle experiments are performed. The simultaneous generation of all partial products, optical regrouping of bit products, and optical carry look-ahead addition are novel features of the proposed scheme which takes advantage of the parallel operations capability of optical computers. The proposed processor uses liquid crystal light valves (LCLVs). By space-sharing the LCLVs one such system could function as an array of multipliers. Together with the optical carry look-ahead adders described, this would constitute an optical matrix-vector multiplier.
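
The carry look-ahead principle the proposed adder exploits can be illustrated in ordinary software. In the Java sketch below, each carry is expressed through generate and propagate signals rather than by rippling a running sum; the recurrence is evaluated sequentially here for brevity, whereas the optical scheme unrolls it so all carries resolve in parallel. This is an editorial illustration, not the optical processor itself.

```java
// Illustration of carry look-ahead addition (not the optical processor):
// carries are derived from generate/propagate signals, the form that
// hardware and the proposed optical scheme evaluate in parallel.
public class CarryLookAhead {

    static int add(int a, int b, int bits) {
        int[] g = new int[bits], p = new int[bits], c = new int[bits + 1];
        for (int i = 0; i < bits; i++) {
            int ai = (a >> i) & 1, bi = (b >> i) & 1;
            g[i] = ai & bi;  // generate: this column creates a carry
            p[i] = ai ^ bi;  // propagate: this column passes a carry on
        }
        c[0] = 0;
        // c[i+1] = g[i] | (p[i] & c[i]); unrolling this recurrence gives a
        // two-level expression in the inputs only, so all carries can be
        // formed at once. It is evaluated sequentially here for brevity.
        for (int i = 0; i < bits; i++) c[i + 1] = g[i] | (p[i] & c[i]);
        int sum = 0;
        for (int i = 0; i < bits; i++) sum |= (p[i] ^ c[i]) << i;
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(add(13, 11, 8));  // 24
    }
}
```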

Relevance: 20.00%

Abstract:

Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing efficient comparisons in the hash domain, but non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed with deterministic (non-changing) inputs such as files and passwords. For such data types a 1-bit or one-character change can be significant, so the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing: current robust hashing techniques are based not on cryptographic methods but on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction, followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. To preserve the robustness of the extracted features, most randomization methods are linear, which is detrimental to the security properties required of hash functions. Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security.

This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While the existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training for both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images.

This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, which is an essential requirement for non-invertibility, and is designed to produce features better suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded, and displays improved hashing performance compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
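
To make the randomization, quantization and encoding stages concrete, the sketch below shows a generic robust-hash back end: a key-seeded linear random projection (the conventional linear stage the dissertation critiques), quantization against learnt per-coordinate thresholds, and Hamming-distance comparison. All names and parameters are illustrative assumptions, not the dissertation's HOS-based method.

```java
import java.util.Random;

// Generic robust-hash back end (illustrative assumptions only; this is
// the conventional linear pipeline, not the dissertation's HOS method).
public class RobustHashSketch {

    // Randomization: project the real-valued features with a matrix of
    // Gaussian entries generated from a secret key.
    static double[] randomize(double[] features, long key, int outDim) {
        Random rng = new Random(key);  // key-seeded, so repeatable
        double[] out = new double[outDim];
        for (int j = 0; j < outDim; j++)
            for (int i = 0; i < features.length; i++)
                out[j] += rng.nextGaussian() * features[i];
        return out;
    }

    // Quantization and binary encoding: one bit per coordinate, decided
    // against a learnt threshold (e.g. the training-set median).
    static boolean[] binarize(double[] projected, double[] thresholds) {
        boolean[] hash = new boolean[projected.length];
        for (int j = 0; j < projected.length; j++)
            hash[j] = projected[j] > thresholds[j];
        return hash;
    }

    // Robust comparison: a small Hamming distance, not exact equality,
    // indicates a perceptually similar input.
    static int hammingDistance(boolean[] h1, boolean[] h2) {
        int d = 0;
        for (int j = 0; j < h1.length; j++) if (h1[j] != h2[j]) d++;
        return d;
    }

    public static void main(String[] args) {
        double[] features = { 0.9, -0.2, 1.4, 0.3 };
        double[] projected = randomize(features, 1234L, 16);
        boolean[] hash = binarize(projected, new double[16]);  // zero thresholds for the demo
        System.out.println(hammingDistance(hash, hash));  // 0
    }
}
```

The learnt thresholds passed to binarize are exactly the training-dependent, leakage-prone quantities whose role in accuracy and security the dissertation analyses.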

Relevance: 20.00%

Abstract:

Introduction: Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases, it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo (MC) methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the 'gold standard' for predicting dose deposition in the patient [1]. This project has three main aims:

1. To develop tools that enable the transfer of treatment plan information from the treatment planning system (TPS) to a MC dose calculation engine.
2. To develop tools for comparing the 3D dose distributions calculated by the TPS and the MC dose engine.
3. To investigate the radiobiological significance of any differences between the TPS and MC patient dose distributions in terms of Tumour Control Probability (TCP) and Normal Tissue Complication Probability (NTCP).

The work presented here addresses the first two aims.

Methods: (1a) Plan importing: A database of commissioned accelerator models (Elekta Precise and Varian 2100CD) has been developed for treatment simulations in the MC system (EGSnrc/BEAMnrc). Beam descriptions can be exported from the TPS using the widespread DICOM framework, and the resultant files are parsed with the assistance of a software library (PixelMed Java DICOM Toolkit). The information in these files (such as the monitor units, the jaw positions and the gantry orientation) is used to construct a plan-specific accelerator model which allows an accurate simulation of the patient treatment field.

(1b) Dose simulation: The calculation of a dose distribution requires patient CT images, which are prepared for the MC simulation using a tool (CTCREATE) packaged with the system. Beam simulation results are converted to absolute dose per MU using calibration factors recorded during the commissioning process and treatment simulation. These distributions are combined according to the MU meter settings stored in the exported plan to produce an accurate description of the prescribed dose to the patient.

(2) Dose comparison: TPS dose calculations can be obtained either through a DICOM export or by direct retrieval of binary dose files from the file system. Dose difference, gamma evaluation and normalised dose difference algorithms [2] were employed for the comparison of the TPS and MC dose distributions. These implementations are independent of the spatial resolution and able to interpolate for comparisons.

Results and Discussion: The tools successfully produced Monte Carlo input files for a variety of plans exported from the Eclipse (Varian Medical Systems) and Pinnacle (Philips Medical Systems) planning systems, ranging in complexity from a single uniform square field to a five-field step-and-shoot IMRT treatment. The simulation of collimated beams has been verified geometrically, and validation of dose distributions in a simple body phantom (QUASAR) will follow. The developed dose comparison algorithms have also been tested with controlled dose distribution changes.

Conclusion: The capability of the developed code to independently process treatment plans has been demonstrated. A number of limitations exist: only static fields are currently supported (dynamic wedges and dynamic IMRT will require further development), and the process has not been tested for planning systems other than Eclipse and Pinnacle. The tools will be used to independently assess the accuracy of the current TPS dose calculation algorithms for complex treatment deliveries, such as IMRT, in treatment sites where patient inhomogeneities are expected to be significant.

Acknowledgements: Computational resources and services used in this work were provided by the HPC and Research Support Group, Queensland University of Technology, Brisbane, Australia. Pinnacle dose parsing was made possible with the help of Paul Reich, North Coast Cancer Institute, North Coast, New South Wales.
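
For concreteness, the sketch below implements a one-dimensional version of the gamma evaluation cited above [2]: each evaluated point is scored by the minimum combined dose-difference and distance-to-agreement measure over the reference distribution, with gamma <= 1 conventionally counted as a pass. The grid spacing, tolerance values and array names are illustrative assumptions, not the project's implementation.

```java
// One-dimensional gamma evaluation in the style of Low et al. [2]
// (illustrative assumptions only, not the project's comparison tool).
public class GammaEvaluation {

    // refDose/evalDose: dose profiles on a common uniform grid;
    // spacingMm: grid spacing; doseTol: dose criterion in the same units
    // as the dose arrays; distTolMm: distance-to-agreement criterion.
    static double[] gamma1D(double[] refDose, double[] evalDose,
                            double spacingMm, double doseTol, double distTolMm) {
        double[] gamma = new double[evalDose.length];
        for (int e = 0; e < evalDose.length; e++) {
            double best = Double.MAX_VALUE;
            for (int r = 0; r < refDose.length; r++) {
                double dist = (r - e) * spacingMm;     // spatial offset, mm
                double dd = evalDose[e] - refDose[r];  // dose difference
                double g2 = (dist * dist) / (distTolMm * distTolMm)
                          + (dd * dd) / (doseTol * doseTol);
                best = Math.min(best, g2);             // keep the minimum
            }
            gamma[e] = Math.sqrt(best);
        }
        return gamma;
    }

    public static void main(String[] args) {
        double[] ref  = { 0, 10, 50, 100, 50, 10, 0 };
        double[] eval = { 0, 12, 52, 100, 48, 10, 0 };
        // 3 dose units / 3 mm criteria on a 1 mm grid:
        for (double g : gamma1D(ref, eval, 1.0, 3.0, 3.0))
            System.out.printf("%.2f ", g);  // every value <= 1: all points pass
    }
}
```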

Relevance: 20.00%

Abstract:

User interfaces for source-code editing are a crucial component of any software development environment, and in many editors visual annotations (overlaid on the textual source code) are used to provide important contextual information to the programmer. This paper focuses on the real-time programming activity of ‘cyberphysical’ programming, and considers the types of visual annotation that may be helpful in this programming context.

Relevance: 20.00%

Abstract:

The first version of the Standard PREanalytical Code (SPREC) was developed in 2009 by the International Society for Biological and Environmental Repositories (ISBER) Biospecimen Science Working Group to facilitate documentation and communication of the most important preanalytical quality parameters of the different types of biospecimens used for research. The same Working Group has now updated the SPREC to version 2.0, presented here, so that it contains more options to allow for recent technological developments. Existing elements have been fine-tuned. An interface to the Biospecimen Reporting for Improved Study Quality (BRISQ) has been defined, and informatics solutions for SPREC implementation have been developed. A glossary with SPREC-related definitions has also been added.

Relevance: 20.00%

Abstract:

When crest-fixed thin steel roof cladding systems are subjected to wind uplift, local pull-through or pull-out failures occur prematurely at their screwed connections. During high wind events such as storms and cyclones, these localised failures lead to severe damage to buildings and their contents. In recent times, the use of thin steel battens/purlins has increased considerably, making pull-out failures more critical in the design of steel cladding systems. Recent research has produced a design formula for the static pull-out strength of steel cladding systems; however, the effects of the fluctuating wind uplift loading that occurs during high wind events are not known. Therefore a series of constant-amplitude cyclic tests has been undertaken on connections between steel battens made of different thicknesses and steel grades and screw fasteners of varying diameter and pitch. This paper presents the details of these cyclic tests and their results.