978 results for Common Scrambling Algorithm Stream Cipher
Abstract:
In this paper, we propose a methodology for determining the most efficient and least costly approach to crew pairing optimization. The methodology is based on algorithm optimization, developed in the Eclipse open-source IDE using the Java programming language, to solve crew scheduling problems.
Abstract:
British mammalogists have used two different systems for surveying the common dormouse Muscardinus avellanarius: a modified bird nest box with the entrance facing the tree trunk, and a smaller, cheaper model called a "nest tube". However, few data comparing different nest box systems are currently available. To determine which system is more efficient, we compared the use of the large (GB-type) and small nest boxes (DE-type, a commercial wooden mouse trap without a door) in three Swiss forests. The presence of Muscardinus, potential competitors, and any evidence of occupation were examined in 60 pairs of nest boxes based on 2,280 nest box checks conducted over 5 years. Mean annual occupation and cumulative numbers of Muscardinus present were both significantly higher for the DE than for the GB boxes (64.6% versus 32.1%, and 149 versus 67 dormice, respectively). In contrast, the annual occupation by competitors, including Glis glis, Apodemus spp. and hole-nesting birds, was significantly higher in the GB than in the DE boxes in all forests (19-68% versus 0-16%, depending on the species and forest). These results suggest that smaller nest boxes are preferred by the common dormouse and are rarely occupied by competitors. These boxes hence appear to be preferable for studying Muscardinus populations.
Abstract:
Triglycerides are transported in plasma by specific triglyceride-rich lipoproteins; in epidemiological studies, increased triglyceride levels correlate with higher risk for coronary artery disease (CAD). However, it is unclear whether this association reflects causal processes. We used 185 common variants recently mapped for plasma lipids (P < 5 × 10⁻⁸ for each) to examine the role of triglycerides in risk for CAD. First, we highlight loci associated with both low-density lipoprotein cholesterol (LDL-C) and triglyceride levels, and we show that the direction and magnitude of the associations with both traits are factors in determining CAD risk. Second, we consider loci with only a strong association with triglycerides and show that these loci are also associated with CAD. Finally, in a model accounting for effects on LDL-C and/or high-density lipoprotein cholesterol (HDL-C) levels, the strength of a polymorphism's effect on triglyceride levels is correlated with the magnitude of its effect on CAD risk. These results suggest that triglyceride-rich lipoproteins causally influence risk for CAD.
Abstract:
Chromosomal rearrangements are proposed to promote genetic differentiation between chromosomally differentiated taxa and therefore to promote speciation. Due to their remarkable karyotypic polymorphism, shrews of the Sorex araneus group were used to investigate the impact of chromosomal rearrangements on gene flow. Five intraspecific chromosomal hybrid zones characterized by different levels of karyotypic complexity were studied using 16 microsatellite markers. We observed low levels of genetic differentiation even in the hybrid zones with the highest karyotypic complexity. No evidence of restricted gene flow between differently rearranged chromosomes was observed. Contrary to what was observed at the interspecific level, the effect of chromosomal rearrangements on gene flow was undetectable within the S. araneus species.
Abstract:
Descriptors based on Molecular Interaction Fields (MIF) are highly suitable for drug discovery, but their size (thousands of variables) often limits their application in practice. Here we describe a simple and fast computational method that extracts from a MIF a handful of highly informative points (hot spots) which summarize the most relevant information. The method was specifically developed for drug discovery, is fast, and does not require human supervision, making it suitable for application to very large series of compounds. The quality of the results has been tested by running the method on the ligand structures of a large number of ligand-receptor complexes and then comparing the positions of the selected hot spots with actual atoms of the receptor. As an additional test, the hot spots obtained with the novel method were used to derive GRIND-like molecular descriptors, which were compared with the original GRIND. In both cases the results show that the novel method is highly suitable for describing ligand-receptor interactions and compares favorably with other state-of-the-art methods.
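The idea of condensing a large field into a few informative points can be illustrated with a toy analogue. The assumptions here are mine, not the paper's: a 2-D grid of interaction energies stands in for a 3-D MIF, and a greedy selection with a minimum mutual distance stands in for the paper's actual extraction procedure.

```python
# Toy analogue of MIF hot-spot extraction: keep the k most favourable
# (lowest-energy) grid points, at least min_dist apart, so the selected
# points summarize distinct regions of the field. Grid and energies invented.
import math

field = {  # (x, y) -> interaction energy (lower = more favourable)
    (0, 0): -1.2, (0, 1): -0.3, (1, 0): -4.5, (1, 1): -0.9,
    (2, 0): -4.1, (2, 1): -0.2, (3, 0): -0.1, (3, 1): -3.8,
}

def hot_spots(field, k, min_dist):
    """Greedily keep the k lowest-energy points at least min_dist apart."""
    chosen = []
    for point, energy in sorted(field.items(), key=lambda kv: kv[1]):
        if all(math.dist(point, p) >= min_dist for p in chosen):
            chosen.append(point)
        if len(chosen) == k:
            break
    return chosen

print(hot_spots(field, k=2, min_dist=1.5))  # → [(1, 0), (3, 1)]
```

Note how the second-lowest point (2, 0) is skipped because it is too close to (1, 0): the distance constraint forces the hot spots to cover separate regions rather than one deep minimum.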
Abstract:
Background: The analysis of the promoter sequences of genes with similar expression patterns is a basic tool to annotate common regulatory elements. Multiple sequence alignments are the basis of most comparative approaches. The characterization of regulatory regions from co-expressed genes at the sequence level, however, often does not yield satisfactory results, as promoter regions of genes sharing similar expression programs frequently show no nucleotide sequence conservation.

Results: In a recent approach to circumvent this limitation, we proposed to align the maps of predicted transcription factors (referred to as TF-maps) instead of the nucleotide sequences of two related promoters, taking into account the label of the corresponding factor and its position in the primary sequence. We have now extended the basic algorithm to permit multiple promoter comparisons using the progressive alignment paradigm. In addition, non-collinear conservation blocks can now be identified in the resulting alignments. We have optimized the parameters of the algorithm on a small but well-characterized collection of human-mouse-chicken-zebrafish orthologous gene promoters.

Conclusion: Results on this dataset indicate that TF-map alignments are able to detect high-level regulatory conservation in promoter and 3'UTR gene regions that cannot be detected by typical sequence alignments. Three examples are introduced here to illustrate the power of multiple TF-map alignments in characterizing conserved regulatory elements in the absence of sequence similarity. We consider that this kind of approach can be extremely useful in the future to annotate potential transcription factor binding sites in sets of co-regulated genes from high-throughput expression experiments.
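The collinear part of aligning TF-maps rather than nucleotides can be sketched with a toy: treat each promoter as a sequence of predicted factor labels and find their longest common subsequence. This is a deliberate simplification; the actual method also scores positions in the primary sequence and handles non-collinear blocks, and the factor labels below are hypothetical.

```python
# Toy collinear TF-map comparison: longest common subsequence of predicted
# transcription-factor labels along two promoters (labels are hypothetical).
def lcs(a, b):
    """Longest common subsequence via dynamic programming, with traceback."""
    m, n = len(a), len(b)
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            table[i + 1][j + 1] = (table[i][j] + 1 if a[i] == b[j]
                                   else max(table[i][j + 1], table[i + 1][j]))
    # trace back one optimal alignment
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1] and table[i][j] == table[i - 1][j - 1] + 1:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif table[i - 1][j] >= table[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

# predicted binding-site labels along two orthologous promoters (invented)
human = ["SP1", "CEBP", "TATA", "NFKB", "AP1"]
mouse = ["CEBP", "SP1", "TATA", "AP1"]
print(lcs(human, mouse))  # → ['SP1', 'TATA', 'AP1']
```

The conserved label subsequence is recovered even though the two promoters would share no nucleotide-level alignment, which is the core intuition behind TF-map alignments.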
Abstract:
Controlled interbreedings were performed between distinct chromosomal races or forms of Sorex araneus from populations in the Swiss and French Alps. Four mating pairs including homozygous individuals of the Vaud race, the Valais race, the Intermediate Vaud-Acrocentric form and the Acrocentric form led to several heterozygous hybrid litters. The karyotypes of the parents were determined from cultures of cartilaginous cells; those of the offspring were determined with a classical method. The production of hybrids between different races suggests the absence of a postmating barrier between the Vaud race and the Intermediate form, between the Acrocentric form and the Intermediate form, and between the Acrocentric form and the Valais race.
Abstract:
This study investigates the harmonisation of analytical results as an alternative to the restrictive approach of harmonising analytical methods, which is currently recommended to enable the exchange of information in support of the fight against illicit drug trafficking. The main goal of this study is to demonstrate that a common database can be fed by a range of different analytical methods, whatever the differences in their analytical parameters. For this purpose, a methodology was developed to estimate, and even optimise, the similarity of results obtained with different analytical methods. In particular, the possibility of introducing chemical profiles obtained with Fast GC-FID into a GC-MS database is studied in this paper. Using this methodology, the similarity of results from different analytical methods can be objectively assessed, and the practical utility of sharing a database across these methods can be evaluated, depending on the profiling purpose (evidential versus operational tool). This methodology can be regarded as a relevant approach to feeding a database with results from different analytical methods, and it calls into question the need to analyse all illicit drug seizures in a single laboratory or to harmonise analytical methods in each participating laboratory.
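A minimal sketch of comparing chemical profiles across instruments, under assumptions that are mine rather than the paper's: profiles are represented as vectors of peak areas, each normalised to unit sum, and similarity is measured with the cosine metric (the study's actual similarity measure and optimisation are not specified in the abstract).

```python
# Hedged illustration: similarity between two drug profiles, e.g. one from
# GC-MS and one from Fast GC-FID. Peak areas and the cosine metric are
# assumptions for this sketch, not the paper's exact protocol.
import math

profile_gcms = [120.0, 45.0, 300.0, 10.0]    # hypothetical peak areas
profile_gcfid = [118.0, 50.0, 290.0, 12.0]

def normalise(p):
    """Scale a profile to unit sum so instruments with different absolute
    responses become comparable."""
    s = sum(p)
    return [x / s for x in p]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

sim = cosine(normalise(profile_gcms), normalise(profile_gcfid))
print(f"{sim:.4f}")
```

A similarity near 1 for profiles of the same seizure measured on different instruments is the kind of evidence that would support feeding both into one shared database.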
Abstract:
A systolic array to implement lattice-reduction-aided linear detection is proposed for a MIMO receiver. The lattice reduction algorithm and the ensuing linear detection are performed in the same array, which makes the design hardware-efficient. The all-swap lattice reduction algorithm (ASLR), a variant of the LLL algorithm that processes all lattice basis vectors within one iteration, is considered for the systolic design. Lattice-reduction-aided linear detection based on the ASLR and LLL algorithms has very similar bit-error-rate performance, while ASLR is more time-efficient in the systolic array, especially for systems with a large number of antennas.
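As background to the abstract above, here is a minimal textbook LLL sketch (δ = 3/4, exact rational arithmetic, Gram-Schmidt recomputed each pass for clarity rather than speed). It is not the paper's ASLR variant or its systolic mapping; the abstract only states that ASLR processes all basis vectors within one iteration, which this sequential version does not do.

```python
# Minimal textbook LLL lattice basis reduction (delta = 3/4), for background
# only -- NOT the paper's all-swap (ASLR) variant or its systolic design.
from fractions import Fraction

def dot(a, b):
    return sum(Fraction(x) * y for x, y in zip(a, b))

def gram_schmidt(basis):
    """Orthogonalized vectors b*_i and projection coefficients mu[i][j]."""
    n = len(basis)
    ortho, mu = [], [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = [Fraction(x) for x in basis[i]]
        for j in range(i):
            mu[i][j] = dot(basis[i], ortho[j]) / dot(ortho[j], ortho[j])
            v = [a - mu[i][j] * b for a, b in zip(v, ortho[j])]
        ortho.append(v)
    return ortho, mu

def lll(basis, delta=Fraction(3, 4)):
    basis = [list(v) for v in basis]
    n, k = len(basis), 1
    while k < n:
        ortho, mu = gram_schmidt(basis)
        for j in range(k - 1, -1, -1):          # size reduction of b_k
            q = round(mu[k][j])
            if q:
                basis[k] = [a - q * b for a, b in zip(basis[k], basis[j])]
        ortho, mu = gram_schmidt(basis)
        # Lovász condition: either accept b_k or swap it with b_{k-1}
        if dot(ortho[k], ortho[k]) >= (delta - mu[k][k - 1] ** 2) * dot(ortho[k - 1], ortho[k - 1]):
            k += 1
        else:
            basis[k], basis[k - 1] = basis[k - 1], basis[k]
            k = max(k - 1, 1)
    return basis

print(lll([[1, 1], [2, 0]]))  # → [[1, 1], [1, -1]]
```

The swap decision under the Lovász condition is the sequential bottleneck; per the abstract, ASLR restructures the reduction so that all basis vectors are processed within one iteration, which is what makes the systolic array time-efficient.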
Abstract:
For a wide range of environmental, hydrological, and engineering applications there is a fast-growing need for high-resolution imaging. In this context, waveform tomographic imaging of crosshole georadar data is a powerful method able to provide images of pertinent electrical properties in near-surface environments with unprecedented spatial resolution. In contrast, conventional ray-based tomographic methods, which consider only a very limited part of the recorded signal (first-arrival traveltimes and maximum first-cycle amplitudes), suffer from inherent limitations in resolution and may prove to be inadequate in complex environments. For a typical crosshole georadar survey, the potential improvement in resolution when using waveform-based approaches instead of ray-based approaches is about one order of magnitude. Moreover, the spatial resolution of waveform-based inversions is comparable to that of common logging methods. While waveform tomographic imaging has become well established in exploration seismology over the past two decades, it is still comparatively underdeveloped in the georadar domain, despite corresponding needs. Recently, different groups have presented finite-difference time-domain waveform inversion schemes for crosshole georadar data, which are adaptations and extensions of Tarantola's seminal nonlinear generalized least-squares approach developed for the seismic case. First applications of these new crosshole georadar waveform inversion schemes to synthetic and field data have shown promising results. However, little is known about the limits and performance of such schemes in complex environments.
To this end, the general motivation of my thesis is to evaluate the robustness and limitations of waveform inversion algorithms for crosshole georadar data, in order to apply such schemes to a wide range of real-world problems.

One crucial issue in making any waveform scheme applicable and effective for real-world crosshole georadar problems is the accurate estimation of the source wavelet, which is unknown in reality. Waveform inversion schemes for crosshole georadar data require forward simulations of the wavefield in order to iteratively solve the inverse problem. Accurate knowledge of the source wavelet is therefore critically important for the successful application of such schemes: relatively small differences in the estimated source wavelet shape can lead to large differences in the resulting tomograms. In the first part of my thesis, I explore the viability and robustness of a relatively simple iterative deconvolution technique that incorporates the estimation of the source wavelet into the waveform inversion procedure rather than adding additional model parameters to the inversion problem. Extensive tests indicate that this source wavelet estimation technique is simple yet effective, providing remarkably accurate and robust estimates of the source wavelet in the presence of strong heterogeneity in both the dielectric permittivity and electrical conductivity, as well as significant ambient noise in the recorded data. Furthermore, our tests also indicate that the approach is insensitive to the phase characteristics of the starting wavelet, which is not the case when the wavelet estimation is directly incorporated into the inverse problem.

Another critical issue with crosshole georadar waveform inversion schemes that clearly needs to be investigated is the consequence of the common assumption of frequency-independent electromagnetic constitutive parameters.
This is crucial since, in reality, these parameters are known to be frequency-dependent and complex, and thus recorded georadar data may show significant dispersive behaviour. In particular, in the presence of water, there is a wide body of evidence showing that the dielectric permittivity can be significantly frequency-dependent over the GPR frequency range, due to a variety of relaxation processes. The second part of my thesis is therefore dedicated to evaluating the reconstruction limits of a non-dispersive crosshole georadar waveform inversion scheme in the presence of varying degrees of dielectric dispersion. I show that the inversion algorithm, combined with the iterative deconvolution-based source wavelet estimation procedure, which is partially able to account for frequency-dependent effects through an "effective" wavelet, performs remarkably well in weakly to moderately dispersive environments and is able to provide adequate tomographic reconstructions.
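The spectral-division idea behind source-wavelet estimation by deconvolution can be illustrated with a toy. The thesis embeds an iterative deconvolution inside the full waveform inversion; this one-shot version, the water-level regularisation, and all signal values below are assumptions for the sketch only.

```python
# Toy source-wavelet estimation by regularised spectral division: given
# recorded data d = g * w (known impulse response g convolved with the
# unknown wavelet w), recover w as  W(f) = D(f) G*(f) / (|G(f)|^2 + eps).
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def circular_convolve(g, w):
    return idft([a * b for a, b in zip(dft(g), dft(w))])

def estimate_wavelet(d, g, water_level=1e-9):
    """Water-level deconvolution: divide spectra, damping near-zero |G|."""
    D, G = dft(d), dft(g)
    W = [Dk * Gk.conjugate() / (abs(Gk) ** 2 + water_level)
         for Dk, Gk in zip(D, G)]
    return idft(W)

g = [1.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]   # known impulse response
w = [0.0, 1.0, -0.7, 0.2, 0.0, 0.0, 0.0, 0.0]  # "true" source wavelet
d = circular_convolve(g, w)                     # synthetic recorded trace

w_hat = estimate_wavelet(d, g)
print([round(x, 6) for x in w_hat])
```

Because g here has no spectral zeros, the wavelet is recovered almost exactly; the water-level term matters precisely in the realistic case where some frequencies of the simulated wavefield carry almost no energy.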
Abstract:
AIMS: Common carotid artery intima-media thickness (CCIMT) is widely used as a surrogate marker of atherosclerosis, given its predictive association with cardiovascular disease (CVD). The interpretation of CCIMT values has been hampered by the absence of reference values, however. We therefore aimed to establish reference intervals of CCIMT, obtained using what is probably the most accurate method available at present (i.e. echotracking), to help interpretation of these measures. METHODS AND RESULTS: We combined CCIMT data obtained by echotracking on 24 871 individuals (53% men; age range 15-101 years) from 24 research centres worldwide. Individuals without CVD, cardiovascular risk factors (CV-RFs), and BP-, lipid-, and/or glucose-lowering medication constituted a healthy sub-population (n = 4234) used to establish sex-specific equations for percentiles of CCIMT across age. With these equations, we generated CCIMT Z-scores in different reference sub-populations, thereby allowing for a standardized comparison between observed and predicted ('normal') values from individuals of the same age and sex. In the sub-population without CVD and treatment (n = 14 609), and in men and women, respectively, CCIMT Z-scores were independently associated with systolic blood pressure [standardized βs 0.19 (95% CI: 0.16-0.22) and 0.18 (0.15-0.21)], smoking [0.25 (0.19-0.31) and 0.11 (0.04-0.18)], diabetes [0.19 (0.05-0.33) and 0.19 (0.02-0.36)], total-to-HDL cholesterol ratio [0.07 (0.04-0.10) and 0.05 (0.02-0.09)], and body mass index [0.14 (0.12-0.17) and 0.07 (0.04-0.10)]. CONCLUSION: We estimated age- and sex-specific percentiles of CCIMT in a healthy population and assessed the association of CV-RFs with CCIMT Z-scores, which enables comparison of IMT values for (patient) groups with different cardiovascular risk profiles, helping interpretation of such measures obtained both in research and clinical settings.
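The Z-score standardisation described above is simple to sketch. The abstract does not reproduce the percentile equations themselves, so the mean and SD figures below are invented for illustration only.

```python
# Sketch of the CCIMT Z-score idea: standardised distance between an observed
# value and the value predicted for a healthy person of the same age and sex.
# The mean/SD figures below are invented placeholders, not the study's.
def ccimt_z_score(observed_mm, predicted_mean_mm, predicted_sd_mm):
    """Z = (observed - predicted healthy mean) / predicted healthy SD."""
    return (observed_mm - predicted_mean_mm) / predicted_sd_mm

# hypothetical 55-year-old man: observed 0.78 mm, healthy-population
# prediction 0.66 mm with SD 0.08 mm
z = ccimt_z_score(0.78, 0.66, 0.08)
print(f"Z = {z:.2f}")  # → Z = 1.50
```

Expressing CCIMT as a Z-score is what lets groups with different age and sex compositions be compared on one scale, as the conclusion notes.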
Abstract:
This study examines how the MPEG-2 Transport Stream used in DVB-T video transmission can be reliably and efficiently transferred to remote locations over an MPLS network. All the relevant technologies used in this scenario are also discussed in the study. This study was done for Digita Oy, which is a major radio and television content distributor in Finland. The theoretical part of the study begins with an introduction to MPLS technology and continues with an explanation of IP Multicast and its components. The fourth section discusses MPEG-2 and the formation and content of the MPEG-2 Transport Stream. These technologies were studied in relevant literature and RFC documentation. After the theoretical part of the study, the test setup and the test cases are presented. The results of the test cases, and the conclusions that can be drawn from them, are discussed in the last section of the study. The tests showed that it is possible to transfer digital video quite reliably over an MPLS network using IP Multicast. By configuring the equipment correctly, the recovery time of the network in case of a failure can be shortened remarkably. Also, the unwanted effect of other traffic on the critical video traffic can be eliminated by correctly defining the Quality of Service parameters. There are, however, some issues that need to be tested further before this setup can be used in broadcast networks. Reliable operation of IP Multicast and proper error correction are the main subjects for future testing.
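On the receiving side, picking up an IP Multicast stream comes down to joining a multicast group, which can be sketched with standard sockets. The group address and port below are placeholders, not Digita's actual configuration.

```python
# Minimal sketch of a multicast receiver for a UDP-carried MPEG-2 TS stream.
# The group address and port are placeholders (illustration only).
import socket
import struct

GROUP = "239.1.1.1"   # administratively scoped multicast address (placeholder)
PORT = 5004           # a port commonly used for TS-over-IP (placeholder)

def group_membership(group, interface="0.0.0.0"):
    """Pack an ip_mreq structure (group address + local interface) for
    the IP_ADD_MEMBERSHIP socket option."""
    return struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton(interface))

def make_receiver(group, port):
    """Open a UDP socket and join the multicast group (triggers an IGMP join,
    which the network's multicast routing then honours)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    group_membership(group))
    return sock  # sock.recv(...) would then yield the TS packets

print(group_membership(GROUP).hex())
```

The IGMP join issued by `IP_ADD_MEMBERSHIP` is what lets the MPLS/IP network replicate the stream only toward interested receivers, which is the efficiency argument for multicast distribution made in the study.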