929 results for Coupled set
Abstract:
We consider a universal set of quantum gates encoded within a perturbed decoherence-free subspace of four physical qubits. Using second-order perturbation theory and a measuring device modelled by an infinite set of harmonic oscillators, simply coupled to the system, we show that continuous observation of the coupling agent induces inhibition of the decoherence due to spurious perturbations. We thus advance the idea of protecting or even creating a decoherence-free subspace for processing quantum information.
Abstract:
In this work we investigate the energy gap between the ground state and the first excited state in a model of two single-mode Bose-Einstein condensates coupled via Josephson tunnelling. The energy gap is never zero when the tunnelling interaction is non-zero. The gap exhibits no local minimum below a threshold coupling, which separates a delocalized phase from a self-trapping phase that occurs in the absence of the external potential. Above this threshold one minimum occurs close to the Josephson regime, and a set of minima and maxima appear in the Fock regime. Expressions for the positions of these minima and maxima are obtained. The connection between these minima and maxima and the dynamics of the expectation value of the relative number of particles is analysed in detail. We find that the dynamics of the system changes as the coupling crosses these points.
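For readers who want to reproduce the gap structure numerically, the following is a minimal sketch that diagonalizes the standard two-site Bose-Hubbard Hamiltonian in the Fock basis; the specific Hamiltonian form, the parameter names `K`, `EJ`, `dE`, and the particle number are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def two_mode_gap(N, EJ, K, dE=0.0):
    """Energy gap of the standard two-site Bose-Hubbard model
    H = (K/8)(n1-n2)^2 - (dE/2)(n1-n2) - (EJ/2)(a1+ a2 + a2+ a1),
    built in the Fock basis |n, N-n>, n = 0..N (an assumed form)."""
    n = np.arange(N + 1)                                  # bosons in well 1
    diag = K / 8.0 * (2 * n - N) ** 2 - dE / 2.0 * (2 * n - N)
    H = np.diag(diag)
    # Josephson tunnelling couples |n, N-n> and |n+1, N-n-1>
    hop = -EJ / 2.0 * np.sqrt((n[:-1] + 1) * (N - n[:-1]))
    H += np.diag(hop, 1) + np.diag(hop, -1)
    E = np.linalg.eigvalsh(H)                             # sorted ascending
    return E[1] - E[0]

# Scan the coupling to see where the gap develops minima
for EJ in (0.1, 1.0, 10.0, 100.0):
    print(EJ, two_mode_gap(N=100, EJ=EJ, K=1.0))
```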
Abstract:
The requirement that systems continue to operate satisfactorily in the presence of faults has led to the development of techniques for the construction of fault-tolerant software. This thesis addresses the problem of error detection and recovery in distributed systems consisting of a set of communicating sequential processes. A method is presented for the `a priori' design of conversations for this class of distributed system. Petri nets are used to represent the state and to solve state reachability problems for concurrent systems. The dynamic behaviour of the system can be characterised by a state-change table derived from the state reachability tree. Systematic conversation generation is possible by defining a closed boundary on any branch of the state-change table. Relating the state-change table to process attributes ensures that all necessary processes are included in the conversation; the method also ensures properly nested conversations. An implementation of the conversation scheme using the concurrent language occam is proposed, with the structure of the conversation defined using the special features of occam. The proposed implementation gives a structure which is independent of the application and of the number of processes involved. Finally, the integrity of inter-process communications is investigated. The basic communication primitives used in message-passing systems are seen to have deficiencies when applied to systems with safety implications. Using a Petri net model, a boundary for a time-out mechanism is proposed which will increase the integrity of a system involving inter-process communications.
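As an illustration of the reachability analysis this kind of method builds on, here is a minimal breadth-first reachability sketch for a place/transition Petri net; the marking encoding, the `reachable` helper, and the two-process example net are hypothetical, not the thesis's construction.

```python
from collections import deque

def reachable(initial, transitions):
    """Breadth-first reachability for a place/transition Petri net.
    A marking is a tuple of per-place token counts; each transition is
    a (consume, produce) pair of per-place vectors (illustrative)."""
    seen = {initial}
    edges = []                                  # (marking, transition, marking')
    frontier = deque([initial])
    while frontier:
        m = frontier.popleft()
        for name, (take, give) in transitions.items():
            if all(mi >= ti for mi, ti in zip(m, take)):          # enabled?
                m2 = tuple(mi - ti + gi for mi, ti, gi in zip(m, take, give))
                edges.append((m, name, m2))
                if m2 not in seen:
                    seen.add(m2)
                    frontier.append(m2)
    return seen, edges

# Two communicating processes sharing a channel place
ts = {"send": ((1, 0, 0), (0, 1, 0)), "recv": ((0, 1, 0), (0, 0, 1))}
markings, tree = reachable((1, 0, 0), ts)
print(sorted(markings))
```

The `edges` list plays the role of the state-change table: each entry records which transition moves the system between two markings.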
Abstract:
The computer systems of today are characterised by data and program control that are distributed functionally and geographically across a network. A major concern in this environment is the operating-system activity of resource management for the different processors in the network. To ensure equity in load distribution and improved system performance, load balancing is often undertaken. Research in this field has so far been primarily concerned with a small set of algorithms operating on tightly-coupled distributed systems. More recent studies have investigated the performance of such algorithms in loosely-coupled architectures, but using a small set of processors. This thesis describes a simulation model developed to study the behaviour and general performance characteristics of a range of dynamic load balancing algorithms. Further, the scalability of these algorithms is discussed and a range of regionalised load balancing algorithms is developed. In particular, we examine the impact of network diameter and delay on the performance of such algorithms across a range of system workloads. The results suggest that simple dynamic policies scale well but lack the load stability of more complex global-average algorithms.
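A minimal sketch of one flavour of regionalised dynamic load balancing, assuming a simple queue-length policy; the `step` function, the threshold rule, and the two-region topology are illustrative assumptions rather than the algorithms studied in the thesis.

```python
import random

def step(queues, regions, threshold=2):
    """One round of a simple regionalised dynamic policy (illustrative).
    A node whose queue exceeds its region's average by `threshold`
    migrates one task to the least-loaded node in the same region."""
    for region in regions:
        avg = sum(queues[i] for i in region) / len(region)
        for i in region:
            if queues[i] > avg + threshold:
                target = min(region, key=lambda j: queues[j])
                queues[i] -= 1
                queues[target] += 1

queues = [random.randint(0, 10) for _ in range(8)]
regions = [range(0, 4), range(4, 8)]        # two regions of four nodes each
for _ in range(5):
    step(queues, regions)
print(queues)
```

Because each node only consults its own region, the policy avoids the global state exchange that limits the scalability of global-average schemes.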
Abstract:
This paper considers the role of HR in ethics and social responsibility and questions why, despite an acceptance of a role in ethical stewardship, the HR profession appears to be reluctant to embrace its responsibilities in this area. The study explores how HR professionals see their role in relation to ethical stewardship of the organisation, and the factors that inhibit its execution. A survey of 113 UK-based HR professionals, working in both domestic and multinational corporations, was conducted to explore their perceptions of the role of HR in maintaining ethical and socially responsible action in their organisations, and to identify features of the organisational environment which might help or hinder this role being effectively carried out. The findings indicate that although there is a clear understanding of the expectations of ethical stewardship, HR professionals often face difficulties in fulfilling this role because of competing tensions and perceptions of their role within their organisations. A way forward is proposed, which draws on the positive individual factors highlighted in this research to explore how approaches to organisational development (through positive deviance) may reduce these tensions to enable the better fulfilment of ethical responsibilities within organisations. The involvement and active modelling of ethical behaviour by senior management, coupled with an open approach to surfacing organisational values and building HR procedures, which support socially responsible action, are crucial to achieving socially responsible organisations. Finally, this paper challenges the HR profession, through professional and academic institutions internationally, to embrace their role in achieving this. © 2013 Taylor & Francis.
Abstract:
The necessity of elemental analysis techniques to solve forensic problems continues to expand as the samples collected from crime scenes grow in complexity. Laser ablation ICP-MS (LA-ICP-MS) has been shown to provide a high degree of discrimination between samples that originate from different sources. In the first part of this research, two laser ablation ICP-MS systems were compared, one using a nanosecond laser and the other a femtosecond laser source, for the forensic analysis of glass. The results showed that femtosecond LA-ICP-MS did not provide significant improvements in terms of accuracy, precision, and discrimination; however, it did provide lower detection limits. In addition, it was determined that even for femtosecond LA-ICP-MS an internal standard should be utilized to obtain accurate analytical results for glass analyses. In the second part, a method using laser-induced breakdown spectroscopy (LIBS) for the forensic analysis of glass was shown to provide excellent discrimination for a glass set consisting of 41 automotive fragments. The discrimination power was compared to two of the leading elemental analysis techniques, μXRF and LA-ICP-MS, and the results were similar; all methods generated >99% discrimination and the pairs found indistinguishable were similar. An extensive data analysis approach for LIBS glass analyses was developed to minimize Type I and II errors, en route to a recommendation of 10 ratios to be used for glass comparisons. Finally, a LA-ICP-MS method for the qualitative analysis and discrimination of gel ink sources was developed and tested on a set of ink samples. In the first discrimination study, qualitative analysis was used to obtain 95.6% discrimination in a blind study consisting of 45 black gel ink samples provided by the United States Secret Service. A 0.4% false exclusion (Type I) error rate and a 3.9% false inclusion (Type II) error rate were obtained for this study. In the second discrimination study, 99% discrimination power was achieved for a black gel ink pen set consisting of 24 self-collected samples. The two pairs found to be indistinguishable came from the same source of origin (the same manufacturer and type of pen, purchased in different locations). It was also found that gel ink from the same pen, regardless of age, was indistinguishable, as were gel ink pens (four pens) originating from the same pack.
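As a sketch of how pairwise elemental comparisons of this kind are often carried out, the following applies a mean ± k·s match-interval criterion to replicate measurements of element ratios; the criterion, the choice k = 4, and the `indistinguishable` helper are assumptions for illustration, not the study's protocol.

```python
import numpy as np

def indistinguishable(a, b, k=4.0):
    """Compare two fragments on a set of element ratios using a
    mean +/- k*s match-interval criterion (a common approach in
    forensic glass comparison; k = 4 is an assumed choice).
    `a`, `b`: replicate measurements, shape (n_replicates, n_ratios)."""
    mean_a = a.mean(axis=0)
    s_a = a.std(axis=0, ddof=1)
    mean_b = b.mean(axis=0)
    lo, hi = mean_a - k * s_a, mean_a + k * s_a
    # Fragments match only if every ratio of b falls inside a's interval
    return bool(np.all((mean_b >= lo) & (mean_b <= hi)))

rng = np.random.default_rng(0)
frag1 = rng.normal(1.00, 0.02, size=(5, 10))   # 5 replicates, 10 ratios
frag2 = rng.normal(1.01, 0.02, size=(5, 10))
print(indistinguishable(frag1, frag2))
```

Widening k lowers the false exclusion (Type I) rate at the cost of more false inclusions (Type II), which is the trade-off behind the error rates quoted above.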
Abstract:
In this study we present first results of a new model development, ECHAM5-JSBACH-wiso, in which we have incorporated the stable water isotopes H₂¹⁸O and HDO as tracers in the hydrological cycle of the coupled atmosphere-land surface model ECHAM5-JSBACH. The ECHAM5-JSBACH-wiso model was run under present-day climate conditions at two different resolutions (T31L19, T63L31). A comparison between ECHAM5-JSBACH-wiso and ECHAM5-wiso shows that the coupling has a strong impact on the simulated temperature and soil wetness. Driven by these changes of temperature and the hydrological cycle, the δ¹⁸O in precipitation also shows variations from -4 permil up to +4 permil. One of the strongest anomalies occurs over northeast Asia where, due to an increase of temperature, the δ¹⁸O in precipitation increases as well. In order to analyze the sensitivity of the fractionation processes over land, we compare a set of simulations with various implementations of these processes over the land surface. The simulations allow us to distinguish between no fractionation, fractionation included in the evaporation flux (from bare soil), and fractionation included in both evaporation and transpiration (water transport through plants) fluxes. While the isotopic composition of the soil water may change in δ¹⁸O by up to +8 permil, the simulated δ¹⁸O in precipitation shows only slight differences on the order of ±1 permil. The simulated isotopic composition of precipitation agrees well with the available observations from the GNIP (Global Network of Isotopes in Precipitation) database.
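For reference, the isotope bookkeeping behind these results can be written as follows; these are textbook definitions of the delta notation and equilibrium fractionation, not the model-specific implementation.

```latex
% Standard delta notation and equilibrium fractionation
% (textbook definitions, not the ECHAM5-JSBACH-wiso scheme itself):
\[
  \delta^{18}\mathrm{O}
  = \left( \frac{R_{\text{sample}}}{R_{\text{VSMOW}}} - 1 \right)
    \times 10^{3}~\text{permil},
  \qquad
  R = \frac{{}^{18}\mathrm{O}}{{}^{16}\mathrm{O}},
\]
\[
  R_{\text{vapour}} = \frac{R_{\text{liquid}}}{\alpha_{l/v}(T)} .
\]
```

Here α_l/v(T) is the temperature-dependent liquid-vapour equilibrium fractionation factor applied to the evaporation flux; switching such factors on or off in the land-surface fluxes is what distinguishes the sensitivity simulations described above.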
Abstract:
Coupled map lattices (CML) can describe many relaxation and optimization algorithms currently used in image processing. We recently introduced the "plastic-CML" as a paradigm to extract (segment) objects in an image. Here, the image is mapped to a set of forces applied to a metal sheet, which is allowed to undergo plastic deformation parallel to the applied forces. In this paper we present an analysis of our plastic-CML in one and two dimensions, deriving the nature and stability of its stationary solutions. We also detail how to use the CML in image processing, how to set the system parameters, and present examples of it at work. We conclude that the plastic-CML is able to segment images with large amounts of noise and a large dynamic range of pixel values, and is suitable for a very large scale integration (VLSI) implementation.
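Since the paper's plastic-CML equations are not reproduced here, the following is a generic diffusively coupled map lattice with a clipped, plastic-yield-like update as a stand-in; `cml_relax`, its parameters, and the saturation rule are illustrative assumptions, not the published dynamics.

```python
import numpy as np

def cml_relax(image, steps=100, eps=0.2, yield_limit=0.05):
    """Generic diffusively coupled map lattice relaxing toward an image,
    with a clipped ('plastic-yield'-like) update standing in for the
    paper's plastic-CML dynamics (an illustrative assumption).
    `image` drives each site like an external force on the sheet."""
    u = np.zeros_like(image, dtype=float)
    for _ in range(steps):
        # 4-neighbour diffusive coupling, zero-flux boundaries via padding
        p = np.pad(u, 1, mode="edge")
        lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * u
        du = eps * lap + (image - u)
        u += np.clip(du, -yield_limit, yield_limit)   # plastic saturation
    return u

img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0       # square object
img += 0.3 * np.random.default_rng(1).standard_normal(img.shape)
u = cml_relax(img)
print(u[16, 16], u[0, 0])   # inside vs. outside the segmented object
```

The clipping is what gives the lattice its noise tolerance: large noise-driven updates saturate instead of propagating, so only coherent object regions accumulate deformation.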
Abstract:
This document deals with the shape optimization of aerodynamic profiles. The objective is to reduce the drag coefficient of a given profile without penalising the lift coefficient. A set of control points defining the geometry is parameterized as a B-Spline curve, and these points are modified automatically by means of CFD analysis. A given shape is defined by a user, and a valid volumetric CFD domain is constructed from this planar data and a set of user-defined parameters. The construction process couples 2D and 3D meshing algorithms into in-house code. The volume of air surrounding the airfoil and the mesh quality are also parametrically defined. Standard NACA profiles were used to test the algorithm, by first obtaining their control points. The Navier-Stokes equations were solved for turbulent, steady-state flow of compressible fluids using the k-epsilon model and the SIMPLE algorithm. To obtain data for the optimization process, a utility to extract drag and lift data from the CFD simulation was added; after a simulation is run, drag and lift data are passed to the optimization process. A gradient-based method using steepest descent was implemented to define the magnitude and direction of the displacement of each control point. The control points and other parameters defined as design variables are iteratively modified to achieve an optimum. Preliminary results on conceptual examples show a decrease in drag and a change in geometry that obeys aerodynamic principles.
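A minimal sketch of the steepest-descent loop described above, with the CFD evaluation replaced by a stub objective; `steepest_descent`, the step size, and the toy drag function are assumptions for illustration (the thesis couples this loop to a k-epsilon SIMPLE solver).

```python
import numpy as np

def steepest_descent(ctrl_pts, drag, step=0.01, h=1e-3, iters=20):
    """Steepest-descent update of B-spline control points against a
    drag objective, with the gradient estimated by finite differences.
    `drag(ctrl_pts)` stands in for a full CFD evaluation (a stub)."""
    x = np.asarray(ctrl_pts, dtype=float)
    for _ in range(iters):
        g = np.zeros_like(x)
        for i in np.ndindex(x.shape):              # one evaluation per component
            xp = x.copy()
            xp[i] += h
            g[i] = (drag(xp) - drag(x)) / h
        x -= step * g / (np.linalg.norm(g) + 1e-12)   # move against the gradient
    return x

# Toy quadratic "drag" with its minimum at the origin, as a placeholder
toy_drag = lambda p: float(np.sum(p ** 2))
print(steepest_descent(np.array([[0.3, -0.2], [0.1, 0.4]]), toy_drag))
```

In the real pipeline each gradient component costs a full CFD run, which is why the number of control points is kept small.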
Abstract:
Idealized ocean models are known to develop intrinsic multidecadal oscillations of the meridional overturning circulation (MOC). Here we explore the role of ocean–atmosphere interactions in this low-frequency variability. We use a coupled ocean–atmosphere model set up in a flat-bottom aquaplanet geometry with two meridional boundaries. The model is run at three different horizontal resolutions (4°, 2° and 1°) in both the ocean and atmosphere. At all resolutions, the MOC exhibits spontaneous variability on multidecadal timescales in the range 30–40 years, associated with the propagation of large-scale baroclinic Rossby waves across the Atlantic-like basin. The region where these waves grow unstable, via the long-wave limit of baroclinic instability, shifts from the eastern boundary at coarse resolution to the western boundary at higher resolution. Increasing the horizontal resolution enhances both intrinsic atmospheric variability and ocean–atmosphere interactions. In particular, the simulated atmospheric annular mode becomes significantly correlated with the MOC variability at 1° resolution. An ocean-only simulation conducted for this specific case underscores the disruptive but not essential influence of air–sea interactions on the low-frequency variability. This study demonstrates that an atmospheric annular mode leading MOC changes by about 2 years (as found at 1° resolution) does not imply that the low-frequency variability originates from air–sea interactions.
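For orientation, the multidecadal timescale of such oscillations follows from the textbook phase speed of long first-baroclinic Rossby waves; this is a standard relation, not a result of this study.

```latex
% Westward phase speed of long (non-dispersive) first-baroclinic
% Rossby waves and the implied basin-crossing time (textbook theory):
\[
  c = -\,\beta R_d^{2},
  \qquad
  T_{\text{crossing}} \approx \frac{L_{\text{basin}}}{\lvert c \rvert},
\]
```

where β is the meridional gradient of the Coriolis parameter and R_d the first baroclinic Rossby radius of deformation; at mid-latitudes this crossing time is of the order of decades, consistent with the 30–40 year variability reported above.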
Abstract:
In this work, we present a thorough assessment of the performance of some representative double-hybrid density functionals (revPBE0-DH-NL and B2PLYP-NL), as well as their parent hybrid and GGA counterparts, in combination with the most modern version of the nonlocal (NL) van der Waals correction, for describing very large weakly interacting molecular systems dominated by noncovalent interactions. Prior to the assessment, an accurate and homogeneous set of reference interaction energies was computed for the supramolecular complexes constituting the L7 and S12L data sets by using the novel, precise, and efficient DLPNO-CCSD(T) method at the complete basis set limit (CBS). Correction of the basis set superposition error and inclusion of the deformation energies (for the S12L set) were crucial for obtaining precise DLPNO-CCSD(T)/CBS interaction energies. Among the density functionals evaluated, the double hybrids revPBE0-DH-NL and B2PLYP-NL with the three-body dispersion correction provide remarkably accurate association energies, very close to chemical accuracy. Overall, the NL van der Waals approach combined with proper density functionals can be seen as an accurate and affordable computational tool for the modeling of large weakly bonded supramolecular systems.
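Schematically, a double-hybrid functional of the kind assessed here combines the following ingredients; this is a generic textbook form, and the mixing coefficients are functional-specific rather than quoted from the paper.

```latex
% Generic double-hybrid exchange-correlation energy with a nonlocal
% van der Waals term (schematic; coefficients depend on the functional,
% e.g. revPBE0-DH or B2PLYP):
\[
  E_{xc} = a_x E_x^{\mathrm{HF}} + (1 - a_x)\,E_x^{\mathrm{DFT}}
         + (1 - a_c)\,E_c^{\mathrm{DFT}} + a_c\,E_c^{\mathrm{PT2}}
         + E_c^{\mathrm{NL}},
\]
```

where E_c^PT2 is the second-order perturbative correlation contribution and E_c^NL the nonlocal dispersion correction whose performance is the focus of the assessment.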
Abstract:
Transcription by RNA polymerase can induce the formation of hypernegatively supercoiled DNA both in vivo and in vitro. This phenomenon has been explained by a "twin-supercoiled-domain" model of transcription where a positively supercoiled domain is generated ahead of the RNA polymerase and a negatively supercoiled domain behind it. In E. coli cells, transcription-induced topological change of chromosomal DNA is expected to actively remodel chromosomal structure and greatly influence DNA transactions such as transcription, DNA replication, and recombination. In this study, an IPTG-inducible, two-plasmid system was established to study transcription-coupled DNA supercoiling (TCDS) in E. coli topA strains. By performing topology assays, biological studies, and RT-PCR experiments, TCDS in E. coli topA strains was found to be dependent on promoter strength. Expression of a membrane-insertion protein was not needed for strong promoters, although co-transcriptional synthesis of a polypeptide may be required. More importantly, it was demonstrated that the expression of a membrane-insertion tet gene was not sufficient for the production of hypernegatively supercoiled DNA. These phenomena can be explained by the "twin-supercoiled-domain" model of transcription, in which the friction force applied to E. coli RNA polymerase plays a critical role in the generation of hypernegatively supercoiled DNA. Additionally, in order to explore whether TCDS is able to greatly influence a coupled DNA transaction, such as activating a divergently-coupled promoter, an in vivo system was set up to study TCDS and its effects on the supercoiling-sensitive leu-500 promoter. The leu-500 mutation is a single A-to-G point mutation in the -10 region of the promoter controlling the leu operon, and the AT to GC mutation is expected to increase the energy barrier for the formation of a functional transcription open complex. Using luciferase assays and RT-PCR experiments, it was demonstrated that transient TCDS, "confined" within promoter regions, is responsible for activation of the coupled transcription initiation of the leu-500 promoter. Taken together, these results demonstrate that transcription is a major chromosomal remodeling force in E. coli cells.
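The topological bookkeeping invoked by the twin-supercoiled-domain model rests on standard linking-number relations; these are textbook identities, not results of this study.

```latex
% Linking-number identities behind the twin-supercoiled-domain picture
% (standard DNA topology):
\[
  Lk = Tw + Wr,
  \qquad
  \sigma = \frac{Lk - Lk_0}{Lk_0},
\]
\[
  \Delta Lk_{\text{ahead}} = -\,\Delta Lk_{\text{behind}} > 0 .
\]
```

Because the linking number Lk is conserved within a closed topological domain, a translocating polymerase must generate compensating positive supercoils ahead of itself and negative supercoils behind, which is the source of the hypernegative supercoiling when the positive supercoils are selectively relaxed.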
Abstract:
A miniaturised gas analyser is described and evaluated, based on the use of a substrate-integrated hollow waveguide (iHWG) coupled to a micro-sized near-infrared spectrophotometer comprising a linear variable filter and an array of InGaAs detectors. This gas sensing system was applied to analyse surrogate samples of natural fuel gas containing methane, ethane, propane and butane, quantified using multivariate regression models based on partial least squares (PLS) algorithms with Savitzky-Golay first-derivative data preprocessing. External validation of the obtained models reveals root mean square errors of prediction of 0.37, 0.36, 0.67 and 0.37% (v/v) for methane, ethane, propane and butane, respectively. The developed sensing system provides particularly rapid response times upon composition changes of the gaseous sample (approximately 2 s) due to the minute volume of the iHWG-based measurement cell. The sensing system developed in this study is fully portable with a hand-held-sized analyser footprint, and is thus ideally suited for field analysis. Last but not least, the obtained results corroborate the potential of NIR-iHWG analysers for monitoring the quality of natural gas and petrochemical gaseous products.
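A minimal sketch of the calibration pipeline named above, run on synthetic spectra; the Savitzky-Golay window, polynomial order, and PLS component count are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

# Savitzky-Golay first-derivative preprocessing of NIR spectra,
# followed by PLS regression (synthetic data for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 256))                  # 40 spectra, 256 channels
y = 2.0 * X[:, 100] + rng.normal(scale=0.1, size=40)   # e.g. % (v/v) methane

X_d1 = savgol_filter(X, window_length=11, polyorder=2, deriv=1, axis=1)

pls = PLSRegression(n_components=5).fit(X_d1[:30], y[:30])
y_hat = pls.predict(X_d1[30:]).ravel()
rmsep = float(np.sqrt(np.mean((y_hat - y[30:]) ** 2)))
print(f"RMSEP = {rmsep:.3f} % (v/v)")           # external-validation error
```

The derivative step removes baseline offsets from the spectra, and RMSEP on held-out samples is the same figure of merit quoted in the abstract.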
Abstract:
Negative-ion mode electrospray ionization, ESI(-), with Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) was coupled with Partial Least Squares (PLS) regression and variable selection methods to estimate the total acid number (TAN) of Brazilian crude oil samples. Typically, ESI(-)-FT-ICR mass spectra present a resolving power of ca. 500,000 and a mass accuracy of less than 1 ppm, producing a data matrix containing over 5700 variables per sample. These variables correspond to heteroatom-containing species detected as deprotonated molecules, [M - H]⁻ ions, identified primarily as naphthenic acids, phenols and carbazole-analog species. The TAN values for all samples ranged from 0.06 to 3.61 mg of KOH g⁻¹. To facilitate the spectral interpretation, three methods of variable selection were studied: variable importance in the projection (VIP), interval partial least squares (iPLS) and elimination of uninformative variables (UVE). The UVE method proved the most appropriate for selecting important variables, reducing the number of variables to 183 and producing a root mean square error of prediction of 0.32 mg of KOH g⁻¹. By reducing the size of the data, it was possible to relate the selected variables to their corresponding molecular formulas, thus identifying the main chemical species responsible for the TAN values.
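For reference, the VIP score used as one of the three selection methods has the standard definition below, written in common PLS notation (not necessarily the paper's).

```latex
% Variable importance in the projection (VIP) score for variable j
% over A latent variables (standard definition):
\[
  \mathrm{VIP}_j =
  \sqrt{\, p \,
  \frac{\sum_{a=1}^{A} \left( w_{ja} / \lVert \mathbf{w}_a \rVert \right)^{2}
        \, \mathrm{SS}_a}
       {\sum_{a=1}^{A} \mathrm{SS}_a}} ,
\]
```

where p is the number of variables, w_ja the PLS weight of variable j on component a, and SS_a the variance of y explained by component a; variables with VIP greater than 1 are typically retained.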