867 results for Measurement-based quantum computing


Relevance: 30.00%

Abstract:

The JÄKÄLA algorithm (Jatkuvan Äänitehojakautuman algoritmi Käytävien Äänikenttien LAskentaan, a continuous sound power distribution algorithm for calculating corridor sound fields) and its NUMO and APPRO calculation equations are based on the symmetry of the image sources of a real sound source located in a corridor. NUMO is the equation for the numerical solution of the algorithm and APPRO for its approximate solution. In deriving the algorithm it was assumed that the absorption material was evenly distributed over the sound-reflecting surfaces of the corridor. Transforming the image-source plane of a rectangular corridor into a continuous sound power distribution involves three steps. First, the rectangular image-source plane is transformed into a square one. Next, the equivalent image sources of the square image-source plane are moved onto a coordinate axis to form a discrete row of image sources. Finally, the row of image sources is transformed into a continuous sound power distribution, so that the sound pressure level at a receiving point in the corridor can be computed by integrating over the continuous sound power distribution. The validity of the JÄKÄLA algorithm was verified using the tested commercial AKURI program. The AKURI program also gave a good idea of how the values computed with the NUMO and APPRO equations might differ from values measured in real corridors. The NUMO and APPRO equations of the JÄKÄLA algorithm were also tested by comparing their results with sound pressure level measurements in three corridors of different types. This study shows that it is possible to derive, on the basis of acoustic image theory, a calculation algorithm that can be applied to quick on-site estimation of the sound fields of long corridors. Both theoretical calculations and practical sound pressure level measurements in real corridors showed that the prediction accuracy of the equations of the JÄKÄLA algorithm was excellent in ideal corridors and good in real corridors without sound-reflecting structures. The NUMO and APPRO equations appear to work well in corridors whose cross-section is nearly square and in which the largest absorption coefficient of the surfaces is at most ten times the smallest one. The greatest shortcoming of the NUMO and APPRO equations is that they take into account neither the different absorption coefficients of the surfaces nor sounds reflected from objects. The NUMO and APPRO equations deviated most from the measured values in corridors where the absorption coefficient of two opposite surfaces was very large and that of the other pair very small, and in corridors with massive sound-reflecting pillars and beams. In the corridors studied, however, the NUMO and APPRO equations of the JÄKÄLA algorithm gave clearly more accurate values than Kuttruff's approximate equation and the basic equation of statistical room acoustics. The calculation accuracy of the JÄKÄLA algorithm has been tested in only four real corridors. To develop the algorithm further, the opposite surfaces of a corridor and their absorption coefficients should be treated in pairs in the calculation. To confirm the validity of the algorithm, more measurements are needed in corridors whose absorption material distributions differ from one another.
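As an illustration of the image-source construction the abstract describes, the following Python sketch sums mirror-image sources of a point source between the two reflecting wall pairs of a rectangular corridor with a uniform absorption coefficient. It is a generic textbook-style approximation under the abstract's uniform-absorption assumption, not the NUMO or APPRO equations themselves; all names and parameters are hypothetical.

```python
import numpy as np

def corridor_spl(lw_db, alpha, width, height, src, rec, order=50):
    """Image-source estimate of the sound pressure level in a rectangular
    corridor with uniformly distributed absorption (illustrative sketch).

    lw_db         : source sound power level [dB re 1 pW]
    alpha         : average absorption coefficient of the reflecting surfaces
    width, height : corridor cross-section [m]
    src, rec      : (x, y, z) source and receiver positions, z along the corridor
    order         : maximum image index per lateral direction
    """
    w_src = 1e-12 * 10 ** (lw_db / 10.0)            # source sound power [W]
    p2 = 0.0                                        # accumulated intensity
    for m in range(-order, order + 1):              # images across the width
        for n in range(-order, order + 1):          # images across the height
            # Mirror coordinates: q*W + x0 for even q, q*W + (W - x0) for odd q.
            ix = m * width + (src[0] if m % 2 == 0 else width - src[0])
            iy = n * height + (src[1] if n % 2 == 0 else height - src[1])
            r = np.sqrt((ix - rec[0]) ** 2 + (iy - rec[1]) ** 2
                        + (src[2] - rec[2]) ** 2)
            refl = abs(m) + abs(n)                  # number of wall reflections
            p2 += (1.0 - alpha) ** refl * w_src / (4.0 * np.pi * r ** 2)
    return 10.0 * np.log10(p2 / 1e-12)              # level re I_ref = 1 pW/m^2

# Example: 80 dB source, alpha = 0.1, 2 m x 3 m cross-section, receiver 20 m away.
print(corridor_spl(80.0, 0.1, 2.0, 3.0, (1.0, 1.5, 0.0), (1.0, 1.5, 20.0)))
```

The NUMO and APPRO equations replace the double sum above with an integral over a continuous sound power distribution, which is what makes them fast enough for on-site estimation.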

Relevance: 30.00%

Abstract:

In accordance with Moore's law, the increasing number of on-chip integrated transistors has enabled modern computing platforms with not only higher processing power but also more affordable prices. As a result, these platforms, including portable devices, workstations and data centres, are becoming an inevitable part of human society. However, with the demand for portability and the rising cost of power, energy efficiency has emerged as a major concern for modern computing platforms. As the complexity of on-chip systems increases, Network-on-Chip (NoC) has proven to be an efficient communication architecture which can further improve system performance and scalability while reducing the design cost. Therefore, in this thesis, we study and propose energy optimization approaches based on the NoC architecture, with special focus on the following aspects. As the architectural trend of future computing platforms, 3D systems have many benefits including higher integration density, smaller footprint, heterogeneous integration, etc. Moreover, 3D technology can significantly improve network communication and effectively avoid long wirings, and therefore provide higher system performance and energy efficiency. Given the dynamic nature of on-chip communication in large-scale NoC-based systems, run-time system optimization is of crucial importance in order to achieve higher system reliability and, essentially, energy efficiency. In this thesis, we propose an agent-based system design approach where agents are on-chip components which monitor and control system parameters such as supply voltage, operating frequency, etc. With this approach, we have analysed the implementation alternatives for dynamic voltage and frequency scaling (DVFS) and power gating techniques at different granularities, which reduce both dynamic and leakage energy consumption. Topologies, being one of the key factors for NoCs, are also explored for energy-saving purposes. A Honeycomb NoC architecture is proposed in this thesis with turn-model based deadlock-free routing algorithms. Our analysis and simulation-based evaluation show that Honeycomb NoCs outperform their Mesh-based counterparts in terms of network cost, system performance as well as energy efficiency.
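To make the agent idea concrete, here is a minimal Python sketch of a per-region monitoring agent that applies DVFS and power gating based on observed utilization. This is our illustration, not the thesis implementation; the voltage/frequency levels and thresholds are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class VFLevel:
    voltage: float   # supply voltage [V]
    freq: float      # operating frequency [GHz]

# Hypothetical discrete voltage/frequency operating points.
LEVELS = [VFLevel(0.7, 0.4), VFLevel(0.9, 0.8), VFLevel(1.1, 1.2)]

class RegionAgent:
    """Agent monitoring one NoC region and controlling its V/f level."""
    def __init__(self):
        self.level = 1          # index into LEVELS
        self.gated = False      # power-gated when idle

    def step(self, utilization: float):
        """Adjust V/f for one control period from observed utilization."""
        if utilization < 0.05:
            self.gated = True                          # cut leakage when idle
        else:
            self.gated = False
            if utilization > 0.8 and self.level < len(LEVELS) - 1:
                self.level += 1                        # scale up for performance
            elif utilization < 0.3 and self.level > 0:
                self.level -= 1                        # scale down to save dynamic energy
        return LEVELS[self.level], self.gated

agent = RegionAgent()
for u in (0.9, 0.85, 0.2, 0.02, 0.5):
    print(agent.step(u))
```

Scaling down voltage and frequency reduces dynamic energy (which grows with V²f), while gating an idle region removes its leakage; the granularity at which agents apply these knobs is exactly the design trade-off the thesis analyses.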

Relevance: 30.00%

Abstract:

In this Thesis various aspects of memory effects in the dynamics of open quantum systems are studied. We develop a general theoretical framework for open quantum systems beyond the Markov approximation which allows us to investigate different sources of memory effects and to develop methods for harnessing them in order to realise controllable open quantum systems. In the first part of the Thesis a characterisation of non-Markovian dynamics in terms of information flow is developed and applied to study different sources of memory effects. Namely, we study nonlocal memory effects which arise due to initial correlations between two local environments, and further the memory effects induced by initial correlations between the open system and the environment. The last part focuses on describing two all-optical experiments in which the information flow between the system and the environment can be controlled through selective preparation of the initial environment states. In the first experiment the system is driven from the Markovian to the non-Markovian regime and the degree of non-Markovianity is determined. In the second experiment we observe the nonlocal nature of the memory effects and provide a novel method to experimentally quantify frequency correlations in photonic environments via polarisation measurements.
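The information-flow characterisation mentioned above is commonly quantified via the trace distance of a pair of evolved states, which can only decrease under Markovian dynamics; a revival witnesses information flowing back to the system. The sketch below is a generic illustration with a toy dephasing model, not the experiments of the Thesis.

```python
import numpy as np

def trace_distance(rho, sigma):
    """D(rho, sigma) = 0.5 * Tr|rho - sigma| for density matrices."""
    eigvals = np.linalg.eigvalsh(rho - sigma)       # rho - sigma is Hermitian
    return 0.5 * np.sum(np.abs(eigvals))

def dephase(rho, k):
    """Qubit dephasing channel with coherence factor k(t) in [-1, 1]."""
    out = rho.copy()
    out[0, 1] *= k
    out[1, 0] *= k
    return out

rho1 = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)    # |+><+|
rho2 = np.array([[0.5, -0.5], [-0.5, 0.5]], dtype=complex)  # |-><-|

# An oscillating k(t), e.g. from a structured photonic environment, makes the
# trace distance revive: a signature of non-Markovian information backflow.
for t in np.linspace(0.0, 2.0, 9):
    k = np.exp(-t) * np.cos(3.0 * t)
    D = trace_distance(dephase(rho1, k), dephase(rho2, k))
    print(f"t = {t:.2f}   D = {D:.3f}")
```

For this state pair D equals |k(t)|, so any non-monotonic coherence factor directly produces the trace-distance revivals used to define the degree of non-Markovianity.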

Relevance: 30.00%

Abstract:

In this Master's thesis, agent-based modeling has been used to analyze maintenance strategy related phenomena. The main research question was: what does the agent-based model made for this study tell us about how different maintenance strategy decisions affect the profitability of equipment owners and maintenance service providers? Thus, the main outcome of this study is an analysis of how profitability can be increased in an industrial maintenance context. To answer the question, a literature review of maintenance strategy, agent-based modeling, and maintenance modeling and optimization was first conducted. This review provided the basis for building the agent-based model, which followed a standard simulation modeling procedure. The simulation results from the agent-based model were then used to answer the research question. Specifically, the results of the modeling and this study are: (1) optimizing the point at which a machine is maintained increases profitability for the owner of the machine, and under certain conditions also for the maintainer; (2) time-based pricing of maintenance services leads to a zero-sum game between the parties; (3) value-based pricing of maintenance services leads to a win-win game between the parties, if the owners of the machines share a substantial amount of their value with the maintainers; and (4) error in machine condition measurement is a critical parameter in optimizing maintenance strategy, and there is real systemic value in having more accurate machine condition measurement systems.
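As a flavour of what such a model can show, the Python sketch below lets an owner agent search for the condition threshold at which a machine should be maintained, and shows how measurement error distorts the optimum, echoing findings (1) and (4). All parameters are hypothetical and the model is far simpler than the one built in the thesis.

```python
import random

def simulate(threshold, meas_error, steps=10_000, seed=1):
    """Profit of running one machine under a condition-based policy:
    maintain when the *measured* condition drops below `threshold`."""
    rng = random.Random(seed)
    condition, profit = 1.0, 0.0
    for _ in range(steps):
        condition -= rng.uniform(0.001, 0.01)          # stochastic degradation
        measured = condition + rng.gauss(0.0, meas_error)
        if condition <= 0.0:                           # breakdown: expensive
            profit -= 50.0
            condition = 1.0
        elif measured < threshold:                     # preventive maintenance: cheap
            profit -= 5.0
            condition = 1.0
        else:
            profit += condition                        # revenue from operation
    return profit

# Sweep thresholds for increasing measurement error.
for err in (0.0, 0.05, 0.2):
    best = max((simulate(th / 100, err), th / 100) for th in range(1, 60))
    print(f"error={err:.2f}  best threshold={best[1]:.2f}  profit={best[0]:.0f}")
```

With larger measurement error the policy must maintain earlier (a higher threshold) to hedge against unnoticed breakdowns, which is the systemic value of more accurate condition measurement referred to above.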

Relevance: 30.00%

Abstract:

Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis. These challenges include survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias caused by them in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility to use combined longitudinal survey-register data. The Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at the person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted in order to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey-register data can be used to analyse and compare the non-response and attrition processes, test the missingness mechanism type and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, overreporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data. Neither the Missing At Random (MAR) assumption about non-response and attrition mechanisms, nor the classical assumptions about measurement errors, turned out to be valid. Measurement errors in both spell durations and spell outcomes were found to cause bias in estimates from event history models. Low measurement accuracy affected the estimates of the baseline hazard most. The design-based estimates based on data from respondents to all waves of interest and weighted by the last-wave weights displayed the largest bias. Using all the available data, including the spells of attriters until the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to design weights reduces bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazards model estimators. The study discusses the implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers who develop methods to correct for non-sampling biases in event history data.
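For illustration, a weighted Kaplan-Meier estimator of the kind IPCW relies on is sketched below. Each subject enters the risk set with a weight equal to the inverse of its estimated probability of remaining uncensored, so that dependent censoring (e.g. attrition related to unemployment) is down-weighted out of the estimate. This is our simplification with toy data, not the study's code.

```python
import numpy as np

def weighted_km(time, event, weight):
    """Return (event time, survival estimate) pairs from weighted data."""
    order = np.lexsort((1 - event, time))   # sort by time; events before censorings
    time, event, weight = time[order], event[order], weight[order]
    at_risk = weight.sum()                  # weighted size of the risk set
    surv, curve = 1.0, []
    for t, d, w in zip(time, event, weight):
        if d:                               # weighted event at time t
            surv *= 1.0 - w / at_risk
            curve.append((t, surv))
        at_risk -= w                        # subject leaves the risk set
    return curve

# Toy usage: durations, event indicators (1 = event, 0 = censored), IPCW weights.
t = np.array([2.0, 3.0, 3.0, 5.0, 8.0])
e = np.array([1, 1, 0, 1, 0])
w = np.array([1.0, 1.2, 1.2, 1.5, 1.1])
for time_i, s in weighted_km(t, e, w):
    print(f"S({time_i:.0f}) = {s:.3f}")
```

With all weights equal to one this reduces to the ordinary Kaplan-Meier estimator; the IPCW correction enters purely through the weights.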

Relevance: 30.00%

Abstract:

This study concerns performance measurement and management in a collaborative network. Collaboration between companies has increased in recent years due to the turbulent operating environment. The literature shows that there is a need for more comprehensive research on performance measurement in networks and on the use of measurement information in their management. This study examines the development process and uses of a performance measurement system supporting performance management in a collaborative network. There are two main research questions: how to design a performance measurement system for a collaborative network, and how to manage performance in a collaborative network. The work can be characterised as a qualitative single-case study. The empirical data was collected in a Finnish collaborative network, which consists of a leading company and a reseller network. The work is based on five research articles applying various research methods. The research questions are examined at the network level and at the single network partner level. The study contributes to the earlier literature by producing a new and deeper understanding of network-level performance measurement and management. A three-step process model is presented to support the design of the performance measurement system. The process model has been tested in another collaborative network. The study also examines the factors affecting the process of designing the measurement system. The results show that a participatory development style, network culture, and outside facilitators have a positive effect on the design process. The study increases understanding of how to manage performance in a collaborative network and what kinds of uses of performance information can be identified in a collaborative network. The results show that the performance measurement system is an applicable tool for managing the performance of a network. The results reveal that trust and openness increased during the utilisation of the performance measurement system, and operations became more transparent. The study also presents a management model that evaluates the maturity of performance management in a collaborative network. The model is a practical tool that helps to analyse the current stage of performance management in a collaborative network and to develop it further.

Relevance: 30.00%

Abstract:

In this study, cantilever-enhanced photoacoustic spectroscopy (CEPAS) was applied in different drug detection schemes. The study was divided into two different applications: trace detection of vaporized drugs and drug precursors in the gas phase, and detection of cocaine abuse in hair. The main focus, however, was the study of hair samples. In the gas phase, methyl benzoate, a hydrolysis product of cocaine hydrochloride, and benzyl methyl ketone (BMK), a precursor of amphetamine and methamphetamine, were investigated. In the solid phase, hair samples from cocaine overdose patients were measured and compared to a drug-free reference group. As hair consists mostly of long fibrous proteins generally called keratin, proteins from fingernails and saliva were also studied for comparison. Different measurement setups were applied in this study. Gas measurements were carried out using quantum cascade lasers (QCL) as the source in the photoacoustic detection. An external-cavity (EC) design was also used for a broader tuning range. Detection limits of 3.4 parts per billion (ppb) for methyl benzoate and 26 ppb for BMK in 0.9 s were achieved with the EC-QCL PAS setup. The achieved detection limits are sufficient for realistic drug detection applications. The measurements from drug overdose patients were carried out using Fourier transform infrared (FTIR) PAS. The drug-containing hair samples and drug-free samples were both measured with the FTIR-PAS setup, and the measured spectra were analyzed statistically with principal component analysis (PCA). The two groups could be separated by their spectra using PCA and proper spectral pre-processing. To improve the method, EC-QCL measurements of the hair samples and studies using photoacoustic microsampling techniques were performed. High-quality, high-resolution spectra with a broad tuning range were recorded from a single hair fiber. This broad tuning range of an EC-QCL has not previously been used in the photoacoustic spectroscopy of solids. However, no drug detection studies were performed with the EC-QCL solid-phase setup.
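The statistical step, PCA on pre-processed spectra, can be sketched as follows. The spectra here are synthetic placeholders with an illustrative band position, not the measured FTIR-PAS data.

```python
import numpy as np

rng = np.random.default_rng(0)
wavenumbers = np.linspace(800, 4000, 400)            # spectral axis [cm^-1]
base = np.exp(-((wavenumbers - 1650) / 300) ** 2)    # broad keratin-like band

# Build two groups of noisy spectra; the second carries a weak extra band.
spectra, labels = [], []
for group, extra in (("drug-free", 0.0), ("cocaine", 0.05)):
    for _ in range(10):
        s = base + extra * np.exp(-((wavenumbers - 1710) / 40) ** 2)
        spectra.append(s + rng.normal(0.0, 0.01, wavenumbers.size))
        labels.append(group)
X = np.array(spectra)

# Pre-processing (mean-centering) followed by PCA via the SVD.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * S                                       # sample coordinates in PC space
for lab, pc in zip(labels, scores[:, :2]):
    print(f"{lab:9s}  PC1={pc[0]:+.3f}  PC2={pc[1]:+.3f}")
```

Plotting the first two score columns separates the groups when the spectral difference exceeds the noise, which is in essence how the overdose and reference groups were distinguished above.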

Relevance: 30.00%

Abstract:

Video transcoding refers to the process of converting a digital video from one format into another. It is a compute-intensive operation; therefore, transcoding of a large number of simultaneous video streams requires a large amount of computing resources. Moreover, to handle different load conditions in a cost-efficient manner, the video transcoding service should be dynamically scalable. Infrastructure as a Service (IaaS) clouds currently offer computing resources, such as virtual machines, under the pay-per-use business model, and can thus be leveraged to provide a cost-efficient, dynamically scalable video transcoding service. To use computing resources efficiently in a cloud computing environment, cost-efficient virtual machine provisioning is required to avoid over-utilization and under-utilization of virtual machines. This thesis presents proactive virtual machine resource allocation and de-allocation algorithms for video transcoding in cloud computing. Since users' requests for videos may change at different times, a check is required to see if the current computing resources are adequate for the video requests; therefore, work on admission control is also provided. In addition to admission control, temporal resolution reduction is used to avoid jitter in a video. Furthermore, in a cloud computing environment such as Amazon EC2, computing resources are more expensive than storage resources. Therefore, to avoid repetition of transcoding operations, a transcoded video needs to be stored for a certain time. Storing all videos for the same amount of time is not cost-efficient either, because popular transcoded videos have high access rates while unpopular transcoded videos are rarely accessed. This thesis provides a cost-efficient computation and storage trade-off strategy, which keeps videos in the video repository as long as it is cost-efficient to store them. This thesis also proposes video segmentation strategies for bit rate reduction and spatial resolution reduction video transcoding. The evaluation of the proposed strategies is performed using a message passing interface (MPI) based video transcoder, which uses a coarse-grained parallel processing approach where video is segmented at the group-of-pictures level.
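At its core, the computation/storage trade-off is an expected-cost comparison per video: keep the transcoded copy while storing it is cheaper than re-transcoding it on demand. The sketch below uses assumed prices; the thesis' actual cost model is more detailed.

```python
def keep_in_storage(size_gb, access_rate_per_month,
                    storage_cost_gb_month=0.023,    # assumed object-storage price
                    transcode_cost=0.40):           # assumed VM cost per transcode
    """True if storing the transcoded video for the next month is cheaper
    than deleting it and re-transcoding on each expected request."""
    storing = size_gb * storage_cost_gb_month
    retranscoding = access_rate_per_month * transcode_cost
    return storing <= retranscoding

print(keep_in_storage(2.0, access_rate_per_month=1.5))   # popular video: keep
print(keep_in_storage(2.0, access_rate_per_month=0.05))  # unpopular video: drop
```

Because access rates decay over time, re-evaluating this inequality periodically naturally evicts videos once their popularity no longer justifies the storage cost.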

Relevance: 30.00%

Abstract:

The superconducting gap is a basic characteristic of a superconductor. While the cuprates and conventional phonon-mediated superconductors are characterized by distinct d- and s-wave pairing symmetries with nodal and nodeless gap distributions respectively, the superconducting gap distributions in iron-based superconductors are rather diversified. While nodeless gap distributions have been directly observed in Ba1–xKxFe2As2, BaFe2–xCoxAs2, LiFeAs, KxFe2–ySe2, and FeTe1–xSex, signatures of a nodal superconducting gap have been reported in LaOFeP, LiFeP, FeSe, KFe2As2, BaFe2–xRuxAs2, and BaFe2(As1–xPx)2. Due to the multiplicity of the Fermi surface in these compounds, s± and d pairing states can be both nodeless and nodal. A nontrivial orbital structure of the order parameter, in particular the presence of gap nodes, leads to disorder effects that are much richer in dx2–y2-wave superconductors than in conventional materials. In contrast to the s-wave case, the Anderson theorem does not apply, and nonmagnetic impurities exhibit a strong pair-breaking influence. In addition, a finite concentration of disorder produces a nonzero density of quasiparticle states at zero energy, which results in a considerable modification of the thermodynamic and transport properties at low temperatures. The influence of order parameter symmetry on the vortex core structure in iron-based pnictide and chalcogenide superconductors has been investigated in the framework of the quasiclassical Eilenberger equations. The main results of the thesis are as follows. The vortex core characteristics, such as the cutoff parameter ξh and the core size ξ2, determined as the distance at which the density of the vortex supercurrent reaches its maximum, are calculated over wide ranges of temperature, impurity scattering rate, and magnetic field. The cutoff parameter ξh(B, T, Γ) determines the form factor of the flux-line lattice, which can be obtained in μSR, NMR, and SANS experiments. A comparison among the applied pairing symmetries is made. In contrast to s-wave systems, in dx2–y2-wave superconductors ξh/ξc2 always increases with the scattering rate Γ. The field dependence of the cutoff parameter strongly affects the second moment of the magnetic field distribution, resulting in a significant difference from nonlocal London theory. It is found that the normalized ξ2/ξc2(B/Bc2) dependence increases with pair-breaking impurity scattering (interband scattering for s±-wave and intraband impurity scattering for d-wave superconductors). Here, ξc2 is the Ginzburg-Landau coherence length determined from the upper critical field Bc2 = Φ0/(2πξc2²), where Φ0 is the flux quantum. Two types of ξ2/ξc2 magnetic field dependence are obtained for s± superconductors: it has a minimum at low temperatures and small impurity scattering, transforming into a monotonically decreasing function at strong scattering and high temperatures. The second kind of dependence has also been found for d-wave superconductors at intermediate and high temperatures. In contrast, impurity scattering results in a decrease of the ξ2/ξc2(B/Bc2) dependence in s++ superconductors. A reasonable agreement was found between the calculated ξh/ξc2 values and those obtained experimentally in nonstoichiometric BaFe2–xCoxAs2 (μSR) and stoichiometric LiFeAs (SANS). The values of ξh/ξc2 are much less than one for the first compound and much more than one for the other; this is explained by the different influence of two factors: the value of the impurity scattering rate and the pairing symmetry.
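For reference, the relation used above to extract the Ginzburg-Landau coherence length from the measured upper critical field can be written out as:

```latex
B_{c2} = \frac{\Phi_0}{2\pi \xi_{c2}^{2}}
\qquad\Longleftrightarrow\qquad
\xi_{c2} = \sqrt{\frac{\Phi_0}{2\pi B_{c2}}},
\qquad
\Phi_0 = \frac{h}{2e} \approx 2.07\times 10^{-15}\ \mathrm{Wb}.
```

The dimensionless ratios ξh/ξc2 and ξ2/ξc2 discussed in the abstract are normalized to this length scale.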

Relevance: 30.00%

Abstract:

Cloud computing is a practically relevant paradigm in computing today. Testing is one of the distinct areas where cloud computing can be applied. This study addressed the applicability of cloud computing for testing within organizational and strategic contexts. The study focused on issues related to the adoption, use and effects of cloud-based testing. The study applied empirical research methods. The data was collected through interviews with practitioners from 30 organizations and was analysed using the grounded theory method. The research process consisted of four phases. The first phase studied the definitions and perceptions related to cloud-based testing. The second phase observed cloud-based testing in real-life practice. The third phase analysed quality in the context of cloud application development. The fourth phase studied the applicability of cloud computing in the gaming industry. The results showed that cloud computing is relevant and applicable for testing and application development, as well as other areas, e.g., game development. The research identified the benefits, challenges, requirements and effects of cloud-based testing, and formulated a roadmap and strategy for adopting cloud-based testing. The study also explored quality issues in cloud application development. As a special case, the research included a study on the applicability of cloud computing in game development. The results can be used by companies to enhance their processes for managing cloud-based testing, evaluating practical cloud-based testing work and assessing the appropriateness of cloud-based testing for specific testing needs.

Relevance: 30.00%

Abstract:

Smartphones have become part and parcel of our lives, where mobility provides the freedom of not being bound by time and space. In addition, the number of smartphones produced each year is skyrocketing. However, this has also created discrepancies, or fragmentation, among devices and OSes, which in turn has made it exceedingly hard for developers to deliver hundreds of similarly featured applications in various versions for market consumption. This thesis is an attempt to investigate whether cloud-based mobile development platforms can mitigate and eventually eliminate fragmentation challenges. During this research, we selected and analyzed the most popular cloud-based development platforms and tested their integrated cloud features. This research showed that cloud-based mobile development platforms may be able to reduce mobile fragmentation and enable the use of a single codebase to deliver a mobile application for different platforms.

Relevance: 30.00%

Abstract:

R,S-sotalol, a β-blocker drug with class III antiarrhythmic properties, is prescribed to patients with ventricular, atrial and supraventricular arrhythmias. A simple and sensitive HPLC-fluorescence method is described for the quantification of the R,S-sotalol racemate in 500 µl of plasma. R,S-sotalol and its internal standard (atenolol) were eluted after 5.9 and 8.5 min, respectively, from a 4-µm C18 reverse-phase column using a mobile phase consisting of 80 mM KH2PO4, pH 4.6, and acetonitrile (95:5, v/v) at a flow rate of 0.5 ml/min, with detection at λex = 235 nm and λem = 310 nm. This method, validated on the basis of R,S-sotalol measurements in spiked blank plasma, presented 20 ng/ml sensitivity, 20-10,000 ng/ml linearity, and 2.9 and 4.8% intra- and interassay precision, respectively. Plasma sotalol concentrations were determined by applying this method to five high-risk patients with atrial fibrillation admitted to the Emergency Service of the Medical School Hospital, who received sotalol, 160 mg po, as a loading dose. Blood samples were collected from a peripheral vein at zero, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0 and 24.0 h after drug administration. A two-compartment open model was applied. The data obtained, expressed as means, were: CMAX = 1230 ng/ml, TMAX = 1.8 h, AUC = 10,645 ng·h/ml, Kab = 1.23 h⁻¹, α = 0.95 h⁻¹, β = 0.09 h⁻¹, t(1/2)β = 7.8 h, ClT/F = 3.94 ml min⁻¹ kg⁻¹, and Vd/F = 2.53 l/kg. Good systemic availability and fast absorption were obtained. Drug distribution was reduced to the same extent in terms of total body clearance when patients and healthy volunteers were compared, and consequently the elimination half-life remained unchanged. Thus, the method described in the present study is useful for therapeutic drug monitoring, pharmacokinetic investigation and pharmacokinetic-pharmacodynamic studies of sotalol in patients with tachyarrhythmias.
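The reported parameters (Kab, α, β) correspond to the usual tri-exponential plasma profile of a two-compartment open model with first-order absorption; the equations below are standard pharmacokinetics, not quoted from the paper:

```latex
C(t) = A\,e^{-\alpha t} + B\,e^{-\beta t} - (A + B)\,e^{-K_{ab} t},
\qquad
t_{1/2,\beta} = \frac{\ln 2}{\beta},
\qquad
\mathrm{Cl}_T/F = \frac{\text{Dose}}{\mathrm{AUC}}.
```

As a consistency check, the reported β = 0.09 h⁻¹ gives t(1/2)β = ln 2 / 0.09 ≈ 7.7 h, matching the 7.8 h quoted above.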

Relevance: 30.00%

Abstract:

In this Master's thesis we discuss issues related to the measurement of the radar cross section (effective scattering area) based on the Doppler effect. The detected signal was modeled. Narrowband filtering of the signal using a low-frequency amplifier was examined. The parameters of the proposed horn antennas were studied, and radar cross section charts for three different objects were obtained.
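For context, the standard relations behind such a Doppler-based radar cross section measurement (textbook formulas, not taken from the thesis) are:

```latex
% Doppler shift separating the moving target's echo:
f_d = \frac{2 v_r}{\lambda},
\qquad
% radar equation relating received power to the cross section \sigma:
P_r = \frac{P_t\, G^{2} \lambda^{2}\, \sigma}{(4\pi)^{3} R^{4}},
```

where v_r is the target's radial velocity, λ the wavelength, G the antenna gain, and R the range; filtering at f_d with a low-frequency amplifier isolates the target echo from clutter before σ is extracted.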

Relevance: 30.00%

Abstract:

Due to various advantages such as flexibility, scalability and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has been raising the operating frequency of a chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables parallel execution of highly intensive applications. With their computational power, these platforms are likely to be used in various application domains: from home electronics (e.g., video processing) to complex critical control systems. On the other hand, the utilization of the resources has to be efficient in terms of performance and power consumption. However, the high level of on-chip integration increases the probability of various faults and the creation of hotspots leading to thermal problems. Additionally, radiation, which is frequent in space but becomes an issue also at ground level, can cause transient faults. This can eventually induce faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach to designing agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into the agents' functionality. The use of these mechanisms enhances the resilience of the underlying platform whilst maintaining performance at an acceptable level. The design of the system proceeds according to a formal refinement approach which allows us to ensure correct behaviour of the system with respect to postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach where the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models in, e.g., a hardware description language, namely VHDL.
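As one concrete, hypothetical instance of such a dynamic reconfiguration mechanism, the Python sketch below remaps tasks away from cores flagged as faulty or overheating. The thesis develops these mechanisms as formally refined models rather than ad hoc code; this is only an intuition-building illustration.

```python
def remap(task_map, core_status):
    """Remap tasks off unhealthy cores onto the least-loaded healthy ones.

    task_map    : {task: core} current placement
    core_status : {core: 'ok' | 'faulty' | 'hot'} agent-reported health
    """
    healthy = [c for c, s in core_status.items() if s == "ok"]
    if not healthy:
        raise RuntimeError("no healthy cores left")
    load = {c: 0 for c in healthy}          # tasks assigned per healthy core
    new_map = {}
    for task, core in task_map.items():
        if core_status[core] != "ok":
            core = min(healthy, key=lambda c: load[c])   # migrate the task
        new_map[task] = core
        if core in load:
            load[core] += 1
    return new_map

# Core 1 is overheating: its tasks migrate, trading some performance for resilience.
print(remap({"t1": 0, "t2": 1, "t3": 1}, {0: "ok", 1: "hot", 2: "ok"}))
```

In the refinement-based approach described above, the correctness of such a remapping step (e.g., that no task is ever left on a faulty core) is proven as a property of the formal model before any implementation is derived.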

Relevance: 30.00%

Abstract:

The structure and optical properties of thin films based on C60 materials are studied. A reproducible vacuum method for producing thin fullerene films with Cd impurity on Si, glass and mica surfaces was developed. The surface morphology of the films was investigated by AFM and SEM methods. Ab initio quantum-chemical calculations of the geometry, total energy and excited energy states of complex fullerene-cadmium telluride supramolecules were performed. Photoluminescence spectra of composite thin films based on C60 were measured before and after X-ray irradiation. The intensity of the additional peaks is determined by the charge composition, which depends on the type of substrate. These results are interpreted as the appearance of dipole-allowed transitions in the spectrum of fullerene excited singlet states caused by interaction with cadmium telluride. X-ray irradiated films were investigated, and additional peaks in the photoluminescence spectra were detected. These peaks appear as a result of the formation of molecular complexes from the C60-CdTe mixture and dimerization of the films. Density functional B3LYP quantum-chemical calculations for C60CdTe molecular complexes, and for (C60)2 and C120O dimers, were performed to elucidate some of the experimental results.