922 results for 2D Convolutional Codes
Abstract:
This paper presents the synthesis of the coordination polymers ∞[Ln(DPA)(HDPA)] (DPA = 2,6-pyridinedicarboxylate; Ln = Tb and Gd) and their structural and spectroscopic properties. The structural study reveals that ∞[Ln(DPA)(HDPA)] has a single Ln3+ ion coordinated to two H2DPA ligands in a tridentate coordination mode, while two other H2DPA ligands establish a syn-bridge with a symmetry-related Ln3+, forming a two-dimensional structure. The spectroscopic studies show that the ∞[Tb(DPA)(HDPA)] compound has a high quantum yield (q ≈ 50.0%), due to the large contribution of the radiative decay rate. Moreover, the triplet level lies sufficiently above the 5D4 emitting level of the Tb3+ ion, avoiding a back-transfer process between these states.
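For reference (a standard relation in lanthanide luminescence, not a formula quoted from this abstract), the intrinsic quantum yield reflects the competition between radiative and non-radiative decay rates,

    q_{\mathrm{Ln}} = \frac{A_{\mathrm{rad}}}{A_{\mathrm{rad}} + A_{\mathrm{nrad}}},

so a large radiative rate A_rad relative to A_nrad directly raises the quantum yield, as claimed above.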
Abstract:
In wireless communications the transmitted signals may be affected by noise. The receiver must decode the received message, which can be mathematically modelled as a search for the closest lattice point to a given vector. This problem is known to be NP-hard in general, but for communications applications there exist algorithms that, for a certain range of system parameters, offer polynomial expected complexity. The purpose of the thesis is to study the sphere decoding algorithm introduced in the article On Maximum-Likelihood Detection and the Search for the Closest Lattice Point, which was published by M.O. Damen, H. El Gamal and G. Caire in 2003. We concentrate especially on its computational complexity when used in space–time coding. Computer simulations are used to study how different system parameters affect the computational complexity of the algorithm. The aim is to find ways to improve the algorithm from the complexity point of view. The main contribution of the thesis is the construction of two new modifications to the sphere decoding algorithm, which are shown to perform faster than the original algorithm within a range of system parameters.
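The decoder itself is not reproduced in the abstract; as a rough illustration of the closest-lattice-point search it performs, a minimal depth-first sphere decoder can be sketched as follows (the function, the QR-based recursion, and the example alphabet are generic textbook choices, not code from the thesis):

```python
import numpy as np

def sphere_decode(H, y, alphabet, radius=np.inf):
    """Depth-first sphere decoder: find s in alphabet^n minimizing ||y - H s||.

    H: (m, n) real channel/lattice generator matrix (m >= n, full column rank)
    y: received vector of length m
    alphabet: allowed symbol values, e.g. [-3, -1, 1, 3] for 4-PAM
    radius: initial squared-distance bound (np.inf => unconstrained start)
    """
    m, n = H.shape
    Q, R = np.linalg.qr(H)           # H = Q R, R upper triangular (n x n)
    z = Q.T @ y                      # rotated receive vector
    best = {"s": None, "d2": radius}
    s = np.zeros(n)

    def search(level, partial_d2):
        if level < 0:                # all symbols fixed: candidate lattice point
            if partial_d2 < best["d2"]:
                best["d2"], best["s"] = partial_d2, s.copy()
            return
        # interference from the already-fixed symbols s[level+1:]
        upper = z[level] - R[level, level + 1:] @ s[level + 1:]
        center = upper / R[level, level]
        # try candidates closest to the unconstrained solution first
        for sym in sorted(alphabet, key=lambda c: abs(c - center)):
            d2 = partial_d2 + (upper - R[level, level] * sym) ** 2
            if d2 < best["d2"]:      # prune branches falling outside the sphere
                s[level] = sym
                search(level - 1, d2)

    search(n - 1, 0.0)
    return best["s"], best["d2"]
```

A call such as sphere_decode(H, y, [-3, -1, 1, 3]) then returns the maximum-likelihood symbol vector for a 4-PAM constellation; the expected complexity of such a search is what the thesis studies and improves.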
Abstract:
Azole derivatives are the main therapeutic resource against Candida albicans infection in immunocompromised patients. Nevertheless, the widespread use of azoles has led to reduced effectiveness and the selection of resistant strains. In order to guide the development of novel antifungal drugs, 2D-QSAR models based on topological descriptors or molecular fragments were developed for a dataset of 74 molecules. The optimal fragment-based model (r² = 0.88, q² = 0.73 and r²pred = 0.62 with 6 PCs) and descriptor-based model (r² = 0.82, q² = 0.79 and r²pred = 0.70 with 2 PCs), when analysed synergistically, suggested that the triazolone ring and lipophilic properties are both important for antifungal activity.
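The abstract does not state how the models were fitted; purely as an illustration, fitted r² and cross-validated q² statistics of the kind reported above can be computed for a latent-variable (PC-based) regression along the following lines (the descriptor matrix, activity values and component count below are placeholders, not data from the paper):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

np.random.seed(0)
X = np.random.rand(74, 50)     # 74 molecules x 50 topological descriptors (dummy)
y = np.random.rand(74)         # activity values (dummy)

pls = PLSRegression(n_components=2)        # number of latent components (PCs)
pls.fit(X, y)
r2 = r2_score(y, pls.predict(X))           # fitted r2

y_cv = cross_val_predict(pls, X, y, cv=LeaveOneOut())  # leave-one-out predictions
q2 = r2_score(y, y_cv)                     # cross-validated q2
print(f"r2 = {r2:.2f}, q2 = {q2:.2f}")
```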
Abstract:
Ultrafast 2D NMR is a powerful methodology that allows a 2D NMR spectrum to be recorded in a fraction of a second. However, due to the numerous non-conventional parameters involved in this methodology, its implementation is not a trivial task. Here, an optimized experimental protocol is carefully described to ensure an efficient implementation of ultrafast NMR. The ultrafast spectra resulting from this implementation are presented for two widely used 2D NMR experiments, COSY and HSQC, obtained in 0.2 s and 41 s, respectively.
Abstract:
This Master's thesis focuses on the compliance of crane electrics with electrical safety standards. An overview and short comparison of the standards in force worldwide in this field is made in order to understand their requirements. Basic concepts of proper electrical circuit design are presented. The characteristics, construction and operating principles of overcurrent protective devices are studied in detail. The electrics of a basic crane are designed according to assumed customer demands, and compliance with the requirements of the standards is checked. Solutions to achieve better compliance on some issues are proposed. Emphasis is placed on the latest requirements of the National Electrical Code (NEC) and the standards of Underwriters Laboratories (UL). The requirements of the International Electrotechnical Commission (IEC) are also taken into account.
Abstract:
The purpose of this study was to analyze nursing ethics education from the perspective of nurses' codes of ethics in the basic nursing education programmes in polytechnics in Finland, with the following research questions: What is known about nurses' codes in practice and education, what contents of the codes are taught, what teaching and evaluation methods are used, which demographic variables are associated with the teaching, how adequate is nurse educators' knowledge for teaching the codes, what is nursing students' knowledge of and ability to apply the codes, and what are participants' opinions of the need for and applicability of the codes, and of their importance in nursing ethics education. The aim of the study was to identify strengths and possible problem areas in the teaching of the codes and of nursing ethics in general. The knowledge gained from this study can be used for developing nursing ethics curricula and the teaching of ethics in theory and practice. The data collection was targeted at all polytechnics in Finland providing basic nursing education (i.e. Bachelor of Health Care). The target groups were all nurse educators teaching ethics and all graduating nursing students in the academic year 2006. A total of 183 educators and 214 students from 24 polytechnics participated. The data was collected using a structured questionnaire with four open-ended questions, designed for this study. The data was analysed with SPSS (14.0) and the open-ended questions by inductive content analysis. Descriptive statistics were used to summarize the data. Inferential statistics were used to estimate the differences between the participant groups. The reliability of the questionnaire was estimated with Cronbach's coefficient alpha. The literature review revealed that empirical research on the codes was scarce, and minimal in the area of education. Teaching of the nurses' codes themselves and the embedded ethical concepts was extensive, teaching of the functions of the codes and related laws and agreements was moderate, but teaching of the codes of other health care professions was modest. Issues related to the nurse-patient relationship were emphasized, whereas wider social dimensions of the codes were less emphasized. Educators' and students' descriptions of teaching emphasized mainly the same teaching contents, but there were statistically significant differences between the groups in that educators assessed their teaching to be more extensive than students perceived it to have been. The use of teaching and evaluation methods was rather narrow and conventional. However, educators' and students' descriptions of the methods used differed statistically significantly. Students' knowledge of the codes and their ability to apply them in practice was assessed as mediocre by educators and by students themselves. Most educators assessed their own knowledge of the codes as adequate for teaching them, as did most of the students. Educators who regarded their knowledge as adequate taught the codes more extensively than those who assessed their knowledge as less adequate. Likewise, students who assessed their educators' knowledge as adequate perceived the teaching of the codes to be more extensive. Otherwise, educators' and students' demographic variables had little association with their descriptions of the teaching. According to the participants, nurses need their own codes, and the codes are also regarded as applicable in practice. The codes are an important element in nursing ethics education, but their teaching needs development.
Further research should focus on the organization of ethics teaching in the curricula, on the teaching process, on the evaluation of the effectiveness of ethics education, and on educators' competence. The meaning and functions of the codes at all levels of nursing also deserve attention. A more versatile use of research methods would be beneficial in gaining new knowledge.
Abstract:
Nowadays, Western companies are considered responsible for the social and environmental issues in their whole supply chains. To influence the practices of their suppliers, Western companies have created supplier codes of conduct (SCCs) which express their requirements. Suppliers' compliance with the SCCs is checked through audits. The purpose of this thesis is to analyze SCCs as a means for Western companies to ensure socially and environmentally responsible actions in their global supply chains; the sub-objectives are to find out 1) how well the SCCs and their auditing work at suppliers' production sites and 2) how possible problems related to SCCs and their auditing can be solved. This is a qualitative study carried out in the form of a case study with two case companies. Both primary and secondary data are used. The primary data was collected in the form of interviews with the case company representatives and three external experts. Based on a theoretical framework of previous research in the fields of corporate social responsibility and supply chain management, a model with eleven factors which influence the success of SCC implementation and the auditing of SCC implementation is drafted. Several best practices to help solve and avoid possible problems related to SCC implementation and auditing have also been identified from previous research. Based on the findings of this study, the theoretical model has been updated by adding two new influential factors. It seems that how well the SCC and its auditing work at suppliers' production sites depends on the joint effect of thirteen influential factors: the buyer's purchasing policy, the supplier's motivation, the buyer's commitment, the solving of agency problems, the contents of the SCC, the supplier's role and the buyer-supplier relationship, the complexity of the supply chain, the limitations of smaller buyers, cooperation through a business association or multi-stakeholder system, the role of the supplier's employees, SCC-related communication and the supplier's understanding, cheating in audits, and the auditors. The possible problems related to SCCs and their auditing can be solved by adopting best practices. Nine of the theoretical best practices stand out from the findings of this study: 1) two-way communication and collecting feedback from suppliers, 2) a philosophy of continuous improvement, 3) long-term business relationships with the supplier, 4) informing the supplier about the advantages of SCC compliance, 5) rewarding code-compliant suppliers, 6) building collaborative, good buyer-supplier relationships, 7) supporting and advising the supplier, 8) joining a business association or multi-stakeholder system, and 9) interviewing the supplier's employees as part of the audits.
Abstract:
Distributed storage systems are studied. Interest in such systems has grown considerably due to the increasing amount of information that needs to be stored in data centers and various kinds of cloud systems. There are many kinds of solutions for storing information on distributed devices, depending on the needs of the system designer. This thesis studies the design of such storage systems as well as their fundamental limits. Specifically, the subjects of interest include heterogeneous distributed storage systems, distributed storage systems with the exact repair property, and locally repairable codes. For distributed storage systems with either functional or exact repair, capacity results are proved. In the case of locally repairable codes, the minimum distance is studied. Constructions of exact-repairing codes between the minimum bandwidth regeneration (MBR) and minimum storage regeneration (MSR) points are given. These codes exceed the time-sharing line between the extremal points in many cases. Other properties of exact-regenerating codes are also studied. For the heterogeneous setup, the main result is that the capacity of such systems is always smaller than or equal to the capacity of a homogeneous system with symmetric repair, with average node size and average repair bandwidth. A randomized construction of a locally repairable code with good minimum distance is given. It is shown that a random linear code of a certain natural type has good minimum distance with high probability. Other properties of locally repairable codes are also studied.
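For context (a standard bound from the locally repairable codes literature, not a result quoted from this abstract): a code of length n and dimension k in which every symbol has locality r has minimum distance at most

    d \le n - k - \left\lceil \frac{k}{r} \right\rceil + 2,

which reduces to the classical Singleton bound when r ≥ k; "good minimum distance" for a randomized construction is usually understood relative to this bound.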
Abstract:
Microscopic visualization, especially in transparent micromodels, can provide valuable information for understanding transport phenomena at the pore scale in different processes occurring in porous materials (food, timber, soils, etc.). Micromodel studies focus mainly on the observation of multi-phase flow, which is closer to reality. The aim of this study was to investigate the flexography process and its use in the manufacture of polyester resin transparent micromodels, with application to carrots. The materials used to implement a flexo station for micromodel construction were a thermoregulated water bath, a UV-light exposure chamber, a photosensitive substance (photopolymer), RTV silicone, polyester resin, and glass plates. In this work, data on the size distribution of the particular kind of carrot used are presented, and a transparent micromodel with a square cross-section and a log-normal pore size distribution, with pore radii ranging from 10 to 110 µm (average of 22 µm and a micromodel size of 10 × 10 cm), was built. Finally, it is stressed that the protocol for producing 2D polyester resin transparent micromodels was successfully implemented.
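As an illustration only (not from the paper), a pore-radius population comparable to the one described above can be generated from a log-normal distribution; the sigma and mu values below are assumptions chosen to give a mean near 22 µm, not parameters reported by the authors:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5                           # assumed log-space standard deviation
mu = np.log(22.0) - sigma**2 / 2.0    # chosen so the log-normal mean is ~22 um
radii = rng.lognormal(mean=mu, sigma=sigma, size=10_000)
radii = radii[(radii >= 10.0) & (radii <= 110.0)]   # keep the 10-110 um range

print(f"mean pore radius ~ {radii.mean():.1f} um, "
      f"min {radii.min():.1f} um, max {radii.max():.1f} um")
```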
Abstract:
This thesis addresses the coolability of porous debris beds in the context of severe accident management of nuclear power reactors. In a hypothetical severe accident at a Nordic-type boiling water reactor, the lower drywell of the containment is flooded for the purpose of cooling, in a water pool, the core melt discharged from the reactor pressure vessel. The melt is fragmented and solidified in the pool, ultimately forming a porous debris bed that generates decay heat. The properties of the bed determine the limiting value for the heat flux that can be removed from the debris to the surrounding water without the risk of re-melting. The coolability of porous debris beds has been investigated experimentally by measuring the dryout power in electrically heated test beds of different geometries. The geometries represent the debris bed shapes that may form in an accident scenario. The focus is especially on heap-like, realistic geometries which facilitate the multi-dimensional infiltration (flooding) of coolant into the bed. Spherical and irregular particles have been used to simulate the debris. The experiments have been modeled using 2D and 3D simulation codes applicable to fluid flow and heat transfer in porous media. Based on the experimental and simulation results, an interpretation of the dryout behavior in complex debris bed geometries is presented, and the validity of the codes and models for dryout predictions is evaluated. According to the experimental and simulation results, the coolability of the debris bed depends on both the flooding mode and the height of the bed. In the experiments, it was found that multi-dimensional flooding increases the dryout heat flux and coolability of a heap-shaped debris bed by 47–58% compared to the dryout heat flux of a classical, top-flooded bed of the same height. However, heap-like beds are higher than flat, top-flooded beds, which results in a larger steam flux at the top of the bed. This counteracts the effect of the multi-dimensional flooding. Based on the measured dryout heat fluxes, the maximum height of a heap-like bed can only be about 1.5 times the height of a top-flooded, cylindrical bed in order to preserve the direct benefit from the multi-dimensional flooding. In addition, studies were conducted to evaluate the hydrodynamically representative effective particle diameter, which is applied in simulation models to describe debris beds that consist of irregular particles with considerable size variation. The results suggest that the effective diameter is small, closest to the mean diameter based on the number or length of particles.
Abstract:
In this study, finite element analyses and experimental tests are carried out in order to investigate the effect of loading type and symmetry on the fatigue strength of three different non-load-carrying welded joints. The current codes and recommendations do not give explicit instructions on how to consider the degree of bending in the loading and the effect of symmetry in the fatigue assessment of welded joints. The fatigue assessment is done using the effective notch stress method and linear elastic fracture mechanics. Transverse attachment and cover plate joints are analyzed using 2D plane strain element models in FEMAP/NxNastran and Franc2D software, and the longitudinal gusset case is analyzed using solid element models in Abaqus and Abaqus/XFEM software. By means of the evaluated effective notch stress range and stress intensity factor range, the nominal fatigue strength is assessed. The experimental tests consist of fatigue tests of transverse attachment joints with a total of 12 specimens. In the tests, the effect of both loading type and symmetry on the fatigue strength is studied. The finite element analyses showed that the fatigue strength of the asymmetric joint is higher under tensile loading and the fatigue strength of the symmetric joint is higher under bending loading in terms of the nominal and hot spot stress methods. Linear elastic fracture mechanics indicated that bending reduces the stress intensity factors when the crack size is relatively large, since the normal stress decreases at the crack tip due to the stress gradient. Under tensile loading, the experimental tests corresponded with the finite element analyses. However, the fatigue-tested joints subjected to bending showed that bending increased the fatigue strength of non-load-carrying welded joints, and the fatigue test results did not fully agree with the fatigue assessment. According to the results, it can be concluded that in tensile loading the symmetry of the joint distinctly affects the fatigue strength. The fatigue life assessment of joints loaded in bending is challenging, since it depends on whether crack initiation or propagation is predominant.
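As a hedged illustration of the fracture-mechanics side of such an assessment (not taken from the thesis), a crude propagation-life estimate can be obtained by integrating Paris' law da/dN = C (ΔK)^m with ΔK = Y Δσ √(πa); the material constants, geometry factor, stress range and crack sizes below are placeholder values, not data used in the study:

```python
import numpy as np

C, m = 3.0e-13, 3.0           # assumed Paris-law constants (mm, MPa*sqrt(mm) units)
Y = 1.12                      # assumed geometry factor for a shallow edge crack
delta_sigma = 100.0           # assumed nominal stress range [MPa]
a0, af = 0.1, 10.0            # assumed initial and final crack depths [mm]

a = np.linspace(a0, af, 100_000)
dK = Y * delta_sigma * np.sqrt(np.pi * a)          # stress intensity factor range
dN_da = 1.0 / (C * dK**m)                          # cycles per unit crack growth
N = np.sum(0.5 * (dN_da[1:] + dN_da[:-1]) * np.diff(a))   # trapezoidal integration
print(f"estimated crack propagation life ~ {N:.3g} cycles")
```

In the bending-dominated cases discussed above, ΔK would additionally fall off with crack depth because of the stress gradient, which is why bending tends to slow propagation of larger cracks.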
Resumo:
Convolutional Neural Networks (CNN) have become the state-of-the-art methods on many large scale visual recognition tasks. For a lot of practical applications, CNN architectures have a restrictive requirement: A huge amount of labeled data are needed for training. The idea of generative pretraining is to obtain initial weights of the network by training the network in a completely unsupervised way and then fine-tune the weights for the task at hand using supervised learning. In this thesis, a general introduction to Deep Neural Networks and algorithms are given and these methods are applied to classification tasks of handwritten digits and natural images for developing unsupervised feature learning. The goal of this thesis is to find out if the effect of pretraining is damped by recent practical advances in optimization and regularization of CNN. The experimental results show that pretraining is still a substantial regularizer, however, not a necessary step in training Convolutional Neural Networks with rectified activations. On handwritten digits, the proposed pretraining model achieved a classification accuracy comparable to the state-of-the-art methods.
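The two-stage idea described above (unsupervised pretraining of the weights, then supervised fine-tuning) can be sketched roughly as follows; the architecture, the use of a convolutional autoencoder as the unsupervised stage, and the PyTorch framework are illustrative assumptions, not the setup used in the thesis:

```python
import torch
import torch.nn as nn

# Encoder: the convolutional feature extractor whose weights we want to initialize.
encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 28x28 -> 14x14
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 14x14 -> 7x7
)
# Decoder used only during unsupervised pretraining (image reconstruction).
decoder = nn.Sequential(
    nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
)

def pretrain(unlabeled_loader, epochs=5):
    """Stage 1: train encoder+decoder to reconstruct images, ignoring labels."""
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, _ in unlabeled_loader:          # labels are not used here
            opt.zero_grad()
            loss = loss_fn(decoder(encoder(x)), x)
            loss.backward()
            opt.step()

def finetune(labeled_loader, num_classes=10, epochs=5):
    """Stage 2: attach a classifier head and fine-tune everything with labels."""
    model = nn.Sequential(encoder, nn.Flatten(), nn.Linear(32 * 7 * 7, num_classes))
    opt = torch.optim.Adam(model.parameters())
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in labeled_loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model
```

Training from random initialization would skip the first stage entirely, which is the comparison the thesis uses to judge whether pretraining still helps once modern optimization and regularization are in place.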