993 results for Analytic key


Relevance:

20.00%

Publisher:

Abstract:

A new species, Bryodema nigrofrascia, of the genus Bryodema Fieber, 1853 (Orthoptera: Acridoidea: Acrididae: Oedipodinae) is described from China. A key to the known species of the genus is given. The type specimens are deposited in the Northwest Plateau Institute of Biology, Chinese Academy of Sciences, Xining, Qinghai.

Relevance:

20.00%

Publisher:

Abstract:

Forage selection plays a prominent role in returning cultivated land to grassland. The conventional method of selecting forage species offers only trial-and-error solutions and does not consider the relationships among the decision factors as a whole. This study therefore develops a decision support system to help farmers select suitable forage species for target sites. After collecting data through a field study, we built the system in three steps: (1) the analytic hierarchy process (AHP), (2) weight determination, and (3) decision making. In the first step, six factors influencing forage growth were selected by reviewing the related literature and interviewing experts. In the second step, a fuzzy matrix was devised to determine the weight of each factor. In the third step, a gradual alternative decision support system was created to help farmers choose suitable forage species for their land. The results show that AHP and fuzzy logic are useful for forage selection decision making, and the proposed system provides accurate results in a particular area of China (Gansu Province).
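As a sketch of the AHP step, the priority weights of the decision factors can be derived from a pairwise-comparison matrix via its principal eigenvector, with Saaty's consistency ratio checking that the comparisons are coherent. The three-factor matrix and factor names below are illustrative, not the study's actual data:

```python
import numpy as np

# Hypothetical pairwise-comparison matrix for three decision factors
# (say precipitation, soil type, temperature; names are invented).
# A[i, j] states how much more important factor i is than factor j on
# Saaty's 1-9 scale; the matrix is reciprocal: A[j, i] = 1 / A[i, j].
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

def ahp_weights(A):
    """Return the AHP priority vector (principal eigenvector, normalised)
    and the principal eigenvalue."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    return w / w.sum(), eigvals[k].real

def consistency_ratio(lam_max, n):
    """Saaty's consistency ratio; CR < 0.1 is conventionally acceptable."""
    random_index = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}
    ci = (lam_max - n) / (n - 1)
    return ci / random_index[n]

w, lam = ahp_weights(A)
cr = consistency_ratio(lam, len(A))
```

With six factors, as in the study, the same computation applies to a 6-by-6 comparison matrix; the fuzzy weighting step would then adjust these crisp weights.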

Relevance:

20.00%

Publisher:

Abstract:

Selecting member enterprises is the key to forming a networked alliance enterprise. A mathematical model for selecting design partners was built using the analytic hierarchy process, and a design-partner selection system was developed in the Matlab environment.

Relevance:

20.00%

Publisher:

Abstract:

Long-term oil and gas exploration has shown that reservoir prediction is one of the most valuable technologies, and quantitative analysis of reservoir complexity is a key part of it. Current reservoir prediction technologies rest on linear assumptions about various physical relationships, so they cannot handle complex reservoirs with thin sands, strong lithological heterogeneity, and large variations in petrophysical properties. This thesis addresses such complex reservoirs through a series of studies in which reservoir heterogeneity is characterised and quantified using statistical and nonlinear geophysical theories. First, random-medium theories of reservoir heterogeneity were studied. One-dimensional (1-D) and two-dimensional (2-D) random-medium models were constructed, in which the autocorrelation lengths describe the mean scale of heterogeneous anomalies in the horizontal and vertical directions, respectively. The characteristics of these models were analysed, and the correspondence between reservoir heterogeneity and autocorrelation length was studied. Because reservoir heterogeneity has a fractal nature, it was also described by fractal theory based on analysis of the 1-D and 2-D models. Simulations of 2-D random fluctuating media with different parameters show the main features of such models: as the autocorrelation lengths grow, the scale of the heterogeneous geologic bodies grows with them, and for very large autocorrelation lengths the models take on a clearly layered character.
Traditional impedance inversion has difficulty distinguishing sandstone types such as gritstone, clay, tight sandstone, and gas-bearing sandstone within a reservoir. Exploiting the clear impedance differences between lithologies and petrophysical states, we studied multi-scale reservoir heterogeneity and developed new technologies. The distribution of lithological and petrophysical heterogeneity along the vertical and lateral directions is described quantitatively using multi-scale power-spectrum and heterogeneity-spectrum methods. The power spectrum (P spectrum) describes the vertical distribution of lithological and petrophysical parameters and separates large-scale from small-scale vertical heterogeneity. The heterogeneity spectrum (H spectrum) mainly describes the structure of those parameters, that is, the proportional contribution of each scale of lithological and petrophysical heterogeneity within a formation; it is well suited to quantifying lateral multi-scale heterogeneity. Both spectral methods were applied to sonic logs from the Sulige oil field and yielded good analytic results. Finally, for comparison with earlier work, the last part analyses the multi-scale character of the reservoir from the transmission behaviour of waves using the wavelet transform; the method is applied to demarcating sequence stratigraphy and to analysing reservoir interlayer heterogeneity.
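The random-medium construction described above can be sketched as follows: a 2-D field with a Gaussian autocorrelation function is synthesised in the wavenumber domain, with separate horizontal and vertical autocorrelation lengths. Larger lengths produce larger heterogeneous bodies, and a horizontal length much greater than the vertical one gives the layered character mentioned above. Grid sizes, spacing, and lengths are illustrative only:

```python
import numpy as np

def random_medium_2d(nx, nz, dx, a_x, a_z, seed=0):
    """2-D random medium with a Gaussian autocorrelation function.

    a_x, a_z: horizontal and vertical autocorrelation lengths (same
    units as the grid spacing dx). Returns a zero-mean, unit-std field.
    """
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(nx, dx) * 2 * np.pi
    kz = np.fft.fftfreq(nz, dx) * 2 * np.pi
    KX, KZ = np.meshgrid(kx, kz, indexing="ij")
    # Power spectral density corresponding to a Gaussian
    # autocorrelation function with lengths a_x, a_z.
    psd = np.exp(-(KX**2 * a_x**2 + KZ**2 * a_z**2) / 4)
    # Random phase spectrum, then back to the space domain.
    phase = np.exp(2j * np.pi * rng.random((nx, nz)))
    field = np.real(np.fft.ifft2(np.sqrt(psd) * phase))
    return (field - field.mean()) / field.std()

# a_x >> a_z mimics the layered, laterally continuous case.
medium = random_medium_2d(nx=128, nz=128, dx=10.0, a_x=200.0, a_z=40.0)
```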

Relevance:

20.00%

Publisher:

Abstract:

Sedimentary and diagenetic processes are the key controls on the formation and distribution of hydrocarbon reservoirs. For a long time, most research on sedimentary-diagenetic facies has been qualitative. As oil-field exploration advances, qualitative analysis alone cannot meet the complicated requirements of oil and gas exploration, so quantitative analysis of sedimentary-diagenetic facies and the related facies modelling have become increasingly important. Building on stratigraphic and sedimentological results for the Putaohua oil layer group in the Gulong area, on basic sedimentological principles, and on field core and mining studies, this thesis investigates the sediment types, the spatial framework of the sands, and the evolution of diagenesis, focusing on sedimentary-system analysis and diagenetic alteration. It then moves from qualitative description to a quantitative classification of sedimentary-diagenetic facies, discusses a new way to divide these facies, and thereby offers a new basis for reservoir exploration. Using statistical methods including factor analysis, cluster analysis, and discriminant analysis, the thesis divides sedimentary-diagenetic facies quantitatively, an innovative approach in this field. First, factor analysis identifies the main mechanisms behind the correlated variables in the geologic body and thus the factors controlling fluids and reservoir quality in the study interval.
Second, with the selected main parameters, cluster analysis bases the classification of diagenesis on the data themselves, eliminating the investigator's subjective judgement; the results are more quantitative and better suited to subsequent statistical analysis, allowing further study of the quantitative relations among the facies types. Finally, discriminant analysis checks the reliability of the cluster results, and discriminant probabilities are used to chart the areal distribution (chorisogram) of sedimentary-diagenetic facies, giving more dependable analytic results. The study shows that combining these multivariate statistical methods yields a quantitative analysis of reservoir sedimentary-diagenetic facies whose results are more reliable and more practical to apply.
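The three-stage statistical workflow can be sketched in outline. Two simplifications are made here: PCA stands in for factor analysis, and a nearest-centroid probability rule stands in for discriminant analysis; the data are synthetic stand-ins, not the thesis's petrophysical measurements:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 6))   # 300 samples, 6 measured variables

# 1. Dimension reduction: project onto the leading principal
#    components, the directions of the main shared variation
#    (a simplified substitute for factor analysis).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T          # three latent "factors"

# 2. Cluster analysis (plain k-means) to divide samples into
#    facies classes on the data alone, without subjective judgement.
k = 4
centroids = scores[rng.choice(len(scores), k, replace=False)]
for _ in range(50):
    d = np.linalg.norm(scores[:, None] - centroids[None], axis=2)
    labels = d.argmin(axis=1)
    new = []
    for j in range(k):
        members = scores[labels == j]
        new.append(members.mean(axis=0) if len(members) else centroids[j])
    centroids = np.array(new)

# 3. Discriminant step (simplified): soft class probabilities from
#    distances to the class centroids, analogous to the discriminant
#    probabilities used to chart facies areally.
d = np.linalg.norm(scores[:, None] - centroids[None], axis=2)
proba = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
```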

Relevance:

20.00%

Publisher:

Abstract:

Since the 1980s, seismic tomography has achieved interesting results not only in large-scale studies of the Earth's interior but also in small-scale resource and engineering exploration. Compared with traditional inversion methods, seismic tomography offers more, and more detailed, information about the subsurface and has attracted growing attention from geophysicists. Because inversion is based on forward modeling, we have studied and improved methods for calculating seismic traveltimes and raypaths in isotropic and anisotropic media, and applied the improved forward methods to traveltime tomography. There are three main families of methods for computing the seismic traveltime field and its raypath distribution: ray tracing, finite-difference solution of the eikonal equation, and the minimum-traveltime tree algorithm. For ray tracing, five methods are covered in this work: analytic ray tracing, ray shooting, ray bending, grid ray tracing, and three-point rectangular-grid ray perturbation. The finite-difference eikonal solver is very efficient for first breaks but awkward for reflection traveltimes; we propose computing reflected-wave traveltimes by combining the eikonal method with another method, improving its handling of reflections. The minimum-traveltime tree algorithm is studied in most depth, and three improvements to the basic algorithm are put forward. The first, the backward raypath-tracing minimum-traveltime algorithm, calculates wavelets not only from the current source but also from earlier source points; it markedly speeds up traveltime and raypath calculation in layered or blocked homogeneous media while keeping good accuracy.
The second, the raypath key-point minimum-traveltime algorithm, computes traveltimes and raypaths in terms of the key points of the raypaths (the pivotal points that determine them). Built on the first improvement, it is more widely applicable and remains efficient even in inhomogeneous media. The third, the double-grid minimum-traveltime tree algorithm, builds on the key-point scheme and divides the model with two kinds of grids so that unnecessary calculation is avoided. Strong undulation of curved interfaces often leaves parts of an interface without reflection points where there should be some; an effective remedy is to divide the curved interfaces into segments and treat the segments separately. In addition, approximating interfaces with discrete grids introduces large errors in traveltimes and raypaths; noting this, we devised a method that removes the negative effect of the mesh and improves accuracy by correcting the traveltimes with a small amount of extra computation, with good results.
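The basic minimum-traveltime tree is essentially shortest-path propagation (Dijkstra's algorithm) through a gridded slowness model, with a predecessor array recording the raypath. A minimal sketch, with an 8-neighbour stencil, unit grid spacing, and a small uniform model, all illustrative:

```python
import heapq

def traveltime_tree(slowness, src):
    """First-arrival traveltimes and raypath predecessors.

    slowness: 2-D nested list of slowness values (time per grid unit);
    src: (i, j) source cell. Grid spacing is taken as 1.
    """
    ni, nj = len(slowness), len(slowness[0])
    INF = float("inf")
    t = [[INF] * nj for _ in range(ni)]
    prev = [[None] * nj for _ in range(ni)]
    t[src[0]][src[1]] = 0.0
    heap = [(0.0, src)]
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    while heap:
        ti, (i, j) = heapq.heappop(heap)
        if ti > t[i][j]:
            continue                      # stale heap entry
        for di, dj in nbrs:
            a, b = i + di, j + dj
            if 0 <= a < ni and 0 <= b < nj:
                step = (di * di + dj * dj) ** 0.5
                # Traveltime over the leg: mean slowness times length.
                tt = ti + 0.5 * (slowness[i][j] + slowness[a][b]) * step
                if tt < t[a][b]:
                    t[a][b] = tt
                    prev[a][b] = (i, j)   # raypath traced back via prev
                    heapq.heappush(heap, (tt, (a, b)))
    return t, prev

# Uniform slowness: traveltimes grow with straight-line distance.
model = [[1.0] * 5 for _ in range(5)]
times, prev = traveltime_tree(model, (0, 0))
```

The improved algorithms in the thesis modify how and where this propagation is restarted (earlier sources, raypath key points, double grids); the core relaxation step stays the same.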

Relevance:

20.00%

Publisher:

Abstract:

Metacognitive illusion, or metacognitive bias, is a concept closely tied to the accuracy of metacognitive monitoring. In this dissertation it refers mainly to the absolute difference between judgments of learning (JOLs) and recall that arises when individuals are misled by invalid cues or information. A JOL is a metacognitive judgment predicting future performance on learned material; its mechanism and accuracy are the key issues in JOL research. The cue-utilization framework proposed by Koriat (1997) summarized earlier findings and significantly advanced our understanding of how people make JOLs, but the model cannot explain individual differences in JOL accuracy. From the perspective of cognitive limits, our study uses a posteriori associative word pairs, which readily produce metacognitive bias, to explore the deeper psychological mechanism of that bias. We also investigate what causes the higher metacognitive illusions of children with learning disabilities (LD), and on this basis seek methods for correcting metacognitive illusions. Finally, drawing together the present findings and the previous literature, we propose a revised account of cue selection and utilization by children with LD, based on Koriat's cue-utilization model. The results indicate that: (1) Children showed stable metacognitive illusions for weakly associated and a posteriori associated word pairs but not for strongly associated pairs; the illusions were larger for children with LD than for typically developing children, and there were significant grade differences. A priori associative strength exerted a weaker effect on JOLs than on recall. (2) Children with LD relied mainly on retrieval fluency to make JOLs under both immediate and delayed conditions.
Normal children, by contrast, distinguished between encoding fluency and retrieval fluency as potential JOL cues across immediate and delayed conditions; children with LD clearly lacked such flexibility in cue selection and utilization. (3) With new word-pair lists, the analytic-inference group showed larger metacognitive transfer effects than the heuristic-inference group among normal children in the second block. Metacognitive relative accuracy improved for children both with and without LD across the experimental conditions, but improved significantly only for normal children in the analytic-inference group.

Relevance:

20.00%

Publisher:

Abstract:

The dream of pervasive computing is slowly becoming a reality. A number of projects around the world are constantly contributing ideas and solutions that are bound to change the way we interact with our environments and with one another. An essential component of the future is a software infrastructure capable of supporting interactions on scales ranging from a single physical space to intercontinental collaborations. Such an infrastructure must help applications adapt to very diverse environments and must protect people's privacy and respect their personal preferences. In this paper we point out a number of limitations in the software infrastructures proposed so far (including our previous work). We then describe a framework for building an infrastructure that satisfies the above criteria. This framework hinges on the concepts of delegation, arbitration, and high-level service discovery. Components of our own implementation of such an infrastructure are presented.

Relevance:

20.00%

Publisher:

Abstract:

As part of a larger research project in musical structure, a program has been written which "reads" scores encoded in an input language isomorphic to music notation. The program is believed to be the first of its kind. From a small number of parsing rules the program derives complex configurations, each of which is associated with a set of reference points in a numerical representation of a time-continuum. The logical structure of the program is such that all and only the defined classes of events are represented in the output. Because the basis of the program is syntactic (in the sense that parsing operations are performed on formal structures in the input string), many extensions and refinements can be made without excessive difficulty. The program can be applied to any music which can be represented in the input language. At present, however, it constitutes the first stage in the development of a set of analytic tools for the study of so-called atonal music, the revolutionary and little understood music which has exerted a decisive influence upon contemporary practice of the art. The program and the approach to automatic data-structuring may be of interest to linguists and scholars in other fields concerned with basic studies of complex structures produced by human beings.
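A toy illustration of the syntactic approach described above: parsing rules are applied to an encoded input string, and each recognised event is mapped to a reference point on a numerical time-continuum. The encoding here (pitch name plus duration in quarter notes, e.g. "C4:1.0") is invented for illustration and is not the paper's input language:

```python
def parse_score(encoded):
    """Parse a toy note encoding into (pitch, onset, duration) events.

    Each token is "<pitch>:<duration>"; onsets accumulate along a
    numerical time-continuum, so every event gets a reference point.
    """
    events, onset = [], 0.0
    for token in encoded.split():
        pitch, dur = token.split(":")
        duration = float(dur)
        events.append((pitch, onset, duration))
        onset += duration          # advance along the time-continuum
    return events

# A four-note figure: onsets fall at 0.0, 1.0, 1.5, 2.0 quarter notes.
events = parse_score("C4:1.0 E4:0.5 G4:0.5 C5:2.0")
```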

Relevance:

20.00%

Publisher:

Abstract:

It is anticipated that constrained devices in the Internet of Things (IoT) will often operate in groups to achieve collective monitoring or management tasks. For sensitive and mission-critical sensing tasks, securing multicast applications is therefore highly desirable. Several group key management protocols have been introduced to secure group communications; however, the majority of the proposed solutions are not adapted to the IoT and its strong processing, storage, and energy constraints. In this context, we introduce a novel decentralized and batch-based group key management protocol to secure multicast communications. Our protocol is simple; it reduces the rekeying overhead triggered by membership changes in dynamic and mobile groups and guarantees both backward and forward secrecy. To assess our protocol, we conduct a detailed analysis of its communication and storage costs, validated through simulation to highlight energy gains. The results show that our protocol outperforms its peers with respect to keying overhead and member mobility.
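The batching idea can be sketched as follows: the key server accumulates joins and leaves over an interval and issues one fresh group key per batch rather than per membership event. This models only the batching and the secrecy properties; the protocol's decentralized structure and its encrypted key distribution are not modelled here:

```python
import secrets

class GroupKeyServer:
    """Toy batch-rekeying key server for a multicast group."""

    def __init__(self):
        self.members = set()
        self.group_key = secrets.token_bytes(32)
        self.pending_joins, self.pending_leaves = set(), set()
        self.rekey_count = 0

    def join(self, m):
        self.pending_joins.add(m)

    def leave(self, m):
        self.pending_leaves.add(m)

    def rekey_batch(self):
        """Apply all pending changes, then issue one fresh key."""
        self.members |= self.pending_joins
        self.members -= self.pending_leaves
        changed = bool(self.pending_joins or self.pending_leaves)
        self.pending_joins.clear()
        self.pending_leaves.clear()
        if changed:
            # Fresh random key: departed members cannot read future
            # traffic (forward secrecy) and new members cannot read
            # past traffic (backward secrecy).
            self.group_key = secrets.token_bytes(32)
            self.rekey_count += 1
        return self.group_key

srv = GroupKeyServer()
for m in ("a", "b", "c"):
    srv.join(m)
k1 = srv.rekey_batch()        # one rekey covers three joins
srv.leave("b")
srv.join("d")
k2 = srv.rekey_batch()        # one rekey covers a leave plus a join
```

Per-event rekeying would have issued five keys here; batching issues two, which is the source of the overhead reduction the abstract claims.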

Relevance:

20.00%

Publisher:

Abstract:

University of Pretoria / MA Dissertation / Department of Practical Theology / Advised by Prof M J S Masango

Relevance:

20.00%

Publisher:

Abstract:

The research described in this thesis focuses, principally, on synthesis of stable α-diazosulfoxides and investigation of their reactivity under various reaction conditions (transition-metal catalysed, photochemical, thermal and microwave) with a particular emphasis on the reactive intermediates and mechanistic aspects of the reaction pathways involved. In agreement with previous studies carried out on these compounds, the key reaction pathway of α-diazosulfoxides was found to be hetero-Wolff rearrangement to give α-oxosulfine intermediates. However, a competing reaction pathway involving oxygen migration from sulfur to oxygen was also observed. Critically, isomerisation of α-oxosulfine stereoisomers was observed directly by 1H NMR spectroscopy in this work and this observation accounts for the stereochemical outcomes of the various cycloaddition reactions, whether carried out with in situ trapping or with preformed solutions of sulfines. Furthermore, matrix isolation experiments have shown that electrocyclisation of α-oxosulfines to oxathiiranes takes place and this verifies the proposed mechanisms for enol and disulfide formation. The introductory chapter includes a brief literature review of the synthesis and reactivity of α-diazosulfoxides prior to the commencement of research in this field by the Maguire group. The Wolff rearrangement is also discussed and the characteristic reactions of a number of reactive intermediates (sulfines, sulfenes and oxathiiranes) are outlined. The use of microwave-assisted organic synthesis is also examined, specifically, in the context of α-diazocarbonyl compounds as substrates. The second chapter describes the synthesis of stable monocyclic and bicyclic lactone derivatives of α-diazosulfoxides from sulfide precursors according to established experimental procedures. Approaches to precursors of ketone and sulfimide derivatives of α-diazosulfoxides are also described. 
The third chapter examines the reactivity of α-diazosulfoxides under thermal, microwave, rhodium(II)-catalysed and photochemical conditions. Comparison of the results obtained under thermal and microwave conditions indicates that there was no evidence for any effect, other than thermal, induced by microwave irradiation. The results of catalyst studies involving several rhodium(II) carboxylate and rhodium(II) carboxamidate catalysts are outlined. Under photochemical conditions, sulfur extrusion is a significant reaction pathway while under thermal or transition metal catalysed conditions, oxygen extrusion is observed. One of the most important observations in this work was the direct spectroscopic observation (by 1H NMR) of interconversion of the E and Z-oxosulfines. Trapping of the α-oxosulfine intermediates as cycloadducts by reaction with 2,3-dimethyl-1,3-butadiene proved useful both synthetically and mechanistically. As the stereochemistry of the α-oxosulfine is retained in the cycloadducts, this provided an ideal method for characterisation of this key feature. In the case of one α-oxosulfine, a novel [2+2] cycloaddition was observed. Preliminary experiments to investigate the reactivity of an α-diazosulfone under rhodium(II) catalysis and microwave irradiation are also described. The fourth chapter describes matrix isolation experiments which were carried out at Ruhr-Universität Bochum in collaboration with Prof. Wolfram Sander. These experiments provide direct spectroscopic evidence of an α-oxosulfine intermediate formed by hetero-Wolff rearrangement of an α-diazosulfoxide, and subsequent cyclisation of the sulfine to an oxathiirane was also observed. Furthermore, it was possible to identify which stereoisomer of the α-oxosulfine was present in the matrix. A preliminary laser flash photolysis experiment is also discussed. The experimental details, including all spectral and analytical data, are reported at the end of each chapter.
The structural interpretation of 1H NMR spectra of the cycloadducts, described in Chapter 3, is discussed in Appendix I.

Relevance:

20.00%

Publisher:

Abstract:

With the growing demand for cryptosystems in platforms ranging from large servers to mobile devices, cryptographic protocols suited to operation under particular constraints are becoming ever more important. Constraints such as computation time, area, efficiency, and security must all be considered by the designer. Since their introduction to public-key cryptography in 1985, elliptic curves have challenged established public-key and signature-generation schemes such as RSA, offering more security per bit. Among elliptic-curve systems, pairing-based cryptography is thoroughly researched and can be used in many public-key protocols, such as identity-based schemes. For hardware implementations of pairing-based protocols, every component that performs elliptic-curve operations must be considered: designers of the pairing algorithms must choose the calculation blocks and arrange the basic operations carefully so that the implementation meets its time and hardware-area constraints. This thesis deals with hardware architectures that accelerate pairing-based cryptosystems over fields of characteristic two. Using different top-level architectures, it first examines the hardware efficiency of operations scheduled at different times. Security is a further important concern for pairing-based cryptography, particularly with respect to side-channel analysis (SCA) attacks: naively implemented hardware accelerators for pairing-based cryptography can be vulnerable once physical attacks are taken into account. The thesis examines the weaknesses of pairing-based public-key cryptography and identifies the specific calculations in these systems that are insecure, so that countermeasures can be applied to protect the weak links of the implementation and improve the pairing-based algorithms.
Some important rules that designers must obey to improve the security of such cryptosystems are proposed. Following these rules, three countermeasures protecting pairing-based cryptosystems against SCA attacks are applied; their implementations are presented and their performance is investigated.
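A basic building block of the characteristic-two arithmetic these accelerators implement is multiplication in GF(2^m) with polynomial-basis reduction. A minimal software sketch, using the small field GF(2^8) with the reduction polynomial x^8 + x^4 + x^3 + x + 1 (0x11B) purely for illustration; pairing implementations use much larger fields and dedicated hardware multipliers:

```python
def gf2m_mul(a, b, poly=0x11B, m=8):
    """Multiply a and b in GF(2^m), polynomial basis, modulo poly.

    Shift-and-add (carry-less) multiplication with interleaved
    reduction: addition in characteristic two is XOR, and whenever
    the running multiplicand reaches degree m it is reduced by XORing
    in the reduction polynomial.
    """
    r = 0
    while b:
        if b & 1:
            r ^= a              # add current multiple (XOR)
        b >>= 1
        a <<= 1
        if a >> m:              # degree reached m: reduce
            a ^= poly
    return r
```

In hardware this loop unrolls into an array of XOR gates, which is why field size, basis choice, and scheduling dominate the area/time trade-offs discussed above; it is also the kind of datapath that SCA countermeasures must protect.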

Relevance:

20.00%

Publisher:

Abstract:

The aim of this study is to garner comparative insights so as to aid the development of the discourse on further education (FE) conceptualisation and the relationship of FE with educational disadvantage and employability. This aim is particularly relevant in Irish education discourse, amid the historical ambiguity surrounding the functioning of FE. The study sets out to critically engage with the education/employability/economy link (eee link). This involves a critique of issues relevant to participation (which extends beyond student activity alone to social relations generally and the dialogic participation of the disadvantaged), accountability (which extends beyond performance measures alone to encompass equality of condition towards a socially just end) and human capital (which extends to both collective and individual aspects within an educational culture). As a comparative study, there is a strong focus on providing a way of conceptualising and comparatively analysing FE policy internationally. The study strikes a balance between conceptual and practical concerns. A critical comparative policy analysis is the methodology that structures the study, informed and progressed by a genealogical method to establish the context of each of the jurisdictions of England, the United States and the European Union. Genealogy allows the use of history to diagnose the present rather than explaining how the past has caused the present. The discussion accentuates the power struggles within education policy practice using what Fairclough calls a strategic critique as well as an ideological critique. The comparative nature of the study means that there is a need to be cognizant of the diverse cultural influences on policy deliberation. The study uses the theoretical concept of paradigmatic change to critically analyse the jurisdictions.
To aid the critical analysis, a conceptual framework for legislative functions is developed so as to provide a metalanguage for educational legislation. The specific contribution of the study, beyond providing a way of understanding and progressing FE policy development in a globalized Ireland, is to clear the ground for a more well-defined and critically reflexive FE sector and to suggest a number of issues for further deliberation.