908 results for average complexity


Relevance:

20.00%

Publisher:

Abstract:

On the basis of Gibbs thermodynamics, the spinodal for the quasibinary system was derived in the framework of the Sanchez-Lacombe lattice fluid theory. All of the spinodals were calculated for a model polydisperse polymer mixture in which each polymer contains three subcomponents of different molecular weight. According to our calculations, the spinodal depends on both the weight-average (M_w) and the number-average (M_n) molecular weight, whereas the z-average molecular weight has no visible effect. Moreover, the extremum of the spinodal decreases as the polydispersity index (η = M_w/M_n) of the polymer increases. The effect of polydispersity on the spinodal weakens as the molecular weight grows and becomes negligible beyond a certain large molecular weight. It is well known that the influence of polydispersity on the phase equilibrium (coexistence curve, cloud-point curves) is much more pronounced than on the spinodal. The effect of M_n on the spinodal is discussed as it results from the influence of composition, temperature, molecular weight, and molecular weight distribution on free volume. An approximate expression, derived under the assumptions v* = v_1* = v_2* and 1/r → 0 for both polymers, is also given for simplicity. It can be used at high molecular weight, although it fails to make visible the effect of the number-average molecular weight on the spinodal.
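
For context, the molecular-weight averages used above are the standard moments of the chain-length distribution; a minimal sketch computing M_n, M_w, M_z and the polydispersity index η for a hypothetical three-subcomponent mixture (the fractions and weights below are illustrative only, not the paper's model parameters):

```python
# Hypothetical three-subcomponent polymer, echoing the model mixture above.
# n: mole fractions of the subcomponents, M: their molecular weights (g/mol).
n = [0.5, 0.3, 0.2]
M = [2.0e4, 5.0e4, 1.0e5]

first = sum(ni * Mi for ni, Mi in zip(n, M))       # first moment
second = sum(ni * Mi**2 for ni, Mi in zip(n, M))   # second moment
third = sum(ni * Mi**3 for ni, Mi in zip(n, M))    # third moment

Mn = first / sum(n)     # number-average molecular weight
Mw = second / first     # weight-average molecular weight
Mz = third / second     # z-average molecular weight
eta = Mw / Mn           # polydispersity index

print(f"Mn={Mn:.0f}  Mw={Mw:.0f}  Mz={Mz:.0f}  eta={eta:.3f}")
```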

Relevance:

20.00%

Publisher:

Abstract:

As an important petroleum exploration area in western China, the northern part of the Qaidam basin is promising for major petroleum discoveries. However, many obstacles remain in understanding the process of petroleum formation and in evaluating the oil and gas potential, owing to the complexity of the geological evolution of the study area. Based upon petroleum system theory, the process of petroleum formation is analyzed and the oil and gas potential of the different petroleum systems is evaluated by means of a modeling approach. The geological background for the formation of the petroleum systems and their constituent elements are described in detail. The thickness of eroded strata is estimated by vitrinite reflectance modeling, compaction parameter calculation, and thickness extrapolation. The burial histories are reconstructed using a transient compaction model that combines forward and inverse modeling. The geo-historical evolution consists of four stages: sedimentation at varying rates over different areas with slow subsidence during the Jurassic; uplift and erosion during the Cretaceous; fast subsidence during the early and middle Tertiary; and alternating subsidence and uplift during the late Tertiary and the Quaternary. The thermal gradients in the study area range from 2.0 °C/100 m to 2.6 °C/100 m, and the average heat flow is 50.6 mW/m^2. From vitrinite reflectance and apatite fission track data, a new approach based upon adaptive genetic algorithms is presented for thermal history reconstruction and used to estimate the paleo-heat flow. The modeling results show that the heat flow decreased and the basin cooled from the Jurassic to the present. Oil generation from kerogen, gas generation from kerogen and gas cracked from oil are modeled by kinetic models whose parameters are calculated from laboratory experiments. The evolution of source rock maturation is modeled by means of the EASY%Ro method. With the reconstructed geo-histories, thermal histories and hydrocarbon generation, the oil and gas generation intensities of the lower and middle Jurassic source rocks at different times are calculated. The results suggest that the source rocks reached maturity during the deposition of the Xiaganchaigou Formation. The oil and gas generation centers of the lower Jurassic source rocks are located in the Yikeyawuru sag, the Kunteyi sag and the Eboliang area; those of the middle Jurassic source rocks are located in the Saishenteng faulted sag and the Yuka faulted sag. On the evidence of biomarkers and carbon isotopes, the oil and gas in the Lenghusihao, Lenghuwuhao, Nanbaxian and Mahai oilfields derive from lower Jurassic source rocks, and the oil and gas in Yuka derive from middle Jurassic source rocks. Based upon the modeling results, the distribution of source rocks and the occurrence of oil and gas, there should be two petroleum systems in the study area. The critical moments of these two petroleum systems, J_1-R(!) and J_2-J_3(!), fall in the stages of Xiaganchaigou-Shangyoushashan and Xiayoushashan-Shizigou deposition, respectively. With the kinetic models for oil generated from kerogen, gas generated from kerogen and oil cracked to gas, the amounts of oil and gas generated at different times in the two petroleum systems are calculated. The cumulative amounts of oil generated from kerogen, gas generated from kerogen and gas cracked from oil are 409.78 * 10^8 t, 360518.40 * 10^8 m^3 and 186.50 * 10^8 t in J_1-R(!); the amounts of oil and gas available for accumulation there are 223.28 * 10^8 t and 606692.99 * 10^8 m^3. The corresponding cumulative amounts in J_2-J_3(!) are 29.05 * 10^8 t, 23025.29 * 10^8 m^3 and 14.42 * 10^8 t, with 14.63 * 10^8 t of oil and 42055.44 * 10^8 m^3 of gas available for accumulation. The total oil and gas potential is 9.52 * 10^8 t and 1946.25 * 10^8 m^3.
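
The kinetic modeling mentioned above conventionally treats generation as first-order Arrhenius reactions integrated over the reconstructed thermal history; a minimal sketch under that standard assumption (the frequency factor, activation energy and burial history below are illustrative placeholders, not the thesis's calibrated parameters):

```python
import math

R = 8.314e-3              # gas constant, kJ/(mol K)
SECONDS_PER_MYR = 3.156e13

def transformation_ratio(A_per_s, E, history):
    """Integrate first-order kinetics dx/dt = A exp(-E/RT) (1 - x) over a
    piecewise-linear thermal history of (time in Myr, temperature in K)."""
    A = A_per_s * SECONDS_PER_MYR           # convert A from 1/s to 1/Myr
    x = 0.0
    for (t0, T0), (t1, T1) in zip(history, history[1:]):
        T = 0.5 * (T0 + T1)                 # midpoint temperature of the step
        k = A * math.exp(-E / (R * T))      # Arrhenius rate constant
        x = 1.0 - (1.0 - x) * math.exp(-k * (t1 - t0))  # exact step update
    return x

# Illustrative burial history: steady heating from 20 °C to 150 °C over 150 Myr.
history = [(t, 293.15 + t * 130.0 / 150.0) for t in range(0, 151, 5)]
print(f"transformation ratio: {transformation_ratio(1e13, 220.0, history):.3f}")
```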

Relevance:

20.00%

Publisher:

Abstract:

The Competency Assessment Method (CAM) is an important technique of human resource management and development, in theory and in practice, especially in personnel selection and training. Based on the literature of the related fields, this thesis explored the feasibility of CAM in China. The main results are as follows: 1. Competency scores from Behavioral Event Interviews (BEI) are influenced neither by length of protocol nor by performance in the preceding year, while the average and maximal levels of complexity correlate significantly with length of protocol. The total competency frequency of outstanding executives is not significantly different from that of typical executives. These results support McClelland's view. There is, however, a significant correlation between length of protocol and competency frequencies, which McClelland does not accept. We found that competency scores coded by average and maximal level of complexity are more reliable than those coded by competency frequency, a finding not confirmed by McClelland. 2. Inter-rater reliability was studied. The results indicate that total Category Agreement (CA) is 55.45%, that the inter-rater reliability coefficients based on classical test theory are significant for over 70 percent of the 20 competencies, and that the G coefficient based on generalizability theory is 0.85697. 3. The criterion sample study shows that the competencies of managers in China's communication enterprises are: Impact and Influence, Organization Commitment, Information Seeking, Achievement Orientation, Team Leadership, Interpersonal Understanding, Initiative, Market Awareness, Self-confidence, and Developing Others. This result is similar to the generic competency model of managers presented in Spencer's book. 4. CAM showed more advantages than the expert panel judgement method.
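
As a rough illustration of the inter-rater statistics reported above, a minimal sketch computing percent Category Agreement and a simple correlation between two raters' frequency scores (all data below is invented for illustration; statistics.correlation requires Python 3.10+):

```python
from statistics import correlation  # Python 3.10+

# Hypothetical codings of the same protocols by two raters: each entry is
# the competency category assigned to one behavioural event.
rater_a = ["Impact", "Initiative", "TeamLead", "InfoSeek", "Impact", "SelfConf"]
rater_b = ["Impact", "Initiative", "InfoSeek", "InfoSeek", "Impact", "TeamLead"]

# Category Agreement (CA): proportion of events coded into the same category.
ca = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"CA = {ca:.2%}")  # 4 of 6 events agree -> 66.67%

# Classical-test-theory style reliability: correlate the two raters'
# per-protocol frequency counts for one competency (invented counts).
freq_a = [3, 5, 2, 4, 6]
freq_b = [2, 5, 3, 4, 5]
print(f"r = {correlation(freq_a, freq_b):.3f}")
```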

Relevance:

20.00%

Publisher:

Abstract:

Many current recognition systems use constrained search to locate objects in cluttered environments. Previous formal analysis has shown that the expected amount of search is quadratic in the number of model and data features if all the data is known to come from a single object, but is exponential when spurious data is included. If one can group the data into subsets likely to have come from a single object, then terminating the search once a "good enough" interpretation is found reduces the expected search to cubic. Without successful grouping, terminated search is still exponential. These results apply to finding instances of a known object in the data. In this paper, we turn to the problem of selecting models from a library, and examine the combinatorics of determining that a candidate object is not present in the data. We show that the expected search is again exponential, implying that naïve approaches to indexing are likely to carry an expensive overhead, since an exponential amount of work is needed to weed out each of the incorrect models. The analytic results are shown to be in agreement with empirical data for cluttered object recognition.
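
A minimal sketch of the kind of constrained search analysed above: an interpretation-tree style matcher with a "good enough" termination threshold. The consistency test and the numeric features are invented placeholders; a real system would check geometric constraints between feature pairs:

```python
from itertools import product

def consistent(pairing):
    """Placeholder pairwise-consistency test between model/data features.
    Stands in for geometric checks (distances, angles) in a real matcher."""
    return all(abs(m - d) <= 1 for m, d in pairing)

def search(model, data, good_enough):
    """Assign each model feature a data feature (or None for unmatched),
    stopping as soon as an interpretation matches >= good_enough features."""
    best = None
    for assignment in product(list(data) + [None], repeat=len(model)):
        pairing = [(m, d) for m, d in zip(model, assignment) if d is not None]
        if consistent(pairing):
            if best is None or len(pairing) > len(best):
                best = pairing
            if len(pairing) >= good_enough:   # "good enough": terminate early
                return best
    return best

print(search(model=[1, 4, 7], data=[2, 4, 6, 9], good_enough=2))
```

The None branch, which lets model features go unmatched in cluttered data, is exactly what makes the exhaustive loop exponential in the number of features; the threshold is the terminated-search cutoff discussed in the abstract.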

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we bound the generalization error of a class of Radial Basis Function networks, for certain well-defined function learning tasks, in terms of the number of parameters and the number of examples. We show that the total generalization error is partly due to the insufficient representational capacity of the network (because of its finite size) and partly due to insufficient information about the target function (because of the finite number of samples). We make several observations about the generalization error that are valid irrespective of the approximation scheme. Our result also sheds light on how to choose an appropriate network architecture for a particular problem.
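
A minimal sketch of the two error sources named above, fitting a Gaussian RBF network with n centres to l noisy samples of a target function (the target, widths and sizes are illustrative, not the paper's setting):

```python
import numpy as np

rng = np.random.default_rng(0)
target = lambda x: np.sin(2 * np.pi * x)    # target function to learn

def fit_rbf(x, y, n_centers, width=0.1):
    """Least-squares fit of a Gaussian RBF network with fixed, evenly
    spaced centres -- n_centers controls representational capacity."""
    centers = np.linspace(0.0, 1.0, n_centers)
    def design(xs):
        return np.exp(-((xs[:, None] - centers[None, :]) ** 2)
                      / (2.0 * width ** 2))
    w, *_ = np.linalg.lstsq(design(x), y, rcond=None)
    return lambda xs: design(xs) @ w

l, n = 50, 8                                 # examples vs parameters
x = rng.uniform(0.0, 1.0, l)
y = target(x) + 0.1 * rng.standard_normal(l) # finite noisy sample
f = fit_rbf(x, y, n)

grid = np.linspace(0.0, 1.0, 1000)
gen_error = np.mean((f(grid) - target(grid)) ** 2)
print(f"estimated generalization error: {gen_error:.4f}")
# Growing n alone shrinks the approximation part of the error (finite
# network size); growing l alone shrinks the estimation part (finite
# sample) -- the two terms the paper's bound trades off.
```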

Relevance:

20.00%

Publisher:

Abstract:

This thesis investigates innovation management and entrepreneurial management. The aim of the research is to explore changes of innovation style in the transformation from a start-up company to a more mature phase of business, and, in a second step, to predict future sustainability and the probability of success. As businesses grow in revenue, corporate size and functional complexity, various triggers, supporters and drivers affect innovation and company success. In a comprehensive study, more than 200 innovative and technology-driven companies were examined and compared to identify patterns at different performance levels. All of them were founded under the same formal requirements of the Munich Business Plan Competition, a research setting that allowed a unique snapshot which otherwise only long-term studies would be able to provide. The general objective was to identify the correlation of different factors, across different dimensions, with the incremental and radical innovations realised. The 12 hypotheses to be tested were derived from a comprehensive literature review. The relevant academic and practitioner literature on entrepreneurial, innovation and knowledge management as well as social network theory revealed that the concept of innovation has evolved significantly over the last decade. A review of over 15 innovation models and frameworks contributed to understanding what innovation in context means and what its dimensions are. It appears that the complex theories of innovation can be described by the increasing extent of social ingredients in the explanation of innovativeness. Originally based on tangible forms of capital and on the necessity of market pull and technology push, innovation management is today integrated in a larger system. Two research instruments were therefore developed to explore the changes in innovation styles. The Innovation Management Audits (IMA Start-up and IMA Mature) provided statements related to product/service development, innovativeness in various typologies, resources for innovation, innovation capabilities in relation to knowledge and management, and social networks, as well as the measurement of outcomes, to generate high-quality data for further exploration. For the analysis, the mature companies were clustered into the performance levels low, average and high, while the start-up companies were kept as one cluster. Firstly, the analysis showed that knowledge, the process of acquiring knowledge, interorganisational networks and resources for innovation are the most important driving factors for innovation and success. Secondly, the actual change of innovation style provides new insights into the importance of focusing on sustaining success and innovation in 16 key areas. Thirdly, a detailed overview of triggers, supporters and drivers for innovation and success in each dimension supports decision makers in steering their company in the right direction. Fourthly, a critical review of contemporary strategic management in conjunction with the findings yields recommendations on how to apply well-known management tools. Last but not least, the Munich cluster is analysed, providing an estimate of the success probability of the different performance clusters and of the start-up companies. For this analysis the ICP (Innovativeness, Capabilities & Potential) model, newly developed and both statistically and qualitatively validated, was created and applied.
While the model was primarily developed to evaluate the probability of success of companies, it is equally applicable for measuring innovativeness in order to identify the impact of various strategic initiatives within small or large enterprises. The main findings of the model are that competitor and customer orientation and the acquisition of knowledge are important for incremental and radical innovation, and that formal and interorganisational networks are important in fostering innovation, whereas informal networks appear to be detrimental to it. Testing the ICP model over the long term is recommended as one subject of further research; another is to investigate some of the more intangible aspects of innovation management, such as the attitude and motivation of managers.

Relevance:

20.00%

Publisher:

Abstract:

Joern Fischer, David B. Lindenmayer, and Ioan Fazey (2004). Appreciating Ecological Complexity: Habitat Contours as a Conceptual Landscape Model. Conservation Biology, 18(5), pp. 1245-1253. RAE2008

Relevance:

20.00%

Publisher:

Abstract:

National Science Foundation (CCR-998310); Army Research Office (DAAD19-02-1-0058)

Relevance:

20.00%

Publisher:

Abstract:

For any q > 1, let MOD_q be a quantum gate that determines if the number of 1's in the input is divisible by q. We show that for any q,t > 1, MOD_q is equivalent to MOD_t (up to constant depth). Based on the case q=2, Moore has shown that quantum analogs of AC^(0), ACC[q], and ACC, denoted QAC^(0)_wf, QACC[2], QACC respectively, define the same class of operators, leaving q > 2 as an open question. Our result resolves this question, implying that QAC^(0)_wf = QACC[q] = QACC for all q. We also prove the first upper bounds for QACC in terms of related language classes. We define classes of languages EQACC, NQACC (both for arbitrary complex amplitudes) and BQACC (for rational number amplitudes) and show that they are all contained in TC^(0). To do this, we show that a TC^(0) circuit can keep track of the amplitudes of the state resulting from the application of a QACC operator using a constant width polynomial size tensor sum. In order to accomplish this, we also show that TC^(0) can perform iterated addition and multiplication in certain field extensions.
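
For reference, on computational-basis inputs the gate in question decides a simple classical predicate; a trivial sketch of that predicate (the gate-level equivalence up to constant depth is, of course, the nontrivial content of the paper):

```python
def mod_q(bits, q):
    """The Boolean predicate the MOD_q gate decides on computational-basis
    inputs: is the number of 1's in the input divisible by q?"""
    return sum(bits) % q == 0

print(mod_q([1, 0, 1, 1, 0, 1], 2))  # True: four 1's, divisible by 2
print(mod_q([1, 0, 1, 1, 0, 1], 3))  # False: 4 is not divisible by 3
```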

Relevance:

20.00%

Publisher:

Abstract:

This paper demonstrates an optimal control solution to the scheduling of machine set-up changes based on dynamic programming average-cost-per-stage value iteration, as set forth by Caramanis et al. [2] for the 2D case. The difficulty with the optimal approach lies in the explosive computational growth of the resulting solution. A method of reducing the computational complexity is developed using ideas from biology and neural networks. A real-time controller is described that uses a linear-log representation of state space, with neural networks employed to fit cost surfaces.
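
A minimal sketch of average-cost-per-stage value iteration (in its standard relative value iteration form) on a toy two-state, two-action set-up problem; the transition probabilities and costs below are invented for illustration:

```python
import numpy as np

# Toy set-up problem: state = current machine set-up (0 or 1),
# action 0 = keep the set-up, action 1 = switch it.
P = {0: np.array([[0.9, 0.1], [0.2, 0.8]]),   # transition matrices P[a][s, s']
     1: np.array([[0.1, 0.9], [0.7, 0.3]])}
cost = {0: np.array([1.0, 2.0]),              # per-stage running cost
        1: np.array([4.0, 3.0])}              # switching incurs set-up cost

h = np.zeros(2)                               # relative value function
for _ in range(500):
    # Bellman operator for the average-cost problem: minimise over actions.
    Th = np.min([cost[a] + P[a] @ h for a in (0, 1)], axis=0)
    gain = Th[0]          # average-cost estimate, normalised at state 0
    h = Th - gain         # subtracting the gain keeps h bounded

print(f"average cost per stage ~ {gain:.4f}")
print(f"relative values h = {h.round(4)}")
```

Even this toy makes the paper's difficulty visible: the table for h grows with the product of the state dimensions, which is what the linear-log state-space representation and fitted cost surfaces are meant to tame.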

Relevance:

20.00%

Publisher:

Abstract:

The purpose of this preliminary study is to identify signs of fatigue in specific muscle groups that in turn directly influence accuracy in professional darts. Electromyography (EMG) sensors are employed to monitor the electrical activity produced by skeletal muscles of the trunk and upper limb during the throw. It is noted that the Flexor Pollicis Brevis muscle, which controls the critical release action during the throw, shows signs of fatigue. This is accompanied by an inherent increase in mean integral EMG amplitude for a number of other throw-related muscles, indicating an attempt to maintain a constant applied throwing force. A strong correlation is shown to exist between average score and the decrease in mean integral EMG amplitude for the Flexor Pollicis Brevis.
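
As a rough illustration of the measure used above, a minimal sketch computing a mean integral EMG amplitude per throw and correlating it with score; the signal model and scores below are simulated stand-ins, not the study's recordings:

```python
import numpy as np

def mean_integral_emg(raw, fs):
    """Full-wave rectify the raw EMG and integrate over the record,
    normalised by duration -- a simple stand-in for mean iEMG amplitude."""
    rectified = np.abs(raw - np.mean(raw))   # remove DC offset, rectify
    return float(np.sum(rectified) / fs)     # Riemann sum with dt = 1/fs

fs = 1000                                    # sampling rate in Hz
# Simulated per-throw recordings with a progressive amplitude decline
# standing in for fatigue (real data would come from the EMG sensors).
throws = [np.random.default_rng(i).standard_normal(fs) * (1.0 - 0.05 * i)
          for i in range(10)]
iemg = [mean_integral_emg(t, fs) for t in throws]
scores = [60 - 2 * i for i in range(10)]     # invented per-throw averages

r = np.corrcoef(iemg, scores)[0, 1]
print(f"correlation between iEMG and score: r = {r:.3f}")
```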

Relevance:

20.00%

Publisher:

Abstract:

Motivated by accurate average-case analysis, MOdular Quantitative Analysis (MOQA) has been developed at the Centre for Efficiency Oriented Languages (CEOL). In essence, MOQA allows the programmer to determine the average running time of a broad class of programmes directly from the code in a (semi-)automated way. The MOQA approach has the property of randomness preservation, which means that applying any operation to a random structure results in an output isomorphic to one or more random structures; this property is key to systematic timing. Based on original MOQA research, we discuss the design and implementation of a new domain-specific scripting language built on randomness-preserving operations and random structures. It is designed to facilitate compositional timing by systematically tracking the distributions of inputs and outputs. The notion of a labelled partial order (LPO) is the basic data type in the language. The programmer uses built-in MOQA operations together with restricted control flow statements to design MOQA programs. This MOQA language is formally specified, both syntactically and semantically, in this thesis. A practical language interpreter implementation is provided and discussed. By analysing new algorithms and data restructuring operations, we demonstrate the wide applicability of the MOQA approach. We also extend MOQA theory to a number of other domains besides average-case analysis, showing the strong connection between MOQA and parallel computing, reversible computing and data entropy analysis.
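
As a rough illustration of the basic data type named above, a hypothetical sketch of a labelled partial order with an order-consistency check; this is an invented toy, not MOQA's actual representation or operations:

```python
from dataclasses import dataclass, field

@dataclass
class LPO:
    """A toy labelled partial order: 'below' maps each node to its set of
    immediate predecessors; labels must not decrease going up the order."""
    below: dict = field(default_factory=dict)
    labels: dict = field(default_factory=dict)

    def respects_order(self) -> bool:
        return all(self.labels[p] <= self.labels[n]
                   for n, preds in self.below.items() for p in preds)

# A three-node V-shape: nodes a and b both sit directly below c.
lpo = LPO(below={"a": set(), "b": set(), "c": {"a", "b"}},
          labels={"a": 1, "b": 2, "c": 3})
print(lpo.respects_order())  # True: this labelling is order-consistent
```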

Relevance:

20.00%

Publisher:

Abstract:

This work considers the static calculation of a program's average-case time. The number of systems that currently tackle this research problem is quite small, owing to the difficulties inherent in average-case analysis. While each of these systems makes a pertinent contribution, and each is discussed individually in this work, only one of them forms the basis of this research: the system known as MOQA. The MOQA system consists of the MOQA language and the MOQA static analysis tool. Its technique for statically determining average-case behaviour centres on maintaining strict control over both the data structure type and the labelling distribution. This research develops and evaluates the MOQA language implementation, and adds to the functions already available in the language. Furthermore, the theory behind MOQA is generalised, and the range of data structures for which the MOQA static analysis tool can determine average-case behaviour is increased. Some of the MOQA applications and extensions suggested in other works are also examined here; for example, the accuracy of classifying the MOQA language as reversible is investigated, along with the feasibility of incorporating duplicate labels into the MOQA theory. Finally, the analyses carried out in the course of this research reveal some of MOQA's strengths and weaknesses. This thesis aims to be pragmatic in evaluating the current MOQA theory, the advancements set forth in the following work, and the benefits of MOQA compared to similar systems. Succinctly, this work's significant expansion of the MOQA theory is accompanied by a realistic assessment of MOQA's accomplishments and a serious deliberation of the opportunities available to MOQA in the future.