958 results for second-order model
Abstract:
The topics of this thesis comprise both methodological developments within the framework of the ab initio second-order methods CC2 and ADC(2) and applications of these developments to current research questions. The methodological extensions mainly concern transition moments between excited states; their implementation now makes the calculation of transient absorption spectra possible. The applications predominantly address the field of organic semiconductors and their photo-electronic properties, in which the hitherto little-studied triplet excimers play a central role.

The transition moments between excited states were implemented in the TURBOMOLE program package. This enables the calculation of transition moments between states of the same multiplicity (i.e., both singlet-singlet and triplet-triplet transitions) and of different multiplicity (i.e., singlet-triplet transitions). As an extension, the calculation of spin-orbit matrix elements (SOMEs) was implemented via an interface to the ORCA program. Furthermore, this implementation also allows transitions in open-shell systems to be computed. To keep memory demand and computation time as low as possible, the resolution-of-the-identity (RI) approximation was used. It reduces the memory demand from O(N^4) to O(N^3), since the quantities scaling as O(N^4) (e.g., the T2 amplitudes) can be computed very efficiently from RI intermediates and therefore need not be stored. This makes calculations for medium-sized molecules (ca. 20-50 atoms) with an adequate basis set feasible.

The accuracy of the transition moments between excited states was tested for a test set of small molecules as well as for selected larger organic molecules. The error introduced by the RI approximation turned out to be very small.
The prediction of transient spectra with CC2 or ADC(2), however, poses a problem: these methods describe only very poorly those states that are generated mainly by double excitations with respect to the reference determinant. This is relevant for excited-state spectra, since transitions to such states can be energetically accessible and allowed. An example of this is discussed in this thesis for a singlet-singlet spectrum. For transitions between triplet states this is less problematic, since the energetically lowest double excitations are closed-shell and therefore do not occur for triplets.

Of particular interest for this work is the formation of excimers in the excited triplet state. These can arise from strong interactions between the π-electron systems of large organic molecules, such as those employed as organic semiconductors in organic light-emitting diodes, and they can significantly influence the photo-electronic properties of these materials. Within this dissertation, two such systems were therefore investigated: [3.3](4,4')biphenylophane and the naphthalene dimer. For this purpose, the transient excitation spectra from the first excited triplet state were calculated, and these results were used to interpret the experimental spectra. Based on the good agreement between the calculated and experimental spectra, it could be shown that a coplanar arrangement of the two monomers leads to a strong coupling between locally excited and charge-transfer states. This coupling results in a significant energetic lowering of the first excited state and in a very small distance between the monomer units, with the excited state delocalized over both monomers.
The strong coupling occurs at intermolecular distances of ≤ 4 Å, which corresponds to typical distances in organic semiconductors. In this regime, Förster-Dexter theory cannot be used for calculations on such systems, since it is valid only in the weak-coupling limit.
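The O(N^4) → O(N^3) storage reduction claimed for the RI approximation can be made concrete with a rough back-of-the-envelope sketch. The orbital counts and the auxiliary-to-orbital basis ratio below are hypothetical illustrations, not values from the thesis:

```python
# Rough storage comparison for double-precision (8-byte) quantities:
# a full T2 amplitude tensor scales as O(N^4), while RI three-index
# intermediates scale as O(N^2 * Naux) with Naux proportional to N.
def t2_gigabytes(n):
    return 8 * n**4 / 1e9

def ri_gigabytes(n, aux_factor=3):
    # aux_factor ~ 3 is an illustrative auxiliary-to-orbital basis ratio
    return 8 * n**2 * (aux_factor * n) / 1e9

for n in (200, 500, 1000):
    print(n, t2_gigabytes(n), ri_gigabytes(n))
# At n = 1000 the difference is 8000 GB vs. 24 GB, which is why the
# O(N^4) quantities are recomputed on the fly rather than stored.
```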
Abstract:
Relativistic effects need to be considered in quantum-chemical calculations on systems including heavy elements or when aiming at high accuracy for molecules containing only lighter elements. In the latter case, consideration of relativistic effects via perturbation theory is an attractive option. Among the available techniques, Direct Perturbation Theory (DPT) in its lowest order (DPT2) has become a standard tool for the calculation of relativistic corrections to energies and properties. In this work, the DPT treatment is extended to the next order (DPT4). It is demonstrated that the DPT4 correction can be obtained as a second derivative of the energy with respect to the relativistic perturbation parameter. Accordingly, differentiation of a suitable Lagrangian, thereby taking into account all constraints on the wave function, provides analytic expressions for the fourth-order energy corrections. The latter have been implemented at the Hartree-Fock level and within second-order Møller-Plesset perturbation theory using standard analytic second-derivative techniques into the CFOUR program package. For closed-shell systems, the DPT4 corrections consist of higher-order scalar-relativistic effects as well as spin-orbit corrections, with the latter appearing here for the first time in the DPT series. Relativistic corrections are reported for energies as well as for first-order electrical properties and compared to results from rigorous four-component benchmark calculations in order to judge the accuracy and convergence of the DPT expansion for both the scalar-relativistic and the spin-orbit contributions. Additionally, the importance of relativistic effects to the bromine and iodine quadrupole-coupling tensors is investigated in a joint experimental and theoretical study concerning the rotational spectra of CH2BrF, CHBrF2, and CH2FI.
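The statement that the DPT4 correction is obtainable as a second derivative of the energy with respect to the relativistic perturbation parameter can be illustrated numerically. The energy function below is a toy polynomial with made-up coefficients, not an actual electronic energy:

```python
# Toy illustration: if E(lam) = E0 + E2*lam + E4*lam^2 + ..., then the
# DPT2-like term is the first derivative and the DPT4-like term follows
# from the second derivative of E at lam = 0 (up to the Taylor factor 1/2).
def E(lam):
    return -1.0 - 0.5 * lam - 0.125 * lam**2  # toy coefficients

h = 1e-4
d1 = (E(h) - E(-h)) / (2 * h)            # central first difference, ≈ -0.5
d2 = (E(h) - 2 * E(0.0) + E(-h)) / h**2  # central second difference, ≈ -0.25
print(d1, d2 / 2)                        # Taylor coefficients: -0.5, -0.125
```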
Abstract:
Lattice Quantum Chromodynamics (LQCD) is the preferred tool for obtaining non-perturbative results from QCD in the low-energy regime. It has by now entered the era in which high-precision calculations for a number of phenomenologically relevant observables at the physical point, with dynamical quark degrees of freedom and controlled systematics, become feasible. Despite these successes there are still quantities where control of systematic effects is insufficient. The subject of this thesis is the exploration of the potential of today's state-of-the-art simulation algorithms for non-perturbatively $\mathcal{O}(a)$-improved Wilson fermions to produce reliable results in the chiral regime and at the physical point, both for zero and non-zero temperature. Important in this context is control over the chiral extrapolation. This thesis is concerned with two particular topics, namely the computation of hadronic form factors at zero temperature, and the properties of the phase transition in the chiral limit of two-flavour QCD.

The electromagnetic iso-vector form factor of the pion provides a platform to study systematic effects and the chiral extrapolation for observables connected to the structure of mesons (and baryons). Mesonic form factors are computationally simpler than their baryonic counterparts but share most of the systematic effects. This thesis contains a comprehensive study of the form factor in the regime of low momentum transfer $q^2$, where the form factor is connected to the charge radius of the pion. A particular emphasis is on the region very close to $q^2=0$, which has not been explored so far, neither in experiment nor in LQCD. The results for the form factor close the gap between the smallest spacelike $q^2$-value available so far and $q^2=0$, and reach an unprecedented accuracy with full control over the main systematic effects. This enables the model-independent extraction of the pion charge radius.
The results for the form factor and the charge radius are used to test chiral perturbation theory ($\chi$PT) and are thereby extrapolated to the physical point and the continuum. The final result in units of the hadronic radius $r_0$ is
$$ \left\langle r_\pi^2 \right\rangle^{\rm phys}/r_0^2 = 1.87 \: \left(^{+12}_{-10}\right)\left(^{+\:4}_{-15}\right) \quad \textnormal{or} \quad \left\langle r_\pi^2 \right\rangle^{\rm phys} = 0.473 \: \left(^{+30}_{-26}\right)\left(^{+10}_{-38}\right)(10) \: \textnormal{fm}^2 \;, $$
which agrees well with the results from other measurements in LQCD and experiment. Note that this is the first continuum-extrapolated result for the charge radius from LQCD which has been extracted from measurements of the form factor in the region of small $q^2$.

The order of the phase transition in the chiral limit of two-flavour QCD and the associated transition temperature are the last unknown features of the phase diagram at zero chemical potential. The two possible scenarios are a second-order transition in the $O(4)$ universality class or a first-order transition. Since direct simulations in the chiral limit are not possible, the transition can only be investigated by simulating at non-zero quark mass with a subsequent chiral extrapolation, guided by the universal scaling in the vicinity of the critical point. The thesis presents the setup and first results from a study on this topic. The study provides an ideal platform to test the potential and limits of today's simulation algorithms at finite temperature. The results from a first scan at a constant zero-temperature pion mass of about 290~MeV are promising, and it appears that simulations down to physical quark masses are feasible. Of particular relevance for the order of the chiral transition is the strength of the anomalous breaking of the $U_A(1)$ symmetry at the transition point.
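The dimensionless result 1.87 and the physical value 0.473 quoted above are mutually consistent under a conventional value of the Sommer scale; the value $r_0 \approx 0.503$ fm used below is an assumption for illustration, since the abstract does not state which $r_0$ the thesis adopts:

```python
# Consistency check between the dimensionless and physical charge-radius
# results, assuming r0 = 0.503 fm (a commonly quoted Sommer-scale value;
# the thesis may use a slightly different number).
r0 = 0.503                       # fm (assumption)
ratio = 1.87                     # <r_pi^2>/r0^2 from the abstract
r2_phys = ratio * r0**2          # fm^2
print(round(r2_phys, 3))         # 0.473, matching the quoted value
```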
It can be studied by looking at the degeneracies of the correlation functions in scalar and pseudoscalar channels. For the temperature scan reported in this thesis the breaking is still pronounced in the transition region and the symmetry becomes effectively restored only above $1.16\:T_C$. The thesis also provides an extensive outline of research perspectives and includes a generalisation of the standard multi-histogram method to explicitly $\beta$-dependent fermion actions.
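The multi-histogram method mentioned above generalises single-histogram reweighting to several simulation couplings; a minimal single-histogram sketch, with synthetic Gaussian "action" samples rather than actual lattice data, looks like this:

```python
import numpy as np

# Single-histogram (Ferrenberg-Swendsen) reweighting sketch: estimate an
# observable at coupling beta from samples generated at beta0, assuming a
# Boltzmann weight exp(-beta * S). The samples below are synthetic.
rng = np.random.default_rng(0)
beta0, beta = 1.00, 1.05
S = rng.normal(100.0, 5.0, size=100_000)      # toy action samples at beta0
w = np.exp(-(beta - beta0) * (S - S.mean()))  # shifted for numerical stability
S_reweighted = (S * w).sum() / w.sum()
# For Gaussian samples the exact reweighted mean is mu - (beta-beta0)*sigma^2:
print(S_reweighted)  # close to 100 - 0.05 * 25 = 98.75
```

An explicitly $\beta$-dependent fermion action adds a configuration-dependent factor to the weights, which is what the generalisation in the thesis accounts for.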
Abstract:
The excitonic splitting between the S1 and S2 electronic states of the doubly hydrogen-bonded dimer 2-pyridone·6-methyl-2-pyridone (2PY·6M2PY) is studied in a supersonic jet, applying two-color resonant two-photon ionization (2C-R2PI), UV-UV depletion, and dispersed fluorescence spectroscopies. In contrast to the C2h-symmetric (2-pyridone)2 homodimer, in which the S1 ← S0 transition is symmetry-forbidden but the S2 ← S0 transition is allowed, the symmetry breaking by the additional methyl group in 2PY·6M2PY leads to the appearance of both the S1 and S2 origins, which are separated by Δexp = 154 cm⁻¹. When combined with the separation of the S1 ← S0 excitations of 6M2PY and 2PY, which is δ = 102 cm⁻¹, one obtains an S1/S2 exciton coupling matrix element of V_AB,el = 57 cm⁻¹ in a Frenkel-Davydov exciton model. The vibronic couplings in the S1/S2 ← S0 spectrum of 2PY·6M2PY are treated by the Fulton-Gouterman single-mode model. We consider independent couplings to the intramolecular 6a′ vibration and to the intermolecular σ′ stretch, and obtain a semi-quantitative fit to the observed spectrum. The dimensionless excitonic couplings are C(6a′) = 0.15 and C(σ′) = 0.05, which places this dimer in the weak-coupling limit. However, the S1/S2 exciton splittings Δcalc calculated by the configuration interaction singles (CIS), time-dependent Hartree-Fock (TD-HF), and approximate second-order coupled-cluster (CC2) methods are between 1100 and 1450 cm⁻¹, seven to nine times larger than observed. These huge errors result from the neglect of the coupling to the optically active intra- and intermolecular vibrations of the dimer, which leads to vibronic quenching of the purely electronic excitonic splitting.
For 2PY·6M2PY the electronic splitting is quenched by a factor of ~30 (i.e., the vibronic quenching factor is Γexp = 0.035), which brings the calculated splittings into close agreement with the experimentally observed value. The 2C-R2PI and fluorescence spectra of the tautomeric species 2-hydroxypyridine·6-methyl-2-pyridone (2HP·6M2PY) are also observed and assigned. © 2011 American Institute of Physics.
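The quoted coupling matrix element (57 cm⁻¹) follows from the two measured splittings via the standard two-level Frenkel-Davydov relation, in which the dimer splitting obeys Δ² = δ² + 4V²:

```python
import math

# Two-level Frenkel-Davydov exciton model: the observed dimer splitting
# Delta and the monomer excitation difference delta determine the
# electronic coupling via Delta^2 = delta^2 + 4 * V^2.
Delta = 154.0   # cm^-1, observed S1/S2 splitting
delta = 102.0   # cm^-1, 6M2PY vs. 2PY monomer separation
V_AB = math.sqrt(Delta**2 - delta**2) / 2.0
print(round(V_AB, 1))  # 57.7, i.e. ~57 cm^-1 as quoted above
```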
Abstract:
The Multiple Affect Adjective Check List (MAACL) has been found to have five first-order factors representing Anxiety, Depression, Hostility, Positive Affect, and Sensation Seeking and two second-order factors representing Positive Affect and Sensation Seeking (PASS) and Dysphoria. The present study examines whether these first- and second-order conceptions of affect (based on R-technique factor analysis) can also account for patterns of intraindividual variability in affect (based on P-technique factor analysis) in eight elderly women. Although the hypothesized five-factor model of affect was not testable in all of the present P-technique datasets, the results were consistent with this interindividual model of affect. Moreover, evidence of second-order (PASS and Dysphoria) and third-order (generalized distress) factors was found in one data set. Sufficient convergence in findings between the present P-technique research and prior R-technique research suggests that the MAACL is robust in describing both inter- and intraindividual components of affect in elderly women.
Measurement Properties of the Short Multi-Dimensional Observation Scale for Elderly Subjects (MOSES)
Abstract:
This study evaluated the five-factor measurement model of the abbreviated Multidimensional Observation Scale for Elderly Subjects (MOSES), originally proposed by Pruchno, Kleban, and Resch in 1988. Modifications of the five-factor model were examined and evaluated with regard to their practical significance. A confirmatory second-order factor analysis was performed to examine whether the correlations among the first-order factors were adequately accounted for by a global dysfunction factor. Findings indicated that the proposed measurement model was replicated adequately. Although post hoc modifications resulted in significant improvements in overall model fit, the minor parameters had only a trivial influence on the major parameters of the baseline model. Results from the second-order factor analysis showed that a global dysfunction factor accounted adequately for the intercorrelations among the first-order factors.
Abstract:
Using a highly resolved atmospheric general circulation model, the impact of different glacial boundary conditions on precipitation and atmospheric dynamics in the North Atlantic region is investigated. Six 30-yr time slice experiments of the Last Glacial Maximum at 21 thousand years before present (ka BP) and of a less pronounced glacial state, the Middle Weichselian (65 ka BP), are compared to analyse the sensitivity to changes in the ice sheet distribution, in the radiative forcing, and in the prescribed time-varying sea surface temperature and sea ice, which are taken from a lower-resolution but fully coupled atmosphere-ocean general circulation model. The strongest differences are found for simulations with different heights of the Laurentide ice sheet. A high surface elevation of the Laurentide ice sheet leads to a southward displacement of the jet stream and the storm track in the North Atlantic region. These changes in the atmospheric dynamics generate a band of increased precipitation in the mid-latitudes across the Atlantic to southern Europe in winter, while the precipitation pattern in summer is only marginally affected. The impact of the radiative forcing differences between the two glacial periods and of the prescribed time-varying sea surface temperatures and sea ice is of second-order importance compared with that of the Laurentide ice sheet. These factors affect the atmospheric dynamics and precipitation in a similar but less pronounced manner compared with the topographic changes.
Abstract:
Focusing optical beams on a target through random propagation media is very important in many applications such as free-space optical communications and laser weapons. Random media effects such as beam spread and scintillation can degrade the optical system's performance severely, so compensation schemes are needed in these applications to overcome them. In this research, we investigated the optimal beams for two different optimization criteria: one is to maximize the concentrated received intensity and the other is to minimize the scintillation index at the target plane. In the study of the optimal beam to maximize the weighted integrated intensity, we derive a similarity relationship between the pupil-plane phase screen and the extended Huygens-Fresnel model, and demonstrate the limited utility of maximizing the average integrated intensity. In the study of the optimal beam to minimize the scintillation index, we derive the first- and second-order moments for the integrated intensity of multiple coherent modes. Hermite-Gaussian and Laguerre-Gaussian modes are used as the coherent modes to synthesize an optimal partially coherent beam. The optimal beams demonstrate an evident reduction of the scintillation index, and prove to be insensitive to the aperture averaging effect.
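The scintillation index behind the second optimization criterion is defined through the first two intensity moments, σ_I² = ⟨I²⟩/⟨I⟩² − 1. A small synthetic illustration (gamma-distributed toy intensities, not output of a propagation code):

```python
import numpy as np

# Scintillation index from first- and second-order intensity moments:
# sigma_I^2 = <I^2> / <I>^2 - 1. For gamma-distributed intensity with
# shape parameter k, the exact value is 1/k.
rng = np.random.default_rng(1)
I = rng.gamma(shape=4.0, scale=1.0, size=200_000)  # toy intensity samples
m1 = I.mean()          # first-order moment <I>
m2 = (I**2).mean()     # second-order moment <I^2>
sigma_I2 = m2 / m1**2 - 1.0
print(sigma_I2)        # close to 1/4 = 0.25
```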
Abstract:
Previous studies have either exclusively used annual tree-ring data or have combined tree-ring series with other, lower-temporal-resolution proxy series. Both approaches can lead to significant uncertainties, as tree-rings may underestimate the amplitude of past temperature variations, and the validity of non-annual records cannot be clearly assessed. In this study, we assembled 45 published Northern Hemisphere (NH) temperature proxy records covering the past millennium, each of which satisfied 3 essential criteria: the series must be of annual resolution, span at least a thousand years, and represent an explicit temperature signal. Suitable climate archives included ice cores, varved lake sediments, tree-rings and speleothems. We reconstructed the average annual land temperature series for the NH over the last millennium by applying 3 different reconstruction techniques: (1) principal components (PC) plus a second-order autoregressive model (AR2), (2) composite plus scale (CPS) and (3) the regularized errors-in-variables approach (EIV). Our reconstruction is in excellent agreement with 6 climate model simulations (including the first 5 models derived from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) and an earth system model of intermediate complexity (LOVECLIM)), showing similar temperatures at multi-decadal timescales; however, all simulations appear to underestimate the temperature during the Medieval Warm Period (MWP). A comparison with other NH reconstructions shows that our results are consistent with earlier studies. These results indicate that well-validated annual proxy series should be used to minimize proxy-based artifacts, and that these proxy series contain sufficient information to reconstruct the low-frequency climate variability over the past millennium.
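Of the three techniques listed, composite-plus-scale (CPS) is the simplest to sketch: standardize the proxies, average them into a composite, then rescale the composite to the mean and variance of an instrumental target over a calibration window. The data below are synthetic (hypothetical proxies and target); only the procedure is illustrated:

```python
import numpy as np

# Composite-plus-scale (CPS) sketch with synthetic data: 5 noisy "proxies"
# of a toy temperature target over years 1000-2000, calibrated post-1850.
rng = np.random.default_rng(2)
years = np.arange(1000, 2001)
target = rng.normal(0.0, 0.3, years.size)          # toy instrumental series
proxies = np.stack([target + rng.normal(0.0, 0.5, years.size)
                    for _ in range(5)])

# 1) standardize each proxy, 2) average into a composite
z = (proxies - proxies.mean(axis=1, keepdims=True)) / proxies.std(axis=1, keepdims=True)
composite = z.mean(axis=0)

# 3) scale the composite to the target's mean/variance in the calibration window
cal = years >= 1850
recon = (composite - composite[cal].mean()) / composite[cal].std()
recon = recon * target[cal].std() + target[cal].mean()
# By construction the reconstruction matches the calibration-window mean:
print(abs(recon[cal].mean() - target[cal].mean()) < 1e-9)  # True
```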
Abstract:
The factorial validity of the SF-36 was evaluated using confirmatory factor analysis (CFA) methods, structural equation modeling (SEM), and multigroup structural equation modeling (MSEM). First, the measurement and structural model of the hypothesized SF-36 was explicated. Second, the model was tested for the validity of a second-order factorial structure; upon evidence of model misfit, the best-fitting model was determined and its validity tested on a second random sample from the same population. Third, the best-fitting model was tested for invariance of the factorial structure across race, age, and educational subgroups using MSEM. The findings support the second-order factorial structure of the SF-36 as proposed by Ware and Sherbourne (1992). However, the results suggest that: (a) Mental Health and Physical Health covary; (b) general mental health cross-loads onto Physical Health; (c) general health perception loads onto Mental Health instead of Physical Health; (d) many of the error terms are correlated; and (e) the physical function scale is not reliable across these two samples. This hierarchical factor pattern was replicated across both samples of health care workers, suggesting that the post hoc model fitting was not data specific. Subgroup analysis suggests that the physical function scale is not reliable across the "age" or "education" subgroups and that the general mental health scale path from Mental Health is not reliable across the "white/nonwhite" or "education" subgroups. The importance of this study is in the use of SEM and MSEM in evaluating sample data from the use of the SF-36. These methods are uniquely suited to the analysis of latent variable structures and are widely used in other fields. The use of latent variable models for self-reported outcome measures has become widespread, and should now be applied to medical outcomes research.
Invariance testing is superior to mean scores or summary scores when evaluating differences between groups. From a practical as well as a psychometric perspective, it seems imperative that construct validity research related to the SF-36 establish whether this same hierarchical structure and invariance holds for other populations. This project is presented as three articles to be submitted for publication.
Abstract:
In numerous intervention studies and education field trials, random assignment to treatment occurs in clusters rather than at the level of observation. This departure from unit-level random assignment may be due to logistics, political feasibility, or ecological validity. Data within the same cluster or grouping are often correlated. Application of traditional regression techniques, which assume independence between observations, to clustered data produces consistent parameter estimates. However, such estimators are often inefficient compared to methods which incorporate the clustered nature of the data into the estimation procedure (Neuhaus, 1993). Multilevel models, also known as random effects or random components models, can be used to account for the clustering of data by estimating higher-level (group) as well as lower-level (individual) variation. Designing a study in which the unit of observation is nested within higher-level groupings requires the determination of sample sizes at each level. This study investigates the design and analysis of various sampling strategies for a 3-level repeated measures design on the parameter estimates when the outcome variable of interest follows a Poisson distribution.

The results of this study suggest that second-order PQL estimation produces the least biased estimates in the 3-level multilevel Poisson model, followed by first-order PQL and then second- and first-order MQL. The MQL estimates of both fixed and random parameters are generally satisfactory when the level-2 and level-3 variation is less than 0.10. However, as the higher-level error variance increases, the MQL estimates become increasingly biased. If convergence of the estimation algorithm is not obtained by the PQL procedure and the higher-level error variance is large, the estimates may be significantly biased. In this case, bias correction techniques such as bootstrapping should be considered as an alternative procedure.
For larger sample sizes, structures with 20 or more units sampled at the levels with normally distributed random errors produced more stable estimates with less sampling variance than structures with an increased number of level-1 units. For small sample sizes, sampling fewer units at the level with Poisson variation produces less sampling variation; however, this criterion is no longer important when sample sizes are large.

Neuhaus J (1993). "Estimation Efficiency and Tests of Covariate Effects with Clustered Binary Data". Biometrics, 49, 989-996.
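The 3-level repeated-measures Poisson structure discussed above can be sketched by simulation; the variance components and sample sizes below are illustrative choices, not the study's design values:

```python
import numpy as np

# Simulate a 3-level Poisson design: level-1 observations nested in
# level-2 subjects nested in level-3 clusters, with normal random
# intercepts at levels 2 and 3 on the log-rate scale.
rng = np.random.default_rng(3)
n3, n2, n1 = 20, 10, 5           # clusters, subjects/cluster, obs/subject
beta0 = 1.0                      # fixed intercept (log scale)
u3 = rng.normal(0.0, 0.3, n3)               # level-3 random intercepts
u2 = rng.normal(0.0, 0.3, (n3, n2))         # level-2 random intercepts
log_rate = beta0 + u3[:, None, None] + u2[:, :, None]
lam = np.exp(np.broadcast_to(log_rate, (n3, n2, n1)))
y = rng.poisson(lam)             # Poisson outcomes, shape (n3, n2, n1)
print(y.shape)                   # (20, 10, 5)
```

Varying `n1`, `n2`, and `n3` under a fixed budget is exactly the sampling-strategy question the study addresses.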
Abstract:
The present study investigated the relationship between psychometric intelligence and temporal resolution power (TRP) as simultaneously assessed by auditory and visual psychophysical timing tasks. In addition, three different theoretical models of the functional relationship between TRP and psychometric intelligence as assessed by means of the Adaptive Matrices Test (AMT) were developed. To test the validity of these models, structural equation modeling was applied. Empirical data supported a hierarchical model that assumed auditory and visual modality-specific temporal processing at a first level and amodal temporal processing at a second level. This second-order latent variable was substantially correlated with psychometric intelligence. Therefore, the relationship between psychometric intelligence and psychophysical timing performance can be explained best by a hierarchical model of temporal information processing.
Abstract:
Lake Ohrid (Macedonia, Albania) is thought to be more than 1.2 million years old and to host more than 300 endemic species. As a target of the International Continental Scientific Drilling Program (ICDP), a successful deep drilling campaign was carried out within the scope of the Scientific Collaboration on Past Speciation Conditions in Lake Ohrid (SCOPSCO) project in 2013. Here, we present lithological, sedimentological, and (bio-)geochemical data from the upper 247.8 m composite depth of the overall 569 m long DEEP site sediment succession from the central part of the lake. According to an age model, which is based on 11 tephra layers (first-order tie points) and on tuning of bio-geochemical proxy data to orbital parameters (second-order tie points), the analyzed sediment sequence covers the last 637 kyr. The DEEP site sediment succession consists of hemipelagic sediments, which are interspersed by several tephra layers and infrequent, thin (< 5 cm) mass wasting deposits. The hemipelagic sediments can be classified into three different lithotypes. Lithotype 1 and 2 deposits comprise calcareous and slightly calcareous silty clay and are predominantly attributed to interglacial periods with high primary productivity in the lake during summer and reduced mixing during winter. The data suggest that high ion and nutrient concentrations in the lake water promoted calcite precipitation and diatom growth in the epilimnion during MIS15, 13, and 5. In line with the strong primary productivity, the highest interglacial temperatures can be reported for marine isotope stages (MIS) 11 and 5, whereas MIS15, 13, 9, and 7 were comparably cooler. Lithotype 3 deposits consist of clastic, silty clayey material and predominantly represent glacial periods with low primary productivity during summer and longer and intensified mixing during winter.
The data imply that the most severe glacial conditions at Lake Ohrid persisted during MIS16, 12, 10, and 6, whereas somewhat warmer temperatures can be inferred for MIS14, 8, 4, and 2. Interglacial-like conditions occurred during parts of MIS14 and 8.
Abstract:
The joint modeling of longitudinal and survival data is a new approach in many applications such as HIV, cancer vaccine trials, and quality-of-life studies. There have been recent developments of the methodologies with respect to each of the components of the joint model as well as the statistical processes that link them together. Among these, second-order polynomial random effect models and linear mixed effects models are the most commonly used for the longitudinal trajectory function. In this study, we first relax the parametric constraints for polynomial random effect models by using Dirichlet process priors, and three longitudinal markers, rather than only one, are considered in one joint model. Second, we use a linear mixed effect model for the longitudinal process in a joint model analyzing the three markers. These methods were applied to the Primary Biliary Cirrhosis sequential data, collected from a clinical trial of primary biliary cirrhosis (PBC) of the liver conducted between 1974 and 1984 at the Mayo Clinic. The effects of three longitudinal markers, (1) Total Serum Bilirubin, (2) Serum Albumin, and (3) Serum Glutamic-Oxaloacetic Transaminase (SGOT), on patients' survival were investigated. The proportion of treatment effect was also studied using the proposed joint modeling approaches.

Based on the results, we conclude that the proposed modeling approaches yield a better fit to the data and give less biased parameter estimates for these trajectory functions than previous methods. Model fit is also improved after considering three longitudinal markers instead of only one. The results from the analysis of the proportion of treatment effect from these joint models support the same conclusion as the final model of Fleming and Harrington (1991): Bilirubin and Albumin together have a stronger impact in predicting patients' survival and can serve as surrogate endpoints for treatment.
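A second-order polynomial random-effects trajectory of the kind mentioned above, y_ij = (β0+u0i) + (β1+u1i)t_j + (β2+u2i)t_j², can be sketched with simulated data; the coefficients and variance components below are illustrative, not fitted PBC estimates:

```python
import numpy as np

# Simulate quadratic (second-order polynomial) random-effects trajectories
# for one longitudinal marker: per-subject random intercept, slope, and
# curvature around illustrative population coefficients.
rng = np.random.default_rng(4)
n_subjects = 50
times = np.linspace(0.0, 10.0, 6)                 # visit times (years)
beta = np.array([1.0, 0.2, -0.01])                # population coefficients
cov = np.diag([0.25, 0.01, 1e-4])                 # random-effect variances
u = rng.multivariate_normal(np.zeros(3), cov, n_subjects)
X = np.vstack([np.ones_like(times), times, times**2]).T  # (6, 3) design
y = (beta + u) @ X.T + rng.normal(0.0, 0.1, (n_subjects, times.size))
print(y.shape)  # (50, 6): one trajectory per subject
```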
Abstract:
The normal boiling point is a fundamental thermophysical property, important in describing the transition between the vapor and liquid phases. A reliable method for predicting it is of great importance, especially for compounds for which no experimental data are available. In this work, an improved group contribution method, a second-order method for determining the normal boiling point of organic compounds, was developed using experimental data for 632 organic compounds. It is based on the Joback first-order functional groups, with some changes and some additional functional groups, and it can distinguish most cases of structural isomerism and stereoisomerism, including the structural, cis-, and trans-isomers of organic compounds. First- and second-order contributions are given for hydrocarbons and hydrocarbon derivatives containing carbon, hydrogen, oxygen, nitrogen, sulfur, fluorine, chlorine, and bromine atoms. The fminsearch routine from MATLAB, a direct search method that uses the simplex search method of Lagarias et al., was used to select an optimal collection of functional groups (65 functional groups) and subsequently to develop the model. The results of the new method are compared to several currently used methods and are shown to be far more accurate and reliable. The average absolute deviation of normal boiling point predictions for the 632 organic compounds is 4.4350 K, and the average absolute relative deviation is 1.1047 %, which is of adequate accuracy for many practical applications.
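The first-order baseline this method builds on can be illustrated with the original Joback scheme, Tb = 198.2 K + Σ nᵢΔTᵢ. The three group values below are the published Joback-Reid first-order contributions; the modified 65-group parameter set of this work is not reproduced here:

```python
# First-order Joback group-contribution estimate of the normal boiling
# point: Tb = 198.2 + sum(n_i * dT_i), in Kelvin. Group values are the
# original Joback-Reid contributions, not the improved parameters above.
JOBACK_DT = {"-CH3": 23.58, "-CH2-": 22.88, "-OH": 92.88}

def boiling_point_K(groups):
    return 198.2 + sum(JOBACK_DT[g] * n for g, n in groups.items())

# Ethanol (CH3-CH2-OH): one of each group
tb = boiling_point_K({"-CH3": 1, "-CH2-": 1, "-OH": 1})
print(round(tb, 2))  # 337.54 K, vs. an experimental value near 351 K;
                     # a second-order scheme aims to shrink such errors
```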