938 results for PM3 semi-empirical method
                                
Abstract:
Regions containing internal boundaries, such as composite materials, arise in many applications. We consider a layered domain in R^3 containing a finite number of bounded cavities. The model is stationary heat transfer given by the Laplace equation with piecewise constant conductivity. The heat flux (a Neumann condition) is imposed on the bottom of the layered region and various boundary conditions are imposed on the cavities. The usual transmission (interface) conditions are satisfied at the interface layer, that is, continuity of the solution and its normal derivative. To efficiently calculate the stationary temperature field in the semi-infinite region, we employ a Green's matrix technique and reduce the problem to (weakly singular) boundary integral equations over the bounded surfaces of the cavities. For the numerical solution of these integral equations, we use Wienert's approach [20]. Assuming that each cavity is homeomorphic to the unit sphere, a fully discrete projection method with super-algebraic convergence order is proposed. A proof of an error estimate for the approximation is given as well. Numerical examples are presented that further highlight the efficiency and accuracy of the proposed method.
                                
Abstract:
The spatial distribution of self-employment in India: evidence from semiparametric geoadditive models, Regional Studies. The entrepreneurship literature has rarely considered spatial location as a micro-determinant of occupational choice. It has also ignored self-employment in developing countries. Using Bayesian semiparametric geoadditive techniques, this paper models spatial location as a micro-determinant of self-employment choice in India. The empirical results suggest the presence of spatial occupational neighbourhoods and a clear north–south divide in self-employment when the entire sample is considered; however, spatial variation in the non-agriculture sector disappears to a large extent when individual factors that influence self-employment choice are explicitly controlled for. The results further suggest non-linear effects of age, education and wealth on self-employment.
                                
Abstract:
Peptides are of great therapeutic potential as vaccines and drugs. Knowledge of physicochemical descriptors, including the partition coefficient P (commonly expressed in logarithmic form as logP), is useful for screening out unsuitable molecules and also for the development of predictive Quantitative Structure-Activity Relationships (QSARs). In this paper we develop a new approach to the prediction of logP values for peptides based on an empirical relationship between global molecular properties and measured physical properties. Our method was successful in terms of peptide prediction (total r² = 0.641). The final model consisted of five physicochemical descriptors (molecular weight, number of single bonds, 2D-VDW volume, 2D-VSA hydrophobic and 2D-VSA polar). The approach is peptide specific and its predictive accuracy was high: overall, 67% of the peptides were predicted within ±0.5 log units of the experimental values. Our method thus represents a novel prediction method with proven predictive ability.
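A descriptor-based fit of this kind can be sketched as an ordinary least-squares regression. Only the five descriptor names come from the abstract; the descriptor values, coefficients and logP targets below are synthetic stand-ins, since the actual dataset is not given.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic descriptor matrix: one row per peptide, one column per descriptor
# named in the abstract (molecular weight, number of single bonds, 2D-VDW
# volume, 2D-VSA hydrophobic, 2D-VSA polar). Values are standardized stand-ins.
X = rng.normal(size=(40, 5))
true_w = np.array([0.8, -0.3, 0.5, 1.1, -0.9])      # invented coefficients
logp = X @ true_w + 0.1 * rng.normal(size=40)       # synthetic "measured" logP

# Ordinary least squares with an intercept column, as in a simple QSAR fit.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, logp, rcond=None)

pred = A @ coef
r2 = 1.0 - np.sum((logp - pred) ** 2) / np.sum((logp - logp.mean()) ** 2)
within_half = np.mean(np.abs(logp - pred) <= 0.5)   # fraction within +/-0.5 log units
print(round(float(r2), 3), round(float(within_half), 2))
```

The r² and the fraction-within-±0.5 statistic mirror the two accuracy measures quoted in the abstract.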
                                
Abstract:
Over the last few years Data Envelopment Analysis (DEA) has been gaining increasing popularity as a tool for measuring the efficiency and productivity of Decision Making Units (DMUs). Conventional DEA models assume non-negative inputs and outputs. However, in many real applications, some inputs and/or outputs can take negative values. Recently, Emrouznejad et al. [6] introduced a Semi-Oriented Radial Measure (SORM) for modelling DEA with negative data. This paper points out some issues in target setting with SORM models and introduces a modified SORM approach. An empirical study in the banking sector demonstrates the applicability of the proposed model. © 2014 Elsevier Ltd. All rights reserved.
                                
Abstract:
Background: A natural glycoprotein usually exists as a spectrum of glycosylated forms, where each protein molecule may be associated with an array of oligosaccharide structures. The overall range of glycoforms can have a variety of different biophysical and biochemical properties, although details of structure–function relationships are poorly understood because of the microheterogeneity of biological samples. Hence, there is clearly a need for synthetic methods that give access to natural and unnatural homogeneously glycosylated proteins. The synthesis of novel glycoproteins through the selective reaction of glycosyl iodoacetamides with the thiol groups of cysteine residues, placed by site-directed mutagenesis at desired glycosylation sites, has been developed. This provides a general method for the synthesis of homogeneously glycosylated proteins that carry saccharide side chains at natural or unnatural glycosylation sites. Here, we have shown that the approach can be applied to the glycoprotein hormone erythropoietin, an important therapeutic glycoprotein with three sites of N-glycosylation that are essential for in vivo biological activity. Results: Wild-type recombinant erythropoietin and three mutants in which glycosylation site asparagine residues had been changed to cysteines (His10-WThEPO, His10-Asn24Cys, His10-Asn38Cys, His10-Asn83CyshEPO) were overexpressed and purified in yields of 13 mg l−1 from Escherichia coli. Chemical glycosylation with glycosyl-β-N-iodoacetamides could be monitored by electrospray MS. Both in the wild-type and in the mutant proteins, the potential side reaction of the other four cysteine residues (all involved in disulfide bonds) was not observed. The yield of glycosylation was generally about 50%, and purification of glycosylated protein from non-glycosylated protein was readily carried out using lectin affinity chromatography.
Dynamic light scattering analysis of the purified glycoproteins suggested that the glycoforms produced were monomeric and folded identically to the wild-type protein. Conclusions: Erythropoietin expressed in E. coli bearing specific Asn→Cys mutations at natural glycosylation sites can be glycosylated using β-N-glycosyl iodoacetamides even in the presence of two disulfide bonds. The findings provide the basis for further elaboration of the glycan structures and development of this general methodology for the synthesis of semi-synthetic glycoproteins.
                                
Abstract:
Subunit vaccine discovery is an accepted clinical priority. The empirical approach is time-consuming and labor-intensive and can often end in failure. Rational information-driven approaches can overcome these limitations in a fast and efficient manner. However, informatics solutions require reliable algorithms for antigen identification. All known algorithms use sequence similarity to identify antigens. However, antigenicity may be encoded subtly in a sequence and may not be directly identifiable by sequence alignment. We propose a new alignment-independent method for antigen recognition based on the principal chemical properties of protein amino acid sequences. The method is tested by cross-validation on a training set of bacterial antigens and external validation on a test set of known antigens. The prediction accuracy is 83% for the cross-validation and 80% for the external test set. Our approach is accurate and robust, and provides a potent tool for the in silico discovery of medically relevant subunit vaccines.
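An alignment-independent encoding of the kind described above can be sketched with an auto-covariance transform: each residue is mapped to a few numeric property values, and covariances at fixed lags yield a fixed-length feature vector regardless of sequence length, which a classifier can then consume. The property values below are illustrative placeholders, not the method's actual descriptors.

```python
import numpy as np

# Three property values per amino acid (stand-ins for hydrophobicity, size,
# polarity scales). Illustrative numbers only; a subset of residues suffices.
PROPS = {
    "A": (0.07, -1.73, 0.09), "C": (0.71, -0.97, 4.13),
    "D": (3.64, 1.13, 2.36),  "E": (3.08, 0.39, -0.07),
    "G": (2.23, -5.36, 0.30), "K": (2.84, 1.41, -3.14),
    "L": (-4.19, -1.03, -0.98), "S": (1.96, -1.63, 0.57),
}

def acc_vector(seq, max_lag=3):
    """Alignment-free encoding: auto-covariance of each property track up to
    max_lag, so sequences of any length map to the same number of features."""
    props = np.array([PROPS[a] for a in seq], dtype=float)  # shape (n, 3)
    props -= props.mean(axis=0)                             # center each track
    n = len(seq)
    feats = []
    for lag in range(1, max_lag + 1):
        # mean product of each property track with itself shifted by `lag`
        feats.extend((props[:-lag] * props[lag:]).sum(axis=0) / (n - lag))
    return np.array(feats)

v1 = acc_vector("ACDEGKLS")
v2 = acc_vector("ACDEGKLSACDEGKLSACD")   # different length, same feature size
print(v1.shape, v2.shape)
```

Both sequences map to 9 features (3 properties × 3 lags), which is what makes the representation usable without any sequence alignment.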
                                
Abstract:
AMS Subj. Classification: 49J15, 49M15
                                
Abstract:
2000 Mathematics Subject Classification: 60K15, 60K20, 60G20, 60J75, 60J80, 60J85, 60-08, 90B15.
                                
Abstract:
Following the development of the first exchange rate target zone model at the end of the eighties, dozens of papers analyzed theoretical and empirical aspects of currency bands. This paper reviews different empirical methods for analyzing the credibility of a band, with special emphasis on the most widely used method, the so-called drift-adjustment method. Papers applying that method claim that while forecasting a freely floating currency is hopeless, predicting an exchange rate within the future band can be successful. This paper shows that the results obtained by applying the method to EMS and Nordic currencies are not specific to target zone currencies: applying it to the US dollar, and even to most unit root processes, leads qualitatively to the same results.
This paper explores the sources of this apparent contradiction and presents a model of target zones in which the exchange rate within the band is not necessarily predictable, since the process may follow chaotic dynamics before a devaluation.
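The drift-adjustment idea, regressing the expected change of the within-band exchange rate on its current position in the band, can be sketched as follows. The data are simulated from a simple mean-reverting process, so the negative slope merely illustrates the kind of "predictability" the paper questions; this is not the paper's own specification.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated "within-band" exchange rate: AR(1) mean-reverting toward the
# band centre (0). Illustrative data only.
n, rho = 500, 0.9
x = np.zeros(n)
for t in range(1, n):
    x[t] = rho * x[t - 1] + 0.1 * rng.normal()

# Drift-adjustment-style regression: expected change within the band,
# x_{t+1} - x_t, regressed on the current position x_t.
dx = x[1:] - x[:-1]
A = np.column_stack([np.ones(n - 1), x[:-1]])
(alpha_hat, beta_hat), *_ = np.linalg.lstsq(A, dx, rcond=None)

# A negative slope says rates high in the band are predicted to fall back —
# exactly the apparent predictability that any mean-reverting (or unit root)
# process will produce in this regression.
print(round(float(beta_hat), 3))
```

For this AR(1) the true slope is rho − 1 = −0.1, which is what the regression recovers regardless of whether a band actually constrains the process, mirroring the abstract's point.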
                                
Abstract:
Stylization is a method of ornamental plant use, usually applied in urban open space and garden design, based on aesthetic considerations. Stylization can be seen as a nature-imitating ornamental plant application that evokes the scenery, rather than an ecological plant application that assists the processes and functions observed in nature. From a different point of view, stylization of natural or semi-natural habitats can sometimes serve as a method for preserving the physiognomy of plant associations that may be affected by the climate change of the 21st century. The vulnerability of Hungarian habitats has thus far been examined by researchers only from the botanical point of view, not in terms of landscape design value. In Hungary coniferous forests are edaphic and classified on this basis. The General National Habitat Classification System (Á-NÉR) distinguishes calcareous Scots pine forests and acidofrequent coniferous forests. The latter seems to be highly sensitive to climate change according to ecological models. The physiognomy and species pool of its subtypes are strongly determined by the dominant coniferous species, which can be Norway spruce (Picea abies) or Scots pine (Pinus sylvestris). We discuss the methodology of stylization of climate-sensitive habitats and briefly present acidofrequent coniferous forests as a case study. In the course of stylization, those coniferous and deciduous tree species of the studied habitat that are water demanding should be substituted by drought-tolerant species with similar characteristics. A list of the proposed taxa is given.
                                
Abstract:
The financial community is well aware that continued underfunding of state and local government pension plans poses many public policy and fiduciary management concerns. However, a well-defined theoretical rationale has not been developed to explain why and how public sector pension plans underfund. This study uses three methods: a survey of national pension experts, an incomplete covariance panel method, and field interviews. A survey of national public sector pension experts was conducted to provide a conceptual framework by which underfunding could be evaluated. Experts suggest that plan design, fiscal stress, and political culture factors impact underfunding. However, experts do not agree with previous research findings that unions actively pursue underfunding to secure current wage increases. Within the conceptual framework and determinants identified by experts, several empirical regularities are documented for the first time. Analysis of 173 local government pension plans, observed from 1987 to 1992, was conducted. Findings indicate that underfunding occurs in plans that have lower retirement ages, increased costs due to benefit enhancements, when the sponsor faces current-year operating deficits, or when a local government relies heavily on inelastic revenue sources. Results also suggest that elected officials artificially inflate interest rate assumptions to reduce current pension costs, consequently shifting these costs to future generations. In concurrence with some experts, there is no data to support the assumption that highly unionized employees secure more funding than less unionized employees. Empirical results provide satisfactory but not overwhelming statistical power, and only minor predictive capacity. To further explore why underfunding occurs, field interviews were carried out with 62 local government officials.
Practitioners indicated that perceived fiscal stress, the willingness of policymakers to advance funding, bargaining strategies used by union officials, apathy among employees and retirees, pension board composition, and the level of influence of internal pension experts have an impact on funding outcomes. A pension funding process model was posited by triangulating the expert survey, empirical findings, and field survey results. The funding process model should help shape and refine our theoretical knowledge of state and local government pension underfunding in the future.
                                
Abstract:
Crash reduction factors (CRFs) are used to estimate the potential number of traffic crashes expected to be prevented from investment in safety improvement projects. The method used to develop CRFs in Florida has been based on the commonly used before-and-after approach. This approach suffers from a widely recognized problem known as regression-to-the-mean (RTM). The Empirical Bayes (EB) method has been introduced as a means of addressing the RTM problem. This method requires information from both the treatment and reference sites in order to predict the expected number of crashes had the safety improvement projects at the treatment sites not been implemented. The information from the reference sites is estimated from a safety performance function (SPF), which is a mathematical relationship that links crashes to traffic exposure. The objective of this dissertation was to develop the SPFs for different functional classes of the Florida State Highway System. Crash data from years 2001 through 2003, along with traffic and geometric data, were used in the SPF model development. SPFs for both rural and urban roadway categories were developed. The modeling data used were based on one-mile segments that contain homogeneous traffic and geometric conditions within each segment. Segments involving intersections were excluded. The scatter plots of data show that the relationships between crashes and traffic exposure are nonlinear: crashes increase with traffic exposure at an increasing rate. Four regression models, namely Poisson (PRM), Negative Binomial (NBRM), zero-inflated Poisson (ZIP), and zero-inflated Negative Binomial (ZINB), were fitted to the one-mile segment records for individual roadway categories. The best model was selected for each category based on a combination of the Likelihood Ratio test, the Vuong statistical test, and Akaike's Information Criterion (AIC).
The NBRM model was found to be appropriate for only one category, and the ZINB model was found to be more appropriate for six other categories. The overall results show that the Negative Binomial distribution model generally provides a better fit for the data than the Poisson distribution model. In addition, the ZINB model was found to give the best fit when the count data exhibit excess zeros and over-dispersion, as they do for most of the roadway categories. While model validation shows that most data points fall within the 95% prediction intervals of the models developed, the Pearson goodness-of-fit measure does not show statistical significance. This is expected, as traffic volume is only one of the many factors contributing to the overall crash experience, and the SPFs are to be applied in conjunction with Accident Modification Factors (AMFs) to further account for the safety impacts of major geometric features before arriving at the final crash prediction. However, with improved traffic and crash data quality, the crash prediction power of SPF models may be further improved.
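The model-selection step, comparing a Poisson fit against a Negative Binomial fit on over-dispersed counts via AIC, can be sketched for the simplest intercept-only case. The crash counts below are made up, and the NB shape parameter is grid-searched rather than fully optimized; the dissertation's actual models include traffic-exposure covariates and zero-inflation components.

```python
import math

# Over-dispersed synthetic crash counts (variance well above the mean) —
# the situation in which the Negative Binomial typically beats the Poisson.
counts = [0, 0, 1, 0, 2, 7, 0, 1, 0, 3, 9, 0, 1, 2, 0, 5, 0, 0, 4, 1]
mean = sum(counts) / len(counts)

def poisson_loglik(lam):
    return sum(c * math.log(lam) - lam - math.lgamma(c + 1) for c in counts)

def nb_loglik(r, p):
    # NB pmf: Gamma(c+r)/(Gamma(r) c!) * p^r * (1-p)^c, with mean r(1-p)/p
    return sum(math.lgamma(c + r) - math.lgamma(r) - math.lgamma(c + 1)
               + r * math.log(p) + c * math.log(1 - p) for c in counts)

# Poisson MLE of the rate is the sample mean (one parameter -> AIC = 2 - 2LL).
aic_pois = 2 * 1 - 2 * poisson_loglik(mean)

# For the NB, grid-search the shape r; p = r/(r + mean) matches the sample
# mean and is the MLE of p for fixed r (two parameters -> AIC = 4 - 2LL).
aic_nb = min(2 * 2 - 2 * nb_loglik(r, r / (r + mean))
             for r in [k / 10 for k in range(1, 101)])

print(aic_pois > aic_nb)   # lower AIC wins
```

Because the counts are strongly over-dispersed, the NB's extra parameter more than pays for its AIC penalty here, mirroring the dissertation's finding that NB-family models beat the plain Poisson.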
                                
Abstract:
This dissertation introduces a new system for handwritten text recognition based on an improved neural network design. Most existing neural networks treat the mean square error function as the standard error function. The system proposed in this dissertation utilizes the mean quartic error function, whose third and fourth derivatives are non-zero. Consequently, many improvements to the training methods were achieved. The training results are carefully assessed before and after the update. To evaluate the performance of a training system, three essential factors are considered, listed from high to low priority: (1) error rate on the testing set, (2) processing time needed to recognize a segmented character, and (3) total training time and, subsequently, total testing time. It is observed that bounded training methods accelerate the training process, while semi-third order training methods, next-minimal training methods, and preprocessing operations reduce the error rate on the testing set. Empirical observations suggest that two combinations of training methods are needed for different-case character recognition. Since character segmentation is required for word and sentence recognition, this dissertation also provides an effective rule-based segmentation method, which differs from conventional adaptive segmentation methods. Dictionary-based correction is utilized to correct mistakes resulting from the recognition and segmentation phases. The integration of the segmentation methods with the handwritten character recognition algorithm yielded an accuracy of 92% for lower case characters and 97% for upper case characters. The testing database consists of 20,000 handwritten characters, 10,000 for each case. Recognizing the 10,000 handwritten characters of one case in the testing phase required 8.5 seconds of processing time.
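The mean quartic error mentioned above can be sketched next to the usual mean square error; the point is that its derivative in the residual is cubic, so the third and fourth derivatives are non-zero. This is only the loss function itself, not the dissertation's training system, and the sample vectors are invented.

```python
import numpy as np

def mse(y, t):
    return np.mean((y - t) ** 2)

def mqe(y, t):
    # Mean quartic error: fourth power of the residual, so third and
    # fourth derivatives with respect to the output are non-zero.
    return np.mean((y - t) ** 4)

def mqe_grad(y, t):
    # d/dy_i mean((y - t)^4) = 4 (y_i - t_i)^3 / n
    return 4 * (y - t) ** 3 / y.size

# Invented network outputs y and targets t.
y = np.array([0.2, 0.9, 0.4])
t = np.array([0.0, 1.0, 1.0])

# Central-difference check of the analytic gradient.
eps = 1e-6
num = np.array([
    (mqe(y + eps * np.eye(3)[i], t) - mqe(y - eps * np.eye(3)[i], t)) / (2 * eps)
    for i in range(3)
])
print(np.allclose(num, mqe_grad(y, t), atol=1e-6))
```

The numerical check confirms the cubic gradient, which is the quantity a backpropagation implementation of this loss would propagate in place of the usual linear MSE residual.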
                                
Abstract:
There is growing popularity in the use of composite indices and rankings for cross-organizational benchmarking. However, little attention has been paid to alternative methods and procedures for the computation of these indices and how the use of such methods may impact the resulting indices and rankings. This dissertation developed an approach for assessing composite indices and rankings based on the integration of a number of methods for aggregation, data transformation and attribute weighting involved in their computation. The integrated model developed is based on the simulation of composite indices using methods and procedures proposed in the areas of multi-criteria decision making (MCDM) and knowledge discovery in databases (KDD). The approach developed in this dissertation was automated through an IT artifact that was designed, developed and evaluated based on the framework and guidelines of the design science paradigm of information systems research. This artifact dynamically generates multiple versions of indices and rankings by considering different methodological scenarios according to user-specified parameters. The computerized implementation was done in Visual Basic for Excel 2007. Using different performance measures, the artifact produces a number of Excel outputs for the comparison and assessment of the indices and rankings. In order to evaluate the efficacy of the artifact and its underlying approach, a full empirical analysis was conducted using the World Bank's Doing Business database for the year 2010, which includes ten sub-indices (each corresponding to a different area of the business environment and regulation) for 183 countries. The output results, which were obtained using 115 methodological scenarios for the assessment of this index and its ten sub-indices, indicated that the variability of the component indicators considered in each case influenced the sensitivity of the rankings to the methodological choices.
Overall, the results of our multi-method assessment were consistent with the World Bank rankings, except in cases where the indices involved cost indicators measured in per capita income, which yielded more sensitive results. Low-income countries exhibited more sensitivity in their rankings, and less agreement between the benchmark rankings and our multi-method rankings, than higher-income country groups.
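The scenario-generation idea, recomputing a composite index under different normalisation and aggregation choices and measuring how much the ranking moves, can be sketched as follows. The scores are random stand-ins, not Doing Business data, and only three of the many possible methodological scenarios are shown.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic scores: 8 units (think countries) on 4 sub-indicators.
scores = rng.uniform(0, 100, size=(8, 4))

def minmax(x):
    return (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))

def zscore(x):
    return (x - x.mean(axis=0)) / x.std(axis=0)

def rank(index):
    # rank 1 = best (highest composite score)
    order = np.argsort(-index)
    r = np.empty(len(index), dtype=int)
    r[order] = np.arange(1, len(index) + 1)
    return r

# Three methodological scenarios: normalisation x aggregation choice.
r_mm_arith = rank(minmax(scores).mean(axis=1))                       # min-max + arithmetic
r_z_arith = rank(zscore(scores).mean(axis=1))                        # z-score + arithmetic
r_mm_geom = rank(np.exp(np.log(minmax(scores) + 1e-9).mean(axis=1)))  # min-max + geometric

# Sensitivity measure: mean absolute rank shift between scenarios.
shift_norm = np.abs(r_mm_arith - r_z_arith).mean()
shift_aggr = np.abs(r_mm_arith - r_mm_geom).mean()
print(shift_norm, shift_aggr)
```

A rank-shift statistic of this kind, computed across all scenario pairs, is one simple way to quantify the sensitivity of rankings to methodological choices that the assessment describes.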
                                
Abstract:
Advances in multiscale material modeling of structural concrete have created an upsurge of interest in the accurate evaluation of the mechanical properties and volume fractions of its nano constituents. The task is accomplished by analyzing the response of a material to indentation, obtained as an outcome of a nanoindentation experiment, using a procedure called the Oliver and Pharr (OP) method. Despite its widespread use, the accuracy of this method is often questioned when it is applied to data from heterogeneous materials or from materials that show pile-up and sink-in during indentation, which necessitates the development of an alternative method. In this study, a model is developed within the framework defined by contact mechanics to compute the nanomechanical properties of a material from its indentation response. Unlike the OP method, indentation energies are employed in the form of dimensionless constants to evaluate model parameters. Analysis of load-displacement data pertaining to a wide range of materials revealed that the energy constants may be used to determine the indenter tip bluntness, hardness and initial unloading stiffness of the material. The proposed model has two main advantages: (1) it does not require computation of the contact area, a source of error in the existing method; and (2) it explicitly incorporates the effect of peak indentation load, dwelling period and indenter tip bluntness on the measured mechanical properties. Indentation tests are also carried out on samples from cement paste to validate the energy-based model developed herein by determining the elastic modulus and hardness of different phases of the paste. As a consequence, it has been found that the model computes the mechanical properties in close agreement with those obtained by the OP method; a discrepancy, though insignificant, is observed more in the case of C-S-H than in the anhydrous phase.
Nevertheless, the proposed method is computationally efficient, and thus it is highly suitable when the grid indentation technique is required. In addition, several empirical relations are developed that are found to be crucial in understanding the nanomechanical behavior of cementitious materials.
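The Oliver and Pharr (OP) reduction that the proposed model is compared against can be sketched on a synthetic unloading curve: fit a power law to the unloading data, take the stiffness at peak load, then derive contact depth, hardness and reduced modulus. The curve parameters are invented; ε = 0.75 and the ideal Berkovich area function A(hc) = 24.5·hc² are textbook values, not data from the study.

```python
import numpy as np

# Synthetic unloading curve P = alpha * (h - hf)^m (depth h in nm, load P in mN).
alpha, m, hf = 0.02, 1.5, 100.0
h = np.linspace(150.0, 400.0, 60)          # unloading segment, peak at h = 400 nm
P = alpha * (h - hf) ** m
h_max, P_max = h[-1], P[-1]

# OP step 1: fit the power law in log space to recover alpha and m.
A_ls = np.column_stack([np.ones_like(h), np.log(h - hf)])
(log_alpha_hat, m_hat), *_ = np.linalg.lstsq(A_ls, np.log(P), rcond=None)

# OP step 2: unloading stiffness S = dP/dh evaluated at peak load.
S = np.exp(log_alpha_hat) * m_hat * (h_max - hf) ** (m_hat - 1)

# OP step 3: contact depth (epsilon = 0.75 for a Berkovich tip), ideal
# contact area, hardness and reduced modulus (units consistent but arbitrary).
h_c = h_max - 0.75 * P_max / S
A_c = 24.5 * h_c ** 2
H = P_max / A_c
E_r = np.sqrt(np.pi) * S / (2.0 * np.sqrt(A_c))

print(round(float(m_hat), 3), float(h_c) < float(h_max))
```

The contact-area step (A_c) is exactly the quantity the abstract flags as a source of error for heterogeneous or pile-up/sink-in materials, which is what motivates the energy-based alternative.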
 
                    