991 results for computational costs


Relevance: 60.00%

Abstract:

Models of ground source heat pump (GSHP) systems are used as an aid for the correct design and optimization of the system. For this purpose, it is necessary to develop models which correctly reproduce the dynamic thermal behavior of each component on a short-term basis. Since the borehole heat exchanger (BHE) is one of the main components, special attention should be paid to ensuring good accuracy in predicting the short-term response of the boreholes. The BHE models found in the literature which are suitable for short-term simulations usually present high computational costs. In this work, a novel TRNSYS type implementing a borehole-to-ground (B2G) model, developed for modeling the short-term dynamic performance of a BHE with low computational cost, is presented. The model has been validated against experimental data from a GSHP system located at Universitat Politècnica de València, Spain. Validation results show the ability of the model to reproduce the short-term behavior of the borehole, both for a step test and under normal operating conditions.
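
As a rough illustration of the lumped thermal-network idea behind such low-cost short-term BHE models (this is a minimal sketch, not the actual B2G/TRNSYS implementation), the following steps two thermal nodes (borehole filling and surrounding ground) forward in time; all capacities, resistances, and temperatures are hypothetical values:

```python
import numpy as np

# Two-node thermal network: one node for the borehole filling, one for the
# surrounding ground. All parameter values below are assumed, not from the paper.
C_b, C_g = 2.0e5, 8.0e6             # heat capacities [J/K] (assumed)
R_fb, R_bg, R_gf = 0.05, 0.1, 0.2   # thermal resistances [K/W] (assumed)
T_far = 15.0                        # undisturbed ground temperature [degC]

def step(T_b, T_g, T_fluid, dt=60.0):
    """Advance borehole (T_b) and ground (T_g) node temperatures by dt seconds."""
    q_fb = (T_fluid - T_b) / R_fb   # fluid -> borehole heat flow [W]
    q_bg = (T_b - T_g) / R_bg       # borehole -> ground
    q_gf = (T_g - T_far) / R_gf     # ground -> far field
    T_b += dt * (q_fb - q_bg) / C_b
    T_g += dt * (q_bg - q_gf) / C_g
    return T_b, T_g

# Step response: fluid suddenly held 10 K above the undisturbed ground.
T_b, T_g = T_far, T_far
for _ in range(1440):               # one day of 60 s steps
    T_b, T_g = step(T_b, T_g, T_fluid=25.0)
print(f"borehole {T_b:.2f} degC, ground {T_g:.2f} degC")
```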

Relevance: 60.00%

Abstract:

Natural protein sequences are the net result of the interplay between mutation, natural selection, and stochastic drift over evolutionary time. Probabilistic models of molecular evolution that account for these different factors have been substantially improved in recent years. In particular, models explicitly incorporating protein structure and the interdependencies between sites have been proposed, along with the statistical tools to evaluate the performance of such models. However, despite significant advances in this direction, only very simplified representations of protein structure have been used so far. In this context, the general subject of this thesis is the modeling of the three-dimensional structure of proteins, taking into account the practical limitations imposed by the use of phylogenetic methods that are very demanding in computing time. First, a general statistical method is presented for optimizing the parameters of a statistical potential (a pseudo-energy measuring sequence-structure compatibility). The functional form of the potential is then refined, increasing the level of detail of the structural description without inflating the computational costs. Several structural elements are explored: pairwise residue interactions, solvent accessibility, main-chain conformation, and flexibility. The potentials are then included in a model of evolution, and their performance is evaluated in terms of statistical fit to real data and contrasted with standard models of evolution. Finally, the resulting structurally constrained model is used to better understand the relationship between the expression level of genes and the selection and conservation of their protein sequences.
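
To make the notion of a statistical potential concrete, here is a minimal sketch of a pairwise contact pseudo-energy scoring sequence-structure compatibility. The potential matrix, the contact map, and the 50-residue toy protein are all invented for illustration, and no optimization against real data is performed:

```python
import numpy as np

rng = np.random.default_rng(0)
AA = 20                                   # amino-acid alphabet size

# Hypothetical symmetric pairwise contact potential eps[a, b] (pseudo-energy).
eps = rng.normal(size=(AA, AA))
eps = (eps + eps.T) / 2

def pseudo_energy(seq, contacts):
    """Sequence-structure compatibility: sum of eps over contacting residue pairs."""
    return sum(eps[seq[i], seq[j]] for i, j in contacts)

# Toy protein: 50 residues with a random long-range contact map.
seq = rng.integers(0, AA, size=50)
contacts = [(i, j) for i in range(50) for j in range(i + 4, 50)
            if rng.random() < 0.05]
print(f"E(native)   = {pseudo_energy(seq, contacts):.2f}")

# A shuffled sequence on the same fold would typically score worse once eps
# has been optimized against real data (which is not done in this sketch).
shuffled = rng.permutation(seq)
print(f"E(shuffled) = {pseudo_energy(shuffled, contacts):.2f}")
```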

Relevance: 60.00%

Abstract:

We propose a novel, simple, efficient and distribution-free re-sampling technique for developing prediction intervals for returns and volatilities following ARCH/GARCH models. In particular, our key idea is to employ a Box–Jenkins linear representation of an ARCH/GARCH equation and then to adapt a sieve bootstrap procedure to the nonlinear GARCH framework. Our simulation studies indicate that the new re-sampling method provides sharp and well calibrated prediction intervals for both returns and volatilities while reducing computational costs by up to 100 times, compared to other available re-sampling techniques for ARCH/GARCH models. The proposed procedure is illustrated by an application to Yen/U.S. dollar daily exchange rate data.
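
A minimal numpy sketch of the key idea, under assumptions not taken from the paper: the squared returns of a GARCH process admit a linear (ARMA-type) representation, so a long autoregression is fitted to them and its residuals are resampled to form a one-step prediction interval for the next squared return. The GARCH(1,1) parameters, AR order, and bootstrap size below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate GARCH(1,1) returns as stand-in data (parameters assumed).
n, omega, alpha, beta = 1000, 0.05, 0.1, 0.85
r, sig2 = np.zeros(n), np.zeros(n)
sig2[0] = omega / (1 - alpha - beta)
for t in range(1, n):
    sig2[t] = omega + alpha * r[t - 1] ** 2 + beta * sig2[t - 1]
    r[t] = np.sqrt(sig2[t]) * rng.standard_normal()

# Sieve idea: r_t^2 follows an ARMA process, so fit a long AR(p) to r_t^2
# by least squares and bootstrap its residuals for a one-step interval.
x, p, B = r ** 2, 10, 999
Y = x[p:]
X = np.column_stack([np.ones(n - p)] + [x[p - k:n - k] for k in range(1, p + 1)])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ coef
x_last = np.concatenate(([1.0], x[-1:-p - 1:-1]))   # regressors for time n+1
boots = x_last @ coef + rng.choice(resid - resid.mean(), size=B, replace=True)
lo, hi = np.percentile(np.clip(boots, 0, None), [2.5, 97.5])
print(f"95% interval for the next squared return: [{lo:.3f}, {hi:.3f}]")
```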

Relevance: 60.00%

Abstract:

In this paper, investment cost asymmetry is introduced in order to test whether this kind of asymmetry can account for asymmetries in business cycles. Using a smooth transition function, asymmetric investment costs are modeled and introduced into a canonical RBC model. Simulations of the model with the Perturbation Method (PM) are very close to simulations through the Parameterized Expectations Algorithm (PEA), which justifies using the former to reduce computation time and costs. Both symmetric and asymmetric models were simulated and compared. Deterministic and stochastic impulse-response exercises revealed that it is possible to adequately reproduce asymmetric business cycles by modeling asymmetric investment costs. Simulations also showed that higher-order moments are insufficient to detect asymmetries. Instead, methods such as Generalized Impulse Response Analysis (GIRA) and nonlinear econometrics prove to be more efficient diagnostic tools.
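
As a sketch of how a smooth transition function can generate asymmetric adjustment costs (the logistic functional form and all parameter values here are assumptions, not the paper's calibration):

```python
import numpy as np

def smooth_transition(x, gamma=50.0, c=0.0):
    """Logistic transition function: ~0 for x < c, ~1 for x > c; gamma sets
    how sharply the regimes switch (values assumed)."""
    return 1.0 / (1.0 + np.exp(-gamma * (x - c)))

def investment_cost(di, phi_down=2.0, phi_up=0.5):
    """Asymmetric quadratic adjustment cost on investment changes di:
    downward adjustments (di < 0) are penalized more heavily than upward
    ones, with the logistic weight making the switch smooth."""
    w = smooth_transition(di)
    phi = (1 - w) * phi_down + w * phi_up
    return 0.5 * phi * di ** 2

for di in (-0.1, -0.01, 0.01, 0.1):
    print(f"di={di:+.2f}  cost={investment_cost(di):.5f}")
```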

Relevance: 60.00%

Abstract:

This thesis, although framed within the theory of Molecular Quantum Similarity Measures (MQSM), branches into three clearly defined areas:
- The generation of Molecular IsoDensity COntours (MIDCOs) from fitted electron densities.
- The development of a molecular superposition method, as an alternative to the maximum-similarity rule.
- Quantitative Structure-Activity Relationships (QSAR).
The objective in the MIDCO field is to apply fitted density functions, originally devised to cut the cost of MQSM calculations, to the generation of MIDCOs. A comparative graphical study is carried out between density functions fitted to different basis sets and densities obtained from ab initio calculations. The visual agreement between the fitted and ab initio functions across the range of density representations, together with the fully comparable similarity measures obtained previously, justifies the use of these fitted functions. Beyond this initial purpose, two studies complementing the simple representation of densities were carried out: a curvature analysis and an extension to macromolecules. The first verifies not only the similarity of the MIDCOs but also the coherence of their curvature behavior, making it possible to locate inflection points in the density representation and to see graphically where the density is concave or convex; it reveals that the fitted densities behave in a manner fully analogous to those computed ab initio. In the second part of this work, the method was extended to larger molecules, up to about 2500 atoms. Finally, part of the MEDLA philosophy is applied: since the electron density decays rapidly away from the nuclei, its calculation can be skipped at large distances from them. Space is therefore partitioned, and the fitted functions of each atom are evaluated only within a small region surrounding that atom; this reduces the computation time and makes the procedure scale linearly with the number of atoms in the molecule. The molecular superposition topic concerns the design of an algorithm, and its implementation as a program, named the Topo-Geometrical Superposition Algorithm (TGSA), that provides the alignments chemical intuition would suggest. The result is a program, coded in Fortran 90, which aligns molecules pairwise considering only atomic numbers and distances. The complete absence of theoretical parameters yields a general molecular superposition method that gives intuitive alignments quickly and with little user intervention. TGSA has mostly been used to compute similarities for later use in QSAR; these generally do not match the value the maximum-similarity rule would give, especially when heavy atoms are involved. Finally, the last topic, devoted to Quantum Similarity within the QSAR framework, addresses three different aspects:
- The use of similarity matrices. The so-called similarity matrix, computed from the pairwise similarities over a set of molecules and suitably transformed, is used as a source of molecular descriptors for QSAR studies. Within this scope, several correlation studies of pharmacological and toxicological interest, as well as of various physical properties, have been carried out.
- The use of the electron-electron interaction energy, taken as a form of self-similarity. This modest contribution consists in taking the value of this quantity and, by analogy with the notation of molecular quantum self-similarity, treating it as a particular case of that measure. The interaction energy is easily obtained from quantum-chemistry software and is well suited to a first preliminary correlation study in which it serves as the sole descriptor.
- The calculation of self-similarities in which the density is modified to enhance the role of a substituent. Previous work with fragment densities, despite giving very good results, lacks a certain conceptual rigor in isolating a fragment, supposedly responsible for the molecular activity, from the rest of the molecular structure, even though the densities associated with that fragment already differ because they belong to skeletons with different substitutions. A procedure that fills the gap left by the simple separation of the fragment, considering the whole molecule (computing its self-similarity) while avoiding unwanted self-similarity values caused by heavy atoms, is the use of Fermi hole densities defined around the fragment of interest. This modification concentrates the density in the region of interest while still yielding a function that behaves mathematically like the regular electron density, so it can be incorporated into the molecular similarity framework. Self-similarities computed with this methodology have led to good correlations for substituted aromatic acids, providing an explanation for their behavior.
Conceptual contributions have also been made. A new similarity measure based on kinetic energy has been implemented: the recently developed kinetic-energy density function, which behaves mathematically like the regular electron density, is incorporated into the similarity framework, and satisfactory QSAR models have been obtained from it for several molecular sets. Regarding the treatment of similarity matrices, the so-called stochastic transformation has been implemented as an alternative to the use of the Carbó index. This transformation yields a new non-symmetric similarity matrix, which can subsequently be used to build QSAR models.
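
For intuition about similarity measures built from fitted densities, here is a minimal sketch computing a Carbó-type index between two toy "molecules" modeled as sums of s-type atom-centred Gaussians. The analytic Gaussian overlap formula is standard, but the molecular parameters are invented and this is not the thesis's fitting procedure:

```python
import numpy as np

def gaussian_overlap(cA, aA, cB, aB):
    """Analytic overlap integral of two s-type Gaussians exp(-a * |r - c|^2)."""
    p = aA + aB
    d2 = np.sum((cA - cB) ** 2)
    return (np.pi / p) ** 1.5 * np.exp(-aA * aB / p * d2)

def similarity_measure(mol1, mol2):
    """Z_AB = integral of rho_A * rho_B, with each density written as a sum of
    atom-centred Gaussians given as (centre, exponent, coefficient) triples."""
    return sum(w1 * w2 * gaussian_overlap(c1, a1, c2, a2)
               for c1, a1, w1 in mol1 for c2, a2, w2 in mol2)

def carbo_index(mol1, mol2):
    """Cosine-like normalization: Z_AB / sqrt(Z_AA * Z_BB), in [0, 1]."""
    zab = similarity_measure(mol1, mol2)
    return zab / np.sqrt(similarity_measure(mol1, mol1) *
                         similarity_measure(mol2, mol2))

# Two toy diatomics with hypothetical fitted-density parameters.
molA = [(np.zeros(3), 1.0, 6.0), (np.array([1.4, 0.0, 0.0]), 1.2, 8.0)]
molB = [(np.zeros(3), 1.0, 6.0), (np.array([1.5, 0.0, 0.0]), 1.1, 7.0)]
print(f"Carbo index: {carbo_index(molA, molB):.4f}")
```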

Relevance: 60.00%

Abstract:

The [2+2+2] cycloaddition reaction involves the formation of three carbon-carbon bonds in a single step using alkynes, alkenes, nitriles, carbonyls and other unsaturated reagents as reactants. It is one of the most elegant methods for the construction of polycyclic aromatic and heteroaromatic compounds, which have important academic and industrial uses. The thesis is divided into ten chapters including six related publications. The first study, based on the Wilkinson catalyst RhCl(PPh3)3, compares the reaction mechanism of the [2+2+2] cycloaddition of acetylene with the cycloaddition obtained for the model complex RhCl(PH3)3. In an attempt to reduce the computational cost of DFT studies, this project aimed to replace the PPh3 ligands with PH3, even though the electronic and steric effects produced by PPh3 ligands differ significantly from those of PH3. In this first study, detailed theoretical calculations were performed to determine the reaction mechanism for the two complexes. Despite some differences, it was found that modelling PPh3 by PH3 in the catalyst reduces the computational cost significantly while providing qualitatively acceptable results. Building on these results, the model of Wilkinson's catalyst, RhCl(PH3)3, was applied to study different [2+2+2] cycloaddition reactions with unsaturated systems investigated in the laboratory. Our research group found that totally closed systems, specifically 15- and 25-membered azamacrocycles, can afford benzenic compounds, whereas the 20-membered azamacrocycle (20-MAA) was inactive with Wilkinson's catalyst. In this study, theoretical calculations made it possible to trace the different reactivity of the 20-MAA to the activation barrier of the oxidative addition of two alkynes, which is higher than those obtained for the 15- and 25-membered macrocycles. This barrier was attributed primarily to the interaction energy, the energy released when the two deformed reagents interact in the transition state. The main factor explaining the different reactivity observed was that the 20-MAA has a more stable and delocalized HOMO in the oxidative addition step. Moreover, the formation of a strained ten-membered ring during the cycloaddition of 20-MAA presents significant steric hindrance. Furthermore, in Chapter 5, an electrochemical study is presented in collaboration with Prof. Anny Jutand from Paris. This work made it possible to study the main steps of the catalytic cycle of the [2+2+2] cycloaddition between diynes and a monoalkyne. The first kinetic data were obtained for the [2+2+2] cycloaddition catalyzed by Wilkinson's catalyst, showing that the rate-determining step can change depending on the structure of the starting reagents. In the case of [2+2+2] cycloadditions involving two alkynes and one alkene in the same molecule (enediynes), it is well known that the oxidative coupling may occur either between the two alkynes, giving the corresponding metallacyclopentadiene, or between one alkyne and the alkene, affording the metallacyclopentene complex. The Wilkinson model was used in DFT calculations to analyze the different factors that may influence the reaction mechanism. It was observed that cyclic enediynes always prefer oxidative coupling between the two alkyne moieties, while the acyclic cases show different preferences depending on the linker and the substituents on the alkynes. The Wilkinson model was also used to explain the experimental results of Chapter 7, where the [2+2+2] cycloaddition of enediynes is studied while varying the position of the double bond in the starting reagent. Enediynes of the yne-ene-yne type preferred the standard [2+2+2] cycloaddition, while enediynes of the yne-yne-ene type underwent β-hydride elimination followed by reductive elimination from Wilkinson's catalyst, giving cyclohexadiene compounds that are isomers of those obtained through the standard [2+2+2] cycloaddition. Finally, the last chapter of this thesis uses DFT calculations to determine the reaction mechanism when the macrocycles are treated with transition metals that are inactive toward the [2+2+2] cycloaddition but thermally active, leading to new polycyclic compounds. Thus, a domino process was described combining an ene reaction and a Diels-Alder cycloaddition.

Relevance: 60.00%

Abstract:

Simultaneous Localization and Mapping (SLAM) does not produce consistent maps of large areas, because uncertainty grows steadily during long-term missions. In addition, as the size of the map grows the computational cost increases, making SLAM solutions unsuitable for on-line applications. This thesis surveys SLAM approaches, paying special attention to those aimed at large scenarios, with particular focus on existing underwater SLAM applications. A technique based on using independent local maps together with a global stochastic map is presented, called Selective Submap Joining SLAM (SSJS). The global map contains the relative transformations between local maps, which are updated once a new loop is detected. Maps sharing several features are fused, maintaining the correlation between landmarks and the vehicle's pose. The use of local maps reduces computational costs and improves map consistency compared to state-of-the-art techniques.
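
A tiny sketch of the bookkeeping behind a global map of relative transformations between local maps (not the SSJS update or fusion machinery): composing 2-D relative poses along the chain of submaps. The transform values are made up:

```python
import numpy as np

def compose(t1, t2):
    """Compose two 2-D relative transformations (x, y, theta), expressing
    the second pose in the frame in which the first is given."""
    x1, y1, th1 = t1
    x2, y2, th2 = t2
    c, s = np.cos(th1), np.sin(th1)
    return np.array([x1 + c * x2 - s * y2,
                     y1 + s * x2 + c * y2,
                     th1 + th2])

# Global map as a chain of local-map relative transforms (values assumed).
relative = [np.array([10.0, 0.0, 0.10]),
            np.array([8.0, 1.0, 0.20]),
            np.array([9.0, -0.5, -0.15])]
pose = np.zeros(3)           # base frame of the first local map
for t in relative:
    pose = compose(pose, t)
print("last local map in the global frame:", np.round(pose, 3))
```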

Relevance: 60.00%

Abstract:

This paper reports the current state of work to simplify our previous model-based methods for visual tracking of vehicles for use in a real-time system intended to provide continuous monitoring and classification of traffic from a fixed camera on a busy multi-lane motorway. The main constraints of the system design were: (i) all low level processing to be carried out by low-cost auxiliary hardware, (ii) all 3-D reasoning to be carried out automatically off-line, at set-up time. The system developed uses three main stages: (i) pose and model hypothesis using 1-D templates, (ii) hypothesis tracking, and (iii) hypothesis verification, using 2-D templates. Stages (i) & (iii) have radically different computing performance and computational costs, and need to be carefully balanced for efficiency. Together, they provide an effective way to locate, track and classify vehicles.
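
As a loose illustration of the kind of cheap stage-(i) matching such a pipeline relies on (this sketch is not the authors' templates or hardware), a normalized cross-correlation of a 1-D template along a signal row, with synthetic data:

```python
import numpy as np

def ncc_1d(signal, template):
    """Slide a 1-D template along a signal row and return the offset with
    the highest normalized cross-correlation and its score."""
    m = len(template)
    t = (template - template.mean()) / (template.std() + 1e-12)
    best_off, best_score = 0, -np.inf
    for off in range(len(signal) - m + 1):
        w = signal[off:off + m]
        s = (w - w.mean()) / (w.std() + 1e-12)
        score = float(np.dot(s, t)) / m
        if score > best_score:
            best_off, best_score = off, score
    return best_off, best_score

# Toy intensity profile across a lane, with the template hidden at offset 30.
rng = np.random.default_rng(2)
template = np.array([0, 1, 3, 5, 3, 1, 0], dtype=float)
row = rng.normal(0.0, 0.3, size=100)
row[30:37] += template
print(ncc_1d(row, template))   # offset near 30, score near 1
```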

Relevance: 60.00%

Abstract:

The Gauss–Newton algorithm is an iterative method regularly used for solving nonlinear least squares problems. It is particularly well suited to the treatment of very large scale variational data assimilation problems that arise in atmosphere and ocean forecasting. The procedure consists of a sequence of linear least squares approximations to the nonlinear problem, each of which is solved by an “inner” direct or iterative process. In comparison with Newton’s method and its variants, the algorithm is attractive because it does not require the evaluation of second-order derivatives in the Hessian of the objective function. In practice the exact Gauss–Newton method is too expensive to apply operationally in meteorological forecasting, and various approximations are made in order to reduce computational costs and to solve the problems in real time. Here we investigate the effects on the convergence of the Gauss–Newton method of two types of approximation used commonly in data assimilation. First, we examine “truncated” Gauss–Newton methods where the inner linear least squares problem is not solved exactly, and second, we examine “perturbed” Gauss–Newton methods where the true linearized inner problem is approximated by a simplified, or perturbed, linear least squares problem. We give conditions ensuring that the truncated and perturbed Gauss–Newton methods converge and also derive rates of convergence for the iterations. The results are illustrated by a simple numerical example. A practical application to the problem of data assimilation in a typical meteorological system is presented.
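
A minimal sketch of a truncated Gauss–Newton iteration, with the inner linear least squares problem solved only approximately by a few conjugate-gradient steps on the normal equations; the toy curve-fitting problem, iteration counts, and tolerances are illustrative, not the meteorological setting of the paper:

```python
import numpy as np

def gauss_newton(f, jac, x0, outer=20, inner=5, tol=1e-10):
    """Truncated Gauss-Newton: each outer step solves (J^T J) dx = -J^T r
    only approximately, with a limited number of CG iterations."""
    x = x0.astype(float)
    for _ in range(outer):
        r, J = f(x), jac(x)
        g = J.T @ r
        if np.linalg.norm(g) < tol:
            break
        dx, res = np.zeros_like(x), -g.copy()   # CG on (J^T J) dx = -g
        p = res.copy()
        for _ in range(inner):
            rs = res @ res
            if rs < 1e-24:
                break                            # inner system already solved
            Ap = J.T @ (J @ p)
            alpha = rs / (p @ Ap)
            dx += alpha * p
            res = res - alpha * Ap
            p = res + (res @ res / rs) * p
        x += dx
    return x

# Toy problem: fit y = exp(a*t) + b (true a = 0.5, b = 2.0).
t = np.linspace(0.0, 1.0, 50)
y = np.exp(0.5 * t) + 2.0
f = lambda x: np.exp(x[0] * t) + x[1] - y
jac = lambda x: np.column_stack([t * np.exp(x[0] * t), np.ones_like(t)])
print(gauss_newton(f, jac, np.array([0.0, 0.0])))   # approx. [0.5, 2.0]
```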

Relevance: 60.00%

Abstract:

This paper presents a new face verification algorithm based on Gabor wavelets and AdaBoost. In the algorithm, faces are represented by Gabor wavelet features generated by Gabor wavelet transform. Gabor wavelets with 5 scales and 8 orientations are chosen to form a family of Gabor wavelets. By convolving face images with these 40 Gabor wavelets, the original images are transformed into magnitude response images of Gabor wavelet features. The AdaBoost algorithm selects a small set of significant features from the pool of the Gabor wavelet features. Each feature is the basis for a weak classifier which is trained with face images taken from the XM2VTS database. The feature with the lowest classification error is selected in each iteration of the AdaBoost operation. We also address issues regarding computational costs in feature selection with AdaBoost. A support vector machine (SVM) is trained with examples of 20 features, and the results have shown a low false positive rate and a low classification error rate in face verification.
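
A sketch of the Gabor wavelet family described (5 scales x 8 orientations = 40 kernels) and of taking a magnitude response, using a common textbook parameterization that may differ from the paper's exact choices; the kernel size, sigma = 2*pi, and k_max = pi/2 are assumptions:

```python
import numpy as np

def gabor_kernel(scale, orient, size=17):
    """One complex Gabor wavelet; 5 scales x 8 orientations give a
    40-kernel family (parameterization assumed, not from the paper)."""
    k_max, f = np.pi / 2, np.sqrt(2)
    k = k_max / f ** scale
    theta = orient * np.pi / 8
    kx, ky = k * np.cos(theta), k * np.sin(theta)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    sigma = 2 * np.pi
    env = (k ** 2 / sigma ** 2) * np.exp(-k ** 2 * (x ** 2 + y ** 2) / (2 * sigma ** 2))
    wave = np.exp(1j * (kx * x + ky * y)) - np.exp(-sigma ** 2 / 2)  # DC-free carrier
    return env * wave

family = [gabor_kernel(s, o) for s in range(5) for o in range(8)]
print(len(family), family[0].shape)    # 40 kernels of 17 x 17

# Magnitude response of one kernel at the centre of a toy image patch.
rng = np.random.default_rng(3)
patch = rng.random((17, 17))
mag = np.abs(np.sum(patch * family[0]))
print(f"|response| = {mag:.4f}")
```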

Relevance: 60.00%

Abstract:

Highly heterogeneous mountain snow distributions strongly affect soil moisture patterns; local ecology; and, ultimately, the timing, magnitude, and chemistry of stream runoff. Capturing these vital heterogeneities in a physically based distributed snow model requires appropriately scaled model structures. This work looks at how model scale—particularly the resolutions at which the forcing processes are represented—affects simulated snow distributions and melt. The research area is in the Reynolds Creek Experimental Watershed in southwestern Idaho. In this region, where there is a negative correlation between snow accumulation and melt rates, overall scale degradation pushed simulated melt to earlier in the season. The processes mainly responsible for snow distribution heterogeneity in this region—wind speed, wind-affected snow accumulations, thermal radiation, and solar radiation—were also independently rescaled to test process-specific spatiotemporal sensitivities. It was found that in order to accurately simulate snowmelt in this catchment, the snow cover needed to be resolved to 100 m. Wind and wind-affected precipitation—the primary influence on snow distribution—required similar resolution. Thermal radiation scaled with the vegetation structure (~100 m), while solar radiation was adequately modeled with 100–250-m resolution. Spatiotemporal sensitivities to model scale were found that allowed for further reductions in computational costs through the winter months with limited losses in accuracy. It was also shown that these modeling-based scale breaks could be associated with physiographic and vegetation structures to aid a priori modeling decisions.
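
As a simple illustration of degrading a forcing field to a coarser model scale (a stand-in for the scale experiments, not the paper's snow model), block-averaging a synthetic wind grid from a 10 m to roughly a 100 m resolution:

```python
import numpy as np

def degrade(grid, factor):
    """Block-average a forcing field to a coarser resolution, e.g.
    10 m -> 100 m cells when factor = 10 (trailing cells are trimmed)."""
    ny, nx = grid.shape
    ny2, nx2 = ny // factor * factor, nx // factor * factor
    g = grid[:ny2, :nx2]
    return g.reshape(ny2 // factor, factor, nx2 // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(4)
wind = rng.gamma(2.0, 2.0, size=(200, 200))   # synthetic fine-scale wind field
coarse = degrade(wind, 10)
print(wind.shape, "->", coarse.shape)
print(f"mean preserved: {wind.mean():.3f} vs {coarse.mean():.3f}")
```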

Relevance: 60.00%

Abstract:

The multiprocessor task graph scheduling problem has been extensively studied as an academic optimization problem, which arises when optimizing the execution time of a parallel algorithm on a parallel computer. The problem is known to be NP-hard. Many good approaches, based on a variety of optimization algorithms, have been proposed to find the optimal solution with less computational time. One of them is the branch and bound algorithm. In this paper, we propose a branch and bound algorithm for the multiprocessor scheduling problem. We investigate the algorithm by comparing two different lower bounds in terms of their computational costs and the size of the pruned tree. Several experiments are made with a small set of problems and the results are compared in different sections.
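
To make the branch-and-bound idea concrete, here is a tiny sketch for the simpler case of independent tasks on m processors (the paper treats task graphs with precedence constraints, which this sketch omits). The lower bound combines the largest current load with the average-load bound, and the task durations are invented:

```python
def branch_and_bound(tasks, m):
    """Minimal branch and bound for scheduling independent tasks on m
    processors (makespan objective), with a simple lower bound for pruning."""
    tasks = sorted(tasks, reverse=True)      # longest tasks first
    best = [sum(tasks)]                      # trivial upper bound: one processor
    loads = [0] * m

    def bound(i):
        # Lower bound from partial state: current max load vs. average load
        # if all remaining work were spread perfectly over the m processors.
        remaining = sum(tasks[i:])
        return max(max(loads), (sum(loads) + remaining) / m)

    def branch(i):
        if i == len(tasks):
            best[0] = min(best[0], max(loads))
            return
        if bound(i) >= best[0]:
            return                           # prune: cannot beat the incumbent
        for p in range(m):
            loads[p] += tasks[i]
            branch(i + 1)
            loads[p] -= tasks[i]
            if loads[p] == 0:                # skip symmetric empty processors
                break

    branch(0)
    return best[0]

print(branch_and_bound([4, 7, 3, 8, 5, 2, 6], m=3))   # optimal makespan: 12
```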

Relevance: 60.00%

Abstract:

Two-Dimensional Locality Preserving Projection (2D-LPP) is a recent extension of LPP, a popular face recognition algorithm. It has been shown that 2D-LPP performs better than PCA, 2D-PCA and LPP. However, the computational cost of 2D-LPP is high. This paper proposes a novel algorithm called Ridge Regression for Two-Dimensional Locality Preserving Projection (RR-2DLPP), an extension of 2D-LPP using ridge regression. RR-2DLPP is comparable to 2D-LPP in performance whilst having a lower computational cost. Experimental results on three benchmark face data sets (the ORL, Yale and FERET databases) demonstrate the effectiveness and efficiency of RR-2DLPP compared with other face recognition algorithms such as PCA, LPP, SR, 2D-PCA and 2D-LPP.
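
As a sketch of the ridge-regression ingredient that such regression-based variants lean on (this is not the full RR-2DLPP algorithm), solving the regularized normal equations in place of a costlier eigen-decomposition; the data, dimensions, and regularization strength are arbitrary:

```python
import numpy as np

def ridge_fit(X, Y, lam=1e-2):
    """Ridge regression: W = (X^T X + lam * I)^{-1} X^T Y, a cheap linear
    solve (setup and lam are assumed, for illustration only)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Toy use: map 32-dimensional "image rows" to a 5-dimensional embedding.
rng = np.random.default_rng(5)
X = rng.normal(size=(200, 32))             # 200 stacked image rows
W_true = rng.normal(size=(32, 5))
Y = X @ W_true + 0.01 * rng.normal(size=(200, 5))
W = ridge_fit(X, Y)
print(f"recovery error: {np.linalg.norm(W - W_true) / np.linalg.norm(W_true):.4f}")
```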

Relevance: 60.00%

Abstract:

We present a novel approach to improving subspace clustering by exploiting spatial constraints. The new method encourages the sparse solution to be consistent with the spatial geometry of the tracked points by embedding weights into the sparse formulation. By doing so, we are able to correct sparse representations in a principled manner without introducing much additional computational cost. We discuss alternative ways to treat missing and corrupted data using the latest theory in robust lasso regression, and suggest numerical algorithms to solve the proposed formulation. Experiments on the benchmark Johns Hopkins 155 dataset demonstrate that exploiting spatial constraints significantly improves motion segmentation.
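
A minimal sketch of a weighted sparse (lasso-type) representation in which per-column weights can encode spatial proximity, solved with plain ISTA. The dictionary, weights, and penalty are invented, and this is not the authors' exact formulation:

```python
import numpy as np

def weighted_lasso(D, y, w, lam=0.1, iters=500):
    """ISTA for min_c 0.5 * ||y - D c||^2 + lam * sum_j w_j * |c_j|.
    Larger w_j discourages using column j, e.g. points spatially far
    from the one being represented (weights here are assumed)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(iters):
        g = D.T @ (D @ c - y)              # gradient of the quadratic term
        z = c - g / L
        thr = lam * w / L
        c = np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)   # soft threshold
    return c

# Toy example: y lies in the span of the first two columns of D.
rng = np.random.default_rng(6)
D = rng.normal(size=(30, 10))
y = D[:, 0] - 0.5 * D[:, 1]
w = np.ones(10)
w[5:] = 10.0                               # "spatially distant" columns penalized
print(np.round(weighted_lasso(D, y, w), 3))
```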