956 results for COMPUTATIONAL APPROACH
Abstract:
The Kalman filter is a powerful recursive mathematical tool that plays an increasingly vital role in innumerable fields of study. The filter has been applied in a multitude of studies involving both general and financial time series modelling. Time series data in Computational Market Dynamics (CMD) can be modelled using the Jablonska-Capasso-Morale (JCM) model, whose parameters have traditionally been estimated by the maximum likelihood approach. The purpose of this study is to determine whether the Kalman filter can be used effectively in CMD. An ensemble Kalman filter (EnKF) with 50 ensemble members was applied to US sugar prices spanning January 1960 to February 2012. The real data and the Kalman filter trajectories showed no significant discrepancies, indicating satisfactory performance of the technique. Since only US sugar prices were used, it would be interesting to examine how the results change when other data sets are employed.
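The analysis step of such a filter is compact enough to sketch. Below is a minimal ensemble Kalman filter cycle for a scalar series, assuming a simple random-walk state model rather than the JCM dynamics; the noise variances and the synthetic series are illustrative, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_step(ensemble, observation, obs_var, proc_var):
    """One EnKF cycle for a scalar state: random-walk forecast,
    then the perturbed-observation analysis update (H = 1)."""
    forecast = ensemble + rng.normal(0.0, np.sqrt(proc_var), ensemble.size)
    p_f = np.var(forecast, ddof=1)            # forecast error variance
    gain = p_f / (p_f + obs_var)              # Kalman gain
    perturbed = observation + rng.normal(0.0, np.sqrt(obs_var), forecast.size)
    return forecast + gain * (perturbed - forecast)

# Toy run: 50 ensemble members tracking a synthetic price-like series.
prices = 10.0 + np.cumsum(rng.normal(0.0, 0.2, 100))
ensemble = rng.normal(prices[0], 1.0, 50)
estimates = []
for y in prices:
    ensemble = enkf_step(ensemble, y, obs_var=0.5, proc_var=0.1)
    estimates.append(ensemble.mean())
```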
Abstract:
Motivated by a recently proposed biologically inspired face recognition approach, we investigated the relation between human behavior and a computational model based on Fourier-Bessel (FB) spatial patterns. We measured human recognition performance on FB-filtered face images using an 8-alternative forced-choice method. Test stimuli were generated by converting the images from the spatial to the FB domain, filtering the resulting coefficients with a band-pass filter, and finally taking the inverse FB transformation of the filtered coefficients. The performance of the computational models was tested using a simulation of the psychophysical experiment. In the FB model, face images were first filtered by simulated V1-type neurons and later analyzed globally for their content of FB components. In general, there was higher human contrast sensitivity to radially than to angularly filtered images, but both functions peaked at the 11.3-16 frequency interval. The FB-based model presented similar behavior with regard to peak position and relative sensitivity, but had a wider frequency bandwidth and a narrower response range. The response patterns of two alternative models, based on local FB analysis and on raw luminance, strongly diverged from the human behavior patterns. These results suggest that human performance can be constrained by the type of information conveyed by polar patterns, and consequently that humans might use FB-like spatial patterns in face processing.
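The decompose-filter-reconstruct pipeline described above can be sketched in miniature. The snippet below band-passes a 1-D radial profile with a J0 Fourier-Bessel series; it is a simplification of the study's full 2-D radial/angular FB filtering of face images, and the profile, mode count, and pass band are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import jv, jn_zeros

def fb_bandpass(signal, r, n_terms=32, band=(11, 16)):
    """Band-pass a radial profile via a Fourier-Bessel (J0) series:
    decompose signal(r) on [0, R], zero coefficients outside `band`
    (1-indexed mode numbers), and reconstruct."""
    R = r[-1]
    alphas = jn_zeros(0, n_terms)             # first n_terms zeros of J0
    coeffs = np.empty(n_terms)
    for n, a in enumerate(alphas):
        basis = jv(0, a * r / R)
        norm = (R**2 / 2.0) * jv(1, a)**2     # orthogonality constant
        coeffs[n] = trapezoid(signal * basis * r, r) / norm
    recon = np.zeros_like(r)
    for n in range(band[0] - 1, band[1]):     # keep only pass-band modes
        recon += coeffs[n] * jv(0, alphas[n] * r / R)
    return recon

r = np.linspace(0.0, 1.0, 512)
profile = np.exp(-8 * r) * np.cos(40 * r)     # toy radial luminance profile
filtered = fb_bandpass(profile, r)
```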
Abstract:
Exposure to air pollutants is associated with hospitalizations due to pneumonia in children. We hypothesized that the length of hospitalization due to pneumonia may depend on air pollutant concentrations. We therefore built a computational model using fuzzy logic tools to predict the mean length of hospitalization due to pneumonia in children living in São José dos Campos, SP, Brazil. The model was built with four inputs related to pollutant concentrations and effective temperature, and an output related to the mean length of hospitalization. Each input had two membership functions and the output had four membership functions, generating 16 rules. The model was validated against real data, and a receiver operating characteristic (ROC) curve was constructed to evaluate model performance. The values predicted by the model were significantly correlated with the real data. Sulfur dioxide and particulate matter significantly predicted the mean length of hospitalization at lags 0, 1, and 2. This model can contribute to the care provided to children with pneumonia.
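A minimal sketch of the fuzzy inference idea follows, reduced to two inputs with two membership functions each (four rules instead of the model's sixteen); the universes, membership parameters, and consequent values are invented for illustration, and a weighted-average defuzzification stands in for whatever scheme the paper actually used.

```python
def trimf(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Example daily concentrations; units and ranges are assumptions.
so2, pm10 = 12.0, 35.0

# Two membership functions per input: "low" and "high".
so2_low,  so2_high  = trimf(so2, 0, 5, 20),   trimf(so2, 10, 25, 40)
pm10_low, pm10_high = trimf(pm10, 0, 10, 45), trimf(pm10, 30, 55, 80)

# Four rules for two inputs (the full model: 4 inputs -> 16 rules).
# Consequents are singleton stay lengths in days (assumed values).
rules = [
    (min(so2_low,  pm10_low),  3.0),   # both low  -> short stay
    (min(so2_low,  pm10_high), 5.0),
    (min(so2_high, pm10_low),  5.0),
    (min(so2_high, pm10_high), 8.0),   # both high -> long stay
]

# Weighted-average defuzzification over the fired rules.
w = sum(fire for fire, _ in rules)
stay = sum(fire * out for fire, out in rules) / w if w else float("nan")
print(f"predicted mean hospitalization: {stay:.1f} days")
```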
Abstract:
Wind power is a rapidly developing, low-emission form of energy production. In Finland, the official objective is to increase wind power capacity from the current 1,005 MW up to 3,500–4,000 MW by 2025. By the end of April 2015, the total capacity of all wind power projects being planned in Finland had surpassed 11,000 MW. As the number of projects in Finland is at a record high, an increasing amount of infrastructure is also being planned and constructed. Traditionally, these planning operations are conducted using manual and labor-intensive work methods that are prone to subjectivity. This study introduces a GIS-based methodology for determining optimal paths to support the planning of onshore wind park infrastructure alignment in Nordanå-Lövböle wind park, located on the island of Kemiönsaari in Southwest Finland. The presented methodology utilizes a least-cost path (LCP) algorithm to search for optimal paths within a high-resolution real-world terrain dataset derived from airborne lidar scanning. In addition, planning data is used to provide a realistic planning framework for the analysis. In order to produce realistic results, the physiographic and planning datasets are standardized and weighted according to qualitative suitability assessments, utilizing methods and practices offered by multi-criteria evaluation (MCE). The results are presented as scenarios corresponding to various planning objectives. Finally, the methodology is documented using tools of Business Process Management (BPM). The results show that the presented methodology can be effectively used to search for and identify extensive, 20 to 35 kilometer long networks of paths that correspond to certain optimization objectives in the study area. The utilization of high-resolution terrain data produces a more objective and more detailed path alignment plan. This study demonstrates that the presented methodology can be practically applied to support a wind power infrastructure alignment planning process. The six-phase structure of the methodology allows straightforward incorporation of different optimization objectives. The methodology responds well to combining quantitative and qualitative data. Additionally, the careful documentation presents an example of how the methodology can be evaluated and developed as a business process. This thesis also shows that more emphasis on the research of algorithm-based, more objective methods for the planning of infrastructure alignment is desirable, as technological development has only recently started to realize the potential of these computational methods.
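A minimal sketch of the least-cost-path step on a weighted cost surface is shown below, using scikit-image's route_through_array; the raster layers, MCE weights, and endpoints are invented stand-ins for the thesis's standardized lidar and planning data.

```python
import numpy as np
from skimage import graph

# Illustrative cost surface: standardized, MCE-weighted raster layers.
# Weights and layer names are assumptions, not the thesis's scheme.
slope      = np.random.rand(200, 200)        # e.g. from a lidar-derived DEM
land_cover = np.random.rand(200, 200)        # reclassified suitability cost
planning   = np.random.rand(200, 200)        # planning-restriction cost

cost = 0.5 * slope + 0.3 * land_cover + 0.2 * planning
cost[cost <= 0] = 1e-6                       # costs must stay positive

start, end = (10, 10), (190, 185)            # turbine site -> grid connection
path, total_cost = graph.route_through_array(
    cost, start, end,
    fully_connected=True,                    # allow diagonal moves
    geometric=True,                          # scale diagonal steps by sqrt(2)
)
print(f"path of {len(path)} cells, accumulated cost {total_cost:.1f}")
```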
Abstract:
The Dudding group is interested in the application of Density Functional Theory (DFT) to developing asymmetric methodologies, and the focus of this dissertation is thus on the integration of these approaches. Several interrelated subsets of computer-aided design and implementation in catalysis have been addressed during the course of these studies. The first of the aims rested upon the advancement of methodologies for the synthesis of biologically active C(1)-chiral 3-methylene-indan-1-ols, which in practice led to the use of a sequential asymmetric Yamamoto-Sakurai-Hosomi allylation/Mizoroki-Heck reaction sequence. An important aspect of this work was the utilization of ortho-substituted arylaldehyde reagents, which are known to be a problematic class of substrates for existing asymmetric allylation approaches. The second phase of my research program led to the further development of asymmetric allylation methods using o-arylaldehyde substrates for the synthesis of chiral C(3)-substituted phthalides. Apart from the de novo design of these chemistries in silico, which notably utilized water-tolerant, inexpensive, and relatively environmentally benign indium metal, this work represented the first computational study of a stereoselective indium-mediated process. Following from these discoveries was the advent of a related, yet catalytic, Ag(I)-catalyzed approach for preparing C(3)-substituted phthalides that, from a practical standpoint, was complementary in many ways. Not only did this new methodology build upon my earlier work with the integrated (experimental/computational) use of Ag(I)-catalyzed asymmetric methods in synthesis, it provided fundamental insight, arrived at through DFT calculations, regarding the Yamamoto-Sakurai-Hosomi allylation. The development of ligands for unprecedented asymmetric Lewis base catalysis, especially asymmetric allylations using silver and indium metals, followed as a natural extension of these earlier discoveries. To this end, forthcoming as well was the advancement of a family of disubstituted (N-cyclopropenium guanidine/N-imidazoliumyl substituted cyclopropenylimine) nitrogen adducts that provided fundamental insight into chemical bonding and offered an unprecedented class of phase transfer catalysts (PTC) with far-reaching potential. Salient features of these disubstituted nitrogen species are the unprecedented finding of a cyclopropenium-based C-H•••π(aryl) interaction and the presence of a highly dissociated anion, which projected them to serve as catalysts for promoting fluorination reactions. Attracted by the timely development of these disubstituted nitrogen adducts, my final studies as a PhD scholar addressed the utility of one of the synthesized adducts as a valuable catalyst for the benzylation of the Schiff base N-(diphenylmethylene)glycine ethyl ester. Additionally, the catalyst was applied to benzylic fluorination; emerging from this exploration was the successful fluorination of benzyl bromide and its derivatives in high yields. A notable feature of this protocol is the column-free purification of the product and recovery of the catalyst for use in further reaction sequences.
Abstract:
Wind energy has emerged as a major sustainable source of energy. The efficiency of wind power generation by windmills has improved considerably during the last three decades, yet there is still scope for maximising the conversion of wind energy into mechanical energy. In this context, wind turbine rotor dynamics has great significance. The present work aims at a comprehensive study of Horizontal Axis Wind Turbine (HAWT) aerodynamics by numerically solving the fluid dynamic equations with a finite-volume Navier-Stokes CFD solver. As a more general goal, the study aims at demonstrating the capabilities of modern numerical techniques for the complex fluid dynamic problems of HAWTs. The main purpose is hence to better understand the physics of power extraction by wind turbines. This research demonstrates the potential of an incompressible Navier-Stokes CFD method for the aerodynamic power performance analysis of a horizontal axis wind turbine. The National Renewable Energy Laboratory, USA (NREL Technical Report NREL/CP-500-28589) had carried out experimental work aimed at real-time performance prediction of a horizontal axis wind turbine. In addition to a comparison between the results reported by NREL and the CFD simulations, comparisons are made for the local flow angle at several stations ahead of the wind turbine blades. The comparison has shown that fairly good predictions can be made for pressure distribution and torque. Subsequently, the wind-field effects on the blade aerodynamics, as well as the blade/tower interaction, were investigated. The selected case corresponded to a 12.5 m/s upwind HAWT at zero degrees of yaw and a rotational speed of 25 rpm. The results obtained suggest that the present method can cope well with the flows encountered around wind turbines. The aerodynamic performance of the turbine and the flow details near and off the turbine blades and tower can be analysed using these results. The aerodynamic performance of airfoils differs from one another; it depends mainly on the coefficient of performance, coefficient of lift, coefficient of drag, fluid velocity and angle of attack. This study shows that the velocity is not constant for all angles of attack of different airfoils. The performance parameters are calculated analytically and compared with standardized performance tests. For different angles of attack, the stall velocity is determined for the better performance of the system with respect to velocity. The research also addresses the effect of the surface roughness factor on the blade surface at various sections. The numerical results were found to be in agreement with the experimental data. A relative advantage of the theoretical aerofoil design method is that it allows many different concepts to be explored economically; such efforts are generally impractical in wind tunnels because of time and money constraints. Thus, the need for a theoretical aerofoil design method is threefold: first, for the design of aerofoils that fall outside the range of applicability of existing catalogs; second, for the design of aerofoils that more exactly match the requirements of the intended application; and third, for the economic exploration of many aerofoil concepts. From the results obtained for the different aerofoils, the velocity is not constant for all angles of attack, and the results depend mainly on angle of attack and velocity. The vortex generator technique was meticulously studied, with the formulation of the specification for right-angle-shaped vortex generators (VGs). The results were validated in accordance with the primary analysis phase and were found to be in good agreement with the power curve. The introduction of correctly sized VGs at appropriate locations over the blades of the selected HAWT was found to increase the power generation by about 4%.
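For reference, the basic rotor performance quantities mentioned above follow directly from textbook definitions. The sketch below computes the power coefficient and tip-speed ratio; the rotor radius, air density, and extracted power are assumed values, not the NREL test data.

```python
import math

def power_coefficient(power_w, rho, rotor_radius_m, wind_speed_ms):
    """Cp = P / (0.5 * rho * A * V^3), the fraction of wind power extracted."""
    area = math.pi * rotor_radius_m**2
    return power_w / (0.5 * rho * area * wind_speed_ms**3)

def tip_speed_ratio(rpm, rotor_radius_m, wind_speed_ms):
    """lambda = omega * R / V, blade tip speed over wind speed."""
    omega = rpm * 2 * math.pi / 60.0
    return omega * rotor_radius_m / wind_speed_ms

# Wind speed and rpm from the case quoted above; radius, density and
# power are illustrative assumptions.
V, rpm, R, rho = 12.5, 25.0, 5.0, 1.225
P = 9.0e3                                   # assumed extracted power in watts
print(f"Cp = {power_coefficient(P, rho, R, V):.3f}, "
      f"lambda = {tip_speed_ratio(rpm, R, V):.2f}")
```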
Abstract:
Embedded systems are usually designed for a single or a specified set of tasks. This specificity means the system design as well as its hardware/software development can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints and rapid development. This necessitates the adoption of static machine-code analysis tools running on a host machine for the validation and optimization of embedded system codes, which can help meet all of these goals. This could significantly augment software quality and is still a challenging field. This dissertation contributes an architecture-oriented code validation, error localization and optimization technique that assists the embedded system designer in software debugging, making early detection of software bugs that are otherwise hard to detect more effective through static analysis of machine codes. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, and thus improve both the debugging process and the quality of the code. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate/out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae and their compliance is tested individually in all possible execution paths of the application programs.
An incorrect sequence of machine code patterns is identified using slicing techniques on the control flow graph generated from the machine code. An algorithm is proposed to assist the compiler in eliminating redundant bank-switching codes and deciding on optimum data allocation to banked memory, resulting in a minimum number of bank-switching codes in embedded system software. A relation matrix and a state transition diagram formed for the active memory bank state transitions corresponding to each bank selection instruction are used for the detection of redundant codes. Instances of code redundancy based on the stipulated rules for the target processor are identified. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of compiler/assembler and applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly with machine code patterns, which drastically reduces the state-space creation, contributing to improved state-of-the-art model checking. Though the technique described is general, the implementation is architecture oriented, and hence the feasibility study was conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards correct use of difficult microcontroller features in developing embedded systems.
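The redundant bank-switching elimination can be sketched in a simplified form. The snippet below tracks the active-bank state along one straight-line execution path and drops bank selects that re-select the current bank; the (mnemonic, operand) instruction encoding and the single banksel pseudo-instruction are illustrative, not the dissertation's relation-matrix formulation.

```python
def eliminate_redundant_banksel(instructions):
    """Drop bank-select instructions that re-select the active bank.

    `instructions` is a list of (mnemonic, operand) tuples for one
    straight-line execution path; 'banksel' switches the active bank.
    """
    active_bank = None          # unknown bank state on path entry
    optimized = []
    for mnem, operand in instructions:
        if mnem == "banksel":
            if operand == active_bank:
                continue        # redundant: this bank is already active
            active_bank = operand
        optimized.append((mnem, operand))
    return optimized

path = [
    ("banksel", 1), ("movwf", "TRISB"),
    ("banksel", 1), ("movwf", "TRISA"),   # redundant re-select of bank 1
    ("banksel", 0), ("movwf", "PORTB"),
]
print(eliminate_redundant_banksel(path))
# -> the second ('banksel', 1) is removed
```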
Abstract:
This thesis summarizes the results of studies on a syntax-based approach to translation between Malayalam, one of the Dravidian languages, and English, and the development of the major modules in building a prototype machine translation system from Malayalam to English. The development of the system is a pioneering effort for the Malayalam language, unattempted by previous researchers, and the computational models chosen for the system are the first of their kind for Malayalam. An in-depth study has been carried out in the design of the computational models and data structures needed for the different modules: a morphological analyzer, a parser, a syntactic structure transfer module and a target language sentence generator required for the prototype system. The generation of the list of part-of-speech tags, chunk tags and the hierarchical dependencies among the chunks required for the translation process has also been done. In the development process, the major goals are (a) accuracy of translation, (b) speed and (c) space. Accuracy-wise, smart tools for handling the transfer grammar and translation standards, including equivalent words, expressions, phrases and styles in the target language, are to be developed; the grammar should be optimized with a view to obtaining a single correct parse and hence a single translated output. Speed-wise, innovative use of corpus analysis, an efficient parsing algorithm, the design of efficient data structures and run-time frequency-based rearrangement of the grammar, which substantially reduces parsing and generation time, are required. The space requirement also has to be minimised.
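The run-time frequency-based rearrangement of the grammar mentioned above can be sketched as follows, assuming the parser tries a nonterminal's productions in list order; the rule format and the toy NP grammar are illustrative, not the thesis's Malayalam grammar.

```python
from collections import defaultdict

class FrequencyOrderedGrammar:
    """Keep alternative productions sorted by how often they succeed,
    so frequent rules are tried first and average parse time drops."""

    def __init__(self, rules):
        # rules: {"NP": [("Det", "N"), ("N",), ...], ...}
        self.rules = {lhs: list(alts) for lhs, alts in rules.items()}
        self.hits = defaultdict(int)

    def alternatives(self, lhs):
        """Productions for `lhs`, most frequently successful first."""
        return sorted(self.rules[lhs],
                      key=lambda alt: -self.hits[(lhs, alt)])

    def record_success(self, lhs, alt):
        """Call when a production succeeded in a completed parse."""
        self.hits[(lhs, alt)] += 1

g = FrequencyOrderedGrammar({"NP": [("Det", "N"), ("N",), ("Det", "Adj", "N")]})
g.record_success("NP", ("N",))
g.record_success("NP", ("N",))
print(g.alternatives("NP"))   # ('N',) is now tried first
```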
Abstract:
Software systems are progressively being deployed in many facets of human life, and the failure of such systems has an assorted impact on their customers. The fundamental aspect that supports a software system is a focus on quality. Reliability describes the ability of a system to function under a specified environment for a specified period of time, and is used to objectively measure quality. Evaluation of the reliability of a computing system involves computation of both hardware and software reliability. Most earlier works focused on software reliability with no consideration for hardware parts, or vice versa. However, a complete estimation of the reliability of a computing system requires these two elements to be considered together, and thus demands a combined approach. The present work focuses on this and presents a model for evaluating the reliability of a computing system. The method involves identifying the failure data for hardware components and software components, and building a model based on it to predict reliability. To develop such a model, focus is given to systems based on Open Source Software, since there is an increasing trend towards its use and only a few studies have been reported on the modeling and measurement of the reliability of such products. The present work includes a thorough study of the role of Free and Open Source Software, an evaluation of reliability growth models, and presents an integrated model for the prediction of the reliability of a computational system. The developed model has been compared with existing models and its usefulness is discussed.
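Since the abstract does not spell out the combined model, the sketch below shows one conventional way hardware and software reliability are composed: an exponential hardware model and a Goel-Okumoto software reliability growth model multiplied under an independence (series-system) assumption. All parameters are invented for illustration.

```python
import math

def hw_reliability(t, failure_rate):
    """Exponential hardware reliability: R(t) = exp(-lambda * t)."""
    return math.exp(-failure_rate * t)

def sw_reliability(t, mission, a, b):
    """Goel-Okumoto NHPP software reliability for a mission of length
    `mission` starting at time t, with m(t) = a * (1 - exp(-b t))."""
    m = lambda x: a * (1.0 - math.exp(-b * x))
    return math.exp(-(m(t + mission) - m(t)))

def system_reliability(t, mission, hw_rate, a, b):
    """Combined estimate, assuming hardware and software fail
    independently and the system needs both (series structure)."""
    return hw_reliability(mission, hw_rate) * sw_reliability(t, mission, a, b)

# Illustrative parameters (assumed, not fitted to real failure data):
# hw_rate failures/hour, a total expected faults, b fault-detection rate.
print(system_reliability(t=500.0, mission=24.0,
                         hw_rate=1e-4, a=120.0, b=0.005))
```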
Abstract:
We use a microscopic theory to describe the dynamics of the valence electrons in divalent-metal clusters. The theory is based on a many-body model Hamiltonian H which takes into account, on the same electronic level, van der Waals and covalent bonding. In order to study the ground-state properties of H we have developed an extended slave-boson method. We have studied the bonding character and the degree of electronic delocalization in Hg_n clusters as a function of cluster size. Results show that, for increasing cluster size, an abrupt change occurs in the bond character from van der Waals to covalent bonding at a critical cluster size n_c ~ 10-20. This change also involves a transition from localized to delocalized valence electrons, as a consequence of the competition between the two bonding mechanisms.
Abstract:
We address the problem of jointly determining shipment planning and scheduling decisions in the presence of multiple shipment modes. We consider a long lead time, less expensive sea shipment mode and a short lead time but more expensive air shipment mode. Existing research on multiple shipment modes largely addresses short-term scheduling decisions only. Motivated by an industrial problem where planning decisions are made independently of scheduling decisions, we investigate the benefits of integrating the two sets of decisions. We develop a sequence of mathematical models to address the planning and scheduling decisions. Preliminary computational results indicate improved performance of the integrated approach over some of the existing policies used in real-life situations.
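A minimal sketch of the kind of two-mode planning model described above, written as a small linear program in PuLP; the demand profile, lead times, and costs are invented, and the paper's actual sequence of models is not reproduced.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

# Toy single-product plan; all data are invented for illustration.
T = range(6)                        # planning periods (weeks)
demand = [40, 55, 30, 70, 45, 60]   # units due in each period
init_inv = 100                      # opening inventory covering early demand
sea_lead, air_lead = 3, 1           # lead times in periods
sea_cost, air_cost, hold_cost = 2.0, 9.0, 0.5

prob = LpProblem("shipment_planning", LpMinimize)
sea = LpVariable.dicts("sea", T, lowBound=0)   # dispatched by sea in t
air = LpVariable.dicts("air", T, lowBound=0)   # dispatched by air in t
inv = LpVariable.dicts("inv", T, lowBound=0)   # end-of-period inventory

# Minimize shipping plus holding cost.
prob += lpSum(sea_cost * sea[t] + air_cost * air[t] + hold_cost * inv[t]
              for t in T)

# Inventory balance: arrivals are shipments dispatched lead-time ago.
for t in T:
    arrivals = (sea[t - sea_lead] if t >= sea_lead else 0) \
             + (air[t - air_lead] if t >= air_lead else 0)
    prev = inv[t - 1] if t > 0 else init_inv
    prob += prev + arrivals == demand[t] + inv[t]

prob.solve()
print([(t, value(sea[t]), value(air[t])) for t in T])
```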
Abstract:
The use of electronic documents is constantly growing, and the necessity to implement an ad hoc eCertificate which manages access to private information is not only desirable but necessary. This paper presents a protocol for the management of electronic identities (eIDs), meant as a substitute for paper-based IDs, in a mobile environment with a user-centric approach. Mobile devices have been chosen because they provide mobility, personal use and high computational capability. The inherent user-centricity also allows the user to personally manage the ID information and to disclose only what is required. The chosen path to develop the protocol is to migrate the existing eCert technologies implemented by the Learning Societies Laboratory in Southampton. By comparing this protocol with the analysis of the eID problem domain, a new solution has been derived which is compatible with both systems without loss of features.
Abstract:
In this session we look at how to think systematically about a problem and create a solution. We look at the definition and characteristics of an algorithm, and see how, through modularisation and decomposition, we can choose a set of methods to create. We also compare this somewhat procedural approach with the way that design works in Object Oriented Systems.
Abstract:
In this paper, investment cost asymmetry is introduced in order to test whether this kind of asymmetry can account for asymmetries in business cycles. Using a smooth transition function, asymmetric investment costs are modeled and introduced into a canonical RBC model. Simulations of the model with the Perturbation Method (PM) are very close to simulations through the Parameterized Expectations Algorithm (PEA), which allows the use of the former for the sake of reduced time and computational cost. Both symmetric and asymmetric models were simulated and compared. Deterministic and stochastic impulse-response exercises revealed that it is possible to adequately reproduce asymmetric business cycles by modeling asymmetric investment costs. Simulations also showed that higher-order moments are insufficient to detect asymmetries; instead, methods such as Generalized Impulse Response Analysis (GIRA) and nonlinear econometrics prove to be more efficient diagnostic tools.
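The smooth-transition device for asymmetric investment costs can be sketched directly. Below, a logistic transition blends a high penalty on downward investment deviations with a lower penalty on upward ones; the functional form and parameter values are illustrative assumptions, as the abstract does not give the paper's exact specification.

```python
import numpy as np

def smooth_transition(x, gamma=5.0, c=0.0):
    """Logistic transition G(x) in [0, 1]: G -> 0 for x << c (disinvestment),
    G -> 1 for x >> c (investment); gamma controls the smoothness."""
    return 1.0 / (1.0 + np.exp(-gamma * (x - c)))

def adjustment_cost(inv_dev, phi_neg=4.0, phi_pos=1.0):
    """Asymmetric quadratic cost on investment deviations: downward
    adjustments (inv_dev < 0) are penalized more than upward ones."""
    g = smooth_transition(inv_dev)
    phi = phi_neg * (1.0 - g) + phi_pos * g    # blend the two regimes
    return 0.5 * phi * inv_dev**2

dev = np.linspace(-0.1, 0.1, 5)                # deviations from steady state
print(dict(zip(np.round(dev, 2), np.round(adjustment_cost(dev), 5))))
```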