968 results for Fourth-order methods
Abstract:
Virtual Screening (VS) methods can considerably aid clinical research by predicting how ligands interact with drug targets. However, the accuracy of most VS methods is constrained by limitations in the scoring function that describes biomolecular interactions, and even today these uncertainties are not completely understood. In order to improve the accuracy of the scoring functions used in most VS methods, we propose a novel hybrid approach in which neural network (NNET) and support vector machine (SVM) models are trained on databases of known active (drugs) and inactive compounds; this information is then exploited to improve VS predictions.
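As a rough illustration of the hybrid idea (the thesis's descriptors, model architectures and fusion rule are not given above; the averaged-probability rescoring below is an assumption), a minimal scikit-learn sketch might look like:

```python
# Minimal sketch: train an NNET and an SVM on actives/inactives and
# blend their probabilities to re-rank VS candidates (assumed fusion rule).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# X: molecular descriptors; y: 1 = known active (drug), 0 = inactive
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))          # placeholder descriptor matrix
y = rng.integers(0, 2, size=200)        # placeholder activity labels

nnet = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)
svm = SVC(probability=True).fit(X, y)

def rescore(candidates):
    """Average both models' activity probabilities to re-rank VS hits."""
    return 0.5 * (nnet.predict_proba(candidates)[:, 1]
                  + svm.predict_proba(candidates)[:, 1])

print(rescore(X[:5]))  # higher score = more likely active under the hybrid
```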
Abstract:
This thesis analyses the performance and efficiency of companies and identifies the key factors that may explain them. A comprehensive analysis based on a set of economic and financial ratios was used as an instrument that provides information on enterprise performance and efficiency. A sample of 15 enterprises was selected: 7 Portuguese and 8 Ukrainian, belonging to several industries. Financial and non-financial data were collected for the six-year period from 2009 to 2014. The research questions that guided this work were: Are the enterprises efficient/profitable? What factors influence enterprises' efficiency/performance? Is there any difference between Ukrainian and Portuguese enterprises' efficiency/performance, and which factors have more influence? Which industrial sector is represented by the most efficient/profitable enterprises? The main results showed that, on average, the enterprises were efficient; comparing by country, Ukrainian enterprises are more efficient, and the industries have similar levels of efficiency. Among the factors that influence the asset turnover ratio (ATR) positively are the fixed and current asset turnover ratios and ROA; the EBITDA margin and the liquidity ratio influence it negatively. There is no significant difference between the models by country. Concerning profitability, the enterprises have a low performance level, but comparing countries, Ukrainian enterprises have better profitability on average. Regarding industry sector, the paper industry is the most profitable. Among the factors influencing ROA are the profit margin, fixed asset turnover ratio, EBITDA margin, debt-to-equity ratio and the country. In the case of profitability, the two countries have different models. Ukrainian enterprises are advised to pay attention to short-term debt to total debt, ROA and the interest coverage ratio in order to become more efficient, and to the profit margin and EBITDA margin to improve their performance. For Portuguese enterprises, monitoring and improving the fixed asset turnover ratio, current asset turnover ratio, short-term financial debt to total debt, leverage ratio and EBITDA margin is suggested for improving efficiency; for higher profitability, tracking the fixed asset turnover ratio, current asset turnover ratio, debt-to-equity ratio, profit margin and interest coverage ratio is suggested.
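For reference, the headline ratios discussed above are conventionally defined as follows (textbook definitions; the thesis may use variants):

```latex
\mathrm{ATR} = \frac{\text{Sales}}{\text{Total assets}}, \qquad
\mathrm{ROA} = \frac{\text{Net income}}{\text{Total assets}}, \qquad
\text{EBITDA margin} = \frac{\text{EBITDA}}{\text{Sales}}
```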
Abstract:
Scale ca. 1:875,000.
Abstract:
Since the emergence of software engineering in the late 1960s as a response to the software crisis, researchers throughout the world have been trying to give theoretical support to this discipline. Several points of view have to be reviewed in order to complete this task. In the mid-1980s Frederick Brooks Jr. popularized the term "silver bullet", arguing that no single development would solve the essential problems of software engineering, and hence we adopted this metaphor as a symbol for this book. Methods, modeling, and teaching are the perspectives reviewed in this book. Work related to these topics is presented by software engineering researchers, led by Ivar Jacobson, one of the most remarkable researchers in this area. We hope our work will contribute to providing the theoretical support that software engineering needs.
Abstract:
We explore the recently developed snapshot-based dynamic mode decomposition (DMD) technique, a matrix-free Arnoldi-type method, to predict 3D linear global flow instabilities. We apply the DMD technique to flows confined in an L-shaped cavity and compare the resulting modes to their counterparts obtained from classic, matrix-forming linear instability analysis (i.e. the BiGlobal approach) and direct numerical simulations. Results show that the DMD technique, which uses snapshots generated by a 3D non-linear incompressible discontinuous Galerkin Navier–Stokes solver, provides results very similar to those of classical linear instability analysis techniques. In addition, we compare DMD results obtained from non-linear and linearised Navier–Stokes solvers, showing that linearisation is not necessary (i.e. no base flow is required) to obtain linear modes, as long as the analysis is restricted to the exponential growth regime, that is, the flow regime governed by the linearised Navier–Stokes equations; this shows the potential of snapshot-based analysis for general-purpose CFD codes, without need of modifications. Finally, this work shows that the DMD technique can provide three-dimensional direct and adjoint modes through snapshots provided by the linearised and adjoint linearised Navier–Stokes equations advanced in time. Subsequently, these modes are used to provide structural sensitivity maps and sensitivity to base-flow modification for 3D flows and complex geometries, at an affordable computational cost. The information provided by the sensitivity study is used to modify the L-shaped geometry and control the most unstable 3D mode.
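For readers unfamiliar with the snapshot-based procedure, a minimal sketch of exact DMD (in the standard formulation of Tu et al.; the paper's solvers and snapshot generation are not reproduced here) is:

```python
import numpy as np

def dmd(snapshots, r=None):
    """Exact DMD: approximate the linear operator A with X2 ~ A X1,
    where the columns of `snapshots` are flow fields at successive times."""
    X1, X2 = snapshots[:, :-1], snapshots[:, 1:]   # paired snapshot matrices
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    if r is not None:                              # optional rank truncation
        U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    Sinv = np.diag(1.0 / s)
    Atilde = U.conj().T @ X2 @ Vh.conj().T @ Sinv  # reduced operator
    mu, W = np.linalg.eig(Atilde)                  # Ritz values mu_j
    Phi = X2 @ Vh.conj().T @ Sinv @ W              # exact DMD modes
    return mu, Phi

# growth rates/frequencies follow from lambda_j = log(mu_j) / dt
```

Ritz values outside the unit circle correspond to exponentially growing modes, which is how DMD recovers the unstable global modes otherwise found by BiGlobal analysis.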
Abstract:
Conjugated organic semiconductors have been subjected to various electrical measurement techniques in order to reveal information about shallow levels and deep traps in the forbidden gap. The materials were poly[2-methoxy-5-(2'-ethylhexyloxy)-p-phenylenevinylene] (MEH-PPV), poly(3-methylthiophene) (PMeT), and alpha-sexithienyl (alpha-T6), and the techniques employed were I-V, C-V, admittance spectroscopy, TSC, and capacitance and current transients.
Abstract:
Dissertation for the Integrated Master's degree in Veterinary Medicine
Abstract:
Traditionally, the densities of newly built roadways are checked by direct sampling (cores) or by nuclear density gauge measurements. For roadway engineers, the density of asphalt pavement surfaces is essential to determine pavement quality. Unfortunately, field measurement of density by direct sampling or by nuclear gauge is a slow process. Therefore, I have explored the use of rapidly deployed ground penetrating radar (GPR) as an alternative means of determining pavement quality. The dielectric constant of the pavement surface is a parameter that may correlate with pavement density, can be used as a proxy when the density of the asphalt is not known from nuclear or destructive methods, and can be determined using GPR. In order to use GPR for evaluation of road surface quality, the relationship between the dielectric constants of asphalts and their densities must be established. GPR field measurements were taken at four highway sites in Houghton and Keweenaw Counties, Michigan, where density values were also obtained using nuclear methods in the field. Laboratory studies involved asphalt samples taken from the field sites and samples created in the laboratory. These were tested in various ways, including density, thickness, and time domain reflectometry (TDR). In the field, GPR data were acquired using a 1000 MHz air-launched unit and a ground-coupled unit at 200 and 500 MHz. The equipment was owned and operated by the Michigan Department of Transportation (MDOT) and available for this study for a total of four days during summer 2005 and spring 2006. The analysis of the reflected waveforms included "routine" processing for velocity using commercial software and direct evaluation of reflection coefficients to determine a dielectric constant. The dielectric constants computed from velocities do not agree well with those obtained from reflection coefficients. Perhaps due to the limited range of asphalt types studied, no correlation between density and dielectric constant was evident. Laboratory measurements were taken on samples removed from the field and samples created for this study. Samples from the field were studied using TDR in order to obtain the dielectric constant directly, and these values correlated well with the estimates made from reflection coefficients. Samples created in the laboratory were measured using 1000 MHz air-launched GPR and 400 MHz ground-coupled GPR, each under both wet and dry conditions. On the basis of these observations, I conclude that the dielectric constant of asphalt can be reliably measured from waveform amplitude analysis of GPR data, based on the consistent agreement with values obtained in the laboratory using TDR. Because of the uniformity of the asphalts studied here, any correlation between dielectric constant and density is not yet apparent.
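For context, the standard amplitude-analysis relation for an air-launched antenna (a textbook result, not taken from the thesis) is:

```latex
R = \frac{A_{\mathrm{asphalt}}}{A_{\mathrm{plate}}},
\qquad
\varepsilon_r = \left(\frac{1 + R}{1 - R}\right)^{2}
```

where A_asphalt is the amplitude of the surface reflection, A_plate is the amplitude reflected from a metal plate (an effectively perfect reflector) at the same antenna height, and ε_r is the dielectric constant of the asphalt.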
Abstract:
The increase in resolution of numerical weather prediction models has allowed more and more realistic forecasts of atmospheric parameters. Due to the growing variability of the predicted fields, traditional verification methods are not always able to describe model skill, because they are based on grid-point-by-grid-point matching between observation and prediction. Recently, new spatial verification methods have been developed with the aim of showing the benefit associated with high-resolution forecasts. Within the framework of the MesoVICT international project, the initial aim of this work is to compare these new techniques, highlighting their advantages and disadvantages. First of all, the MesoVICT basic examples, represented by synthetic precipitation fields, were examined. Providing an error evaluation in terms of structure, amplitude and location of the precipitation fields, the SAL method was studied more thoroughly than the other approaches, through its implementation on the core cases of the project. The verification procedure concerned precipitation fields over central Europe: comparisons were made between the forecasts produced by the 00z COSMO-2 model and the VERA (Vienna Enhanced Resolution Analysis) analysis. The study of these cases revealed some weaknesses of the methodology; in particular, it highlighted a correlation between the optimal domain size and the extent of the precipitation systems. In order to increase the ability of SAL, the original domain was subdivided into three subdomains and the method was applied again. Some limits were found in cases in which at least one of the two domains shows no precipitation. The overall results for the subdomains were summarized in scatter plots. With the aim of identifying systematic errors of the model, the variability of the three parameters was studied for each subdomain.
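For reference, the amplitude and location components of SAL as defined in the literature (Wernli et al., 2008) are shown below; the structure component S is built analogously from the scaled precipitation volumes of the identified objects:

```latex
A = \frac{D(R_{\mathrm{mod}}) - D(R_{\mathrm{obs}})}
         {0.5\,\bigl[D(R_{\mathrm{mod}}) + D(R_{\mathrm{obs}})\bigr]},
\qquad
L_1 = \frac{\lvert \mathbf{x}(R_{\mathrm{mod}}) - \mathbf{x}(R_{\mathrm{obs}}) \rvert}{d}
```

where D(R) is the domain-averaged precipitation, x(R) the centre of mass of the precipitation field, and d the largest distance between two boundary points of the domain; A ranges over [-2, 2] and L = L1 + L2 over [0, 2].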
Abstract:
In this work we focus on pattern recognition methods for EMG upper-limb prosthetic control. After giving a detailed review of the most widely used classification methods, we propose a new classification approach. It results from a comparison of the Fourier analysis between able-bodied and trans-radial amputee subjects. We thus suggest a different classification method that considers each surface electrode's contribution separately, together with five time domain features, obtaining an average classification accuracy equal to 75% on a sample of trans-radial amputees. To improve the method and its robustness, we propose an automatic feature selection procedure formulated as a minimization problem.
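The abstract does not list its five time-domain features; a commonly used Hudgins-style set (an assumption) can be computed per electrode as in this sketch:

```python
import numpy as np

def td_features(x, thr=1e-6):
    """Five common time-domain EMG features for one electrode channel."""
    mav = np.mean(np.abs(x))                            # mean absolute value
    wl = np.sum(np.abs(np.diff(x)))                     # waveform length
    zc = np.sum((x[:-1] * x[1:] < 0) &                  # zero crossings
                (np.abs(x[:-1] - x[1:]) > thr))
    d = np.diff(x)
    ssc = np.sum((d[:-1] * d[1:] < 0) &                 # slope sign changes
                 (np.maximum(np.abs(d[:-1]), np.abs(d[1:])) > thr))
    rms = np.sqrt(np.mean(x ** 2))                      # root mean square
    return np.array([mav, wl, zc, ssc, rms])

# treating each electrode separately: per-channel features are concatenated
emg = np.random.randn(8, 256)                           # 8 electrodes, one window
feature_vector = np.concatenate([td_features(ch) for ch in emg])
```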
Abstract:
Concurrent software executes multiple threads or processes to achieve high performance. However, concurrency results in a huge number of different system behaviors that are difficult to test and verify. The aim of this dissertation is to develop new methods and tools for modeling and analyzing concurrent software systems at the design and code levels. This dissertation consists of several related results. First, a formal model of Mondex, an electronic purse system, is built from user requirements using Petri nets and formally verified using model checking. Second, Petri net models are automatically mined from the event traces generated by scientific workflows. Third, partial order models are automatically extracted from instrumented concurrent program executions, and potential atomicity violation bugs are automatically verified based on the partial order models using model checking. Our formal specification and verification of Mondex have contributed to the worldwide effort to develop a verified software repository. Our method to mine Petri net models automatically from provenance offers a new approach to building scientific workflows. Our dynamic prediction tool, named McPatom, can predict several known bugs in real-world systems, including one that evades several other existing tools. McPatom is efficient and scalable, as it takes advantage of the nature of atomicity violations and considers only a pair of threads and accesses to a single shared variable at a time. However, predictive tools need to consider the trade-offs between precision and coverage. Based on McPatom, this dissertation presents two methods for improving the coverage and precision of atomicity violation predictions: 1) a post-prediction analysis method to increase coverage while ensuring precision; 2) a follow-up replaying method to further increase coverage. Both methods are implemented in a completely automatic tool.
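As a hypothetical illustration (not taken from the dissertation) of the bug pattern McPatom targets, consider two threads making a check and an update on a single shared variable that were implicitly assumed to execute atomically:

```python
# Hypothetical atomicity violation: two accesses to one shared variable
# that the programmer assumed would run as an atomic block.
import threading

balance = 100  # the single shared variable

def withdraw(amount):
    global balance
    if balance >= amount:    # access 1: check
        # a second thread can interleave here and also pass the check,
        # making this two-access region unserializable
        balance -= amount    # access 2: act

threads = [threading.Thread(target=withdraw, args=(80,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)  # usually 20, but -60 under the buggy interleaving
```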
Abstract:
An important parameter of an integrated optical device is the propagation loss of the waveguide. Its characterization gives information about fabrication quality as well as about the other passive devices on the chip, since the waveguide is the basic building block of the passive devices. Although many methods have been developed over the last three decades, no single standard exists yet. This paper presents a comparative analysis of long-established methods as well as methods developed very recently, in order to provide a complete picture of the pros and cons of the different types of methods; from this comparison the best method is suggested in the authors' opinion. To support this claim, apart from the analytical comparison, the paper also presents an experimental comparison between the suggested best method, recently proposed by Massachusetts Institute of Technology (MIT) researchers and based on an undercoupled all-pass microring structure, and the popular cut-back method.
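For reference, the cut-back method extracts propagation loss from the output powers of waveguides of different lengths (standard formulation, not specific to this paper):

```latex
\alpha\;[\mathrm{dB/unit\ length}] = \frac{P_{1} - P_{2}}{L_{2} - L_{1}}
```

where P_i is the measured output power (in dB) for a waveguide of length L_i; fitting insertion loss against several lengths separates the fibre-chip coupling loss (the intercept) from the propagation loss (the slope).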
Abstract:
The idea of spacecraft formations, flying in tight configurations with maximum baselines of a few hundred meters in low-Earth orbits, has generated widespread interest over the last several years. Nevertheless, controlling the movement of spacecraft in formation poses difficulties, such as high in-orbit computing demands and collision avoidance requirements, which escalate as the number of units in the formation increases, as complicated nonlinear effects are imposed on the dynamics, and as uncertainty arises from imperfect knowledge of system parameters. These requirements have led to the need for reliable linear and nonlinear controllers in terms of relative and absolute dynamics. The objective of this thesis is, therefore, to introduce new control methods that allow spacecraft in formation, with circular/elliptical reference orbits, to efficiently execute safe autonomous manoeuvres. These controllers are distinguished from the bulk of the literature in that they merge guidance laws never applied before to spacecraft formation flying with collision avoidance capabilities in a single control strategy. For this purpose, three control schemes are presented: linear optimal regulation, linear optimal estimation and adaptive nonlinear control. In general terms, the proposed control approaches command the dynamical performance of one or several followers with respect to a leader to asymptotically track a time-varying nominal trajectory (TVNT), while the threat of collision between the followers is reduced by repelling accelerations obtained from a collision avoidance scheme (CAS) during the periods of closest proximity. Linear optimal regulation is achieved through a Riccati-based tracking controller. Within this control strategy, the controller provides guidance and tracking toward a desired TVNT, optimizing fuel consumption through the Riccati procedure using a finite-horizon cost function defined in terms of the desired TVNT, while repelling accelerations generated by the CAS ensure evasive actions between the elements of the formation. The relative dynamics model, suitable for circular and eccentric low-Earth reference orbits, is based on the Tschauner-Hempel equations, and includes a control input and a nonlinear term corresponding to the CAS repelling accelerations. Linear optimal estimation is built on the forward-in-time separation principle. This controller encompasses two stages: regulation and estimation. The first stage requires the design of a full state feedback controller using the state vector reconstructed by means of the estimator. The second stage requires the design of an additional dynamical system, the estimator, to obtain the states which cannot be measured, in order to approximately reconstruct the full state vector. The separation principle then states that an observer built for a known input can also be used to estimate the state of the system and to generate the control input. This allows the observer and the feedback to be designed independently, by exploiting the advantages of linear quadratic regulator theory, in order to estimate the states of a dynamical system with model and sensor uncertainty. The relative dynamics is described with the linear system used in the previous controller, with a control input and nonlinearities entering via the repelling accelerations from the CAS during collision avoidance events. Moreover, sensor uncertainty is added to the control process by considering carrier-phase differential GPS (CDGPS) velocity measurement error.
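For the circular-orbit case (eccentricity e = 0), the Tschauner-Hempel model reduces to the well-known Hill-Clohessy-Wiltshire equations (a standard result; here u combines the thrust input and the CAS repelling accelerations):

```latex
\ddot{x} - 2n\dot{y} - 3n^{2}x = u_x, \qquad
\ddot{y} + 2n\dot{x} = u_y, \qquad
\ddot{z} + n^{2}z = u_z
```

with x radial, y along-track, z cross-track, and n the mean motion of the reference orbit.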
An adaptive control law capable of delivering superior closed-loop performance when compared to certainty-equivalence (CE) adaptive controllers is finally presented. A novel noncertainty-equivalence controller based on the Immersion and Invariance paradigm for close-manoeuvring spacecraft formation flying in both circular and elliptical low-Earth reference orbits is introduced. The proposed control scheme achieves stabilization by immersing the plant dynamics into a target dynamical system (or manifold) that captures the desired dynamical behaviour. The key feature of this methodology is the addition of a new term to the classical certainty-equivalence control approach that, in conjunction with the parameter update law, is designed to achieve adaptive stabilization. This parameter has the ultimate task of shaping the manifold into which the adaptive system is immersed. The performance of the controller is proven stable via a Lyapunov-based analysis and Barbalat's lemma. In order to evaluate the design of the controllers, test cases based on the physical and orbital features of the Prototype Research Instruments and Space Mission Technology Advancement (PRISMA) mission are implemented, extending the number of elements in the formation into scenarios with reconfigurations and on-orbit position switching in elliptical low-Earth reference orbits. An extensive analysis and comparison of the performance of the controllers in terms of total Δv and fuel consumption, with and without the effects of the CAS, is presented. These results show that the three proposed controllers allow the followers to asymptotically track the desired nominal trajectory and, additionally, the simulations including the CAS show an effective decrease of collision risk during the performance of the manoeuvre.
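As a simplified sketch of Riccati-based regulation on this model (an infinite-horizon LQR with illustrative weights, not the thesis's finite-horizon tracking formulation), one might write:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

n = 0.0011  # assumed mean motion [rad/s], roughly a ~95-minute LEO orbit

# Hill-Clohessy-Wiltshire dynamics, state = [x, y, z, vx, vy, vz]
A = np.array([
    [0,       0, 0,      1,    0,   0],
    [0,       0, 0,      0,    1,   0],
    [0,       0, 0,      0,    0,   1],
    [3*n**2,  0, 0,      0,  2*n,   0],
    [0,       0, 0,   -2*n,    0,   0],
    [0,       0, -n**2,  0,    0,   0],
])
B = np.vstack([np.zeros((3, 3)), np.eye(3)])  # thrust accelerations

Q = np.eye(6)          # state weights (illustrative)
R = 1e6 * np.eye(3)    # heavy control penalty -> fuel-conscious gains
P = solve_continuous_are(A, B, Q, R)          # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)               # u = -K (x - x_ref)
```

A large R penalizes control effort, mimicking the fuel-conscious behaviour of the tracking controller; the thesis's formulation additionally tracks the TVNT with a finite-horizon cost and injects the CAS repelling accelerations as a nonlinear term.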
Abstract:
This thesis proposes the development of a narrative methodology in the British Methodist Church. Such a methodology embraces and communicates both felt experience and critical theological thinking, thus producing and presenting a theology that might have a constructive transformative impact on wider society. In chapter one I explore the ways in which the Church speaks in public, identify some of the challenges it faces, and consider four models of engagement. If the Church is to engage in public discourses then I argue that its words need to be relevant and connect with people’s experiences. To ground the thinking I focus on the context of the British Methodist Church and explore how the Church engages in theological reflection through the lens of its thinking on issues of human sexuality. Chapter two reviews how theological reflection is undertaken in the British Methodist Church. I describe how the Methodist Quadrilateral of Scripture, tradition, reason and experience remains a foundational framework for theological reflection within the Methodist Church and consider the impact of institutional processes and the ways in which the Methodist people actually engage with theological thinking. The third and fourth chapters focus on how the British Methodist Church has produced its theology of human sexuality, giving particular attention to the use of personal and sexual stories in this process. I find that whilst there has been a desire to listen to the stories of the Methodist people, there has not been a corresponding interrogation or analysis of their stories so as to enable robust and constructive theological reflection on these experiences. Using resources from Foucauldian approaches to discourse analysis, I critique key statements and the processes involved in their production, offering an analysis of this body of theological thinking and indicating where possibilities for alternative ways of thinking and acting arise. The proposed methodology draws upon resources from social science methodologies, and in chapter five I look at the use of personal experience and relevant strategies of inquiry that prompt reflection on the hermeneutical process and employ narrative approaches in undertaking, analysing and presenting research. The exploration shows that qualitative research methodologies offer resources and methods of inquiry that could help the Church to engage with personal stories in its theological thinking in a robust, interrogative and imaginative way. In chapter six an examination of story and narrative is undertaken, to show how they have been understood as ways of knowing and how they relate to theological inquiry. Whilst acknowledging some of the limitations of narrative, I indicate how it offers constructive possibilities for theological reflection and could be a means for the British Methodist Church to engage in public discourse. This is explored further in chapter seven, which looks in more detail at how the British Methodist Church has used narrative in its theological thinking, and outlines areas requiring further attention in order for a narrative theological methodology to be developed, namely: attention to the question ‘whose experience?’; investigation of issues of power and the dynamics involved in the process of the production of theological thought; how personal stories and experiences are interrogated and how narrative is constructed; and how narrative might be employed within the Methodist Quadrilateral. 
The final chapter considers the advantages and limitations of such an approach, whether the development of such a method is possible in the Methodist Church today, and its potential for helping the Church to engage in public discourse more effectively. I argue that this methodology can provoke new theological insights and enable new ways of being in the world.