940 results for "generic model"
Abstract:
Some dynamical properties of a particle subject to a generic drag force are obtained for a dissipative Fermi acceleration model. The dissipation is introduced via a viscous drag force, as for motion through a gas, and is assumed to be proportional to a power of the velocity: F ∝ −v^γ. The dynamics is described by a two-dimensional nonlinear area-contracting mapping obtained from the solution of Newton's second law of motion. We prove analytically that the decay of high energy is given by a continued fraction which recovers the following expressions: (i) linear for γ = 1; (ii) exponential for γ = 2; and (iii) of second-degree polynomial type for γ = 1.5. Our results are discussed for both the complete and the simplified versions of the model. The procedure used in the present paper can be extended to many different kinds of systems, including a class of billiard problems.
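The three regimes quoted above can be traced back to the drag law itself. As a minimal sketch (assuming, as is standard for this class of models, that the drag acts during the free flight between collisions, and writing η for the drag coefficient divided by the particle mass, a symbol not used in the abstract), integrating Newton's second law gives

```latex
\frac{dv}{dt} = -\eta\, v^{\gamma}
\;\;\Longrightarrow\;\;
v(t) =
\begin{cases}
v_0\, e^{-\eta t}, & \gamma = 1,\\[4pt]
\dfrac{v_0}{1 + \eta\, v_0\, t}, & \gamma = 2,\\[4pt]
\bigl[v_0^{\,1-\gamma} - (1-\gamma)\,\eta\, t\bigr]^{\frac{1}{1-\gamma}}, & \text{otherwise (e.g. } \gamma = 1.5\text{)},
\end{cases}
```

so each exponent changes the functional form of the velocity loss per flight, consistent with the different energy-decay laws listed in items (i)-(iii).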
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Background In recent years, a number of researchers have investigated how to improve the reuse of crosscutting concerns. New possibilities have emerged with the advent of aspect-oriented programming, and many frameworks were designed considering the abstractions provided by this new paradigm. We call this type of framework a Crosscutting Framework (CF), as it usually encapsulates a generic and abstract design of one crosscutting concern. However, most of the proposed CFs employ white-box strategies in their reuse process, requiring mainly two technical skills: (i) knowing syntax details of the programming language employed to build the framework and (ii) being aware of the architectural details of the CF and its internal nomenclature. Another problem is that the reuse process can only be initiated once the development process reaches the implementation phase, preventing it from starting earlier. Method In order to solve these problems, we present in this paper a model-based approach for reusing CFs which shields application engineers from technical details, letting them concentrate on what the framework really needs from the application under development. To support our approach, two models are proposed: the Reuse Requirements Model (RRM) and the Reuse Model (RM). The former must be used to describe the framework structure and the latter is in charge of supporting the reuse process. As soon as the application engineer has filled in the RM, the reuse code can be automatically generated. Results We also present the results of two comparative experiments using two versions of a Persistence CF: the original one, whose reuse process is based on writing code, and the new one, which is model-based. The first experiment evaluated productivity during the reuse process, and the second one evaluated the effort of maintaining applications developed with both CF versions. The results show an improvement of 97% in productivity; however, little difference was observed in the effort required to maintain the applications. Conclusion By using the approach presented herein, it was possible to conclude the following: (i) it is possible to automate the instantiation of CFs, and (ii) developer productivity improves when a model-based instantiation approach is used.
Abstract:
Doctoral programme: Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería, Instituto Universitario (SIANI)
Abstract:
Among the experimental methods commonly used to characterize the behaviour of a full-scale system, dynamic tests are the most complete and efficient procedures. A dynamic test is an experimental process that defines a set of characteristic parameters of the dynamic behaviour of the system, such as the natural frequencies of the structure, the mode shapes and the associated modal damping values. An assessment of these modal characteristics can be used both to verify the theoretical assumptions of the project and to monitor the performance of the structural system during its operational use. The thesis is structured in the following chapters. The first, introductory chapter recalls some basic notions of structural dynamics, focusing the discussion on systems with multiple degrees of freedom (MDOF), which can represent a generic real system under study when it is excited with a harmonic force or in free vibration. The second chapter is entirely centred on the dynamic identification of a structure subjected to an experimental test in forced vibration. It first describes the construction of the FRF through the classical FFT of the recorded signal. A different method, also in the frequency domain, is subsequently introduced; it allows the FRF to be computed accurately using the geometric characteristics of the ellipse that represents the direct input-output comparison. The two methods are compared and the attention is then focused on some advantages of the proposed methodology. The third chapter focuses on the study of real structures subjected to experimental tests in which the force is not known, as in an ambient or impact test. In this analysis we decided to use the CWT, which allows a simultaneous investigation in the time and frequency domain of a generic signal x(t). The CWT is first introduced to process free oscillations, with excellent results in terms of frequencies, damping and vibration modes. Its application in the case of ambient vibrations yields accurate modal parameters of the system, although some important observations should be made regarding the damping. The fourth chapter also addresses the post-processing of data acquired after a vibration test, this time through the application of the discrete wavelet transform (DWT). In the first part the results obtained by the DWT are compared with those obtained by the application of the CWT. Particular attention is given to the use of the DWT as a tool for filtering the recorded signal, since in the case of ambient vibrations the signals are often affected by a significant level of noise. The fifth chapter focuses on another important aspect of the identification process: model updating. In this chapter, starting from the modal parameters obtained from ambient vibration tests performed on the Humber Bridge in England by the University of Porto in 2008 and by the University of Sheffield, an FE model of the bridge is defined in order to determine which type of model captures the real dynamic behaviour of the bridge more accurately. The sixth chapter draws the conclusions of the presented research. These concern the application of a frequency-domain method to evaluate the modal parameters of a structure and its advantages, the advantages of applying procedures based on wavelet transforms in identification tests with unknown input, and finally the problem of 3D modelling of systems with many degrees of freedom and with different types of uncertainty.
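For context, a brief sketch of the classical FFT-based FRF construction mentioned in the second chapter (not the ellipse-based method proposed in the thesis): the H1 estimator divides the averaged input-output cross-spectrum by the input auto-spectrum. The signal names, sampling rate and modal parameters below are illustrative placeholders.

```python
import numpy as np
from scipy.signal import csd, welch

def frf_h1(force, response, fs, nperseg=2048):
    """Classical H1 FRF estimate: cross-spectrum between the input force and
    the output response divided by the auto-spectrum of the input."""
    f, Pxy = csd(force, response, fs=fs, nperseg=nperseg)
    _, Pxx = welch(force, fs=fs, nperseg=nperseg)
    return f, Pxy / Pxx

# Illustrative use on a synthetic single-mode system (fn = 5 Hz, 5% damping)
fs = 1024.0
t = np.arange(0, 60.0, 1.0 / fs)
h = np.exp(-2 * np.pi * 5.0 * 0.05 * t) * np.sin(2 * np.pi * 5.0 * t)  # impulse response
force = np.random.randn(t.size)                       # broadband excitation
response = np.convolve(force, h)[: t.size] / fs       # simulated output
f, H = frf_h1(force, response, fs)
fn_estimate = f[np.argmax(np.abs(H))]                 # peak-picking the natural frequency
```

Welch-style averaging over segments is what suppresses measurement noise in the estimate; the ellipse-based method discussed in the thesis is presented as an alternative way of obtaining the FRF from the direct input-output comparison.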
Abstract:
The aim of the present thesis was to investigate the influence of lower-limb joint models on musculoskeletal model predictions during gait. We started our analysis from a baseline model, i.e., the state-of-the-art lower-limb model (spherical joint at the hip and hinge joints at the knee and ankle) created from MRI of a healthy subject at the Medical Technology Laboratory of the Rizzoli Orthopaedic Institute. We varied the models of the knee and ankle joints, including: knee and ankle joints with a mean instantaneous axis of rotation, a universal joint at the ankle, a scaled-generic-derived planar knee, a subject-specific planar knee model, a subject-specific planar ankle model, a spherical knee and a spherical ankle. The joint model combinations, corresponding to 10 musculoskeletal models, were implemented in a typical inverse dynamics problem, including inverse kinematics, inverse dynamics, static optimization and joint reaction analysis algorithms, solved using the OpenSim software to calculate joint angles, joint moments, muscle forces and activations, and joint reaction forces during 5 walking trials. The predicted muscle activations were qualitatively compared to experimental EMG to evaluate the accuracy of the model predictions. A planar joint at the knee, a universal joint at the ankle and spherical joints at the knee and at the ankle produced appreciable variations in the model predictions during the gait trials. The planar knee joint model reduced the discrepancy between the predicted activation of the Rectus Femoris and the EMG (with respect to the baseline model), and the reduced peak knee reaction force was considered more accurate. The use of the universal joint, with the introduction of the subtalar joint, worsened the agreement of the muscle activations with the EMG, and increased ankle and knee reaction forces were predicted. The spherical joints, in particular at the knee, worsened the agreement of the muscle activations with the EMG. A substantial increase of the joint reaction forces at all joints was predicted despite the good agreement of the joint kinematics with those of the baseline model. The introduction of the universal joint had a negative effect on the model predictions. The cause of this discrepancy is likely to be found in the definition of the subtalar joint and thus in the particular subject's anthropometry used to create the model and define the joint pose. We concluded that the implementation of complex joint models does not have marked effects on the joint reaction forces during gait. Computed results were similar in magnitude and pattern to those reported in the literature. Nonetheless, the introduction of a planar joint model at the knee had a positive effect on the predictions, while the use of a spherical joint at the knee and/or at the ankle is inadvisable, because it predicted unrealistic joint reaction forces.
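For context, the static optimization step used in such OpenSim pipelines is commonly posed as the following per-frame problem (a standard textbook formulation, not a detail taken from the thesis; p = 2 and ideal force generators are typical choices, with force-length-velocity scaling available as an option):

```latex
\min_{a_1,\dots,a_n}\; \sum_{i=1}^{n} a_i^{\,p}
\qquad \text{subject to} \qquad
\sum_{i=1}^{n} a_i\, F_i^{\max}\, r_{i,j} = \tau_j \;\;\text{for every joint coordinate } j,
\qquad 0 \le a_i \le 1,
```

where a_i is the activation of muscle i, F_i^max its maximum isometric force, r_{i,j} its moment arm about coordinate j, and τ_j the net joint moment from inverse dynamics. The joint reaction analysis then propagates the resulting muscle forces through the model, which is why the choice of joint model feeds directly into the predicted reaction forces.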
Abstract:
Despite promising cost-saving potential, many offshore software projects fail to realize the expected benefits. A frequent source of failure lies in the insufficient transfer of knowledge during the transition phase. Prior literature has reported cases where some domains of knowledge were successfully transferred to vendor personnel whereas others were not. There is further evidence that the actual knowledge transfer processes often vary from case to case. This raises the question of whether there is a systematic relationship between the chosen knowledge transfer process and knowledge transfer success. This paper introduces a dynamic perspective that distinguishes different types of knowledge transfer processes, explaining under which circumstances which type is deemed most appropriate to successfully transfer knowledge. Our paper draws on the knowledge transfer literature, the Model of Work-Based Learning and theories from cognitive psychology to show how characteristics of knowledge and the absorptive capacity of knowledge recipients fit particular knowledge transfer processes. The knowledge transfer processes are conceptualized as combinations of generic knowledge transfer activities. This results in six gestalts of knowledge transfer processes, each representing a fit between the characteristics of the knowledge transfer process, the characteristics of the knowledge to be transferred and the absorptive capacity of the knowledge recipient.
Abstract:
In this article, we perform an extensive study of flavor observables in a two-Higgs-doublet model with generic Yukawa structure (of type III). This model is interesting not only because it is the decoupling limit of the minimal supersymmetric standard model but also because of its rich flavor phenomenology, which allows for sizable effects not only in flavor-changing neutral-current (FCNC) processes but also in tauonic B decays. We examine the possible effects in flavor physics and constrain the model both from tree-level processes and from loop observables. The free parameters of the model are the heavy Higgs mass, tan β (the ratio of vacuum expectation values) and the "nonholomorphic" Yukawa couplings ϵ^f_ij (f = u, d, ℓ). In our analysis we constrain the elements ϵ^f_ij in various ways: In a first step we give order-of-magnitude constraints on ϵ^f_ij from 't Hooft's naturalness criterion, finding that all ϵ^f_ij must be rather small unless the third generation is involved. In a second step, we constrain the Yukawa structure of the type-III two-Higgs-doublet model from tree-level FCNC processes (Bs,d → μ+μ−, KL → μ+μ−, D̄0 → μ+μ−, ΔF = 2 processes, τ− → μ−μ+μ−, τ− → e−μ+μ− and μ− → e−e+e−) and observe that all flavor off-diagonal elements of these couplings, except ϵ^u_{32,31} and ϵ^u_{23,13}, must be very small in order to satisfy the current experimental bounds. In a third step, we consider Higgs-mediated loop contributions to FCNC processes [b → s(d)γ, Bs,d mixing, K–K̄ mixing and μ → eγ], finding that ϵ^u_13 and ϵ^u_23 must also be very small, while the bounds on ϵ^u_31 and ϵ^u_32 are especially weak. Furthermore, considering the constraints from electric dipole moments, we obtain constraints on some parameters ϵ^{u,ℓ}_ij. Taking into account the constraints from FCNC processes, we study the size of possible effects in the tauonic B decays (B → τν, B → Dτν and B → D*τν) as well as in D(s) → τν, D(s) → μν, K(π) → eν, K(π) → μν and τ → K(π)ν, which are all sensitive to tree-level charged Higgs exchange. Interestingly, the unconstrained ϵ^u_{32,31} are just the elements which directly enter the branching ratios for B → τν, B → Dτν and B → D*τν. We show that they can explain the deviations from the SM predictions in these processes without fine-tuning. Furthermore, B → τν, B → Dτν and B → D*τν can even be explained simultaneously. Finally, we give upper limits on the branching ratios of the lepton-flavor-violating neutral B meson decays (Bs,d → μe, Bs,d → τe and Bs,d → τμ) and correlate the radiative lepton decays (τ → μγ, τ → eγ and μ → eγ) to the corresponding neutral-current lepton decays (τ− → μ−μ+μ−, τ− → e−μ+μ− and μ− → e−e+e−). A detailed Appendix contains all relevant information for the considered processes for general scalar-fermion-fermion couplings.
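As a rough, hedged paraphrase of the naturalness step (schematic only; the inequalities used in the paper carry vev and CKM factors not reproduced here), 't Hooft's criterion amounts to requiring that the nonholomorphic terms do not induce fermion-mass or mixing contributions larger than the observed ones, i.e. bounds of the form

```latex
\bigl|\epsilon^{f}_{ij}\bigr|\, v \;\lesssim\; \max\!\bigl(m_{f_i},\, m_{f_j}\bigr),
```

which is why only entries involving the third generation, with its large fermion masses, are allowed to be sizable, as stated above.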
Abstract:
QUESTION UNDER STUDY The aim of this study was to evaluate the cost-effectiveness of ticagrelor and generic clopidogrel as add-on therapy to acetylsalicylic acid (ASA) in patients with acute coronary syndrome (ACS), from a Swiss perspective. METHODS Based on the PLATelet inhibition and patient Outcomes (PLATO) trial, one-year mean healthcare costs per patient treated with ticagrelor or generic clopidogrel were analysed from a payer perspective in 2011. A two-part decision-analytic model estimated treatment costs, quality-adjusted life years (QALYs), life years and the cost-effectiveness of ticagrelor and generic clopidogrel in patients with ACS up to a lifetime horizon, at a discount rate of 2.5% per annum. Sensitivity analyses were performed. RESULTS Over a patient's lifetime, treatment with ticagrelor generates an additional 0.1694 QALYs and 0.1999 life years at an additional cost of CHF 260 compared with generic clopidogrel. This results in an incremental cost-effectiveness ratio (ICER) of CHF 1,536 per QALY and CHF 1,301 per life year gained. Ticagrelor dominated generic clopidogrel over the five-year and one-year periods, generating cost savings of CHF 224 and CHF 372 while gaining 0.0461 and 0.0051 QALYs and 0.0517 and 0.0062 life years, respectively. Univariate sensitivity analyses confirmed the dominant position of ticagrelor in the first five years and probabilistic sensitivity analyses showed a high probability of cost-effectiveness over a lifetime. CONCLUSION During the first five years after ACS, treatment with ticagrelor dominates generic clopidogrel in Switzerland. Over a patient's lifetime, ticagrelor is highly cost-effective compared with generic clopidogrel, with ICERs well below commonly accepted willingness-to-pay thresholds.
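The lifetime ICERs quoted above follow directly from the incremental cost and effect estimates, consistent with the reported CHF 1,536 per QALY and CHF 1,301 per life year once rounding of the reported inputs is accounted for:

```latex
\mathrm{ICER} \;=\; \frac{\Delta\text{cost}}{\Delta\text{effect}}
\;=\; \frac{\text{CHF } 260}{0.1694\ \text{QALYs}} \approx \text{CHF } 1535 \text{ per QALY},
\qquad
\frac{\text{CHF } 260}{0.1999\ \text{life years}} \approx \text{CHF } 1301 \text{ per life year gained.}
```

Dominance over the one- and five-year horizons simply means the incremental cost is negative while the incremental effect is positive, so no ratio needs to be computed there.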
Abstract:
A search for neutral Higgs bosons of the Minimal Supersymmetric Standard Model (MSSM) is reported. The analysis is based on a sample of proton-proton collisions at a centre-of-mass energy of 7 TeV recorded with the ATLAS detector at the Large Hadron Collider. The data were recorded in 2011 and correspond to an integrated luminosity of 4.7 fb⁻¹ to 4.8 fb⁻¹. Higgs boson decays into oppositely charged muon or tau lepton pairs are considered for final states requiring either the presence or absence of b-jets. No statistically significant excess over the expected background is observed, and exclusion limits at the 95% confidence level are derived. The exclusion limits are for the production cross-section of a generic neutral Higgs boson, φ, as a function of the Higgs boson mass, and for h/A/H production in the MSSM as a function of the parameters m_A and tan β in the m_h^max scenario, for m_A in the range of 90 GeV to 500 GeV.
Abstract:
We introduce a version of operational set theory, OST−, without a choice operation, which has a machinery for Δ0 separation based on truth functions and the separation operator, and a new kind of applicative set theory, so-called weak explicit set theory WEST, based on Gödel operations. We show that both theories and Kripke–Platek set theory KP with infinity are pairwise Π1 equivalent. We also show analogous assertions for subtheories with ∈-induction restricted in various ways and for supertheories extended by powerset, beta, limit and Mahlo operations. Whereas the upper bound is given by a refinement of inductive definition in KP, the lower bound is obtained by combining, in a specific way, realisability, (intuitionistic) forcing and negative interpretations. Thus, despite the interpretability between classical theories, we make "a detour via intuitionistic theories". The combined interpretation, seen as a model construction in the sense of Visser's miniature model theory, is a new way of constructing models for classical theories and could be said to be the third kind of model construction ever used which is non-trivial on the level of logical connectives, after generic extension à la Cohen and Krivine's classical realisability model.
Abstract:
In this article we calculate the one-loop supersymmetric QCD (SQCD) corrections to the decay ũ1 → c χ̃01 in the minimal supersymmetric standard model with generic flavor structure. This decay mode is phenomenologically important if the mass difference between the lightest squark ũ1 (which is assumed to be mainly stop-like) and the neutralino lightest supersymmetric particle χ̃01 is smaller than the top mass. In such a scenario ũ1 → t χ̃01 is kinematically not allowed and searches for ũ1 → W b χ̃01 and ũ1 → c χ̃01 are performed. A large decay rate for ũ1 → c χ̃01 can weaken the LHC bounds from ũ1 → W b χ̃01, which are usually obtained under the assumption Br[ũ1 → W b χ̃01] = 100%. We find that the SQCD corrections enhance Γ[ũ1 → c χ̃01] by approximately 10% if the flavor violation originates from bilinear terms. If the flavor violation originates from trilinear terms, the effect can be ±50% or more, depending on the sign of A_t. We note that, when connecting a theory of supersymmetry breaking to LHC observables, the shift from the DR-bar to the on-shell mass is numerically very important for light stop decays.
Abstract:
Patients suffering from cystic fibrosis (CF) show thick secretions, mucus plugging and bronchiectasis in bronchial and alveolar ducts. This results in substantial structural changes of the airway morphology and heterogeneous ventilation. Disease progression and treatment effects are monitored by so-called gas washout tests, where the change in concentration of an inert gas is measured over a single breath or multiple breaths. The result of the test, based on the profile of the measured concentration, is a marker for the severity of the ventilation inhomogeneity, which is strongly affected by the airway morphology. However, it is hard to localize underlying obstructions to specific parts of the airways, especially if they occur in the lung periphery. In order to support the analysis of lung function tests (e.g. multi-breath washout), we developed a numerical model of the entire airway tree, coupling a lumped-parameter model for the lung ventilation with a 4th-order accurate finite difference model of a 1D advection-diffusion equation for the transport of an inert gas. The boundary conditions for the flow problem comprise the pressure and flow profile at the mouth, which is typically known from clinical washout tests. The natural asymmetry of the lung morphology is approximated by a generic, fractal, asymmetric branching scheme which we applied to the conducting airways. A conducting airway ends when its dimension falls below a predefined limit. A model acinus is then connected to each terminal airway. The morphology of an acinus unit comprises a network of expandable cells. A regional, linear constitutive law describes the pressure-volume relation between the pleural gap and the acinus. The cyclic expansion (breathing) of each acinus unit depends on the resistance of the feeding airway and on the flow resistance and stiffness of the cells themselves. Special care was taken in the development of a conservative numerical scheme for the gas transport across bifurcations, handling spatially and temporally varying advective and diffusive fluxes over a wide range of scales. Implicit time integration was applied to account for the numerical stiffness resulting from the discretized transport equation. Local or regional modifications of the airway dimensions, resistance or tissue stiffness are introduced to mimic pathological airway restrictions typical of CF. This leads to a more heterogeneous ventilation of the model lung. As a result, the concentration in some distal parts of the lung model remains elevated for a longer duration. The inert gas concentration at the mouth towards the end of the expirations is composed of gas from regions with very different washout efficiency. This results in a steeper slope of the corresponding part of the washout profile.
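As a minimal sketch of the transport building block described above, restricted to a single uniform airway segment and using backward-Euler time integration with second-order central differences (the actual model is a 4th-order conservative scheme on a full branching tree); all names and parameter values below are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import solve_banded

def advect_diffuse_step(c, u, D, dx, dt):
    """One backward-Euler step of  dc/dt + u*dc/dx = D*d2c/dx2  on a uniform
    1D grid, with a fixed concentration at the inlet (index 0) and a
    zero-gradient condition at the outlet (index -1)."""
    n = c.size
    a = u * dt / (2.0 * dx)        # advection weight (central difference)
    b = D * dt / dx**2             # diffusion weight

    # Tridiagonal implicit operator in scipy's banded storage
    ab = np.zeros((3, n))
    ab[0, 1:] = a - b              # coefficient of c_new[i+1]
    ab[1, :] = 1.0 + 2.0 * b       # coefficient of c_new[i]
    ab[2, :-1] = -(a + b)          # coefficient of c_new[i-1]

    # Boundary rows: Dirichlet at the inlet, zero-gradient at the outlet
    ab[1, 0], ab[0, 1] = 1.0, 0.0
    ab[1, -1], ab[2, -2] = 1.0, -1.0

    rhs = c.copy()                 # rhs[0] keeps the prescribed inlet value
    rhs[-1] = 0.0                  # enforces c_new[-1] - c_new[-2] = 0
    return solve_banded((1, 1), ab, rhs)

# Illustrative washout of a tracer-filled 10 cm segment by fresh gas
x = np.linspace(0.0, 0.1, 201)
c = np.ones_like(x)
c[0] = 0.0                         # fresh (tracer-free) gas at the mouth side
for _ in range(500):               # 5 s of simulated washout
    c = advect_diffuse_step(c, u=0.05, D=2.0e-5, dx=x[1] - x[0], dt=0.01)
```

In the full model this step would be applied to every airway of the fractal tree with fluxes matched at the bifurcations, which is where the conservative 4th-order treatment described in the abstract becomes important.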
Abstract:
This paper introduces a novel method for examining the effects of vertical integration. The basic idea is to estimate the parameters of a vertical entry game. By carefully specifying firms' payoff equations and constructing appropriate tests, it is possible to use estimates on rival profit effects to make inferences about the existence of vertical foreclosure. I estimate the vertical entry model using data from the US generic pharmaceutical industry. The estimates indicate that vertical integration is unlikely to generate anticompetitive foreclosure effects. On the other hand, significant efficiency effects are found to arise from vertical integration. I use the parameter estimates to simulate a policy that bans vertically integrated entry. The simulation results suggest that such a ban is counterproductive; it is likely to reduce entry into smaller markets.
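As a hedged sketch of what "specifying firms' payoff equations" can look like in this setting (a generic Bresnahan-Reiss-style entry payoff with separate rival-effect parameters; the notation is illustrative and not taken from the paper):

```latex
\pi_{im} \;=\; x_m'\beta \;+\; z_{im}'\gamma
\;-\; \delta_{U}\, N^{U}_{-im} \;-\; \delta_{I}\, N^{I}_{-im}
\;+\; \varepsilon_{im},
\qquad \text{firm } i \text{ enters market } m \text{ iff } \pi_{im} > 0,
```

where N^U and N^I count unintegrated and vertically integrated rivals in market m; comparing the estimated rival-effect parameters δ_I and δ_U is one way estimates of rival profit effects can be used to assess whether integrated entry depresses rivals' payoffs enough to suggest foreclosure.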
Abstract:
In this work, a comparison is made between the competence codes in the CDIO curriculum, those defined for the Tuning Project and those of the International Project Management Association (IPMA). The goal is to define the most appropriate competence codes for engineering education in Latin America. The CDIO code is obtained from engineering practice and responds to the Accreditation Board for Engineering and Technology (ABET) accreditation standards. The Tuning competences are the ones defined for Latin America and the IPMA's are international competences for project management. It is the first time that the competences defined in the ABET accreditation standards in the engineering field are compared with the international competences of the IPMA model. The results give evidence, first, of the need to apply holistic models in the definition of an engineering curriculum and, second, of the pertinence of these models in the definition of engineering programs in Latin America.